In the 2016 science fiction drama Arrival, about first contact with aliens, the movie’s two protagonists, a linguist and a physicist, meet in a military helicopter on their way to attempt to decipher why the aliens came to Earth and what they want. The physicist, Ian Donnelly, introduces himself to the linguist, Louise Banks, by quoting from a book she published: ‘Language is the cornerstone of civilization. It is the glue that holds a people together. It is the first weapon drawn in a conflict.’ That scene sets the tone and pace for the rest of the movie, as Louise and Ian work against the clock to understand the aliens’ highly complex language in order to communicate with them.
We instinctively associate the use of language to communicate ideas, concepts, thoughts, and even emotions with understanding and intelligence. All the more so when sophisticated grammar and syntax are used to communicate concepts and ideas that are abstract, creative, imaginative, or nuanced.
Last week, the influential American linguist Noam Chomsky, along with two colleagues, Ian Roberts and Jeffrey Watumull, published an opinion essay in The New York Times attempting to explain why existing machine learning and artificial intelligence (AI) systems, in particular large language models (LLMs) such as ChatGPT, “ … differ profoundly from how humans reason and use language.” And why “these differences place significant limitations on what these programs can do, encoding them with ineradicable defects.”
They go on to argue that “Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.”
By the time The New York Times closed the comments section, there were 2,050 comments and opinions logged. Not surprisingly, the reactions from readers cut across a wide ideological spectrum and range of priorities. Many readers expressed agreement or disagreement with the technical arguments the authors attempted to make refuting the ‘intelligence’ of systems like ChatGPT. Much of the commentary focused on the societal, ethical, and political implications of emerging AI technologies.
Others expressed concerns about the erosion such machine learning and AI tools might precipitate in other humanistic endeavors. One reader wrote: “Meanwhile, at many universities, humanities departments are being hollowed out. What Chomsky is describing here is a fundamental need for human-centered learning in history, philosophy, political science, languages, anthropology, sociology, psychology, literature, writing, and speaking. Those exact programs are being slashed right now by presidents, provosts, and deans at many universities. These corporate-minded administrators care more about the bottom line than actually educating students for the world they will live in. AI will be a useful tool, but it’s not a replacement for a human mind and an education in the humanities.”
And just today, OpenAI released GPT-4. This next evolution of GPT is able to handle image inputs in addition to text, and OpenAI claims that it displays “human-level performance on various professional and academic benchmarks.”
In an attempt to explore this further, I reached out to several experts and asked them what they thought about the Chomsky essay, and, more broadly, what intelligence is. [Continue reading…]