When Machines Speak
Natural Language Processing (NLP) is a field of artificial intelligence (AI) that focuses on the development of computer models that can process and analyze human language. Some newer, more complex neural language models, such as GPT-3 created by OpenAI, can generate paragraphs of original and convincing text to complete any prompt, question, or assignment. Some of the text produced by GPT-3 is convincing enough to fool humans into thinking that another human wrote it. After digesting vast amounts of text, these models learn to predict the next word in a sentence with impressive results. Humans, by contrast, must learn about the world as well as comprehend language before they can write intelligibly about a topic. These differences raise questions about AI that intersect the fields of computer science, psychology, neurolinguistics, and philosophy.
Join Presidential Scholar in Society and Neuroscience Raphaël Millière along with Ev Fedorenko, Professor of Neuroscience at the Massachusetts Institute of Technology; Brenden Lake, Assistant Professor of Psychology and Data Science at New York University; and Melanie Mitchell, Professor of Complexity at the Santa Fe Institute, on March 29 for When Machines Speak: Language Processing in Computers and Humans to discover how computers and humans learn language.
Before the event, we asked Millière and Mitchell about their research and the intersections between philosophy and computer science.
Conceptual abstraction is at the core of our own intelligent behavior. We abstract from, say, specific objects, to the more abstract notion of “object” itself, and perhaps to “object in the road that we can ignore while driving” versus “object in the road that the car needs to stop for.” A self-driving car needs such a basic ability! This is just one example—in my talk, I will argue that abstraction (and the analogy-making process that drives abstraction) is the most important missing ingredient in creating AI systems with the robustness and generality of humans.
I have been interested in the progress of Natural Language Processing (NLP) for many years. In 2016, as a side project, I started using NLP algorithms to model the effects of various psychoactive compounds on consciousness, using a corpus of 25,000 reports. I have since explored the use of such algorithms, including state-of-the-art language models, for the analysis of various corpora describing philosophically relevant alterations of consciousness. In the process, I became increasingly interested in philosophical issues regarding the interpretation of these models themselves, which led me to start working on a new project about language processing in models like GPT-3.
There are many different approaches. For example, neural networks have been used to learn to represent individual words as numerical vectors such that semantically related words map to vectors that are close to one another. This is called “word embedding.” This process of mapping words to vectors has been extended to mapping entire sentences or longer passages to vectors. These vectors are what neural networks operate on; the networks process the vectors in various ways that can then be mapped back to text. “Deep learning,” in which neural networks with many layers of simulated “neurons” and neural connections learn to perform tasks, has almost completely taken over the field of natural language processing over the last decade. Today’s powerful language models, such as BERT or GPT-3, use a special kind of deep neural network called a “transformer architecture.”
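The idea that related words map to nearby vectors can be illustrated with a toy sketch. The vectors and words below are invented for illustration; real embeddings (e.g. word2vec or BERT) are learned from data and have hundreds of dimensions:

```python
import math

# Hypothetical hand-picked 3-dimensional "embeddings" for illustration only.
# Real models learn these vectors from large text corpora.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.95],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; closer to 1.0 means more similar."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Semantically related words end up close together in the vector space.
related = cosine_similarity(embeddings["king"], embeddings["queen"])
unrelated = cosine_similarity(embeddings["king"], embeddings["apple"])
print(related > unrelated)  # → True
```

Cosine similarity is a standard way to compare embedding vectors because it measures direction rather than magnitude.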
In the past few years, language models have mainly been used for text classification, text summarization, language translation (e.g. Google Translate), information retrieval (e.g. Google search), sentiment analysis (e.g. analyzing the valence of customer reviews), and text completion (e.g. Gmail “smart compose”). Autoregressive language models like GPT-3 go one step further: they can generate several paragraphs of coherent text at a time, given a prompt written by a human. This progress unlocks new potential applications, including semantic search engines that respond to questions, chatbots that can convincingly interact with humans, interactive experiences and video games whose content is generated in real time, assistance for writers to compose creative fiction, and assistance for programmers to write code based on a natural language description of its function. However, many potential uses of these models could also be socially nefarious, from cheating on a class assignment by generating an essay, to the mass production of fake news articles.
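The autoregressive loop described above — generate one word, append it to the context, repeat — can be sketched with a toy bigram model. The vocabulary and transition table here are invented for illustration; a model like GPT-3 conditions on the full preceding context with a transformer, not just the previous word:

```python
import random

# Hypothetical toy "language model": each word maps to its possible next words.
bigram_model = {
    "the":     ["cat", "dog"],
    "cat":     ["sat", "ran"],
    "dog":     ["ran", "sat"],
    "sat":     ["quietly", "."],
    "ran":     ["away", "."],
    "quietly": ["."],
    "away":    ["."],
}

def generate(prompt, max_tokens=5, seed=0):
    """Autoregressive loop: repeatedly sample the next word and append it."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = bigram_model.get(tokens[-1])
        if not candidates:  # stop when the model has no continuation
            break
        tokens.append(rng.choice(candidates))
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat ." depending on the seed
```

The same loop structure underlies real autoregressive generation; the difference lies in how the next-word distribution is computed.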
The capacities of state-of-the-art language models are often misrepresented. On the one hand, the impressive performance of GPT-3 has garnered a lot of attention from the media, including sensationalist articles exaggerating its competence. On the other, some academic commentators have suggested that "GPT-3 is [...] as intelligent [...] as an old typewriter" (Floridi & Chiriatti, 2020), which is also misleading. I hope this event can rectify common misconceptions about language models, showing that they fall short of human capacities such as common-sense reasoning and perceptually grounded conceptual abstraction, yet present surprising functional similarities to language-processing areas in the human brain.
A better understanding of how these large language models work, what they can and cannot do, what they “understand,” and how we should be thinking about their possible benefits and dangers.
Learn more from experts in computer science, psychology, neurolinguistics, and philosophy at the seminar When Machines Speak: Language Processing in Computers and Humans, on March 29 at 4:00 PM ET on Zoom. This event is free and open to the public, but RSVP is required via Eventbrite.