A guy named Blake Lemoine made news this week, and got written up in The Washington Post, by claiming that Google’s LaMDA chatbot is sentient. He was more specific: “It has feelings, emotions and subjective experiences.”
Those claims are false.
I say this from the perspective of the scientific fields that study feelings, emotions, perceptions, and experiences: cognitive psychology and cognitive neuroscience (my Ph.D. field). In brief, emotions and perceptions are specific phenomena we experience; they can be measured by specific psychological tests, and they have specific neurophysiological and biochemical underpinnings, which LaMDA does not have and is not simulating. Emotions and experiences are very different from producing and consuming language, which is what LaMDA can do.
What we hear on the news
Later this week, my co-founder Maya got an excited email from Roch Urbaniak, the Polish painter of fantastical towns. Maya has licensed half a dozen of his paintings to make them into Artifact Puzzles, including the one below, Floating Town.
Roch knows that Maya is also a machine learning researcher. “Is LaMDA a hoax?” he wrote. “A huge exaggeration? Media scoop with no big meaning? Or is it a real deal?”
Maya wrote back, “LaMDA is just a chatbot with very good ability to pattern match to everything ever posted on the internet. That makes it easy for it to sound like a real person. Technically it is very impressive and could not have been built until recently, but it's no more sentient than your calculator or the public library.”
I was struck by something Roch said later. “After having a global pandemic followed by Russia’s attack on Ukraine [Ukraine borders on Poland, and Ukrainian refugees are now almost 10% of the people in Poland], my and my friends’ threshold of accepting the unthinkable is quite low:). So if we had sentient AI in June and aliens making first contact in November I wouldn’t be that surprised.”
Why do people all over the world hear these claims about artificial intelligence and wonder if they’re true, when the experts are so skeptical? Credulous writing in newspapers seems the biggest factor. The Post story was “balanced” in the same way its stories about Donald Trump were: here’s a story you didn’t need to hear in the first place, about a person making some false, unsubstantiated, or weird claims, with the claims put right up front, just as you would if your job were to advertise those claims. The “balance” part is where other people’s reactions are presented, but that doesn’t undo the damage of presenting the false claims in the first place. They could have skipped the story, and that would have been fine.
Where do these claims come from? It’s a well-documented phenomenon that people hallucinate human and human-like influence behind naturally occurring phenomena, going back to Zeus and Hera, and beyond, and people now react to built things that way too. In the 1960s the chatbot ELIZA was built – a very simple program that prints out lines of text that appear to be things a psychotherapist would say – and according to Gary Marcus even it fooled some people into thinking it was a person.
Should we be amazed? Afraid?
We should not be afraid of artificial intelligence. It is extremely unlikely that AI will have a huge and sudden negative consequence. AI is just a mundane tool, like steel or electricity, that is used in a wide range of products. As Steven Pinker put it:
It makes about as much sense as the worry that since jet planes have surpassed the flying ability of eagles, someday they will swoop out of the sky and seize our cattle.
AI will continue to be used for a lot of good, and perhaps sometimes for harm, as part of products that use AI as a tool. It is really those products that bring about the benefits and harms. Google search is a better product than it would be without AI. Supply-chain management is more effective than it would be without AI, and consumer goods are cheaper as a result. If you’re under 50, there’s a good chance your life will eventually be saved at least once by AI drug discovery. Will the negative consequences come anywhere close to matching these positive ones? Probably not.
Should we be amazed? Here the answer is mixed. LaMDA isn’t publicly available, but it probably does an impressive job of generating complex on-topic responses to text inputs. There are some amazing AI tools, including self-driving cars, the AlphaFold protein shape predictor, the GPT-3 text generator, and the DALL-E 2 image generator, which I’ve written a lot about in previous editions of this newsletter.
What is LaMDA doing? What are conscious humans doing?
But we shouldn’t think that AI programs have feelings, emotions, or subjective experiences. How do we know? Cognitive neuroscience studies emotions and experiences: they have specific and complex neurophysiological and biochemical implementations in our bodies. The components have names like dopamine, serotonin, L-Phenylalanine, the amygdala, the visual cortex, and the synapse. Science knows a lot about some of these components, less about others. AI programs like LaMDA don’t have any of those things, and they don’t simulate them.
What does LaMDA do? Something much simpler than a person. The smartest aspects of it are the same as the GPT-3 architecture: an “Attention Is All You Need” Transformer model of language, trained on huge piles of text that fell off the back of the truck which is the internet. It is in some ways a very sophisticated technology, and it does an amazing job of representing all the text on the internet, but as far as I can tell the smarts mainly come from a simple predict-the-blank step: take a sequence of words, drop one or more of them, train a neural net to fill in the missing words, then repeat billions of times.
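If you want a concrete picture of that predict-the-blank step, here is a toy sketch. To be clear, this is not LaMDA’s code (which isn’t public); the corpus, window size, and tiny model below are made up for illustration, and real systems use far larger corpora, Transformer attention layers, and variants of this objective (for example, predicting the next word rather than a masked one). It just shows the shape of the training loop: mask a word, train a network to guess it.

```python
# Toy illustration of the "predict-the-blank" objective described above.
# Not LaMDA's code; a minimal sketch with an invented corpus and model.
import random
import torch
import torch.nn as nn

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus)) + ["<MASK>"]
word_to_id = {w: i for i, w in enumerate(vocab)}

WINDOW = 3       # words of context on each side of the blank
EMBED_DIM = 16

class FillInTheBlank(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(len(vocab), EMBED_DIM)
        self.out = nn.Linear(EMBED_DIM * (2 * WINDOW + 1), len(vocab))

    def forward(self, context_ids):
        # context_ids: (batch, 2*WINDOW+1), with one position masked
        e = self.embed(context_ids).flatten(1)
        return self.out(e)   # scores over the vocabulary for the blank

model = FillInTheBlank()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Training loop: drop one word from a window, learn to predict it.
# Real systems repeat this over billions of web-text examples.
for step in range(200):
    i = random.randrange(WINDOW, len(corpus) - WINDOW)
    window = corpus[i - WINDOW : i + WINDOW + 1]
    target = word_to_id[window[WINDOW]]
    window[WINDOW] = "<MASK>"                     # drop the middle word
    ids = torch.tensor([[word_to_id[w] for w in window]])
    loss = loss_fn(model(ids), torch.tensor([target]))
    opt.zero_grad(); loss.backward(); opt.step()
```

Scale the text up from one sentence to a large slice of the internet, and the network up from a few hundred parameters to billions, and you have the basic recipe: a statistical model of which words go where.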
This statistical model of language includes a lot of complex phrases and how they relate to each other, but it is lacking a huge number of other things that neuroscience and molecular biology tell us conscious beings -- everyone reading this, me, labradors, cats, mice -- have. We have abundant mental models of things that aren't language at all, like pain, itching, despair, frustration, happiness, our individual friends and family members, and our visual perceptions. The surface area of this human experience is huge. Our mental models are implemented by the above-mentioned biological subsystems: dopamine, serotonin, L-Phenylalanine, the amygdala, the visual cortex, the synapse, and many thousands of others.
Probably the biggest reason that AIs don’t have these things is that the sciences of psychology, neuroscience, and biology are not yet sophisticated enough to understand them fully. But there’s another very important reason: no one is trying. Researchers might try to build a conscious AGI that has real emotions and experiences based on a partial understanding of how human mental experiences work, but this is still a hard task, and it’s not the subject of very much research.
An artificially intelligent model of human emotion and experience would need another thing that I have no idea how to get: training data. That is, digital recordings of inputs and outputs of real human emotions and experiences. Perhaps this can be built from camera recordings and recordings of people speaking openly about their emotional experiences, but it seems like a very big research project, and no one’s working on it either.
In conclusion, some AI programs are wondrous things, but they’re not wondrous in the same way as adult humans, or babies, or labradors, or this cat.
I enjoyed your post, thank you for writing!
Nevertheless, I think there's some specious reasoning going on that demands some unpacking. From what I can gather, this piece makes the following argument:
1) Humans/mammals have feelings/emotions/subjective experiences
2) From neuroscience, we have good evidence that feelings/emotions/subjective experiences in (1) are implemented by networks of neurons interacting via chemo-electric communication
3) LaMDA does not implement or simulate (2)
Therefore: LaMDA does not have feelings/emotions/subjective experiences
To illustrate the problem with this argument, let's reformulate it as a conditional proposition:
(i)"IF an entity has networks of neurons interacting via chemo-electric communication or a simulation of these processes
THEN that entity can have feelings/emotions/subjective experiences"
(ii) LaMDA does not have networks of neurons interacting via chemo-electric communication or a simulation of these processes
Therefore: LaMDA does not have feelings/emotions/subjective experiences
Now we can see the fallacy: this argument depends on Denying the Antecedent [https://en.wikipedia.org/wiki/Denying_the_antecedent]
To make this argument work, a much stronger conditional premise (which is not well-founded scientifically) is necessary:
"entity can have feelings/emotions/subjective experiences IF AND ONLY IF that entity has networks of neurons interacting via chemo-electric communication or a simulation of these processes"