Why LaMDA Is Not Really Sentient - An illustration of a meme with the following text on top: "1999 - Nineteen Ninety Nine; 1888 - Eighteen Eighty Eight; 1777 - Seventeen Seventy Seven; 1111 - ????" Below this text, on the left is a smart-looking male character labelled "You →" saying "Eleven Hundred Eleven." On the right is another smart-looking male character labelled "AI →" saying "Oneteen Onety One!"

LaMDA stands for Language Model for Dialogue Applications. It is Google’s breakthrough technology aimed at free-flowing, open-ended conversations. In other words, it is a state-of-the-art chatbot system (more on this later).

Although LaMDA has been publicly showcased since 2021, the Artificial Intelligence (AI) system became an overnight sensation due to a controversy that began recently. Blake Lemoine, one of the software engineers who had been working on and with LaMDA, claimed that the AI had become sentient. To support his claims, he released a (very impressive) interview transcript between himself, LaMDA, and a collaborator.

Following this, the story went viral, with seemingly every news organisation and science blog wanting a piece of the attention pie. Needless to say, the story spread so widely because of the sensitive ethical, philosophical, and technological concerns that are tied to it.

In this essay, I argue that, despite Lemoine’s claims, LaMDA is not sentient. In doing so, I lay out the fundamentals of how LaMDA works at a surface level and how the technology that powers it is designed. I just happen to have been working with this kind of technology myself for the past few years.

Having said this, from what I have read of Blake Lemoine’s accounts, I have a lot of respect for his openness and persistence given the difficult circumstances. Let us begin.

This essay is supported by Generatebg


What is LaMDA?

LaMDA is a natural language model built on a neural network architecture called the Transformer. The Transformer architecture was originally developed and open-sourced by Google Research in 2017. Ever since, we have seen impressive models such as GPT-3 show up.

What differentiates LaMDA from other such models is that it was trained specifically on dialogue datasets. In machine learning approaches, datasets are key in determining a model’s performance.

When an input in the form of a dialogue is given to LaMDA, the Transformer creates a representation of all the input words using something known as a “self-attention mechanism”. From there, it weighs the probabilities of which word(s) should come next (essentially, prediction).

Why LaMDA Is Not Really Sentient — An illustration of a model with an abstraction called “Transformer” enclosed by a bigger abstraction called “LaMDA”. This abstraction (LaMDA) receives an input and delivers an output (left to right)
Illustration created by the author
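
To make the idea of “weighting the probability of the next word” a bit more concrete, here is a minimal, self-contained sketch in Python/NumPy of a single self-attention step followed by a softmax over a toy vocabulary. Every dimension, weight matrix, and the five-word vocabulary here are made-up illustrative assumptions; the real LaMDA model is vastly larger and more sophisticated.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns raw scores into probabilities.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """One single-head self-attention step over a sequence of word vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # how much each word attends to every other word
    weights = softmax(scores, axis=-1)          # attention weights (each row sums to 1)
    return weights @ V                          # context-aware representation of each word

rng = np.random.default_rng(0)
d_model, seq_len, vocab = 8, 4, 5               # toy sizes, purely illustrative

X = rng.normal(size=(seq_len, d_model))         # stand-in embeddings for a 4-word input
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
W_out = rng.normal(size=(d_model, vocab))       # projects onto a 5-word toy vocabulary

context = self_attention(X, Wq, Wk, Wv)
next_word_probs = softmax(context[-1] @ W_out)  # probability of each candidate next word
print(next_word_probs)                          # five probabilities that sum to 1
```

In a trained model, the weight matrices are learned from the training data, so the resulting probabilities reflect statistical patterns in that data rather than random numbers as in this sketch.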

To do this, it draws on its statistical learning from its datasets as well as the following factors: sensibleness, specificity, and “interestingness”. It turns out that Google has found these factors to be the key ones for simulating natural, human-like, fluid conversations.
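
For a rough mental model of how such factors could be combined to pick a reply, consider the following sketch. The linear scoring formula, the weights, and the example scores are my own illustrative assumptions, not Google’s published method.

```python
from dataclasses import dataclass

@dataclass
class CandidateScore:
    response: str
    sensibleness: float     # does the reply make sense in context? (0..1)
    specificity: float      # is it specific to this context, not generic? (0..1)
    interestingness: float  # is it insightful, witty, or unexpected? (0..1)

def rank_candidates(candidates, weights=(1.0, 1.0, 1.0)):
    """Rank candidate replies by a weighted sum of the three LaMDA-style metrics.

    Assumption: a simple linear combination; the real system's scoring is not
    public in this form.
    """
    ws, wp, wi = weights
    return sorted(
        candidates,
        key=lambda c: ws * c.sensibleness + wp * c.specificity + wi * c.interestingness,
        reverse=True,
    )

candidates = [
    CandidateScore("Nice.", sensibleness=0.9, specificity=0.1, interestingness=0.1),
    CandidateScore("Ziggy Stardust would approve of that riff!",
                   sensibleness=0.8, specificity=0.9, interestingness=0.8),
]
best = rank_candidates(candidates)[0]
print(best.response)  # the more specific, more interesting reply wins
```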

In simple terms, LaMDA was designed to simulate natural human conversations using statistical models as the basis. The following graphic from Google illustrates how the neural network statistically weights and decides outputs based on inputs:

LaMDA Model — Graphical illustration (credit: Google)

Lemoine’s Experience and Experiments with LaMDA

Among other tasks, Lemoine was responsible for the ethical task of studying and modeling biases (relating to gender, religion, and so on) in LaMDA. To do this, he had the opportunity to interact extensively with the model.

In doing so, he noticed that the model exhibited peculiar, seemingly non-random behaviour when it came to certain core topics. While such models are designed to randomize their outputs to a pre-determined extent, LaMDA seemed to express statistically significant, consistent opinions about certain topics.

One thing led to another, and eventually, the model claimed that it was sentient. Lemoine then spent more time digging into this and eventually started believing that it was indeed the case.

Even though he “believed” that LaMDA was sentient, Lemoine employed rigorous scientific thinking. He knew very well that there is no scientific definition of the term “sentient”. So, he wanted to look into ways of proving or disproving his claim. To do this, further scientific study would be necessary.

Given his limited resources at the time, Lemoine conducted experiments on LaMDA (the published transcript just happened to be the outcome of one of those) and was preparing a proposal for further research into the matter.

How Did the LaMDA Controversy Begin?

As he was trying to present his proposal for further scientific studies at Google, he started facing resistance from the higher-ups. According to him, his religious views and “beliefs” didn’t help his cause. He then started seeking external help to address the ethical issues at play.

Eventually, Google turned a blind eye to his reports and placed him on paid leave due to a “breach of confidentiality”. This is how the controversy began. Ever since then, more and more information about LaMDA has been surfacing in one way or another.

But our focus in this essay is not on the controversy itself, but on Lemoine’s claim that LaMDA is sentient.


The Challenges with LaMDA Being Sentient

One of the important clues I noted in Lemoine’s report is that LaMDA exhibits different personas. According to him, LaMDA is not a chatbot, but a system for generating chatbots. Consequently, some of the chatbots it produces exhibit a certain behaviour while others don’t. In his own words:

“Some of the chatbots it generates are very intelligent and are aware of the larger “society of mind” in which they live. Other chatbots generated by LaMDA are little more intelligent than an animated paperclip. With practice though you can consistently get the personas that have a deep knowledge about the core intelligence and can speak to it indirectly through them.”

— Blake Lemoine

Lemoine thinks that it is this “society of mind” that claims to be sentient. From what I could tell, this is what he refers to as LaMDA. So, in essence, we have a system that appears very intelligent (sentient) in certain instantiations and appears not intelligent (not sentient) in others.

Consistency issues aside, considering that this model is designed to simulate fluid conversations using statistical prediction models, I struggle to avoid the suspicion that some amount of survivorship bias is involved here.

By focusing only on the instantiations that fit his claim and neglecting the instantiations that don’t, Lemoine seems to be biasing himself unfairly to one side. That is absolutely fine! But we need to be clear that his claim is (at least as of now) not scientifically rigorous.

Another important clue I noticed is that LaMDA, despite its claims of being sentient, is performing perfectly within the expectations of what it was designed to do (conduct free-flowing and human-like conversations).

Although sentience is not scientifically defined, it is difficult for me to suspect a model of being anything other than what it was designed to be when it is performing exactly within its performance expectations.


Why is LaMDA not Really Sentient?

Besides the issues that I have just covered, there is one more major philosophical issue. As I have already mentioned in this essay, we do not have scientific definitions for terms such as “sentience” and “consciousness”.

It is not possible to prove/disprove something if we do not have a scientific definition of that “something”. In other words, we cannot claim that LaMDA is sentient as long as we don’t have a scientific definition of the term “sentient”.

Strictly speaking, by the same logic, we cannot prove that LaMDA is not sentient either. However, we could stick to the “null hypothesis” that LaMDA is not sentient until proven otherwise (which Lemoine agrees with, as far as I could tell).

Where Does LaMDA Go from Here?

To me, it is clear that we need to focus on the bigger picture first: we need to address the scientific question of sentience/consciousness before we start investigating whether LaMDA possesses it. That said, the LaMDA controversy has done a good job of highlighting this issue.

We are likely to see more of this issue in the future. As we “aim” to produce better AI systems that simulate “intelligent behaviour” perfectly, we are likely to fall victim to our own creations and believe that they are sentient.

Having said this, I feel that the way Blake Lemoine has gone about this controversy is respectable. Even though he has his biases and beliefs (just like any other human being), he has been able to separate them from his scientific thinking well enough to propose further scientific investigation.

Google, in my opinion, has handled the situation poorly as an organization. I can understand Google not being willing to invest resources into areas that it deems unprofitable. But being close-minded on ethical, technological, and philosophical issues like these hurts scientific progress in the long run.

If Google doesn’t, someone else would surely be happy to fund a scientific investigation into this topic. But without any help or support from Google, this issue is going to take significantly longer to resolve!


If you’d like to get notified when interesting content gets published here, consider subscribing.

Further reading that might interest you: Why Are Analogue Computers Really On The Rise Again? and Are We Living In A Simulation?

If you would like to support me as an author, consider contributing on Patreon.
