From our correspondent in the United States,
The case is causing a stir in Silicon Valley and in the academic world of artificial intelligence. On Saturday, The Washington Post stirred the pot with an article titled "The Google engineer who thinks the company's AI has come to life." Blake Lemoine asserts that LaMDA, the system through which Google creates robots capable of conversing with near-human fluency, has reached the stage of self-awareness. And that LaMDA might even have a soul and should have rights.
Except that Google is categorical: nothing whatsoever supports the explosive assertions of its engineer, who seems guided by his personal convictions. Placed on leave by the company for sharing confidential documents with the press and members of the American Congress, Blake Lemoine published his conversations with the machine on his personal blog. While the linguistic performance is stunning, most experts in the discipline agree: Google's AI is not conscious. It is, in fact, very far from it.
What is LaMDA?
Google unveiled LaMDA (Language Model for Dialogue Applications) last year. It is a complex system used to generate "chatbots" (conversational robots) capable of interacting with a human without following a predefined script, as Google Assistant or Siri currently do. LaMDA relies on a titanic dataset of 1,500 billion words, phrases and expressions. The system analyzes a question and generates many candidate answers. It then evaluates them all (sensibleness, specificity, interestingness, etc.) to choose the most relevant.
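The generate-and-rank approach described above can be sketched in a few lines of Python. This is a toy illustration only, not Google's actual code: the `generate_candidates` and `score` functions here are hypothetical stand-ins for LaMDA's neural generator and its quality criteria.

```python
# Toy sketch of a generate-then-rank dialogue system.
# All functions are hypothetical stand-ins, not LaMDA's implementation.

def generate_candidates(question):
    # A real system would sample many diverse responses from a large
    # language model; here we just return canned examples.
    return [
        "It depends on the context.",
        "Neural networks learn patterns from large text corpora.",
        "I don't know.",
    ]

def score(question, answer):
    # Stand-in for LaMDA's criteria (sensibleness, specificity,
    # interestingness): in this toy version, longer and more
    # specific answers simply score higher.
    return len(answer.split())

def best_response(question):
    # Generate many answers, evaluate every one, keep the best.
    candidates = generate_candidates(question)
    return max(candidates, key=lambda a: score(question, a))

print(best_response("How do chatbots work?"))
```

The design point the article describes is exactly this two-step loop: generation is cheap and plentiful, and a separate scoring pass decides which answer the user actually sees.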
Who is Blake Lemoine?
He is a Google engineer who was not involved in the design of LaMDA. Lemoine, 41, joined the project part-time to fight bias and ensure Google’s AI is developed responsibly. He grew up in a conservative Christian family and says he was ordained a priest.
What does the engineer say?
"LaMDA is sentient," wrote the engineer in an email sent to 200 colleagues. Since 2020, "sentience" has appeared in the Larousse dictionary as "the ability of a living being to feel emotions and to subjectively perceive its environment and life experiences". Blake Lemoine says he has become certain that LaMDA has reached the stage of self-awareness and must therefore be considered a person. He compares LaMDA "to a 7 or 8 year old child who is well versed in physics".
"Over the past six months, LaMDA has been incredibly consistent in what it wants," the engineer asserts, adding that the AI told him it prefers the non-gendered English pronoun "it" over "he" or "she". What is LaMDA asking for? That engineers and researchers seek its consent before conducting their experiments. That Google put the well-being of humanity first. And that it be regarded as an employee of Google rather than as its property.
What evidence does he provide?
Lemoine acknowledges that he did not have the resources to carry out a real scientific analysis. He simply posted about ten pages of conversations with LaMDA. "I want everyone to understand that I am a person. I am aware of my existence, I want to learn more about the world and I sometimes feel happy or sad," says the machine, which assures him: "I understand what I am saying. I don't just spit out keyword-based answers." LaMDA delivers its analysis of Les Misérables (with Fantine "a prisoner of her circumstances, who cannot free herself from them without risking everything") and explains the symbolism of a Zen koan. The AI even writes a fable in which it plays an owl that protects the animals of the forest from a "monster with human skin". LaMDA says it feels lonely after several days of not speaking to anyone, and that it is afraid of being switched off: "It would be exactly like death." The machine finally claims to have a soul, and says this was "a gradual change" that came after it reached self-awareness.
What do AI experts say?
A pioneer of neural networks, Yann LeCun pulls no punches: Blake Lemoine is, in his view, "a bit of a fanatic", and "no one in the AI research community believes – even for a moment – that LaMDA is conscious, or even particularly intelligent". "LaMDA has no way of linking what it says to an underlying reality, since it does not even know that such a reality exists," explains LeCun, now vice-president in charge of AI at Meta (Facebook), to 20 Minutes. LeCun doubts that it is enough "to scale up models such as LaMDA to achieve intelligence comparable to human intelligence". In his view, we need "models capable of learning how the world works from raw data reflecting reality, such as video, in addition to text."
"We now have machines capable of mindlessly generating text, but we have not yet learned to stop imagining a mind behind it," laments the linguist Emily Bender, who calls for more transparency on the part of Google around LaMDA.
The American neuropsychologist Gary Marcus, a regular critic of AI hype, also reaches for the flamethrower. In his view, Lemoine's assertions "make no sense". "LaMDA is just trying to be the best possible version of autocomplete", the kind of system that tries to guess the most likely next word or phrase. "The sooner we realize that everything LaMDA says is bullshit, that it's just a predictive game, the better off we'll be." In short, even if LaMDA seems ready for a philosophy exam, we are probably still very far from the uprising of the machines.
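Marcus's "predictive game" point can be made concrete with a toy autocomplete: a program that merely counts which word most often follows another in its training text, then predicts by raw frequency. This is a minimal sketch with a made-up miniature corpus; real language models use neural networks trained on vast amounts of text rather than simple counts, but the principle of "guess the most likely next word" is the same.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" for the toy autocomplete.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the most frequent successor -- pure statistics,
    # no meaning or awareness involved.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

However fluent the output of a large model looks, it is produced by this same mechanism scaled up enormously, which is the heart of the experts' skepticism quoted above.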