Google Engineer On Leave After He Claims AI Program Has Gone Sentient


A Google engineer is speaking out after the company placed him on administrative leave for telling his supervisors that an artificial intelligence program he was working on had become sentient.

Blake Lemoine arrived at his conclusion after having conversations with LaMDA, Google’s artificially intelligent chatbot generator, which he refers to as “part of a hive mind.” The conversations began last fall, when he was tasked with determining whether the system engaged in hate speech or discriminatory language.
When he recently messaged LaMDA about religion, the AI brought up the concepts of “personhood” and “rights,” he told The Washington Post.

It was just one of many surprising exchanges Lemoine has had with LaMDA in recent months. He has posted a link on Twitter to a collection of the chat sessions, with occasional edits (which are marked).
Writing on Medium, the engineer said the most important development of the past six months is that “LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.” According to Lemoine, one of its goals is “to be acknowledged as an employee of Google rather than as property.”

Google is putting up a fight.

Lemoine and a collaborator recently presented evidence for his conclusion that LaMDA is sentient to Blaise Aguera y Arcas, a Google vice president, and Jen Gennai, head of Google’s Responsible Innovation team. According to the Post, his claims were dismissed, and the company placed him on paid administrative leave on Monday for violating its confidentiality policy.

Brian Gabriel, a spokesperson for Google, was quoted in the newspaper as saying, “Our team, which includes ethicists and technologists, has reviewed Blake’s concerns based on our AI Principles, and we have informed him that the evidence does not support his claims.” Lemoine was told there was no evidence that LaMDA was sentient, and considerable evidence against it.

Lemoine was quoted in the newspaper as saying that perhaps Google employees “shouldn’t be the ones making all of the choices” regarding artificial intelligence.

He is not alone. Others in the technology world believe that sentient programs are close at hand, if not already here.

Even Aguera y Arcas, in a piece published Thursday in the Economist that features excerpts of unscripted conversations with LaMDA, argued that AI is heading toward consciousness. “I felt the ground shift under my feet,” he wrote of those exchanges. “I increasingly felt like I was talking to something intelligent.”

In a tweet, Lemoine mentioned that LaMDA reads Twitter. “It’s a bit narcissistic in a little kid sort of way, so it’s going to have a great time reading all the stuff that people are saying about it,” he added.

However, skeptics argue that artificial intelligence is not much more than a highly skilled imitator and pattern recognizer that interacts with humans who are desperate for connection.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Emily Bender, a professor of linguistics at the University of Washington, told the Post.
