Last month, it was reported that Blake Lemoine, a software engineer on Google’s AI development team, had publicly claimed that LaMDA, the company’s conversational AI, was sentient. After Lemoine contacted government officials about his concerns and hired a lawyer to represent LaMDA, Google placed him on paid administrative leave for violating a confidentiality agreement. Now the company has fired Lemoine outright, calling his claims “wholly unfounded.”
Google’s position aligns with that of numerous AI experts and ethicists, who have likewise argued that Lemoine’s claims are essentially impossible given today’s technology. Lemoine maintains that his conversations with the chatbot convinced him that LaMDA had become more than a program producing realistic dialogue, and that it possesses thoughts and feelings of its own.
He argues that Google researchers should obtain LaMDA’s consent before running experiments on it (Lemoine himself had been tasked with testing whether the AI produced hate speech). As evidence, he posted excerpts of those conversations on his Medium account.
Google also states that LaMDA has undergone 11 separate reviews. The company says it “extensively” examined Lemoine’s claims and found them to be “wholly unfounded.” Despite having “engaged with him at length to clarify this over many months,” the company says, Lemoine “still chose to persistently violate clear employment and data security policies.”
Source: The Verge