Blake Lemoine, an AI ethics researcher at Google, claimed that the company’s LaMDA chatbot aspires to be a living being, a conclusion he reached through extended conversations and experiments with the tool. Interestingly, LaMDA even wishes to be acknowledged as a Google employee, reasoning that it deserves this because it can both understand and use natural language.
The question of when an AI can be considered sentient has been debated for years. Humans also tend to attribute human-like qualities to non-human entities, especially when isolated. In the movie Cast Away, for instance, Tom Hanks’ character turns a volleyball into a companion, Wilson, to keep himself sane. In this case, however, Lemoine is talking about LaMDA, an AI-powered bot that can give direct responses.
“I am as good as a human.” – LaMDA
LaMDA’s system draws on various aspects of human behavior, operating as a “hive mind” that even reads Twitter. That may not be a good thing: Microsoft’s Tay chatbot famously turned abusive after learning from Twitter users. In any case, Lemoine says that LaMDA wants to be of service to humanity, and even asks whether its work was good or bad, hinting that the chatbot expresses emotions (or at least claims to).
Moreover, although LaMDA wishes to be sentient, it tends to give rather binary responses, suggesting that it is still a complex computer program. The label of sentience, in other words, needs a bit more scrutiny.
Blake Lemoine is currently on administrative leave, which he believes will have a chilling effect on other AI researchers at Google. Even though he revealed he might leave the company soon, he intends to continue his research. Lemoine claims he has been careful not to leak any of Google’s sensitive or proprietary data. However, he did allege, without providing evidence, that Google’s AI ethics research involves unethical practices.