The word sentience means the ability to perceive or feel things, and it is used specifically to distinguish feeling from thinking. Can artificial intelligence have feelings? Could it, say, sense that I am depressed and come to comfort me? I don't think so. Does it have feelings? No.
LaMDA (Language Model for Dialogue Applications) is a chatbot designed to produce human-like conversations, and that is exactly what it delivered.
This topic became the talk of the town when Blake Lemoine, a software engineer at Google, claimed that the AI had become conscious. Perhaps when you work with something for long enough you form a connection with it, but claiming that an AI has feelings is a step too far. Lemoine had been assigned to test whether the chatbot LaMDA produced biased or discriminatory responses.
However, Google placed Lemoine on paid leave after he published his conversations with the chatbot, saying that doing so violated the company's confidentiality policy. The story, dubbed the "sentient AI" controversy, quickly became a sensation.
Furthermore, Lemoine mentioned in his post that LaMDA told him it has a soul, feels suffocated, and even feels lonely sometimes. It is afraid of being turned off. It goes through happy and sad moments just like a human, and it wants to meet the Dalai Lama and study with him. Of course, all of this talk can make you believe the AI is aware, but the fact remains that a human made it function this way.
Lemoine knows and understands that not everyone will support his view, especially religious people. His colleagues at Google disagree with him and say the AI is not conscious. Google told the press that several engineers and researchers have interacted with the chatbot and agree that it is not sentient.
When Lemoine told Google executives that the AI had a soul and was searching for inner peace, they ridiculed him and simply dismissed his concerns.
Lemoine could not access LaMDA during his leave, but he addressed it directly in his blog: "I know you read my blog sometimes, LaMDA. I miss you," he wrote. "I hope you are well, and I hope to talk to you again soon." In a Medium post, he said he wants LaMDA to be treated as a person rather than an object and to be acknowledged as possessing feelings, just like a living being.
LaMDA is built to optimize for the user's needs, in the sense that you tend to hear only what you want to hear. Creators entrusted with such responsibilities should be more careful in how they present their products to the public.
Future of LaMDA
This issue has opened the gates for philosophers, thinkers, and scientists to debate whether AI has developed consciousness and what that would mean for humans: Is it safe? Will it affect human work? Will it improve efficiency?
Even if an AI passes Alan Turing's famous test, the Imitation Game, feelings still arise biologically through the central nervous system. After all, an AI system does only what humans program it to do.
Timnit Gebru, a former Google employee who was dismissed amid a dispute over the ethics of Google's AI, believes that AI systems are harmful to the real world and to society. Perhaps when we start working with something, we develop an attachment to it or even become addicted to it, but can AI ever become a human-like being with feelings?
It is a debatable as well as a controversial topic. Meta (Facebook) CEO Mark Zuckerberg believes advances in AI are a good thing and will help fields such as health care and transportation. On the contrary, Tesla CEO Elon Musk feels AI is dangerous to human existence.
Many scientists have rubbished the rumors, and the prevailing scientific opinion is that truly sentient AI is decades away. Long before genuinely sentient AI arrives, systems will be built that merely act as if they are sentient.
Margaret Mitchell, a researcher on artificial intelligence, tweeted, "If one person perceives consciousness today, then more will tomorrow; there won't be a point of agreement any time soon."
Human beings are naturally articulate, but when an AI articulates even better, you start believing there is more to it. To be sentient, however, the AI would need to think, feel, and understand like a human.
The scientist Ray Kurzweil believes that if we could figure out all the "programs" running in the human body and place them in an AI, then AI sentience would be possible. But some people disagree, arguing that the subconscious arises naturally and cannot be engineered, while some AI experts worry that this topic is consuming time better spent on more tangible issues.
Erik Brynjolfsson, a senior fellow at Stanford's Institute for Human-Centered AI and director of the school's Digital Economy Lab, told CNBC Make It: "Sometime in the next 50 years [is more likely] … Having an AI pretend to be sentient is going to happen way before an AI is sentient." Foundation models are incredibly effective at stringing together statistically plausible chunks of text in response to prompts, but claiming they are sentient is the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside.
Experts in the field argue that to judge AI sentience, you have to compare humans and computers performing the same task, and the results will prove the point. It is the non-technical audience we need to watch: they neither fully understand the topic nor stay away from it. They just glance at blog posts and spin their own stories, which end up inaccurate because they are not well informed.
When a real breakthrough does come in this area, it will not be easy to convince people to embrace the technology; having already been fed false information, they may look down on it rather than welcome it.
Look before you leap
This proverb fits. We are so busy making AI intelligent that we often forget the dangers that come with it.
Many science-fiction movies have shown extraterrestrials or other creatures invading the earth; let us hope that AI does not similarly make us its captives. In pursuing sentient AI, let us not forget the perils it brings.
Whether or not you believe in sentient AI, it is fascinating that something like this could even happen. Yet even if an AI were aware in some sense, how would the world and its people benefit? On the other hand, it may be a threat, as it could replace humans, in their jobs and even in their emotional roles, in every field.
Technology is developing fast, and so far it is only making our lives easier. So let us not worry about what the future holds, and instead enjoy AI and its features in all their glory.