Image credit: PetaPixel
According to robotics researcher and AI expert Rodney Brooks, our view of OpenAI’s large language models, including the well-known chatbot ChatGPT, is highly overstated.
In an interview with IEEE Spectrum, Brooks argues that these technologies are significantly less intelligent than is often believed and are incapable of matching human cognition in any task. He contends that we have repeatedly miscalculated when attempting to foresee AI’s potential.
The main topic of the conversation was whether artificial general intelligence (AGI) is close to matching human levels of intellectual prowess. “No,” Brooks responds to the publication’s question. “It doesn’t have any connection to the world. It just has correlations between language.”
#FPTech2: #ChatGPT-like #AIbots are more stupid than people realise, says AI expert #RodneyBrooks, a #robotics researcher. He believes that #AI bots are good with language and some reasoning, but are unable to infer proper meaning.https://t.co/gHQiLipxgH
— Tech2 (@tech2eets) May 22, 2023
That is all these models are, in his view: good with language and some reasoning.
Brooks’s remarks are a crucial reminder of the current limitations of AI technology, and of how easily we read meaning into its outputs, despite the fact that these systems were built to sound human rather than to reason.
“When we observe a person’s actions, we quickly assess their broader capabilities and make judgments. However, our methods of generalizing from performance to competence do not apply to AI systems,” Brooks explains to IEEE Spectrum.
A false sense of significance
In other words, existing language models cannot logically infer meaning, despite giving the impression that they can, which can mislead users.
Brooks emphasizes, “What large language models excel at is producing answers that sound correct, which is different from actually being correct.”

The researcher acknowledges his own experiments with large language models to assist him with intricate coding tasks, but he encountered significant difficulties. “It provides an answer with unwavering confidence, and I tend to trust it,” Brooks confesses to IEEE Spectrum. “However, half the time, it is completely wrong. I spend two or three hours following that hint, only to realize it didn’t work, and then it suggests an entirely different approach.” He adds, “Now, that is not intelligence. It is not interaction. It is simply looking up information.”