The so-called “Godfather of AI,” Geoffrey Hinton, resigned from Google, according to a May 1 article in The New York Times. He cited the freedom to talk openly about the dangers of artificial intelligence (AI) as the justification for this decision.
His choice was both startling and predictable: startling because he has spent his entire career working to advance AI technology, and predictable because of the growing worries he has voiced in recent interviews.
The date of the announcement is symbolic. May Day, observed on May 1, is a holiday honoring laborers and the arrival of spring. Ironically, generative AI based on deep learning neural networks, and AI more broadly, may displace a sizable portion of the workforce. IBM, for instance, is already beginning to feel the effects.
AI taking over human jobs
Others will undoubtedly follow: the World Economic Forum predicts that 25% of jobs could see disruption over the next five years, with AI playing a part. As for spring's renewal, generative AI may instead herald a new era of symbiotic intelligence, in which humans and machines collaborate to create a renaissance of opportunity and plenty.
Hinton's primary concern, according to the Times article, is that AI can now create human-quality text, video, and image content, and that bad actors might use that capability to spread misinformation and disinformation until the general public "will not be able to know what is true anymore."
He also believes that the day when machines surpass even the smartest humans in intelligence is drawing closer. This question has been widely debated, and most AI experts have long held that such a milestone is at least 40 years away. Hinton was among them. Ray Kurzweil, a former Google director of engineering, has by contrast long asserted that this day will arrive in 2029, when AI will easily pass the Turing Test. Kurzweil's timeline was once an outlier; it no longer is.
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
According to Hinton’s May Day interview: “The idea that this stuff [AI] could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Those 30 to 50 years could have been used to prepare businesses, governments, and society through governance practices and regulations. Instead, the wolf is nearly at the door.
Artificial general intelligence
A related subject is the debate about artificial general intelligence (AGI), the stated goal of OpenAI, DeepMind, and other organizations. Most AI systems in use today excel at specialized, narrow tasks, such as playing video games or reading radiology images, but a single algorithm cannot handle both types of tasks well. AGI, by contrast, would think the way a human does, including reasoning, problem-solving, and creativity, and as a single algorithm or network of algorithms it could perform a wide variety of tasks across disciplines at or above a human level.
Much like the argument over when AI will be smarter than humans, at least for specific tasks, predictions of when AGI will be achieved vary greatly, from a few years to several decades or centuries, or even never. New generative AI applications such as ChatGPT, which are built on Transformer neural networks, are compressing these timeline predictions as well.
These generative AI systems are designed to produce human-like text responses to questions and convincing visuals from text prompts, but they also show an extraordinary potential for emergent behaviors: the AI can display novel, complex, and unexpected capabilities that were never explicitly designed.
For instance, the ability of GPT-3 and GPT-4, the models underlying ChatGPT, to generate code is regarded as an emergent behavior, since code generation was never specified in their design. Instead, the capability arose as a consequence of training. Even the models' creators cannot fully explain how or why certain behaviors emerge; the best inference is that they result from massive training data, the Transformer architecture, and the models' strong pattern-recognition abilities.