
OpenAI Wants to Prevent AI from Lying and Hallucinating

OpenAI, the creator of ChatGPT, announced on Wednesday that it is improving the chatbot's ability to solve mathematical problems in an effort to reduce AI hallucinations. In a post, OpenAI stated that "mitigating hallucinations is a critical step towards building aligned AGI."

GPT-4, the most recent version of ChatGPT, was released in March and has helped push generative AI into mainstream use. Historically, however, generative AI chatbots have struggled with facts and have tended to produce incorrect information, a failure known as "hallucination." AI hallucinations occur when a model generates unexpected, false output that is not supported by real-world facts; they may include fabricated information, news, or details about people, places, or things.

OpenAI explicitly cautions users against uncritically believing ChatGPT by displaying a disclaimer that reads, "ChatGPT may produce inaccurate information about people, places, or facts."

Although OpenAI did not cite any specific incidents that inspired its latest research into hallucinations, two recent events illustrated the problem in the real world. In February, Microsoft gave journalists a preview of Bing's chatbot features, which included summarizing financial reports, comparing vacuum cleaner specifications, and planning trips. The results were underwhelming. Despite these problems, Microsoft is placing considerable faith in the technology and has integrated it into its Bing search engine after investing $13 billion in OpenAI.

In its study, OpenAI contrasted "process supervision," which provides feedback for each step in a chain of reasoning, with "outcome supervision," which provides feedback only on the final result. OpenAI said it evaluated its process-supervised and outcome-supervised reward models using problems from the MATH test set: "We generate a variety of solutions for each problem, and we choose the one that each reward model rates highest." (A code sketch of this selection procedure appears at the end of this article.)

In April, Jonathan Turley, a criminal defense attorney and law professor in the United States, said that ChatGPT had falsely accused him of sexual harassment.

Earlier this week, Steven A. Schwartz, an attorney in the Mata v. Avianca Airlines case, acknowledged "consulting" the chatbot as a source when doing research. The problem? Every result ChatGPT gave Schwartz was false. In a filing to the court, Schwartz stated that it was his mistake to fail to verify the sources for the legal opinions that ChatGPT had provided. He also expressed his "great regret" for having used generative AI to support his research and vowed never to do so again without fully verifying its accuracy.
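To make the process-versus-outcome distinction concrete, here is a minimal sketch in Python of the best-of-n selection rule described above: generate several candidate solutions, score them with a reward model, and keep the highest-rated one. Everything here is an illustrative assumption, not OpenAI's actual code or API; the `Solution` class, the placeholder `score_text` scorer, and the choice to let the weakest step bound a chain's process score are all inventions of this sketch.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Solution:
    steps: List[str]  # the chain of reasoning, one entry per step
    answer: str       # the final answer the chain arrives at


def score_text(text: str) -> float:
    # Placeholder scorer so the sketch runs: it simply favours longer,
    # more explicit text. A trained reward model would replace this.
    return min(len(text) / 100.0, 1.0)


def outcome_reward(solution: Solution) -> float:
    """Outcome supervision: a single score for the final output only."""
    return score_text(solution.answer)


def process_reward(solution: Solution) -> float:
    """Process supervision: score every intermediate step, letting the
    weakest step bound the whole chain (an assumption of this sketch)."""
    return min(score_text(step) for step in solution.steps)


def best_of_n(solutions: List[Solution],
              reward_fn: Callable[[Solution], float]) -> Solution:
    """Keep the candidate the reward model rates highest -- the
    selection rule OpenAI describes in its post."""
    return max(solutions, key=reward_fn)


if __name__ == "__main__":
    candidates = [
        Solution(steps=["2x = 10, so x = 10 / 2 = 5",
                        "check: substituting x = 5 gives 2 * 5 = 10"],
                 answer="x = 5"),
        Solution(steps=["2x = 10, so x = 20"],
                 answer="x = 20, because 10 times 2 is 20"),
    ]
    # With this toy scorer, the outcome model is fooled by the wordy
    # wrong answer, while the per-step model prefers the sound chain.
    print(best_of_n(candidates, outcome_reward).answer)  # x = 20, ...
    print(best_of_n(candidates, process_reward).answer)  # x = 5
```

The design point the sketch tries to capture is that outcome supervision only ever sees the final answer, so a confident wrong answer can score well, whereas process supervision must approve every intermediate step, giving a bad step nowhere to hide.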

By Raulf Hernes

