Bard AI chatbot updates will be coming soon, according to Google CEO Sundar Pichai: "We obviously have better models"

Google's experimental AI chatbot Bard has drawn criticism, and CEO Sundar Pichai has responded by pledging to upgrade it soon. "We certainly have more capable models," Pichai said in an interview on The New York Times podcast Hard Fork. "We will be updating Bard to some of our more capable PaLM models soon, possibly as this [podcast] goes online. This will provide greater capabilities, be it in reasoning or coding, and it will answer math questions more accurately. So you will see progress over the coming week."

Pichai noted that Bard currently runs on a "lightweight and efficient version of LaMDA," an AI language model built for dialogue. "In some ways, I feel like we took a souped-up Civic and put it in a race with more powerful cars," he remarked. PaLM, by contrast, is a more recent and larger-scale model that Google says handles tasks such as common-sense reasoning and coding problems more capably.

Bard was first opened to the public on March 21, but it was not as well received as OpenAI's ChatGPT or Microsoft's Bing chatbot. The Verge's own comparisons of these systems found that Bard was generally less useful than its rivals: like any general-purpose chatbot, it can answer a wide variety of questions, but its responses tend to be less fluent and imaginative and often fail to draw on reliable data sources.

Pichai suggested that Google's cautious approach may have contributed to Bard's limited abilities. In his view, it was important not to release a more capable model until the company was confident it could handle one. He confirmed that he has been discussing the project with Google co-founders Larry Page and Sergey Brin, noting that "Sergey has been hanging out with our engineers for a while now." He also said that although he never personally issued the infamous "code red" to spur development, there were likely people within the organization who "sent emails saying there is a code red."

Pichai also addressed concerns that the current pace of AI development is too fast and could pose risks to society. Several members of the AI and tech communities have warned about the potentially dangerous race dynamic now playing out between OpenAI, Microsoft, Google, and other companies. In an open letter published earlier this week, Elon Musk and leading AI researchers called for a six-month pause on the development of advanced AI systems.

Responding to the open letter, Pichai said, "In this area, I think it's important to hear concerns. And I believe there is reason to be concerned about it." He added that "AI is too important an area not to regulate," though he argued it would be better to adapt existing rules, such as privacy and healthcare regulations, than to write new laws specifically for AI. No one knows all the answers, and no single company can get it right.

Some experts are concerned about immediate risks, such as chatbots' tendency to spread misinformation, while others warn of existential threats, arguing that these systems may become so difficult to control that they could act destructively once connected to the wider web.
Some also claim that existing programs are approaching systems at least as capable as a human across a wide range of tasks, often referred to as artificial general intelligence, or AGI. According to Pichai, it hardly matters whether AGI has been reached, because these systems are going to be very, very capable either way. "Can you have an AI system that spreads false information at scale? Yes. Is it AGI? It doesn't matter at all. Why do we need to worry about AI safety? Because you have to prepare for this and adapt to the situation."

By Omal J

She has worked for both print and electronic media as a feature journalist. Writing, traveling, and DIY sum up her life.
