Google invests $300mn in artificial intelligence start-up and ChatGPT rival Anthropic
February 04, 2023 By Awanish Kumar
(Image credit: Google)
The Financial Times has just reported that Google has invested $300 million in Anthropic, one of the most talked-about OpenAI competitors, whose recently unveiled generative AI model Claude is thought to be competitive with ChatGPT.
Google will reportedly acquire a stake of roughly 10%. The fresh capital values the San Francisco-based start-up at about $5 billion.
The move, which comes just days after Microsoft reportedly invested $10 billion in OpenAI, signals a fiercely competitive race among Big Tech companies in generative AI.
Anthropic founded by OpenAI researchers
Anthropic, founded in 2021 by a group of researchers who left OpenAI, gained wider attention in April 2022 when, less than a year after its founding, it disclosed a striking $580 million in funding. Most of that money turned out to come from Sam Bankman-Fried and employees of FTX, the now-collapsed cryptocurrency exchange accused of fraud. Whether a bankruptcy court could claw back those funds remains an open question.
Anthropic and FTX have both been connected to the Effective Altruism movement, which former Google researcher Timnit Gebru recently criticized as a "hazardous brand of AI safety" in a Wired opinion piece.
Also read: Microsoft invests billions in a new partnership with the company behind ChatGPT
Google will have access to Claude
Anthropic's AI chatbot, Claude, is reportedly similar to ChatGPT and even improves on it in some respects. It is currently available in closed beta through a Slack integration. Anthropic built Claude using a technique it calls "Constitutional AI," which it says is grounded in principles such as beneficence, non-maleficence, and autonomy. The company describes itself as "trying to build dependable, interpretable, and steerable AI systems."
According to an Anthropic paper describing Constitutional AI, the process involves a supervised learning phase followed by a reinforcement learning phase: "As a result, we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them."
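To make the two phases concrete, here is a minimal structural sketch of the idea. Everything below is a hypothetical stand-in, not Anthropic's actual code or API: `generate`, `critique_and_revise`, and `preference_label` are placeholder functions illustrating where a real language model would be called.

```python
# Illustrative sketch of the two Constitutional AI phases described in
# Anthropic's paper. All functions are hypothetical stubs, not a real
# implementation.

CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that explains objections to harmful requests.",
]

def generate(prompt):
    # Stand-in for sampling a draft answer from a language model.
    return f"draft answer to: {prompt}"

def critique_and_revise(response, principle):
    # Phase 1 (supervised): the model critiques its own draft against a
    # constitutional principle and rewrites it; the revised outputs
    # become supervised fine-tuning data.
    return f"revised [{principle[:20]}...]: {response}"

def preference_label(response_a, response_b, principle):
    # Phase 2 (RL from AI feedback): the model itself picks whichever
    # response better follows the principle; these AI-generated
    # preferences train a reward model for reinforcement learning.
    # (Here, a toy heuristic stands in for the model's judgment.)
    return response_a if len(response_a) <= len(response_b) else response_b

def build_sft_example(prompt):
    # Run the draft through one critique/revision pass per principle.
    draft = generate(prompt)
    for principle in CONSTITUTION:
        draft = critique_and_revise(draft, principle)
    return {"prompt": prompt, "completion": draft}

example = build_sft_example("How do I pick a lock?")
winner = preference_label("short direct refusal with reasons",
                          "a much longer and more evasive non-answer",
                          CONSTITUTION[0])
```

The key design point the paper emphasizes is that both the revision step and the preference labels come from the model itself, guided only by the written principles, rather than from human annotators.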