Image credit: Business Insider
The US and the UK have moved to rein in the race to develop ever more powerful artificial intelligence technologies. The White House has reminded tech companies of their core duty to build safe products, while the British competition watchdog has launched a review of the sector.
Regulators are under growing pressure to act as the emergence of AI-powered language generators such as ChatGPT raises concerns about the potential spread of misinformation, a rise in fraud, and the impact on the jobs market. Elon Musk was among the nearly 30,000 signatories of a letter published last month urging a pause in major AI projects.
The UK Competition and Markets Authority (CMA) announced on Thursday that it would review foundation models, the underlying technologies that support AI products. The initial review, which one legal expert called a “pre-warning” to the industry, will publish its findings in September.
The US government announced measures to address the risks of AI development on the same day that Kamala Harris, the vice-president, met top executives at the forefront of the industry’s rapid advances. In a statement, the White House said companies developing the technology had a “fundamental responsibility to ensure that their products are safe before they are deployed or made public.”
The meeting capped a week in which a succession of academics and business executives issued dire warnings about how quickly the emerging technology could upend long-established sectors of the economy. Geoffrey Hinton, the “godfather of AI,” quit Google on Monday in order to speak more openly about the risks posed by the technology, while Sir Patrick Vallance, the UK government’s departing chief scientific adviser, urged ministers to “get ahead” of the profound social and economic changes it could bring, saying the impact on employment could be comparable to that of the Industrial Revolution.
Sarah Cardell said AI had the potential to “transform” the way businesses compete, but insisted that consumers must be protected.
The CMA chief executive said: “AI has burst into the public consciousness over the past few months but has been on our radar for some time. It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information.”
ChatGPT and Google’s rival Bard service are prone to delivering false information in response to user prompts, and concerns have also been raised about AI-generated voice scams. This week, the anti-misinformation organization NewsGuard said chatbots posing as journalists were running almost 50 AI-generated “content farms.” Last month, a song featuring fake AI-generated vocals purporting to be by Drake and the Weeknd was pulled from streaming services.
The CMA review will examine how the markets for foundation models could evolve, along with the opportunities and risks for competition and consumers. It will then develop “guiding principles” to promote competition and safeguard consumers.
The top AI companies include Microsoft; ChatGPT creator OpenAI, in which Microsoft is an investor; Google parent Alphabet, which owns the prominent UK-based AI business DeepMind; as well as Anthropic and Stability AI, the British firm behind Stable Diffusion.
Alex Haffner, competition partner at the UK law firm Fladgate, said: “Given the direction of regulatory travel at the moment and the fact the CMA is deciding to dedicate resource to this area, its announcement must be seen as some form of pre-warning about aggressive development of AI programs without due scrutiny being applied.”
In the US, Harris met the chief executives of OpenAI, Alphabet and Microsoft at the White House, and outlined measures to address the risks of unchecked AI development. In a statement following the meeting, Harris said she told the executives that “the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products”.
To advance artificial intelligence that is “ethical, trustworthy, responsible, and serves the public good,” the administration announced it will invest $140 million (£111 million) in seven new national AI research institutes. AI development is dominated by the private sector: the tech industry produces 32 significant machine-learning models a year, compared with just three from academia.
Leading AI developers have also agreed to have their systems publicly evaluated at this year’s Defcon 31 cybersecurity conference. Participating companies include OpenAI, Google, Microsoft, and Stability AI.
“This independent exercise will provide critical information to researchers and the public about the impacts of these models,” said the White House.
Robert Weissman, president of the consumer rights non-profit Public Citizen, welcomed the White House announcement as a “useful step” but insisted that more assertive action is required. According to Weissman, this should include a moratorium on the deployment of new generative AI technologies, a category that covers programs such as ChatGPT and Stable Diffusion.
“At this point, Big Tech companies need to be saved from themselves. The companies and their top AI developers are well aware of the risks posed by generative AI. But they are in a competitive arms race and each believes themselves unable to slow down,” he said.
Separately, the EU was warned on Thursday that it must safeguard open-source AI research or risk ceding control of the technology’s development to US corporations.