Google bans deepfake-generating AI from its Colab platform


Google has prohibited the use of its Google Colaboratory platform to train AI systems that can be used to generate deepfakes. The updated terms of use, spotted over the weekend by Unite.ai and BleepingComputer, add deepfake-related work to the list of disallowed projects.

Colab spun out of a Google Research initiative in late 2017. It is designed to let anybody write and execute Python code in a web browser, particularly for machine learning, education, and data analysis. Google gives both free and paid Colab users access to hardware including GPUs and AI-accelerating tensor processing units (TPUs).
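For a sense of how Colab is typically used, a minimal sketch of the kind of cell a user might run to confirm which accelerator the notebook's runtime provides is shown below. It assumes PyTorch, which Colab preinstalls by default; the check itself is ordinary Python rather than anything Colab-specific.

```python
# Minimal sketch: confirm whether the current runtime exposes a GPU.
# Assumes PyTorch is installed (Colab ships with it by default).
import torch

if torch.cuda.is_available():
    print("GPU runtime attached:", torch.cuda.get_device_name(0))
else:
    print("No GPU attached; falling back to CPU.")
```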

Within the AI research community, Colab has become the de facto platform for demos in recent years. Researchers who release code often link to Colab notebooks from their GitHub repositories. But Google has not been especially strict about what runs on Colab, which could open the door to misuse.

Last week, DeepFaceLab users received an error notice when attempting to run the program in Colab: “You may be running forbidden code, which may limit your use of Colab in the future.”

However, the warning isn’t always triggered. This reporter ran one of the popular deepfake Colab notebooks without issue, and Reddit users report that FaceSwap still works. This suggests that enforcement is blacklist-based rather than keyword-based, and that the onus will be on the Colab community to report code that violates the new rule.
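To make that distinction concrete, the toy sketch below contrasts the two approaches. It is purely hypothetical and does not reflect Google’s actual enforcement systems; the blocked entries are invented for illustration.

```python
# Hypothetical illustration only; not Google's actual detection logic.
# A blacklist check flags notebooks tied to specific known projects,
# while a keyword check flags any code that merely mentions a banned term.
BLOCKED_PROJECTS = {"DeepFaceLab"}            # example entry, for illustration
BANNED_KEYWORDS = ("deepfake", "faceswap")    # example terms, for illustration

def blacklist_match(notebook_source: str) -> bool:
    return any(project in notebook_source for project in BLOCKED_PROJECTS)

def keyword_match(notebook_source: str) -> bool:
    return any(word in notebook_source.lower() for word in BANNED_KEYWORDS)
```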

“We regularly monitor avenues for abuse in Colab that violate Google’s AI principles, while supporting our purpose to give our users access to valuable resources such as TPUs and GPUs. Deepfakes were added to our list of activities prohibited from Colab runtimes last month in response to our regular reviews of abusive patterns,” a Google spokesperson said.

The spokesperson added, “Deterring abuse is an ever-evolving game, and we cannot disclose specific methods, as counterparties can use them to dodge detection systems. In general, we have automated systems that identify and prevent many types of abuse.”

Deepfakes come in various forms, but one of the most prevalent is videos in which a person’s face has been convincingly pasted on top of another person’s face. Unlike the crude Photoshop jobs of yesteryear, AI-generated deepfakes can match a person’s body movements, microexpressions, and skin tone more convincingly than Hollywood-produced CGI in some cases.

Viral videos show that deepfakes can be harmless and entertaining, but bad actors often misuse them to extort and scam social media users. They have also been used to push political propaganda, such as fabricating videos of Ukrainian President Volodymyr Zelenskyy delivering a speech about the war that he never gave.

By Awanish Kumar

I keep abreast of the latest technological developments to bring you unfiltered information about gadgets.
