Image credit: DeepMind
In an effort to stop the spread of false information, Google DeepMind on Tuesday announced a beta version of SynthID, a tool that watermarks and detects computer-generated images by embedding a digital watermark directly into the image's pixels. The watermark is imperceptible to the human eye but remains machine-detectable for identification.
The computer-generated label stays intact even after changes to the image, such as added filters or altered colors.
SynthID can also scan incoming images for the watermark to assess whether they were created by Imagen, one of Google's latest text-to-image generators, reporting one of three levels of certainty: detected, not detected, or possibly detected.
For heavily edited images, SynthID may not detect the watermark reliably; in those cases, the tool can label the image as possibly AI-generated.
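Google has not published how SynthID actually embeds or reads its watermark, but the general idea of pixel-level watermarking with tiered detection confidence can be illustrated with a deliberately simple sketch. The least-significant-bit scheme, the `WATERMARK` pattern, and the confidence thresholds below are all hypothetical stand-ins, not Google's method:

```python
# Toy pixel-level watermark sketch (NOT Google's unpublished SynthID
# algorithm): hide a bit pattern in pixel least-significant bits, then
# bucket a detection score into the three categories SynthID reports.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit pattern

def embed(pixels):
    """Overwrite the LSB of the first len(WATERMARK) pixel values.
    Changing an LSB shifts a pixel value by at most 1 out of 255,
    which is why such a mark is invisible to the human eye."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, set the watermark bit
    return out

def detect(pixels):
    """Score how much of the pattern survives, then report one of the
    three confidence categories (thresholds here are made up)."""
    n = len(WATERMARK)
    match = sum((pixels[i] & 1) == WATERMARK[i] for i in range(n)) / n
    if match >= 0.9:
        return "detected"
    if match >= 0.6:
        return "possibly detected"  # e.g. a heavily edited image
    return "not detected"

image = [200, 13, 76, 41, 99, 155, 8, 230, 120, 64]  # grayscale pixel values
marked = embed(image)
print(detect(marked))  # → detected
```

Edits that flip some of the marked bits degrade the match score rather than erasing it outright, which is how a detector can still answer "possibly detected" for a partially damaged mark.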
“While this technology isn’t perfect, our internal testing shows that it’s accurate against many common image manipulations,” wrote Google in a blog post Tuesday.
Kris Bondi, CEO and founder of Mimoto, a company working to identify and mitigate security breaches, said that while Google’s SynthID is a starting place, the issue of deep fakes will not be addressed by a single solution.
“People forget that bad actors are also in business. Their tactics and technologies continuously evolve, become available to more bad actors, and the cost of their techniques, such as deep fakes, comes down,” said Bondi.
- Google has released a tool to detect AI-generated images.
- The tool can help stop the spread of false information.
- Unlike traditional watermarks, which can be easily altered or removed, SynthID's watermark survives edits without degrading image quality.
“The cybersecurity ecosystem needs multiple approaches to address deep fakes, with collaboration to develop flexibly architected approaches that will evolve to meet and surpass the bad actors’ technology,” added Bondi.