Google may be a step closer to limiting the spread of deepfakes with its new AI tool, SynthID, which embeds a digital watermark in AI-generated images in a bid to help distinguish them from real images.
As generative AI tools continue to advance, producing images, video, and audio that can pass for real, the ability to distinguish real from AI-generated content is paramount to curbing the spread of misinformation.
Part of the solution
According to an article by The Washington Post, Google said its watermarking technique is very difficult to tamper with, making it a step toward limiting the spread of deepfakes and misinformation.
While the watermark cannot be detected by the naked eye, Google has indicated it can easily be picked up by computers trained to do so, according to AI researchers. The development could address one of the pitfalls of generative AI: the spread of deepfakes, a challenge that has spurred calls for urgent regulation of the industry.
In May this year, AI-generated images of an explosion at the Pentagon went viral, causing a momentary stir on the stock markets, while other fake images showing former US President Donald Trump tussling with police also made headlines. While companies have tried to label AI-generated images, the techniques available are easy to get around, which means AI images can still be passed around as real ones.
Guardrails inevitable in election season
Google DeepMind CEO Demis Hassabis has stressed the importance of building systems that can detect AI-generated images as the US heads into the presidential election scheduled for 2024. Hassabis said the new tool embeds the watermark in the pixels of the image without altering the image itself or its quality.
“It doesn’t change the image, the quality of the image, or the experience of it. But it’s robust to various transformations—cropping, resizing, all the things that you might do to try and get around normal, traditional, simple watermarks,” he said.
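Google has not published how SynthID works, but the general idea of an invisible pixel-level watermark can be illustrated with a deliberately simple technique: hiding a bit pattern in the least-significant bit of each pixel. Note that this toy method, unlike SynthID as Hassabis describes it, is *not* robust to cropping, resizing, or re-encoding; it is a sketch of the concept only, and all names here are illustrative.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least-significant bit of each pixel.

    Toy illustration of an invisible watermark; SynthID's actual
    technique is unpublished and far more robust than this.
    """
    flat = image.flatten().copy()
    payload = np.resize(bits, flat.shape)   # tile the signature across all pixels
    flat = (flat & 0xFE) | payload          # overwrite only the lowest bit
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the first n_bits hidden bits."""
    return image.flatten()[:n_bits] & 1

# Usage: an 8-bit "signature" survives embedding and extraction, and no
# pixel changes by more than 1 intensity level, so the eye can't see it.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
sig = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
marked = embed_watermark(img, sig)
assert np.array_equal(extract_watermark(marked, 8), sig)
assert np.max(np.abs(marked.astype(int) - img.astype(int))) <= 1
```

The fragility of this scheme is exactly why Hassabis's robustness claim matters: a single resize destroys least-significant-bit payloads, whereas SynthID is said to survive such transformations.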
Currently, SynthID is available only to paying customers of Google's Cloud computing services, and it is reported to work only on images generated with Google's Imagen image generator. The tool is reportedly still experimental, so Google is asking customers to be patient.
More still to be done
DeepMind vice president Pushmeet Kohli says that while this is an attempt to create a system that can reliably identify fake images through embedded watermarks, users should also know the new watermarking tool is not yet foolproof.
“There are other techniques that are out there for embedded watermarking, but we don’t think they are that reliable,” he said.
Tech companies have already pledged at the White House to create tools that can watermark AI-generated content. Microsoft has spearheaded a coalition of tech companies to create standards guiding the development of watermarking tools. OpenAI's image generator, DALL-E, also adds a visible watermark, according to The Washington Post. This has been seen as one way of allowing innovation to proceed responsibly.
Google launched SynthID at its Cloud Next conference, where it showcases new features to business customers. Many companies are now putting effort into identifying AI-generated content as AI tools improve, with image generators expected to get even better. Tools like SynthID will come in handy in helping curb misinformation.