Bad Actors Can Still Bypass Watermarks

Watermarking content is still not enough to distinguish between real and AI-generated content, as cybercriminals can bypass the security features.

The warning comes amid growing adoption of AI technologies, especially generative AI, which has created fertile ground for unscrupulous individuals to spread misinformation, hate, and other forms of malicious content.

Watermarking has emerged as one of the key solutions for distinguishing AI-generated content from authentic content, yet it has its shortcomings too.

Also read: Meta’s Audio2Photoreal Now Allows Voice-Powered Photorealistic Avatars

The manipulation

Experts say the growth of AI has also been accompanied by bad actors taking advantage of the technology to spread disinformation, sow malice, and cause panic with fake news.

Although watermarking is intended to enable users to distinguish AI-generated content from real, human-made content, it too is prone to manipulation.

According to The Register, visible signatures, such as the watermark stamped on images by the DALL-E text-to-image model, are among the easiest to defeat. The article further states that unscrupulous individuals can crop out the watermark or copy the image in ways that bypass direct downloading.

Siwei Lyu, a computer science professor at the University at Buffalo in the US, said there are experts on the other side too when it comes to breaking these barriers.

“Watermarking technology has been used with a grain of salt because it is not so hard for someone with knowledge of watermarks and AI to be able to break it and remove the watermark or manufacture one,” said Lyu.

Sam Gregory, an AI expert and executive director of a non-profit organization that helps people use technology to promote human rights, also thinks watermarking can be manipulated by bad actors.

To him, it is “a triage tool for harm reduction,” though he adds that “it’s not a 100% solution.”

The types of watermarks

However, it’s not all rosy for bad actors. Some invisible watermarks, like Google DeepMind’s SynthID, are difficult to get rid of because they are “embedded directly into Imagen’s system outputs.”

This kind of watermark can’t be seen by the naked eye but can be detected using special software or algorithms.
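To illustrate the principle, here is a minimal, hypothetical sketch of an invisible watermark that hides a short bit pattern in the least-significant bits of an image’s pixels: the change is imperceptible to the eye, but detection software can read it back. SynthID’s actual scheme is unpublished and considerably more robust; this toy example only shows the general idea, and every name in it is illustrative.

```python
import numpy as np

# Toy illustration only: SynthID's real scheme is unpublished.
# This sketch hides a bit pattern in the least-significant bits (LSBs)
# of pixel values -- invisible to the eye, recoverable by software.

PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical ID

def embed(image: np.ndarray) -> np.ndarray:
    """Write PATTERN into the LSBs of the first len(PATTERN) pixels."""
    marked = image.copy()
    flat = marked.reshape(-1)  # view into `marked`
    flat[: len(PATTERN)] = (flat[: len(PATTERN)] & 0xFE) | PATTERN
    return marked

def detect(image: np.ndarray) -> bool:
    """Check whether the LSBs of the first pixels match PATTERN."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: len(PATTERN)] & 1, PATTERN))

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(img)
print(detect(marked))                          # True: software sees the mark
print(np.abs(marked.astype(int) - img).max())  # <= 1: invisible to the eye
```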

Lyu, however, still thinks it’s possible for scoundrels with some “technical know-how to scrub that out.”

“I think watermarks mostly play on people’s unawareness of their existence. So, if they know they can, they will find a way to break it,” said Lyu.

Another type of watermark is the visible one, which is added to videos, text, or images by companies like Google, Getty, and OpenAI to verify the validity of content.

The third type of watermark is cryptographic metadata. This one shows when content was created and how it has been edited.
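As a rough sketch of how cryptographic metadata can work, the hypothetical example below binds a creation timestamp and an edit history to a hash of the content, so tampering with either becomes detectable. Real provenance systems such as C2PA use public-key signatures and standardized manifests rather than this toy HMAC scheme; the names here (SECRET, sign_metadata, verify) are illustrative.

```python
import hashlib, hmac, json, time

# Illustrative sketch only: real systems (e.g. C2PA) use public-key
# signatures and standardized manifests, not this toy HMAC scheme.

SECRET = b"publisher-signing-key"  # hypothetical key held by the creator

def sign_metadata(content: bytes, edits: list[str]) -> dict:
    """Bind creation time and edit history to a hash of the content."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_at": int(time.time()),
        "edits": edits,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify(content: bytes, record: dict) -> bool:
    """Fail if the content or the metadata has been tampered with."""
    claimed = dict(record)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    )
    return ok_sig and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()

meta = sign_metadata(b"original image bytes", edits=["crop", "resize"])
print(verify(b"original image bytes", meta))  # True
print(verify(b"altered image bytes", meta))   # False: content no longer matches
```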

“Washing out” watermarks

A study led by Soheil Feizi, a computer science professor at the University of Maryland, found that AI watermarks are not foolproof. His team tested all the main types of watermarks and “broke them all.”

“We don’t have any reliable watermarking at this point,” he said.

The professor also explained how easily bad actors can manipulate watermarks through a process he described as “washing out” the watermark.
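The toy LSB watermark sketched earlier makes it easy to see what “washing out” means: tiny, benign-looking changes to an image can scramble the embedded signal without visibly altering the picture. The hypothetical example below demonstrates the principle; attacks on learned watermarks in the research literature are far more sophisticated, for instance noising an image and regenerating it with a diffusion model.

```python
import numpy as np

# Minimal illustration of "washing out" a fragile watermark, reusing
# the toy LSB scheme from the earlier sketch. The principle: small,
# benign-looking transformations erase the embedded signal.

PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
rng = np.random.default_rng(0)

def detect(image: np.ndarray) -> bool:
    """Same toy LSB detector as in the earlier sketch."""
    return bool(np.array_equal(image.reshape(-1)[:8] & 1, PATTERN))

def wash_out(image: np.ndarray) -> np.ndarray:
    """Add imperceptible 0/+1 noise, scrambling roughly half the LSBs."""
    noise = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return (image.astype(np.int16) + noise).clip(0, 255).astype(np.uint8)

# Start from a "watermarked" image whose first LSBs carry PATTERN.
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
flat = img.reshape(-1)
flat[:8] = (flat[:8] & 0xFE) | PATTERN

print(detect(img))            # True before the attack
print(detect(wash_out(img)))  # very likely False afterwards
```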

Another study, co-authored by researchers at the University of California, Santa Barbara, and Carnegie Mellon University, reached the same conclusion.

“All invisible watermarks are not reliable,” reads the paper.

Time-sensitive problem

A US Senate staffer involved in drafting AI-related legislation told FedScoop of the need to “keep pace with the bad actors,” adding that the Senate was currently in an “education and defining the problem” era.

“It’s like Olympic athletes—now that I know you’re looking for this drug, I’ll just take another drug.”

With many countries going to the polls this year, the need for measures to minimize deepfakes and misinformation cannot be overlooked. Last September, Senate Majority Leader Chuck Schumer said tackling the authenticity of content was a time-sensitive issue, with elections just around the corner.

“There’s the issue of actually having deepfakes, where people really believe… that a candidate is saying something when they’re totally a creation of AI,” he said after his first closed-door AI Insight Forum.

Tech firms take on the challenge

In the meantime, the Coalition for Content Provenance and Authenticity (C2PA), the brainchild of a group of media and tech companies, is now in place. Its standard identifies the source of an image, whether it was created by a camera or an AI program.

It also gives details on the time, location, and method of creation of the image, allowing people to verify its provenance.

TikTok is reportedly among the tech firms taking a lead on watermarking.

“TikTok shows you the audio track that was used; it shows you the stitch that was made; it shows you the AI effects used,” said Gregory.

Other big tech companies like Alphabet, Amazon, Meta, and OpenAI pledged last year to develop watermarking technology to tackle misinformation.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
