Artists “Poison” Generative AI to Protect their Work

Artists have devised ways of leveraging tools such as Nightshade to “poison” generative AI models, countering the threat the technology poses to their work.

This comes as generative AI has surged in popularity, letting people with little artistic or musical training produce art and music with minimal effort, to the detriment of the industry. Image generators such as Midjourney and Stable Diffusion can produce images that look almost real.

However, the new technology allows artists to tweak a few pixels in their images “to look fine to humans, ‘but poison the well’ for AI tools.”

Nightshade to the rescue

Some AI image models, like Adobe’s Firefly and those from Google or Meta, use licensed material when generating content. Adobe, for instance, struck deals with Shutterstock and Getty Images after building its own AI generator trained on library images.

Many other AI image-generating tools, however, are trained by scraping the open web for source material.

This practice has divided opinion within the art industry and caused disgruntlement among creatives who make artwork and share it online.

Technologists, on the other hand, have likened AI to a human art student who simply takes inspiration from existing artwork.

Although there have been efforts to halt the scraping through no-follow directives on webpages, these have not yielded much fruit, as such rules are not always obeyed. However, a new tool called Nightshade allows artists to turn the way AI is trained to their advantage.

According to Tom’s Guide, this poisoning leads to “unintended consequences when generating images.”
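Why do no-follow directives fail? Compliance is voluntary: a rule in a site’s robots.txt only restrains crawlers that choose to check it. As a minimal sketch, here is what that check looks like using Python’s standard urllib.robotparser; the crawler name and URL below are illustrative, not an authoritative list of AI scrapers.

```python
from urllib.robotparser import RobotFileParser

# An illustrative robots.txt asking one AI crawler to stay away.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler runs this check before fetching anything.
print(parser.can_fetch("GPTBot", "https://example.com/gallery/art.png"))      # False
print(parser.can_fetch("SomeBrowser", "https://example.com/gallery/art.png")) # True
```

A scraper that never runs this check is simply unaffected, which is why artists have looked for defenses that live inside the images themselves.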

Using Nightshade

Nightshade alters the pixels of an image in ways that are subtle to the eye but damaging to AI algorithms and computer vision systems, leaving the picture looking unchanged to a human viewer.
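Nightshade’s actual method is more sophisticated than simple noise: it optimizes the pixel changes so that the image’s learned features drift toward an unrelated concept. Still, a rough Python sketch conveys why such small edits go unnoticed by people; the random ±3-per-channel perturbation below is a hypothetical stand-in, not the real algorithm.

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)

# Stand-in for an artist's image; in practice: np.asarray(Image.open("art.png")).
img = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

# A +/-3-per-channel tweak: far below what the eye notices on a natural
# photo, yet it changes the exact values a model's feature extractor sees.
# (Nightshade optimizes this perturbation; here it is random for brevity.)
delta = rng.integers(-3, 4, size=img.shape)
poisoned = np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)

print("max per-pixel change:", np.abs(poisoned.astype(np.int16) - img).max())
Image.fromarray(poisoned).save("poisoned.png")  # looks identical to the original
```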

For example, a user who prompts a poisoned AI image generator for a red balloon set against a blue sky may get an image of a watermelon or an egg instead.

According to Tom’s Guide, an output like this indicates the image generator has had its training data poisoned, as explained by TJ Thomson from RMIT University.

“The higher the number of ‘poisoned’ images in the training data, the greater the disruption,” explained Thomson, writing for The Conversation.

“Because of how generative AI models work, the damage from ‘poisoned’ images also affects related prompt keywords.”
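Nightshade’s image perturbations work very differently from the toy below, but a simple label-flipping experiment illustrates the dose-response effect Thomson describes: the larger the poisoned share of the training data, the worse the trained model behaves. The synthetic data and 1-nearest-neighbor classifier here are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n_per_class):
    """Two 2-D Gaussian clusters, labeled 0 and 1."""
    x0 = rng.normal(loc=(-2.0, 0.0), scale=1.0, size=(n_per_class, 2))
    x1 = rng.normal(loc=(+2.0, 0.0), scale=1.0, size=(n_per_class, 2))
    return np.vstack([x0, x1]), np.array([0] * n_per_class + [1] * n_per_class)

def nn_predict(X_train, y_train, X_test):
    """1-nearest-neighbor: copy the label of the closest training point."""
    dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[dists.argmin(axis=1)]

X_train, y_train = make_data(500)
X_test, y_test = make_data(500)

for poison_frac in (0.0, 0.1, 0.25, 0.5):
    y_poisoned = y_train.copy()
    n_flip = int(poison_frac * len(y_train))
    flipped = rng.choice(len(y_train), size=n_flip, replace=False)
    y_poisoned[flipped] ^= 1  # flipped labels play the role of "poison"

    acc = (nn_predict(X_train, y_poisoned, X_test) == y_test).mean()
    print(f"poison fraction {poison_frac:.0%}: test accuracy {acc:.2f}")
```

Run it and the printed accuracy falls steadily as the poison fraction grows, mirroring the quote above.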

What does it mean?

While several text-to-image generators are trained on licensed data, those trained on publicly accessible content are facing lawsuits and criticism from artists.

“A moderate number of Nightshade attacks can destabilize general features in a text-to-image generative model, effectively disabling its ability to generate meaningful images,” said the University of Chicago team behind Nightshade in a paper posted on the arXiv preprint server.

The researchers, according to the paper, see the tool as a “last line of defense” that content creators can use if companies fail to respect no-scrape rules or continue to use copyrighted material in generating images.

While this sets the stage for a cat-and-mouse battle, the makers of image generators are likely to tweak their algorithms to counter the changes Nightshade introduces.

According to Tom’s Guide, companies are also fine-tuning their models by having humans compare images. Ahead of the launch of Midjourney version 6, for instance, developers asked users to “rank the most beautiful” of two images.
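Midjourney has not said exactly how those votes feed back into the model, but a standard way to turn pairwise “which is better?” judgments into per-image scores is a Bradley-Terry model. The sketch below uses made-up vote counts to show the idea.

```python
import numpy as np

# wins[i, j] = number of times raters preferred image i over image j
# (hypothetical counts for three candidate images).
wins = np.array([
    [0, 3, 5],
    [1, 0, 4],
    [2, 1, 0],
], dtype=float)

n_items = wins.shape[0]
games = wins + wins.T       # total comparisons between each pair
scores = np.ones(n_items)   # initial strength for every image

# Minorization-maximization updates for the Bradley-Terry model
# (Hunter, 2004), iterated to convergence.
for _ in range(200):
    total_wins = wins.sum(axis=1)
    denom = np.zeros(n_items)
    for i in range(n_items):
        for j in range(n_items):
            if i != j and games[i, j] > 0:
                denom[i] += games[i, j] / (scores[i] + scores[j])
    new_scores = total_wins / denom
    new_scores /= new_scores.sum()  # fix the arbitrary overall scale
    if np.allclose(new_scores, scores, atol=1e-12):
        break
    scores = new_scores

print("relative image scores:", scores.round(3))  # higher = preferred more
```

Images that keep winning comparisons end up with higher scores, giving developers a ranked preference signal to fine-tune against.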

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
