As generative AI developers continue to refine their models, it is becoming increasingly difficult to distinguish real images from AI creations, researchers reveal.
A study by researchers at the University of Waterloo found that people struggle to tell a picture of a real person from that of an AI-generated one.
Below expected threshold
To carry out the study, researchers at the University of Waterloo showed 20 unlabeled pictures to 260 participants. Half of the pictures were of real people taken from Google Images, while the other half were AI-generated using tools such as DALL-E and Stable Diffusion.
Participants were then asked to label each image as real or AI-generated and to justify their answers.
Although 61% of participants could tell the difference between real and AI-generated images, this was still far below the 85% threshold the researchers had expected.
While scrutinizing the images, participants paid attention to details such as eyes, fingers, and teeth as indicators of AI-created images, but “their assessments weren’t always correct.”
“People are not as adept at making the distinction as they think they are,” said the study’s lead author Andrea Pocol, a PhD candidate in computer science at the university.
While internet users may only glance at images in passing, the Waterloo researchers noted that they allowed participants to take their time analyzing each image.
“People who are just doomscrolling or don’t have time won’t pick up on these cues,” said Pocol.
Abusing the technology
Their study, titled “Seeing Is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media,” which appears in the journal Advances in Computer Graphics, also highlights the risks associated with the advances in generative AI technology.
The researchers also noted with concern how rapidly generative AI technology is changing, growing more sophisticated by the day, while academic research and legislation fail to keep pace with developments in the industry.
Pocol said AI images are becoming more realistic, making it difficult for some people to distinguish them from real ones. This, she said, is creating fertile ground for bad actors who exploit the technology to spread disinformation or to dupe people.
“Disinformation isn’t new, but the tools of disinformation have been constantly shifting and evolving,” Pocol said.
“It may get to a point where people, no matter how well trained they are, will still struggle to differentiate real images from fakes. That’s why we need to develop tools to identify and counter this. It’s like a new AI arms race.”
The researchers also acknowledged the challenges the technology poses, especially when it comes to the spread of deepfakes.
The technology in the wrong hands
With ordinary people struggling to distinguish real images from fakes, there are growing fears of a surge in deepfakes. Experts have warned that bad actors will exploit generative AI to spread election falsehoods in attempts to mislead voters.
The Center for Countering Digital Hate (CCDH), a nonprofit that monitors online hate speech, has already warned in a report that AI-powered image generators are worsening 2024 election misinformation.
“The potential for such AI-generated images to serve as ‘photo evidence’ could exacerbate the spread of false claims, posing a significant challenge to preserving the integrity of elections,” said CCDH researchers in their report.
This, however, comes amid an announcement that some 20 big tech companies, including OpenAI, Microsoft, and Stability AI, have signed an agreement to work together to prevent deceptive AI content from “interfering with the election taking place globally this year.”