A video posted on Chinese social media platform Weibo last week shows American pop star Taylor Swift speaking fluent Mandarin. Only it wasn’t the 33-year-old singer in the clip. It was a deepfake video generated by an AI tool from Chinese start-up HeyGen.
Since Oct. 21, when Swift’s deepfake was first shared, the video has garnered more than six million views. But it has also triggered debate over the potential pitfalls that come as AI gets more advanced, making it difficult to distinguish between real and fake identities or content.
Deepfakes—realistic but fabricated images or videos used to impersonate someone else, including their voice—can be deployed to create false digital identities, something cybercriminals can leverage to commit fraud. The resulting images and videos look and sound exactly like the person targeted.
In May, for example, a Chinese businessman lost 4.3 million yuan (about $612,000) after a fraudster used face-swapping AI to impersonate his friend. No money was lost in the Taylor Swift case, but here’s what to look out for to avoid getting swindled on social media.
Verify AI celebrity endorsements
Scammers typically use AI deepfakes of trusted individuals to lure their victims. In recent months, countless fake AI celebrities have emerged to trick people with false endorsements. Fake versions of icons like Elon Musk and Beyoncé have been used to promote bogus brands.
Many of these false ads reportedly rank high in Google search results, likely because the company doesn’t do enough to filter out fraudulent content.
In the age of AI-generated fake content, it’s crucial to approach too-good-to-be-true videos with a critical eye. Extraordinary claims require extraordinary evidence. If you come across viral videos that seem sensational, take time to verify their authenticity and sources.
“If you want advice about a product or service, look at reviews or find an expert who’s well-versed and can prove it,” says consumer tech expert and radio host Kim Komando.
“Another smart step: Google the product and actor in the ad, along with the word ‘review.’ If someone’s getting paid to endorse a product, there’s not just one random ad floating around social media,” she added in an article published by the New York Post.
Pay attention to detail
Deepfake technology is particularly scary because it is so realistic. AI image generators such as Stable Diffusion can produce convincing faces, while other tools clone voices and manipulate mouth movements, making it easy for people to believe that a video or audio recording is authentic.
Remember Ghostwriter’s AI-created Drake and The Weeknd song, which fooled millions of people, including music streaming services, into thinking it was a new release? To spot a deepfake, Alex Kim, co-founder and CEO of AI-driven e-commerce accelerator Rice, offers this advice:
“Pay attention to any unusual inconsistencies in the video you’re watching. Content creators who use deepfakes are usually trying to save time, so fine-tuning details is not something they will invest time in,” Kim told MetaNews.
“That means that any mismatched facial expressions, unnatural movements, strange artefacts, voice mismatch, or lip-syncing will likely be present on a deepfaked video,” Kim said, adding:
“Pay specific attention to the eyes, as deepfakes struggle with these the most.”
The case of the Chinese businessman is a reminder that deepfake technology is a powerful tool that can be used for good or ill. Regulators have started paying closer attention to the dangers of AI since OpenAI launched its viral chatbot ChatGPT in November 2022, sparking a global AI race.
As MetaNews previously reported, experts suggest developing new technologies that can detect and prevent the use of fake identities. This could include the use of biometric data, such as facial recognition or fingerprint scanning, to verify the identity of users online.
Check background for clues
While some apps use watermarks to identify AI-generated content, others offer no such markers. In those cases, users will have to scan the background of images or videos for clues of AI deepfake material, according to Alex Kim, the Rice.win CEO.
“If the background is moving unnaturally or the lighting isn’t matching up with the shadows on the foreground, it’s probably a deepfake,” Kim said. “Details like texture on materials, or lack thereof, are other signs.”
“Look for pixelation or blurriness where there should be none, especially if it happens on a human subject. Natural details like hairlines, ears, noses, or facial features are difficult to pull off in a deepfake, at least convincingly.”
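Kim’s blur-and-pixelation tip can even be roughly automated. As an illustrative sketch (not a tool mentioned in the article), the variance of a simple Laplacian filter response is a common sharpness measure: a face region that scores far lower than the rest of the frame may have been pasted in and smoothed over. The function below works on a plain 2D list of grayscale values for simplicity.

```python
def laplacian_variance(img):
    """img: 2D list of grayscale pixel values.
    Returns the variance of the 4-neighbour Laplacian response;
    higher values indicate a sharper region, values near zero a
    flat or heavily smoothed one."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian: sum of neighbours minus 4x centre
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A sharp checkerboard patch vs. a flat, blurred-looking patch
sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
flat = [[128 for _ in range(8)] for _ in range(8)]
print(laplacian_variance(sharp) > laplacian_variance(flat))  # prints: True
```

In practice, forensic tools compare this kind of score between the detected face region and the surrounding frame rather than judging a single number in isolation.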
AI technology, which for the most part remains free and easily accessible, is being used by bad actors in many ways. Images of high-profile female celebrities, including Taylor Swift and Emma Watson, have been manipulated using AI to create deepfake pornographic content.
A new AI tool from the Massachusetts Institute of Technology (MIT) in the U.S. promises to curb deepfakes. People can use PhotoGuard to make “small, invisible tweaks to keep [their] pictures safe from deepfakes and nasty edits—a promising gold standard in online safety.”
Alex Kim told MetaNews that the most obvious and common way to identify fake AI videos on social media is to consider the channels on which they are hosted.
“You’ll want to look at how much the channel has been posting lately and if there were any big spikes or surges in content creation,” Kim said. “If there is a big spike in posted videos that seem off, low-quality, or strange, it’s a pretty sure sign the creator is using deepfakes.”