AI Manipulates Trusted News Figures Into Delivering Fabricated Stories: Forbes

In an era where the line between reality and fiction is increasingly blurred, deepfake technology is thrusting trusted news figures into an unwanted spotlight.

According to a recent Forbes report, renowned journalists and prominent TV networks now unwittingly find themselves at the center of fabricated news stories, intensifying worries about the spread of misinformation, particularly as the U.S. approaches its 2024 elections.

Mimicking the familiar to misinform

The alarming capability of deepfake technology lies in its ability to seamlessly manipulate the likenesses and voices of known public figures, creating content that appears authentic.

To the unsuspecting viewer, these videos can be indistinguishable from legitimate news broadcasts, because the figure presenting the news is, more often than not, someone the public trusts or recognizes.

Deepfake news segments, including those featuring TikTok and YouTube star Krishna Sahay, have exploited this trust to create misleading content. Sahay’s videos, which reportedly gained traction for mocking grave events such as school shootings and the Sept. 11 attacks, featured the likenesses of various renowned news anchors.

Despite platforms like TikTok taking action against such misleading content, these videos often find their way back online, reposted by different users.

Digital duping’s real-world impact

Such manipulated media is not confined to social media stars. Recently, a genuine video of CNN’s Clarissa Ward was tampered with, warping the audio to misrepresent the situation she reported live near the Israel-Gaza border. The result was a distorted representation of reality that posed serious risks.

The reach of these deepfake videos is also alarming. For instance, one of Sahay’s fabricated segments featuring Face The Nation’s Margaret Brennan gathered over 300,000 likes on TikTok, dwarfing the most popular genuine video on Face The Nation’s official account, which accumulated just 7,000 likes.

A glimpse into the future of elections?

With the potential for these deepfakes to drastically impact public opinion, there are growing concerns about their implications for future elections. Kevin Goldberg, a First Amendment specialist, voiced apprehensions over using such videos as a potent misinformation tool, particularly as the U.S. gears up for the 2024 elections. However, he also emphasizes the need for a balanced approach, warning against overreaction.

As deepfakes become more sophisticated, ensuring that the public remains informed and critical of the content they consume becomes paramount. As Hany Farid, a UC Berkeley professor specializing in deepfakes, pointed out:

“We are going to have to get more serious about protecting the rights of the people whose likeness and voice are being co-opted.”

Legal efforts to tackle deepfakes

Responding to mounting concerns over unauthorized AI replications, a bipartisan group of U.S. senators introduced a draft bill, the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. The proposed legislation aims to impose penalties on entities creating or hosting unauthorized AI replications, with fines starting at $5,000 per violation. Nevertheless, the bill makes exceptions for certain First Amendment-protected uses, such as news reporting or satire.

Senator Chris Coons encapsulated the sentiment, emphasizing the need for Congress to strike a balance. He aptly stated that it’s crucial to “defend individual rights, abide by the First Amendment, and foster AI innovation and creativity.”

Deepfakes present a double-edged sword. While they showcase the advancements in AI and technology, they also pose a significant threat to the integrity of news and public figures.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
