Lawmakers Press Meta and X for Rules on AI-Generated Political Deepfakes

As the 2024 U.S. presidential election approaches, deepfakes, particularly AI-generated political ads, are under growing scrutiny, and tech giants Meta and X face pressure from lawmakers demanding greater transparency.

Google has already committed to labeling deceptive AI-generated political content, and lawmakers now want answers from X and Meta on why they have not followed suit. Given the platforms' enormous reach, much of the responsibility for preserving the integrity of democratic processes rests with them.

Congressional Concerns

Democrats U.S. Sen. Amy Klobuchar of Minnesota and U.S. Rep. Yvette Clarke of New York voiced their concerns in a letter to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino.

Their central concern was the harm that unchecked AI-generated political advertisements could pose to free and fair elections, and they emphasized the need for proactive measures as the 2024 polls draw closer.

Speaking to The Associated Press, Klobuchar underscored the urgency of the matter. "It's clearly technologically possible," she said, questioning the platforms' reluctance to put preventative measures in place: "Can't you do this? Why aren't you engaging in this activity?"

Regulatory Measures in the Pipeline

The lawmakers are seeking more than voluntary compliance; legislative measures are in the works. Clarke's House bill would require disclaimers on AI-altered election ads, and Klobuchar's Senate version would reinforce that standard.

Google's decisive stance has set a benchmark. Beginning in mid-November, it has pledged to clearly label AI-altered election ads across its platforms, including YouTube. The requirement will apply in the U.S. and in other countries where Google oversees election ads.

Meta, by contrast, currently has no policy specific to AI-generated political content, though it does prohibit the use of manipulated audio or visuals to spread misinformation.

A bipartisan Senate bill co-sponsored by Klobuchar would go further, seeking to outlaw "materially deceptive" AI-generated content about federal candidates, with exceptions for parody and satire.

The Reality of AI in Politics

The potential repercussions of unregulated AI deepfakes are alarming. A Republican National Committee ad used AI-generated imagery to depict a dystopian America, with scenes of chaos and panic. Under the proposed regulations, such misleading portrayals would be banned, and a fabricated image of Donald Trump with Dr. Anthony Fauci and a fictitious video of Sen. Elizabeth Warren would likewise be covered.

“How are you ever going to know the difference?” questioned Klobuchar, emphasizing the gravity of the situation.

Despite the growing concerns, some believe the deepfakes that have emerged so far have yet to sway voter sentiment in any profound way. Ari Cohn, an attorney at TechFreedom, argued that judgments about truth in politics should rest with the voters.

With public comments on the petition to regulate AI-generated deepfakes due by Oct. 16, the issue is clearly drawing the attention it deserves. The collective hope is a future in which technology fortifies, rather than undermines, democratic principles.
