Meta, the parent company of Facebook and Instagram, has announced that political advertisers will be barred from using its generative artificial intelligence (AI) tools to create ad campaigns.
The move, which comes amid broader industry and regulatory scrutiny of the ethical use of AI, underscores Meta’s cautious stance on AI’s intersection with sensitive societal issues.
A Proactive Step Towards Ethical Advertising
Meta’s revised policy, published in its help center on November 6, explicitly bars the use of generative AI features in campaigns that deal with housing, employment, credit, social issues, elections, politics, health, pharmaceuticals, or financial services.
“By taking this approach, we aim to thoroughly understand potential risks and implement the necessary safeguards for the use of Generative AI in ads that touch on potentially sensitive topics in regulated industries,” a company spokesperson explained.
Moreover, while Meta’s broader advertising policies don’t address AI specifically, they firmly prohibit ads that its network of fact-checkers has debunked. This measure is a proactive attempt to keep content on its platforms trustworthy and verifiable, particularly in politically charged discourse.
This is a step in the right direction, but we can’t just rely on voluntary commitments. I’m working to implement guardrails so AI-manipulated ads don’t upend our elections.
Reuters: Meta bars political advertisers from using generative AI ads tools https://t.co/50WmolzsrJ
— Amy Klobuchar (@amyklobuchar) November 8, 2023
Ensuring Transparency and Trust in Digital Campaigning
Simultaneously, Google has updated its policies, requiring all verified election advertisers to disclose any AI-generated content in their campaign material starting in November 2023. Google’s policy specifically targets “synthetic content that inauthentically depicts real or realistic-looking people or events” and requires these disclosures to be “clear and conspicuous.”
Meanwhile, the U.S. Federal Election Commission (FEC) is advancing a petition to regulate AI-generated deepfakes in political advertising. Deepfakes, hyper-realistic video or audio fabricated or manipulated with AI, could pose a “significant threat to democracy,” as Robert Weissman, president of the advocacy organization Public Citizen, put it. The FEC has opened a 60-day public comment period, signaling a move toward potentially stringent oversight.
US regulators consider regulating political deep fakes ahead of 2024 election
The U.S. Federal Election Commission unanimously voted to advance a petition to regulate political ads that use artificial intelligence. pic.twitter.com/tBr5OkSSNi
— Trading Expert 🇹🇷 (@TradingExp_) August 13, 2023
This collective caution from Meta, Google, and U.S. regulators comes amid growing concerns about AI’s ability to sway voter sentiment and accelerate the production of fake news, deepfakes, and misinformation. The democratic process appears particularly vulnerable to these sophisticated technological interventions.
Combatting Bias and Upholding Objective Discourse
Additionally, discussions around AI’s role in public discourse were stirred by claims that the popular AI chatbot ChatGPT exhibits a left-leaning bias in its responses. These claims, while contested, have sparked debates about objectivity, the biases inherent in AI algorithms and their training data, and the need for transparency in how such systems are developed and deployed.
As AI permeates various sectors, tech behemoths like Google and Meta are tightening their grip on AI tools and services. Google’s CEO, Sundar Pichai, has steered the company toward an “AI-first” strategy since taking the helm in 2015, reflecting the strategic importance AI holds for the future of technology firms.
ChatGPT 'Politically Biased Toward Left' In The US & Beyond: New Research Finds https://t.co/ghic7ZEZw1
— zerohedge (@zerohedge) August 18, 2023
The emergence of AI tools like OpenAI’s ChatGPT has made it far easier to create and circulate synthetic content, prompting tech companies and regulators alike to carefully evaluate and shape the guidelines that govern its use. The landscape of political advertising is transforming as these AI-content policies take effect, marking a pivotal moment in the interplay between technology, politics, and the integrity of democratic systems.
As tech firms set the tone for responsible AI use, attention now turns to how political campaigners will adapt to these new rules and to the broader implications for the authenticity and transparency of future elections. The ongoing discussions and regulations will likely play a critical role in safeguarding democratic processes from the perils of unchecked AI in political discourse.