In the ever-evolving landscape of global democratic elections, a new and formidable adversary has emerged: artificial intelligence. As nations gear up for pivotal elections in 2023 and 2024, the threat posed by AI-driven propaganda campaigns becomes increasingly evident.
From Russian interference in the 2016 U.S. presidential election to subsequent efforts by China and Iran, foreign actors have harnessed the power of social media to sway elections worldwide.
What makes this new era particularly daunting is the deployment of generative AI and large language models, such as ChatGPT and GPT-4, which can effortlessly churn out vast quantities of persuasive content in any tone, style, or perspective.
A conjunction of elections worldwide
The democratic world is bracing for a surge of elections in the coming months and years. 71% of people living in democracies will vote in national elections between now and the end of 2024.
These elections span the globe, from Argentina and Poland to Taiwan, Indonesia, India, the European Union, Mexico, the United States, and numerous African nations. The outcomes of these elections hold great significance for both the countries and foreign powers seeking to exert influence.
China, for instance, keenly observes elections in Taiwan, Indonesia, India, and various African nations, as these outcomes can directly impact its strategic interests.
Meanwhile, Russia keeps a watchful eye on elections in the U.K., Poland, Germany, the European Union, and the United States, which remains a perennial focus for many nations.
The democratization of disinformation tools, driven by AI like ChatGPT, has made election interference more affordable and accessible to a broader range of countries. This shift is critical to understanding the changing landscape of global elections.
The dawn of AI-enhanced disinformation
The advent of generative AI introduces a new dimension of complexity to election interference. Military strategists in countries like Russia and China are undoubtedly exploring advanced tactics that leverage AI capabilities. Historically, these nations have tested cyberattacks and information operations on smaller countries before scaling them up. Fingerprinting and identifying AI-produced propaganda is therefore imperative.
In computer security, knowledge-sharing among researchers is vital for building robust defensive systems. A similar approach is required to combat information campaigns. Extensive research into the techniques employed in foreign disinformation campaigns is essential for early detection and mitigation.
Disinformation campaigns in the generative AI era will likely surpass the sophistication of those witnessed in 2016. To counter these evolving threats, the United States and other nations must proactively identify AI-generated propaganda in countries like Taiwan, where deepfake audio recordings have already had significant repercussions. Unfortunately, researchers attempting to shed light on these threats often find themselves targeted and harassed.
The growing threat of AI-generated disinformation
As the 2024 presidential election approaches, experts warn that AI advancements could breathe new life into traditional disinformation tactics. AI-generated disinformation not only deceives audiences but also undermines the integrity of the information ecosystem. Trust in media sources erodes, making it increasingly difficult for journalists and credible sources to disseminate accurate information.
The rise of AI tools capable of creating photorealistic images, mimicking voice audio, and producing human-like text has opened new avenues for political content creation. Recent incidents, such as an AI-generated image of an explosion at the Pentagon causing stock market fluctuations and AI audio parodies of U.S. presidents going viral, underscore the impact of these technologies. The proliferation of manipulated images and videos, combined with the ease of generating personalized content for social media, poses a grave challenge to the information landscape.