Is AI Becoming an Election Weapon?
As the world witnessed the 2024 election year unfold, concerns mounted that deepfakes and AI-generated content could be weaponized to manipulate election results and undermine democratic trust. With over 4 billion voters across 60 countries heading to the polls, these fears were more relevant than ever. The widespread accessibility of multimodal generative AI, which allows anyone to create convincing fake images, video, and audio, added to the anxiety surrounding the potential misuse of AI in elections.
[See our previous report: AI's New Frontier: Influencing Global Elections]
AI-Generated Disinformation: Insights from the UK Election
A thorough analysis of the UK general election, held on July 4, was conducted to assess the impact of AI-generated disinformation. Surprisingly, the UK saw only a limited number of AI-driven fakes during the campaign. While these examples did not appear to sway significant numbers of voters, they did contribute to spikes in online harassment against those targeted by the fakes and caused confusion among the electorate about the authenticity of the content they encountered.
[See our previous report: Elon Musk's Shared Video Sparks Controversy and Concern]
The Chilling Effect: Long-Term Risks to Democracy
The findings from the UK election point to concerning trends that could have long-term implications for democracy. The online harassment fueled by AI-generated disinformation may create a ‘chilling’ effect, discouraging political candidates from participating in future elections. Additionally, as voters become increasingly unsure of what content is real, trust in the online information landscape may erode, threatening the foundations of democratic processes.
[See our previous report: Elon Musk’s Grok-2 Unrestricted Political Imagery - A Double-Edged Sword?]
A Global Issue: AI Misuse in Elections Worldwide
The UK election is not an isolated case. Similar concerns have been reported in 18 other elections since January 2023, as detailed in a recent CETaS briefing paper. From India to the US, the potential for AI-generated content to disrupt elections is a global issue. With many of 2024’s elections carrying significant geopolitical stakes, the risk of AI being used as a tool for disinformation has never been higher.
The Dual Threat: State Actors and Public Misuse
Generative AI’s ability to produce highly realistic content at scale has empowered both sophisticated state actors and the general public to engage in disinformation. Australia’s Director-General of Security recently highlighted this dual threat, noting that individuals with violent intent are using AI in ways not seen before. This growing accessibility of AI tools has expanded the threat landscape, making it easier for even non-experts to spread misleading content.
[See our previous report: AI in the Wrong Hands: Landmark Case Exposes Dark Side of Technology]
Traditional Threats Persist: The Role of Bots and Fake News
Despite the new challenges posed by AI, traditional threats remain significant. During the UK election, much of the disinformation stemmed from bot accounts, some with possible links to Russia, which sought to inflame divisions over issues like immigration. These bots employed established tactics such as astroturfing, where numerous fake comments are used to create the illusion of widespread support for specific political positions.
[See our previous report: Deepfake Dilemma: How AI-Generated Abuse Is Challenging Society's Norms]
The Doppelganger Network: A Case of Information Laundering
A notable disinformation campaign during the UK election was linked to the Russian-affiliated Doppelganger network. Dubbed ‘CopyCop’, this operation spread fictitious articles about the war in Ukraine, aiming to confuse the public and diminish support for military aid. Although many of these articles were poorly executed, with clear signs of AI editing, some managed to reach wider audiences through Russian media influencers, showcasing the ongoing threat of information laundering.
[See our previous report: AI Warfare: Autonomous Weapons Changing the Battlefield]
The Public’s Role in AI-Generated Disinformation
Interestingly, much of the viral AI-generated content during the UK election originated from the public. Deepfakes that falsely implicated political candidates in controversial statements were widely shared, often by individuals who claimed their intent was satirical or for ‘trolling’. This reflects a shift in the disinformation landscape, where individuals with access to generative AI systems can now play a significant role in spreading misleading content.
[See our previous report: The Deepfake Dilemma: Navigating the AI Apocalypse]
Preparing for Future Elections: Mitigating AI Risks
As we look ahead to future elections, such as the upcoming US election in November and Australia’s federal election in the next nine months, it is crucial for platforms and governments to take proactive steps to mitigate the risks posed by AI. Measures such as clear labeling of AI-generated political adverts, collaboration with fact-checking organizations, and conducting red-teaming exercises to anticipate malicious uses of AI are vital to safeguarding democratic processes.
[See our previous report: Redefining AI Art: Meta's New 'AI Info' Labels and the Ongoing Debate in Digital Creativity]
Source: https://www.aspistrategist.org.au/ai-disinformation-lessons-from-the-uks-election/