AI's Role in 2024 U.S. Elections: Regulatory Actions and Actual Impact

AI-generated Image. Credit: Jacky Lee

The 2024 United States election marked a significant moment in the intersection of technology and politics, unfolding against a backdrop of widespread access to artificial intelligence tools capable of generating images, audio, and video. Concerns about AI-driven misinformation spurred rapid legislative and regulatory responses. Yet as Election Day approached and passed, the feared surge of AI-generated misinformation largely failed to materialize; deceptive campaigns relied instead on traditional tactics.

[Read More: AI Takes Center Stage in U.S. Presidential Race: Harris and Trump Shape the Future of Technology]

Regulatory Crackdown on AI-Generated Political Content

In the weeks following a deceptive robocall in New Hampshire that used an AI-generated voice mimicking President Joe Biden, the Federal Communications Commission (FCC) moved swiftly to declare AI-generated voices in robocalls illegal. The incident highlighted the potential for AI misuse in electoral processes and acted as a catalyst for broader regulatory action.

State-Level Legislation: Sixteen states enacted laws specifically targeting the use of AI in elections and campaigns. These laws typically mandated disclaimers on synthetic media disseminated close to an election, so that voters would know the nature of the content they encountered.

Federal Initiatives: The Election Assistance Commission introduced an “AI toolkit” designed to help election officials communicate effectively in an age of fabricated information. Additionally, various states launched resources to assist voters in identifying AI-generated content, enhancing public awareness and resilience against misinformation.

[Read More: Could a Hypothetical Deepfake Scandal on Election Eve Threaten Democracy?]

Experts Predict Minimal AI-Driven Misinformation Impact

Despite early warnings, the anticipated flood of AI-generated misinformation did not significantly influence the 2024 elections. Experts from academia and policy institutes noted that traditional misinformation methods remained dominant.

Paul Barrett’s Insights: Paul Barrett, deputy director of NYU Stern's Center for Business and Human Rights, remarked, “The use of generative AI turned out not to be necessary to mislead voters. This was not ‘the AI election.’”

Daniel Schiff’s Observations: Daniel Schiff, assistant professor of technology policy at Purdue University, pointed out the absence of large-scale AI-driven campaigns aimed at misleading voters about polling places or suppressing turnout. He emphasized that misinformation during the election was smaller in scope than many had feared and was unlikely to have been a determinative factor in the presidential race.

[Read More: TikTok’s AI Algorithms Under Scrutiny for Election Interference in Romania]

Traditional Misinformation Techniques Remain Prevalent

As Election Day approached, misinformation continued to spread primarily through established channels rather than AI-generated content.

Viral Misinformation: Claims about vote-counting irregularities, mail-in ballots, and voting machines dominated the misinformation landscape. These narratives spread through text-based social media posts, manipulated videos, and out-of-context images rather than sophisticated AI-generated deepfakes.

Reinforcement of Existing Narratives: AI-generated content that did gain traction tended to support pre-existing political narratives rather than creating new deceptive claims. For instance, after false statements by Donald Trump and JD Vance about Haitians eating pets in Springfield, Ohio, AI-generated images and memes depicting animal abuse proliferated online, reinforcing the false narrative rather than introducing new misinformation.

[Read More: AI's New Frontier: Influencing Global Elections]

Tech Companies Implement Robust Safeguards

Major technology platforms took proactive measures to limit the spread of AI-driven misinformation during the election cycle.

Meta’s Policies: Meta, the parent company of Facebook, Instagram, and Threads, required advertisers to disclose the use of AI in any political or social issue advertisements. This transparency requirement aimed to help voters identify the origins of the content they encountered.

TikTok’s Labeling Mechanism: TikTok introduced automatic labeling for certain AI-generated content, giving users information about the authenticity of the media they view (a simplified sketch of how such detection can work appears after this list).

OpenAI’s Restrictions: OpenAI, the developer behind ChatGPT and DALL-E, prohibited the use of its AI services for political campaigns and prevented users from generating images of real individuals, further curbing potential misuse.
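
TikTok has said its automatic labels key off Content Credentials, the C2PA provenance metadata that many AI generation tools now embed in their output. As an illustration only, and not TikTok's actual pipeline, the sketch below shows how a platform might flag an uploaded JPEG that carries a C2PA manifest; the byte-scanning heuristic and the file name are assumptions made for this example.

```python
# Minimal sketch: flag a JPEG that carries C2PA "Content Credentials"
# metadata. Illustrative heuristic only; a real platform would use a
# full C2PA parser and signature validation, not a byte scan.
import struct

def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Scan JPEG APP11 (0xFFEB) segments, where C2PA JUMBF boxes are
    embedded, for the 'c2pa' manifest-store label."""
    i = 2  # skip the SOI marker (0xFF 0xD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # lost segment sync; give up on the heuristic
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # Start of Scan: compressed image data follows
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 segment
            return True
        i += 2 + length  # advance past marker plus length-inclusive segment
    return False

if __name__ == "__main__":
    with open("upload.jpg", "rb") as f:  # hypothetical uploaded file
        print("Auto-label as AI-generated:", has_c2pa_manifest(f.read()))
```

Detecting the manifest is only half the job: because provenance metadata can be stripped or forged, a production system would also verify the manifest's cryptographic signatures before trusting it.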

[Read More: Meta's Bold Move: Facebook and Instagram to Label AI-Generated Content]

Limited Influence of Deepfakes and Partisan AI Use

While AI-generated deepfakes were present, their impact was limited and often intertwined with existing political biases.

Satirical and Harmful Deepfakes: Most deepfakes identified during the election were created for satire or entertainment purposes. A smaller subset aimed to damage reputations by portraying politicians in misleading or false contexts, but these were extensions of traditional political narratives rather than entirely new fabrications.

Political Exploitation of AI Concerns: Some politicians, including Donald Trump, invoked AI as a rhetorical tool to discredit unflattering material. Trump, for example, falsely claimed that a montage of his gaffes was AI-generated and that a photo of a large crowd of Harris supporters had been artificially created. These claims were baseless but served to divert attention from the real sources of misinformation.

[Read More: Trump Claims Kamala Harris is 'AI-ing' Crowds: A New Conspiracy or Political Desperation?]

Foreign Influence Operations Remain Human-Driven

Concerns about foreign adversaries leveraging AI for influence operations persisted, but AI did not become a central tool in these efforts.

Traditional Tactics Prevail: The Foreign Malign Influence Center reported that foreign actors continued to rely on human-driven tactics, such as staged videos and deceptive narratives, rather than AI-generated content. For instance, a fabricated video alleging that Vice President Harris caused a car crash was traced back to a Russian network known as Storm-1516, which employed similar human-driven misinformation strategies in previous elections.

Intelligence Agency Interventions: U.S. intelligence agencies, including the FBI and the Cybersecurity and Infrastructure Security Agency, successfully flagged and countered these traditional influence operations, reinforcing that AI had not revolutionized foreign efforts to undermine election integrity.

[Read More: Paris Peace Forum 2024: Navigating a Divided World Towards a Functional Global Order]

Effectiveness of Platform and Legislative Safeguards

The combined efforts of state legislation and platform-level safeguards played a crucial role in mitigating the potential misuse of AI in the 2024 elections.

Meta and OpenAI’s Impact: According to Nick Clegg, Meta’s president for global affairs, AI-generated content accounted for less than 1 percent of all election- and politics-related misinformation fact-checked on Meta’s platforms. OpenAI’s restrictions further limited the ability of malicious actors to generate harmful political content.

Ongoing Challenges: Despite these successes, challenges remained. The Washington Post reported that ChatGPT could still generate targeted campaign messages when prompted, and PolitiFact found that Meta’s AI tools could produce images supporting false narratives, such as the claim that Haitians were eating pets.

[Read More: Is AI Becoming an Election Weapon?]

Looking Ahead: Continued Vigilance Necessary

Experts agree that while the 2024 elections demonstrated the effectiveness of current safeguards against AI-driven misinformation, ongoing vigilance is essential as AI technologies continue to evolve.

Future Precautions: Daniel Schiff emphasized the importance of maintaining and enhancing strategies like deepfake detection, public awareness campaigns, and legislative measures to stay ahead of potential AI misuse.

Evolving AI Capabilities: As AI technology advances, the potential for more sophisticated and harder-to-detect misinformation increases. Continuous adaptation and innovation in regulatory and technological defenses will be crucial to protect the integrity of future elections.

[Read More: AI Deepfakes at the Met Gala: The Fine Line Between Fun and Fraud]


Source: Al Jazeera
