Brennan Center Urges AI Safeguards to Protect U.S. Democracy Post-2024 Election
Image Credit: Joshua Hoehne | Unsplash
The Brennan Center for Justice, a nonpartisan law and policy institute at New York University School of Law, has released a detailed report titled “An Agenda to Strengthen U.S. Democracy in the Age of AI.” Published on February 13, 2025, the 39-page document, authored by Mekela Panditharatne, Lawrence Norden, Joanna Zdanys, Daniel I. Weiner, and Yasmin Abusaif, outlines strategies to harness artificial intelligence (AI) for democratic benefit while mitigating its risks. Drawing from the 2024 election cycle—labeled the nation’s first “AI election”—the report offers recommendations for federal, state, and local policymakers.
[Read More: Could a Hypothetical Deepfake Scandal on Election Eve Threaten Democracy?]
Insights from 2024: AI’s Role in Elections
The report examines AI’s impact during the 2024 U.S. elections, where generative AI, capable of creating text, images, audio, and video, saw varied use. Foreign entities employed it to amplify interference via fake news sites, campaigns crafted misleading deepfake ads, and partisan actors used it to support voter suppression efforts. Candidates leveraged AI for outreach and content creation, while election officials tested it for voter communication. Though fears of widespread disruption proved overstated—“the worst-case scenarios did not come to pass,” the authors note—the technology’s growing sophistication signals a need for proactive governance as its adoption is projected to peak later this decade.
[Read More: TikTok’s AI Algorithms Under Scrutiny for Election Interference in Romania]
Enhancing Government Capacity
A key proposal is strengthening government capacity to manage AI. The report urges state and local governments to form advisory councils, citing models in Georgia and Utah, to assess risks and benefits. It also calls for funding to recruit AI experts—computer scientists, privacy officers, and more—to compete with private-sector opportunities. Training existing staff on AI use and cyber threats, like phishing, is recommended, alongside exploring efficiency gains, provided civil rights are safeguarded. The Biden administration’s 2023 AI governance executive order, revoked by President Trump in January 2025, leaves states to take the lead, the report argues.
[Read More: AI's New Frontier: Influencing Global Elections]
Transparency and Accountability Measures
The Brennan Center advocates for transparency laws requiring AI developers, social media platforms, and search engines to disclose details about election-related AI content, such as deepfake volumes and training data sources. This aims to counter personalized misinformation. The report cites 2024 Supreme Court rulings suggesting such laws could pass constitutional muster under a relaxed standard for “factual and uncontroversial” disclosures. On accountability, it proposes clarifying that Section 230 liability protections don’t shield generative AI outputs and enacting laws to hold developers liable for foreseeable election harms, alongside robust data privacy rules limiting personal data use.
[Read More: AI's Role in 2024 U.S. Elections: Regulatory Actions and Actual Impact]
Safeguarding Civic Engagement
To protect civic participation, the report suggests updating laws like the Administrative Procedure Act to allow agencies to disregard AI-generated or misattributed regulatory comments, while expanding authentic feedback avenues like town halls. It also recommends disclosure requirements for deepfakes and AI-powered chatbots in political communications, with targeted bans on highly deceptive election content within 60 days of voting, such as synthetic media misrepresenting polling access.
[Read More: Is AI Becoming an Election Weapon?]
Securing Elections and Countering Suppression
For election integrity, the report calls for increased cybersecurity funding, digital authentication for official content, and voter education on AI’s role. It urges federal oversight of election vendors’ security practices and stronger laws against deceptive AI content that misleads voters about election logistics, including deepfakes and robocalls. Specifically, it proposes banning vote-suppressing deepfakes near elections and closing robocall loopholes allowing unsolicited calls to landlines. These measures aim to bolster trust in a system already strained by disinformation.
[Read More: AI Surge Poses New Threats to Australian Democracy, Electoral Commissioner Warns]
Standards for Election Administration
In election administration, the Brennan Center recommends guidelines and audits for AI use in voter roll maintenance and signature verification, ensuring accuracy and fairness with human oversight. It suggests an AI and Emerging Technologies Elections Lab to assist officials and a database for reporting AI-related incidents. These steps balance innovation with accountability, addressing risks like bias in rights-affecting tasks.
[Read More: Paris Peace Forum 2024: Navigating a Divided World Towards a Functional Global Order]
A Call for Broader Reform
The report concludes that AI’s transformative potential—evident but not fully realized in 2024—requires immediate action to strengthen democracy. With federal AI policy uncertain under the Trump administration, states are urged to lead in 2025. Beyond technical fixes, the authors advocate tackling systemic democratic weaknesses, like disinformation and access barriers, to ensure a resilient, representative government in the AI age.
Source: Brennan Center for Justice