Trump vs. Biden: The Battle Over U.S. AI Governance

Image Credit: Natilyn Hicks | Unsplash

On October 30, 2023, President Joe Biden signed Executive Order 14110, titled Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Published in the Federal Register on November 1, 2023 (88 FR 75191), this directive—Biden’s 126th executive order—aimed to address the rapid evolution of AI, particularly generative models like ChatGPT. It established three primary goals: fostering competition and innovation in the AI sector, protecting civil rights and national security from AI-related risks, and ensuring U.S. global leadership in AI.

The order emerged amid growing alarm over AI’s unregulated advancement. Experts had flagged concerns ranging from misinformation and bias to potential long-term threats, prompting action. In August 2023, Arati Prabhakar, director of the Office of Science and Technology Policy, confirmed the White House’s expedited efforts on AI policy, a process Biden underscored at signing, noting AI’s “warp speed” development necessitated governance to balance its promise and peril.

[Read More: Nvidia CEO Introduces "Hyper Moore’s Law" to Accelerate AI Computing]

Federal Agencies Gear Up

Executive Order 14110 imposed specific mandates on federal agencies to advance safe and trustworthy AI use. Among its directives, it required major departments to appoint Chief Artificial Intelligence Officers (CAIOs) within 60 days—a deadline of December 29, 2023. This was swiftly confirmed by agency actions: the General Services Administration (GSA) and the Department of Education appointed their CAIOs in mid-November, while the National Science Foundation (NSF) followed later in the month. The Department of Homeland Security (DHS) was charged with developing AI security guidelines, including cybersecurity protocols, and partnering with private firms in critical sectors like energy, a role outlined in the order and affirmed in a DHS press release on November 14, 2023.

The Department of Veterans Affairs (VA) responded to the order by launching a US$1 million AI Tech Sprint to reduce healthcare worker burnout, announced on October 31, 2023. This initiative invited innovators to create AI tools to streamline administrative tasks.

Meanwhile, the National Institute of Standards and Technology (NIST), under the Department of Commerce, released the initial version of its AI Risk Management Framework (AI RMF 1.0) in January 2023. In response to Executive Order 14110, NIST developed a companion resource titled "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile", which was released on July 26, 2024. This profile addresses risks specific to generative AI and assists organizations in managing these risks effectively.

These measures collectively aimed to embed AI responsibly across federal operations, balancing innovation with oversight.

[Read More: UK Advances AI Governance: New Laws, Innovation Office, and Quantum Centre Unveiled]

A Divided Reception

The order’s release sparked widespread debate. The Houston Chronicle editorial board, in a November 2023 piece, called it a “first step toward protecting humanity”, a view echoed by Democratic lawmakers like Senator Richard Blumenthal and Representative Ted Lieu. Representative Don Beyer, vice-chair of the House AI Caucus, lauded its “comprehensive strategy for responsible innovation”, urging legislative follow-up.

Critics pushed back hard. Senator Ted Cruz decried it as “barriers to innovation disguised as safety measures”. Tech groups like the Chamber of Commerce and NetChoice, representing firms like Amazon and Google, warned in November 2023 statements that it threatened private-sector progress.

Public support stood at 69%, per an AI Policy Institute poll conducted in late 2023. Civil rights advocates, including Maya Wiley of the Leadership Conference on Civil and Human Rights, praised its fairness focus. In an interview, Wiley said: “We still need Congress to consider legislation that will regulate AI and ensure that innovation makes us more fair, just, and prosperous, rather than surveilled, silenced, and stereotyped”.

The American Civil Liberties Union (ACLU) raised concerns over the law enforcement provisions in the executive order. Cody Venzke expressed “deep concerns” about sections related to national security and law enforcement, particularly regarding the order’s push to “identify areas where AI can enhance law enforcement efficiency and accuracy”.

[Read More: EU Becomes the First Country to Enact Comprehensive AI Law!]

Image Source: AI Policy Institute

Gaps and Limitations

The order’s scope, while broad, omitted key proposals. It did not establish a licensing regime for advanced AI models—supported by OpenAI’s Sam Altman in 2023 congressional testimony—nor ban high-risk uses or mandate training data transparency. These exclusions reflected a cautious approach, balancing regulation with innovation, but left some calling for stronger measures.

[Read More: Shaping the Future: Taiwan's Pioneering Draft AI Law to Safeguard Innovation and Society]

Trump’s Rescission Shifts the Landscape

On January 20, 2025, hours after his inauguration as the 47th president, Donald Trump revoked Executive Order 14110 via a new directive, Initial Rescissions of Harmful Executive Orders and Actions, published in the Federal Register on January 28, 2025 (90 FR 8237). Trump labelled it and other Biden actions “unpopular, inflationary, illegal, and radical practices”, a stance previewed in his campaign and confirmed in a White House statement that day. This rescission, part of a broader rollback of 78 Biden orders, aligned with Trump’s prior AI policies from his first term, which emphasized deregulation.

The move dismantled Biden’s framework, leaving federal AI governance in flux. A subsequent Trump order on January 23, 2025, Removing Barriers to American Leadership in Artificial Intelligence (90 FR 8741), directed agencies to revise all actions tied to EO 14110, signalling a shift toward minimal oversight.

[Read More: Paris AI Summit: US and UK Decline to Sign Global AI Declaration]

What’s Next for AI Policy?

Executive Order 14110’s brief existence underscores the United States’ ongoing struggle to regulate artificial intelligence, a challenge intensified by its rescission on January 20, 2025. With federal AI governance now in flux, the path forward remains uncertain: will Congress, led by figures like Senate Majority Leader Chuck Schumer, who championed AI legislation in 2023, step in to fill the void, or will deregulation prevail under President Trump’s directive?

At TheDayAfterAI News, we observe that legal control has historically lagged behind technological advancement—a pattern evident from the atomic bomb’s invention, where Nobel Prize-winning scientists grappled with its devastating legacy. As AI’s potential for both benefit and harm emerges, we hope humanity can steer its development ethically and morally, striking a balance between necessary regulation and preventing its misuse by malevolent actors.

[Read More: U.S. Copyright Office Issues Guidance on AI-Generated Works]


Source: White House
