Global AI Guidelines: How the US, EU, and China Are Shaping the Future of AI Governance

Image Credit: Growtika | Unsplash

As artificial intelligence reshapes industries and societies, governments, developers, and organizations worldwide are racing to establish guidelines—both mandatory and voluntary—to ensure its ethical and safe deployment. From watermarking AI-generated content to fostering transparency, these efforts reflect a growing recognition of AI’s potential benefits and risks, prompting diverse approaches tailored to regional priorities and technological landscapes.

[Read More: AI-Generated Receipts Spark Debate Over Verification Systems and Fraud Risks]

United States: A Shift in Policy Direction

In the United States, the landscape for AI compliance guidelines has shifted significantly since the revocation of President Biden’s Executive Order 14110 on January 20, 2025, by President Donald Trump via Executive Order 14148. Signed on October 30, 2023, EO 14110 built on earlier voluntary commitments from tech giants like OpenAI, Google, and Microsoft—announced in July 2023—to watermark AI-generated content and prioritize safety, encouraging further industry action through federal coordination. Its replacement, Trump’s “Removing Barriers to American Leadership in Artificial Intelligence,” issued January 23, 2025, emphasizes deregulation to enhance U.S. AI competitiveness, directing advisors to review prior policies within 180 days (by July 22, 2025) for consistency with this new approach. Meanwhile, the National Institute of Standards and Technology (NIST) continues to promote its voluntary AI Risk Management Framework (AI RMF), first released in January 2023, which guides organizations in managing risks like bias and privacy. Industry adoption of the AI RMF, exemplified by companies like IBM integrating its principles before 2025, appears to persist, though the absence of federal mandates now leaves compliance largely to corporate discretion.

[Read More: Beyond Compliance: Balancing Legal and Ethical Responsibilities in AI]

European Union: Pioneering Mandatory Standards

Across the Atlantic, the European Union is setting a global benchmark with its AI Act, which entered into force on August 1, 2024, and will be fully applicable by August 2, 2026, with some provisions phased in earlier or later. This mandatory framework classifies AI systems by risk level—unacceptable, high, limited, and minimal—imposing strict requirements on high-risk applications, such as those in healthcare or law enforcement, including risk management, transparency, and conformity assessments. The EU has also introduced the AI Pact, a voluntary initiative launched in November 2023, encouraging developers, including major firms and startups, to adopt the Act’s requirements ahead of schedule. A key focus is the General-Purpose AI Code of Practice, expected to address transparency for models like chatbots and generative tools, with a third draft anticipated around March 2025 ahead of its finalization by May 2, 2025, urging providers to embed identifiers in AI outputs. This aligns with the Act’s push for identifiable AI-generated content, such as watermarks on videos, to counter misinformation.

[Read More: EU Blocks Chinese AI App DeepSeek Over GDPR Compliance Concerns]

China: Balancing Innovation and Control

China has adopted a distinct approach to generative AI, blending mandatory regulations with state oversight. In August 2023, its Interim Measures for the Management of Generative Artificial Intelligence Services, effective since August 15, 2023, required providers to label AI-generated content visibly, a policy expanded by the Measures for the Identification of Artificial Intelligence-Generated and Synthetic Content, announced on March 7, 2025, and set to enforce both visible markers and metadata embedding from September 1, 2025. Enforced by the Cyberspace Administration of China (CAC), these rules aim to curb misinformation while fostering AI innovation within the country’s broader digital strategy, as outlined in its 2017 AI Development Plan. Companies like Baidu must comply, integrating these requirements into the development pipelines for products such as its Ernie Bot. Critics, however, contend that this framework underscores Beijing’s intent to tightly control information flows, differing from the EU’s AI Act, which emphasizes individual rights and privacy in its risk-based approach.

[Read More: Meta AI Launches Across Europe with Text-Only Features for GDPR Compliance]

Voluntary Efforts by Developers and Organizations

Beyond government-led initiatives, tech developers and international bodies are advancing voluntary guidelines. Google’s SynthID, launched in August 2023, embeds imperceptible watermarks in AI-generated images, a tool refined following Google’s AI Principles update in June 2023, which emphasized accountability and transparency in generative AI. Similarly, the Coalition for Content Provenance and Authenticity (C2PA), founded in 2021 by Adobe, Microsoft, and others, promotes cryptographic metadata to verify content origins, gaining adoption among media firms like the BBC and Reuters by 2024. On the global stage, the Organisation for Economic Co-operation and Development (OECD) champions its 2019 AI Principles, updated in 2024, while UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted in November 2021, encourages voluntary ethical impact assessments, influencing policies across member states. These non-binding efforts are shaping corporate practices, particularly in North America and Europe, as companies seek to build trust amid ongoing regulatory uncertainty.
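To illustrate the general idea behind cryptographic provenance metadata of the kind C2PA promotes, the sketch below bundles a content hash and origin claim with a signature, so a downstream consumer can detect tampering. This is a simplified, hypothetical example, not C2PA’s actual manifest format: real Content Credentials use X.509 certificate chains rather than the shared HMAC key assumed here, and the field names are invented for illustration.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key; C2PA itself uses certificate-based signatures.
SECRET_KEY = b"publisher-signing-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Bundle a content hash and origin metadata with an HMAC signature."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; reject tampered content or metadata."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    if claim.get("sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"...AI-generated image bytes..."
manifest = attach_provenance(image, "example-generative-model")
print(verify_provenance(image, manifest))        # True: content and metadata intact
print(verify_provenance(b"tampered", manifest))  # False: hash no longer matches
```

The key design point, common to watermarking and provenance schemes alike, is that the claim travels with the content and any alteration to either one invalidates the check.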

[Read More: Why Did China Ban Western AI Chatbots? The Rise of Its Own AI Models]


Source: Reuters, EU, ScienceDirect

TheDayAfterAI News

We are your source for AI news and insights. Join us as we explore the future of AI and its impact on humanity, offering thoughtful analysis and fostering community dialogue.

https://thedayafterai.com