When it comes to regulating AI, America seems to be, at best, taking a wait-and-see or perhaps a laissez-faire approach. Meanwhile, other countries (particularly in Europe and China) are actively regulating what AI can do, how it can be used, and how it is to operate. Who is right?
Magic 8 Ball says: situation unclear, ask again later.
The problem with regulating AI at this point is a lack of consensus about what basic principles we are attempting to embed – in short, what is “good” about AI, and what is “bad.” In the immortal words of Dr. Peter Venkman, “I’m fuzzy on the whole good/bad thing. What do you mean, ‘bad’?” We all know about potential bad outcomes with AI – bias, prejudice, inaccuracy, Nazi chatbots, robot overlords, you know, “bad” things. But what do we want AI to do, how do we want it to do it, and more importantly, what do we NOT want it to do?
Freedom vs. Free Market
In October 2023, then-President Biden issued Executive Order 14110, the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” directive, telling U.S. agencies to create testing protocols, watermarking standards, safety certifications, and reporting obligations for advanced AI models. It was a sprawling, ambitious framework: part national strategy, part moral statement. The message was that AI risk was not just an engineering problem; it was a governance problem. It called for transparency of inputs, processing, and outputs, and for “safety” standards for AI programs.
In January 2025, President Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” swinging the pendulum in the opposite direction. The new order declares that over-regulation “threatens innovation and national competitiveness.” It calls for deregulation, rescission of conflicting rules, and a “whole-of-government approach to unleash innovation.” Federal agencies that once drafted AI risk frameworks were told to identify and repeal them. What had been an effort to build guardrails has become a campaign to clear the road. Cue the TV ads featuring OpenAI’s Sam Altman as an AI-generated kid coding in his bedroom, setting up the special relationship between AI and the United States of America.
Congress Talks; The States Act
Fear not! The U.S. Congress is studying the issue. Proposals such as the Algorithmic Accountability Act 2.0, the AI Disclosure Act, and Senator Cruz’s SANDBOX Act have stalled or fragmented in committee. Attempts to impose a federal moratorium on state AI laws were rejected almost unanimously. The result: no national baseline, no single definition of “high-risk” AI, and no clear regulatory philosophy.
In the absence of federal regulation, some states have tentatively begun to regulate aspects of AI. California’s Transparency in Frontier AI Act (TFAIA), signed in September 2025, requires developers of “frontier models” to document training data, perform risk assessments, and establish post-deployment monitoring. Colorado is enforcing algorithmic accountability and discrimination testing; Texas has proposed its own AI sandbox regime; and Florida, through its Digital Bill of Rights, has extended consumer-privacy protections to algorithmic profiling. So, in the interest of promoting unregulated AI, we end up with more regulation.
The question of whether — and how — to regulate AI is a reflection of the cultural and philosophical differences between countries. For example, the U.S. regulates outcomes—fraud, discrimination, deception—but resists ex ante rules. The EU regulates the process — requiring documentation, testing and human oversight before deployment. Finally, China regulates behavior — mandating algorithmic transparency and content control to maintain political equilibrium.
These philosophical differences matter more than the statutes themselves because they shape what each jurisdiction believes AI is. For the U.S., AI is a product; for Europe, a system; for China, an actor. The result is three incompatible legal grammars for the same technology.
Principles of AI Regulation
Despite divergent politics, a global consensus is coalescing around several key principles of AI governance:
Transparency and Explainability – Systems must disclose that they are AI and provide explanations for significant decisions (EU AI Act Arts. 13–15; U.K. DSIT Guidelines; OECD Principles).
Accountability and Human Oversight – There must always be a responsible natural or legal person—no “autonomous scapegoats.”
Fairness and Non-Discrimination – Bias mitigation is not just ethical but statutory in most frameworks.
Safety and Robustness – Pre-deployment testing, continuous monitoring, and incident reporting are required for “high-risk” systems.
Privacy and Data Governance – Alignment with GDPR-style limitations on data use and retention is becoming the global norm.
Security and Resilience – Governments are mandating protections against adversarial manipulation, model inversion, and data poisoning.
Traceability – Auditability of model decisions and provenance of training data form the backbone of emerging certification regimes.
Together, these principles define the modern AI regulatory lexicon, even if implementation varies.
Europe’s Regulatory Jet Stream
The EU AI Act — in force since August 2024 — operationalizes these principles. As of February 2025, “unacceptable-risk” systems (social scoring, subliminal manipulation) are banned. Since August 2025, general-purpose AI (GPAI) models must maintain technical documentation, disclose training data sources, and implement risk management. By August 2026, high-risk systems in health care, finance, and infrastructure must complete conformity assessments and register in a public EU database.
The European model is procedural and paternalistic: prove safety first, innovate later. Critics call it bureaucratic, but it provides predictability—something U.S. developers increasingly crave.
Europe is not alone. China’s Algorithmic Recommendation and Deep Synthesis rules emphasize traceability, content moderation, and user identity verification. Canada’s pending Artificial Intelligence and Data Act (AIDA) blends EU-style risk tiers with North American pragmatism. The U.K. continues its “pro-innovation” path, relying on sectoral regulators rather than a central AI agency. Even the OECD and the G7 Hiroshima Process are harmonizing guidelines that may eventually become the de facto global floor.
With All Deliberate Speed
America’s fragmented approach may yet produce brilliance. Innovation thrives in chaos, and many argue that Europe’s regulatory gravity will slow its innovators while U.S. companies sprint ahead. But ungoverned speed can also produce catastrophe — and legal entropy. The question is not whether the U.S. should regulate AI, but whether it can do so coherently, across administrations and ideologies. Just remember the important safety tip — don’t cross the streams.