Is America Behind the Ball When It Comes to AI Regulation?
October 13, 2025 | securityboulevard.com

This article examines the diverging strategies for regulating AI around the world and the cultural and philosophical differences behind them. The U.S. is taking a wait-and-see, even laissez-faire, attitude, while Europe and China are actively writing rules. Despite their political differences, countries are gradually converging on a governance consensus built around core principles such as transparency, accountability, and fairness.

When it comes to regulating AI, America seems to be, at best, taking a wait-and-see or perhaps a laissez-faire approach. Meanwhile, other countries (particularly in Europe and China) are actively regulating what AI can do, how it can be used, and how it is to operate. Who is right?

Magic 8 Ball says: situation unclear, ask again later.

The problem with regulating AI at this point is a lack of consensus about what basic principles we are attempting to embed – in short, what is “good” about AI, and what is “bad.” In the immortal words of Dr. Peter Venkman, “I’m fuzzy on the whole good/bad thing. What do you mean, ‘bad’?” We all know about potential bad outcomes with AI – bias, prejudice, inaccuracy, Nazi chatbots, robot overlords, you know, “bad” things. But what do we want AI to do, how do we want it to do it, and more importantly, what do we NOT want it to do?

Freedom vs. Free Market

In October 2023, former President Biden issued Executive Order 14110, the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" directive, telling U.S. agencies to create testing protocols, watermarking standards, safety certifications, and reporting obligations for advanced AI models. It was a sprawling, ambitious framework: part national strategy, part moral statement. The message was that AI risk was not just an engineering problem; it was a governance problem. The order called for transparency of inputs, processing, and outputs, and for "safety" standards for AI programs.

Fifteen months later, Executive Order 14179, signed by President Trump in January 2025 and titled "Removing Barriers to American Leadership in Artificial Intelligence," swung the pendulum in the opposite direction. The new order declares that over-regulation "threatens innovation and national competitiveness." It calls for deregulation, rescission of conflicting rules, and a "whole-of-government approach to unleash innovation." Federal agencies that once drafted AI risk frameworks were told to identify and repeal them. What had been an effort to build guardrails became a campaign to clear the road. Picture the TV ads featuring OpenAI's Sam Altman as an AI-generated kid coding in his bedroom, dramatizing the special relationship between AI and the United States of America.

Congress Talks; The States Act

Fear not! The U.S. Congress is studying the issue. Proposals such as the Algorithmic Accountability Act 2.0, the AI Disclosure Act, and Senator Cruz's SANDBOX Act have stalled or fragmented in committee. Attempts to impose a federal moratorium on state AI laws were rejected almost unanimously. The result: no national baseline, no single definition of "high-risk" AI, and no clear regulatory philosophy.

In the absence of federal regulation, some states have tentatively begun to regulate aspects of AI. California's Transparency in Frontier AI Act (TFAIA), signed in September 2025, requires developers of "frontier models" to document training data, perform risk assessments, and establish post-deployment monitoring. Colorado is enforcing algorithmic accountability and discrimination testing; Texas has proposed its own AI sandbox regime; and Florida, through its Digital Bill of Rights, has extended consumer-privacy protections to algorithmic profiling. So, in the name of promoting unregulated AI, we end up with more regulation, not less.

The question of whether, and how, to regulate AI reflects the cultural and philosophical differences between countries. The U.S. regulates outcomes (fraud, discrimination, deception) but resists ex ante rules. The EU regulates process, requiring documentation, testing, and human oversight before deployment. China regulates behavior, mandating algorithmic transparency and content control to maintain political equilibrium.

These philosophical differences matter more than the statutes themselves because they shape what each jurisdiction believes AI is. For the U.S., AI is a product; for Europe, a system; for China, an actor. The result is three incompatible legal grammars for the same technology.

Principles of AI Regulation

Despite divergent politics, a global consensus is coalescing around several key principles of AI governance:

Transparency and Explainability – Systems must disclose that they are AI and provide explanations for significant decisions (EU AI Act Arts. 13–15; U.K. DSIT Guidelines; OECD Principles).
Accountability and Human Oversight – There must always be a responsible natural or legal person—no “autonomous scapegoats.”
Fairness and Non-Discrimination – Bias mitigation is not just ethical but statutory in most frameworks.
Safety and Robustness – Pre-deployment testing, continuous monitoring, and incident reporting are required for “high-risk” systems.
Privacy and Data Governance – Alignment with GDPR-style limitations on data use and retention is becoming the global norm.
Security and Resilience – Governments are mandating protections against adversarial manipulation, model inversion, and data poisoning.
Traceability – Auditability of model decisions and provenance of training data form the backbone of emerging certification regimes.

Together, these principles define the modern AI regulatory lexicon, even if implementation varies.
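
To make the list concrete, consider how these principles might surface as engineering artifacts. What follows is a minimal, hypothetical Python sketch of a machine-readable compliance record; the class and field names (ModelGovernanceRecord, bias_audits, and so on) are illustrative assumptions, not drawn from any statute's official schema.

```python
# Hypothetical sketch: encoding the seven governance principles as a
# machine-readable compliance record. Field names are illustrative and
# are not taken from any statute's actual schema.
from dataclasses import asdict, dataclass, field
from datetime import date
import json


@dataclass
class ModelGovernanceRecord:
    # Transparency and explainability: what the system is, how it decides
    model_name: str
    is_ai_disclosed_to_users: bool
    explanation_method: str            # e.g., "reason codes for denials"

    # Accountability and human oversight: a named responsible party
    accountable_party: str             # a natural or legal person, never "the model"
    human_override_available: bool

    # Fairness and non-discrimination: evidence bias testing happened
    bias_audits: list[str] = field(default_factory=list)

    # Safety and robustness: pre-deployment testing, incident reporting
    pre_deployment_tests: list[str] = field(default_factory=list)
    incident_reporting_contact: str = ""

    # Privacy and data governance: GDPR-style limits on data use
    data_retention_days: int = 365
    training_data_sources: list[str] = field(default_factory=list)

    # Security and resilience: defenses against known attack classes
    adversarial_defenses: list[str] = field(default_factory=list)

    # Traceability: provenance and an auditable decision trail
    audit_log_location: str = ""
    record_date: str = str(date.today())

    def to_disclosure(self) -> str:
        """Serialize the record as JSON, suitable for a public registry."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    record = ModelGovernanceRecord(
        model_name="credit-scoring-v3",
        is_ai_disclosed_to_users=True,
        explanation_method="reason codes returned with each adverse decision",
        accountable_party="Acme Lending, Inc., Chief Risk Officer",
        human_override_available=True,
        bias_audits=["disparate-impact test, 2025-Q3"],
        pre_deployment_tests=["red-team review", "edge-case stress test"],
        incident_reporting_contact="ai-incidents@example.com",
        training_data_sources=["internal loan history 2015-2024"],
        adversarial_defenses=["input validation", "query rate limiting"],
        audit_log_location="s3://example-bucket/audit/credit-scoring-v3/",
    )
    print(record.to_disclosure())
```

A record like this is not compliance by itself, of course, but the exercise shows why documentation-first regimes appeal to engineers: each principle on the list reduces to a field that some identifiable person has to fill in and stand behind.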

Europe’s Regulatory Jet Stream

The EU AI Act, in force since August 2024, operationalizes these principles. Since February 2025, "unacceptable-risk" systems (social scoring, subliminal manipulation) have been banned. Since August 2025, general-purpose AI (GPAI) models must maintain technical documentation, disclose training data sources, and implement risk management. By August 2026, high-risk systems in health care, finance, and infrastructure must complete conformity assessments and register in a public EU database.

The European model is procedural and paternalistic: prove safety first, innovate later. Critics call it bureaucratic, but it provides predictability—something U.S. developers increasingly crave.

China’s Algorithmic Recommendation and Deep Synthesis rules emphasize traceability, content moderation, and user identity verification. Canada’s pending Artificial Intelligence and Data Act (AIDA) blends EU-style risk tiers with North American pragmatism. The U.K. continues its “pro-innovation” path, relying on sectoral regulators rather than a central AI agency. Even the OECD and G7 Hiroshima Process are harmonizing guidelines that may eventually become the de facto global floor.

With All Deliberate Speed

America’s fragmented approach may yet produce brilliance. Innovation thrives in chaos, and many argue that Europe’s regulatory gravity will slow its innovators while U.S. companies sprint ahead. But ungoverned speed can also produce catastrophe — and legal entropy. The question is not whether the U.S. should regulate AI, but whether it can do so coherently, across administrations and ideologies. Just remember the important safety tip — don’t cross the streams.

Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for the Information Security team. Rasch's career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics, and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy, and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University, and the American University School of Law, and was active with the American Bar Association's Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as Director or Managing Director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems, and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department's efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division's Fraud Section, efforts that eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including those of Kevin Mitnick, Kevin Poulsen, and Robert Tappan Morris. He has been a frequent commentator in the media on issues related to information security, appearing on BBC, CBC, Fox News, CNN, NBC News, and ABC News, and in the New York Times, the Wall Street Journal, and many other outlets.

Article source: https://securityboulevard.com/2025/10/is-america-behind-the-ball-when-it-comes-to-ai-regulation/