The recent preliminary injunction issued by the Northern District of California in the Anthropic–Department of Defense dispute is not occurring in a vacuum. It is unfolding at precisely the moment the White House has articulated a national artificial intelligence governance strategy—one that, paradoxically, underscores the very deficiencies the case exposes.
The opinion addresses the government’s attempt to designate a domestic AI company as a “supply chain risk” and effectively compel access to its models under conditions that would override the company’s internal guardrails. The court’s skepticism toward that designation reflects classic administrative law concerns: lack of statutory authority, arbitrary and capricious action, and the absence of procedural safeguards.
But layered on top of that dispute is a rapidly evolving federal policy framework for AI—one that reveals a deeper structural problem.
On March 20, 2026, the White House released its “National Policy Framework for Artificial Intelligence: Legislative Recommendations.” That framework, together with the December 11, 2025 Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence,” signals a deliberate federal strategy: centralize AI governance at the federal level, preempt state regulation, and adopt what the administration repeatedly describes as a “minimally burdensome” approach designed to “sustain and enhance the United States’ global AI dominance.”
The policy architecture rests on several pillars. It emphasizes innovation, infrastructure, and international competitiveness, while proposing legislative action on discrete issues such as child safety, intellectual property, workforce development, and free speech. It is, in short, a framework designed to enable rapid deployment of AI technologies rather than to constrain them.
That design choice might be defensible if the competitive environment were neutral. It is not.
Artificial intelligence is now the central domain of geopolitical competition. The United States, China, and other nation-states—many of which have interests directly adverse to U.S. national security and democratic norms—are engaged in an accelerated race for dominance in model capability, deployment, and integration into critical infrastructure. In that environment, ethical restraint is not simply a matter of corporate virtue; it is a potential competitive disadvantage.
This is where the Anthropic case becomes particularly instructive.
Anthropic attempted to impose ethical constraints through its terms of service and embedded model guardrails—limitations on surveillance, military use, and other high-risk applications. The Department of Defense sought to bypass or renegotiate those constraints in pursuit of operational flexibility.
That tension is not anomalous. It is inevitable.
When AI governance is left to private contractual mechanisms, those mechanisms will be tested—and ultimately eroded—by market pressure and national security imperatives. Companies operating in a highly competitive global environment will face a stark choice: adhere to self-imposed ethical constraints and risk falling behind competitors, or relax those constraints to accelerate deployment and capture market share.
The rational economic actor will choose the latter.
This is the classic “race to the bottom” dynamic, long recognized in regulatory theory. Where standards are voluntary and enforcement is decentralized, competition drives actors toward the least restrictive regime. In the context of AI, that means fewer guardrails, less transparency, and greater tolerance for high-risk applications.
The White House framework, despite its aspirations, does little to counteract this dynamic. By emphasizing minimal regulatory burden and federal preemption of state law, it risks eliminating stricter regulatory regimes without replacing them with enforceable national standards. The result is a system in which ethical constraints are optional, negotiable, and subject to competitive pressure.
Empirical evidence already suggests that voluntary AI commitments are insufficient. Industry pledges—many encouraged by the federal government—have shown inconsistent compliance and significant gaps in areas such as model security, risk assessment, and misuse prevention. Without binding legal obligations, there is no mechanism to ensure adherence when those commitments conflict with economic or strategic incentives.
The implications are profound.
First, AI ethics becomes a market variable rather than a legal requirement. Companies will calibrate their guardrails not based on normative principles, but on competitive positioning.
Second, national security considerations will further erode constraints. Governments—whether in the United States or abroad—will demand capabilities that private actors may initially resist but ultimately accommodate under pressure.
Third, the global nature of AI development ensures that any unilateral restraint will be undercut by actors operating in jurisdictions with fewer or no constraints. The result is not a stable equilibrium, but a downward spiral.
This is precisely why reliance on terms of service is untenable.
Terms of service are inherently fragile. They can be amended, waived, or selectively enforced. They lack transparency, are not subject to meaningful judicial oversight, and do not bind third parties. Most importantly, they cannot withstand sustained economic and geopolitical pressure.
The Anthropic litigation illustrates this failure mode in real time. A company’s attempt to impose ethical limits collided with governmental demands, exposing the absence of a binding legal framework to resolve that conflict.
The court’s intervention—grounded in administrative law principles—reasserts the importance of statutory authority and judicial review. But it does not solve the underlying problem: There is no comprehensive legal regime governing the substantive use of AI.
That gap must be addressed.
If AI governance is to be meaningful, it must be grounded in law—statutes that define permissible and impermissible uses, regulations that establish enforceable standards, and oversight mechanisms that ensure compliance. Those standards must be sufficiently robust to withstand competitive pressures and sufficiently flexible to adapt to technological change.
Anything less invites exactly the outcome we are beginning to see: a fragmented, inconsistent, and ultimately ineffective system in which ethical constraints are subordinated to market forces and geopolitical competition.
The White House framework is a starting point. It is not an answer.
And the Anthropic case is a warning.
If we continue to rely on terms of service and voluntary commitments as the primary mechanisms for AI governance, we should not be surprised when those mechanisms collapse under the weight of competition.
That collapse will not be gradual. It will be decisive.
Because in a global race for artificial intelligence dominance, ethics that are optional will not be sustained. They will be abandoned.