Pentagon ditches Anthropic AI over “security risk” and OpenAI takes over
2026-03-03 16:05:58 | Author: securityboulevard.com

On Friday the US Pentagon cut ties with Anthropic, the company behind Claude AI. Defense Secretary Pete Hegseth designated the San Francisco-based company a “supply-chain risk to national security.”

The supply-chain risk designation means that no contractor, supplier, or partner doing business with the US military can deal with Anthropic. The label had previously been applied only to foreign adversaries such as Huawei, and using it against a US company marks a rare escalation in a government-industry dispute. According to reports, President Donald Trump also ordered every federal agency to stop using Anthropic’s technology.

What Anthropic wouldn’t budge on

Anthropic called the designation “unlawful and politically motivated” and said it intends to challenge it in court.

At the center of the dispute is how far Anthropic believes its models should be allowed to go inside military systems. Anthropic, the first frontier AI company deployed on the military’s classified networks, wanted two contractual restrictions on its AI model Claude, as outlined in its response to the Pentagon’s announcement: the Pentagon could not use its technology for mass domestic surveillance of Americans, and could not deploy it in fully autonomous weapons.

The Pentagon had previously demanded that all AI vendors agree to “all lawful purposes” language as part of their contracts. Anthropic told ABC that the terms the Pentagon ultimately offered left the door open for the government to violate the company’s no-surveillance and no-weapons clauses.

Defense Secretary Hegseth responded with a statement canceling Anthropic’s $200m Pentagon contract, awarded last July. He accused Anthropic of attempting to seize veto power over military operations and called the company’s position fundamentally incompatible with American principles.

Anthropic’s CEO Dario Amodei called the government’s response retaliatory and punitive and promised to challenge the designation in court.

Legal scholars suggest that the AI company could have a strong case, questioning whether Hegseth can meet the statutory requirements for such a designation, which is intended to protect military systems from adversarial sabotage, not to resolve a commercial disagreement over contract terms.

Dan W. Ball, senior fellow at the American Foundation for Innovation, called the Pentagon’s move “attempted corporate murder,” arguing that Google, Amazon, and NVIDIA would have to detach themselves from Anthropic if Hegseth got his way. Amazon is Anthropic’s primary cloud computing provider, and Anthropic also uses Google’s data centers extensively. Both companies are investors in Anthropic, as is NVIDIA, which also partners with the AI company on GPU engineering. If the Pentagon’s designation restricts federal contractors from integrating Anthropic technology into defense-related systems, those partners could be required to separate or ringfence any federal-facing work involving the company.

OpenAI steps in

In a whirlwind of policy changes by the US military, the Pentagon also signed a deal with ChatGPT creator OpenAI on Friday evening, just a few hours after dropping Anthropic.

OpenAI CEO Sam Altman said the agreement preserved the same principles Anthropic had been blacklisted for defending.

The difference, according to Altman, is the enforcement mechanism. Instead of hard contractual prohibitions, OpenAI accepted the “all lawful purposes” framework but layered on architectural controls: cloud-only deployment, a proprietary safety stack the Pentagon agreed not to override, and cleared engineers embedded forward. OpenAI said these protections made the company confident that the Pentagon couldn’t cross the red lines it shares with Anthropic.

Altman reportedly said Anthropic’s approach differed because it relied on specific contract language rather than existing legal protections, adding Anthropic “may have wanted more operational control than we did.”

The morning after

The policy dispute did not immediately change how existing systems were operating. According to reporting by The Wall Street Journal and Axios, US Central Command used Anthropic’s AI during Operation Epic Fury, a coordinated US–Israeli operation targeting Iran. The outlets reported that the system was used for intelligence assessment, target analysis, and operational modeling.

Claude remained in use because it was already embedded in certain classified military systems. As a senior defense official previously told Axios:

“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”

Hegseth announced a six-month period during which the Pentagon will remove Anthropic’s AI from its systems.

Consumers vote with their feet

The dispute has also prompted reactions from some AI industry employees and users. More than 875 employees across Google and OpenAI signed an open letter backing Anthropic’s stance. According to the letter:

“They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.”

A consumer boycott organized under the name QuitGPT is urging users to stop using ChatGPT, with a protest planned at OpenAI’s HQ this week. Meanwhile, Claude rocketed to the top of Apple’s App Store over the weekend.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

*** This is a Security Bloggers Network syndicated blog from Malwarebytes authored by Malwarebytes. Read the original post at: https://www.malwarebytes.com/blog/news/2026/03/pentagon-ditches-anthropic-ai-over-security-risk-and-openai-takes-over


Source: https://securityboulevard.com/2026/03/pentagon-ditches-anthropic-ai-over-security-risk-and-openai-takes-over/