Anthropic’s new AI model, which is so good at detecting software vulnerabilities – and creating code to exploit them – that the vendor refused to widely release it, reportedly caught the attention of Trump Administration officials who felt it was dangerous enough to warn financial services organizations.
Treasury Secretary Scott Bessent and Jerome Powell, chairman of the Federal Reserve, late last week reportedly met with a group of CEOs of major U.S. banks – including Citi, Bank of America, and Wells Fargo – to warn them about the cybersecurity risks that Anthropic’s Claude Mythos Preview poses if it’s used by nation-state or financially motivated bad actors.
According to The New York Times, Bessent told the CEOs that deploying Mythos Preview could put their customers’ sensitive data at risk.
The CEOs were already in Washington, D.C., for a lobby group meeting, and CNBC described the get-together with Bessent and Powell as a “surprise meeting.”
A Treasury spokesperson told The New York Times that the meeting “was convened by Secretary Bessent to initiate a process for planning and coordination of our approach to the rapid developments taking place in AI.”
The meeting was first reported by Bloomberg. Financial regulators in the UK also are reviewing the cybersecurity implications of Mythos Preview, according to the Financial Times, and Bloomberg reported that the Bank of England will discuss the AI model with banks in that country.
Anthropic announced Mythos Preview earlier this month, noting that company scientists used the frontier model to find thousands of zero-day vulnerabilities, many of them critical and some that have been around but undetected for decades. The flaws were found in every major operating system and web browser.
In a blog post about the frontier AI model, Anthropic executives credited its strong agentic coding and reasoning skills for its “powerful cyber capabilities.” It was able to identify the vulnerabilities and create code to exploit them without any human intervention.
“Claude Mythos Preview demonstrates a leap in these cyber skills – the vulnerabilities it has spotted have in some cases survived decades of human review and millions of automated security tests, and the exploits it develops are increasingly sophisticated,” they wrote, adding that without needed safeguards, the cyber capabilities in frontier models like Mythos Preview “could be used to exploit the many existing flaws in the world’s most important software. This could make cyberattacks of all kinds much more frequent and destructive, and empower adversaries of the United States and its allies.”
Anthropic is making Mythos Preview the foundation of Project Glasswing, an initiative to make software more secure. The company is making Mythos Preview available to a small number of organizations, including hyperscalers like Amazon Web Services, Google, and Microsoft, security vendors like CrowdStrike and Palo Alto Networks, and infrastructure and device vendors like Broadcom, Apple, and Nvidia. Also, JPMorgan Chase and The Linux Foundation will have access.
The goal is to use the AI model to strengthen the security of their software.
In addition, Anthropic has been speaking with U.S. government officials about the offensive and defensive capabilities of Mythos Preview, noting that securing the nation’s critical infrastructure – a target of cyber operations of adversaries like China, Russia, Iran, and North Korea – is a priority for the United States, as it is for other democratic countries.
Jamie Dimon, CEO of JPMorgan, was the only top executive of a U.S. bank who couldn’t make it to the meeting with Bessent and Powell. In a letter to shareholders earlier this month, Dimon lauded the advantages that AI will bring, but also warned of the cybersecurity threats, from deepfakes to misinformation to vulnerabilities.
“These risks are real, but they are manageable if companies, regulators and governments prepare,” he wrote. “The worst mistakes we can make are predictable: overreact at the first serious incident and regulate out important innovation or underreact and fail to learn from what went wrong.”
Anthropic has had its share of recent security issues. A human error involving a release package led to the leaking of some of the internal source code for its Claude Code coding assistant. In addition, LayerX researchers last week said protections in Claude Code can easily be bypassed, allowing it to be used as a tool to launch cyberattacks, hack into websites, and develop new exploits.
This also comes as Anthropic pushes back against the U.S. Defense Department for labeling it a supply chain risk that threatens national security. Company executives claim the Trump Administration is using the label to punish Anthropic for refusing to bend on limits on the use of its AI technology in warfare.