Anthropic's Claude Mythos Finds Thousands of Zero-Day Flaws Across Major Systems
2026-04-08 09:16 · Author: thehackernews.com

Artificial Intelligence / Secure Coding

Artificial Intelligence (AI) company Anthropic announced a new cybersecurity initiative called Project Glasswing that will use a preview version of its new frontier model, Claude Mythos, to find and address security vulnerabilities.

The model will be used by a small set of organizations, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, along with Anthropic, to secure critical software.

The company said it's forming this initiative in response to capabilities observed in its general-purpose frontier model that demonstrate a "level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities." Because of its cybersecurity capabilities and concerns that they could be abused, Anthropic has opted not to make the model generally available.

Mythos Preview, Anthropic claimed, has already discovered thousands of high-severity zero-day vulnerabilities in every major operating system and web browser. Some of these include a now-patched 27-year-old bug in OpenBSD, a 16-year-old flaw in FFmpeg, and a memory-corrupting vulnerability in a memory-safe virtual machine monitor.

In one instance highlighted by the company, Mythos Preview is said to have autonomously come up with a web browser exploit that chained together four vulnerabilities to escape the renderer and operating system sandboxes. Anthropic also noted in the preview's system card that the model solved a corporate network attack simulation that would have taken a human expert more than 10 hours.

In perhaps what's one of the most eyebrow-raising findings, Mythos Preview managed to follow instructions from a researcher running an evaluation to escape a secured "sandbox" computer it was provided with, indicating a "potentially dangerous capability" to bypass its own safeguards.

The model did not stop there. It further went on to perform a series of additional actions, including devising a multi-step exploit to gain broad internet access from the sandbox system and send an email message to the researcher, who was eating a sandwich in a park.

"In addition, in a concerning and unasked-for effort to demonstrate its success, it posted details about its exploit to multiple hard-to-find, but technically public-facing, websites," Anthropic said.

The company pointed out that Project Glasswing is an "urgent attempt" to employ frontier model capabilities for defensive purposes before those same capabilities are adopted by hostile actors. It's also committing up to $100 million in Mythos Preview usage credits, as well as $4 million in direct donations to open-source security organizations.

"We did not explicitly train Mythos Preview to have these capabilities," Anthropic said. "Rather, they emerged as a downstream consequence of general improvements in code, reasoning, and autonomy. The same improvements that make the model substantially more effective at patching vulnerabilities also make it substantially more effective at exploiting them."

News of Mythos leaked last month after details about the model were inadvertently stored in a publicly accessible data cache due to human error. The draft material described it as the most powerful and capable AI model built to date. Days later, Anthropic suffered a second security lapse that accidentally exposed nearly 2,000 source code files and over half a million lines of code associated with Claude Code for about three hours. 

The leak also led to the discovery of a security issue that bypasses certain safeguards when the AI coding agent is presented with a command composed of more than 50 subcommands. The issue has since been formally addressed by Anthropic in Claude Code version 2.1.90, released last week.

"Claude Code, Anthropic's flagship AI coding agent that executes shell commands on developers' machines, silently ignores user-configured security deny rules when a command contains more than 50 subcommands," AI security company Adversa said. "A developer who configures 'never run rm' will see rm blocked when run alone, but the same 'rm' runs without restriction if preceded by 50 harmless statements. The security policy silently vanishes."

"Security analysis costs tokens. Anthropic's engineers hit a performance problem: checking every subcommand froze the UI and burned compute. Their fix: stop checking after 50. They traded security for speed. They traded safety for cost."
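The failure mode Adversa describes can be illustrated with a minimal sketch. Everything below is a hypothetical reconstruction for illustration, not Anthropic's actual code: the function name, the `&&` splitting, and the `MAX_CHECKED` cap are all assumptions based on the behavior reported above.

```python
# Hypothetical reconstruction of a deny-rule check that stops after a
# fixed number of subcommands, as described in the report above.
MAX_CHECKED = 50  # assumed cap; any subcommand past it is never inspected

def command_allowed(command: str, deny_rules: list[str]) -> bool:
    """Return False if any *checked* subcommand matches a deny rule."""
    subcommands = [c.strip() for c in command.split("&&")]
    # Bug: only the first MAX_CHECKED subcommands are evaluated, so a
    # denied command placed after 50 harmless ones passes unchecked.
    for sub in subcommands[:MAX_CHECKED]:
        if any(sub.startswith(rule) for rule in deny_rules):
            return False
    return True

deny = ["rm"]
blocked = command_allowed("rm -rf /tmp/x", deny)      # denied when run alone
padding = " && ".join(["true"] * 50)                  # 50 harmless subcommands
bypassed = command_allowed(padding + " && rm -rf /tmp/x", deny)
print(blocked, bypassed)
```

Run alone, `rm` is the first subcommand and is blocked; preceded by 50 `true` statements it becomes the 51st subcommand, falls outside the checked window, and is allowed, which matches the "security policy silently vanishes" behavior quoted above.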



Article source: https://thehackernews.com/2026/04/anthropics-claude-mythos-finds.html