What Anthropic’s Mythos Means for the Future of Cybersecurity

The post What Anthropic’s Mythos Means for the Future of Cybersecurity appeared first on Schneier on Security.

Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a limited number of companies.

The news rocked the internet security community. There were few details in Anthropic’s announcement, angering many observers. Some speculate that Anthropic doesn’t have the GPUs to run the thing, and that cybersecurity was the excuse to limit its release. Others argue Anthropic is holding to its AI safety mission. There’s hype and counterhype, reality and marketing. It’s a lot to sort out, even if you’re an expert.

We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.

How AI Is Changing Cybersecurity

We’ve written about shifting baseline syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.

The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a while this kind of capability was coming soon. The question is how we adapt to it.

We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more nuanced than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.

Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.

So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also gives us guidance on how to protect such systems in an era of powerful AI vulnerability-finding tools.
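The two axes above can be sketched as a simple decision rule. This is an illustrative Python sketch, not anything from the essay; the system names and protection categories are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Protection(Enum):
    PATCH_CONTINUOUSLY = "patch continuously"
    ISOLATE_AND_FIREWALL = "isolate behind a restrictive layer"
    STAGED_VERIFICATION = "verify in a staging replica before patching"

@dataclass
class System:
    name: str
    patchable: bool       # can updates be deployed quickly?
    easy_to_verify: bool  # can an exploit be reproduced reliably?

def recommend(s: System) -> Protection:
    # Easy to verify and patch (browsers, cloud web apps): keep patching.
    if s.patchable and s.easy_to_verify:
        return Protection.PATCH_CONTINUOUSLY
    # Unpatchable (IoT appliances, industrial gear): wrap it instead.
    if not s.patchable:
        return Protection.ISOLATE_AND_FIREWALL
    # Patchable but hard to verify (large distributed systems).
    return Protection.STAGED_VERIFICATION
```

The point of writing it out is that the protection strategy follows mechanically from where a system sits on the two axes, not from what the system is.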

Unpatchable or hard-to-verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly updated firewall, not freely talking to the internet.
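A "restrictive, tightly controlled layer" boils down to default-deny: nothing passes unless it matches an explicit allowlist. A minimal sketch of such a policy check (all endpoint names are hypothetical):

```python
# Default-deny gateway policy: traffic is permitted only if it matches
# an explicit allowlist entry; everything else is dropped.
ALLOWLIST = {
    # (source, destination, port): the thermostat may reach only its
    # vendor's update server, over TLS -- never the open internet.
    ("thermostat-lan", "vendor-update-server", 443),
}

def permit(src: str, dst: str, port: int) -> bool:
    return (src, dst, port) in ALLOWLIST
```

Default-deny is what makes the layer robust to AI-found vulnerabilities: an exploit against the device is useless if the attacker can never reach it.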

Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.
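Least privilege plus traceability can be reduced to a small pattern: a per-component permission map, with every access decision checked and logged. A hypothetical sketch, with component and permission names invented for illustration:

```python
# Principle of least privilege: each component holds only the
# permissions it needs, and every access check is recorded.
PRIVILEGES = {
    "billing-service":  {"read:invoices", "write:invoices"},
    "report-generator": {"read:invoices"},  # read-only; cannot modify
}

audit_log: list[tuple[str, str]] = []

def check_access(component: str, permission: str) -> bool:
    audit_log.append((component, permission))  # traceability
    return permission in PRIVILEGES.get(component, set())
```

Unknown components get the empty set, so they can do nothing by default, and the audit log makes every attempted access, granted or not, reconstructable after the fact.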

Rethinking Software Security Practices

This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive AI agents to test exploits against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed. This kind of VulnOps is likely to become a standard part of the development process.
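The core of such a VulnOps loop might look roughly like this: an AI finder proposes candidates, and each one is replayed against a real staging stack until it either reproduces or is discarded as a false positive. A sketch under assumed interfaces (the function names and the `reproduce` callback are hypothetical):

```python
from typing import Callable, Iterable

def vulnops_triage(
    candidates: Iterable[str],
    reproduce: Callable[[str], bool],  # replay one exploit against the staging stack
    attempts: int = 3,                 # retry: exploit reproduction is often flaky
) -> tuple[list[str], list[str]]:
    """Split AI-reported candidates into confirmed vulns and false positives."""
    confirmed, false_positives = [], []
    for cand in candidates:
        # A candidate counts as real only if it reproduces at least once.
        if any(reproduce(cand) for _ in range(attempts)):
            confirmed.append(cand)
        else:
            false_positives.append(cand)
    return confirmed, false_positives
```

In practice `reproduce` would stand up an isolated copy of the stack and run the generated exploit against it; the retry loop exists because a real vulnerability that reproduces intermittently must not be misfiled as a false positive.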

Documentation becomes more valuable, as it can guide an AI agent on a bug-finding mission just as it does developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral instant software—code that can be generated and deployed on demand.

Will this favor offense or defense? The defense eventually, probably, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. Legacy banking and airline systems are networked.

Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal: where verification is paramount and software is patched continuously.

This essay was written with Barath Raghavan, and originally appeared in IEEE Spectrum.

*** This is a Security Bloggers Network syndicated blog from Schneier on Security authored by Bruce Schneier. Read the original post at: https://www.schneier.com/blog/archives/2026/04/what-anthropics-mythos-means-for-the-future-of-cybersecurity.html

