The Vulnerability Management Race Is Over. It’s Time to Focus on Exposure.
April 17, 2026 | securityboulevard.com

With Anthropic’s Mythos Preview announcement, the race to patch all vulnerabilities is over. As defenders, we must move on.

We must focus on what adversaries can do after they exploit a vulnerability: which attack paths those exploits enable, where those paths lead, and how to eliminate them before they reach what matters. That is a tractable problem. It is also a fundamentally different one than vulnerability management, and it requires a fundamentally different program.

What Is Mythos?

On April 7, 2026, Anthropic announced Claude Mythos Preview, a frontier AI model that autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser. It generated working exploits without human guidance. Against Mozilla Firefox vulnerabilities alone, it developed 181 working exploits. The oldest vulnerability it found had been sitting undetected for 27 years.

Mythos outperformed every prior model on real-world exploitation benchmarks, scoring 83.1% on CyberGym’s vulnerability reproduction test. It chains multiple vulnerabilities together and identifies novel attack paths, without a human in the loop. That is a qualitative shift in offensive capability, not an incremental one.

Mythos Didn’t Start This. It Ended the Debate.

But this is a future that has been approaching for years. The data has been telling the story all along.

Researchers at the Zero Day Clock have tracked mean time-to-exploit across more than 3,500 confirmed CVE-exploit pairs drawn from CVE, CISA’s Known Exploited Vulnerabilities catalog, and other trusted sources. In 2021, that mean was over a year. By 2025, it had fallen to one month. By early 2026, it had crossed one week. Then one day.
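
The metric itself is straightforward to compute. A minimal sketch, with hypothetical disclosure and exploitation dates standing in for real CVE/KEV data:

```python
from datetime import date

# Hypothetical CVE disclosure and first-observed-exploitation dates
# (illustrative only; real analyses draw on CVE and CISA KEV records).
pairs = [
    ("CVE-A", date(2025, 1, 10), date(2025, 2, 9)),   # 30 days
    ("CVE-B", date(2025, 3, 1),  date(2025, 3, 8)),   #  7 days
    ("CVE-C", date(2025, 6, 5),  date(2025, 6, 6)),   #  1 day
]

def mean_time_to_exploit(pairs):
    """Mean days between public disclosure and first observed exploitation."""
    deltas = [(exploited - disclosed).days for _, disclosed, exploited in pairs]
    return sum(deltas) / len(deltas)

print(mean_time_to_exploit(pairs))  # ≈ 12.67
```

Tracking this number over rolling windows is what surfaces the trend the Zero Day Clock researchers describe: the mean collapsing from a year toward a day.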

The CrowdStrike 2026 Global Threat Report documented a 42% year-over-year increase in zero-day vulnerabilities exploited prior to public disclosure — continuing a multi-year trend of rising zero-day abuse. The vulnerability management ecosystem has been approaching a breaking point for years: more CVEs than teams can process, patch cycles that lag the threat, and a tracking infrastructure that was built for a world of dozens of critical disclosures per month, not hundreds.

What Mythos does is close the argument. For years, the response to accelerating exploit timelines was “we need to patch faster” or “we need better vulnerability management.” Those responses were always inadequate. Now they’re obviously inadequate. In April 2026, NIST announced updates to NVD operations specifically to address record CVE growth — another signal that the ecosystem itself is straining. Tracking vulnerabilities more diligently is not a path to resilience.

Mythos makes that even clearer. It can unpack patches, reverse engineer them, and develop working exploits from the patch content itself. Adversaries can now weaponize our patches faster than we can deploy them. The patch process, the last line of the old model’s defense, has become part of the attack surface.

The future that practitioners have been watching approach for years is here. The question isn’t whether it arrived. The question is what you’re going to do about it.

What this means for your program: Stop asking how to close the backlog faster. Start asking which vulnerabilities in that backlog create a real path to the assets your organization cannot afford to lose. That shift, from managing a list to managing actual risk, is the foundation everything else in this blog builds on.

CVSS Scores Don’t Tell You What Attackers Do Next

Here’s what most vulnerability programs miss: the score on a CVE tells you how bad the flaw is. It doesn’t tell you what an attacker can do once the vulnerability has been exploited.

A critical-severity CVE in an isolated segment, with no viable path to sensitive systems, is a fundamentally different risk than a medium-severity vulnerability that sits one lateral move from your most important data. CVSS doesn’t tell you the difference. Attack path analysis does.

Threat-informed attack path analysis is grounded in how adversaries actually behave. Understanding how adversaries use MITRE ATT&CK techniques post-exploit to build a path to crown jewels tells you which vulnerabilities actually matter, where the choke points are, and which controls you need to validate. The question isn’t “what’s the CVSS score?” It’s “what’s the attack path this vulnerability enables, and do our controls actually stop it?”

What this means for your program: Reorder your patch backlog around threat-informed, validated attack paths, not severity scores. A CVE that sits on a validated path to your crown jewels moves to the front of the queue, regardless of its score. A critical CVE with no viable onward path can wait. You can’t make that call without attack path analysis. Severity scoring alone will consistently send you to the wrong place.
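
That reordering can be expressed directly in code. The sketch below is illustrative only (the hostnames, CVE IDs, scores, and lateral-movement graph are all hypothetical): model which hosts can reach the crown jewels, then sort the backlog by path, with severity only breaking ties.

```python
from collections import deque

# Hypothetical lateral-movement graph: an edge A -> B means an attacker
# with a foothold on A can move to B.
lateral_moves = {
    "dmz-web":    ["app-server"],
    "app-server": ["db-server"],
    "db-server":  [],
    "iso-lab":    [],            # isolated segment, no onward path
}
crown_jewels = {"db-server"}

# Backlog entries: (CVE id, CVSS score, host where the flaw lives).
backlog = [
    ("CVE-X", 9.8, "iso-lab"),     # critical, but leads nowhere
    ("CVE-Y", 5.4, "app-server"),  # medium, one hop from the data
]

def reaches_crown_jewels(host):
    """BFS over lateral-move edges: can a foothold here reach a crown jewel?"""
    seen, queue = {host}, deque([host])
    while queue:
        node = queue.popleft()
        if node in crown_jewels:
            return True
        for nxt in lateral_moves.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Path-informed ordering: on-path CVEs first, CVSS only as a tiebreaker.
ordered = sorted(backlog, key=lambda c: (not reaches_crown_jewels(c[2]), -c[1]))
print([c[0] for c in ordered])  # ['CVE-Y', 'CVE-X']
```

Note the outcome: the medium-severity CVE on a live path outranks the critical CVE in the isolated segment, exactly the inversion that severity scoring alone will never produce.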

Focus on Breaking Attack Paths, Not Patching.

Most security programs are built around remediating vulnerabilities. That’s the wrong frame when the backlog is growing faster than it can be closed.

You don’t always need a patch to eliminate risk. Network segmentation, access controls, and compensating controls can all sever a path to a crown jewel — even when the underlying vulnerability remains open. If you know the choke points within your attack paths, you have an opportunity beyond patching — break the path itself.

The Cloud Security Alliance analysis specifically calls out segmentation as a critical defensive layer. The challenge most programs face is knowing where to segment, and then validating that the controls they put in place are actually working. Deploying a control and testing a control are different things. Most programs do the first. Fewer do the second systematically.

This is the posture shift that matters. You’re not waiting for every vulnerability to be fixed. You’re identifying which ones create viable paths to what matters, and breaking those paths through a combination of patching where you can and compensating controls where you can’t. Both approaches count. Neither works without knowing the path.

What this means for your program: Map your crown jewels and the threat-informed attack paths that lead to them. Then ask two questions: which vulnerabilities sit on those paths, and which controls are supposed to break them? If you can’t answer the second question with evidence, not assumption, you have unvalidated gaps in your program. Find them before an attacker does.
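
Finding choke points is, at its core, a graph question: which lateral-movement edges, if severed, cut every path from an entry point to a crown jewel? A minimal sketch, under hypothetical hostnames and edges:

```python
from collections import deque

# Hypothetical lateral-move graph (directed: attacker on A can reach B).
edges = [
    ("dmz-web", "app-a"), ("dmz-web", "app-b"),
    ("app-a", "jump-host"), ("app-b", "jump-host"),
    ("jump-host", "db-server"),
]
entry_points, crown_jewels = {"dmz-web"}, {"db-server"}

def reachable(edge_list, sources, targets):
    """BFS: can any source reach any target over the given edges?"""
    adj = {}
    for a, b in edge_list:
        adj.setdefault(a, []).append(b)
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        if node in targets:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A choke point is an edge whose removal severs every path to the crown
# jewels: the highest-leverage place for segmentation or a compensating
# control, regardless of which CVEs remain open upstream.
choke_points = [
    e for e in edges
    if not reachable([x for x in edges if x != e], entry_points, crown_jewels)
]
print(choke_points)  # [('jump-host', 'db-server')]
```

In this toy graph, neither web-tier edge is a choke point (each has a redundant twin), but the single hop from the jump host to the database is. That is where a segmentation control pays for itself.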

Your AI Assets Are Part of Your Attack Surface Now

The Cloud Security Alliance analysis rates unmanaged AI agent exposure as Critical severity. Most security programs have no way to discover or validate controls against agents, MCP servers, and models. That gap is not theoretical — it means the asset class the paper identifies as most urgently unaddressed is also the one least likely to be in scope for any security validation program today. Your AI footprint is part of your overall exposure. It needs to be treated that way.

Most organizations don’t have a clear picture of what AI capabilities are running in their enterprise. Agents, models, MCP servers, and third-party integrations are being deployed fast — often by teams outside of security, often without a formal inventory. Organizations that wouldn’t dream of leaving a critical server unvalidated are doing exactly that with AI systems that have access to sensitive data, internal controls, and privileged workflows.

What this means for your program: Start by understanding what AI capabilities are running in your environment — who deployed them, what they’re connected to, and what data and systems they can access. Then bring them into scope for the same threat-informed validation you apply to the rest of your environment. A compromised AI agent with access to sensitive workflows isn’t just a data risk — it’s a potential pivot point to your crown jewels. Exposure management must include your AI exposure.
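
A first-pass inventory check can be very simple. The sketch below assumes a hypothetical asset list (names, owners, and fields are all illustrative) and flags anything with no owner or no validation in scope:

```python
# Hypothetical AI asset inventory; in practice this would be populated
# from discovery tooling, not hand-written.
ai_assets = [
    {"name": "support-agent", "owner": "cx-team", "access": ["crm"],        "validated": True},
    {"name": "mcp-files",     "owner": None,      "access": ["file-share"], "validated": False},
    {"name": "code-assist",   "owner": "eng",     "access": ["repos"],      "validated": False},
]

def exposure_gaps(assets):
    """Flag AI assets with no accountable owner or no validation in scope."""
    return [a["name"] for a in assets if a["owner"] is None or not a["validated"]]

print(exposure_gaps(ai_assets))  # ['mcp-files', 'code-assist']
```

Even a crude check like this makes the gap concrete: two of three AI assets here touch sensitive systems and sit outside any validation program.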

Detection Has to Keep Up. Right Now, It Doesn’t.

The Cloud Security Alliance paper identifies detection latency as a Critical severity gap. Alert triage, SIEM correlation, and incident response playbooks were all designed for threats that moved at human speed. AI-augmented attackers don’t.

The numbers make the problem concrete. According to the CrowdStrike 2026 Global Threat Report, the average eCrime breakout time fell to 29 minutes in 2025. The fastest recorded breakout took 27 seconds. In one intrusion, data exfiltration began within four minutes of initial access. Detection workflows built around human-paced investigation cycles don’t operate at that speed. By the time an alert is triaged, the attacker has already moved.

If your detection logic isn’t continuously validated against the techniques your most relevant threat actors actually use, you don’t know whether it works. You have an assumption. Detection engineering that continuously tests coverage and automatically tunes when coverage drifts is what closes that gap — and it’s the prerequisite for any response capability that operates at machine speed.

What this means for your program: Test your detection logic against real attack techniques on a continuous basis, not a quarterly one. Know whether your coverage holds against the specific TTPs most likely to be used against your organization. When it doesn’t, fix it immediately, not at the next review cycle. This is the closed loop that makes everything else in a threat-informed program operational rather than theoretical.
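
The core of that loop can be sketched in a few lines. Everything here is illustrative (the rule names and the priority technique list are hypothetical), but the shape is the point: compare the TTPs most relevant to your organization against actual detection coverage, and surface the gaps continuously rather than quarterly.

```python
# Hypothetical detection rules keyed by MITRE ATT&CK technique ID.
detection_rules = {
    "T1059": "cmdline-logging",     # Command and Scripting Interpreter
    "T1021": "lateral-rdp-alert",   # Remote Services
}

# The techniques most likely to be used against this organization,
# drawn (hypothetically) from threat intelligence on relevant actors.
priority_ttps = ["T1059", "T1021", "T1567"]  # T1567: Exfiltration Over Web Service

def coverage_report(rules, ttps):
    """Re-runnable check: which priority TTPs have no detection at all?"""
    return {
        "covered": [t for t in ttps if t in rules],
        "gaps":    [t for t in ttps if t not in rules],
    }

report = coverage_report(detection_rules, priority_ttps)
print(report["gaps"])  # ['T1567']
```

A real program would go further, validating that each rule actually fires against emulated technique execution rather than merely existing, but the gap list is what turns "we assume we have coverage" into evidence.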

Continuous Defense Is Not a Technology Decision. It’s an Organizational One.

Quarterly assessments and annual pen tests were built for a different era. They don’t fail because they’re bad practices. They fail because they were designed for a threat that moved at human speed, and the threat no longer does.

The organizations that manage this environment successfully will be the ones that build continuously adaptive security operations. New vulnerability disclosed? Reassess attack paths immediately. New AI agent deployed? Update the exposure picture. Threat actor shifts techniques? Adjust detection and validation automatically.

This kind of program requires organizational alignment around a different set of questions. Not “how many vulnerabilities did we close this quarter?” but “which paths to our crown jewels are broken, validated, and monitored right now?” Those are different metrics. They require different conversations between security teams and leadership.

And they require leadership that understands the only path forward runs through automation — not automation layered on top of the old vulnerability management model, but automation grounded in what adversaries actually do. Threat-informed defense is what gives that automation its direction. The technology has to change. So does the organizational context around it.

What this means for your program: If your security reporting still centers on patch rates and vulnerability counts, you’re reporting on activity, not on the posture question your organization actually needs answered. Work with leadership to reframe the metrics around attack path coverage and control validation. That shift is harder than deploying any technology. It matters more.

Mythos-class capabilities won’t stay exclusive. The models that follow will be faster, cheaper, and more accessible. The cost and skill required to discover and exploit vulnerabilities will keep falling. And the volume of what gets found will exceed anything the current vulnerability tracking infrastructure was built to handle.

None of that is a reason to panic. It is a reason to be clear-eyed about what a resilient security program looks like from here.

The programs that hold will be threat-informed and continuously operating. They’ll know their adversaries, test their defenses against real techniques, validate their controls with evidence, and extend that discipline to their AI assets. They’ll be built around the assumption that the backlog never fully closes, optimized to ensure the open items don’t create paths to what matters.

The threat has become a living system. Defense has to become one, too.


Jon is Vice President of Threat‑Informed Defense at AttackIQ, where he leads efforts to help organizations transform security operations through Threat‑Informed Defense and Continuous Threat Exposure Management (CTEM). His work focuses on defining how CTEM is applied in practice and helping security teams adopt disciplined, outcome‑driven approaches that improve security effectiveness at scale.
 
Previously, Jon was the Director and Co‑Founder of MITRE’s Center for Threat‑Informed Defense, where he built a global community that defined and advanced threat‑informed defense through collaborative R&D. The Center played a foundational role in maturing threat‑informed defense from concept to operational discipline.
 
Jon once led MITRE’s Cyber Threat Intelligence and Adversary Emulation Department, overseeing MITRE ATT&CK® and CALDERA, and contributed to the development of foundational cybersecurity standards including STIX, TAXII, and OVAL.


Source: https://securityboulevard.com/2026/04/the-vulnerability-management-race-is-over-its-time-to-focus-on-exposure/