On Feb. 20, the cybersecurity market experienced a structural tremor. Anthropic released Claude Code Security, pointing its Claude Opus 4.6 model and a million-token context window at the industry’s most “vetted” codebases.
This marks a break from standard pattern-matching against known signatures: the model applies semantic reasoning to trace data flows and map component interactions across hundreds of thousands of lines of code. In its first demonstration, Opus 4.6 surfaced more than 500 previously unknown, high-severity vulnerabilities in production open-source libraries that had survived decades of expert human review.
The financial fallout was immediate. Within 48 hours, more than $15 billion in cybersecurity market capitalization evaporated. CrowdStrike dropped nearly 11%. Zscaler fell about 9%. JFrog lost close to a quarter of its value in a single session. Forrester’s Jeff Pollard called it a “SaaS-Pocalypse.” Barclays called the selloff “incongruent.” Wedbush said it had “caused chaos on the cybersecurity sector.”
The market’s instinct was right, but its diagnosis was wrong. Investors sensed that something structural had shifted between AI and cybersecurity. The fear was straightforward: if an AI model can find vulnerabilities better than the tools sold by CrowdStrike, Palo Alto, and Zscaler, then a large share of cybersecurity revenue is at risk of displacement. AI eats the scanner. Margins compress.
But Claude Code Security solves a well-understood problem. Static application security testing (SAST) has been a product category for over fifteen years. The innovation lies in how well the scanning is done, not in what is being scanned. Anthropic built a better microscope.
The earthquake is that the thing under the microscope (discrete codebases, individual applications, static artifacts) is no longer where the risk lives.
The average large organization now runs north of 400 SaaS applications. Those applications connect to each other through API integrations, OAuth trust chains, webhooks, workflow automations, and autonomous agents that authenticate, move data, and trigger actions across ten or fifteen services to complete a single task.
A sales operations workflow might start with an AI agent pulling prospect data from Salesforce, enriching it via a third-party data provider, routing it through a scoring model on a separate platform, pushing results into HubSpot, triggering a Slack notification, and logging the activity in ServiceNow. No human touches it. The agent authenticates to each service via OAuth tokens or API keys. It runs at 3 AM. If any one of those tokens is overprivileged, compromised, or manipulated, the blast radius extends across the entire chain.
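The shape of that chain can be sketched in a few lines. This is purely illustrative: the service names, token store, and helper functions are hypothetical, and the actual API calls are omitted. The point is structural: the agent holds one long-lived credential per hop, so compromising any single credential exposes every hop downstream of it.

```python
# Hypothetical sketch of an autonomous agent chaining OAuth/API-key
# authenticated calls across services. Each hop trusts the token the
# agent presents; a leak of any one credential exposes the rest of
# the chain.

TOKENS = {  # one long-lived credential per service (often never rotated)
    "salesforce": "sf-oauth-token",
    "enricher": "enrich-api-key",
    "hubspot": "hs-oauth-token",
    "slack": "slack-bot-token",
}

CHAIN = ["salesforce", "enricher", "hubspot", "slack"]

def run_pipeline(prospect_id: str) -> list:
    """Executes the chain unattended (e.g. at 3 AM); returns hops touched."""
    touched = []
    for service in CHAIN:
        token = TOKENS[service]  # the agent authenticates itself, no human
        # ... call the service API with `token` (omitted) ...
        touched.append(service)
    return touched

def downstream_exposure(compromised: str) -> list:
    """If one credential leaks, every later hop is reachable through it."""
    return CHAIN[CHAIN.index(compromised):]
```

Running `downstream_exposure("enricher")` returns `["enricher", "hubspot", "slack"]`: a leak at the second hop still exposes everything after it, which is the blast-radius property the paragraph above describes.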
A code scanner cannot see this. An endpoint agent cannot see this. A web application firewall cannot see this. The risk is not a bug in the code. It is in the composition: the emergent behavior that arises when dozens of services, identities, and autonomous actors interact in ways no single component was designed to anticipate.
The cybersecurity industry spent two decades perfecting the defense of things (endpoints, applications, containers, code). The attack surface migrated to the spaces between things: the connections, the data flows, the trust relationships, the behavioral patterns of machine-to-machine interactions that no single-application security tool was built to observe.
Third-party breaches increased 68% year-over-year through 2024 (Verizon DBIR). The most consequential SaaS security incidents of 2025 (ShinyHunters, Salesloft/Drift, Gainsight) all shared a common mechanism: OAuth trust exploitation. Not a code vulnerability. Not an isolated misconfiguration. An abuse of the trusted connections between applications.
In the ShinyHunters campaign, attackers used vishing to trick employees into approving what appeared to be a legitimate Salesforce Data Loader application. The rogue app reused a real client_id, bypassing domain allow lists. Once approved, it delivered a persistent OAuth refresh token that never expired. Over 100 enterprises lost customer records, employee PII, financial data, and embedded cloud credentials. Attackers used those credentials to pivot into AWS and Snowflake environments.
No code was exploited. The code was fine. The trust fabric was compromised, and no tool in the stack monitored how those connections behaved over time.
The Salesloft/Drift compromise was worse. Attackers found unexpired Drift OAuth tokens in a public GitHub repository. Those tokens carried broad scopes: full access to Salesforce, Google Workspace, and Zscaler. API calls originated from Drift infrastructure, so the logs looked normal. More than 700 organizations were probed or accessed in ten days. No single-platform security tool flagged it, because on each platform individually, the activity appeared legitimate. Only cross-application correlation could have surfaced the anomaly.
When Gainsight was compromised in November 2025, attackers moved upstream. They breached Gainsight’s own tenant, registered rogue connected apps using Gainsight’s legitimate trust relationship, and accessed 285 customer organizations. They cloaked their traffic behind common cloud IPs and benign user agents. Platform logs looked clean because the activity was “authorized.”
Three campaigns. Three different initial vectors. One shared lesson: the attack surface is the ecosystem, not the application.
Compound these risks with autonomous AI agents.
Gartner projects that by 2028, 80% of organizations will see AI agents consume the majority of their API calls, surpassing human developers. Through 2029, over 50% of successful cybersecurity attacks against AI agents are expected to exploit access control failures via prompt injection. Seventy-one percent of organizations have already granted AI tools access to core business systems.
These agents are qualitatively different from human users. They authenticate via non-human identities: service accounts, API keys, OAuth tokens. They chain operations across multiple services in seconds. They make decisions autonomously. A compromised agent does not slowly escalate privileges. It executes its objectives across the full scope of its permissions at machine speed, before most detection systems register the first alert.
OWASP catalogued this reality in its Top 10 for Agentic Applications: Agent Goal Hijack, Tool Misuse and Exploitation, Identity and Privilege Abuse, Agentic Supply Chain Compromise, Memory and Context Manipulation, Agent-to-Agent Communication Risks. These are ecosystem-level failures, not code-level bugs. They arise from interactions between agents, services, data, and trust relationships.
An adversary does not need a zero-day in your copilot’s source code. They need to compromise one tool in the agent’s supply chain: one poisoned MCP server, one overprivileged OAuth integration, one manipulated third-party data source. The agent will faithfully execute its task using the compromised component, moving sensitive data along a path that looks authorized to every individual service it touches. The attack is in the composition.
The dominant security architectures of the last decade were built for human users interacting with applications through browsers and managed devices. Inline inspection. Proxy-based interception. Endpoint agents. Browser extensions. The shared assumption: if you could see the traffic at the perimeter or on the device, you could see the risk.
That assumption is broken.
Agentic workflows are server-side, headless, and high-frequency. An inline proxy that adds 200 milliseconds of latency per hop will break an agent that chains a dozen API calls to complete one task. A browser extension is irrelevant when the “user” is a cloud function making API calls. An endpoint agent is irrelevant when the action happens between two SaaS platforms at the API layer, with no endpoint involved.
Platform-specific security tools can see only their own perimeter. Salesforce Security Mesh observes Salesforce-centric data flows. It does not see the full attack path when compromised data moves from Salesforce to Slack to Google Drive to an attacker-controlled endpoint. Each platform sees a local event. The attack is global.
I’d call this the “security parallax” problem. Every tool in the stack reports that things look normal from its vantage point. The system-level behavior, the composite pattern across all platforms and identities and data flows, is deeply abnormal. The tools are succeeding at the wrong job.
Code scanning matters. Endpoint protection matters. Neither is sufficient. The attack surface has expanded into a domain that no existing product category covers.
Security leaders should be demanding ecosystem-layer supervision: something that observes the full execution graph of every agent, integration, data flow, identity, and trust relationship across the SaaS and AI ecosystem, in real time. That requires four capabilities the industry largely lacks.
Behavioral baselines at the ecosystem level. Not “is this API call legitimate?” but “is the pattern of API calls across ten services consistent with the intended purpose of this integration?” Detecting that a Gainsight token is suddenly performing hourly bulk exports requires understanding what Gainsight’s normal behavior looks like across hundreds of customer environments.
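A minimal sketch of what such a baseline check could look like, assuming the telemetry (hourly bulk-export counts per integration) is already collected; the threshold and statistics are illustrative, not a production detection design:

```python
# Illustrative ecosystem-level baseline check: flag an integration whose
# hourly bulk-export volume deviates sharply from the behavior observed
# for that integration across many environments. The z-score threshold
# is an assumption, not a recommendation.

from statistics import mean, stdev

def is_anomalous(baseline_exports_per_hour: list,
                 observed: float, z_threshold: float = 3.0) -> bool:
    """Z-score of the observed rate against the fleet-wide baseline."""
    mu = mean(baseline_exports_per_hour)
    sigma = stdev(baseline_exports_per_hour)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

# A token that normally bulk-exports a couple of records per hour,
# observed suddenly exporting hundreds:
baseline = [1.0, 2.0, 3.0, 2.0, 1.5, 2.5]
```

With that baseline, `is_anomalous(baseline, 500.0)` is true while `is_anomalous(baseline, 2.0)` is false: the question is never whether a single export call was authorized, but whether the pattern matches the integration’s normal behavior.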
Identity resolution across the human and non-human boundary. AI agents, service accounts, OAuth tokens, and human users need to be tracked as first-class identities with behavioral profiles. An enterprise with 3,000 employees might have 6,000 or more non-human identities operating across its SaaS ecosystem. Most organizations cannot inventory them, let alone monitor their behavior.
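A toy version of that inventory, assuming the identity data can be pulled from platform APIs (the schema below is invented for illustration). The essential move is treating tokens, keys, and service accounts as first-class identities alongside humans, so each can carry a behavioral profile:

```python
# Hypothetical identity inventory: non-human identities (service
# accounts, OAuth tokens, API keys) modeled as first-class records,
# not as attributes of some application.

from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    kind: str                    # "human" | "service_account" | "oauth_token" | "api_key"
    scopes: list = field(default_factory=list)
    last_seen: str = ""          # ISO timestamp, if known

def nonhuman_ratio(identities: list) -> float:
    """Fraction of the identity population that is non-human."""
    nonhuman = [i for i in identities if i.kind != "human"]
    return len(nonhuman) / len(identities)

inventory = [
    Identity("alice@corp.example", "human"),
    Identity("ci-deploy-bot", "service_account", scopes=["repo:write"]),
    Identity("drift-sf-connector", "oauth_token", scopes=["api", "refresh_token"]),
]
```

Here `nonhuman_ratio(inventory)` is two-thirds, and in the real enterprise the article describes, with 6,000+ non-human identities against 3,000 employees, the ratio runs the same direction. Most organizations cannot even produce the list above, which is the gap.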
Parallel observation, not inline interception. The architecture has to work at machine speed without adding latency to agent workflows. That means observing telemetry in parallel (via APIs, event streams, log aggregation) rather than sitting inline and inspecting traffic hop by hop. Less firewall, more nervous system: distributed sensors that detect anomalies in the relationships between components and trigger targeted containment when risk is confirmed.
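The Salesloft/Drift case suggests what that out-of-band correlation might look like in miniature. Here is a sketch, with an invented event shape, that consumes audit events from several platforms and asks a question no single platform can answer: which identity is active across unusually many services at once?

```python
# Sketch of parallel (out-of-band) observation: consume platform audit
# logs as event streams and correlate by identity across services,
# instead of intercepting traffic inline. The event schema and the
# min_platforms threshold are assumptions for illustration; a real
# system would also bucket events by time window.

from collections import defaultdict

def platforms_by_identity(events: list) -> dict:
    """Map each identity to the set of platforms it touched."""
    touched = defaultdict(set)
    for ev in events:
        touched[ev["identity"]].add(ev["platform"])
    return dict(touched)

def cross_platform_identities(events: list, min_platforms: int = 3) -> list:
    """Identities active across many platforms at once -- invisible to any
    single-platform tool, visible only to cross-platform correlation."""
    return [identity for identity, platforms in platforms_by_identity(events).items()
            if len(platforms) >= min_platforms]

events = [
    {"identity": "drift-token", "platform": "salesforce"},
    {"identity": "drift-token", "platform": "google_workspace"},
    {"identity": "drift-token", "platform": "zscaler"},
    {"identity": "alice", "platform": "salesforce"},
]
```

On each platform individually, the `drift-token` activity looks like one legitimate session; only the merged view flags it. Because the observer reads telemetry in parallel, it adds zero latency to the agent workflows it watches.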
Blast radius mapping in real time. When a compromise is detected, the first question is not “which application was affected?” It is “which data, which identities, and which downstream systems are in the blast radius?” Only a system that already maps the full topology of the ecosystem can answer that.
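If the trust topology is already mapped, answering the blast-radius question is a graph traversal. A minimal sketch, with an invented topology echoing the Gainsight scenario:

```python
# Minimal blast-radius sketch: given a pre-built graph of trust
# relationships (tokens granting access to platforms, platforms feeding
# downstream systems), find everything reachable from the point of
# compromise. The topology below is invented for illustration.

from collections import deque

def blast_radius(trust_graph: dict, compromised: str) -> set:
    """Breadth-first search over the trust topology from the breach point."""
    seen = {compromised}
    queue = deque([compromised])
    while queue:
        node = queue.popleft()
        for downstream in trust_graph.get(node, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen - {compromised}  # everything downstream of the breach

trust_graph = {
    "gainsight-token": ["salesforce"],
    "salesforce": ["slack", "gdrive"],
    "gdrive": [],
}
```

`blast_radius(trust_graph, "gainsight-token")` returns `{"salesforce", "slack", "gdrive"}` in one hop of reasoning. The traversal itself is trivial; the hard, unsolved part is maintaining an accurate `trust_graph` for a live ecosystem of 400+ applications, which is exactly why the topology must be mapped before the incident, not during it.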
We are in a transition that will look, in retrospect, like the 18 months before cloud security became its own discipline. The industry tried to retrofit on-premises tools for cloud workloads. It didn’t work. Purpose-built cloud-native security emerged, and early adopters gained a defensible advantage.
The same dynamic is playing out at higher speed. AI agents are rewriting the backend of enterprise computing. The integration layer (APIs, OAuth apps, MCP servers, webhooks, agent frameworks) is where attackers are already operating.
The Claude Code Security announcement did not cause a crisis. It revealed one.
The billions wiped from cybersecurity stocks were not lost because Anthropic built a better SAST tool. They were lost because investors briefly saw the future clearly: autonomous agents composing, executing, and interacting across hundreds of services at machine speed. And they could not find anyone in the current cybersecurity landscape built to secure it.
The next set of headlines will not be about stock prices. They will be about the breach that no tool in the stack could see, because every tool was watching its own perimeter while the attack moved through the spaces between them.