The Attack Runs Itself: What Agentic AI Fraud Actually Looks Like
This is part 1 of a two-part series on agentic AI fraud defense.
One of the first things I did after joining Arkose Labs was go deep on the threat data: what the attack patterns actually look like, how autonomous systems probe defenses, where classification breaks down and where it holds. That data has shaped how I think about every product decision we’re making right now.
The industry has largely caught up to the reality that agentic AI attacks are happening now. The World Economic Forum’s Global Cybersecurity Outlook 2026 reports that 73% of respondents were personally affected by cyber-enabled fraud in 2025, with CEOs now rating fraud as their top concern above ransomware. What has not caught up is the response.
AI-driven fraud no longer looks like a bot farm. It no longer looks like a credential stuffing attack. It looks like a factory, one that runs 24 hours a day, improves with each shift, and requires almost no human labor to operate. And it exists on a spectrum: from AI-assisted attacks where automation handles the heavy lifting while humans make key decisions, to fully agentic campaigns where autonomous systems plan, execute, and adapt without human intervention at the task level.
Both ends of that spectrum are active today. Both are accelerating.
At the AI-assisted end, attackers use AI to generate synthetic identities, craft convincing phishing infrastructure, and scale operations that previously required large human fraud farms. The human is still in the loop, but doing far less work for far greater output.
At the fully agentic end, the structure is consistent across the campaigns we observe: one operator, twenty or more specialized agents, zero human effort per account. The attack chain runs autonomously across five stages.
The human sets the strategy once. The machines execute everything else, with persistent memory across sessions, self-improvement with each attempt, and the kind of patience that no human fraud operator can sustain.
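As a purely conceptual sketch (every name below is hypothetical, and no operational detail is included), that structure looks less like a script and more like a supervisor loop over shared, persistent memory:

```python
# Conceptual sketch only. It illustrates the shape described above:
# one human decision up front, autonomous agents after that, with
# persistent shared memory so each attempt informs the next.
from dataclasses import dataclass, field

@dataclass
class CampaignMemory:
    """Persists across sessions; every attempt reads and updates it."""
    observations: list[str] = field(default_factory=list)
    attempts: int = 0

class SpecializedAgent:
    """Stand-in for one of the twenty-plus task-specific agents."""
    def __init__(self, role: str):
        self.role = role

    def attempt(self, memory: CampaignMemory) -> str:
        # A real agent would act on a target here; this stub only records
        # that an attempt happened and what was observed.
        memory.attempts += 1
        return f"{self.role}: observed defense response #{memory.attempts}"

def run_campaign(agents: list[SpecializedAgent], memory: CampaignMemory,
                 max_attempts: int) -> None:
    # The human set the strategy once, before this loop. From here on,
    # the agents execute, remember, and retry without intervention.
    while memory.attempts < max_attempts:
        for agent in agents:
            memory.observations.append(agent.attempt(memory))

mem = CampaignMemory()
run_campaign([SpecializedAgent(r) for r in ("recon", "account-creation")],
             mem, max_attempts=4)
print(mem.attempts, mem.observations[-1])
```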
This is not a theoretical threat model. The first fully autonomous cyberattack has already been documented: an AI agent used against approximately 30 targets including financial institutions, technology companies, and government agencies, executing 80 to 90% of the operation independently at thousands of requests per second. Human operators made four to six decisions per campaign.
Whether a human is making four decisions per campaign or forty, the attack volume, speed, and adaptability that AI enables have already broken the assumptions most fraud prevention models were built on.
What makes AI-driven fraud particularly significant is not just what it can do today. It is the trajectory.
Attack tooling that once required a state-sponsored team and sophisticated infrastructure now runs on a laptop, from a free download, with no telemetry and no oversight. The barrier to running a capable fraud campaign has dropped to the point where the primary requirement is intent, not technical skill or resources.
Gen 3 attack agents are locally hosted, self-directed, and come with 50 or more modular skills and OS-level access. They are not lab experiments. They are actively deployed, and the number of sophisticated attackers able to use them is growing rapidly as AI capability becomes cheaper and more accessible. As of early 2026, dozens of major Chinese technology companies have launched locally installable AI agent variants of their own, accelerating that trend further.
Here is the detail that matters most for fraud prevention strategy: agentic AI attacks don’t happen at the network layer. They happen at the interaction layer.
Account creation flows. Login and account takeover flows. API endpoints. Payment and checkout flows. And increasingly, MCP-connected surfaces: platforms where AI agents can access functionality directly through tool calls, an attack surface into which most current security stacks have essentially zero visibility.
This is where the digital factory executes. The agent navigates your onboarding flow the same way a legitimate user would, filling fields, uploading documents, completing liveness checks, and responding to friction. It doesn’t look different at the network layer. It looks different in how it behaves at the interaction layer, if you have the mechanism to observe that behavior.
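What might that mechanism capture? Here is a rough sketch, assuming a hypothetical event schema and feature set rather than any real product’s telemetry. Even two timing features begin to separate scripted uniformity from human unevenness:

```python
# Hedged sketch: the event schema and feature names are assumptions.
# Real systems capture far richer signals; the point is that these
# signals exist only at the interaction layer.
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class FieldEvent:
    field_name: str
    focus_ms: int    # time the field held focus
    gap_ms: int      # pause before this field was touched

def behavioral_features(events: list[FieldEvent]) -> dict[str, float]:
    gaps = [e.gap_ms for e in events]
    focus = [e.focus_ms for e in events]
    return {
        # Humans pause unevenly between fields; automation tends to be
        # uniform, or uniformly randomized, which is itself a signature.
        "gap_stdev_ms": pstdev(gaps) if len(gaps) > 1 else 0.0,
        "mean_focus_ms": sum(focus) / len(focus),
    }

session = [FieldEvent("email", 1400, 300),
           FieldEvent("name", 900, 250),
           FieldEvent("password", 2100, 4100)]
print(behavioral_features(session))
```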
Most platforms don’t. And that gap is where the campaigns run.
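The MCP-connected surfaces mentioned above are the newest version of that gap. Tool calls never touch a browser, so visibility has to be added at the tool boundary itself. A minimal sketch of that idea, assuming a generic handler shape (this is not the MCP SDK’s actual API):

```python
# Hypothetical sketch of instrumenting a tool-call surface. It assumes
# a generic "handler takes a dict, returns a dict" tool shape and adds
# per-caller telemetry around it.
import time
from collections import defaultdict

# Per-caller call history: (tool name, timestamp) pairs.
call_log: dict[str, list[tuple[str, float]]] = defaultdict(list)

def instrumented(tool_name: str, handler):
    """Wrap a tool handler so every call leaves an interaction-layer trace."""
    def wrapper(caller_id: str, args: dict) -> dict:
        call_log[caller_id].append((tool_name, time.monotonic()))
        # Inter-call timing, call sequences, and argument patterns per
        # caller are the behavioral signals for this surface.
        return handler(args)
    return wrapper

# Hypothetical tool: in a real deployment this would create an account.
create_account = instrumented("create_account", lambda args: {"ok": True})
create_account("agent-7", {"email": "x@example.com"})
print(call_log["agent-7"])
```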
Understanding why this is categorically different from prior bot threats matters for how you think about defense. There are three specific properties of agentic AI attackers that break traditional security models.
Autonomous iteration. Traditional bot tooling runs a fixed script. Agentic AI attackers run adaptive campaigns; they observe the defense, adjust strategy, and try again. They probe classification boundaries systematically, sending sessions designed to look legitimate and observing what passes. They don’t do this once. They do it thousands of times, autonomously, without fatigue. Any static defense will have its edges found. Flashpoint’s 2026 Global Threat Intelligence Report documents this pattern at scale — adversaries rapidly deploying agentic frameworks capable of orchestrating autonomous attack chains with no direct human control.
Session-to-session learning. Agentic AI systems can share learnings across sessions. What one session discovers about your defense posture informs the next session’s strategy. The pace at which defenses are mapped and exploited accelerates with each campaign run. Human fraud farms don’t have this property. For example, a human operator who fails on Tuesday doesn’t automatically make their colleague on Wednesday more effective. Agentic AI does.
Identity spoofing at the interaction layer. Agentic AI attackers increasingly operate through compromised credentials, service account abuse, and identity spoofing. They don’t just look like bots, they look like authorized users, because they are using credentials your verification system recognizes as legitimate. The identity checks out. The behavior reveals the problem, but only if you can see it.
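Taken together, the three properties describe a session whose identity checks pass while its behavior gives it away. A hedged sketch of what catching that could look like, with hypothetical thresholds, scores, and linkage logic throughout: sessions tied to one stable key, all presenting valid credentials, all scoring just under the block threshold, and improving over time.

```python
# Hedged sketch, hypothetical throughout: the thresholds, risk scores,
# and the key used to link sessions are assumptions, not any particular
# product's logic.
BLOCK_THRESHOLD = 0.80
NEAR_BAND = 0.10  # how close to the threshold counts as a "near miss"

def identity_ok(session: dict) -> bool:
    # Credentials, signatures, declared origin: verifiable, but also
    # spoofable or stealable (property three above).
    return session["credentials_valid"]

def looks_like_probing(scores: list[float]) -> bool:
    """Signature of properties one and two: near-threshold clustering
    plus improvement across attempts from linked sessions."""
    if len(scores) < 5:
        return False
    near = [s for s in scores
            if BLOCK_THRESHOLD - NEAR_BAND <= s < BLOCK_THRESHOLD]
    clustered = len(near) / len(scores) > 0.6
    first, second = scores[: len(scores) // 2], scores[len(scores) // 2:]
    improving = sum(second) / len(second) < sum(first) / len(first)
    return clustered and improving

# Six linked sessions: valid credentials, scores just under the block
# threshold, trending down. Identity passes; behavior flags.
linked = [{"credentials_valid": True, "risk": r}
          for r in (0.79, 0.78, 0.76, 0.74, 0.73, 0.72)]
print(all(identity_ok(s) for s in linked))              # True
print(looks_like_probing([s["risk"] for s in linked]))  # True: flag it
```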
Understanding the digital factory model leads directly to an uncomfortable question: is the way the security industry is positioning its response actually built for this threat?
The dominant answer the market converged on at RSAC 2026 was agent identity. Framework after framework launched to verify who the agent was: its cryptographic signature, its declared origin, its claimed authorization. Post-conference analysis has now widely noted what was missing: none of those frameworks tracked what the agent actually did.
Agent identity is not agent behavior. And against the three threat properties above (autonomous iteration, session-to-session learning, and identity spoofing at the interaction layer), identity verification alone has a specific, observable failure mode.
In Part 2 of this piece, I’ll make the architecture argument directly: why identity-first and classification-first models fall short against agentic AI attackers, what the behavioral alternative looks like, and why the interaction layer is where this gets decided. If you want broader context in the meantime, start with our agentic AI security platform page.
Shimon Modi is SVP of Product at Arkose Labs, where he leads product strategy across the Arkose Titan platform.