The Attack Runs Itself: What Agentic AI Fraud Actually Looks Like
2026-04-21 16:58:46 · Source: securityboulevard.com

The post The Attack Runs Itself: What Agentic AI Fraud Actually Looks Like appeared first on Arkose Labs.

This is part 1 of a two-part series on agentic AI fraud defense.

Since joining Arkose Labs, one of the first things I did was go deep on the threat data — what the attack patterns actually look like, how autonomous systems probe defenses, where classification breaks down and where it holds. That data has shaped how I think about every product decision we’re making right now.

The industry has largely caught up to the reality that agentic AI attacks are happening now. The World Economic Forum’s Global Cybersecurity Outlook 2026 reports that 73% of respondents were personally affected by cyber-enabled fraud in 2025, with CEOs now rating fraud as their top concern above ransomware. What has not caught up is the response.

The AI-Driven Fraud Model

AI-driven fraud no longer looks like a bot farm. It no longer looks like a credential stuffing attack. It looks like a factory, one that runs 24 hours a day, improves with each shift, and requires almost no human labor to operate. And it exists on a spectrum: from AI-assisted attacks where automation handles the heavy lifting while humans make key decisions, to fully agentic campaigns where autonomous systems plan, execute, and adapt without human intervention at the task level.

Both ends of that spectrum are active today. Both are accelerating.

At the AI-assisted end, attackers use AI to generate synthetic identities, craft convincing phishing infrastructure, and scale operations that previously required large human fraud farms. The human is still in the loop, but doing far less work for far greater output.

At the fully agentic end, the structure is consistent across the campaigns we observe: one operator, twenty or more specialized agents, zero human effort per account. The attack chain runs autonomously across five stages:

  1. Synthetic identity generation. AI generates complete fraudulent identities at scale in minutes rather than days, enabling sign-up fraud at volumes no human fraud ring could sustain. This is the entry point: fake accounts created at machine speed become the foundation for every downstream stage of the attack chain. 
  2. Attack workflow configuration. The agent selects the target and configures the campaign parameters without human input.
  3. Autonomous execution and navigation. The agent navigates your flows, fills forms, submits documents, handles MFA, and retries on failure.
  4. Post-creation account management. The agent builds credit history, executes transactions at human-plausible intervals, and stays dormant until activation.
  5. Coordinated cashout across accounts. The final stage: the matured accounts are monetized together, in coordination, rather than one at a time.

The human sets the strategy once. The machines execute everything else, with persistent memory across sessions, self-improvement with each attempt, and the kind of patience that no human fraud operator can sustain.
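The five stages above can be sketched as a single orchestration loop. This is a purely schematic model for reasoning about the pattern described, not working attack code: every name here (`CampaignMemory`, `run_campaign`, the stage stubs) is hypothetical, and the stage functions do nothing but report an outcome.

```python
from dataclasses import dataclass, field

@dataclass
class CampaignMemory:
    """Persistent state shared across sessions (hypothetical model).

    This is the property that distinguishes agentic campaigns: what one
    attempt learns is available to every later attempt.
    """
    attempts: int = 0
    observations: list = field(default_factory=list)

    def record(self, stage: str, outcome: str) -> None:
        self.attempts += 1
        self.observations.append((stage, outcome))

# The five stages of the attack chain, run in order.
STAGES = [
    "synthetic_identity_generation",
    "attack_workflow_configuration",
    "autonomous_execution",
    "post_creation_account_management",
    "coordinated_cashout",
]

def run_campaign(memory: CampaignMemory, execute_stage, max_retries: int = 3) -> bool:
    """Run the chain stage by stage, retrying on failure and recording
    every attempt into persistent memory. The human set the strategy
    before this function was ever called; nothing here needs one."""
    for stage in STAGES:
        for _ in range(max_retries):
            outcome = execute_stage(stage, memory)
            memory.record(stage, outcome)
            if outcome == "success":
                break
        else:
            return False  # stage never succeeded; campaign halts
    return True
```

The point of the sketch is the shape, not the stubs: a fixed strategy, a retry loop, and memory that persists across attempts is all the structure the "one operator, twenty agents" model requires.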

This is not a theoretical threat model. The first fully autonomous cyberattack has already been documented: an AI agent used against approximately 30 targets including financial institutions, technology companies, and government agencies, executing 80 to 90% of the operation independently at thousands of requests per second. Human operators made four to six decisions per campaign.

Whether a human is making four decisions per campaign or forty, the attack volume, speed, and adaptability that AI enables has already broken the assumptions most fraud prevention models were built on.

The Capability Is Democratizing

What makes AI-driven fraud particularly significant is not just what it can do today. It is the trajectory.

Attack tooling that once required a state-sponsored team and sophisticated infrastructure now runs on a laptop, from a free download, with no telemetry and no oversight. The barrier to running a capable fraud campaign has dropped to the point where the primary requirement is intent, not technical skill or resources.

Gen 3 attack agents are locally hosted, self-directed, and come with 50 or more modular skills and OS-level access. They are not lab experiments. They are actively deployed, and the number of sophisticated attackers able to use them is growing rapidly as AI capability becomes cheaper and more accessible. As of early 2026, dozens of major Chinese technology companies have launched locally installable AI agent variants of their own, accelerating that trend further.

Where Agents Commit Fraud

Here is the detail that matters most for fraud prevention strategy: agentic AI attacks don’t happen at the network layer. They happen at the interaction layer.

Account creation flows. Login and account takeover flows. API endpoints. Payment and checkout flows. And increasingly, MCP-connected surfaces: platforms where AI agents access functionality directly through tool calls, an attack surface into which most current security stacks have essentially zero visibility.

This is where the digital factory executes. The agent navigates your onboarding flow the same way a legitimate user would, filling fields, uploading documents, completing liveness checks, and responding to friction. It doesn’t look different at the network layer. It looks different in how it behaves at the interaction layer, if you have the mechanism to observe that behavior.

Most platforms don’t. And that gap is where the campaigns run.
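What "observing behavior at the interaction layer" can mean in practice: even a single timing signal separates some machine sessions from human ones. A minimal sketch, with an illustrative (not tuned) threshold; the function name and the coefficient-of-variation heuristic are my own, not a description of any particular product.

```python
import statistics

def machine_regular_timing(event_timestamps: list, cv_threshold: float = 0.15) -> bool:
    """Flag a session whose inter-event timing is suspiciously regular.

    Humans filling a form produce highly variable gaps between events;
    a scripted agent often produces near-constant ones. We measure the
    coefficient of variation (stdev / mean) of the gaps: low variation
    relative to the mean suggests machine pacing.
    """
    gaps = [b - a for a, b in zip(event_timestamps, event_timestamps[1:])]
    if len(gaps) < 2:
        return False  # not enough events to judge
    mean = statistics.mean(gaps)
    if mean == 0:
        return True  # zero-delay events: certainly not a human
    return statistics.stdev(gaps) / mean < cv_threshold
```

A real deployment would combine many such signals and adapt them over time, precisely because (as the next section argues) a static threshold is exactly the kind of edge an iterating agent will find.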

The Three Ways Agentic AI Changes the Threat Profile

Understanding why this is categorically different from prior bot threats matters for how you think about defense. There are three specific properties of agentic AI attackers that break traditional security models.

Autonomous iteration. Traditional bot tooling runs a fixed script. Agentic AI attackers run adaptive campaigns; they observe the defense, adjust strategy, and try again. They probe classification boundaries systematically, sending sessions designed to look legitimate and observing what passes. They don’t do this once. They do it thousands of times, autonomously, without fatigue. Any static defense will have its edges found. Flashpoint’s 2026 Global Threat Intelligence Report documents this pattern at scale: adversaries rapidly deploying agentic frameworks capable of orchestrating autonomous attack chains with no direct human control.

Session-to-session learning. Agentic AI systems can share learnings across sessions. What one session discovers about your defense posture informs the next session’s strategy. The pace at which defenses are mapped and exploited accelerates with each campaign run. Human fraud farms don’t have this property. For example, a human operator who fails on Tuesday doesn’t automatically make their colleague on Wednesday more effective. Agentic AI does.

Identity spoofing at the interaction layer. Agentic AI attackers increasingly operate through compromised credentials, service account abuse, and identity spoofing. They don’t just look like bots, they look like authorized users, because they are using credentials your verification system recognizes as legitimate. The identity checks out. The behavior reveals the problem, but only if you can see it.

The Question Agentic AI Raises for Fraud Prevention

Understanding the digital factory model leads directly to an uncomfortable question: is the way the security industry is positioning its response actually built for this threat?

The dominant answer the market converged on at RSAC 2026 was agent identity. Framework after framework launched to verify who the agent was, its cryptographic signature, its declared origin, its claimed authorization. Post-conference analysis has now widely noted what was missing: none of those frameworks tracked what the agent actually did.

Agent identity is not agent behavior. And against the three threat properties above (autonomous iteration, session-to-session learning, and identity spoofing at the interaction layer), identity verification alone has a specific, observable failure mode.
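That failure mode fits in a few lines. A hedged sketch with invented names and an illustrative risk score: an identity-only gate admits any session whose credential is valid, regardless of how the session behaves.

```python
def identity_only_gate(session: dict) -> bool:
    """The identity-first model: a valid credential means access."""
    return session["credential_valid"]

def identity_plus_behavior_gate(session: dict, risk_threshold: float = 0.7) -> bool:
    """Identity is necessary but not sufficient: behavior is scored too.
    The threshold and the 0-to-1 behavior_risk score are illustrative."""
    if not session["credential_valid"]:
        return False
    return session["behavior_risk"] < risk_threshold

# An agentic session operating through stolen but valid credentials:
agentic_session = {"credential_valid": True, "behavior_risk": 0.95}
```

The agentic session passes the identity-only gate and fails the identity-plus-behavior gate. That gap between the two functions is the failure mode the paragraph above names.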

In Part 2 of this piece, I’ll make the architecture argument directly: why identity-first and classification-first models fall short against agentic AI attackers, what the behavioral alternative looks like, and why the interaction layer is where this gets decided. If you want broader context in the meantime, start with our agentic AI security platform page.

Shimon Modi is SVP of Product at Arkose Labs, where he leads product strategy across the Arkose Titan platform.

*** This is a Security Bloggers Network syndicated blog from Arkose Labs authored by Shimon Modi. Read the original post at: https://www.arkoselabs.com/blog/the-attack-runs-itself-what-agentic-ai-fraud-actually-looks-like/


Article source: https://securityboulevard.com/2026/04/the-attack-runs-itself-what-agentic-ai-fraud-actually-looks-like-2/