6 Takeaways from “The Rise of AI Fraud” Webinar: How AI Agents Are Rewriting Fraud Defense in 2025
AI agent traffic surged in 2025 to roughly 10% of all bot traffic, creating significant security challenges. Traditional defenses struggle against these sophisticated attacks; the field must shift toward intent-based detection to distinguish legitimate from malicious behavior, and toward capturing value from legitimate AI traffic.

AI agents are everywhere, and they’re just getting started

AI-driven traffic has exploded in 2025. According to DataDome’s Global Bot Security report, AI agents now make up 10% of all bot traffic, up from 2.6% in January—a fourfold increase in just eight months.

These agents power everything from personal assistants and booking tools to content-scraping bots and LLM crawlers. The challenge? Not all of them are bad. As Segura explained:

“AI agents are part of our everyday lives. But the same technologies that make them useful for consumers also make them dangerous in the hands of fraudsters.”

This rapid adoption means the traditional distinction between good bots and bad bots is blurring. Some agents deliver value—like legitimate shopping assistants or loyalty redemption tools—while others quietly harvest data, create fake accounts, or commit loyalty point fraud.

Most websites still fail to block simple bots

For DataDome’s 2025 Global Bot Security report, we tested nearly 17,000 websites by sending a small, controlled number of requests using a range of common bot profiles. The results are sobering:

  • 61% of sites failed to identify and block even a single test bot.
  • Only 2.8% correctly identified all our test bots, down from 8.4% the year before.

This widespread failure against even very simple bots means that businesses are highly vulnerable to the far more sophisticated, AI-powered bot attacks we’re seeing in 2025.

As Segura noted, “Attackers evolve faster than businesses can adjust their defenses. While companies debate budgets, fraudsters innovate in real time.”

How AI is supercharging fraud

AI tools aren’t just creating new types of fraud; they’re scaling and optimizing existing ones. Here’s what DataDome’s threat research team is seeing:

  • Account takeovers & credential stuffing: AI accelerates password testing and pattern recognition (see the sketch after this list).
  • Fake account & ghost account creation: Generative tools fill forms, bypass CAPTCHA, and mimic human browsing.
  • Loyalty point theft & payment fraud: AI agents automate redemption and checkout abuse at scale.
  • LLM-driven content theft: Automated scrapers repurpose entire websites for model training or resale.
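
To make the credential-stuffing item concrete, here is a minimal sliding-window rate heuristic of the kind many sites still rely on. The threshold, window, and function names are illustrative, not DataDome’s implementation:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look back 5 minutes (illustrative value)
MAX_FAILURES = 10      # hypothetical per-IP threshold

_failures: dict[str, deque] = defaultdict(deque)

def record_failed_login(ip: str) -> bool:
    """Record a failed login and return True if the source IP now
    exceeds the failure threshold within the sliding window."""
    now = time.time()
    events = _failures[ip]
    events.append(now)
    # Evict events that have aged out of the window.
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()
    return len(events) > MAX_FAILURES
```

A distributed, AI-orchestrated attack that spreads attempts across thousands of rotating proxy IPs never trips a counter like this; each IP just looks like one forgetful user. That evasion is exactly why the webinar’s emphasis falls on richer, intent-level signals.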

Even more alarming: AI bots are increasingly targeting critical endpoints. In fact, DataDome found that 64% of AI traffic targets form pages, 23% hits login pages, and 5% reaches checkout.

“Unlike a simple bot that runs a fixed script,” Segura explained, “an AI agent can change tactics mid-attack, find new entry points, and mimic legitimate user behavior.”

This shift makes intent, not identity, the new battleground.

From identity to intent: The next leap forward in detection

The binary logic of bot vs. human no longer applies: humans now send AI agents to make legitimate purchases on their behalf. Fraud teams must interpret why a session behaves the way it does, not just what it claims to be.

DataDome’s approach focuses on analyzing the actual intent of the user across the customer journey. By correlating server-side and client-side signals, our detection engines can identify misalignments such as a “ChatGPT” agent scraping payment data or a “GoogleBot” that doesn’t follow indexing patterns.
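
The webinar didn’t go into detection internals, but the spoofed-GoogleBot case maps onto a well-known public check: Google documents that a genuine Googlebot IP must reverse-resolve to a googlebot.com or google.com hostname, which must in turn resolve forward to the same IP. A minimal Python sketch of that verification, using only the standard library:

```python
import socket

def verify_googlebot(ip: str) -> bool:
    """Return True only if `ip` reverse-resolves to a Google crawler
    hostname that forward-resolves back to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)          # reverse DNS
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False                                   # wrong domain
        _, _, forward_ips = socket.gethostbyname_ex(hostname)  # forward DNS
        return ip in forward_ips                           # must round-trip
    except OSError:                                        # no DNS record
        return False

# A request whose User-Agent claims "Googlebot" but whose IP fails this
# check is a spoofed crawler, whatever it says about itself.
```

Identity checks like this are only the starting point; behavioral misalignment, such as a verified crawler suddenly probing checkout flows, is what intent analysis layers on top.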

Intent-based detection (sketched in the toy example after this list) lets businesses:

  • differentiate legitimate automation from abuse
  • reduce overblocking that frustrates real users
  • adapt dynamically to new agent behaviors
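
As a purely illustrative sketch of what correlating signals into an intent decision can look like, consider the toy rule below. The field names and thresholds are hypothetical, not DataDome’s engine:

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Hypothetical per-session signals; names are illustrative."""
    declared_agent: str         # what the client claims to be, e.g. "Googlebot"
    identity_verified: bool     # e.g., the DNS round-trip check above passed
    hits_payment_pages: bool    # behavior observed server-side
    respects_robots_txt: bool   # behavior expected of a declared crawler

def classify_intent(s: SessionSignals) -> str:
    """Toy decision rule: act on mismatches between claim and behavior."""
    if s.declared_agent and not s.identity_verified:
        return "block"       # spoofed identity: claims a bot it isn't
    if s.hits_payment_pages and not s.respects_robots_txt:
        return "challenge"   # declared automation acting outside its role
    return "allow"           # claim and behavior align
```

The point is not these particular rules but the shape of the decision: allow, challenge, or block based on whether behavior matches the claimed identity, rather than on identity alone.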

As Christopher Staab summarized during the session:

“We’ve been focused on identity verification for years. But as agentic AI rises, we need to move toward intent verification.”

Why fraud teams must adapt fast

AI fraud is scaling faster than traditional defenses can react. Segura warned that “the same democratization of AI that empowers consumers also empowers attackers.” Fraud-as-a-Service kits are now sold on Telegram and Discord, and low-skilled criminals can launch advanced attacks using public AI tools. This shift demands a new approach to fraud prevention.

Meanwhile, defenders face budget cycles, procurement delays, and legacy infrastructure. To stay ahead, Segura advised teams to:

  1. Map your critical attack surfaces (login, checkout, forms)
  2. Test your defenses regularly using controlled simulations (a minimal probe script is sketched after this list)
  3. Benchmark your detection vendors: efficacy varies widely
  4. Collaborate across silos: fraud and cybersecurity teams must align
  5. Monitor AI agent traffic to both secure and monetize legitimate usage
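
In the spirit of the report’s methodology, here is a minimal self-test: send a handful of requests wearing common bot User-Agents at an endpoint you own and see whether anything pushes back. The profiles and URL below are placeholders; run this only against your own infrastructure:

```python
import urllib.error
import urllib.request

# Hypothetical bot profiles: an unmodified HTTP library, a CLI client,
# and a spoofed Googlebot (which the DNS check above would catch).
BOT_PROFILES = [
    "python-requests/2.31.0",
    "curl/8.4.0",
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
]

def probe(url: str) -> None:
    """Report whether each bot profile gets through unchallenged."""
    for ua in BOT_PROFILES:
        req = urllib.request.Request(url, headers={"User-Agent": ua})
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                print(f"{ua[:40]:42} -> HTTP {resp.status} (not blocked)")
        except urllib.error.HTTPError as e:
            print(f"{ua[:40]:42} -> HTTP {e.code} (blocked or challenged)")
        except urllib.error.URLError as e:
            print(f"{ua[:40]:42} -> connection failed ({e.reason})")

probe("https://example.com/login")  # replace with an endpoint you own
```

If all three profiles get clean 200s on a login or checkout page, your site likely sits with the 61% that failed to block a single test bot.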

The silver lining: AI traffic can be monetized

Of course, not all AI traffic is bad. Some AI agents, like price comparison bots or travel booking assistants, deliver value both to users and to the businesses that allow them in. AI agents will also contribute to revenue as agentic commerce becomes more widely adopted. 

DataDome’s new AI & LLM dashboard helps enterprises identify, control, and even monetize legitimate AI agent traffic, while continuing to block malicious actors.

As Segura concluded:

“Fraud prevention isn’t just about stopping bad traffic anymore. It’s also about recognizing and enabling the good traffic, without compromising security or user experience.”

Learn more & watch the replay

If you missed the live session, watch the full replay to hear more from the experts.

You can also download the complete 2025 Global Bot Security Report to explore DataDome’s findings and insights.

