AI-driven traffic has exploded in 2025. According to DataDome’s Global Bot Security Report, AI agents now make up 10% of all bot traffic, up from 2.6% in January, a nearly fourfold increase in just eight months.
These agents power everything from personal assistants and booking tools to content-scraping bots and LLM crawlers. The challenge? Not all of them are bad. As Segura explained:
“AI agents are part of our everyday lives. But the same technologies that make them useful for consumers also make them dangerous in the hands of fraudsters.”
This rapid adoption means the traditional distinction between good bots and bad bots is blurring. Some agents deliver value—like legitimate shopping assistants or loyalty redemption tools—while others quietly harvest data, create fake accounts, or commit loyalty point fraud.
For DataDome’s 2025 Global Bot Security Report, we tested nearly 17,000 websites by sending a small, controlled number of requests using a range of common bot profiles. The results are sobering:
This widespread exposure to even very simple bots means that businesses are extremely vulnerable to the far more sophisticated, AI-powered bot attacks we’re seeing in 2025.
As Segura noted, “Attackers evolve faster than businesses can adjust their defenses. While companies debate budgets, fraudsters innovate in real time.”
AI tools aren’t just creating new types of fraud; they’re scaling and optimizing existing ones. Here’s what DataDome’s threat research team is seeing:
Even more alarming: AI bots are increasingly targeting critical endpoints. In fact, DataDome found that 64% of AI traffic hits form pages, 23% hits login pages, and 5% reaches checkout.
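If you want a rough read on your own exposure, a few lines of log analysis can approximate this breakdown. The sketch below assumes combined-format access logs and a partial list of known AI crawler User-Agent tokens; the path-to-category mapping is hypothetical and should be adapted to your own routes.

```python
import re
from collections import Counter

# Partial, illustrative list of known AI crawler User-Agent tokens.
AI_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended")

# Hypothetical mapping from path prefixes to endpoint categories;
# adjust these to match your own site's routes.
CATEGORIES = {
    "/login": "login",
    "/checkout": "checkout",
    "/contact": "form",
    "/signup": "form",
}

def categorize(path: str) -> str:
    for prefix, category in CATEGORIES.items():
        if path.startswith(prefix):
            return category
    return "other"

# Minimal parser for combined log format: grab the request path
# and the trailing quoted User-Agent string.
LOG_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+)[^"]*".*"(?P<ua>[^"]*)"$')

def ai_endpoint_distribution(log_lines):
    counts = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and any(token in m.group("ua") for token in AI_TOKENS):
            counts[categorize(m.group("path"))] += 1
    return counts
```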
“Unlike a simple bot that runs a fixed script,” Segura explained, “an AI agent can change tactics mid-attack, find new entry points, and mimic legitimate user behavior.”
This shift makes intent, not identity, the new battleground.
The binary logic of bot vs. human no longer applies. Fraud teams must now interpret why a session behaves the way it does, not just what it claims to be. Humans are now sending their AI agents to make legitimate purchases on their behalf.
DataDome’s approach focuses on analyzing the actual intent of the user across the customer journey. By correlating server-side and client-side signals, our detection engines can identify misalignments such as a “ChatGPT” agent scraping payment data or a “Googlebot” that doesn’t follow indexing patterns.
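One concrete example of such a misalignment check is verifying a crawler’s identity claim at the network level. The sketch below is a minimal illustration, not DataDome’s detection engine: it applies Google’s documented two-step verification (reverse DNS lookup plus forward confirmation) to a request claiming to be Googlebot.

```python
import socket

def is_verified_googlebot(client_ip: str) -> bool:
    """Check a claimed Googlebot against Google's documented verification:
    reverse-DNS the IP, confirm the hostname is under googlebot.com or
    google.com, then forward-resolve it back to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)
    except socket.herror:
        return False  # no reverse DNS record, so the claim is unverifiable
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False  # reverse DNS points outside Google's domains
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False
    return client_ip in forward_ips  # forward lookup must match the claim

def classify(user_agent: str, client_ip: str) -> str:
    # Illustrative decision: an identity claim that fails verification is
    # a stronger fraud signal than anonymous traffic.
    if "Googlebot" in user_agent:
        return "verified-crawler" if is_verified_googlebot(client_ip) else "spoofed-crawler"
    return "unknown"
```

A session that claims a trusted identity but fails a check like this is exactly the kind of intent/identity mismatch that warrants a block or a challenge.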
Intent-based detection lets businesses:
As Christopher Staab summarized during the session:
“We’ve been focused on identity verification for years. But as agentic AI rises, we need to move toward intent verification.”
AI fraud is scaling faster than traditional defenses can react. Segura warned that “the same democratization of AI that empowers consumers also empowers attackers.” Fraud-as-a-Service kits are now sold on Telegram and Discord, letting low-skilled criminals launch advanced attacks with public AI tools. This shift demands a new approach to fraud prevention.
Meanwhile, defenders face budget cycles, procurement delays, and legacy infrastructure. To stay ahead, Segura advised teams to:
Of course, not all AI traffic is bad. Some AI agents, like price comparison bots or travel booking assistants, deliver value both to users and to the businesses that allow them in. AI agents will also contribute to revenue as agentic commerce becomes more widely adopted.
DataDome’s new AI & LLM dashboard helps enterprises identify, control, and even monetize legitimate AI agent traffic, while continuing to block malicious actors.
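To make that concrete, here is a hypothetical policy table in the same spirit, not DataDome’s product API: the User-Agent tokens are real published crawler names, but the policy values and the apply_policy() helper are purely illustrative.

```python
# Hypothetical policies for identified AI agents. The tokens are real
# published crawler User-Agent names; the policy values are illustrative.
AI_AGENT_POLICIES = {
    "GPTBot": "allow",          # LLM crawler: allow on public content
    "OAI-SearchBot": "allow",
    "ClaudeBot": "allow",
    "PerplexityBot": "meter",   # e.g., rate-limit or negotiate access
}

def apply_policy(user_agent: str, path: str) -> str:
    # Keep every automated agent away from sensitive flows.
    if path.startswith(("/login", "/checkout", "/account")):
        return "challenge"
    for token, policy in AI_AGENT_POLICIES.items():
        if token in user_agent:
            return policy
    return "inspect"  # unidentified automation goes to behavioral analysis

print(apply_policy("Mozilla/5.0 (compatible; GPTBot/1.0)", "/blog/post"))  # allow
print(apply_policy("Mozilla/5.0 (compatible; GPTBot/1.0)", "/checkout"))   # challenge
```

The point is the shape of the decision: known agents get differentiated treatment by route and business value, while anything unidentified is routed to deeper behavioral analysis rather than blocked outright.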
As Segura concluded:
“Fraud prevention isn’t just about stopping bad traffic anymore. It’s also about recognizing and enabling the good traffic, without compromising security or user experience.”
If you missed the live session, watch the full replay to hear more from the experts.
You can also download the complete 2025 Global Bot Security Report to explore DataDome’s findings and insights.