Why DataDome Detects Intent: The Only Way to Stop Fraud in the AI Era
2026-03-09 15:24:49 Author: securityboulevard.com

For decades, most cybersecurity and fraud leaders operated under a simple assumption: if you can identify who or what is requesting access, you can decide whether to trust them. But this assumption is breaking down with the adoption of agentic AI. 

The good news is that agentic protocols are making identity verification stronger than ever. Standards like Web Bot Authentication and KYA give us greater confidence in answering the question, “Is this really ChatGPT-user?” 

But that’s only half the battle. The rise of AI agents has exposed a critical blind spot in identity-based security: knowing what or who is at the door doesn’t tell you why they want in.

For example, a verified AI agent with impeccable credentials can still be weaponized for fraud or aggressive price scraping. A trusted bot can pivot mid-session from legitimate browsing to malicious data exfiltration or credential stuffing. 

The (somewhat uncomfortable) truth is this: identity verification is necessary, but it’s not enough. Fraud prevention requires verifying identity and detecting intent in real time, for every request, at scale, across the full user journey. 

Multi-layered detection: identity + intent

Identity layer: answering “what or who is this?”

Identity seeks to verify that a visitor is genuinely who or what they claim to be. For AI agents, this means confirming that traffic claiming to be from ChatGPT, Perplexity, or Claude is actually from those sources and not an attacker spoofing their identity. 

Identity verification relies on a hierarchy of authentication methods with varying levels of strength:

Identification method                                                       Strength   Possible weakness
Cryptographic token verification (e.g., Web Bot Authentication, KYA, TAP)   High       Requires agent cooperation
Trusted IP ranges                                                           High       IP spoofing, shared infrastructure
Reverse DNS (rDNS)                                                          High       DNS hijacking
Network attribution                                                         Medium     CDN/proxy obfuscation
Client-side fingerprinting                                                  Medium     Easily manipulated
User-Agent header                                                           Low        Trivially spoofed

As the agentic ecosystem matures, identity providers are emerging as critical infrastructure, acting as authentication authorities that verify agent legitimacy and issue cryptographically signed tokens (such as KYA tokens). 

DataDome partners with these identity providers to support all major agentic identification protocols, including KYA, TAP, and other emerging standards. When an identity provider authenticates an agent, DataDome performs cryptographic verification of the token and automatically handles new agents as they’re added to the trusted registry. 
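The general shape of token-based identity verification can be sketched in a few lines. This is purely illustrative: real agentic protocols such as Web Bot Authentication use asymmetric signatures issued by identity providers, not the shared-secret MAC used here for simplicity, and the registry contents are invented.

```python
import hmac
import hashlib

# Illustrative only: real agentic protocols use asymmetric signatures from
# identity providers. A shared-secret HMAC is used here to keep the sketch
# self-contained with the standard library.

TRUSTED_REGISTRY = {
    # agent name -> secret issued by a hypothetical identity provider
    "ChatGPT-user": b"example-shared-secret",
}

def sign_token(agent: str, payload: str, secret: bytes) -> str:
    """Produce a hex MAC binding the agent name to the request payload."""
    return hmac.new(secret, f"{agent}:{payload}".encode(), hashlib.sha256).hexdigest()

def verify_agent(agent: str, payload: str, token: str) -> bool:
    """Check the presented token against the trusted registry."""
    secret = TRUSTED_REGISTRY.get(agent)
    if secret is None:
        return False  # unknown agent: identity cannot be verified
    expected = sign_token(agent, payload, secret)
    return hmac.compare_digest(expected, token)  # constant-time comparison
```

Note that adding a newly trusted agent is just a registry update, which mirrors how new agents can be handled automatically once an identity provider vouches for them.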

The caveat: Even the strongest identity verification only tells you that ChatGPT-user is actually ChatGPT-user. It doesn’t tell you whether that authenticated agent is comparing prices as a legitimate shopping activity, or if it is being used maliciously for abusive scraping. For example, the Galileo threat research team recently found that 50% of requests from OpenClaw were dedicated to vulnerability scanning activities and that 80% of AI agents don’t properly identify themselves. 

Intent layer: answering “what are they trying to do and is it good for my business?”

Intent detection analyzes behavioral purpose: the pattern of actions, the trajectory of requests, and the contextual signals that reveal whether an interaction drives business value or undermines it.

Consider a real-world scenario:

  • Request 1: An authenticated AI agent browses product pages at human-like speed
  • Request 2: The same agent adds items to the cart, checks pricing
  • Request 3: Suddenly, the agent begins rapidly iterating through checkout endpoints, testing different credit card formats
  • Request 4: The session accelerates, probing for API vulnerabilities

Identity-only systems see: “Verified agent. Allowed.”

Intent-based systems take it a step further. Here is an example of how DataDome’s multi-layered detection might respond: 

  • Requests 1-2: Identity verified (ChatGPT-user authenticated via Web Bot Authentication), behavioral signals consistent with shopping assistance → traffic allowed, baseline behavior established
  • Request 3: Pattern shift detected—velocity and endpoint targeting no longer match shopping assistance profile → intent flagged as potential reconnaissance activity
  • Request 4: Malicious intent confirmed through cross-signal analysis → session blocked immediately while other ChatGPT-user sessions from legitimate users continue unaffected
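The scenario above can be sketched as a per-session monitor that tracks request velocity and sensitive-endpoint targeting. This is a hypothetical toy, not DataDome's detection logic: the thresholds, window size, and endpoint categories are all invented for illustration.

```python
from collections import deque

# Invented thresholds, purely illustrative of session-level intent flagging.
SENSITIVE_PREFIXES = ("/checkout", "/api/payment")
MAX_REQS_PER_WINDOW = 5   # assumed velocity ceiling for shopping-like behavior
WINDOW_SECONDS = 10.0

class SessionMonitor:
    def __init__(self):
        self.timestamps = deque()
        self.sensitive_hits = 0

    def observe(self, path: str, now: float) -> str:
        """Return 'allow', 'flag', or 'block' for one request in the session."""
        self.timestamps.append(now)
        # Drop requests that fell outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > WINDOW_SECONDS:
            self.timestamps.popleft()
        if path.startswith(SENSITIVE_PREFIXES):
            self.sensitive_hits += 1
        burst = len(self.timestamps) > MAX_REQS_PER_WINDOW
        if burst and self.sensitive_hits >= 3:
            return "block"   # high velocity against sensitive endpoints
        if burst or self.sensitive_hits >= 3:
            return "flag"    # one anomalous signal: monitor more closely
        return "allow"
```

The key property matches the narrative: the same session starts as "allow" during slow browsing and pivots to "flag" and then "block" as behavior shifts, without any change to the agent's (still valid) identity.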

An identity-only system would have allowed all four requests because the agent’s cryptographic credentials remained valid throughout. DataDome’s intent detection enabled legitimate browsing and blocked only when behavior pivoted toward malicious patterns.

This is the critical distinction: the same verified agent can express radically different intentions within the same session. Identity is static, but intent is dynamic and must be evaluated continuously across the full user session and journey. 

But here’s where it gets more nuanced: intent isn’t just about detecting fraud. It’s about understanding whether an agent’s behavior aligns with YOUR specific business objectives.

DataDome’s Trust Score: making intent actionable for your business

Intent detection generates continuous signals, and businesses need a way to operationalize those signals. This is where DataDome’s Trust Score comes in.

For every AI agent, DataDome computes a dynamic Trust Score (0-100) that combines:

  1. Identification strength: How securely can this agent’s identity be authenticated?
  2. Behavioral intent: What proportion of this agent’s traffic is legitimate vs. fraudulent, both on your properties and across the DataDome network?
  3. Reputation: How established and business-relevant is the agent’s operating organization?

In the future, DataDome will also add ‘business alignment’ to this list to factor in how valuable this agent’s activity is for YOUR specific business model and KPIs. 

Critically, Trust Scores are calibrated per DataDome customer through specialized AI models that understand your business context. An agent that generates high-value traffic for e-commerce might be irrelevant or risky for media publishers. 

Trust Scores enable risk-based policies:

  • High-trust agents get frictionless access to sensitive endpoints
  • Medium-trust agents are allowed but monitored more closely
  • Low-trust agents are restricted to public pages or blocked entirely
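A weighted combination of the three components, mapped onto the policy tiers above, might look like the following sketch. The weights and tier cutoffs are assumptions for illustration; DataDome's actual Trust Score model is not public.

```python
# Illustrative weights only; the real model and its calibration are not public.
WEIGHTS = {
    "identification_strength": 0.4,  # how securely the identity is verified
    "behavioral_intent": 0.4,        # share of observed traffic that is legitimate
    "reputation": 0.2,               # standing of the operating organization
}

def trust_score(components: dict[str, float]) -> float:
    """Combine component scores (each 0-100) into a weighted 0-100 score."""
    return sum(WEIGHTS[name] * components[name] for name in WEIGHTS)

def policy_for(score: float) -> str:
    """Map a Trust Score onto assumed risk-based policy tiers."""
    if score >= 75:
        return "frictionless access"
    if score >= 40:
        return "allow and monitor"
    return "restrict or block"
```

Per-customer calibration would amount to adjusting `WEIGHTS` and the tier cutoffs to each business's context, which is why the same agent can land in different tiers for different customers.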

But here’s what makes this approach powerful: regardless of the policy you choose, DataDome continues analyzing every request in real time. Even if you allow a high-trust agent, malicious intent is still detected and blocked at the session level. Trust informs policy, and intent determines action.

Continuous feedback loops: how DataDome learns what “good” means for your business

DataDome’s per-customer Trust Score calibration is powered by continuous feedback loops. These loops automatically learn and update models, improving detection accuracy over time.  

Looking ahead, we plan to take this a step further. Future models will learn and adapt specifically to what matters most for your business. Here is how it will work: 

  1. You define success metrics based on your business priorities, including: 
  • Target conversion rates
  • Successful transaction rates
  • Average order values
  • Engagement metrics (time-on-site, pages per session, content depth)
  • Lead quality scores
  • Customer lifetime value indicators
  2. DataDome’s AI models will ingest this feedback automatically, continuously learning which agent behaviors correlate with:
  • Positive business outcomes (completed purchases, quality leads, engaged sessions)
  • Fraud and abuse (credential testing, scraping, payment fraud)
  3. Trust Scores and detection models adapt in real time to:
  • Prioritize traffic that drives your specific KPIs
  • Block traffic that undermines business objectives or indicates fraud
  • Surface insights about which agents generate the most business value

Because DataDome will understand your business objectives, intent detection becomes more accurate over time. This distinction transforms fraud prevention from a cost center into a revenue enabler. You’re not just blocking threats, you’re actively identifying and prioritizing the most valuable customer experiences.

This business-aligned intent detection becomes especially critical when dealing with edge cases where technical legitimacy doesn’t equal business alignment.

When “legitimate” intent conflicts with business strategy

Not all problematic intent is fraudulent. Some of the most challenging scenarios involve authenticated agents with perfectly valid credentials executing behavior that is technically legitimate but strategically harmful to that specific business model. 

Example: A financial services comparison platform partners with lenders to connect customers with products. Traditionally, customers visit the lender’s site to complete applications, and the lender pays a referral commission.

Now, the comparison platform is developing an AI agent that handles applications end-to-end. Customers never visit the lender’s site. The agent authenticates properly, and its identity is verified.

But from the lender’s perspective, the direct customer relationship disappears. Opportunities for relationship building, cross-selling, brand engagement, and first-party data collection are eliminated. 

This isn’t fraud in the traditional sense. But it still doesn’t align with the lender’s long-term business objectives. Identity verification says “allow.” Intent analysis says, “This requires a strategic decision.”

This visibility enables businesses to make informed policy decisions—allowing them to block, allow, rate-limit, or monetize traffic per AI agent and endpoint.

From security tool to revenue driver

Instead of customers browsing your site directly, they’re increasingly sending AI assistants to research products, compare prices, and complete purchases on their behalf. 70% of consumers across the UK, US, and France have used AI for shopping in the past 12 months.

The opportunity is massive: agentic traffic represents new revenue channels, improved conversion efficiency, and novel business models.

The risk is equally massive: without the ability to distinguish between legitimate agent activity and agent-mediated fraud, businesses must choose between growth and security.

The shift from identity-based to both identity and intent-based detection transforms fraud prevention from a cost center focused on blocking threats into a business enabler that:

  • Identifies high-value traffic for prioritization
  • Provides actionable intelligence on user intent across human, bot, and agent visitors
  • Enables sophisticated, context-aware policies that align with business objectives
  • Feeds real-time signals into broader business systems: pricing, inventory, recommendation engines
  • Surfaces strategic decisions about customer relationship management and platform control

As the agentic web matures, the winners will be organizations that can leverage agent traffic as a strategic asset rather than treating it as a risk and blanket blocking it. 

The foundation of that capability is simple: You can’t manage what you can’t understand. Identity tells you what. Intent tells you why. You need both.

Want to see DataDome’s Bot & Agent Trust Management in action? Request a demo today.


Source: https://securityboulevard.com/2026/03/why-datadome-detects-intent-the-only-way-to-stop-fraud-in-the-ai-era/