For decades, most cybersecurity and fraud leaders operated under a simple assumption: if you can identify who or what is requesting access, you can decide whether to trust them. But this assumption is breaking down with the adoption of agentic AI.
The good news is that agentic protocols are making identity verification stronger than ever. Standards like Web Bot Authentication and KYA give us greater confidence in answering the question, “Is this really ChatGPT-user?”
But that’s only half the battle. The rise of AI agents has exposed a critical blind spot in identity-based security: knowing what or who is at the door doesn’t tell you why they want in.
For example, a verified AI agent with impeccable credentials can still be weaponized for fraud or aggressive price scraping. A trusted bot can pivot mid-session from legitimate browsing to malicious data exfiltration or credential stuffing.
The (somewhat uncomfortable) truth is this: identity verification is necessary, but it’s not enough. Fraud prevention requires verifying identity and detecting intent in real time, for every request, at scale, across the full user journey.
Identity verification establishes that a visitor is genuinely who or what they claim to be. For AI agents, this means confirming that traffic claiming to be from ChatGPT, Perplexity, or Claude is actually from those sources and not an attacker spoofing their identity.
Identity verification relies on a hierarchy of authentication methods with varying levels of strength:
| Identification method | Strength | Possible weakness |
| --- | --- | --- |
| Cryptographic token verification (e.g., Web Bot Authentication, KYA, TAP) | High | Requires agent cooperation |
| Trusted IP ranges | High | IP spoofing, shared infrastructure |
| Reverse DNS (rDNS) | High | DNS hijacking |
| Network attribution | Medium | CDN/proxy obfuscation |
| Client-side fingerprinting | Medium | Easily manipulated |
| User-Agent header | Low | Trivially spoofed |
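To make one rung of this hierarchy concrete, here is a minimal sketch of forward-confirmed reverse DNS, one of the mid-to-high-strength checks in the table. The trusted domain suffixes are illustrative placeholders, not an official registry, and production systems would layer this alongside the stronger cryptographic methods above:

```python
import socket

# Hypothetical trusted-operator suffixes; real deployments would use each
# operator's officially published rDNS domains.
TRUSTED_RDNS_SUFFIXES = (".openai.com", ".perplexity.ai", ".anthropic.com")

def verify_agent_ip(ip: str) -> bool:
    """Forward-confirmed reverse DNS check for a claimed AI agent.

    1. Reverse-resolve the IP to a hostname (PTR lookup).
    2. Confirm the hostname belongs to a trusted operator's domain.
    3. Forward-resolve the hostname and confirm it maps back to the same
       IP, which defeats attackers who control their own PTR records.
    """
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
    except socket.herror:
        return False  # no PTR record: identity claim is unverified

    if not hostname.endswith(TRUSTED_RDNS_SUFFIXES):
        return False  # PTR points outside the trusted domains

    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False  # hostname doesn't resolve: treat as spoofed

    return ip in forward_ips  # forward confirmation closes the loop
```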
As the agentic ecosystem matures, identity providers are emerging as critical infrastructure, acting as authentication authorities that verify agent legitimacy and issue cryptographically signed tokens (such as KYA tokens).
DataDome partners with these identity providers to support all major agentic identification protocols, including KYA, TAP, and other emerging standards. When an identity provider authenticates an agent, DataDome performs cryptographic verification of the token and automatically handles new agents as they’re added to the trusted registry.
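DataDome’s verification pipeline isn’t public, but as a rough sketch of what cryptographic token verification involves: Web Bot Auth-style protocols sign requests with Ed25519 keys over an HTTP Message Signatures (RFC 9421) signature base. Assuming that base has already been reconstructed from the incoming request, verification reduces to a signature check against the identity provider’s published key:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_agent_signature(public_key_bytes: bytes,
                           signature: bytes,
                           signature_base: bytes) -> bool:
    """Verify an Ed25519 signature over an already-reconstructed
    HTTP Message Signatures (RFC 9421) signature base.

    `public_key_bytes` would come from the identity provider's published
    key directory; `signature_base` must match the covered request
    components exactly as the agent signed them.
    """
    key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        key.verify(signature, signature_base)
        return True   # cryptographic proof the request came from this agent
    except InvalidSignature:
        return False  # invalid token: treat the identity claim as spoofed
```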
The caveat: Even the strongest identity verification only tells you that ChatGPT-user is actually ChatGPT-user. It doesn’t tell you whether that authenticated agent is comparing prices on behalf of a legitimate shopper or being used for abusive scraping. For example, the Galileo threat research team recently found that 50% of requests from OpenClaw were dedicated to vulnerability scanning activities, and that 80% of AI agents don’t properly identify themselves.
Intent detection analyzes behavioral purpose: the pattern of actions, the trajectory of requests, and the contextual signals that reveal whether an interaction drives business value or undermines it.
Consider a real-world scenario: a verified agent opens a session with legitimate product browsing, then pivots mid-session to rapid, systematic data harvesting.
An identity-only system sees “Verified agent. Allowed.” and permits every request, because the agent’s cryptographic credentials remain valid throughout the session.
Intent-based systems take it a step further. DataDome’s multi-layered detection enables the legitimate browsing and blocks only when the behavior pivots toward malicious patterns.
This is the critical distinction: the same verified agent can express radically different intentions within the same session. Identity is static, but intent is dynamic and must be evaluated continuously across the full user session and journey.
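As a deliberately simplified illustration of continuous, session-level evaluation, the sketch below uses a sliding-window heuristic. The thresholds and signals are invented for this example and are far cruder than a production ML model:

```python
from collections import deque
from time import monotonic

class SessionIntentMonitor:
    """Toy sliding-window heuristic: flag a verified agent whose behavior
    pivots from browsing to bulk extraction. Thresholds are invented for
    illustration; production systems score far richer behavioral signals."""

    def __init__(self, window_s=10.0, max_requests=30, max_unique_paths=20):
        self.window_s = window_s
        self.max_requests = max_requests
        self.max_unique_paths = max_unique_paths
        self.events = deque()  # (timestamp, path) pairs

    def observe(self, path: str) -> str:
        now = monotonic()
        self.events.append((now, path))
        # Expire events that fall outside the sliding window.
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()

        unique_paths = {p for _, p in self.events}
        # A burst of requests fanning out across many distinct URLs looks
        # like systematic harvesting, not organic browsing: block, even
        # though the agent's identity credentials are still valid.
        if len(self.events) > self.max_requests and len(unique_paths) > self.max_unique_paths:
            return "block"
        return "allow"
```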
But here’s where it gets more nuanced: intent isn’t just about detecting fraud. It’s about understanding whether an agent’s behavior aligns with YOUR specific business objectives.
Intent detection generates continuous signals, and businesses need a way to operationalize those signals. This is where DataDome’s Trust Score comes in.
For every AI agent, DataDome computes a dynamic Trust Score (0-100) that combines multiple signals.
In the future, DataDome will also add ‘business alignment’ to these signals, factoring in how valuable an agent’s activity is for YOUR specific business model and KPIs.
Critically, Trust Scores are calibrated per DataDome customer through specialized AI models that understand your business context. An agent that generates high-value traffic for e-commerce might be irrelevant or risky for media publishers.
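The exact signals behind the Trust Score are DataDome’s own. Purely to illustrate the mechanics of combining signals into a 0-100 score, here is a hypothetical weighted average over placeholder signals:

```python
def trust_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-signal scores (each normalized to 0-1),
    scaled to 0-100. Signal names and weights are placeholders."""
    total = sum(weights.values())
    return round(100 * sum(signals[s] * w for s, w in weights.items()) / total, 1)

# Hypothetical signals for one verified agent on one customer's traffic:
print(trust_score(
    signals={"identity_strength": 0.95,      # cryptographically verified
             "behavioral_consistency": 0.80, # browsing patterns look organic
             "historical_reputation": 0.70}, # track record across sessions
    weights={"identity_strength": 0.40,
             "behavioral_consistency": 0.35,
             "historical_reputation": 0.25},
))  # -> 83.5
```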
Trust Scores enable risk-based policies: for example, allowing high-trust agents, rate-limiting or challenging mid-trust traffic, and blocking low-trust requests.
But here’s what makes this approach powerful: regardless of the policy you choose, DataDome continues analyzing every request in real time. Even if you allow a high-trust agent, malicious intent is still detected and blocked at the session level. Trust informs policy, and intent determines action.
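In sketch form, “trust informs policy, and intent determines action” might look like the following, with invented thresholds and policy tiers:

```python
def decide(trust_score: float, request_intent: str) -> str:
    """Trust informs policy; intent determines action.
    Thresholds and tiers are invented for illustration."""
    if request_intent == "malicious":
        return "block"        # intent overrides trust on every request
    if trust_score >= 80:
        return "allow"
    if trust_score >= 50:
        return "rate_limit"   # or serve a challenge, per customer policy
    return "block"

# Even a high-trust agent is stopped the moment a request turns malicious:
assert decide(83.5, "malicious") == "block"
assert decide(83.5, "benign") == "allow"
```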
DataDome’s per-customer Trust Score calibration is powered by continuous feedback loops. These loops automatically learn and update models, improving detection accuracy over time.
Looking ahead, we plan to take this a step further: future models will learn and adapt specifically to what matters most for your business.
Because DataDome will understand your business objectives, intent detection will become more accurate over time. This transforms fraud prevention from a cost center into a revenue enabler: you’re not just blocking threats, you’re actively identifying and prioritizing the most valuable customer experiences.
This business-aligned intent detection becomes especially critical when dealing with edge cases where technical legitimacy doesn’t equal business alignment.
Not all problematic intent is fraudulent. Some of the most challenging scenarios involve authenticated agents with perfectly valid credentials executing behavior that is technically legitimate but strategically harmful to that specific business model.
Example: A financial services comparison platform partners with lenders to connect customers with products. Traditionally, customers visit the lender’s site to complete applications, and the lender pays a referral commission.
Now, the comparison platform is developing an AI agent that handles applications end-to-end. Customers never visit the lender’s site. The agent authenticates properly, and its identity is verified.
But from the lender’s perspective, the direct customer relationship disappears. Opportunities for relationship building, cross-selling, brand engagement, and first-party data collection are eliminated.
This isn’t fraud in the traditional sense. But it still doesn’t align with the lender’s long-term business objectives. Identity verification says “allow.” Intent analysis says, “This requires a strategic decision.”
This visibility enables businesses to make informed policy decisions—allowing them to block, allow, rate-limit, or monetize traffic per AI agent and endpoint.
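For illustration, such a policy might be expressed as a per-agent, per-endpoint rule table. The agent names and rules below are hypothetical examples, not DataDome’s actual configuration format:

```python
from fnmatch import fnmatchcase

# Hypothetical per-agent, per-endpoint rules using shell-style wildcards.
AGENT_POLICIES = [
    ("ChatGPT-user",     "/products/*",    "allow"),       # drives conversions
    ("ChatGPT-user",     "/api/pricing/*", "rate_limit"),  # protect pricing data
    ("research-crawler", "/articles/*",    "monetize"),    # e.g., pay-per-crawl
    ("*",                "/admin/*",       "block"),       # no agents allowed here
]

def policy_for(agent: str, endpoint: str) -> str:
    for agent_pattern, path_pattern, action in AGENT_POLICIES:
        if fnmatchcase(agent, agent_pattern) and fnmatchcase(endpoint, path_pattern):
            return action
    return "monitor"  # default: observe unknown traffic before enforcing
```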
Customers are no longer just browsing your site directly: increasingly, they’re sending AI assistants to research products, compare prices, and complete purchases on their behalf. 70% of consumers across the UK, US, and France have used AI for shopping in the past 12 months.
The opportunity is massive: agentic traffic represents new revenue channels, improved conversion efficiency, and novel business models.
The risk is equally massive: without the ability to distinguish between legitimate agent activity and agent-mediated fraud, businesses must choose between growth and security.
The shift from identity-only detection to combined identity-and-intent detection transforms fraud prevention from a cost center focused on blocking threats into a business enabler: one that blocks what’s harmful while identifying, prioritizing, and even monetizing the agent traffic that drives value.
As the agentic web matures, the winners will be the organizations that leverage agent traffic as a strategic asset rather than treating it as a risk to be blanket-blocked.
The foundation of that capability is simple: You can’t manage what you can’t understand. Identity tells you what. Intent tells you why. You need both.
Want to see DataDome’s Bot & Agent Trust Management in action? Request a demo today.