Here's a statement that should make any security architect pause:
In most enterprise environments today, machine identities – service accounts, API keys, deployment pipelines, and increasingly AI agents – outnumber human identities by a significant margin. In AI-native companies, that ratio is already 10 to 1 or higher.
Zero Trust was designed with human users as the primary subject. The model assumes identity belongs to a person who authenticates with credentials, uses a device you can evaluate, and accesses applications in recognizable patterns. When something deviates from that pattern, behavioral analytics flags it.
AI agents don't fit that model. And the mismatch is creating security gaps that traditional Zero Trust frameworks weren't designed to close.
The first wave of AI in enterprise environments was relatively contained. Machine learning models trained on internal data, analytics pipelines, recommendation engines. These were workloads – they had identities, made API calls, and could be secured reasonably well with existing approaches.
The second wave – large language models, AI agents, and autonomous systems – is different in kind, not just in degree.
AI agents act autonomously. A human user making access requests follows recognizable patterns. An AI agent running a workflow might make hundreds or thousands of API calls in minutes, query multiple data sources in sequence, generate and execute code, and trigger downstream actions – all autonomously. The behavioral baseline for an AI agent looks nothing like a human user.
AI agents operate across long contexts. An AI workflow might start with a user request, retrieve context from multiple systems, call external APIs, process results, and write output to another system – all in a single execution. This multi-hop, multi-system access pattern is exactly what lateral movement looks like. Distinguishing legitimate agent behavior from adversarial behavior is genuinely hard.
AI agents inherit and amplify permissions. If an AI agent runs in the security context of a human user or a service account with broad permissions, it can do whatever that identity can do. And because it acts autonomously and at machine speed, any permission misuse happens before a human can intervene.
AI agents can be manipulated. Prompt injection attacks – where malicious content embedded in data the agent processes causes it to perform unintended actions – are a real and growing class of attack. An agent that trusts content from external sources without verification can be weaponized by that content.
Before even getting to AI agents, there's a foundational problem: most organizations have poor visibility and governance over non-human identities.
When I was scaling a CIAM platform to handle billions of user authentications, we were acutely aware of human identity management. But the service accounts, API keys, and machine-to-machine connections multiplied faster than anyone tracked them. This pattern holds across the industry.
A 2024 assessment across enterprise environments found that organizations with good visibility into their human identity inventory often had three to five times more non-human identities than human ones – and a fraction of the governance.
Here's what that looks like in practice: service accounts that outlived the projects that created them, API keys with no expiration and no clear owner, and machine-to-machine connections nobody remembers establishing.
Every one of these is a potential pivot point for an attacker. And because machine identities are less visible and less monitored than human ones, they're increasingly the preferred target.
The SolarWinds breach in 2020 exploited precisely this gap. The malicious code inserted into the build pipeline operated using legitimate service account credentials. No human user behaved anomalously. The compromise lived entirely in the machine-to-machine communication layer.
Classical Zero Trust principles still apply. Never trust, always verify. Least privilege. Assume breach. But the implementation needs to extend in several specific directions to handle AI agents and machine identities effectively.
An AI agent should not run under a shared service account or a human user's identity. It should have its own workload identity, with permissions explicitly scoped to what that specific agent needs to do its specific job.
This means a dedicated workload identity per agent rather than a shared account, permissions scoped to that agent's specific task, and an audit trail that attributes every action to the agent that performed it.
The practical challenge: AI frameworks and orchestration platforms vary widely in how they handle identity. Some make this easy; many require deliberate work to implement correctly. Treat agent identity as a first-class design requirement, not an afterthought.
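What a dedicated, scoped agent identity can look like is easiest to see in code. The sketch below is a minimal in-memory illustration – the registry, agent names, and scope strings are all hypothetical, and a real deployment would lean on SPIFFE/SPIRE, cloud workload identity, or a secrets manager rather than anything hand-rolled like this:

```python
import secrets
import time

# Hypothetical registry mapping each agent to its own identity and scopes.
# In production this would be SPIFFE/SPIRE, cloud workload identity, or a
# secrets manager -- never an in-memory dict.
AGENT_REGISTRY = {
    "report-summarizer": {"scopes": {"docs:read", "summary:write"}},
    "invoice-bot": {"scopes": {"billing:read"}},
}

def mint_agent_token(agent_name: str, ttl_seconds: int = 300) -> dict:
    """Issue a short-lived credential bound to one agent's declared scopes."""
    entry = AGENT_REGISTRY.get(agent_name)
    if entry is None:
        raise PermissionError(f"unknown agent identity: {agent_name}")
    return {
        "sub": agent_name,                 # the agent's own identity, not a shared account
        "scopes": set(entry["scopes"]),    # only what this specific agent needs
        "exp": time.time() + ttl_seconds,  # expires quickly by design
        "token": secrets.token_urlsafe(16),
    }

def authorize(token: dict, required_scope: str) -> bool:
    """Every call re-checks scope and expiry -- never trust, always verify."""
    return required_scope in token["scopes"] and time.time() < token["exp"]
```

The design point is that `authorize` is called on every access, not once at startup: the token carries the agent's identity, its minimal scopes, and a short lifetime, so a leaked credential is narrow and short-lived.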
A human user with least privilege access typically needs read access to their own work, write access to their own projects, and limited access elsewhere. Scoping that is well understood.
An AI agent that orchestrates a complex workflow might legitimately need to read from a database, call an external API, write to a document store, and trigger a notification – in sequence, not simultaneously. Traditional least privilege models often grant all the permissions the workflow might ever need upfront.
A more sophisticated approach uses just-in-time permission grants: the agent requests and receives the permission it needs for each step, uses it, and releases it. This requires the orchestration layer to mediate permission requests rather than granting all permissions at initialization.
This is architecturally more complex but significantly reduces the blast radius if the agent is compromised or manipulated mid-workflow.
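One way to sketch the mediating orchestration layer is a broker that grants each permission only for the duration of the step that needs it. The class and permission names below are illustrative assumptions, not a real library:

```python
from contextlib import contextmanager

class PermissionBroker:
    """Mediates per-step grants so an agent never holds its whole
    permission set at once (hypothetical sketch, not a real library)."""

    def __init__(self, allowed_steps):
        self.allowed_steps = set(allowed_steps)  # policy: what the workflow may ever request
        self.active = set()                      # permissions held right now

    @contextmanager
    def grant(self, permission: str):
        if permission not in self.allowed_steps:
            raise PermissionError(f"step not in workflow policy: {permission}")
        self.active.add(permission)              # granted just-in-time for this step
        try:
            yield
        finally:
            self.active.discard(permission)      # released as soon as the step ends

broker = PermissionBroker(["db:read", "api:call", "docs:write"])

with broker.grant("db:read"):
    held_during = set(broker.active)             # only db:read, never the full set
held_after = set(broker.active)                  # empty again after the step
```

If the agent is manipulated mid-workflow, the blast radius is whatever single permission is active at that moment, not the union of everything the workflow might ever do.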
Prompt injection is the Zero Trust problem for the AI data plane. If an AI agent processes data from external sources – web content, emails, documents, user inputs – any of that content could contain instructions intended to manipulate the agent's behavior.
Zero Trust for AI inputs means treating all content from outside the trust boundary as untrusted data, validating or isolating it before the agent acts on it, and never allowing retrieved content to silently become instructions.
This is a relatively new problem domain, and the tooling is still maturing. But the principle maps directly to existing Zero Trust thinking: explicit verification, never implicit trust.
UEBA (User and Entity Behavior Analytics) was built for human users and, to some extent, traditional service accounts. AI agents behave differently and require different baselines.
An AI agent running normally might make 500 API calls in ten minutes. That would be catastrophically anomalous for a human user. Building baselines that correctly distinguish normal high-volume agent behavior from anomalous agent behavior – excessive data access, calls to unexpected endpoints, anomalous output volumes – requires agent-aware analytics.
This means building behavioral baselines per agent identity rather than per human archetype, profiling each agent's normal call volume, endpoints, and data-access patterns, and alerting on deviations from that agent's own profile.
The most dangerous class of AI agent action is the irreversible high-stakes operation: sending an external communication, modifying production data, executing a financial transaction, deprovisioning an account.
Zero Trust for AI agents should include explicit checkpoints for these operations, where a human must review and approve before the agent proceeds. This isn't a performance optimization – it's a security control that limits the damage an adversarial manipulation can cause.
Designing these checkpoints into the workflow architecture from the beginning is far easier than retrofitting them later.
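Designed in from the start, such a checkpoint can be a single gate in the orchestration layer. In this sketch the action names and the `approver` callable are hypothetical stand-ins for whatever review workflow (ticketing, chat approval, dual control) an organization actually uses:

```python
# Hypothetical set of irreversible, high-stakes operations that must
# never execute without explicit human approval.
IRREVERSIBLE_ACTIONS = {
    "send_email", "modify_prod_data", "execute_payment", "deprovision_account",
}

def run_step(action: str, payload: dict, approver=None):
    """Gate high-stakes operations behind explicit human approval.

    `approver` is a callable returning True/False -- a stand-in for a
    ticketing or review workflow in a real system.
    """
    if action in IRREVERSIBLE_ACTIONS:
        if approver is None or not approver(action, payload):
            return {"status": "blocked", "reason": "human approval required"}
    return {"status": "executed", "action": action}
```

Low-stakes steps pass straight through, so the checkpoint costs nothing on the common path; only the irreversible operations pay the latency of a human in the loop.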
Understanding what you're defending against sharpens how you build defenses.
AI-assisted reconnaissance: Attackers are using AI to accelerate target profiling, identify exposed credentials and API keys in public repositories, and analyze large datasets for attack paths. The speed of attack preparation has increased dramatically.
AI-generated phishing: The barrier to convincing social engineering has dropped. AI-generated phishing content can be personalized, grammatically correct, and contextually appropriate at scale. Traditional spam filters that look for poor writing or generic templates are less effective.
Adversarial AI agents: As AI agents become more capable and more prevalent in enterprise environments, using them as attack vectors becomes more attractive. A compromised or manipulated AI agent with legitimate credentials can do significant damage quietly.
LLM-specific attacks against your own AI systems: If you're running internal LLMs or using AI services that ingest internal data, adversarial inputs designed to exfiltrate data or manipulate outputs become a real concern.
The AI security problem can feel overwhelming – too many new threat vectors, too few proven defenses. Here's a practical prioritization.
This quarter: inventory every non-human identity in your environment – service accounts, API keys, pipeline credentials, and AI agents – and retire the ones with no clear owner.
This year: give each AI agent its own scoped workload identity, add human-approval checkpoints before irreversible operations, and treat all external content your agents process as untrusted input.
Ongoing: build agent-aware behavioral baselines, move permission grants toward just-in-time models, and track how attackers are using AI against you.
Zero Trust's founding insight – that implicit trust based on network location is the fundamental flaw in enterprise security – applies with equal force to AI agents and machine identities.
The extension for the AI era is this: trust shouldn't be implicit based on any identity signal, human or machine. Every access request should be verified against explicit policy. Every identity should carry only the minimum permissions needed. Every system should be designed assuming that any component can be compromised.
AI doesn't break Zero Trust. But it does expose the parts of Zero Trust that most organizations implemented incompletely. Machine identity governance, east-west traffic control, behavioral analytics for non-human entities – these were always part of a complete Zero Trust architecture.
The AI era just makes it urgent to get them right.
Deepak Gupta is the Co-founder & CEO of GrackerAI and an AI & Cybersecurity expert with 15+ years in digital identity and enterprise security. He writes about cybersecurity, AI, and B2B SaaS at guptadeepak.com.
This is a Security Bloggers Network syndicated blog from Deepak Gupta | AI & Cybersecurity Innovation Leader | Founder's Journey from Code to Scale authored by Deepak Gupta - Tech Entrepreneur, Cybersecurity Author. Read the original post at: https://guptadeepak.com/zero-trust-in-the-age-of-ai-why-the-classic-model-isnt-enough-anymore/