For decades, identity followed a simple premise: authenticate once, authorize statically, audit periodically.
Human identity assumed interactive sessions, predictable access patterns, and human judgment as the final check. A user might have broad permissions because organizations trusted them to exercise discretion.
Workload identity (microservices, CI/CD pipelines, serverless functions) adapted this model for machines. These workloads are deterministic: given the same input, they produce the same output. A Kubernetes pod calls the same APIs in the same sequence every time. Preprovisioned credentials became the default because access patterns were predictable. You can define which resources a container needs before deployment. But even then, the model carries known risks: secrets sprawl, rotation overhead, and static credentials that outlive the workloads they protect.
This worked because both humans and traditional workloads share a critical trait: their behavior falls within known boundaries. Even at scale, you can map the full scope of what a microservice will do.
AI agents violate this assumption entirely.
AI agents aren’t faster workloads or smarter scripts. They’re a new category of actor that exposes gaps in the three foundational assumptions of identity systems.
A microservice follows code. An AI agent follows goals. When you deploy an autonomous coding assistant, you don’t know which files it will access, which APIs it will call, or which services it will invoke. It decides at runtime based on context, reasoning, and its interpretation of the objective.
Preprovisioning credentials becomes impossible. Grant too little access, and the agent fails mid-task. Grant too much, and you’ve created an overprivileged actor that makes autonomous decisions about how to use those permissions.
According to a 2025 SailPoint survey, 80% of organizations using AI agents have observed them acting unexpectedly or performing unauthorized actions.
Traditional OAuth assumes a single subject per token. When an AI agent acts on behalf of a user, the system must track two identities simultaneously: the user who delegated authority and the agent executing the action.
This creates questions that current IAM frameworks can’t answer: Whose permissions govern the action when the user’s and the agent’s scopes differ? How is the agent’s authority constrained to what the user actually delegated? And when something goes wrong, which of the two identities is accountable?
MIT researchers have proposed delegation tokens that explicitly bind user, agent, and scope into a single verifiable artifact. The OpenID Foundation released a white paper in October 2025 addressing these challenges. But production systems are deploying agents faster than standards bodies can define how to secure them.
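While the standards settle, the general shape of a delegation token can be sketched using the actor-claim pattern from OAuth 2.0 Token Exchange (RFC 8693): the token’s `sub` remains the delegating user, while an `act` claim identifies the agent acting on their behalf. A minimal illustration follows; the issuer, agent URI, scope names, and HMAC signing scheme are hypothetical, not a production design:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # hypothetical shared key, for illustration only


def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def issue_delegation_token(user: str, agent: str, scope: list[str], ttl: int = 300) -> str:
    """Bind user, agent, and scope into a single signed, verifiable artifact."""
    claims = {
        "sub": user,            # the human who delegated authority
        "act": {"sub": agent},  # the agent acting on their behalf (RFC 8693 actor claim)
        "scope": scope,         # what the delegation actually covers
        "exp": int(time.time()) + ttl,
    }
    payload = b64(json.dumps(claims, sort_keys=True).encode())
    sig = b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{sig}"


def verify(token: str) -> dict:
    """Check the signature and expiry, then return both identities."""
    payload, sig = token.split(".")
    expected = b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("expired")
    return claims


token = issue_delegation_token("alice@example.com", "agent://coding-assistant", ["repo:read"])
claims = verify(token)
print(claims["sub"], claims["act"]["sub"])  # both identities survive verification
```

The point of the pattern is that neither identity can be dropped in transit: any relying party that verifies the token sees the user, the agent, and the delegated scope together.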
Workload IAM platforms solve real problems: eliminating static secrets, enabling just-in-time credential issuance, enforcing conditional access. These capabilities matter for AI agents, too. But they were designed for a world where machine behavior is predictable.
Workload IAM policies define what a workload can access. With deterministic systems, this aligns with what the workload will access. With AI agents, there’s a gap.
An autonomous agent given access to “read customer data for analysis purposes” might decide that cross-referencing with financial records improves its analysis. It’s not malicious. It’s doing what it was designed to do: reason toward a goal. But the access pattern exceeds what the policy author anticipated.
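One way to narrow that gap is to enforce explicit resource-to-scope mappings at the boundary, so reasoning that drifts past the policy author’s intent is denied rather than silently allowed. A minimal deny-by-default sketch, with invented scope names and resource paths:

```python
# What the policy author actually granted (hypothetical agent and scopes).
ALLOWED_SCOPES = {
    "customer-analysis-agent": {"customers:read"},
}

# Every resource requires an explicit scope; unmapped resources match nothing.
RESOURCE_SCOPE = {
    "/api/customers": "customers:read",
    "/api/financial-records": "financial:read",  # never granted to this agent
}


def authorize(agent: str, resource: str) -> bool:
    """Deny by default: only resources mapped to a granted scope pass."""
    required = RESOURCE_SCOPE.get(resource)
    return required in ALLOWED_SCOPES.get(agent, set())


assert authorize("customer-analysis-agent", "/api/customers")
assert not authorize("customer-analysis-agent", "/api/financial-records")
```

This doesn’t make the agent’s behavior predictable; it makes the unpredicted behavior fail closed instead of succeeding unnoticed.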
Workload IAM assumes credentials are requested for known operations. The system provisions access, the workload executes, the credential expires. This fits CI/CD pipelines and microservices with defined execution windows.
AI agents operate differently. A long-running autonomous agent might need access to different resources at unpredictable intervals. Its “session” is the entire period it’s working toward a goal, which could span hours or days. Short-lived credentials help but don’t solve the fundamental mismatch between how agents operate and how credential systems were designed.
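Just-in-time issuance at least decouples credential lifetime from the agent’s goal-length “session”: each operation gets its own narrowly scoped, short-lived credential. A minimal sketch, with illustrative resource names and TTLs:

```python
import secrets
import time
from dataclasses import dataclass


@dataclass
class Credential:
    token: str
    resource: str
    expires_at: float


def issue_jit(agent: str, resource: str, ttl: float = 60.0) -> Credential:
    """Mint a credential scoped to one resource, valid only briefly."""
    return Credential(secrets.token_hex(16), resource, time.time() + ttl)


def valid_for(cred: Credential, resource: str) -> bool:
    """A credential works only for its own resource, and only until it expires."""
    return cred.resource == resource and time.time() < cred.expires_at


# The agent's "session" may span hours or days, but each credential does not:
cred = issue_jit("agent://research", "s3://reports")
assert valid_for(cred, "s3://reports")
assert not valid_for(cred, "db://customers")  # scoped per resource, not ambient
```

The mismatch remains, as the text notes: the agent must come back to the issuer at unpredictable intervals, so the issuer, not the preprovisioned credential, becomes the real policy enforcement point.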
Workload IAM logs show which identity accessed which resource. But when that identity is an agent acting on behalf of a user, spawning subagents, and making decisions that chain across multiple services, the audit trail fragments.
The question “who did this?” no longer has a simple answer. Was it the user who initiated the workflow? The agent that executed it? The subagent that performed the specific action? The tool the agent invoked? Traditional logging captures events but loses the context needed to establish accountability.
Securing AI agents demands capabilities that extend beyond both human IAM and traditional workload identity.
AI agents need identities that capture both what they are and who they represent. This means a composite identity: the agent’s own verified identity, bound to the identity of the user who delegated authority to it, so that every action is attributable to both.
Aembit’s approach combines agent identity with user identity into a single auditable credential, ensuring that every action maps back to both the autonomous actor and the human who authorized it.
Because agent behavior is unpredictable, access decisions must happen at the moment of each request, not during deployment. This requires just-in-time credential issuance, permissions scoped dynamically to the task at hand, and policy evaluation at request time rather than deploy time.
Accountability requires tracing complete interaction chains: the user who initiated the workflow, the agent that executed it, any subagents it spawned, and the tools it invoked along the way.
This goes beyond logging API calls. It requires capturing the reasoning context that led to each action.
The emergence of AI agents doesn’t invalidate what we’ve built for human and workload identity. It exposes the limits of treating all nonhuman identities the same way.
The unified identity framework of the future isn’t one system for humans and another for machines. It’s a model that recognizes three categories of actors: humans who exercise judgment, deterministic workloads that follow code, and autonomous agents that reason toward goals.
Each category requires different operational controls built on shared principles: verified identity, least-privilege access, and auditable accountability.
Organizations deploying AI agents today face a choice. They can treat agents as faster workloads, applying existing patterns and accepting the gaps. Or they can recognize that agent autonomy demands a new approach to identity: one where trust is continuously verified, credentials are dynamically scoped, and every action traces back to accountable actors.
The companies getting this right are building the identity infrastructure that will define how autonomous systems participate in enterprise environments.