Human vs. AI Identity: Why AI Agents Are Breaking Identity
2026-03-03 18:37:22 · Author: securityboulevard.com

For decades, identity followed a simple premise: authenticate once, authorize statically, audit periodically.

Human identity assumed interactive sessions, predictable access patterns, and human judgment as the final check. A user might have broad permissions because organizations trusted them to exercise discretion.

Workload identity (microservices, CI/CD pipelines, serverless functions) adapted this model for machines. These workloads are deterministic: given the same input, they produce the same output. A Kubernetes pod calls the same APIs in the same sequence every time. Preprovisioned credentials became the default because access patterns were predictable. You can define which resources a container needs before deployment. But even then, the model carries known risks: secrets sprawl, rotation overhead, and static credentials that outlive the workloads they protect.

This worked because both humans and traditional workloads share a critical trait: their behavior falls within known boundaries. Even at scale, you can map the full scope of what a microservice will do.

AI agents violate this assumption entirely.

Where AI Agents Challenge Traditional Identity Models

AI agents aren’t faster workloads or smarter scripts. They’re a new category of actor that exposes gaps in the three foundational assumptions of identity systems.

Autonomous Decision-Making Creates Unpredictable Access Patterns

A microservice follows code. An AI agent follows goals. When you deploy an autonomous coding assistant, you don’t know which files it will access, which APIs it will call, or which services it will invoke. It decides at runtime based on context, reasoning, and its interpretation of the objective.

Preprovisioning credentials becomes impossible. Grant too little access, and the agent fails mid-task. Grant too much, and you’ve created an overprivileged actor that makes autonomous decisions about how to use those permissions.

According to a 2025 SailPoint survey, 80% of organizations using AI agents have observed them acting unexpectedly or performing unauthorized actions.

Delegation Introduces Dual-Identity Complexity

Traditional OAuth assumes a single subject per token. When an AI agent acts on behalf of a user, the system must track two identities simultaneously: the user who delegated authority and the agent executing the action.

This creates questions that current IAM frameworks can’t answer:

  • When an agent exceeds its intended scope, who is accountable?
  • How do you audit an action taken by an agent acting on behalf of a user who didn’t explicitly approve that specific action?
  • If an agent spawns subagents, how do you trace the delegation chain?

MIT researchers have proposed delegation tokens that explicitly bind user, agent, and scope into a single verifiable artifact. The OpenID Foundation released a white paper in October 2025 addressing these challenges. But production systems are deploying agents faster than standards bodies can define how to secure them.
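One existing building block for this kind of dual-identity token is the `act` (actor) claim from OAuth 2.0 Token Exchange (RFC 8693), where nested actors record each hop of delegation. The sketch below uses hypothetical claim values and plain dicts rather than signed JWTs, but it shows how a verifier could walk the chain from the delegating user down through spawned subagents:

```python
# Sketch: a delegation chain expressed with nested "act" (actor) claims,
# as defined by OAuth 2.0 Token Exchange (RFC 8693). Claim values are
# hypothetical; a real token would be a signed JWT.

claims = {
    "sub": "user:alice",                 # the delegating user
    "act": {                             # the agent acting on her behalf
        "sub": "agent:coding-assistant",
        "act": {                         # a subagent spawned by the agent
            "sub": "agent:test-runner"
        },
    },
    "scope": "repo:read ci:trigger",
}

def delegation_chain(claims: dict) -> list[str]:
    """Walk nested 'act' claims from the original subject outward."""
    chain = [claims["sub"]]
    actor = claims.get("act")
    while actor is not None:
        chain.append(actor["sub"])
        actor = actor.get("act")
    return chain

print(delegation_chain(claims))
# ['user:alice', 'agent:coding-assistant', 'agent:test-runner']
```

Because each level of nesting is explicit, "who did this?" can be answered with the full chain rather than a single subject.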

Why Traditional Workload IAM Falls Short

Workload IAM platforms solve real problems: eliminating static secrets, enabling just-in-time credential issuance, enforcing conditional access. These capabilities matter for AI agents, too. But they were designed for a world where machine behavior is predictable.

Static Scoping Breaks Against Dynamic Behavior

Workload IAM policies define what a workload can access. With deterministic systems, this aligns with what the workload will access. With AI agents, there’s a gap.

An autonomous agent given access to “read customer data for analysis purposes” might decide that cross-referencing with financial records improves its analysis. It’s not malicious. It’s doing what it was designed to do: reason toward a goal. But the access pattern exceeds what the policy author anticipated.

Credential Lifecycle Assumptions Don’t Hold

Workload IAM assumes credentials are requested for known operations. The system provisions access, the workload executes, the credential expires. This fits CI/CD pipelines and microservices with defined execution windows.

AI agents operate differently. A long-running autonomous agent might need access to different resources at unpredictable intervals. Its “session” is the entire period it’s working toward a goal, which could span hours or days. Short-lived credentials help but don’t solve the fundamental mismatch between how agents operate and how credential systems were designed.

Audit Trails Become Incoherent

Workload IAM logs show which identity accessed which resource. But when that identity is an agent acting on behalf of a user, spawning subagents, and making decisions that chain across multiple services, the audit trail fragments.

The question “who did this?” no longer has a simple answer. Was it the user who initiated the workflow? The agent that executed it? The subagent that performed the specific action? The tool the agent invoked? Traditional logging captures events but loses the context needed to establish accountability.

What AI Agents Actually Require

Securing AI agents demands capabilities that extend beyond both human IAM and traditional workload identity.

Blended Identity That Tracks Delegation Chains

AI agents need identities that capture both what they are and who they represent. This means:

  • Cryptographic binding between agent identity and delegating user
  • Explicit scope limitations encoded in the credential itself
  • Verifiable attestation that the agent is running in an authorized environment
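As a minimal illustration of the binding idea, the sketch below signs a payload containing user, agent, and scope so that altering any one field invalidates the credential. It uses an HMAC with a hypothetical issuer key to stay self-contained; a production system would use asymmetric signatures (e.g. an ES256-signed JWT) so verifiers don't share the signing key:

```python
import hashlib
import hmac
import json

# Sketch: binding user, agent, and scope into one verifiable artifact.
# SECRET and the field names are illustrative assumptions.

SECRET = b"demo-issuer-key"

def issue_credential(user: str, agent: str, scope: list[str]) -> dict:
    """Issue a credential whose signature covers all three fields."""
    payload = {"user": user, "agent": agent, "scope": scope}
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Recompute the signature; any tampered field fails verification."""
    body = json.dumps(cred["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["sig"], expected)

cred = issue_credential("user:alice", "agent:coding-assistant", ["repo:read"])
assert verify_credential(cred)
```

The point is structural: because user and agent are covered by one signature, neither identity can be swapped out after issuance.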

Aembit’s approach combines agent identity with user identity into a single auditable credential, ensuring that every action maps back to both the autonomous actor and the human who authorized it.

Runtime Policy Evaluation, Not Preprovisioned Access

Because agent behavior is unpredictable, access decisions must happen at the moment of each request, not during deployment. This requires:

  • Evaluating the agent’s current task context before issuing credentials
  • Applying conditional access based on real-time posture, not static roles
  • Scoping credentials to the specific operation, not the agent’s full potential capability
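The requirements above can be sketched as a per-request authorization function: the decision happens when the agent asks, and the resulting credential covers exactly that operation for a short window. All names, policy fields, and the five-minute TTL below are illustrative assumptions, not a real platform's API:

```python
from datetime import datetime, timedelta, timezone

# Sketch of runtime policy evaluation: access is decided per request,
# and the issued credential is scoped to that single operation.

POLICY = {
    "agent:coding-assistant": {
        "allowed_resources": {"repo:acme/app", "ci:acme/app"},
        "allowed_actions": {"read", "trigger"},
    }
}

def authorize(agent: str, resource: str, action: str, task_context: str):
    """Return a narrowly scoped, short-lived grant, or None if denied."""
    rules = POLICY.get(agent)
    if rules is None or resource not in rules["allowed_resources"]:
        return None
    if action not in rules["allowed_actions"]:
        return None
    return {
        "agent": agent,
        "resource": resource,          # this resource only
        "action": action,              # this action only
        "context": task_context,       # recorded for audit
        "expires": (datetime.now(timezone.utc)
                    + timedelta(minutes=5)).isoformat(),
    }

grant = authorize("agent:coding-assistant", "repo:acme/app", "read", "fix bug")
denied = authorize("agent:coding-assistant", "db:customers", "read", "fix bug")
```

Contrast this with preprovisioning: the agent never holds a standing credential for `db:customers` just because it might someday reason its way there.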

Agent-Aware Audit and Observability

Accountability requires tracing complete interaction chains, including:

  • Which agent initiated a workflow
  • Which user delegated authority
  • Which subagents or tools were invoked
  • What decisions the agent made and why

This goes beyond logging API calls. It requires capturing the reasoning context that led to each action.
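One way to keep that context together is a single structured audit event per action, rather than scattered per-service API logs. The record shape below is a hypothetical sketch; field names and values are illustrative:

```python
import json
from datetime import datetime, timezone

# Sketch of an agent-aware audit record: delegating user, delegation
# chain, and the agent's stated reasoning travel with the event.

def audit_event(user: str, chain: list[str], resource: str,
                action: str, reasoning: str) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "delegating_user": user,
        "delegation_chain": chain,     # agent -> subagent -> tool
        "resource": resource,
        "action": action,
        "reasoning": reasoning,        # why the agent chose this step
    }

event = audit_event(
    user="user:alice",
    chain=["agent:coding-assistant", "agent:test-runner"],
    resource="repo:acme/app",
    action="read",
    reasoning="Reading failing test to reproduce the reported bug",
)
print(json.dumps(event, indent=2))
```

With the chain and reasoning captured at write time, "who did this, and why?" can be answered from one record instead of reconstructed across fragmented logs.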

The Path Forward

The emergence of AI agents doesn’t invalidate what we’ve built for human and workload identity. It exposes the limits of treating all nonhuman identities the same way.

The unified identity framework of the future isn’t one system for humans and another for machines. It’s a model that recognizes three categories of actors:

  • Humans: Interactive, judgment-driven, operating within cultural and procedural constraints
  • Deterministic workloads: Predictable, code-driven, operating within defined execution paths
  • Autonomous agents: Goal-directed, reasoning-driven, operating within learned and interpreted boundaries

Each category requires different operational controls built on shared principles: verified identity, least-privilege access, and auditable accountability.

Organizations deploying AI agents today face a choice. They can treat agents as faster workloads, applying existing patterns and accepting the gaps. Or they can recognize that agent autonomy demands a new approach to identity: one where trust is continuously verified, credentials are dynamically scoped, and every action traces back to accountable actors.

The companies getting this right are building the identity infrastructure that will define how autonomous systems participate in enterprise environments.

Source: https://securityboulevard.com/2026/03/human-vs-ai-identity-why-ai-agents-are-breaking-identity/