Zero Trust in the Age of AI: Why the Classic Model Isn’t Enough Anymore
好的,我现在需要帮用户总结一篇文章的内容,控制在100个字以内。用户的要求很明确,不需要以“文章内容总结”或“这篇文章”这样的开头,直接写文章描述即可。 首先,我得仔细阅读这篇文章。文章主要讨论了在AI时代,传统的Zero Trust模型存在的问题。特别是机器身份(如服务账户、API密钥、AI代理)数量远超人类身份,导致传统模型无法有效应对这些非人类实体的安全威胁。 接下来,我需要提取关键点:机器身份数量多、Zero Trust最初针对人类用户、AI代理行为与人类不同、权限管理问题、输入输出保护等。这些都是文章的核心内容。 然后,我要把这些点浓缩成一句话。要确保涵盖主要问题和解决方案,同时保持简洁。比如提到机器身份数量多、传统模型的不足以及扩展Zero Trust的必要性。 最后,检查字数是否在100字以内,并确保语言流畅自然。这样用户就能快速理解文章的主要观点了。 </think> 随着AI的普及,企业环境中机器身份(如服务账户、API密钥和AI代理)数量远超人类身份。传统Zero Trust模型主要针对人类用户,无法有效应对AI代理的自主行为和复杂权限管理需求。为应对这一挑战,需扩展Zero Trust框架以涵盖机器身份治理、行为分析及输入输出保护等新要求。 2026-3-5 16:40:39 Author: securityboulevard.com(查看原文) 阅读量:6 收藏

Zero Trust in the Age of AI: Why the Classic Model Isn't Enough Anymore

Here's a statement that should make any security architect pause:

In most enterprise environments today, machine identities – service accounts, API keys, deployment pipelines, and increasingly AI agents – outnumber human identities by a significant margin. In AI-native companies, that ratio is already 10 to 1 or higher.

Zero Trust was designed with human users as the primary subject. The model assumes identity belongs to a person who authenticated with credentials, uses a device you can evaluate, and accesses applications in recognizable patterns. When something deviates from that pattern, behavioral analytics flags it.

AI agents don't fit that model. And the mismatch is creating security gaps that traditional Zero Trust frameworks weren't designed to close.


What Changed When AI Entered the Picture

The first wave of AI in enterprise environments was relatively contained. Machine learning models trained on internal data, analytics pipelines, recommendation engines. These were workloads – they had identities, made API calls, and could be secured reasonably well with existing approaches.

The second wave – large language models, AI agents, and autonomous systems – is different in kind, not just in degree.

AI agents act autonomously. A human user making access requests follows recognizable patterns. An AI agent running a workflow might make hundreds or thousands of API calls in minutes, query multiple data sources in sequence, generate and execute code, and trigger downstream actions – all autonomously. The behavioral baseline for an AI agent looks nothing like a human user.

AI agents operate across long contexts. An AI workflow might start with a user request, retrieve context from multiple systems, call external APIs, process results, and write output to another system – all in a single execution. This multi-hop, multi-system access pattern is exactly what lateral movement looks like. Distinguishing legitimate agent behavior from adversarial behavior is genuinely hard.

AI agents inherit and amplify permissions. If an AI agent runs in the security context of a human user or a service account with broad permissions, it can do whatever that identity can do. And because it acts autonomously and at machine speed, any permission misuse happens before a human can intervene.

AI agents can be manipulated. Prompt injection attacks – where malicious content embedded in data the agent processes causes it to perform unintended actions – are a real and growing class of attack. An agent that trusts content from external sources without verification can be weaponized by that content.


The Machine Identity Problem

Before even getting to AI agents, there's a foundational problem: most organizations have poor visibility and governance over non-human identities.

When I was scaling a CIAM platform to handle billions of user authentications, we were acutely aware of human identity management. But the service accounts, API keys, and machine-to-machine connections multiplied faster than anyone tracked them. This pattern holds across the industry.

A 2024 assessment across enterprise environments found that organizations with good visibility into their human identity inventory often had three to five times more non-human identities than human ones – and a fraction of the governance.

Here's what that looks like in practice:

  • API keys embedded in code repositories with no expiration and no ownership
  • Service accounts with broad permissions granted for a project that ended two years ago
  • OAuth application grants that were authorized by employees who have since left
  • AI pipeline credentials with read access to data systems far beyond what the pipeline actually needs

Every one of these is a potential pivot point for an attacker. And because machine identities are less visible and less monitored than human ones, they're increasingly the preferred target.

The SolarWinds breach in 2020 exploited precisely this gap. The malicious code inserted into the build pipeline operated using legitimate service account credentials. No human user behaved anomalously. The compromise lived entirely in the machine-to-machine communication layer.


How Zero Trust Needs to Evolve for AI

Classical Zero Trust principles still apply. Never trust, always verify. Least privilege. Assume breach. But the implementation needs to extend in several specific directions to handle AI agents and machine identities effectively.

1. Every AI Agent Needs Its Own Identity

An AI agent should not run under a shared service account or a human user's identity. It should have its own workload identity, with permissions explicitly scoped to what that specific agent needs to do its specific job.

This means:

  • Workload identity credentials issued per-agent, not per-application
  • Short-lived credentials where possible (tokens with brief expiry rotated frequently)
  • No standing permissions – access granted when needed, revoked when the workflow completes
  • Machine identity lifecycle management as rigorous as human identity management

The practical challenge: AI frameworks and orchestration platforms vary widely in how they handle identity. Some make this easy; many require deliberate work to implement correctly. Treat agent identity as a first-class design requirement, not an afterthought.

2. Least Privilege for Agentic Workflows Is Harder – and More Important

A human user with least privilege access typically needs read access to their own work, write access to their own projects, and limited access elsewhere. Scoping that is well understood.

An AI agent that orchestrates a complex workflow might legitimately need to read from a database, call an external API, write to a document store, and trigger a notification – in sequence, not simultaneously. Traditional least privilege models often grant all the permissions the workflow might ever need upfront.

A more sophisticated approach uses just-in-time permission grants: the agent requests and receives the permission it needs for each step, uses it, and releases it. This requires the orchestration layer to mediate permission requests rather than granting all permissions at initialization.

This is architecturally more complex but significantly reduces the blast radius if the agent is compromised or manipulated mid-workflow.

3. Protect AI Agent Inputs and Outputs

Prompt injection is the Zero Trust problem for the AI data plane. If an AI agent processes data from external sources – web content, emails, documents, user inputs – any of that content could contain instructions intended to manipulate the agent's behavior.

Zero Trust for AI inputs means:

  • Treating all external content as untrusted data, not as instructions
  • Implementing input sanitization and validation before agent processing
  • Separating the trust level of agent instructions (from your system prompt, your code) from the trust level of agent inputs (user data, external content)
  • Monitoring agent outputs for anomalies that might indicate manipulation

This is a relatively new problem domain, and the tooling is still maturing. But the principle maps directly to existing Zero Trust thinking: explicit verification, never implicit trust.

4. Behavioral Baselines for AI Agents

UEBA (User and Entity Behavior Analytics) was built for human users and, to some extent, traditional service accounts. AI agents behave differently and require different baselines.

An AI agent running normally might make 500 API calls in ten minutes. That would be catastrophically anomalous for a human user. Building baselines that correctly distinguish normal high-volume agent behavior from anomalous agent behavior – excessive data access, calls to unexpected endpoints, anomalous output volumes – requires agent-aware analytics.

This means:

  • Agent activity should be logged separately from human user activity, with agent-specific context
  • Behavioral baselines should be established per-agent-type, not applied from human user templates
  • Anomaly detection rules should account for the burst-and-pause pattern typical of AI workflows
  • Threshold violations should trigger agent suspension and review, not just alerting

5. Human-in-the-Loop for High-Stakes Decisions

The most dangerous class of AI agent action is the irreversible high-stakes operation: sending an external communication, modifying production data, executing a financial transaction, deprovisioning an account.

Zero Trust for AI agents should include explicit checkpoints for these operations, where a human must review and approve before the agent proceeds. This isn't a performance optimization – it's a security control that limits the damage an adversarial manipulation can cause.

Designing these checkpoints into the workflow architecture from the beginning is far easier than retrofitting them later.


The Threat Landscape AI Is Creating

Understanding what you're defending against sharpens how you build defenses.

AI-assisted reconnaissance: Attackers are using AI to accelerate target profiling, identify exposed credentials and API keys in public repositories, and analyze large datasets for attack paths. The speed of attack preparation has increased dramatically.

AI-generated phishing: The barrier to convincing social engineering has dropped. AI-generated phishing content can be personalized, grammatically correct, and contextually appropriate at scale. Traditional spam filters that look for poor writing or generic templates are less effective.

Adversarial AI agents: As AI agents become more capable and more prevalent in enterprise environments, using them as attack vectors becomes more attractive. A compromised or manipulated AI agent with legitimate credentials can do significant damage quietly.

LLM-specific attacks against your own AI systems: If you're running internal LLMs or using AI services that ingest internal data, adversarial inputs designed to exfiltrate data or manipulate outputs become a real concern.


Practical Steps for CISOs and Security Teams Today

The AI security problem can feel overwhelming – too many new threat vectors, too few proven defenses. Here's a practical prioritization.

This quarter:

  • Inventory all AI tools and services currently in use across the organization (the number is higher than IT knows)
  • Audit all service accounts and API keys associated with AI workloads; revoke anything not actively used
  • Implement conditional access policies that apply specifically to AI service accounts
  • Define acceptable-use policy for AI tools and communicate it explicitly

This year:

  • Implement workload identity management for AI agents you develop or deploy
  • Establish agent-specific logging and build initial behavioral baselines
  • Design human-in-the-loop controls for irreversible agent actions
  • Conduct a prompt injection threat assessment for any internal LLM deployments
  • Extend your access certification process to cover AI tool authorizations

Ongoing:

  • Treat AI agent identity with the same rigor as privileged human identity
  • Incorporate AI attack scenarios into red team exercises
  • Stay current with emerging standards (the OAuth working group is actively developing workload identity extensions; NIST is updating SP 800-207 guidance to incorporate AI considerations)

The Core Insight

Zero Trust's founding insight – that implicit trust based on network location is the fundamental flaw in enterprise security – applies with equal force to AI agents and machine identities.

The extension for the AI era is this: trust shouldn't be implicit based on any identity signal, human or machine. Every access request should be verified against explicit policy. Every identity should carry only the minimum permissions needed. Every system should be designed assuming that any component can be compromised.

AI doesn't break Zero Trust. But it does expose the parts of Zero Trust that most organizations implemented incompletely. Machine identity governance, east-west traffic control, behavioral analytics for non-human entities – these were always part of a complete Zero Trust architecture.

The AI era just makes it urgent to get them right.


Deepak Gupta is the Co-founder & CEO of GrackerAI and an AI & Cybersecurity expert with 15+ years in digital identity and enterprise security. He writes about cybersecurity, AI, and B2B SaaS at guptadeepak.com.

*** This is a Security Bloggers Network syndicated blog from Deepak Gupta | AI &amp; Cybersecurity Innovation Leader | Founder&#039;s Journey from Code to Scale authored by Deepak Gupta - Tech Entrepreneur, Cybersecurity Author. Read the original post at: https://guptadeepak.com/zero-trust-in-the-age-of-ai-why-the-classic-model-isnt-enough-anymore/


文章来源: https://securityboulevard.com/2026/03/zero-trust-in-the-age-of-ai-why-the-classic-model-isnt-enough-anymore/
如有侵权请联系:admin#unsafe.sh