The Silent Threat to the Agentic Enterprise: Why BOLA is the #1 Risk for AI Agents
Summary: as organizations deploy autonomous AI agents, Broken Object Level Authorization (BOLA) has emerged as the primary security threat. When an AI agent acts through an API that does not properly verify authorization, the result can be data leakage or abuse. The article examines the three compounding risks of pairing BOLA with AI, machine-speed attacks, the context gap, and the "shadow" MCP problem, and argues that new strategies and tooling are needed to meet the challenge.

In the race to deploy autonomous AI agents, organizations are inadvertently building on a foundation of shifting sand. While security teams have spent the last year focused on “Prompt Injection” and “Model Poisoning,” a much older, more dangerous adversary has quietly become the primary attack vector for the agentic era: Broken Object Level Authorization (BOLA).

Market data and search trends indicate a significant surge in interest regarding BOLA as organizations move from AI experimentation to production. The industry is waking up to a hard truth: when an AI agent takes an action, it does so via an API. And if that API isn’t secured against BOLA, your “autonomous helper” could easily become an “autonomous data exfiltrator.”

The Agentic AI Action Layer: A 1-to-Many Risk Multiplier

At Salt Security, we focus on the Agentic AI Action Layer. This is the ecosystem of APIs and Model Context Protocol (MCP) servers that allow an AI agent to move beyond just “chatting” and start “doing,” such as processing refunds, accessing customer records, or updating supply chain databases.

In a traditional application, a human user interacts with one record at a time. In the agentic enterprise, a single AI agent might be authorized to interact with thousands of objects across dozens of systems. This creates a “1-to-many” connection pattern that traditional security tools, such as WAFs and API Gateways, weren’t designed to govern.

Why BOLA + AI Agents = A Perfect Storm

BOLA occurs when an API fails to verify that an authenticated caller, whether a human user or, as here, an autonomous agent, actually has permission to access the specific resource it is requesting. It is essentially the "hotel key card" problem: your key gets you into the hotel (authentication), but it shouldn't open every door in the building (authorization).
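The distinction is easy to see in code. Below is a minimal, hypothetical FastAPI sketch (the endpoint, datastore, and helper names are all illustrative, not from any real system): the first handler authenticates the caller but never checks ownership, while the second adds the object-level check that closes the BOLA hole.

```python
# Minimal sketch of the "hotel key card" problem in FastAPI.
# All names (invoices, current_agent, the routes) are illustrative.
from fastapi import Depends, FastAPI, HTTPException

app = FastAPI()

# Toy datastore: invoice ID -> owning tenant
invoices = {"inv_1001": {"tenant": "acme", "amount": 42.0}}

def current_agent():
    # Stand-in for real authentication (e.g., validating a bearer token).
    # Authentication proves *who* is calling, not *what* they may touch.
    return {"tenant": "acme"}

# VULNERABLE: any authenticated caller can fetch any invoice by ID.
@app.get("/v1/invoices/{invoice_id}")
def get_invoice(invoice_id: str, agent=Depends(current_agent)):
    invoice = invoices.get(invoice_id)
    if invoice is None:
        raise HTTPException(status_code=404)
    return invoice  # BOLA: no check that the agent owns this object

# FIXED: verify the caller is authorized for this specific object.
@app.get("/v2/invoices/{invoice_id}")
def get_invoice_checked(invoice_id: str, agent=Depends(current_agent)):
    invoice = invoices.get(invoice_id)
    if invoice is None or invoice["tenant"] != agent["tenant"]:
        # Return 404 rather than 403, so an attacker cannot use the
        # error code to confirm which object IDs exist.
        raise HTTPException(status_code=404)
    return invoice
```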

According to the OWASP API Security Top 10, BOLA remains the most common and damaging vulnerability in the API ecosystem. When you introduce AI agents into this mix, this risk is amplified by three critical factors:

1. The Machine-Scale Speed of Abuse

The threat of BOLA is no longer theoretical; it is currently the primary way AI infrastructure is breached. The June 2025 McHire (McDonald’s) data breach serves as a stark warning for the agentic era. In that incident, an AI-powered hiring chatbot was compromised when researchers exploited an Insecure Direct Object Reference (IDOR/BOLA) flaw in the backend API. By simply manipulating applicant IDs, they gained unauthorized access to the PII and private chat histories of up to 64 million job seekers.

In the agentic era, an attacker doesn’t need to guess object IDs manually. By compromising an agent, or simply using a legitimate agent with “Excessive Agency” (a top risk in the OWASP Top 10 for LLMs), an attacker can programmatically iterate through object IDs at machine speed. What used to be a “low-and-slow” human attack is now an instantaneous machine-scale breach.
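To make that speed difference concrete, here is a rough sketch of what such an enumeration looks like from the attacker's side. The endpoint, token, and sequential ID scheme are hypothetical, loosely modeled on public descriptions of the McHire flaw; the point is that a handful of lines turns a missing authorization check into bulk exfiltration.

```python
# Sketch of machine-speed ID enumeration against a BOLA-vulnerable API.
# The base URL, credential, and ID range are hypothetical.
import requests

BASE = "https://api.example.com/v1/applicants"  # hypothetical endpoint
TOKEN = "agent-bearer-token"                    # any valid credential

leaked = []
for applicant_id in range(1, 100_000):  # sequential IDs, as in IDOR flaws
    resp = requests.get(
        f"{BASE}/{applicant_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=5,
    )
    if resp.status_code == 200:
        # PII comes back because the API never checks object ownership.
        leaked.append(resp.json())
```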

2. The Context Gap

Traditional security tools see only a series of individually "valid" API calls. To a WAF or Gateway, a request for record_A looks identical to a request for record_B. Yet an agent suddenly requesting 5,000 records it has never touched before signals clearly malicious intent, and that intent is invisible when each call is inspected in isolation.

This risk is compounded by “Vibe Coding,” where AI agents generate their own code. Research has shown that AI agents frequently generate applications with native BOLA flaws, such as failing to check if a userId in a URL matches the authenticated session. Without behavioral AI to baseline “normal” agent behavior, these logic-based exploits go completely undetected.
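As a rough illustration of what baselining "normal" agent behavior means, the toy detector below flags an agent whose accesses suddenly include a burst of objects it has never touched before. This is a deliberate simplification (real behavioral AI models many more signals), and every name and threshold here is illustrative.

```python
# Toy behavioral baseline: flag an agent whose object access suddenly
# departs from its history. Only counts never-before-seen object IDs
# per batch; a production system would model far richer features.
from collections import defaultdict

history: defaultdict[str, set] = defaultdict(set)  # agent -> IDs seen
NOVELTY_THRESHOLD = 100  # tune to the agent's normal workload

def looks_anomalous(agent_id: str, object_ids: list[str]) -> bool:
    """Return True if this batch of accesses departs from the baseline."""
    new_objects = [o for o in object_ids if o not in history[agent_id]]
    history[agent_id].update(object_ids)
    # An agent touching thousands of objects it has never seen is the
    # "context gap" signal that a WAF cannot see from any single call.
    return len(new_objects) > NOVELTY_THRESHOLD
```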

3. The “Shadow” MCP Problem

The rapid adoption of the Model Context Protocol (MCP), the new standard for connecting LLMs to tools and data, allows developers to spin up servers that give agents access to internal systems in minutes. These MCP servers often bypass central security review, creating a new generation of "Shadow APIs" that lack the robust authorization checks needed to stop BOLA. Industry trends show a massive pivot: agentic consumers are quickly becoming the primary drivers of API traffic, yet many of these connections remain invisible to traditional security governance.
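The pattern is easy to reproduce. The sketch below, which assumes the FastMCP helper from the official MCP Python SDK and uses an entirely made-up tool and datastore, shows how naturally a shadow MCP server ships with BOLA built in: the tool ties no object to any caller's entitlements.

```python
# A "shadow" MCP server of the kind that often skips security review.
# Assumes the FastMCP helper from the official MCP Python SDK; the
# tool, its data source, and the missing check are all illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-records")

CUSTOMERS = {"cust_1": {"owner": "team-a", "ssn": "xxx-xx-1234"}}

@mcp.tool()
def get_customer(customer_id: str) -> dict:
    """Fetch a customer record for the calling agent."""
    # BOLA by construction: any connected agent can read any record,
    # because nothing links customer_id to the caller's entitlements.
    return CUSTOMERS.get(customer_id, {})

if __name__ == "__main__":
    mcp.run()
```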

Securing the Future: The Three Pillars of Agentic Governance

You cannot secure the agentic enterprise with legacy tools. To protect your AI initiatives from BOLA and other logic-based attacks, you need a strategy that covers the full API lifecycle:

  1. See It: You must maintain a real-time, continuous inventory of every API and MCP server your agents use. If you can’t see the connection, you can’t secure the object.
  2. Govern It: Move beyond static "Developer Experience" policies to "Agentic Experience" governance. This means enforcing granular, context-aware access controls within the Agentic AI Action Layer to verify every object request in real time (a simplified sketch of such a check follows this list).
  3. Protect It: Use cloud-scale AI to baseline agent behavior. Salt’s patented Intent Analysis can distinguish between a helpful agent doing its job and a compromised agent abusing BOLA to exfiltrate data.
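As a rough sketch of what pillar 2's per-object verification could look like in code, consider a gateway-side guard that consults an explicit grant table before any object request is forwarded. This is an illustrative entitlement check under made-up names, not Salt's actual implementation.

```python
# Minimal "Govern It" sketch: verify each object request against an
# agent's entitlements before forwarding it. The grant table and the
# request shape are hypothetical.
from dataclasses import dataclass

# agent_id -> set of (resource_type, object_id) grants
ENTITLEMENTS: dict[str, set[tuple[str, str]]] = {
    "refund-agent": {("order", "ord_123"), ("order", "ord_456")},
}

@dataclass
class ObjectRequest:
    agent_id: str
    resource_type: str
    object_id: str

def authorize(req: ObjectRequest) -> bool:
    """Allow only object accesses explicitly granted to this agent."""
    grants = ENTITLEMENTS.get(req.agent_id, set())
    return (req.resource_type, req.object_id) in grants

# A legitimate request passes; an out-of-scope one is denied.
assert authorize(ObjectRequest("refund-agent", "order", "ord_123"))
assert not authorize(ObjectRequest("refund-agent", "order", "ord_999"))
```

The design choice worth noting is that grants are explicit and object-scoped rather than role-scoped: an agent authorized for "orders" in general would reintroduce exactly the 1-to-many exposure described above.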

Conclusion

The promise of the Agentic Enterprise is too great to ignore, but the speed of its adoption is currently outstripping our ability to secure it. In the world of autonomous agents, the API isn’t just a technical detail; it is the only control plane that matters. Organizations that fail to address BOLA at the Agentic AI Action Layer aren’t just deploying AI; they are deploying an unmanaged liability. True innovation requires the confidence that your agents are operating exactly as intended.

If you want to learn more about Salt and how we can help you, please contact us, schedule a demo, or visit our website. You can also get a free API Attack Surface Assessment from Salt Security’s research team and learn what attackers already know.

*** This is a Security Bloggers Network syndicated blog from Salt Security blog authored by Eric Schwake. Read the original post at: https://salt.security/blog/the-silent-threat-to-the-agentic-enterprise-why-bola-is-the-1-risk-for-ai-agents

