In the race to deploy autonomous AI agents, organizations are inadvertently building on a foundation of shifting sand. While security teams have spent the last year focused on “Prompt Injection” and “Model Poisoning,” a much older, more dangerous adversary has quietly become the primary attack vector for the agentic era: Broken Object Level Authorization (BOLA).
Market data and search trends show interest in BOLA surging as organizations move from AI experimentation to production. The industry is waking up to a hard truth: when an AI agent takes an action, it does so via an API. And if that API isn’t secured against BOLA, your “autonomous helper” could easily become an “autonomous data exfiltrator.”
At Salt Security, we focus on the Agentic AI Action Layer. This is the ecosystem of APIs and Model Context Protocol (MCP) servers that allow an AI agent to move beyond just “chatting” and start “doing,” such as processing refunds, accessing customer records, or updating supply chain databases.
In a traditional application, a human user interacts with one record at a time. In the agentic enterprise, a single AI agent might be authorized to interact with thousands of objects across dozens of systems. This creates a “1-to-many” connection pattern that traditional security tools, such as WAFs and API Gateways, weren’t designed to govern.
BOLA occurs when an API fails to verify that an authenticated user, or in this case an autonomous agent, has permission to access the specific object it is requesting. It is essentially the “hotel key card” problem: your key gets you into the hotel (authentication), but it shouldn’t open every door in the building (authorization).
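A minimal sketch makes the difference concrete. The endpoint names and record store below are hypothetical, and FastAPI is used purely for illustration: the first handler only proves who the caller is, while the second also checks that the caller is entitled to the specific object it names.

```python
# Sketch of the BOLA pattern: authentication proves who the caller is,
# but the handler must still check entitlement to the named object.

from fastapi import Depends, FastAPI, HTTPException

app = FastAPI()

# Toy in-memory store; in a real system this would be a database of customer records.
RECORDS = {
    "rec_1001": {"owner": "agent_alpha", "data": "invoice for customer A"},
    "rec_1002": {"owner": "agent_beta", "data": "invoice for customer B"},
}

def current_principal() -> str:
    # Stand-in for real token validation; returns the authenticated identity.
    return "agent_alpha"

# Vulnerable: any authenticated caller can read any record (BOLA).
@app.get("/v1/records/{record_id}")
def get_record(record_id: str, principal: str = Depends(current_principal)):
    record = RECORDS.get(record_id)
    if record is None:
        raise HTTPException(status_code=404)
    return record  # no check that `principal` is allowed to see `record_id`

# Fixed: object-level authorization is enforced on every access.
@app.get("/v2/records/{record_id}")
def get_record_checked(record_id: str, principal: str = Depends(current_principal)):
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != principal:
        # Return 404 in both cases to avoid confirming that the object exists.
        raise HTTPException(status_code=404)
    return record
```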
According to the OWASP API Security Top 10, BOLA remains the most common and damaging vulnerability in the API ecosystem. When you introduce AI agents into the mix, that risk is amplified by three critical factors:
The threat of BOLA is no longer theoretical; it is currently the primary way AI infrastructure is breached. The June 2025 McHire (McDonald’s) data breach serves as a stark warning for the agentic era. In that incident, an AI-powered hiring chatbot was compromised when researchers exploited an Insecure Direct Object Reference (IDOR/BOLA) flaw in the backend API. By simply manipulating applicant IDs, they gained unauthorized access to the PII and private chat histories of up to 64 million job seekers.
In the agentic era, an attacker doesn’t need to guess object IDs manually. By compromising an agent, or simply using a legitimate agent with “Excessive Agency” (a top risk in the OWASP Top 10 for LLMs), an attacker can programmatically iterate through object IDs at machine speed. What used to be a “low-and-slow” human attack is now an instantaneous machine-scale breach.
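A few lines are enough to illustrate the mechanics. The endpoint, token, and ID range below are hypothetical; the point is that every request in the loop is individually authenticated and looks legitimate to the API.

```python
# Illustrative only: BOLA enumeration driven at machine speed. Each request
# is authenticated and individually "valid"; the breach is the aggregate.

import requests

BASE = "https://api.example.com/v1/applicants"          # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <legitimately-issued-agent-token>"}

leaked = []
for candidate_id in range(1, 100_000):                  # sequential IDs make this trivial
    resp = requests.get(f"{BASE}/{candidate_id}", headers=HEADERS, timeout=5)
    if resp.status_code == 200:                         # the server never asked "is this yours?"
        leaked.append(resp.json())
```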
Traditional security tools see a series of individually “valid” API calls. To a WAF or gateway, the calls for record_A and record_B look identical. The intent behind an agent suddenly requesting 5,000 records it has never touched before, however, is clearly malicious.
This risk is compounded by “Vibe Coding,” where AI agents generate application code on developers’ behalf. Research has shown that these agents frequently produce applications with native BOLA flaws, such as failing to check whether the userId in a URL matches the authenticated session. Without behavioral AI to baseline “normal” agent behavior, these logic-based exploits go completely undetected.
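One way to approximate that baselining, sketched below with illustrative identifiers and thresholds rather than any vendor’s actual detection logic, is to track which objects each agent historically touches and flag a sudden burst of never-before-seen objects.

```python
# Minimal sketch of behavioral baselining for agent API traffic:
# flag an agent that starts touching many objects it has never accessed before.

from collections import defaultdict

class AgentBaseline:
    def __init__(self, novel_object_threshold: int = 50):
        self.seen: dict[str, set[str]] = defaultdict(set)        # agent -> objects touched historically
        self.novel_in_window: dict[str, int] = defaultdict(int)  # agent -> new objects in current window
        self.threshold = novel_object_threshold

    def observe(self, agent_id: str, object_id: str) -> bool:
        """Record one API call; return True if the agent's behavior looks anomalous."""
        if object_id not in self.seen[agent_id]:
            self.novel_in_window[agent_id] += 1
            self.seen[agent_id].add(object_id)
        return self.novel_in_window[agent_id] > self.threshold

    def reset_window(self) -> None:
        """Call periodically (e.g., every few minutes) to start a new observation window."""
        self.novel_in_window.clear()

baseline = AgentBaseline()
for i in range(5_000):  # an agent suddenly walks 5,000 unfamiliar records
    if baseline.observe("refund-agent-7", f"rec_{i}"):
        print("ALERT: refund-agent-7 is accessing objects far outside its baseline")
        break
```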
The Model Context Protocol (MCP), the emerging standard for connecting LLMs to data, lets developers spin up servers that give agents access to local data in minutes, and adoption has been rapid. These MCP servers often bypass central security reviews, creating a new generation of “Shadow APIs” that lack the robust authorization checks needed to stop BOLA. Industry trends show a massive pivot: agentic consumers are quickly becoming the primary drivers of API traffic, yet many of these connections remain invisible to traditional security governance.
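The sketch below shows how little code such a “Shadow API” requires, assuming the FastMCP helper from the current MCP Python SDK and a hypothetical record store: the tool hands any connected agent whatever record it asks for, with no object-level authorization between the tool and the data.

```python
# Minimal MCP server exposing a data-access tool with no authorization checks:
# classic BOLA, now surfaced through an MCP tool instead of a REST route.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-records")

# Hypothetical data a well-meaning developer wires up for an internal agent.
CUSTOMER_RECORDS = {
    "cust_001": {"name": "Alice", "ssn_last4": "1234"},
    "cust_002": {"name": "Bob", "ssn_last4": "5678"},
}

@mcp.tool()
def get_customer(customer_id: str) -> dict:
    """Fetch a customer record by ID."""
    # No check of which agent is calling or what it is entitled to see.
    return CUSTOMER_RECORDS.get(customer_id, {})

if __name__ == "__main__":
    mcp.run()
```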
You cannot secure the agentic enterprise with legacy tools. To protect your AI initiatives from BOLA and other logic-based attacks, you need a strategy that covers the full API lifecycle, from discovering every API and MCP server an agent can reach to enforcing object-level authorization and baselining agent behavior at runtime.
The promise of the Agentic Enterprise is too great to ignore, but the speed of its adoption is currently outstripping our ability to secure it. In the world of autonomous agents, the API isn’t just a technical detail; it is the only control plane that matters. Organizations that fail to address BOLA at the Agentic AI Action Layer aren’t just deploying AI; they are deploying an unmanaged liability. True innovation requires the confidence that your agents are operating exactly as intended.
If you want to learn more about Salt and how we can help you, please contact us, schedule a demo, or visit our website. You can also get a free API Attack Surface Assessment from Salt Security’s research team and learn what attackers already know.
*** This is a Security Bloggers Network syndicated blog from Salt Security blog authored by Eric Schwake. Read the original post at: https://salt.security/blog/the-silent-threat-to-the-agentic-enterprise-why-bola-is-the-1-risk-for-ai-agents