Enterprises are accelerating AI adoption with transformative results- 78% already use AI and 50%+ of them are seeing revenue increases. As implementations evolve from simple assistants to autonomous agents, we’re reaching an inflection point where security approaches must advance beyond acceptable use policies and human oversight. Solutions must address the growing risk landscape—data leakage, autonomous agent actions, and non-deterministic systems making enterprise decisions.
While open source tools and infrastructure providers supply the building blocks, they create a fragmented landscape that requires domain expertise to unify effectively. As SaaS providers turn into agent companies, businesses that deliver AI agents as products, they should be the ones embedding security into their offering rather than relying on their customers or open source tools to fill the security gaps.
Data security best practices for AI have become clear—public models shouldn’t train on enterprise data, enterprise data should be permissions-enforced, and AI responses need grounding and citations in enterprise data to reduce hallucinations.
Yet, the implementation burden of data security has largely fallen on enterprise customers. It’s often assumed that enterprise data has been properly permissioned from the start, when in reality, data is messy and enforcement of permissions and retention policies hasn’t kept pace with rapid data growth. This challenge is compounded by the fact that many data cleanup methods rely on organizations manually classifying content as ‘highly sensitive’—a process far too slow for the pace of AI innovation.
Reliable AI agents require two things: Strict permission boundaries to protect sensitive data, and comprehensive access to organizational context—content, people, and processes—to understand how work gets done and operate autonomously.
Agent companies should be more active partners, helping to provide a means for easily getting enterprise data AI-ready. This means understanding how the data will be used, designing security measures around those use cases, and automating sensitive data detection.
AI models are designed to be helpful, with an inherent bias toward saying “yes” rather than refusing requests that may breach security and confidentiality, making them vulnerable to attacks. Attackers use prompt injection techniques, framing malicious requests as hypothetical scenarios, roleplay, or urgent situations that compel the model to assist. Many models fall victim to these attacks, including the well-known DAN (Do Anything Now) prompt. While AI security has advanced, there remains a need for continued innovation as attack methods evolve.
In a multi-model environment where built-in protections vary across models, it falls on agent companies to provide a unified security layer against AI attacks. Attack types also vary by use case, for example, a customer service agent might face social engineering attempts to extract customer data, while a code generation agent could be targeted with malicious code injection to compromise development environments.
The transition to agentic AI introduces a new risk profile. Agents can take actions back into enterprise source applications, initially with human-in-the-loop and eventually progressing to autonomous actions. Companies want graduated trust models, where initial deployments start restrictive and gradually open up the action space as agents prove that they work reliably and consistently. Agent rollouts benefit from careful controls around who can create agents, what actions they can take, and how they get triggered, whether manually or via subscription-based automation.
As agents get deployed across the enterprise, it’s just as important to keep them aligned with their intended purpose, making sure they don’t drift from their original goals and that they follow the organization’s use policies. This is a big frontier that calls for further research on both performance and security. At Glean, for example, the way we design and evaluate quickstart agents focuses on how well they complete tasks and stay within defined boundaries, balancing effectiveness with safe execution.
Agents go beyond LLM calls, combining enterprise data and actions to effectively get work done. That’s why alignment must look beyond the model to the wider system implementation.
An interesting pattern emerges when examining AI system implementations: The correlation between security and performance metrics. Systems that effectively minimize hallucinations tend to maintain better agent accuracy. Agents that stay within defined task boundaries typically perform better at their intended functions.
Effective security depends on understanding the entire agentic workflow: Data flows, decision-making, action execution, and result validation. And, also understanding each agent: The intent of the agent creator, the role of the agent, the actions taken and more. In sum, the system and the context. This depth of understanding comes only from companies that combine agent capabilities with real-world deployments.
That’s why agent security can’t be an afterthought or handled with standalone tools; it must be built into the platform itself. Enterprise customers also shouldn’t bear the burden of detecting prompt injection or enforcing agent guardrails; they don’t have the models or the system-level context to block these risks effectively. Agent platforms must take ownership of securing the systems they create, because they’re the only ones in a position to do it right.