Your AI Agents Aren’t Hidden. They’re Ungoverned. It’s Time to Act

“Start by doing what’s necessary; then do what’s possible; and suddenly you are doing the impossible.” – St. Francis of Assisi

In the 13th century, St. Francis wasn’t talking about digital systems, but his advice remains startlingly relevant for today’s AI governance challenges.

Enterprises are suddenly full of AI agents such as copilots embedded in SaaS platforms, LLM-powered automations in CI/CD pipelines, and countless scripts making API calls to model providers.

The Visibility Trap

Every security team starts with the same mantra: “We need visibility first.” But with AI agents, discovery has become a trap. The pursuit of perfect visibility can keep teams paralyzed while ungoverned agents run free. Discovery feels safe. Discovery is measurable and non-controversial. Discovery is an endless pursuit. However, discovery on its own doesn’t move you closer to the ultimate goal of security and governance. AI agents aren’t hiding behind obscure APIs. They’re in plain sight with excessive access! Waiting for a perfect inventory before enforcing governance only extends your exposure window. The truth is you already have visibility. It’s imperfect and incomplete, but it’s there.

The question isn’t whether you can see your AI agents. It’s what you’re going to do about the ones you already know exist.

Don’t let perfect be the enemy of good.

For those of us in the security business, the saying “Don’t let perfect be the enemy of good” (or, in this case, “good enough”) is not just good advice; it is the pragmatic foundation for making progress in an imperfect world. The same pragmatism applies to improving visibility. The best way to improve your visibility isn’t to deploy another scanning tool; it’s to gain momentum by securing what you do know. Governing those agents creates a framework, an identity fabric you can then use to discover what you’re missing and reach your ultimate goal.

Stop waiting for perfect discovery! You already know about AI agents in your environment:

  • Network logs show the traffic. Your SIEM, proxy, or firewall already captures outbound calls to OpenAI, Anthropic, Gemini, and Azure AI endpoints. The patterns are there: who’s calling, how often, from where.
  • API keys reveal the agents. Check your LLM provider portals. Every API key represents at least one agent. Then look for those keys in CI/CD pipelines, config repos, and secret stores. You’ll be surprised how many exist, some legitimate, others forgotten experiments.
  • Code repositories don’t lie. Your software composition analysis tools can already scan for AI SDKs: openai, anthropic, transformers, langchain, llama-index, mcp. You just need to look; a minimal scan sketch follows this list.
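To make that repository check concrete, here is a minimal Python sketch that walks a source tree and flags files importing common AI SDKs. The SDK list and the ./src path are assumptions to adapt to your own stack, and this complements rather than replaces your software composition analysis tooling.

```python
import re
from pathlib import Path

# Common AI SDK module names to look for (an assumed, non-exhaustive list;
# extend it to match the libraries your teams actually use).
AI_SDKS = ["openai", "anthropic", "transformers", "langchain", "llama_index", "mcp"]

# Matches "import openai", "from anthropic import ...", and so on.
IMPORT_RE = re.compile(
    r"^\s*(?:import|from)\s+(" + "|".join(AI_SDKS) + r")\b", re.MULTILINE
)

def find_ai_imports(repo_root: str) -> dict[str, set[str]]:
    """Return a mapping of file path -> AI SDKs imported in that file."""
    hits: dict[str, set[str]] = {}
    for path in Path(repo_root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # unreadable file; skip it rather than fail the whole scan
        sdks = {m.group(1) for m in IMPORT_RE.finditer(text)}
        if sdks:
            hits[str(path)] = sdks
    return hits

if __name__ == "__main__":
    # "./src" is a placeholder; point this at the repositories you already
    # clone for software composition analysis.
    for file, sdks in sorted(find_ai_imports("./src").items()):
        print(f"{file}: {', '.join(sorted(sdks))}")
```

Even a rough inventory like this gives you something concrete to rank and govern.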

The Real Problem Isn’t Discovery, It’s Control

Are you governing the AI agents you already know about in your environment? Most of them authenticate with static API keys, creating a fast-growing secrets sprawl problem in which long-lived credentials grant uncontrolled access to sensitive data and systems.

Even worse, these agents operate inside legacy environments that were designed for human users, not autonomous actors. These systems are overpermissioned and lack fine-grained access control, making them fertile ground for AI agents to overreach unintentionally as they pursue their assigned tasks. While the intention of your development teams is to enable the business, the unintended consequences could do more harm than good. 

The risks compound quickly:

  1. Credential exposure: A single leaked OpenAI or Anthropic API key can trigger a $50,000 usage bill, expose sensitive data and IP, and create compliance violations, often with no clear attribution or containment path.
  2. Excessive access: Legacy environments grant overly broad permissions, allowing AI agents to explore and act far beyond their intended scope. These systems were built for trusted humans, not autonomous workloads capable of probing every interface without oversight.
  3. Governance blindness: As AI agents multiply, manual reviews and static policies fall behind dynamic behavior, leaving organizations blind to what agents are doing or how data is being used.

Start with the Riskiest Agents You Already Know

Here’s the path forward:

  1. Identify and prioritize. Take the AI agents you already know about and rank them by risk. Which ones have access to customer data? Which ones run in production? Which ones use API keys that have been copy-pasted across multiple systems?
  2. Eliminate API key sprawl first. Start eradicating static API keys for your highest-risk LLM interactions. Deploy Defakto Mind to replace them with dynamic, verifiable identities that authenticate and authorize every action at runtime. This immediately removes the threat of credential leakage and ensures each agent operates only within approved boundaries. (A generic sketch of this pattern follows the list below.)
  3. Use identity to enhance visibility. Once agents have workload identities, you can track their actual access patterns. You’ll see what they’re really doing, not just where they’re calling. This is how visibility compounds. Identity creates an audit trail that reveals gaps in your original discovery.
  4. Extend access controls. Deploy distributed access control policies through Defakto Mind that restrict what those agents can access in the rest of your environment. Protect your systems from your agents. Fine-grained, identity-based controls contain the blast radius of overly broad permissions in legacy systems, ensuring agents operate only where they should.
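The exact mechanics of step 2 depend on the product you deploy, and the sketch below is not Defakto’s API. It only illustrates the general pattern the step describes: instead of embedding a static provider key, the agent presents a short-lived workload credential (here read from an environment variable) to an internal token broker, which returns a narrowly scoped, expiring LLM credential. The broker URL, the AGENT_IDENTITY_TOKEN variable, and the response fields are all hypothetical.

```python
import os

import requests

# Hypothetical internal broker that exchanges a workload identity for a
# short-lived LLM credential. Replace with whatever your identity fabric exposes.
TOKEN_BROKER_URL = "https://identity.internal.example.com/v1/llm-credentials"

def get_short_lived_llm_key(scope: str) -> str:
    """Exchange this workload's identity token for a narrowly scoped, expiring key."""
    # The workload identity is injected by the platform at deploy time,
    # never hard-coded or copy-pasted between systems.
    workload_token = os.environ["AGENT_IDENTITY_TOKEN"]
    resp = requests.post(
        TOKEN_BROKER_URL,
        headers={"Authorization": f"Bearer {workload_token}"},
        json={"scope": scope},  # e.g. "summarize-support-tickets"
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    # A short TTL means a leaked credential expires before it is worth much.
    print(f"Issued key expires in {body['expires_in']} seconds")
    return body["api_key"]

if __name__ == "__main__":
    key = get_short_lived_llm_key(scope="summarize-support-tickets")
    # Use `key` for the LLM call, then let it expire; nothing long-lived to leak.
```

Because every credential is issued against a specific agent identity and scope, the broker’s logs also become the audit trail that step 3 builds on.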

From Visibility to Velocity

The misconception is that you need complete visibility before you can act. In practice, acting on partial visibility creates better visibility.

If yesterday you governed five AI agents and today you govern fifteen, that’s not imperfection, that’s momentum.

With Defakto, each step forward compounds:

  • Eliminate static API keys and the costly remediation overhead that comes with credential leakage
  • Contain risk through identity-based access controls that work even in legacy systems
  • Gain real-time observability that prevents oversight fatigue and gives security teams the agility to respond before incidents escalate

Organizations using this approach deploy AI faster, reduce security review cycles from weeks to minutes, and cut the operational cost of credential management to near zero, all while maintaining provable control, compliance, and trust.

Defakto – Your Framework to Get Started

You don’t have to discover every AI agent before you can take control. Defakto gives you the framework to make it safe to deploy AI at scale by starting with what you can see today. 

Defakto’s no-code/low-code deployment model, identity-based access control, and real-time audit trail form the AI Identity Fabric that connects agents to your environment without requiring your engineers to become identity experts, slowing down delivery, or sacrificing control.

With Defakto, your security team gains continuous oversight. Your AI projects keep their momentum. And your organization builds trust in AI, one visible, governable step at a time. 

Curious how Defakto can help you with AI security?

Schedule some time with our identity architect team.

*** This is a Security Bloggers Network syndicated blog from Defakto authored by Pieter Kasselman. Read the original post at: https://www.defakto.security/blog/your-ai-agents-arent-hidden-theyre-ungoverned-its-time-to-act/

