Rampant proliferation of Agentic AI and its nearly unrestricted access to data without proper security and governance is eroding trust and hampering the success of the AI initiatives organizations are adopting at a dizzying pace.
Nine out of ten organizations have given broad data access to their enterprise GenAI—and if that doesn’t scare the bejesus out of security teams, consider that 68% have no real idea what data their agents are accessing, and just under that (65%) couldn’t say what data is available for AI input, according to research from MIND. Some 41% know only that they’re plagued by shadow GenAI. And if that’s not bad enough, nearly one-third admit they have unknown agents cavorting around their environments.
Noting that “identity is no longer about perimeter-based defense,” Mark McClain, CEO at SailPoint, says, “the rise in AI-based agents and the massively accelerating threat landscape has rendered that approach insufficient, and prompted a shift towards identity as the critical element to enterprise security.”
With AI and development teams “scaling faster than the security workforce that’s meant to govern them,” Randolph Barr, CISO at Cequence Security, says, “security teams are no longer just defending human users; they’re supervising autonomous systems that generate their own integrations.”
That creates a broad set of technical, organizational and cultural challenges. “Teams must now manage ‘shadow AI’ and ‘shadow APIs,’ both of which introduce risks far beyond traditional DevOps oversight,” pushing governance and visibility to the frontline. “Without unified oversight, a single misconfigured API or orphaned key can compromise entire AI pipelines,” he says.
Perhaps that explains why MIND’s report, “The Impact of Data Trust on AI Initiative Success,” found that 66% can’t enforce policies around AI agents, and slightly more (70%) struggle to enforce policies on GenAI tools. That’s unacceptable in today’s terms, since nearly all organizations (98%) have already seen at least one AI security challenge.
Those findings underscore one not-so-surprising truth: the gap between rapid AI adoption and the security and governance of the data that fuels it yawns wide, and it is only getting wider. Just 20% of organizations have security strategies mature enough to accommodate AI.
But they need to move fast to rectify that. “AI can’t operate sustainably without strong security safeguards,” and now “that reality is raising the stakes on cybersecurity work and changing the shape of the job,” says Diana Kelley, CISO at Noma Security.
While “traditionally, security teams focused on the protection of systems and data,” Kelley points out they now “are helping to govern AI systems and agents that make recommendations and decisions, and in some cases take action on behalf of the business, while enabling the business to adopt AI quickly and safely.”
AI makes attackers “a more well-resourced adversary,” lowering the cost of scale while raising the quality of automated attack campaigns. So, “without a strong control plane for AI systems and agents, including clear guardrails on access and actions, along with identity, access control, data governance, and runtime monitoring, AI will amplify whatever weaknesses already exist,” Kelley says.
“AI doesn’t invent new attack vectors. It removes hesitation and reduces the time to execute,” says Shane Barney, CISO at Keeper Security.
“As attackers automate their own decision-making, defenders need the same ability to understand actions and intent immediately, not just record activity for later review,” he says, which means turning to AI-powered threat detection and response that “can flag and shut down malicious behavior in real time – moving at the speed of AI-powered threats.”
He warns that in a cloud environment “where attackers can reason and act in minutes, any standing privilege is an open invitation.”
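Barney’s warning maps to the just-in-time access model: a privilege exists only for the window in which it is needed, and anything expired or out of scope is denied by default. A minimal sketch of that idea (the class, names, and TTL below are illustrative, not any vendor’s API):

```python
from datetime import datetime, timedelta, timezone

class JITGrant:
    """A hypothetical just-in-time grant: scoped access with a built-in expiry."""

    def __init__(self, principal: str, scope: str, ttl_minutes: int = 15):
        self.principal = principal
        self.scope = scope
        # The grant expires on its own; there is no standing privilege to revoke later.
        self.expires_at = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)

    def is_valid(self) -> bool:
        # An expired grant is treated as if it never existed.
        return datetime.now(timezone.utc) < self.expires_at

def authorize(grant: JITGrant, requested_scope: str) -> bool:
    # Deny by default: the grant must be both unexpired and scoped to the request.
    return grant.is_valid() and grant.scope == requested_scope

grant = JITGrant("agent-42", "read:customer-db", ttl_minutes=15)
print(authorize(grant, "read:customer-db"))   # allowed while the grant is live
print(authorize(grant, "write:customer-db"))  # denied: scope mismatch
```

The point of the design is that revocation is the default state: once the TTL lapses, an attacker who reasons and acts in minutes finds nothing standing to abuse.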
And much like the open source ecosystem before it, “as AI systems, agents, and toolchains evolve rapidly, new classes of vulnerabilities are inevitable,” says Boris Cipot, senior security engineer at Black Duck.
Users and organizations must anticipate those “risks rather than react to them, by assuming that AI supply chains, like traditional software supply chains, will be actively targeted and must be secured accordingly,” he says.
The need for “real-time, intelligent, and dynamic identity security, built to govern and secure not just ‘who,’ or in the case of AI agents, ‘what,’ has access to the enterprise, but what data they can access and what they can do once inside,” is clear, McClain says.
The dire predictions for cybersecurity offerings may be overstated. David Brumley, chief AI and science officer at Bugcrowd, says that while “the market reaction assumes AI collapses the value of cybersecurity platforms,” it in reality “compresses certain features while expanding the overall surface area of security work. As attackers use AI to scale, defenders must do the same.”
As a result, demand grows for “platforms that operationalize AI effectively — it doesn’t eliminate them.”
The real shift, Brumley says, “is in how the work gets done. Security professionals are knowledge workers, and like every knowledge profession, our workflows are being reshaped by AI. Those who ignore it will fall behind.”
Organizations that do adopt it “will become dramatically more effective,” he says, adding that “while security professionals are used to learning new skills, what makes this scarier is the speed and scale at which the change is coming.”