AI Agents: Who’s There? What Are They Doing? Most Security Teams Don’t Know
2026-04-10 08:27 · Author: securityboulevard.com

The rampant proliferation of agentic AI, and its nearly unrestricted access to data without proper security and governance, are eroding trust and hampering the success of the AI initiatives organizations are adopting at a dizzying pace.

Nine out of ten organizations have given broad data access to their enterprise GenAI—and if that doesn’t scare the bejesus out of security teams, then consider that 68% have no real idea what data agents are accessing and just under that (65%) couldn’t tell you what data is accessible to use for AI input, according to research from MIND. Many (41%) know only that they’re plagued by shadow GenAI. And if that’s not bad enough, nearly one-third understand they have unknown agents cavorting around their environments.

Noting that “identity is no longer about perimeter-based defense,” Mark McClain, CEO at SailPoint, says, “the rise in AI-based agents and the massively accelerating threat landscape has rendered that approach insufficient, and prompted a shift towards identity as the critical element to enterprise security.” 

With AI and development teams “scaling faster than the security workforce that’s meant to govern them,” Randolph Barr, CISO at Cequence Security, says, “security teams are no longer just defending human users; they’re supervising autonomous systems that generate their own integrations.” 

That creates a broad set of technical, organizational and cultural challenges. “Teams must now manage ‘shadow AI’ and ‘shadow APIs,’ both of which introduce risks far beyond traditional DevOps oversight,” pushing governance and visibility to the frontline. “Without unified oversight, a single misconfigured API or orphaned key can compromise entire AI pipelines,” he says. 
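To make the problem concrete, here is a minimal sketch of the kind of inventory check Barr’s warning implies: flagging agents seen in traffic but missing from a registry (“shadow AI”) and API keys with no owner or no recent use. All names and records below are hypothetical illustrations, not any vendor’s API; real data would come from an API gateway or identity provider.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory; in practice this comes from gateway/IdP exports.
API_KEYS = [
    {"key_id": "k1", "owner": "data-ingest-agent",
     "last_used": datetime.now(timezone.utc) - timedelta(days=2)},
    {"key_id": "k2", "owner": None,  # no owner on record: orphaned
     "last_used": datetime.now(timezone.utc) - timedelta(days=120)},
]
REGISTERED_AGENTS = {"data-ingest-agent", "report-agent"}
OBSERVED_AGENTS = {"data-ingest-agent", "unknown-scraper-agent"}  # from traffic logs

def find_orphaned_keys(keys, stale_after_days=90):
    """Flag keys with no owner or no recent use -- candidates for revocation."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=stale_after_days)
    return [k["key_id"] for k in keys
            if k["owner"] is None or k["last_used"] < cutoff]

def find_shadow_agents(observed, registered):
    """Agents seen in traffic but absent from the registry are 'shadow AI'."""
    return sorted(observed - registered)

print(find_orphaned_keys(API_KEYS))                            # ['k2']
print(find_shadow_agents(OBSERVED_AGENTS, REGISTERED_AGENTS))  # ['unknown-scraper-agent']
```

The point is not the toy logic but the reconciliation step itself: without a registry to diff against, a misconfigured API or orphaned key is invisible until it is exploited.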

Perhaps that explains why MIND’s report, “The Impact of Data Trust on AI Initiative Success,” found that 66% can’t enforce policies around AI agents and slightly more (70%) struggle to enforce policies on GenAI tools. That’s unacceptable in today’s terms, since nearly all (98%) have already seen at least one AI security challenge in their organizations.

Those findings underscore one not-so-surprising truth: the gap between rapid AI adoption and the security and governance of the data that fuels it yawns wide, and it is only getting wider. Just 20% of organizations have security strategies mature enough to accommodate AI.

But they need to move fast to rectify that. “AI can’t operate sustainably without strong security safeguards,” and now “that reality is raising the stakes on cybersecurity work and changing the shape of the job,” says Diana Kelley, CISO at Noma Security. 

While “traditionally, security teams focused on the protection of systems and data,” Kelley points out they now “are helping to govern AI systems and agents that make recommendations and decisions, and in some cases take action on behalf of the business, while enabling the business to adopt AI quickly and safely.”

With “a more well-resourced adversary,” the cost of scale is lowered while the quality of automated attack campaigns increases. So, “without a strong control plane for AI systems and agents, including clear guardrails on access and actions, along with identity, access control, data governance, and runtime monitoring, AI will amplify whatever weaknesses already exist,” Kelley says.

“AI doesn’t invent new attack vectors. It removes hesitation and reduces the time to execute,” says Shane Barney, CISO at Keeper Security.

“As attackers automate their own decision-making, defenders need the same ability to understand actions and intent immediately, not just record activity for later review,” he says, which means turning to AI-powered threat detection and response that “can flag and shut down malicious behavior in real time – moving at the speed of AI-powered threats.”
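Barney’s distinction between recording activity and acting on it in real time can be illustrated with a toy baseline check: rather than writing actions to a log for later review, each action is compared against the agent’s known behavior as it happens. The action names, baseline, and flat allow-list are illustrative assumptions; real systems use learned behavioral models, not a hard-coded set.

```python
# Toy real-time check: alert on any action outside an agent's learned
# baseline the moment it occurs, instead of logging it for later review.
BASELINE = {"read:crm", "read:wiki"}  # actions this agent normally performs

def flag_actions(stream):
    """Return actions that deviate from the baseline, in arrival order."""
    return [action for action in stream if action not in BASELINE]

# A bulk data export from an agent that normally only reads is flagged:
print(flag_actions(["read:crm", "read:wiki", "export:customer_table"]))
# ['export:customer_table']
```

In a real deployment the flagged action would trigger an automated response (session kill, credential revocation) rather than a print statement, which is what “moving at the speed of AI-powered threats” amounts to in practice.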

He warns that in a cloud environment “where attackers can reason and act in minutes, any standing privilege is an open invitation.”
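The alternative to standing privilege is just-in-time access: an agent requests a short-lived token scoped to one action, and the privilege evaporates on its own. The sketch below shows the idea only; the function names, scopes, and TTLs are invented for illustration and do not correspond to any particular cloud provider’s credential API.

```python
import secrets
import time

# Minimal just-in-time credential sketch: no standing privilege, only
# short-lived tokens minted per agent and per scope. Illustrative only.
_ISSUED = {}

def issue_token(agent_id: str, scope: str, ttl_seconds: int = 300) -> str:
    """Mint a token valid for one scope, expiring after ttl_seconds."""
    token = secrets.token_urlsafe(16)
    _ISSUED[token] = {"agent": agent_id, "scope": scope,
                      "expires": time.monotonic() + ttl_seconds}
    return token

def check_token(token: str, scope: str) -> bool:
    """Accept only a known, unexpired token used for its minted scope."""
    rec = _ISSUED.get(token)
    if rec is None or time.monotonic() > rec["expires"]:
        _ISSUED.pop(token, None)  # expired or unknown: treat as revoked
        return False
    return rec["scope"] == scope

t = issue_token("report-agent", "read:sales_db", ttl_seconds=1)
print(check_token(t, "read:sales_db"))   # True while fresh
print(check_token(t, "write:sales_db"))  # False: wrong scope
time.sleep(1.1)
print(check_token(t, "read:sales_db"))   # False: expired
```

Because every credential dies within seconds or minutes, an attacker who can “reason and act in minutes” finds nothing standing to steal.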

And much like the open source ecosystem before it, “as AI systems, agents, and toolchains evolve rapidly, new classes of vulnerabilities are inevitable,” says Boris Cipot, senior security engineer at Black Duck. 

Users and organizations must anticipate those “risks rather than react to them, by assuming that AI supply chains, like traditional software supply chains, will be actively targeted and must be secured accordingly,” he says.

The need for “real-time, intelligent, and dynamic identity security, built to govern and secure not just ‘who,’ or in the case of AI agents, ‘what,’ has access to the enterprise” is clear, McClain says, “but what data they can access and what they can do once inside.”

The dire predictions for cybersecurity offerings may be overstated. David Brumley, chief AI and science officer at Bugcrowd, says that while “the market reaction assumes AI collapses the value of cybersecurity platforms,” in reality it “compresses certain features while expanding the overall surface area of security work. As attackers use AI to scale, defenders must do the same.”

As a result, demand grows for “platforms that operationalize AI effectively — it doesn’t eliminate them.”

The real shift, Brumley says, “is in how the work gets done. Security professionals are knowledge workers, and like every knowledge profession, our workflows are being reshaped by AI. Those who ignore it will fall behind.”

Organizations that do adopt it “will become dramatically more effective,” he says, adding that “while security professionals are used to learning new skills, what makes this scarier is the speed and scale at which the change is coming.” 

Source: https://securityboulevard.com/2026/04/ai-agents-whos-there-what-are-they-doing-most-security-teams-dont-know/