CSA Study: Mature AI Governance Translates Into Responsible AI Adoption
Research from the Cloud Security Alliance shows that effective AI governance is the key driver of responsible AI adoption. Mature governance programs correlate with greater organizational confidence, more staff training, and more accountable innovation, and security teams are actively using AI for threat detection and response. Yet while most enterprises have embedded AI into core business and security workflows, only 26% have comprehensive governance policies in place. Executive support is essential to building governance; without it, adoption can outpace an organization's ability to manage the technology. 2025-12-19 14:52:25 | Author: securityboulevard.com

Before you dismiss AI governance as too difficult or out of reach, consider new research from the Cloud Security Alliance that found AI governance to be the “maturity multiplier” that drives responsible AI adoption.

"Responsible" and "AI adoption." Those are two words we'd all like to see coupled more frequently.

“AI governance is the strongest predictor of AI readiness. Mature programs correlate to higher confidence, increased staff training, and more responsible innovation,” CSA said on releasing the report. “It also highlights a meaningful shift: Security teams have become early adopters of AI,” which they use for important actions like threat detection, red teaming, automation and incident response.

The survey, commissioned by Google Cloud, shows “a clear divide: organizations with established AI governance are accelerating adoption with confidence, while the rest are moving quickly but without the structures needed to manage emerging risk.”

This year’s survey found that security leaders are “working to secure AI systems even as they begin using AI to strengthen security itself.” With the market “evolving at remarkable speed,” governance is increasingly becoming “the foundation that determines whether adoption advances responsibly or outpaces an organization’s ability to manage it.”

The research shows that organizations across all sectors are embedding AI into their core operations and security workflows—54% use public frontier LLMs and 60% plan to use agentic AI within 12 months—while governance lags. Only 26% have comprehensive AI governance policies in place. And concern over security issues runs high, with 53% pointing to sensitive data exposure as the chief security risk.

“As organizations increasingly embed AI tools and agentic systems into their workflows, they must develop governance structures that can keep pace with the complexity and continued innovation of these technologies,” says Nicole Carignan, senior vice president, security and AI strategy, and field CISO at Darktrace.

“Day-to-day AI safety comes from disciplined oversight that reduces unnecessary risk and prevents harm,” says Noma Security CISO Diana Kelley.

Noting that there is no one-size-fits-all approach, Carignan says, “each organization must tailor its AI policies based on its unique risk profile, use cases, and regulatory requirements.” For that to happen, “executive leadership for AI governance is essential, whether the organization is building AI internally or adopting external solutions.”

The CSA report found that executive enthusiasm for AI was high, but “most respondents (72%) were either not confident or neutral in their organization’s ability to secure it.” Seven in 10 respondents “report moderate to full leadership awareness of AI security implications,” exposing a gap that “underscores the need for deeper governance, education, and cross-functional collaboration.”

Organizations that haven’t prioritized governance shouldn’t hesitate to do so now. Those with formal governance in place are twice as likely to adopt agentic AI. They are also three times more likely to train their staff on AI security tools and twice as confident that they can protect their AI systems. “This reinforces governance as the foundation for responsible innovation—and a practical countermeasure to ‘shadow AI,’” the report said.

Breaking with past precedent, security has become an early adopter of AI, with more than 90% testing or planning to test it, which the report says highlights “the urgency and opportunity to embed AI into security from the outset.” Perhaps not surprisingly, since AI ownership is diffuse and deployments are distributed across functions, security is taking the lead in protecting AI in just over half of organizations.

Even as organizations are using multiple LLMs (2.6 on average), the report found that they are consolidating around Gemini, Claude, GPT and LLaMA. “While this signals growing operational maturity, it also introduces new resilience, interoperability, and vendor lock-in concerns,” the study noted.

While data exposure is the top security concern among organizations, regulatory compliance is a close second at 50%, demonstrating that the focus remains on traditional issues rather than AI-specific threats like prompt injection and model drift.  

“The largest concern I see today is the insatiable demand by cybercriminals to create persistence inside systems so they cannot easily be detected or evicted; knowing if this has occurred and getting proof that it is not the case are two very different things, both of which will keep you up at night waiting for it to happen again,” says Dave Tyson, CIO at iCounter.

But Curtis Wilson, data scientist at Black Duck, says, “The greatest challenge facing AI adoption isn’t regulation—it’s trust.”

When people have confidence that AI systems are being developed responsibly, he says, “they’re more likely to use them.” AI developers “need interoperability,” he adds. 

Since companies operating globally “are already navigating the EU AI Act,” he says, “The practical solution is to align U.S. federal and state regulations with established frameworks like NIST’s AI Risk Management Framework” to provide “genuine clarity while maintaining the protections people need.”

In 2026, security operations will move closer to “what FortiGuard Labs describes as machine-speed defense—a continuous process of intelligence, validation, and containment that compresses detection and response from hours to minutes,” says Derek Manky, chief security strategist and global vice president of threat intelligence with Fortinet’s FortiGuard Labs.

Source: https://securityboulevard.com/2025/12/csa-study-mature-ai-governance-translates-into-responsible-ai-adoption/