Data trust is the hidden reason most AI initiatives fail
2026-04-08 · securityboulevard.com

Ready, Fire, AI.

Ninety percent of enterprises are already running Enterprise GenAI at scale. That number comes from new research conducted by MIND in partnership with CISO ExecNet, and it should give every security leader pause. Not because AI adoption is surprising. But because of what sits directly beneath it.

Although 90% of organizations are deploying Enterprise GenAI at scale, only 34% of CISOs describe themselves as reasonably confident in their AI data security controls. As a result, only 1 in 5 of those AI initiatives are meeting their intended KPIs.

The adoption curve and the confidence curve are moving in opposite directions. That gap is what this research was built to examine.

Why does AI adoption expose what poor data governance was hiding?

For years, poor data governance was survivable. Files went unclassified. Repositories stayed ungoverned. Access controls were written for human actors who exercised natural judgment about what they touched and when. None of it surfaced as a crisis because no system was scanning everything at once.

AI changed that equation entirely. The moment an Enterprise GenAI tool connects to a data source, it finds everything within reach. Unclassified files, overshared repositories and sensitive data that nobody realized was broadly accessible. At one organization, executive compensation files had been sitting in SharePoint for years with no classification or access controls. When an Enterprise AI tool was deployed, those files became broadly accessible to a wide internal audience overnight. Security by obscurity ended the moment AI came online.

The research puts numbers to this reality.

  • 70% of security leaders struggle to enforce policies on GenAI tools
  • 66% cannot enforce policies on AI agents
  • And 98% are dealing with at least one significant AI security challenge

These aren't organizations without governance. Boards have been briefed. Policies have been written. Frameworks have been established. But as the research makes clear, governance without technical enforcement is intention without effect. For most organizations, the mechanisms capable of applying those policies against data in motion, at the speed AI demands, simply don't exist yet.

The deeper issue is structural. Every security framework in the enterprise was built with human actors in mind. Humans can be trained, audited and held accountable. Even privileged users exercise judgment about what they access and share. AI agents inherit the same permissions but operate without any of that judgment. They move at machine speed and find everything within reach, not just what's relevant. Thirty-two percent of organizations already have unknown agents operating in their environments. The frameworks that were adequate before AI arrived are now being stress-tested at a scale they were never built to handle.
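One structural fix for the inheritance problem above is to stop letting an agent run with its delegating user's full permission set. A minimal sketch, assuming a path-pattern permission model (no specific IAM product): grant the agent only the intersection of what its task needs and what the launching user could access.

```python
# Sketch: scope a non-human actor to the intersection of task needs and
# the delegating user's permissions. Resource patterns are illustrative.

HUMAN_PERMS = {"hr/*", "finance/*", "eng/*"}   # a privileged user's reach

def agent_scope(task_resources: set[str], human_perms: set[str]) -> set[str]:
    """Grant the agent only resources that are both requested by the task
    and accessible to the user who delegated it."""
    return task_resources & human_perms

# An agent summarizing engineering docs requests eng/* and legal/*;
# it receives eng/* only, not the user's HR or finance reach.
print(agent_scope({"eng/*", "legal/*"}, HUMAN_PERMS))  # {'eng/*'}
```

The human's judgment about what *not* to touch is replaced here by an explicit scope computed before the agent runs, which is the kind of control frameworks built for human actors never needed to specify.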

— Parrish Gunnels, CISO, MVB Bank

What does new research from 124 CISOs reveal about AI success and data trust?

MIND and CISO ExecNet set out to understand exactly where data trust is breaking down and what it means for AI success. The study combined a quantitative survey of 124 senior security leaders with 20 qualitative interviews from CISOs at organizations with more than 1,500 employees or over one billion dollars in annual revenue. All participants held VP-level roles or higher. The seven insights that emerged from the convergence of survey data and practitioner experience represent the strongest and most consistent patterns across the entire research project.

Those insights trace a connected arc.

  • The enforcement gap
  • The data debt problem
  • The structural mismatch between security frameworks designed for human actors and the non-human actors now operating against them
  • The measurable cost of AI initiative failure
  • The growing difficulty of communicating AI risk to a business that is committed to moving fast
  • The competitive advantage that flows to organizations that solve it first

The central thesis is that data trust is not a security feature. It is the invisible but decisive ingredient that determines whether AI initiatives succeed or fail. When data trust is high, organizations can use data freely to power AI-driven outcomes. When it isn't, AI innovation slows, scales poorly or introduces risk that most organizations can't yet see.

MIND isn't just reporting on this gap. We're minding the conditions that close it, helping organizations achieve visibility into what data exists, extend governance to non-human actors and build enforcement that operates at AI speed. The organizations that build that foundation now aren't just reducing exposure. They're building the only infrastructure that allows AI to become a genuine competitive accelerant.

— Jacob Combs, CISO, Tandem Diabetes Care

How can CISOs close the gap between AI adoption and data security?

The full report is available now. It covers the enforcement gap, the data debt problem, why AI agents behave so differently from human users, how AI initiative failure stays invisible and what a minimum viable security foundation actually looks like in practice. It also maps a clear path forward for CISOs who want security to be the function that enables AI adoption, not the one that slows it down.

If your organization is running AI at scale and the outcomes aren't matching the investment, this research was written for you.


*** This is a Security Bloggers Network syndicated blog from Top of MIND Blog | RSS authored by Samuel Hill, Product Marketing at MIND. Read the original post at: https://mind.io/blog/data-trust-is-the-hidden-reason-most-ai-initiatives-fail


Article source: https://securityboulevard.com/2026/04/data-trust-is-the-hidden-reason-most-ai-initiatives-fail/