Survey Surfaces Greater Appreciation for AI Risks

A survey of 3,500 IT and business leaders across the US, U.K., France, Germany and Australia suggests there is a growing awareness of the inherent cybersecurity risks created by the rise of artificial intelligence (AI), with nearly three-quarters of respondents (72%) noting that risks for their organization have never been higher.

Conducted by Sapio Research on behalf of Vanta, a provider of a platform for automating governance and compliance workflows, the survey finds 59% of business and IT leaders recognize that AI cyber threats are advancing faster than the ability of their security team to deal with them.


In fact, organizations report that over the last year they have seen increases in AI-driven phishing (49%), malware (48%), and identity theft or fraud (47%).

On the plus side, organizations are also investing in AI to combat these threats, with 79% of respondents currently using or planning to use AI agents to protect against AI-driven cyberattacks. However, nearly two-thirds (65%) also admit their use of agentic AI outpaces their understanding of it, even though just under half (48%) said they have already developed a framework for granting or limiting autonomy in AI systems.

A full 95% said AI and automation have improved security team effectiveness, with half noting that risk assessments and incident response times are faster and more accurate with AI. More than three-quarters (76%) said AI and automation tools are reducing burnout and improving day-to-day productivity.

Nevertheless, 61% also said they trust agentic AI to override human decision-making in certain scenarios, such as suspending a risky browser extension or session when a policy violation is detected. A total of 71% also said they feel comfortable with agentic AI giving input on security strategy.
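To make that kind of narrowly scoped override concrete, the following is a minimal sketch in Python of one way it could be wired up: the agent acts autonomously only for a small pre-approved set of actions (such as suspending a risky session) and escalates everything else to a human. The class, function, and action names are illustrative assumptions, not anything described by Vanta or the survey.

```python
# Hypothetical sketch: an agent may override a human only for a narrow,
# pre-approved class of actions; everything else is escalated for review.
from dataclasses import dataclass

# Actions the agent is allowed to take on its own (assumed policy).
AUTONOMOUS_ACTIONS = {"suspend_session", "disable_extension"}

@dataclass
class PolicyEvent:
    session_id: str
    violation: str        # e.g. "unsanctioned_extension"
    proposed_action: str  # what the agent wants to do

def handle_event(event: PolicyEvent) -> str:
    """Apply the action directly if it is pre-approved, otherwise defer to a human."""
    if event.proposed_action in AUTONOMOUS_ACTIONS:
        # In a real deployment this would call the browser-management API.
        return f"agent applied '{event.proposed_action}' to {event.session_id}"
    return f"escalated '{event.proposed_action}' on {event.session_id} to a human reviewer"

if __name__ == "__main__":
    print(handle_event(PolicyEvent("sess-42", "unsanctioned_extension", "suspend_session")))
    print(handle_event(PolicyEvent("sess-43", "policy_violation", "wipe_device")))
```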

Khush Kashyap, senior director for governance, risk and compliance (GRC) at Vanta, said it’s apparent that as the pace of AI innovation continues to accelerate, most existing GRC processes will need to be revamped to enable policies to be applied to AI agents.

In some cases, AI agents will naturally inherit the policies and permissions that have been applied to the humans who created them. However, there will also be large numbers of autonomous agents that are automating tasks on behalf of an organization. Each of those AI agents will need to have its own explicit set of controls.
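A minimal sketch of that distinction, under an assumed scope model: an agent created on behalf of a person can inherit at most that person's permissions, but it still carries its own explicit, narrower set of controls. The permission names and the intersection rule below are illustrative assumptions rather than any specific product's behavior.

```python
# Hypothetical sketch: per-agent controls layered on top of inherited permissions.
HUMAN_PERMISSIONS = {
    "alice": {"read:tickets", "write:tickets", "read:payroll"},
}

def agent_permissions(owner: str, explicit_scopes: set[str]) -> set[str]:
    """An agent receives only scopes that are both explicitly granted to it
    and held by its human owner (never more than the owner has)."""
    return HUMAN_PERMISSIONS.get(owner, set()) & explicit_scopes

if __name__ == "__main__":
    # The triage agent inherits from alice but is explicitly limited to tickets;
    # payroll access is never granted even though alice holds it.
    print(agent_permissions("alice", {"read:tickets", "write:tickets"}))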

The challenge is that many of these AI agents are now being built and deployed without much input from cybersecurity and GRC teams, noted Kashyap. There is, as a result, a tendency to achieve a base level of compliance rather than truly appreciating potential risks, she added.

That can be especially problematic because, in the absence of controls, AI agents will access any data they can reach regardless of how sensitive it might be, said Kashyap. That data can then be incorporated into outputs that run afoul of any number of existing regulations, she noted.
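As a rough illustration of the kind of control Kashyap describes, the sketch below filters records by a sensitivity label before they are handed to an agent; without such a guard the agent would simply ingest everything it can reach. The labels, record fields, and allowed set are hypothetical.

```python
# Hypothetical sketch: restrict an agent's data access by sensitivity label.
ALLOWED_LABELS = {"public", "internal"}   # assumed: block "confidential", "regulated"

records = [
    {"id": 1, "label": "public", "text": "press release"},
    {"id": 2, "label": "regulated", "text": "customer PII"},
    {"id": 3, "label": "internal", "text": "runbook"},
]

def records_for_agent(rows: list[dict]) -> list[dict]:
    """Return only records whose sensitivity label the agent is cleared for."""
    return [r for r in rows if r["label"] in ALLOWED_LABELS]

if __name__ == "__main__":
    for r in records_for_agent(records):
        print(r["id"], r["text"])   # record 2 (regulated) never reaches the agent
```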

Fortunately, the Vanta survey suggests there is a greater appreciation for the value of GRC. A full 82% of respondents said they believe improving security and compliance directly boosts customer trust.

Unfortunately, 61% of respondents also said they spend more time proving security than improving it, with 64% agreeing that security frameworks feel more like ‘security theater’ than actual security work.

Respondents also said their organizations spend only about half of what they should on security: on average, 10% of IT budgets, versus an ideal 17%.

It’s still early days as far as operationalizing AI is concerned, but there is no substitute for GRC first principles when it comes to security, said Kashyap. The challenge, of course, is finding the will and the resources needed to implement them.


Source: https://securityboulevard.com/2025/10/survey-surfaces-greater-appreciation-for-ai-risks/