NIST Plans to Build Threat and Mitigation Taxonomy for AI Agents
嗯,用户让我帮忙总结一篇文章的内容,控制在100个字以内,而且不需要用“文章内容总结”或者“这篇文章”这样的开头。直接写描述就行。好的,首先我得仔细阅读这篇文章,理解它的主要观点。 文章讲的是美国国家标准与技术研究所(NIST)正在为保护人工智能(AI)代理建立一个攻击和缓解的分类法。他们和OWASP GenAI安全项目以及行业伙伴合作,识别部署AI代理带来的攻击面范围。目标是加强用于智能体AI应用和模型的网络安全框架,因为现在这些框架太弱了,无法在企业IT环境中使用。 还有提到安全研究人员在测试中100%成功说服大型语言模型(LLMs)分享敏感数据。Vassilev指出,AI从根本上改变了网络安全,因为现在需要防御的边界实际上是整个互联网。此外,OWASP列出了AI代理的十大威胁,但AI的采用速度已经超过了安全团队的能力。 还有关于AI代理继承创建者的身份和权限的问题,以及影子AI带来的问题可能会加剧。最后提到组织可能还没有积极应对这些威胁,可能会发生很多高调事件。 好的,现在我要把这些信息浓缩到100字以内。重点包括NIST与OWASP合作开发分类法,识别攻击面,加强框架;AI改变网络安全;测试中LLMs泄露数据;需要新的安全指南;影子AI问题加剧;可能高调事件频发。 现在组织语言:NIST与OWASP合作开发分类法,识别攻击面并制定缓解策略;目标是增强智能体AI应用的安全性;研究表明LLMs易受攻击泄露数据;强调网络安全需适应互联网边界变化;同时面临影子AI带来的挑战和潜在高调事件风险。 检查字数:大概在100字左右。确保没有使用“文章内容总结”之类的开头。 </think> 美国国家标准与技术研究所(NIST)正与OWASP GenAI安全项目合作开发人工智能(AI)代理的安全分类法,旨在识别部署AI带来的攻击面并制定缓解策略。研究显示大型语言模型易受攻击泄露敏感数据,强调需加强智能体AI应用的安全性以适应互联网边界变化的网络安全需求,并警惕影子AI带来的潜在威胁和高调事件风险。 2025-12-10 19:8:52 Author: securityboulevard.com(查看原文) 阅读量:4 收藏

Avatar photo

The U.S. National Institute of Standards and Technology (NIST) is building a taxonomy of attack and mitigations for securing artificial intelligence (AI) agents.

Speaking at the AI Summit New York conference, Apostol Vassilev, a research team supervisor for NIST, told attendees that the arm of the U.S. Department of Commerce is working with industry partners and the OWASP GenAI Security Project to identify the scope of the attack surface that deploying AI agents creates.

The overall goal is to strengthen cybersecurity frameworks for agentic AI applications and models that today are too weak to be used in enterprise IT environments, he added. In fact, organizations experimenting with AI agents today should only be using data they can live without, said Vassilev. In recent tests, security researchers were 100% successful in convincing large language models (LLMs) to share sensitive data, he added.

In general, AI has fundamentally changed cybersecurity because the perimeter that now needs to be defended is effectively the Internet itself, said Vassilev. It’s still early days so far as agentic AI security is concerned, but there is a clear need for additional security guidance, he noted.

The OWASP GenAI Security Project earlier today identified a top ten list of AI Agent threats that organizations will need to find ways to thwart. The challenge is the pace of AI agent adoption is already far outpacing the ability of cybersecurity teams to keep pace.

In theory, at least, any AI agent created by a human will inherit the identity and permissions that have been assigned to the person who created it. However, there will be classes of AI agents that have been created to autonomously complete tasks on behalf of the organization, many of which will be assigned a new type of non-human identity and associated permissions that will need to be governed and managed as they invoke Model Context Protocol (MCP) servers to access any and all data unless they are specifically prohibited.

Unfortunately, shadow AI issues that are already becoming problematic will also likely be further exacerbated as either internal or external AI agents are employed by end users with little or no regard to the security implications. Cybersecurity teams will then be required to regularly scan for signs of rogue AI agent activity.

It’s not clear how proactively organizations are addressing these potential threats and there may be a significant number of high-profile incidents before organizations address these issues. Hopefully, cybersecurity teams will be able to put some measures in place to once again protect employees from themselves, but it’s already been shown how, in the absence of any security controls, a trivial prompt injection attack can be used to, for example, convince an AI agent or tool to exfiltrate sensitive data.

The challenge now is preventing those attacks from happening without putting cybersecurity teams in the way of AI progress that, at this point, is only going to continue to advance regardless of risk levels that only now seem to be rising higher with each passing day.

Recent Articles By Author


文章来源: https://securityboulevard.com/2025/12/nist-plans-to-build-threat-and-mitigation-taxonomy-for-ai-agents/
如有侵权请联系:admin#unsafe.sh