The U.S. National Institute of Standards and Technology (NIST) is building a taxonomy of attack and mitigations for securing artificial intelligence (AI) agents.
Speaking at the AI Summit New York conference, Apostol Vassilev, a research team supervisor for NIST, told attendees that the arm of the U.S. Department of Commerce is working with industry partners and the OWASP GenAI Security Project to identify the scope of the attack surface that deploying AI agents creates.
The overall goal is to strengthen cybersecurity frameworks for agentic AI applications and models that today are too weak to be used in enterprise IT environments, he added. In fact, organizations experimenting with AI agents today should only be using data they can live without, said Vassilev. In recent tests, security researchers were 100% successful in convincing large language models (LLMs) to share sensitive data, he added.
In general, AI has fundamentally changed cybersecurity because the perimeter that now needs to be defended is effectively the Internet itself, said Vassilev. It’s still early days so far as agentic AI security is concerned, but there is a clear need for additional security guidance, he noted.
The OWASP GenAI Security Project earlier today identified a top ten list of AI Agent threats that organizations will need to find ways to thwart. The challenge is the pace of AI agent adoption is already far outpacing the ability of cybersecurity teams to keep pace.
In theory, at least, any AI agent created by a human will inherit the identity and permissions that have been assigned to the person who created it. However, there will be classes of AI agents that have been created to autonomously complete tasks on behalf of the organization, many of which will be assigned a new type of non-human identity and associated permissions that will need to be governed and managed as they invoke Model Context Protocol (MCP) servers to access any and all data unless they are specifically prohibited.
Unfortunately, shadow AI issues that are already becoming problematic will also likely be further exacerbated as either internal or external AI agents are employed by end users with little or no regard to the security implications. Cybersecurity teams will then be required to regularly scan for signs of rogue AI agent activity.
It’s not clear how proactively organizations are addressing these potential threats and there may be a significant number of high-profile incidents before organizations address these issues. Hopefully, cybersecurity teams will be able to put some measures in place to once again protect employees from themselves, but it’s already been shown how, in the absence of any security controls, a trivial prompt injection attack can be used to, for example, convince an AI agent or tool to exfiltrate sensitive data.
The challenge now is preventing those attacks from happening without putting cybersecurity teams in the way of AI progress that, at this point, is only going to continue to advance regardless of risk levels that only now seem to be rising higher with each passing day.
Recent Articles By Author