Lasso today added an agentic artificial intelligence (AI) service specifically designed to help organizations better secure AI applications before and after they have been deployed.
Ophir Dror, chief product officer, Lasso, said the Lasso Agentic Purple Teaming service scans AI applications based on large language models (LLMs) for vulnerabilities and security weaknesses using probes and dynamic data sets it creates. An autonomous AI agent then proactively applies governance and security policies to enforce cybersecurity policies.
That approach, in effect, combines cybersecurity tasks normally assigned to separate blue and red cybersecurity teams into a single purple team service, said Dror.
Purple teams simulate malicious attacks and penetration testing to identify security vulnerabilities and recommend remediation strategies. That approach enables cybersecurity teams to share information and insights to address weaknesses and remediate vulnerabilities.
The Lasso Agentic Purple Teaming service is based on a set of AI agents that have been specifically trained to be part of a purple cybersecurity team that has been created to assess AI applications, said Dror. Given the unique types of threats, such as prompt injection, a purple team approach to addressing security issues is needed to provide the level of coordination needed to quickly address those types of issues, he added.
At a time when more organizations are building and deploying AI applications, there is an acute shortage of cybersecurity professionals who know how to secure them. AI agents that are being widely adopted are especially vulnerable because they can be easily fooled and are often configured with too many permissions, noted Dror.
The Lasso Agentic Purple Teaming service helps mitigate the AI skills gap among cybersecurity teams by using a service that is augmented by AI agents to enable cybersecurity and application development teams to rapidly address these issues, said Dror. Lasso, in effect, is making a case for fighting fire with fire by employing a set of AI agents to detect and remediate those issues.
While the pace at which organizations are building and deploying AI applications varies widely, they present a rich taarget for cybercriminals given the level of investment required. Cybercriminals are now perfecting a variety of tactics and techniques that range from simply trying to poison the pool of data used to train the AI model to outright stealing the entire model.
Hopefully, cybersecurity teams are moving now to proactively secure AI applications, versus once again playing catch-up to an emerging technology. Unfortunately, the data science teams building AI applications don’t usually have much experience with cybersecurity, so the first wave of these applications being deployed is rife with potential issues. The challenge is that, given the level of scale at which AI applications operate, the level of risk that cybersecurity vulnerabilities and weaknesses represent to the business is already extremely high.
At this point, it’s only a matter of time before there will be a major cybersecurity incident involving AI applications. In the meantime, cybersecurity teams would be well-advised to start proactively looking for where AI applications are being deployed to get ahead of issues that are now all but inevitable.
Recent Articles By Author