Vulnerabilities for AI and ML Applications are Skyrocketing
April 19, 2024 | securityboulevard.com

The number of AI-related Zero Days has tripled since November 2023, according to the latest findings from Protect AI’s huntr community of over 15,000 maintainers and security researchers.

In April 2024 alone, a whopping 48 vulnerabilities have already been uncovered within widely used open source software (OSS) projects such as MLflow, Ray, and Triton Inference Server.

That figure represents a 220% rise from the 15 vulnerabilities reported in November 2023, the report noted.

Among these vulnerabilities, a particularly prevalent threat highlighted by the report is Remote Code Execution (RCE), which allows an attacker to run commands or programs on a victim’s computer or server without physical access to it. From there, the attacker can take full control of the compromised machine, leading to unauthorized access, data breaches, and even total system takeover.
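
To make the mechanics concrete, below is a minimal, hypothetical Python sketch of the unsafe-deserialization pattern behind many ML-tooling RCEs; it is illustrative only and not drawn from any specific vulnerability in the report. Model artifacts in this ecosystem are routinely shipped as pickle files, and unpickling attacker-controlled bytes executes code.

```python
# Hypothetical sketch of a pickle-based RCE; not code from any named project.
import os
import pickle


class Exploit:
    # __reduce__ tells pickle how to rebuild this object; returning a
    # callable means the attacker's command runs at deserialization time.
    def __reduce__(self):
        return (os.system, ("echo 'attacker code runs on the server'",))


payload = pickle.dumps(Exploit())  # what an attacker would upload as a "model"


def load_model(blob: bytes):
    # VULNERABLE: a server that unpickles untrusted uploads hands the
    # attacker code execution before any model is even used.
    return pickle.loads(blob)


load_model(payload)  # prints the echoed string: code ran during "loading"
```

That is the entire exploit class: no memory corruption is required, just a serialization format that can encode executable behavior.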

The Protect AI’s sobering statistics underscore the accelerating scale and velocity of the AI/ML Zero Day problem, and they suggest a growing need for enhanced security measures in AI development environments.


Old Vulnerabilities, New Practices

From the perspective of Marcello Salvati, a senior threat researcher at Protect AI, there were several interesting vulnerabilities related to AI/ML tooling in the report.

“If I had to give a couple that stand out, it would probably be the PyTorch Serve RCE and the BentoML RCE,” Salvati said. Both allow attackers to gain RCE on the server running these popular projects.

PyTorch Serve and BentoML are both inference servers, which means they’re designed to be exposed to users so those users can query AI/ML models. “This factor makes these vulnerabilities extremely easy and valuable for an attacker to exploit,” Salvati explained.

The report’s biggest surprise is the sheer quantity of basic web application vulnerabilities discovered in these AI/ML projects. “These types of vulnerabilities are rarely seen in the majority of web applications these days because of the prevalence of secure coding practices and web frameworks with ‘built-in’ security guardrails,” Salvati said.
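
As an illustration of the kind of basic web flaw the report describes, here is a hypothetical Flask endpoint (not code from MLflow, Ray, or any other named project) with a textbook path traversal bug of the sort a framework guardrail would normally catch:

```python
# Hypothetical illustration of a classic path traversal flaw; not taken
# from any of the projects named in the report.
from flask import Flask, request, send_file

app = Flask(__name__)


@app.route("/artifacts")
def get_artifact():
    name = request.args.get("name", "")
    # VULNERABLE: "name" is concatenated unchecked, so a request like
    # /artifacts?name=../../etc/passwd escapes the artifacts directory.
    return send_file("/srv/artifacts/" + name)
```

Flask ships send_from_directory precisely to block this; flaws like the one above persist when projects hand-roll file serving and bypass the framework’s guardrails.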

The resurgence of these types of vulnerabilities indicates that security is an afterthought in AI/ML-related tooling, Salvati said, and it runs counter to the lessons the industry has learned over the past decade.

LLM Tooling a Security Weakness

From Protect AI’s perspective, with LLM tooling becoming more popular, security-immature projects are being deployed at a high pace. Organizations may feel compelled to adopt LLM-based security projects due to competitive pressures or the desire to stay ahead in the ever-evolving threat landscape.

However, the rapid adoption of these projects raises concerns about security maturity. In their haste to deploy LLM tools, organizations may overlook crucial aspects of security, such as comprehensive risk assessments, robust testing protocols, and adherence to industry best practices.

As a result, there is a risk of deploying solutions that are not adequately fortified against emerging threats or lack the safeguards to protect sensitive data and assets. Organizations must prioritize security maturity alongside innovation.

Adopting Least Privilege, Zero Trust

AI adoption is proceeding at a breakneck (some would argue, reckless) speed. For organizations and security teams to protect themselves against a rapidly expanding and maturing threat, Salvati said adopting standard web application security is table stakes.

“The concept of least privilege applies here, as does adoption of security models including Zero Trust,” Salvati explained. “Most importantly, train your developers and AI engineers in secure coding practices and basic security principles.” Internal security audits of new AI/ML tools and libraries before deployment also cut the risk.
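
As a small, hedged example of what that secure-coding training targets, here is one possible fix for the traversal bug sketched earlier: canonicalize the requested path and refuse anything that escapes the intended root (assumes Python 3.9+ for Path.is_relative_to).

```python
# One possible fix for the traversal sketch above: resolve the path and
# allow only files inside the artifacts root.
from pathlib import Path

from flask import Flask, abort, request, send_file

app = Flask(__name__)
ARTIFACT_ROOT = Path("/srv/artifacts").resolve()


@app.route("/artifacts")
def get_artifact():
    name = request.args.get("name", "")
    candidate = (ARTIFACT_ROOT / name).resolve()
    # Reject any path that resolves outside the artifacts directory,
    # covering both "../" sequences and absolute paths.
    if not candidate.is_relative_to(ARTIFACT_ROOT):
        abort(403)
    return send_file(candidate)
```

Least privilege applies the same way one layer down: run the inference server as an unprivileged user so that even a successful RCE lands in a constrained account rather than root.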

Considering the rate of acceleration in the AI/ML space, it’s extremely hard to make any predictions 12 to 24 months out. “The only thing I’m certain of is that companies will be breached more often as a consequence of going ‘full steam ahead’ in adopting this tooling,” Salvati cautioned.

AI as Weakness and Advantage

Earlier reports have indicated GenAI adoption by malicious actors is bringing new security risks to organizations, though the same technology could be used defensively.

Indeed, even while IT security teams grapple with the new vulnerabilities AI adoption brings, implementation of AI-based cyber tools could also help organizations that are struggling to meet growing threats.
