A survey of 500 security practitioners and decision-makers across the United States and Europe published today finds cyberattacks aimed at artificial intelligence (AI) applications are rising, with prompt injections involving large language models (LLMs) at the top of the list (76%), followed by vulnerable LLM code (66%) and LLM jailbreaking (65%).
Conducted by Traceable by Harness, a provider of a platform for building applications and securing application programming interfaces (APIs), the survey also finds 63% of security practitioners have no way to tell where LLMs are in use across their organization, with three-quarters (75%) expecting shadow AI to eclipse security issues previously caused by shadow adoption of IT technologies.
Nearly as many (72%) said shadow AI is a gaping chasm in their security posture. A full 82% added that AI applications are the new frontier for cybercriminals, with 75% admitting AI security threats are “a whole new kettle of fish” they have never had to consider before.
Adam Arellano, Field CTO for Traceable by Harness, said the survey results make it clear that many AI applications are not being built and deployed using best application security practices. For example, only 43% of respondents said application developers ensure AI applications are always built with security capabilities built in. Just as troubling, only just over a third (34%) let security teams know about a project before they get started building an application. More than half (53%) said they will be notified before an application is deployed in a production environment, while 14% said their teams will only be notified after the application is deployed or a security incident has occurred.
Additionally, most security teams still lack real time visibility into a software bill of materials for AI components (AI-BOM) (63%) and LLM Model outputs (60%).
Nearly three quarters (74%) said application developers still view security issues and concerns as a blocker to AI innovation.
Ultimately, it’s now more a question of what level of severity cybersecurity incidents involving AI applications will reach as the rate they are developed continues to outpace the ability of cybersecurity teams to keep up, noted Arellano. In fact, the survey notes that 61% of new enterprise applications are now designed with AI components in mind and 70% of respondents noted that the APIs used to invoke LLMs are accessing sensitive data.
Most of the LLMs being relied on today to generate code were created using examples of flawed code pulled from across the web. In the absence of any best practices for governing usage of AI coding tools, the more code generated only results in more vulnerabilities being created. The challenge then becomes finding and remediating those vulnerabilities before cybercriminals are able to exploit them. Unfortunately, cybercriminals are also now starting to use AI tools to discover vulnerabilities using exploits that are now able to create faster than ever.
No matter how concerned security professionals are about AI code, the genie at this point is not going back in the bottle. The issue then becomes finding a way to encourage application developers to exercise more diligence by, for example, scanning AI code for vulnerabilities before applications are deployed in production environments. In the meantime, cybersecurity teams should prepare for the worst while continuing to hope their worst fears are not fully realized.
Recent Articles By Author
