August 6, 2025
4 Min Read
AI presents an incredible opportunity for organizations even as it expands the attack surface in new and complex ways. For security leaders, the goal isn't to stop AI adoption but to enable it securely.
Artificial Intelligence is no longer on the horizon; it's here, and it's being built and deployed in the cloud at a staggering pace. From leveraging managed services like Microsoft Azure Cognitive Services and Amazon SageMaker to building custom models on cloud infrastructure, organizations are racing to unlock the competitive advantages of AI.
But this rush to adoption brings a new, high-stakes set of security challenges. The Tenable Cloud AI Risk Report 2025 reveals that the very platforms enabling this revolution are also introducing complex and often overlooked risks.
Our analysis uncovered a stark reality: AI workloads are significantly more vulnerable than their non-AI counterparts. Fully 70% of cloud workloads with AI software installed have at least one critical, unpatched vulnerability, compared with 50% of workloads without AI software. In other words, your most innovative projects are often also your least secure.
One of the most significant challenges stems from the way managed AI services are built. Cloud providers often layer new AI services on top of existing infrastructure components, a concept we call "Jenga-style" architecture. For example, a managed notebook service might be built on a container service, which in turn runs on a virtual machine.
The problem? Risky defaults and misconfigurations can be inherited from these underlying layers, often without the user's knowledge. This creates a complex and opaque stack of permissions and settings that is incredibly difficult to secure. A default setting that allows root access on an underlying compute instance, for example, could be inherited by the AI service, creating a critical security flaw that isn't visible in the AI service's top-level configuration.
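To make this concrete, here is a minimal sketch of what surfacing one such inherited default can look like. It assumes you run Amazon SageMaker notebook instances and have AWS credentials configured for boto3, and it simply flags instances whose underlying compute still permits root access (the service's default unless you disable it when the instance is created). It's an illustrative check of a single setting, not the methodology behind our report.

```python
# Illustrative check: flag Amazon SageMaker notebook instances whose
# underlying compute still permits root access, one example of a risky
# default inherited from the layers beneath a managed AI service.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

sagemaker = boto3.client("sagemaker")

paginator = sagemaker.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for summary in page["NotebookInstances"]:
        name = summary["NotebookInstanceName"]
        # The list call omits RootAccess, so fetch each instance's full configuration.
        detail = sagemaker.describe_notebook_instance(NotebookInstanceName=name)
        if detail.get("RootAccess", "Enabled") == "Enabled":
            print(f"[RISK] {name}: root access enabled on the underlying instance")
        else:
            print(f"[OK]   {name}: root access disabled")
```

If a flagged notebook instance doesn't actually need root, the setting can typically be switched to Disabled on a stopped instance; the same pattern of enumerating the layers beneath a managed AI service extends to other inherited settings.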
Our research found specific, risky defaults in several popular services; the full report covers them in detail.
For security leaders, the goal isn't to stop AI adoption but to enable it securely. This requires a proactive and AI-aware security strategy. Here are four recommendations:
AI presents an incredible opportunity, but it also expands the attack surface in new and complex ways. By understanding these unique risks and applying foundational cloud security principles, you can ensure your organization's journey into AI is both innovative and secure.
Discover the full scope of AI and cloud risks in our latest reports.
➡️ Download the Tenable Cloud AI Risk Report 2025 to learn more.
➡️ Download the Tenable Cloud Security Risk Report 2025
➡️ View our on-demand research webinar
JENGA® IS A REGISTERED TRADEMARK OWNED BY POKONOBE ASSOCIATES.
As Senior Director of Cloud Security Marketing, Justin leads the go-to-market strategy for Tenable's Cloud Native Application Protection Platform. Drawing on his background in IT and his passion for deeply understanding customers' desired outcomes, Justin creates opportunities for Tenable to partner with customers and redefine the future of cybersecurity.