Over the past 6–8 months, researchers at my company discovered vulnerabilities across multiple AI tools that allowed external bad actors to steal data, exploit AI browsers, or poison the core memories of AI systems. As we responsibly disclosed these flaws, we found that AI vendors almost universally told us, “It’s not our problem.”
In their view, they were responsible for the security of the AI model itself. Any vulnerability outside that strict definition, such as prompt injections, website vulnerabilities, user error, or problems in browser security, was “out of scope” of their security programs.
The problem, however, is that we don’t live in such a clean, compartmentalized world.
Security vulnerabilities in modern software systems often span multiple layers of the traditional security stack and frequently involve both software flaws and user behavior. These responses highlighted the lack of clarity around who bears responsibility for fixing emerging vulnerabilities in AI and agentic systems.
What we need, in short, is a shared responsibility model for AI.
This type of shared responsibility model is not a new concept. In fact, when organizations first began migrating workloads off premises, there was real confusion about who was responsible when something went wrong. Was the cloud provider liable for a breach? Or the customer using the platform? The answer, it turned out, was both. From that confusion emerged the shared responsibility model for public cloud security.
Under the shared responsibility model, cloud providers own the security of the cloud: the infrastructure, hardware, and virtualization layers that power the service. Customers own security in the cloud: their data, identities, configurations, and applications. Some areas fall into gray zones that shift depending on the service model. IaaS gives customers more control and therefore more responsibility. PaaS shifts more to the provider. SaaS shifts it further still. The framework works because it clarifies expectations and acknowledges that both sides have a role to play.
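To make that shifting boundary concrete, the split can be written down as a simple lookup table. The sketch below is illustrative only; the layer names and assignments are a simplification, not any provider's official responsibility matrix.

```python
# Illustrative only: a simplified sketch of how ownership of common
# security layers shifts across cloud service models. Layer names and
# assignments are our shorthand, not any provider's official matrix.

RESPONSIBILITY = {
    # layer:               (IaaS,        PaaS,        SaaS)
    "physical_datacenter": ("provider",  "provider",  "provider"),
    "virtualization":      ("provider",  "provider",  "provider"),
    "operating_system":    ("customer",  "provider",  "provider"),
    "application":         ("customer",  "customer",  "provider"),
    "identity_and_access": ("customer",  "customer",  "customer"),
    "data":                ("customer",  "customer",  "customer"),
}

MODEL_INDEX = {"IaaS": 0, "PaaS": 1, "SaaS": 2}

def owner(layer: str, model: str) -> str:
    """Who secures a given layer under a given service model."""
    return RESPONSIBILITY[layer][MODEL_INDEX[model]]

# The same layer changes hands depending on the service model:
print(owner("operating_system", "IaaS"))  # customer
print(owner("operating_system", "PaaS"))  # provider
```

The point is not the exact rows but that every row has an owner, and the owner changes predictably with the service model.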
Artificial intelligence now faces the same challenge. The industry needs a shared responsibility model for AI before security gaps widen further.
Currently, AI vendors frequently disclaim responsibility for real-world attacks against their systems, categorizing incidents as “social engineering” or user misuse rather than security vulnerabilities. That framing may be technically defensible, but as AI becomes embedded in browsers, development environments, enterprise workflows, and autonomous agents, it’s increasingly untenable.
AI is a platform, with plugins, extensions, memory systems, agents, and automation capabilities. And just as with the cloud, AI vendors control critical parts of the stack that can introduce systemic risk. The cloud industry already solved this problem conceptually. The same structure applies to AI platforms across three layers, sketched in code below:
AI as Software (SaaS): When users interact with AI through standalone applications — chatbots, AI assistants — they’re consuming it as a software service. The vendor is responsible for platform integrity, interface safety, and backend infrastructure. The customer is responsible for how they use it, what data they input, and access management.
AI as a Platform (PaaS): AI browsers, copilots, plugin ecosystems, and tool integrations represent a platform layer. Vendors provide frameworks that allow third-party tools and extensions. Security responsibilities must be shared: vendors must enforce sandboxing, permissions, and safe APIs, while customers must vet extensions, manage integrations, and implement governance policies.
AI Infrastructure (IaaS): At the deepest level, organizations are building autonomous agents, workflows, and AI applications on top of models. Here, the deploying organization bears responsibility for system design, data flows, and operational security. The model provider remains responsible for underlying infrastructure, model integrity, and platform safeguards.
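The same kind of table can be drafted for AI, directly mirroring the three layers above. Again, this is a sketch of the proposed model, not an industry standard; the layer names and responsibility lists are shorthand for the descriptions above.

```python
# A sketch of the proposed AI shared responsibility model, mirroring
# the three layers above. Names are our shorthand, not a standard.

AI_SHARED_RESPONSIBILITY = {
    "ai_as_software": {      # chatbots, assistants (SaaS-like)
        "vendor":   ["platform integrity", "interface safety", "backend infrastructure"],
        "customer": ["usage", "input data", "access management"],
    },
    "ai_as_platform": {      # AI browsers, copilots, plugin ecosystems (PaaS-like)
        "vendor":   ["sandboxing", "permission enforcement", "safe APIs"],
        "customer": ["extension vetting", "integration management", "governance policies"],
    },
    "ai_infrastructure": {   # agents and apps built on top of models (IaaS-like)
        "vendor":   ["underlying infrastructure", "model integrity", "platform safeguards"],
        "customer": ["system design", "data flows", "operational security"],
    },
}

# A disclosed vulnerability should land in exactly one cell of this
# table rather than in a vendor's "out of scope" pile:
for layer, owners in AI_SHARED_RESPONSIBILITY.items():
    print(layer, "-> vendor:", owners["vendor"][0], "| customer:", owners["customer"][0])
```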
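At the platform layer in particular, the vendor-side half of that split can be made concrete as a permission gate in front of every plugin or tool call. The sketch below is hypothetical; the manifest fields, scope names, and functions are invented for illustration and do not correspond to any vendor's actual API.

```python
# Hypothetical sketch of a vendor-side permission gate for third-party
# AI plugins. All names are illustrative, not a real vendor API.

PLATFORM_SCOPES = {"read_page", "summarize"}  # scopes the platform permits at all

def run_sandboxed(entry_point: str, scope: str, payload: str) -> str:
    # Stand-in for real isolation (separate process, WASM, seccomp, ...).
    return f"{entry_point} handled '{scope}' on {len(payload)} bytes"

def invoke_plugin(manifest: dict, scope: str, payload: str) -> str:
    # Vendor responsibility: reject any call outside the scopes the
    # plugin declared at install time and the platform itself allows.
    if scope not in set(manifest.get("scopes", [])):
        raise PermissionError(f"plugin never declared scope '{scope}'")
    if scope not in PLATFORM_SCOPES:
        raise PermissionError(f"platform does not permit scope '{scope}'")
    # Customer responsibility starts here: choosing which plugins to
    # install and what data may flow into them.
    return run_sandboxed(manifest["entry_point"], scope, payload)

manifest = {"name": "summarizer", "scopes": ["read_page", "summarize"],
            "entry_point": "summarize_main"}
print(invoke_plugin(manifest, "summarize", "<html>...</html>"))  # allowed
# invoke_plugin(manifest, "fetch_url", "http://x")  # raises PermissionError
```

The exact mechanism matters less than the ownership: enforcing the gate is the vendor's job; deciding what passes through it is the customer's.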
The cloud era taught us that security frameworks must evolve alongside technology, and AI has reached the same inflection point cloud computing hit 10–15 years ago. If we wait for major AI-driven breaches before defining responsibility boundaries, we’re only repeating mistakes we’ve already learned how to avoid.
We know how this story ends when the industry moves too slowly. This time, let’s get ahead of it.