We Need a Shared Responsibility Model for AI

Over the past 6-8 months, researchers at my company discovered vulnerabilities across multiple AI tools that allowed external bad actors to steal data, exploit AI browsers, or poison the core memories of AI systems. As we responsibly disclosed these flaws, we found that AI vendors almost universally told us, “It’s not our problem.”

In their view, they were responsible only for the security of the AI model itself. Any vulnerability outside that strict definition — prompt injection, website vulnerabilities, user error, problems in browser security — was “out of scope” for their security programs.

The problem, however, is that we don’t live in such a clean, compartmentalized world.

Security vulnerabilities in modern software systems often span multiple layers of the traditional security stack, and real-world exploits frequently combine software flaws with user behavior. These responses highlighted the lack of clarity around who bears responsibility for fixing emerging vulnerabilities and flaws in AI and agentic systems.

What we need, in short, is a shared responsibility model for AI.

Accountability for AI Risk: We’ve Already Done the Heavy Lifting

This type of shared responsibility model is not a new concept. In fact, when organizations first began migrating workloads off premises, there was real confusion about who was responsible when something went wrong. Was the cloud provider liable for a breach? Or the customer using the platform? The answer, it turned out, was both. From that confusion emerged the shared responsibility model for public cloud security.

Under the shared responsibility model, cloud providers own the security of the cloud: the infrastructure, hardware, and virtualization layers that power the service. Customers own security in the cloud: their data, identities, configurations, and applications. Some areas fall into gray zones that shift depending on the service model. IaaS gives customers more control and therefore more responsibility. PaaS shifts more to the provider. SaaS shifts it further still. The framework works because it clarifies expectations and acknowledges that both sides have a role to play.

Artificial intelligence now faces the same challenge. The industry needs a shared responsibility model for AI before security gaps widen further.

Currently, AI vendors frequently disclaim responsibility for real-world attacks against their systems, categorizing incidents as “social engineering” or user misuse rather than security vulnerabilities. That framing may be technically defensible, but as AI becomes embedded in browsers, development environments, enterprise workflows, and autonomous agents, it’s increasingly untenable.

Applying the Cloud Model to AI

AI is a platform, with plugins, extensions, memory systems, agents, and automation capabilities. And just as with the cloud, AI vendors control critical parts of the stack that can introduce systemic risk. The cloud industry has already solved this problem conceptually, and the same structure applies to AI platforms across three layers, sketched in code after the descriptions below:

AI as Software (SaaS): When users interact with AI through standalone applications — chatbots, AI assistants — they’re consuming it as a software service. The vendor is responsible for platform integrity, interface safety, and backend infrastructure. The customer is responsible for how they use it, what data they input, and access management.

AI as a Platform (PaaS): AI browsers, copilots, plugin ecosystems, and tool integrations represent a platform layer. Vendors provide frameworks that allow third-party tools and extensions. Security responsibilities must be shared: vendors must enforce sandboxing, permissions, and safe APIs, while customers must vet extensions, manage integrations, and implement governance policies.

AI Infrastructure (IaaS): At the deepest level, organizations are building autonomous agents, workflows, and AI applications on top of models. Here, the deploying organization bears responsibility for system design, data flows, and operational security. The model provider remains responsible for underlying infrastructure, model integrity, and platform safeguards.
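
To make the proposed split concrete, here is a minimal sketch in Python of how a governance or audit team might encode such a responsibility matrix as data. Everything here is illustrative: the `Party` enum, the `ResponsibilityMatrix` class, and the duty names are hypothetical constructs drawn from the layer descriptions above, not a standard or any vendor's actual API.

```python
from dataclasses import dataclass, field
from enum import Enum


class Party(Enum):
    """Who owns a given security duty (hypothetical labels)."""
    VENDOR = "vendor"
    CUSTOMER = "customer"
    SHARED = "shared"  # gray zones that a contract must spell out


@dataclass
class ResponsibilityMatrix:
    """Illustrative encoding of one layer of the AI shared responsibility model."""
    layer: str
    duties: dict[str, Party] = field(default_factory=dict)


# Duty assignments below mirror the three layers described above;
# they are examples, not negotiated boundaries.
AI_LAYERS = [
    ResponsibilityMatrix("AI as Software (SaaS)", {
        "platform integrity": Party.VENDOR,
        "interface safety": Party.VENDOR,
        "backend infrastructure": Party.VENDOR,
        "input data handling": Party.CUSTOMER,
        "access management": Party.CUSTOMER,
    }),
    ResponsibilityMatrix("AI as a Platform (PaaS)", {
        "sandboxing and safe APIs": Party.VENDOR,
        "permission enforcement": Party.VENDOR,
        "extension vetting": Party.CUSTOMER,
        "integration governance": Party.CUSTOMER,
        "plugin ecosystem risk": Party.SHARED,
    }),
    ResponsibilityMatrix("AI Infrastructure (IaaS)", {
        "model integrity": Party.VENDOR,
        "platform safeguards": Party.VENDOR,
        "agent and system design": Party.CUSTOMER,
        "data flows": Party.CUSTOMER,
        "operational security": Party.CUSTOMER,
    }),
]

if __name__ == "__main__":
    # Print each layer's duty assignments for review.
    for matrix in AI_LAYERS:
        print(matrix.layer)
        for duty, party in matrix.duties.items():
            print(f"  {duty}: {party.value}")
```

Encoding the matrix as data rather than prose has one practical benefit: anything tagged `SHARED` is immediately visible as a gray zone that a contract or security program must resolve explicitly, which is exactly where the cloud model earned its keep.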

The cloud era taught us that security frameworks must evolve alongside technology, and AI has reached the same inflection point cloud computing hit 10–15 years ago. If we wait for major AI-driven breaches before defining responsibility boundaries, we’re only repeating mistakes we’ve already learned how to avoid.

We know how this story ends when the industry moves too slowly. This time, let’s get ahead of it.

