The CISO’s Guide to Model Context Protocol (MCP)
好的,我现在需要帮用户总结一篇文章的内容,控制在100字以内,而且不需要用“文章内容总结”之类的开头。用户提供的文章是关于Model Context Protocol (MCP) 的安全风险和应对策略。 首先,我得通读整篇文章,抓住主要观点。文章讨论了MCP在工程团队中的快速采用,但安全领导者对此持谨慎态度。MCP不仅是一个新的集成标准,还引入了机器之间的动态自主通信,扩大了攻击面,并带来了传统安全工具无法应对的新威胁。 接下来,文章提到了几个关键点:攻击面扩大、权限过度、缺乏可见性、新威胁类型(如检索代理欺骗)、以及应对策略,比如原则最小权限、威胁建模、沙盒测试、社区协作等。 我需要将这些要点浓缩到100字以内。重点应放在MCP带来的风险和安全领导者需要采取的措施上。确保涵盖攻击面扩大、新威胁类型以及应对策略如原则最小权限和社区协作。 现在组织语言:先说明MCP带来的变化和风险,然后提到安全领导者的应对措施,包括原则最小权限、测试、社区工具等。 最后检查字数是否在限制内,并确保表达清晰简洁。 </think> 文章探讨了Model Context Protocol (MCP) 在工程团队中的快速采用及其带来的安全风险。尽管MCP为创新提供了巨大潜力,但其动态机器间通信显著扩大了攻击面,并引入了传统安全工具无法应对的新威胁。文章指出,安全领导者需采取“最低权限原则”审查AI代理权限,并通过沙盒测试和社区协作来应对新兴风险。 2025-10-30 14:26:35 Author: securityboulevard.com(查看原文) 阅读量:7 收藏

As engineering teams race to adopt the Model Context Protocol (MCP) to harness the power of agentic AI, a more cautious conversation dominates security leaders’ mindshare. While the potential for innovation is clear, the primary question for CISOs and CIOs is more fundamental: how are we going to manage the growing risk? 

The answer is complex because MCP represents more than just a new integration standard. It creates a dynamic and autonomous layer of machine-to-machine communication that significantly expands an organization’s attack surface. This brings a new class of threats that traditional security tools, built for predictable human interactions, were simply not designed to tackle. 

Cruise Con 2025

With traditional APIs, say, we secured a predictable entry point—the door. But with MCP, we have to secure the ghost in the machine. Because the biggest risk is no longer just unauthorized access, but an authorized agent making an unforeseen, catastrophic decision. 

And so, for security leaders, successfully navigating this escalating landscape requires a clear-eyed understanding of these emerging risks and a pragmatic strategy for enterprise adoption. 

An Expanded Attack Surface 

The first reality for security leaders to confront is that while MCP is designed to break down data silos, it also dramatically expands the organization’s attack surface. Priyanka Tembey, Co-Founder and CTO at Operant, explains that each new tool or data source connected via the protocol brings its own unique set of compliance requirements and operational risks into a now-interconnected ecosystem. This creates two primary challenges that are top-of-mind for today’s CISOs and Chief AI Officers. 

The first is the significant risk of overprivileged agent access. In the rush to enable functionality, engineering teams may grant AI agents broader permissions than are strictly necessary for their tasks. Tembey warns that this common mistake dramatically increases the potential impact if an agent is compromised, as a single rogue agent could access a wide array of connected systems and data sources. 

The second major concern is the lack of visibility and auditability. Tembey notes that the dynamic, machine-to-machine communication common in agentic workflows often bypasses traditional monitoring tools that are built to track predictable, human-driven interactions. This creates a dangerous visibility gap, making it difficult for security teams to detect anomalies, audit agent behavior for compliance, or trace the origin of a security incident. For security leaders, this means a proactive threat modeling exercise is a non-negotiable first step in any MCP initiative. 

“For us, this expanded attack surface is fundamentally a testing challenge,” comments Sai Krishna, Director of Engineering at LambdaTest, an AI-native software testing platform. “Because AI agents operate dynamically, you can’t just run a traditional security scan and call it a day. We see the solution as providing sandboxed, instant infrastructure for every agent interaction.” This allows security teams to rigorously test agent permissions and behavior in an isolated environment before they ever touch production data, effectively shifting security testing left for the new AI stack. 

But while this sandboxed approach provides the necessary isolation, the challenge can sometimes be more profound than simply containing risk. Because the focus must also shift to validating the agent’s autonomous reasoning within those environments. 

Srinivasan Sekar, also a Director of Engineering at LambdaTest who works alongside Sai Krishna and oversees development for Kane AI (an end-to-end testing agent), expands on this by saying the issue is systemic. “The real change here is that we’re not just testing applications anymore. We’re also testing autonomous decision-makers that can connect different tools and data sources in ways we can’t fully predict when we design them. Conventional security testing assumes a limited number of execution paths; however, agentic systems introduce computational complexity that escalates exponentially with each MCP connection. At LambdaTest, we treat every interaction with an agent as a possible security breach until we can prove otherwise. We have set up systems that can create thousands of temporary test environments at the same time. Each one is set up to record not only what the agent does, but also why it made that choice based on its surroundings and the tools it had at its command.” This level of observability changes security from a checkpoint to a feedback loop that keeps going, which makes agents more trustworthy over time. 

For CISOs, this provides two immediate actions to take. First, mandate a “Principle of Least Privilege” (PoLP) review for every new AI agent before it is deployed, ensuring its permissions—which should be defined in clear API Specifications—are scoped to the narrowest possible function. Second, initiate a “threat-informed validation” program. It involves creating a library of simulated attack scenarios—such as an agent attempting to escalate privileges or access unauthorized data—and continuously running them against agents in a sandboxed test environment. This proactive approach allows security teams to find and fix vulnerabilities before they can be exploited in the production ecosystem. 

A Framework for Safe Adoption (Start Internally) 

Given these challenges, the appropriate response for a security-conscious enterprise is not to block the technology, but to adopt them within a controlled, risk-aware framework. Loïc Berthou, CTO of Qlarifi, offers a pragmatic perspective based on his experience in risk and compliance. He argues that while MCP is a valuable standard for thinking about the future of AI-native APIs, it is not yet mature enough to handle highly sensitive information or business-critical workflows, pointing to gaps in robust security and encryption capabilities. 

This assessment leads to a clear strategic recommendation: a “crawl-walk-run” approach that begins with internal, low-risk experimentation. Berthou advises that organizations should first limit the use of MCP to internal “dog-fooding” on very specific and narrow use cases. The primary goal of this strategy is to deliberately limit the “threat surface” that is exposed to the AI agent, allowing security and engineering teams to learn the protocol’s nuances in a contained environment. 

A perfect example of a safe and effective first step is to expose internal technical documentation via an MCP server. This allows an AI agent to provide up-to-date information to developers, delivering an immediate productivity benefit to the engineering team. Crucially, this use case involves non-sensitive data and is contained entirely within the organization, providing a high-value, low-risk project to build expertise and test security controls before ever considering more critical, external-facing applications. 

Sai Krishna of LambdaTest agrees, noting that this phased approach must be matched with increasingly rigorous testing. “This ‘crawl-walk-run’ model is exactly how we approach validation in this new paradigm. It’s not just about starting with internal data; it’s about scaling the rigor of testing at each step. On our AI-native testing platform, this means an agent might start with simple functional tests against a documentation server. But before it ‘runs,’ it must graduate to full-scale performance and security validation across thousands of sandboxed environments. This ensures that by the time an agent is interacting with critical systems, its behavior is not just functional, but predictable and secure.” 

And so to implement this framework, security leaders can create a formal “AI Use Case Risk Matrix.” This matrix should classify all potential MCP projects based on two axes: data sensitivity and business criticality. This provides a clear, data-driven methodology for approving projects, ensuring that “crawl” phase initiatives are limited to low-risk quadrants. In parallel, leaders can establish a “Graduated Testing Protocol” that maps mandatory security validation requirements—from basic vulnerability scans to full-scale red-teaming exercises—to each risk level. This ensures that as an agent’s access and importance grows, so does the rigor of its security testing. 

What Leaders Are Missing 

Beyond the immediate architectural risks, a successful MCP security strategy must also account for a new and more sophisticated class of threats that are not yet widely discussed. Tembey of Operant warns that security leaders need to look beyond the known threat landscape and prepare for novel, AI-specific attacks that could bypass traditional defenses entirely. 

One of the most insidious of these is what she terms “retrieval-agent deception.” This attack involves poisoning public or third-party datasets with hidden, malicious MCP commands. When a well-meaning AI agent retrieves and processes this poisoned data to formulate a response, it may unknowingly execute the embedded commands, creating a subtle but powerful supply-chain attack vector that is incredibly difficult to detect. 

Tembey also points to long-term strategic risks, such as “quantum-prepared attacks,” where adversaries collect encrypted MCP traffic today with the intention of decrypting it years from now with future quantum computers. Internally, she highlights the growing governance challenge of “Shadow AI,” which occurs when developers, in their eagerness to innovate, connect agents to unapproved tools or data sources, bypassing critical security and compliance reviews and creating significant organizational risk. For security leaders, this evolving landscape means that threat modeling for AI cannot be a one-time event; it must be a continuous, forward-looking process. Srinivasan Sekar says, “These new threats show that security for AI agents can’t be an afterthought; it has to be a part of the development lifecycle”. “What’s missing is a focus on ongoing validation. We think that as a quality engineering platform, you should always try to mimic these new attacks in a safe test environment. Because this lets engineering teams make strong agents that can find and reject bad data or let them know when they’re being asked to do something they shouldn’t. It’s about going from passive defence to active, automated security checks.

For security leaders, the immediate action is to formally update the organization’s threat modeling process. CISOs should mandate that all security reviews now include a dedicated section for “AI-Specific Attack Vectors,” explicitly requiring teams to assess the risks of retrieval-agent deception and Shadow AI. Furthermore, this new intelligence must be fed directly into the testing cycle by creating an “adversarial simulation pipeline.” This involves building a suite of automated tests that actively try to trick agents with poisoned data or probe for connections to unsanctioned tools, turning the security team’s forward-looking threat intelligence into an automated, preventative control.

Hardening Defenses From Runtime Awareness to Community Tools

Defending against this new threat landscape requires a security model that moves beyond static, perimeter-based controls. Tembey of Operant argues for the adoption of “runtime-aware defenses,” a strategy designed for the dynamic nature of agentic AI. Because many new risks live inside the agent’s logic, prompts, and tool responses, she explains that defenses must operate in real time at this new layer. This includes the continuous monitoring of agentic workflows to detect anomalous behavior, the inline redaction of sensitive data before it reaches a tool, and the use of adaptive internal firewalls to block unauthorized data transfers at network egress points.

For CISOs, this means the first actionable step is to begin evaluating a new category of security solutions, which can be thought of as “Agentic Security Posture Management” platforms. The immediate priority is to issue RFIs for tools that provide real-time visibility into agent behavior and can enforce data redaction and egress policies dynamically. This shifts the security budget from a purely preventative posture to one that includes robust, real-time detection and response capabilities tailored for AI.

While building this internal defense is the critical first step, security leaders recognize that no single organization can defend in isolation, which is why community collaboration has become essential. Sai Krishna stresses how important it is to help keep the ecosystem safe for everyone. His team has open-sourced a tool called Secure Hulk in addition to building their own secure architecture.

“We made Secure Hulk because we knew that MCP security can’t be a competitive advantage; it has to be a shared responsibility,” he posits. “This tool lets any organization scan its MCP servers for common vulnerabilities, which lets the whole community find and fix problems before they happen. The whole ecosystem becomes stronger when everyone’s defenses are stronger.”

This highlights a clear directive for security leaders: formally dedicate resources to “Open Source Security Engagement.” A practical implementation of this is to assign a percentage of a security engineer’s time specifically to vetting, contributing to, and adopting community-vetted tools. By making community participation a formal part of the security program, organizations can leverage the collective expertise of the industry to harden their own defenses.

This community-driven approach also extends beyond shared tools to the even more powerful concept of shared intelligence. Sekar says that this community-driven approach goes beyond tools to include shared threat intelligence. “We’re seeing attack patterns that no one company could figure out on their own because they don’t have enough data. We’re building a collective immune system for agentic AI by giving back anonymised telemetry and vulnerability signatures to the community.” This two-pronged approach provides CISOs with a clear and long-term plan for dealing with this new frontier by adding money into advanced internal defences and participating in community-led security efforts.

The final, crucial action for CISOs is to operationalize this exchange of threat intelligence. This means joining industry-specific groups, such as an ISAC (Information Sharing and Analysis Center), and establishing a formal process for contributing anonymized telemetry from internal agentic systems. By actively participating in this collective immune system, organizations not only strengthen the entire ecosystem but also gain early warnings of emerging threats, allowing them to adapt their internal defenses before they are targeted.

Guiding, Not Gating, the AI Frontier

For security leaders, the rise of the Model Context Protocol represents a critical inflection point. This technology offers undeniable transformative potential, but it also fundamentally alters the enterprise threat landscape in ways that require a new security approach. The path forward is not to block this innovation, but to guide it with a pragmatic, risk-based strategy.

And this begins with the cautious approach of starting with contained, internal experiments to limit the initial threat surface and build institutional knowledge. It then requires investment in the new class of runtime-aware defenses needed to monitor dynamic agentic behavior, and active engagement with the broader community to develop and share collective security tools. By embracing this proactive and adaptive security posture, CISOs can transform their role from gatekeepers of the old paradigm to essential architects of a secure and innovative AI-native future.


文章来源: https://securityboulevard.com/2025/10/the-cisos-guide-to-model-context-protocol-mcp/
如有侵权请联系:admin#unsafe.sh