Threat researchers recently disclosed a severe vulnerability in a Figma Model Context Protocol (MCP) server, as reported by The Hacker News. While the specific patch is important, the discovery itself serves as a critical wake-up call for every organization rushing to adopt AI. This incident provides a blueprint for a new class of attacks that target the very infrastructure powering the AI Agent Economy.
To understand the risk, we must first look at the mechanics of this emerging threat.
As businesses integrate AI agents, they need a way for these autonomous systems to communicate with existing applications. The Model Context Protocol (MCP) is an open protocol designed for exactly this purpose, enabling an AI agent to interact with tools like Figma to perform tasks such as creating designs, modifying components, and exporting assets.
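To make the mechanics concrete, here is a minimal sketch of what an MCP tool invocation looks like on the wire. MCP messages follow JSON-RPC 2.0, with tool executions requested via a `tools/call` method; the tool name `export_asset` and its arguments below are hypothetical illustrations, not taken from the actual Figma MCP server.

```python
import json

# Illustrative shape of an MCP tool invocation (JSON-RPC 2.0).
# The tool name "export_asset" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "export_asset",
        "arguments": {"node_id": "123:456", "format": "png"},
    },
}

# An agent serializes this and sends it over the MCP transport
# (typically stdio or HTTP); the server runs the tool and replies
# with a JSON-RPC result.
payload = json.dumps(request)
print(payload)
```

The key security observation is that nothing in this message distinguishes a legitimate agent from an attacker who has reached the same channel; whoever can send it gets the tool executed.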
While powerful, these MCP servers create new, often unmonitored, pathways into sensitive corporate applications. An attacker who can compromise this channel isn’t just bypassing a firewall; they are effectively impersonating a trusted AI agent to manipulate an application from the inside.
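One practical consequence of that "inside the perimeter" position is that MCP channels need their own guardrails. Below is a minimal, assumption-laden sketch of a server-side check: allowlist which tools an agent may call before execution. The tool names are hypothetical, and a real deployment would also authenticate the caller and log every call.

```python
# Minimal sketch of an MCP guardrail: only allowlisted tools may be
# invoked through the agent channel. Tool names are hypothetical.
ALLOWED_TOOLS = {"get_design", "export_asset"}

def is_allowed_tool_call(request: dict) -> bool:
    """Return True only for tools/call requests naming an allowlisted tool."""
    if request.get("method") != "tools/call":
        return False
    name = request.get("params", {}).get("name")
    return name in ALLOWED_TOOLS

# A permitted export passes; an unexpected destructive tool is rejected.
print(is_allowed_tool_call(
    {"method": "tools/call", "params": {"name": "export_asset"}}))   # True
print(is_allowed_tool_call(
    {"method": "tools/call", "params": {"name": "delete_project"}}))  # False
```

This does not stop abuse of a legitimately allowlisted feature, which is exactly the pattern the Figma exploit leveraged, but it narrows the unmonitored pathway to a known, auditable surface.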
The vulnerability enabled a practical exploit that abused the API’s intended functionality: at every step, the exploit chain rode the legitimate API channel, turning a trusted feature into a weapon.
This vulnerability makes concrete the exact concerns security and development teams already have about AI. Our latest 2025 State of API Security report found that a majority of organizations (56%) now view Generative AI as a growing security concern.
The reasons for that concern are directly illustrated by incidents like this one.
Despite these fears, the push for innovation is relentless. 62% of organizations have already adopted GenAI for some or all of their API development. This creates a dangerous gap between the speed of adoption and the maturity of security practices. Unsurprisingly, this leaves security teams feeling unprepared. The report found that only 15% are “very confident” in their ability to detect and respond to attacks that leverage AI.
This vulnerability is not an isolated incident; it’s a preview of what’s to come. As AI agent adoption grows, attacks against the APIs and protocols that connect them will become more common.
Protecting against this new threat requires a purpose-built approach. At Salt Security, our platform provides the deep context needed to secure your AI transformation by delivering complete visibility into all API traffic, including new AI agent and MCP channels. We help you proactively improve your security posture by identifying the same kinds of misconfigurations and vulnerabilities exploited in this attack. Most importantly, our AI-powered behavioral threat protection baselines the normal activity of your AI agents to pinpoint and block sophisticated attacks in real time, allowing you to innovate with AI securely.
If you want to learn more about Salt and how we can help you, please contact us, schedule a demo, or visit our website. You can also get a free API Attack Surface Assessment from Salt Security’s research team and learn what attackers already know.
*** This is a Security Bloggers Network syndicated blog from Salt Security blog authored by Eric Schwake. Read the original post at: https://salt.security/blog/anatomy-of-a-modern-threat-deconstructing-the-figma-mcp-vulnerability