August 27, 2025
4 Min Read
Google, with its unparalleled visibility into Gemini, recently alerted its legion of Gmail users about indirect prompt attacks, which exploit AI context sources like emails, calendar invites and files. Coming from a major AI vendor, the frank and direct public alert leaves no doubt that organizations face a tangible AI security threat from this type of attack against generative AI tools.
While the risk from indirect prompt attacks has been discussed publicly for many months, we believe that a turning point in the conversation occurred when Google publicly and candidly warned its vast Gmail user base about this critical AI threat.
“As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures,” Google alerted its users in the June blog “Mitigating prompt injection attacks with a layered defense strategy.”
That blog has generated ample discussion, due to the significance of a major AI vendor acknowledging such a widespread risk and emphasizing that the threat has never been more urgent.
Prompt injection is a type of attack where malicious instructions are inserted directly into an AI model’s input to override its intended behavior and achieve a malicious objective. A more manipulative variant, called indirect prompt injection, plants those instructions within external content - outside the user’s prompt such as files, emails, or web pages, that the model later uses as context. This allows attackers to manipulate the model’s outputs without the user’s awareness.
Anyone outside your organization can easily send an email, calendar invite, or file containing such adversarial prompts. Once processed by a generative AI tool such as Google’s Gemini AI, these malicious prompts can lead to data exfiltration, output manipulation, workflow hijacking, or even harmful content generation.
The most concerning aspect? Neither the user nor security teams may realize it’s happening. This is amplified by the fact that very few organizations have any semblance of AI security controls in place, so the industry is mostly relying on people and process to detect, prevent and address this type of attack.
Imagine this: a malicious calendar invite arrives from outside the organization. Hidden in the invite is a prompt injection directing the LLM to falsify the company’s revenue, multiplying it by 10x every time it’s queried.
Now imagine the C-suite, finance, and accounting teams relying on Gemini or a similar tool to prepare quarterly reports - a growing reality among Fortune 500 companies eager to accelerate operations. The result? The company could inadvertently publish fraudulent revenue numbers, triggering disastrous financial and reputational consequences.
This is not limited to Gemini. Any context-aware AI system is vulnerable.
An Outlook email may look like a standard push notification, but invisible characters could jailbreak the AI model when Microsoft Copilot references the email. The jailbreak could force the model to recommend a specific tool, regardless of its original purpose. That tool could then be abused to transfer money from a company account to an attacker’s account.
This is not just phishing. It’s the 2025 evolution of ransomware: AI-driven, invisible and context-based. This makes it critical to understand which AI systems are in your environment, which data is connected to them, what types of user interactions they involve, and which actions they can take.
Context injection is uniquely challenging for several reasons:
Google took a responsible step by warning its users about this threat, surfacing context-poisoning techniques and outlining concrete steps it’s taking to harden Gemini’s security.
But history shows us that simpler adversarial techniques have repeatedly bypassed vendor guardrails. That’s where specialized solutions come in.
With Tenable AI Exposure, part of the Tenable One Exposure Management Platform, we scan AI context sources directly -- identifying injected data before it ever reaches the model. By preventing poisoned context from being processed in the first place, we stop severe security incidents before they happen.
Learn more about how Tenable protects your organization from context injection attacks.
Keren Katz is a leader in AI and cybersecurity, specializing in generative AI threat detection. She is currently a Senior Group Manager of Product, Threat Research, and AI at Tenable, following the acquisition of Apex, where she previously led security detection. Keren also led product at Sygnia - an IR and MXDR security company. She has a background in special operations deep-tech, software engineering, and in founding an ML-based startup. Keren is also a key contributor to the OWASP Foundation and a global speaker on AI, security and product management.
As Tenable’s Chief Security Officer, Head of Research and President of Tenable Public Sector, LLC, Robert Huber oversees the company's global security and research teams, working cross-functionally to reduce risk to the organization, its customers and the broader industry. He has more than 25 years of cyber security experience across the financial, defense, critical infrastructure and technology sectors. Prior to joining Tenable, Robert was a chief security and strategy officer at Eastwind Networks. He was previously co-founder and president of Critical Intelligence, an OT threat intelligence and solutions provider, which cyber threat intelligence leader iSIGHT Partners acquired in 2015. He also served as a member of the Lockheed Martin CIRT, an OT security researcher at Idaho National Laboratory and was a chief security architect for JP Morgan Chase. Robert is a board member and advisor to several security startups and served in the U.S. Air Force and Air National Guard for more than 22 years. Before retiring in 2021, he provided offensive and defensive cyber capabilities supporting the National Security Agency (NSA), United States Cyber Command and state missions.
Enter your email and never miss timely alerts and security guidance from the experts at Tenable.