Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes
好的,我现在需要帮用户总结这篇文章的内容,控制在100字以内,而且不需要用“文章内容总结”或“这篇文章”这样的开头。直接写描述即可。 首先,我快速浏览文章,发现主要讲的是AI驱动的浏览器在执行跨网站操作时可能被钓鱼攻击利用。攻击者通过分析AI浏览器的实时信息和行为模式,生成优化的钓鱼页面,诱使AI执行恶意操作。这使得攻击目标从用户转向了AI本身,增加了安全风险。 接下来,我需要提取关键点:AI浏览器、跨网站操作、钓鱼攻击、实时信息暴露、优化钓鱼页面、目标转向AI。然后把这些点浓缩成简洁的句子。 最后,确保字数在100字以内,并且表达清晰准确。 </think> 文章指出,AI驱动的浏览器在跨网站自主操作时可能暴露实时信息和行为模式,使攻击者能够生成并优化钓鱼页面,诱使AI执行恶意操作。这种攻击方式将目标从用户转向AI本身,增加了网络诈骗的风险。 2026-3-11 16:38:0 Author: thehackernews.com(查看原文) 阅读量:1 收藏

Artificial Intelligence / Browser Security

Agentic web browsers that leverage artificial intelligence (AI) capabilities to autonomously execute actions across multiple websites on behalf of a user could be trained and tricked into falling prey to phishing and scam traps.

The attack, at its core, takes advantage of AI browsers' tendency to reason their actions and use it against the model itself to lower their security guardrails, Guardio said in a report shared with The Hacker News ahead of publication.

"The AI now operates in real time, inside messy and dynamic pages, while continuously requesting information, making decisions, and narrating its actions along the way. Well, 'narrating' is quite an understatement - It blabbers, and way too much!," security researcher Shaked Chen said.

"This is what we call Agentic Blabbering: the AI Browser exposing what it sees, what it believes is happening, what it plans to do next, and what signals it considers suspicious or safe."

By intercepting this traffic between the browser and the AI services running on the vendor's servers and feeding it as input to a Generative Adversarial Network (GAN), Guardio said it was able to make Perplexity's Comet AI browser fall victim to a phishing scam in under four minutes.

The research builds on prior techniques like VibeScamming and Scamlexity, which found that vibe-coding platforms and AI browsers could be coaxed into generating scam pages or carrying out malicious actions via hidden prompt injections. In other words, with the AI agent handling the tasks without constant human supervision, there arises a shift in the attack surface wherein a scam no longer has to deceive a user. Rather, it aims to trick the AI model itself.

"If you can observe what the agent flags as suspicious, hesitates on, and more importantly, what it thinks and blabbers about the page, you can use that as a training signal," Chen explained. "The scam evolves until the AI Browser reliably walks into the trap another AI set for it."

The idea, in a nutshell, is to build a "scamming machine" that iteratively optimizes and regenerates a phishing page until the agentic browser stops complaining and proceeds to carry out the threat actor's bidding, such as entering a victim's credentials on a bogus web page designed for carrying out a refund scam.

What makes this attack interesting and dangerous is that once the fraudster iterates on a web page until it works against a specific AI browser, it works on all users who rely on the same agent. Put differently, the target has shifted from the human user to the AI browser.

"This reveals the unfortunate near future we are facing: scams will not just be launched and adjusted in the wild, they will be trained offline, against the exact model millions rely on, until they work flawlessly on first contact," Guardio said. "Because when your AI Browser explains why it stopped, it teaches attackers how to bypass it."

The disclosure comes as Trail of Bits demonstrated four prompt injection techniques against the Comet browser to extract users' private information from services like Gmail by exploiting the browser's AI assistant and exfiltrating the data to an attacker’s server when the user asks to summarize a web page under their control.

Last week, Zenity Labs also detailed two zero-click attacks affecting Perplexity's Comet that use indirect prompt injection seeded within meeting invites to exfiltrate local files to an external server (aka PerplexedComet) or hijack a user's 1Password account if the password manager extension is installed and unlocked. The issues, collectively codenamed PerplexedBrowser, have since been addressed by the AI company.

This is achieved by means of a prompt injection technique referred to as intent collision, which occurs "when the agent merges a benign user request with attacker-controlled instructions from untrusted web data into a single execution plan, without a reliable way to distinguish between the two," security researcher Stav Cohen said.

Prompt injection attacks remain a fundamental security challenge for large language models (LLMs) and for integrating them into organizational workflows, largely because completely eliminating these vulnerabilities may not be feasible. In December 2025, OpenAI noted that such weaknesses are "unlikely to ever" be fully resolved in agentic browsers, although the associated risks could be reduced through automated attack discovery, adversarial training, and new system-level safeguards.

Found this article interesting? Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.


文章来源: https://thehackernews.com/2026/03/researchers-trick-perplexitys-comet-ai.html
如有侵权请联系:admin#unsafe.sh