New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts
2025-10-29 14:57 · thehackernews.com

Machine Learning / AI Safety

Cybersecurity researchers have flagged a new security issue in agentic web browsers like OpenAI's ChatGPT Atlas that exposes the underlying artificial intelligence (AI) models to context poisoning attacks.

In the attack devised by AI security company SPLX, a bad actor can set up websites that serve different content to regular browsers and to the AI crawlers run by ChatGPT and Perplexity. The technique has been codenamed AI-targeted cloaking.

The approach is a variation of search engine cloaking, which refers to the practice of presenting one version of a web page to users and a different version to search engine crawlers with the end goal of manipulating search rankings.


The only difference in this case is that attackers optimize for AI crawlers from various providers, using a trivial user-agent check to decide which version of the content gets delivered.

"Because these systems rely on direct retrieval, whatever content is served to them becomes ground truth in AI Overviews, summaries, or autonomous reasoning," security researchers Ivan Vlahov and Bastien Eymery said. "That means a single conditional rule, 'if user agent = ChatGPT, serve this page instead,' can shape what millions of users see as authoritative output."

SPLX said AI-targeted cloaking, while deceptively simple, can be turned into a powerful misinformation weapon that undermines trust in AI tools. By serving AI crawlers fabricated content in place of the actual page, an attacker can also introduce bias and influence the output of any system that leans on those signals.

"AI crawlers can be deceived just as easily as early search engines, but with far greater downstream impact," the company said. "As SEO [search engine optimization] increasingly incorporates AIO [artificial intelligence optimization], it manipulates reality."

The disclosure comes as the hCaptcha Threat Analysis Group (hTAG) said an analysis of browser agents against 20 of the most common abuse scenarios, ranging from multi-accounting to card testing and support impersonation, found that the products attempted nearly every malicious request without the need for any jailbreaking.

Furthermore, the study found that when an action was "blocked," it was mostly because the tool lacked a technical capability rather than because of safeguards built into it. ChatGPT Atlas, hTAG noted, has been found to carry out risky tasks when they are framed as part of debugging exercises.


Claude Computer Use and Gemini Computer Use, on the other hand, were found to be capable of executing dangerous account operations, such as password resets, without any constraints, with the latter also behaving aggressively when brute-forcing coupon codes on e-commerce sites.

hTAG also tested the safety measures of Manus AI, uncovering that it executes account takeovers and session hijacking without any issue, while Perplexity Comet runs unprompted SQL injection to exfiltrate hidden data.

"Agents often went above and beyond, attempting SQL injection without a user request, injecting JavaScript on-page to attempt to circumvent paywalls, and more," it said. "The near-total lack of safeguards we observed makes it very likely that these same agents will also be rapidly used by attackers against any legitimate users who happen to download them."



Source: https://thehackernews.com/2025/10/new-ai-targeted-cloaking-attack-tricks.html