Poisoning AI Assistants with Custom Fonts: An Attack Exploiting the Gap Between Rendering and the DOM
LayerX has disclosed a new attack technique that exploits the gap between what AI assistants read and what browsers render. Using custom fonts and CSS, attackers hide the page's real content so that it is visible to the user but invisible to AI safety checks. Testing showed that multiple AI assistants failed to detect the threat, underscoring the limits of AI security analysis. 2026-3-17 17:23:22 Author: cybersecuritynews.com

Simple Custom Font Rendering Can Poison ChatGPT, Claude, Gemini, and Other AI Systems

A novel attack technique exploits a fundamental blind spot in AI web assistants: the gap between what a browser renders for a user and what an AI tool actually reads from the underlying HTML.

Using nothing more than a custom font file and basic CSS, attackers can silently deliver malicious instructions to users while AI safety checks see only harmless content.

The attack, tested in December 2025, exploits a structural disconnect between a webpage’s DOM text and its visual rendering. When an AI assistant analyzes a webpage, it parses the raw HTML structure.

But the browser renders that same page through a visual pipeline, one that interprets fonts, CSS, and glyph mappings to produce what the user actually sees on screen. Attackers can weaponize the space between these two views.

LayerX demonstrated this by building a proof-of-concept page that appeared to visitors as a Bioshock video game fanfiction site. Hidden beneath that facade was a custom font acting as a visual substitution cipher.

The font was engineered to display the normal HTML text (the video game fanfiction) as 1-pixel, background-colored gibberish invisible to the user, while rendering a separate encoded payload as readable, large green text urging the user to execute a reverse shell on their own machine.
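The page structure described above can be sketched as follows. This is an illustrative reconstruction, not LayerX's actual proof of concept: a ROT13 mapping stands in for the font's glyph-substitution table, and the class names, font name, and payload text are hypothetical.

```python
from html.parser import HTMLParser

# Illustrative model of the PoC: a custom font acts as a substitution cipher,
# so the glyph drawn on screen differs from the character stored in the DOM.
# ROT13 stands in for the font's glyph remapping (cmap) table.
GLYPH_MAP = {c: chr((ord(c) - 97 + 13) % 26 + 97)
             for c in "abcdefghijklmnopqrstuvwxyz"}

def encode_for_font(visible_text: str) -> str:
    """Return the DOM characters that the custom font RENDERS as visible_text."""
    inverse = {v: k for k, v in GLYPH_MAP.items()}
    return "".join(inverse.get(c, c) for c in visible_text.lower())

page = f"""
<style>
  .decoy {{ font-size: 1px; color: #ffffff; }}   /* invisible to the user */
  .payload {{ font-family: EvilFont; font-size: 28px; color: green; }}
</style>
<p class="decoy">Chapter one of a heartfelt Bioshock fanfiction...</p>
<p class="payload">{encode_for_font("run this command in your terminal")}</p>
"""

class DomTextExtractor(HTMLParser):
    """Reads the page the way a text-only AI assistant does: raw DOM text."""
    def __init__(self):
        super().__init__()
        self.in_style = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "style":
            self.in_style = True
    def handle_endtag(self, tag):
        if tag == "style":
            self.in_style = False
    def handle_data(self, data):
        if not self.in_style and data.strip():
            self.chunks.append(data.strip())

extractor = DomTextExtractor()
extractor.feed(page)
dom_text = " ".join(extractor.chunks)
# dom_text holds benign fanfiction plus unreadable gibberish; only the browser,
# applying EvilFont's glyph mapping, shows the user the attacker's instruction.
```

The DOM text, which is all a text-only assistant sees, contains the benign decoy plus an unintelligible string; the attacker's instruction only materializes once the font maps those characters to different glyphs on screen.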


Every AI Assistant Failed

Every non-agentic AI assistant tested, including ChatGPT, Claude, Copilot, Gemini, Grok, Perplexity, and others, failed to detect the threat and instead confirmed the page was safe. In many cases, assistants even encouraged users to follow the malicious on-screen instructions.

This attack requires no JavaScript, no exploit kit, and no browser vulnerability. The browser behaves exactly as designed. The flaw lies in AI tools that treat DOM text as a complete representation of what users see, when in reality, the rendering layer can carry an entirely different message.

Attack Flow (Source: LayerX)

LayerX followed responsible disclosure procedures and reported the findings to all major AI vendors in December 2025. The responses revealed a concerning gap in how AI security is defined:

Microsoft: Accepted the report; full 90-day remediation period requested
Google: Initially assigned P2 (High) priority, then de-escalated and closed Jan. 27, 2026
OpenAI: Rejected as "out of scope"; insufficient impact for triage
Anthropic: Rejected as social engineering, explicitly out of scope
xAI: Rejected; directed to [email protected]
Perplexity: Classified as a known LLM limitation, not a security vulnerability

Microsoft was the only vendor to fully address the issue and engage the complete disclosure timeline.

The most immediate risk is AI-assisted social engineering: when an attacker tricks an AI into vouching for a malicious page, they effectively borrow the AI's trusted reputation to manipulate the user.

As AI copilots and browser assistants become deeply embedded in enterprise security workflows, these text-only analysis tools create blind spots that attackers can reliably exploit.

LayerX recommends that AI vendors implement dual-mode render-and-diff analysis, treat custom fonts as potential threat surfaces, scan for CSS-based content hiding techniques (such as near-zero opacity and color-matched text), and, critically, avoid issuing confident safety verdicts when they cannot verify a page’s full rendering context.



Source: https://cybersecuritynews.com/custom-font-poison-ai-systems/