DarkMind:定制化LLM中的潜在推理链后门
随着个性化AI需求增长,定制化大语言模型(如GPT)广泛应用,但其推理过程存在安全漏洞。研究提出DarkMind后门攻击,通过利用模型推理链实现隐秘控制,无需注入触发器,实验显示其在多领域有效,强调需加强安全防护。 2025-2-19 06:25:0 Author: arxiv.org(查看原文) 阅读量:6 收藏

View PDF

Abstract:With the growing demand for personalized AI solutions, customized LLMs have become a preferred choice for businesses and individuals, driving the deployment of millions of AI agents across various platforms, e.g., GPT Store hosts over 3 million customized GPTs. Their popularity is partly driven by advanced reasoning capabilities, such as Chain-of-Thought, which enhance their ability to tackle complex tasks. However, their rapid proliferation introduces new vulnerabilities, particularly in reasoning processes that remain largely unexplored. We introduce DarkMind, a novel backdoor attack that exploits the reasoning capabilities of customized LLMs. Designed to remain latent, DarkMind activates within the reasoning chain to covertly alter the final outcome. Unlike existing attacks, it operates without injecting triggers into user queries, making it a more potent threat. We evaluate DarkMind across eight datasets covering arithmetic, commonsense, and symbolic reasoning domains, using five state-of-the-art LLMs with five distinct trigger implementations. Our results demonstrate DarkMind effectiveness across all scenarios, underscoring its impact. Finally, we explore potential defense mechanisms to mitigate its risks, emphasizing the need for stronger security measures.

Submission history

From: Zhen Guo [view email]
[v1] Fri, 24 Jan 2025 21:07:32 UTC (4,346 KB)


文章来源: https://arxiv.org/abs/2501.18617
如有侵权请联系:admin#unsafe.sh