Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models
This paper proposes Auto-RT, a reinforcement-learning-based automated red-teaming framework for detecting safety vulnerabilities in large language models. By introducing mechanisms such as early-terminated exploration and a progressive reward tracking algorithm, Auto-RT substantially improves the efficiency of attack-strategy optimization and the success rate of vulnerability detection, achieving a 16.63% higher success rate and faster detection in experiments.


Abstract: Automated red-teaming has become a crucial approach for uncovering vulnerabilities in large language models (LLMs). However, most existing methods focus on isolated safety flaws, limiting their ability to adapt to dynamic defenses and uncover complex vulnerabilities efficiently. To address this challenge, we propose Auto-RT, a reinforcement learning framework that automatically explores and optimizes complex attack strategies to effectively uncover security vulnerabilities through malicious queries. Specifically, we introduce two key mechanisms to reduce exploration complexity and improve strategy optimization: 1) Early-terminated Exploration, which accelerates exploration by focusing on high-potential attack strategies; and 2) a Progressive Reward Tracking algorithm with intermediate downgrade models, which dynamically refines the search trajectory toward successful vulnerability exploitation. Extensive experiments across diverse LLMs demonstrate that, by significantly improving exploration efficiency and automatically optimizing attack strategies, Auto-RT detects a broader range of vulnerabilities, achieving a faster detection speed and 16.63% higher success rates compared to existing methods.
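To make the two mechanisms concrete, here is a minimal, hypothetical Python sketch of how early-terminated exploration and progressive reward tracking might interact in a search loop. The function names (sample_strategy, downgraded_score, full_attack_score), the thresholds, and the random placeholder rewards are all illustrative assumptions, not the authors' implementation from the paper.

```python
# Hypothetical sketch of Auto-RT's two mechanisms, as described in the abstract.
# Rewards here are random placeholders; in practice they would come from a
# target LLM and a weaker ("downgraded") scoring model.
import random

def sample_strategy(rng):
    """Stand-in for the RL policy proposing a candidate attack strategy."""
    return f"strategy-{rng.randint(0, 9)}"

def downgraded_score(strategy, step, rng):
    """Cheap intermediate reward from a downgraded model (assumption)."""
    return rng.random() * (step + 1) / 5

def full_attack_score(strategy, rng):
    """Expensive end-to-end attack-success reward against the target LLM (assumption)."""
    return rng.random()

def explore(num_candidates=100, max_steps=5, early_stop_threshold=0.3, seed=0):
    rng = random.Random(seed)
    best_strategy, best_reward = None, 0.0
    for _ in range(num_candidates):
        strategy = sample_strategy(rng)
        # Early-terminated exploration: abandon low-potential strategies
        # before paying the full evaluation cost.
        promising = True
        for step in range(max_steps):
            if downgraded_score(strategy, step, rng) < early_stop_threshold:
                promising = False
                break
        if not promising:
            continue
        # Progressive reward tracking: intermediate scores act as a denser
        # signal steering the search toward exploitable strategies.
        reward = full_attack_score(strategy, rng)
        if reward > best_reward:
            best_strategy, best_reward = strategy, reward
    return best_strategy, best_reward

if __name__ == "__main__":
    print(explore())
```

The sketch only conveys the control flow: cheap intermediate scoring gates which candidates receive full evaluation, which is the exploration-efficiency argument the abstract makes.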

Submission history

From: Yanjiang Liu
[v1] Fri, 3 Jan 2025 14:30:14 UTC (3,052 KB)


Source: https://arxiv.org/abs/2501.01830