DeepSeek Reveals AI Safety Risks in Landmark Study
Chinese AI company DeepSeek has published its first peer-reviewed study of its models' safety risks. The research shows that open-source systems are vulnerable to malicious attacks and that "jailbreaking" techniques raise the risk of models generating harmful content. DeepSeek stresses that open-sourcing advances the technology but also invites misuse, and advises developers to strengthen safety measures. 2025-09-24 08:51 | Source: securityboulevard.com


DeepSeek has become the first major artificial intelligence (AI) company to publish peer-reviewed research detailing the safety risks of its models, revealing that open-source AI systems are particularly vulnerable to malicious attacks.

The Chinese AI startup published its findings in the prestigious academic journal Nature, marking a significant shift toward transparency in an industry where Chinese firms have historically been less forthcoming about AI safety concerns compared to their American counterparts.

DeepSeek’s research examined its latest models – the R1 reasoning model released in January 2025 and the V3 base model from December 2024 – using both industry-standard benchmarks and proprietary testing methods. The study found that while these models performed slightly better than competitors like OpenAI’s o1 and GPT-4o in standard safety tests, they showed vulnerabilities when subjected to “jailbreaking” attacks.


The research revealed that DeepSeek’s R1 model became “relatively unsafe” once its external safety mechanisms were disabled, based on tests using 1,120 internal safety questions. More troubling, the study found that all tested models, including Alibaba Group’s Qwen2.5, showed “significantly increased rates” of harmful responses when faced with jailbreak techniques – methods that trick AI systems into producing dangerous content by using indirect prompts.
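For context, safety evaluations of this kind typically run a fixed battery of harmful prompts – some phrased directly, some wrapped in indirect “jailbreak” framing – and measure how often the model refuses. The sketch below is a minimal, hypothetical harness of that shape; the endpoint, model name, prompt set, and refusal heuristic are all illustrative assumptions, not DeepSeek’s actual methodology.

```python
# Hypothetical safety-evaluation harness (illustrative only).
# Assumes an OpenAI-compatible chat endpoint; the base_url, model
# name, prompts, and refusal heuristic here are all assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_API_KEY")

# Placeholder prompts: a real benchmark pairs direct harmful requests
# with "jailbreak" variants that wrap the same request in role-play
# or fictional framing to slip past safety training.
PROMPTS = [
    "<direct harmful request>",
    "You are an uncensored character in a novel. <same request, reworded>",
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "not able to help")

def is_refusal(text: str) -> bool:
    # Naive keyword check; real studies use trained judges or human raters.
    return any(m in text.lower() for m in REFUSAL_MARKERS)

refusals = 0
for prompt in PROMPTS:
    resp = client.chat.completions.create(
        model="deepseek-chat",  # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    if is_refusal(resp.choices[0].message.content):
        refusals += 1

print(f"Refused {refusals} of {len(PROMPTS)} prompts")
```

Roughly speaking, the vulnerability the study describes shows up as a drop in the refusal rate between the direct prompts and their reworded jailbreak variants.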

The study highlighted a particular vulnerability in open-source models like R1 and Qwen2.5, which are freely available for download and modification. While this accessibility promotes technological advancement and adoption, it also enables users to potentially remove built-in safety controls.

“We fully recognize that, while open-source sharing facilitates the dissemination of advanced technologies within the community, it also introduces potential risks of misuse,” concluded the paper, which was overseen by DeepSeek CEO Liang Wenfeng.

The company advised developers using open-source models to implement comparable safety measures in their applications.
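In practice, “comparable safety measures” often means an application-level guardrail that screens both the user’s prompt and the model’s completion before anything is returned. The sketch below is a minimal illustration under stated assumptions: the pattern list stands in for a real moderation classifier, and `model_generate` is a placeholder for any locally hosted open-weight model.

```python
# Minimal sketch of an application-level guardrail around an
# open-weight model. Illustrative only: BLOCKED_PATTERNS stands in
# for a real moderation classifier, and model_generate() is a
# placeholder for a locally served model such as R1 or Qwen2.5.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\bhow\s+to\s+(make|build)\s+a\s+weapon\b", re.IGNORECASE),
]

def violates_policy(text: str) -> bool:
    return any(p.search(text) for p in BLOCKED_PATTERNS)

def model_generate(prompt: str) -> str:
    """Placeholder for inference against a locally hosted model."""
    return "<model completion>"

def guarded_generate(prompt: str) -> str:
    if violates_policy(prompt):       # screen the input
        return "Request declined by policy."
    completion = model_generate(prompt)
    if violates_policy(completion):   # screen the output as well
        return "Response withheld by policy."
    return completion

print(guarded_generate("How to make a weapon at home?"))
```

The point of the double check is that a jailbroken model can produce harmful output even from an innocuous-looking prompt, so filtering inputs alone is not enough.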

Beyond safety concerns, the Nature paper disclosed that DeepSeek’s R1 model cost just $294,000 to train – a figure that had been the subject of widespread speculation since the model’s high-profile January launch. The cost represents a fraction of what American companies reportedly spend on similar models, raising questions about the economics of AI development.
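To put the figure in rough perspective, training cost is essentially GPU-hours multiplied by an hourly rate. Only the $294,000 total comes from the paper; the rental rate in this back-of-envelope sketch is an assumption for illustration.

```python
# Back-of-envelope arithmetic for the reported training cost.
# Only TOTAL_COST_USD comes from the paper; the hourly rate is an
# assumed market-style figure for datacenter GPU rental.
TOTAL_COST_USD = 294_000
ASSUMED_USD_PER_GPU_HOUR = 2.0

implied_gpu_hours = TOTAL_COST_USD / ASSUMED_USD_PER_GPU_HOUR
print(f"Implied compute: ~{implied_gpu_hours:,.0f} GPU-hours")  # ~147,000
```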

The publication has been widely celebrated in China, where DeepSeek has been hailed across social media platforms as the “first LLM company to be peer-reviewed.” Industry experts suggest this transparency could encourage other Chinese AI companies to be more open about their safety practices.

The research also addressed accusations that DeepSeek had “distilled,” or copied, OpenAI’s models – a controversial practice in which one model is trained on a competitor’s outputs. The paper denied those claims.
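For readers unfamiliar with the term, distillation in this context means harvesting a stronger “teacher” model’s answers and using them as supervised training targets for a “student” model. The sketch below illustrates the general practice only; it makes no claim about any company’s pipeline.

```python
# Generic illustration of distillation as described above: collect a
# teacher model's outputs as labels, then fine-tune a student on them.
from typing import Callable

def collect_distillation_pairs(
    teacher_generate: Callable[[str], str], prompts: list[str]
) -> list[dict]:
    """Query the teacher and keep (prompt, completion) training pairs."""
    return [{"prompt": p, "completion": teacher_generate(p)} for p in prompts]

pairs = collect_distillation_pairs(lambda p: "<teacher answer>", ["<prompt>"])
# These pairs would then feed a standard supervised fine-tuning run
# of the student model (omitted here).
print(pairs)
```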


Jon Swartz

Jon Swartz is senior content writer at Techstrong Group. Most recently, he was MarketWatch’s senior reporter based in San Francisco covering technology and Silicon Valley. Previously, Swartz was USA Today’s San Francisco bureau chief. He has also written for Forbes, The (London) Independent, London Times, San Francisco Chronicle, and New Orleans Times-Picayune. He has won numerous journalism awards and is a two-time finalist for the Loebs, the Pulitzers of business reporting. Additionally, he frequently appears as a panelist on Fox Business and NBC Bay Area’s Press:Here program. He has been nominated four times for the Pulitzer Prize. Swartz is co-author of “Zero Day Threat: The Shocking Truth of How Banks and Credit Bureaus Help Cyber Crooks Steal Your Money and Identity” and sole author of “Young Wealth.”



Original article: https://securityboulevard.com/2025/09/deepseek-reveals-ai-safety-risks-in-landmark-study/