The Illusion of Scale: Why LLMs Are Vulnerable to Data Poisoning, Regardless of Size
The article explores the importance of open-source automated auditing in AI security, emphasizing its role in detecting and defending against adversarial examples, data poisoning, and backdoor attacks in order to improve model reliability and safety. 2025-10-18 16:58:08 Author: hackernoon.com (view original) Reads: 17

by Anthony Laneau @hacker-Antho

Managing Director @ VML | Founder @ Fourth-Mind

October 18th, 2025


Source: https://hackernoon.com/the-illusion-of-scale-why-llms-are-vulnerable-to-data-poisoning-regardless-of-size?source=rss