Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection
Feb 08, 2025 | thehackernews.com

Artificial Intelligence / Supply Chain Security


Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of "broken" pickle files to evade detection.

"The pickle files extracted from the mentioned PyTorch archives revealed the malicious Python content at the beginning of the file," ReversingLabs researcher Karlo Zanki said in a report shared with The Hacker News. "In both cases, the malicious payload was a typical platform-aware reverse shell that connects to a hard-coded IP address."


The approach has been dubbed nullifAI, as it involves clear-cut attempts to sidestep existing safeguards put in place to identify malicious models. The Hugging Face repositories are listed below -

  • glockr1/ballr7
  • who-r-u0000/0000000000000000000000000000000000000

It's believed that the models are more of a proof-of-concept (PoC) than an active supply chain attack scenario.

The pickle serialization format, commonly used for distributing ML models, has been repeatedly found to be a security risk, as it offers a way to execute arbitrary code as soon as a pickle file is loaded and deserialized.
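The risk stems from the fact that unpickling can invoke arbitrary callables. The following is a minimal sketch of that general mechanism, not the payload found in these models; the class name and the echoed command are purely illustrative.

```python
import os
import pickle


class EvilPayload:
    # __reduce__ tells pickle how to "reconstruct" the object: it returns
    # a callable plus arguments, which the unpickler executes directly.
    def __reduce__(self):
        return (os.system, ("echo 'code ran during unpickling'",))


data = pickle.dumps(EvilPayload())

# Merely loading the bytes runs the embedded command -- no call to the
# reconstructed object is needed.
pickle.loads(data)
```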


The two models detected by the cybersecurity company are stored in the PyTorch format, which is essentially a compressed pickle file. While PyTorch uses the ZIP format by default, the identified models were compressed using the 7z format.

Consequently, this behavior made it possible for the models to fly under the radar and avoid getting flagged as malicious by Picklescan, a tool used by Hugging Face to detect suspicious Pickle files.
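Because a genuine torch.save() checkpoint is a ZIP archive, one defensive heuristic is to verify the container format before any pickle scanning or loading takes place. The sketch below uses Python's standard zipfile module for that check; the file name is hypothetical.

```python
import os
import zipfile

path = "suspect_model.bin"  # hypothetical file name

if not os.path.exists(path):
    print(f"{path} not found; nothing to check.")
elif zipfile.is_zipfile(path):
    print("Container is a standard ZIP, as a default PyTorch checkpoint would be.")
else:
    # A 7z-compressed "model" fails this check even though it may still
    # carry a pickle payload inside.
    print("Unexpected container format; treat the file as suspicious.")
```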

"An interesting thing about this Pickle file is that the object serialization — the purpose of the Pickle file — breaks shortly after the malicious payload is executed, resulting in the failure of the object's decompilation," Zanki said.


Further analysis has revealed that such broken pickle files can still be partially deserialized, owing to a discrepancy between how Picklescan validates pickle files and how deserialization actually proceeds, causing the malicious code to be executed even though the tool throws an error message. The open-source utility has since been updated to rectify this bug.

"The explanation for this behavior is that the object deserialization is performed on Pickle files sequentially," Zanki noted.

"Pickle opcodes are executed as they are encountered, and until all opcodes are executed or a broken instruction is encountered. In the case of the discovered model, since the malicious payload is inserted at the beginning of the Pickle stream, execution of the model wouldn't be detected as unsafe by Hugging Face's existing security scanning tools."



Source: https://thehackernews.com/2025/02/malicious-ml-models-found-on-hugging.html