New malware uses AI to adapt during attacks, report finds
2025-11-05 14:16:09 Author: therecord.media

State-backed hackers are for the first time deploying malware that uses large language models during execution, allowing them to dynamically generate malicious scripts and evade detection, according to new research.

Although cybersecurity experts have observed hackers using AI in recent years for tasks such as expanding the number of victims they reach, researchers at Google said Wednesday that they recently observed malware "that employed AI capabilities mid-execution to dynamically alter the malware's behavior."

The trend should be considered a “significant step towards more autonomous and adaptive malware,” the report says.

In June, researchers found experimental dropper malware tracked as PROMPTFLUX that prompts an LLM to rewrite its own source code in order to evade detection.

PROMPTFLUX, which Google said it has taken steps to disrupt, appears to be in a testing phase and does not have the ability to compromise victim networks or devices, according to the report.
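The self-rewriting loop the report attributes to PROMPTFLUX can be sketched at a high level. The snippet below is a benign, minimal illustration only, with the model call stubbed out: `fake_llm_rewrite`, the identifier rename, and the hashing step are assumptions for demonstration, not details of the actual sample.

```python
import hashlib


def fake_llm_rewrite(source: str) -> str:
    """Stand-in for an LLM call that returns a functionally
    equivalent but textually different version of the script.
    Here we simply rename an identifier to simulate mutation."""
    return source.replace("payload_stage", "stage_payload")


def mutate(source: str) -> tuple[str, str, str]:
    """One regeneration cycle: hash the script, ask the (stubbed)
    model to rewrite it, hash the result. A changed hash is what
    defeats naive signature matching on the on-disk artifact."""
    before = hashlib.sha256(source.encode()).hexdigest()
    rewritten = fake_llm_rewrite(source)
    after = hashlib.sha256(rewritten.encode()).hexdigest()
    return rewritten, before, after


original = "def payload_stage():\n    return 'noop'\n"
rewritten, h_before, h_after = mutate(original)
print(h_before != h_after)  # prints True: the rewrite no longer matches its old hash
```

The point of the sketch is only the loop structure: each cycle produces a textually fresh file, so any detection rule keyed to a fixed byte pattern or hash has to be re-derived after every rewrite.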

Another new malware, tracked as PROMPTSTEAL, was used in June by Russia-linked APT28 (also known as BlueDelta, Fancy Bear and FROZENLAKE) against Ukrainian targets, and utilized LLMs to generate commands rather than having them hard-coded into the malware. The incident marked Google's "first observation of malware querying a LLM deployed in live operations," the report said.
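The distinction the report draws, between commands hard-coded into the binary and commands generated by a model at runtime, can be illustrated minimally. This is a hedged sketch, not PROMPTSTEAL itself: the `query_model` stub, the task prompt, and the canned responses are invented for illustration, and the "commands" are only returned as strings, never executed.

```python
# Hard-coded approach: the command list ships inside the binary,
# so static analysis can extract it directly.
HARDCODED_COMMANDS = ["whoami", "hostname"]


def query_model(prompt: str) -> str:
    """Stub standing in for a live LLM query. A real sample would
    send the prompt to a hosted model and receive fresh text at
    runtime, leaving no fixed command strings in the binary for
    scanners to match."""
    canned = {
        "list commands to collect basic host info": "whoami\nhostname",
    }
    return canned[prompt]


def generated_commands(task: str) -> list[str]:
    """Model-driven approach: the commands exist only at runtime,
    as output of the query, rather than as embedded constants."""
    return query_model(task).splitlines()


print(generated_commands("list commands to collect basic host info"))
# prints ['whoami', 'hostname']
```

The design difference is what matters for defenders: with hard-coded commands, a static string scan of the sample reveals intent; with model-generated commands, the observable artifact is a network request to an LLM endpoint, shifting detection toward runtime and traffic analysis.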

While researchers called these methods experimental, they said they show how threats are changing and how threat actors can “potentially integrate AI capabilities into future intrusion activity.” 

“Attackers are moving beyond ‘vibe coding’ and the baseline observed in 2024 of using AI tools for technical support,” the report says.

The marketplace for AI tools “purpose-built” to fuel criminal behavior is growing, the report added. Low-level criminals without a lot of technical expertise or money can now find effective tools in underground forums for enhancing the complexity and reach of attacks, according to the report.

“Many underground forum advertisements mirrored language comparable to traditional marketing of legitimate AI models, citing the need to improve the efficiency of workflows and effort while simultaneously offering guidance for prospective customers interested in their offerings,” the report says.



Source: https://therecord.media/new-malware-uses-ai-to-adapt