State-backed hackers are for the first time deploying malware that uses large language models during execution, allowing them to dynamically generate malicious scripts and evade detection, according to new research.

Although cybersecurity experts have observed hackers use AI in recent years to do things like increase the number of victims they reach, researchers at Google said Wednesday that they recently observed malware "that employed AI capabilities mid-execution to dynamically alter the malware's behavior." The trend should be considered a "significant step towards more autonomous and adaptive malware," the report says.

In June, researchers found experimental dropper malware, tracked as PROMPTFLUX, that prompts an LLM to rewrite its own source code in order to evade detection. PROMPTFLUX, which Google said it has taken steps to disrupt, appears to be in a testing phase and does not have the ability to compromise victim networks or devices, according to the report.

Another new malware family, tracked as PROMPTSTEAL, was used in June by Russia-linked APT28 (also known as BlueDelta, Fancy Bear and FROZENLAKE) against Ukrainian targets. It used LLMs to generate commands rather than having them hard-coded into the malware. The incident marked Google's "first observation of malware querying a LLM deployed in live operations," the report said.

While researchers called these methods experimental, they said they show how threats are changing and how threat actors can "potentially integrate AI capabilities into future intrusion activity." "Attackers are moving beyond 'vibe coding' and the baseline observed in 2024 of using AI tools for technical support," the report says.

The marketplace for AI tools "purpose-built" to fuel criminal behavior is also growing, the report added. Low-level criminals without much technical expertise or money can now find effective tools on underground forums for increasing the complexity and reach of their attacks, according to the report.
“Many underground forum advertisements mirrored language comparable to traditional marketing of legitimate AI models, citing the need to improve the efficiency of workflows and effort while simultaneously offering guidance for prospective customers interested in their offerings,” the report says.