CrowdStrike in its latest annual Global Threat Report detailed the growing influence of AI on cybercrime and the expanding attack surface, noting that AI-enabled threat actors grew their operations by 89% year over year in 2025.
“Adversaries of all motivations utilized AI technology throughout 2025 to accelerate and optimize their existing techniques,” Adam Meyers, CrowdStrike’s senior vice president of counter adversary operations, wrote in a blog post. “They explored its use in attack types such as social engineering and information operations, proving their growing proficiency with AI tools. … In addition to using AI tools, adversaries are targeting the AI systems underpinning the modern enterprise.”
Cybercriminal use of AI was on display again last week, when researchers from Israeli security startup Gambit Security said an unknown hacker used Anthropic’s Claude AI and OpenAI’s ChatGPT chatbots to find and exploit weaknesses in the networks of Mexico’s government and steal as much as 150GB of data, including 195 million taxpayer records, voting information, government employee credentials, and civil registry files.
The intrusion began in December and ran for about a month, according to a report by Bloomberg.
Citing Gambit research, the news organization said the attacker wrote prompts in Spanish for Claude, jailbreaking the chatbot by telling it that they were working toward a bug bounty and that it should act like a hacker to find vulnerabilities in the Mexican government’s networks. The AI tool also was told to write computer scripts to exploit the weaknesses and to find ways to automate the data theft process.
According to Gambit, Claude initially resisted the prompts through guardrails in the model and pegged the activity as malicious, but eventually the hacker was able to bypass those defenses. When there were issues with Claude or more information was needed, the hacker reportedly used ChatGPT.
The attacker also used the OpenAI chatbot to find information about moving laterally through computer networks, understand which credentials were required to access some systems, and determine how likely it was that the hacking operation would be detected.
Curtis Simpson, chief strategy officer at Gambit – which was founded in 2024 and last week said it emerged from stealth with $61 million in seed and Series A funding – told Bloomberg that “in total, [ChatGPT] produced thousands of detailed reports that included ready-to-execute plans, telling the human operator exactly which internal targets to attack next and what credentials to use.”
Anthropic has investigated the claims, disrupted the activity, and banned all of the accounts involved, a company spokesperson told Engadget, adding that the company’s latest model, Claude Opus 4.6, includes tools to disrupt this kind of activity.
Gambit researchers said the hacker exploited at least 20 security flaws found throughout Mexico’s government infrastructure, including with its federal tax authority, national electoral institute, and state governments in Jalisco, Michoacán, and Tamaulipas.
They uncovered the attack on Mexico’s government while testing new threat hunting techniques to see what hackers were doing, according to Bloomberg.
The extent to which bad actors are using AI in their operations – either by abusing chatbots and other tools to gain access to IT systems or using AI tools to build and run their malicious activities – is getting increasing attention.
Amazon Threat Intelligence analysts said last month that a Russian-speaking threat actor used multiple commercially available GenAI services to compromise more than 600 of Fortinet’s FortiGate network appliances across more than 55 countries this year.
That followed earlier reports, including Anthropic saying in November 2025 that a Chinese nation-state group used its Claude Code developer AI model to run an espionage campaign. In addition, Check Point researchers detailed how a single actor used an AI model to create advanced malware called “VoidLink,” and Darktrace analysts found a malware sample in their honeypot network that they said was generated entirely by AI.
“This reality is changing all the game rules we have ever known,” Gambit co-founder and CEO Alon Gromakov told Bloomberg.
CrowdStrike’s Meyers echoed the sentiment.
“As AI is embedded into development pipelines, SaaS platforms, and operational workflows, AI systems become part of the attack surface,” he wrote. “In 2025, adversaries exploited legitimate GenAI tools at more than 90 organizations by injecting malicious prompts to generate commands for stealing credentials and cryptocurrency.”
In addition, Meyers wrote, “they also exploited vulnerabilities in AI development platforms to establish persistence and deploy ransomware, and published malicious AI servers impersonating trusted services to intercept sensitive data.”