The FBI’s Atlanta Field Office joined forces with law enforcement authorities in Indonesia to put the kibosh on a global phishing operation that to date has resulted in $20 million in attempted fraud.
The cybercriminals used the full-service W3LL phishing kit to spin up convincing replicas of legitimate login pages. When targets entered their usernames and passwords, the kit snagged the credentials, which the miscreants then used to access accounts. The tool can also grab session data, offering a path around multifactor authentication. All for the bargain-basement price of $500.
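The session-data angle is what makes kits like W3LL especially dangerous: once an attacker replays a victim’s already-authenticated session token, the MFA challenge never fires. A minimal sketch of that dynamic, using a hypothetical toy service (the class, endpoint names, and token format are illustrative assumptions, not details of W3LL itself):

```python
# Toy illustration of why stolen session data defeats MFA.
# DemoAuthServer and its methods are hypothetical, for illustration only.
import secrets


class DemoAuthServer:
    """Login requires a password AND an MFA code, but any request
    bearing a valid session token skips both checks entirely."""

    def __init__(self):
        self.users = {"victim": {"password": "hunter2", "mfa": "123456"}}
        self.sessions = {}  # token -> username

    def login(self, user, password, mfa_code):
        record = self.users.get(user)
        if not record or record["password"] != password or record["mfa"] != mfa_code:
            return None  # a phished password alone is not enough
        token = secrets.token_hex(16)
        self.sessions[token] = user
        return token

    def get_inbox(self, token):
        # No password or MFA check here -- the token IS the identity.
        user = self.sessions.get(token)
        if user is None:
            return "401 Unauthorized"
        return f"inbox for {user}"


server = DemoAuthServer()

# The victim completes a full login, including the MFA step.
token = server.login("victim", "hunter2", "123456")

# An attacker who captured only the password still fails the MFA gate:
assert server.login("victim", "hunter2", "000000") is None

# But an attacker who captured the *session token* walks straight in:
print(server.get_inbox(token))  # -> inbox for victim
```

This is why defenses such as short session lifetimes and binding tokens to a device or IP matter: they limit how long, and from where, a stolen token can be replayed.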
“The true danger of many phishing schemes lies in their ability to grant attackers access to credentials, enabling them to masquerade as trusted insiders,” says Rex Booth, CISO at SailPoint.
And in the four years leading up to 2023, the kit had a considerable number of takers on the W3LLSTORE marketplace, ultimately resulting in the sale of more than 25,000 compromised accounts and the targeting of more than 17,000 victims in the state of Georgia and globally, the Times of India reported. That store shuttered in 2023, but that didn’t stop the W3LL phishing kit from proliferating; it was simply rebranded and sold through encrypted messaging apps.
The kit’s developer, referred to by the FBI as “G.L.,” was picked up in Indonesia by authorities. The takedown represents the first time the FBI and Indonesian authorities have worked together in pursuit of cybercriminals.
AI has upped the phishing game. “Traditional phishing emails used to carry clear warning signs such as poor grammar, inconsistent branding, or unusual formatting,” says Nicole Carignan, senior vice president, security & AI strategy, and Field CISO at Darktrace.
But these days, “AI has removed many of those indicators,” she says, explaining that “attackers can generate highly polished, brand-consistent communications that closely mirror legitimate organizations, and even tailor messages using publicly available or previously compromised data.”
With AI, adversaries can “operate with greater speed and precision” and “campaigns can be created, tested, and refined in real time, producing large volumes of highly targeted messages that are far more likely to succeed,” Carignan says.
“As a result, phishing is no longer just a volume-based threat; it’s become a quality and personalization problem, making it increasingly difficult to detect with the human eye alone,” she adds.
Because AI makes phishing campaigns more sophisticated and harder to detect, Booth says, it’s “imperative for users to adopt robust identity security best practices, including changing passwords frequently and enabling multi-factor authentication, and for organizations to prioritize identity as the new control plane.”
Noting that “people are trained to obey authority, and deepfake and callback phishing attacks are designed to push people into bypassing normal checks,” Hoxhunt Co-founder and CEO Mika Aalto urges organizations “to normalize ‘see something, say something’ behavior and make verification frictionless. Behavioral monitoring tools can help flag unusual actions, but the real challenge is cultural: giving employees confidence that slowing down to verify is expected, supported, and reinforced through Human Risk Management practices.”
Since phishing has evolved beyond static text, “awareness must do the same,” Aalto says, because the “entire concept of ‘security awareness training’ is outdated if it stops at awareness.”
Aalto believes the next generation of defense is “behavioral, not informational” with defenders “moving from telling people what to do to shaping what they actually do, in real time. We are building an innate set of security reflexes and instincts.”
Security pros believe that, unfortunately, the worst is yet to come, making speed an imperative. “We’ve been waiting for this offensive disruption from AI for a while now,” says Booth. “Attacks at scale and superhuman speed are the most obvious first step.”
At least for now, many campaigns “require human intervention to execute,” he says, stressing that the “scarier scenario is when adversary AI starts running rampant through your enterprise without the need for action by the victim.”