Bionic Hackbots Rise, Powerful Partners to Humans
人工智能和自动化正在重塑网络安全领域,研究人员借助AI工具加速漏洞发现和攻击面分析。报告指出,70%的研究人员使用AI工具,其应用范围扩大270%,但防御资源仍显不足。攻击者利用AI技术如prompt injection发起快速威胁,防御需结合持续安全策略、红蓝队测试及实时监控以应对挑战。 2025-10-27 09:48:37 Author: securityboulevard.com(查看原文) 阅读量:8 收藏

The rapid rise of AI and automation has helped create a new breed of researcher — the bionic hacker. Think of a Steve Austen-type researcher, only instead of body parts replaced by machines, human creativity is being augmented by automation 

These bionic hackers use “AI as a catalyst, accelerating recon, triage, scaling pattern recognition, and probing complex attack surfaces faster than ever before,” according to HackerOne’s 9th Edition Hacker-Powered Security Report 2025/2025 

Cruise Con 2025

The report draws on HackerOne’s more than two million-strong researcher community. HackerOne found “the gap between traditional automation and human testing is closing; not because humans are being replaced, but because they are evolving with AI.” 

That’s a good thing, because the report also found bad actors are exploiting “vulnerabilities at machine speed, often before defenders even recognize the threat.” 

The report’s findings underscore the profound impact of AI on cybersecurity — both for defenders and attackers. Seven in ten AI-native researchers use AI tools, while AI adoption has expanded 270%. The research showed that 1,121 distinct programs included AI in their scope or received a valid AI vulnerability and 41 percent already are testing AI assets as part of their work. Perhaps that’s why more than half of the respondents (58%). have upskilled in AI/ML.  

That’s all well and good, but they may want to ramp up those efforts, since while 84% of CISOs oversee AI security and data privacy, plenty of them say they lack the resources to effectively manage the risks. They will likely pay for lapses in security, as 13% of the organizations had an AI-related incident in 2025 — and 97% of those didn’t have the proper AI access controls. 

They might be getting some help from autonomous hackbots, though — not only were 560 valid reports noted, the researchers found a 49% validity rate, which they say “shows bots excel at surface-level flaws while humans still uncover higher-order issues.” 

Adversaries clearly are using AI to gain the upper hand — HackerOne researchers say that prompt injection spikes are the fastest-growing AI threats, with valid reports shooting up 540%. As further evidence that defenders are fighting a difficult battle, HackerOne paid out $81 million in bounties in 2025, a record. And $2.9 billion in breach losses were avoided, also a record. 

HackerOne noted a shift in AI “from isolated models to a connected ecosystem” where “new protocols and architectures now let systems operate with real-time context, coordinate across tools, and embed directly into security workflows.”  

That’s a shift that the company says is “reshaping both sides of the HackerOne community: Customers are rapidly bringing AI assets into scope, and researchers are growing the skills to test and secure them.” 

Hackone’s Security report “validates what we are seeing across the industry: AI risks have rapidly moved from a watch list item to a front-line security concern, especially when it comes to data security and misuse,” says Diana Kelley, CISO at Noma Security.  

“The 210% surge in valid AI-related vulnerability reports confirms the speed of this threat evolution and the fact that 78% of customers now cite growing concern over AI underscores this,” she says. 

Even more crucial, “65% of the AI vulnerabilities are related to complex AI security issues like prompt injection, exposed endpoints, and model manipulation,” Kelley explains.  

“We are also witnessing the rise of  AI-powered hackers and attackers, humans that leverage  AI-powered autonomous and agentic tools,” she says. 

To manage the emerging threat landscape, blue teams must take “a mature, continuous security approach.” Robust blue team programs complement current trends and the value of ethical hackers, “starting with a full inventory of all AI systems, including agentic components as a baseline for governance and risk management,” Kelley says.  

She suggests that as vulnerabilities increase, adopting an AI Bill of Materials (AIBOM) “is the foundation for effective supply chain security and AI vulnerability management.”  

And she noted that “robust red team and pre-deployment testing remain vital as does runtime monitoring and logging, which round out the approach by providing the visibility to detect and in some cases even block attacks during use.” 

Recent Articles By Author


文章来源: https://securityboulevard.com/2025/10/bionic-hackbots-rise-powerful-partners-to-humans/
如有侵权请联系:admin#unsafe.sh