Artificial intelligence (AI) is emerging as a top concern in the cybersecurity world, with 48% of respondents identifying it as the most significant security risk facing their organizations, according to a HackerOne survey of 500 security professionals.
The report highlights growing worries about the impact of AI on cybersecurity, from data leaks to an anticipated escalation of threats in the digital landscape.
Security professionals’ biggest concerns center on the leaking of training data (35%), unauthorized use of AI within their organizations (33%) and external hacking of AI models (32%).
These vulnerabilities not only jeopardize sensitive data but also expose organizations to sophisticated attacks that exploit AI systems in unexpected ways.
As AI becomes more embedded in enterprise operations, these risks are expected to rise.
The survey also underscores the belief among 58% of respondents that AI may fuel an “arms race” between cybercriminals and security teams, as both sides adopt advanced AI tactics.
This evolving dynamic could lead to more aggressive, automated attacks as cybercriminals leverage AI to exploit vulnerabilities faster than traditional defenses can respond.
Josh Jacobson, director of professional services at HackerOne, said AI is certainly fueling an arms race between security teams and cybercriminals, but it’s also becoming a powerful tool for defenders.
“AI helps automate tedious tasks, analyze large volumes of data and provide deeper insights into potential vulnerabilities, giving security teams more time to focus on strategic issues,” he said.
While attackers are using AI to automate phishing or exploit vulnerabilities, AI can help level the playing field by making defense efforts more efficient and proactive.
“Whoever scales fastest wins,” Jacobson said. “Quick and advanced analysis of vulnerabilities can better inform further fixes to harden assets.”
In contrast, he pointed out that threat actors are limited to publicly available data, which is sparse, or stolen data, which is harder to access and trust.
Julian Brownlow Davies, vice president of advanced services at Bugcrowd, agreed that AI presents both immense potential and significant risk.
“One of our biggest concerns is how AI can be weaponized for automated cyberattacks,” he explained. “We’re already seeing AI-driven malware that adapts to evade traditional security measures, and the stakes are only going to rise from here.”
Another critical issue is data privacy: AI models are trained on massive datasets, often containing sensitive information.
“If that data is exposed or breached, it can be devastating. And then there’s the question of bias in decision-making,” Davies said.
He cautioned that AI can make the wrong decisions if trained on biased or incomplete data, which could lead to real-world harm or reputational damage.
“The lack of transparency in many AI systems is a growing concern,” he added. “If we don’t fully understand how decisions are made, it becomes incredibly difficult to assess risk and maintain compliance with evolving regulations.”
Narayana Pappu, CEO at Zendata, said he expects more internal deployment of AI systems, either in virtual private clouds (VPCs) or on-premises.
“Increased use of open source AI models,” he said. “Small language models and domain-specific AI solutions are some of the areas where we can expect activity within the enterprise AI transformation.”
To improve AI security within organizations and guard against both unauthorized internal use and external attacks on AI models, Pappu recommends going beyond role-based access controls.
“Having data fingerprinting and testing ensures training data is hermetically sealed to the entity it belongs to and that there is no leakage,” he said. “This would greatly help manage additional risks.”
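To make the fingerprinting idea concrete, the sketch below shows one minimal way such a check could work, assuming a simple setup in which each training record is content-hashed and tagged with the entity (tenant) it belongs to, so any record that later appears in another entity's data can be flagged. The FingerprintRegistry class, its method names and the sample records are hypothetical illustrations, not Zendata's or HackerOne's actual implementation.

```python
import hashlib

# Hypothetical sketch of record-level data fingerprinting: each training
# record is hashed and mapped to its owning entity, and a leakage test
# flags any record whose fingerprint belongs to a different entity.
# All names here are illustrative, not a vendor's real API.


def fingerprint(record: str) -> str:
    """Stable content hash used as the record's fingerprint."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()


class FingerprintRegistry:
    def __init__(self) -> None:
        # fingerprint -> owning entity
        self._owners: dict[str, str] = {}

    def register(self, entity: str, records: list[str]) -> None:
        """Record which entity each training record belongs to."""
        for record in records:
            self._owners[fingerprint(record)] = entity

    def leaks(self, entity: str, candidates: list[str]) -> list[str]:
        """Return candidate records owned by a *different* entity."""
        leaked = []
        for record in candidates:
            owner = self._owners.get(fingerprint(record))
            if owner is not None and owner != entity:
                leaked.append(record)
        return leaked


# Example leakage test: tenant B's training set accidentally contains
# a record fingerprinted to tenant A, so it is flagged.
registry = FingerprintRegistry()
registry.register("tenant-a", ["alice@example.com,ssn=123-45-6789"])
registry.register("tenant-b", ["bob@example.com,ssn=987-65-4321"])

suspect = registry.leaks("tenant-b", ["alice@example.com,ssn=123-45-6789"])
assert suspect, "cross-tenant record should be flagged as leakage"
```

Note that exact hashing only catches verbatim reuse; flagging paraphrased or transformed records would require fuzzy or semantic fingerprints, though the ownership check itself would stay the same.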