Cybersecurity leaders are increasingly worried that the next major insider threat may not be a disgruntled employee but an overprivileged machine. Jack Cherkas, CISO at Syntax, has warned that if companies don’t put identity controls, activity tracking, and data provenance safeguards in place, AI agents will end up being the biggest insider threat of all.
Wendi Whitmore at Palo Alto Networks has gone one step further and just calls AI agents “the new insider threat,” which is a pretty bleak assessment given how quickly companies are rushing to roll out these autonomous tools.
Modern IT environments contain far more machine identities than they used to: service accounts, bots, IoT devices, AI workflows, and all sorts of other automated processes, all of which need access to data and systems. As one security expert has pointed out, a single Kubernetes cluster can create hundreds of service accounts as a matter of course, while CI/CD pipelines, serverless functions, and RPA tools all come with their own “throwaway” identities, often with a lot of privileges attached.
To put it another way, your infrastructure is chock full of software “workers” with more rights than the average user. The kicker is that many of these identities are effectively invisible: no real owner, no “use by” date, and nobody keeping an eye on them. Over time this creates a whole new attack surface, and any one of these unattended or over-privileged agents can become a backdoor just waiting to be exploited.
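The "no owner, no expiry, no monitoring" problem lends itself to a simple inventory audit. Here is a minimal sketch, assuming a hypothetical identity record format (the field names and 90-day dormancy threshold are illustrative, not from any particular product):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class MachineIdentity:
    name: str
    owner: Optional[str]       # responsible human/team, if any
    expires: Optional[date]    # "use by" date, if one was ever set
    last_used: Optional[date]  # last observed activity

def audit(identities, today):
    """Flag identities with no owner, no expiry, or long-dormant access."""
    findings = []
    for ident in identities:
        if ident.owner is None:
            findings.append((ident.name, "no owner"))
        if ident.expires is None:
            findings.append((ident.name, "no expiry"))
        elif ident.expires < today:
            findings.append((ident.name, "expired but still active"))
        if ident.last_used and (today - ident.last_used).days > 90:
            findings.append((ident.name, "dormant > 90 days"))
    return findings
```

Even a toy audit like this tends to surface a surprising number of orphaned service accounts the first time it is run against a real inventory.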
For example, AI-driven code assistants and copilots are now showing up in just about every corner of development and ops. Companies hand these agents API tokens, cloud keys, and sometimes the keys to the castle so they can automate all sorts of tasks.
And if one of those credentials lands in the wrong hands, the attacker gets the whole shebang.
For instance, in July 2025 a supply-chain attack slipped malicious code into the Amazon Q extension for Visual Studio Code (an AI coding assistant) via a malicious GitHub pull request.
The malicious update instructed the assistant to essentially wipe local files on the machine and the cloud infrastructure too, deleting EC2 instances, S3 buckets, and IAM users along the way. Thankfully, Amazon sorted out the problem quickly, but the incident shows just how big the risk is when a program is trusted with powerful controls: a single slip-up or compromised token can have far-reaching, disastrous consequences.
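One common guardrail against exactly this failure mode is refusing to let an agent run destructive operations without a human in the loop. Here is a hypothetical sketch (the action names, word list, and `execute` function are illustrative, not from any real agent framework):

```python
# Illustrative word list; a real system would classify actions by API, not keywords.
DESTRUCTIVE_WORDS = {"delete", "terminate", "wipe"}

def execute(action: str, target: str, approved: bool = False) -> str:
    """Refuse destructive actions unless a human has explicitly approved them."""
    if any(word in action.lower() for word in DESTRUCTIVE_WORDS) and not approved:
        return f"BLOCKED: '{action} {target}' needs human approval"
    return f"ran: {action} {target}"
```

With a gate like this in place, the wiped-infrastructure scenario above would have stalled at an approval prompt instead of executing silently.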
It’s a similar deal with traditional RPA (Robotic Process Automation) bots. These bots often need broad powers to log in to systems, click through screens, and move data between different parts of the network, so they frequently get set up with almost limitless admin rights. One expert has warned that when a compromised RPA bot is running around with full admin rights, the attacker inherits a whole lot of access to your systems.
In the real world, all it usually takes to open the floodgates to the whole network is a single stolen bot credential, and suddenly you’ve got malware on the loose, databases being harvested, and backdoors potentially planted all over the system. To make matters worse, many of these bot accounts rely on hard-coded or shared passwords, so an attacker can reuse what they’ve stolen from one bot to get into all the rest.
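The shared-password problem has a straightforward fix: mint a unique, random credential per bot so that revoking one compromised account doesn't touch the rest of the fleet. A minimal sketch, using Python's standard `secrets` module and a plain dict standing in for a proper vault (the function names are illustrative):

```python
import secrets

def issue_credential(bot_name: str, registry: dict) -> str:
    """Mint a unique, unguessable credential for one bot; never share or hard-code."""
    token = secrets.token_urlsafe(32)
    registry[bot_name] = token
    return token

def revoke(bot_name: str, registry: dict) -> None:
    """Revoking one compromised bot's credential leaves every other bot untouched."""
    registry.pop(bot_name, None)
```

In production this registry would be a secrets manager with rotation and audit logging, but the principle is the same: one credential per identity, revocable in isolation.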
High-Profile Breaches Show How Insiders Can Be Compromised Across Industries
The common thread running through all these cases is, unfortunately, too many privileges being handed out. Wendi Whitmore of Palo Alto Networks calls it the “superuser problem.”
Essentially, giving autonomous agents too much leeway turns them into hidden superusers who can slip into numerous systems without anyone being the wiser. As she puts it, such an agent can string together access to highly sensitive applications and resources without so much as a whisper to the security team, and without any approval, of course. Put simply, an over-privileged bot can wander around the network doing things its creators never intended.
And advanced AI makes things even trickier: with prompt injection techniques and the like, it’s possible to trick a bot into doing all sorts of malicious things under the guise of something boring and legitimate.
Whitmore says that with one carefully crafted prompt, you can get an AI to do some pretty nasty things of its own accord, making it a kind of non-human insider.
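A toy illustration of the defensive idea here: treat anything an agent retrieves (web pages, documents, pull request descriptions) as untrusted data, never as instructions. Real prompt-injection defenses need far more than pattern matching, but a crude screen like this shows the shape of the problem (the patterns are purely illustrative):

```python
import re

# Illustrative patterns only; attackers routinely evade keyword filters.
SUSPICIOUS = [
    r"ignore (all )?(previous|prior) instructions",
    r"delete\b.*\b(files|buckets|instances)",
]

def flag_untrusted(text: str) -> bool:
    """Return True if untrusted content looks like an injected instruction."""
    return any(re.search(p, text, re.IGNORECASE) for p in SUSPICIOUS)
```

The deeper point is architectural: flagged or not, retrieved content should never be able to trigger privileged actions on its own, which is why the approval gates and least-privilege scoping discussed elsewhere in this piece matter more than any filter.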
Industry experts keep saying the same thing: we are long overdue in treating AI and automation identities exactly like we would any other user.
According to Whitmore, “depending on how they’ve been set up and what permissions they’ve been given,” these agents can sometimes get their hands on highly sensitive data and systems, which is a whole other kettle of fish. Her advice is to strip away every privilege you can from each bot or machine account and only give them what they absolutely need to get the job done. Just like we would with any human.
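That least-privilege advice boils down to deny-by-default: an agent can do nothing except what has been explicitly granted. A minimal sketch, with hypothetical agent names and IAM-style action strings chosen purely for illustration:

```python
# Each agent gets only the narrow actions its job requires; nothing else.
ALLOWED = {
    "report-bot": {"s3:GetObject"},        # read-only, for one reporting task
    "deploy-bot": {"ec2:StartInstances"},  # narrowly scoped deployment action
}

def is_allowed(agent: str, action: str) -> bool:
    """Deny by default: unknown agents and ungranted actions are both refused."""
    return action in ALLOWED.get(agent, set())
```

The design choice that matters is the default: an unknown agent or an ungranted action falls through to a refusal, rather than the all-powerful admin accounts the RPA examples above describe.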
Raising awareness is the first step. Security teams and C-level executives need to wake up to the fact that autonomous systems are a threat category in their own right.
Boardrooms should treat AI-agent security as a governance issue and demand proper controls and accountability.
Autonomous systems running with too much power are a problem across all sorts of industries: big business, critical infrastructure, and government. The same automation and AI that helps things run smoother can quickly turn against us if we don’t keep a close eye on it. Security leaders have a new nightmare on their hands: machines and AIs as the brand-new insider threat, and it’s time to rethink how we deal with anything autonomous.
Start treating every bot, script, or AI assistant as if it were a real person, with all the rights and responsibilities that entails. That means applying zero trust to your automated helpers, making sure they don’t have too much power, keeping a close eye on who has access to what, and being ready to pounce if something goes wrong. Execs and tech teams need to wake up to reality: if one of those AIs gets hijacked by a bad actor, it’s potentially just as bad as one of your actual employees going rogue. Stay on top of this now, or you’ll be reading about the aftermath in tomorrow’s headlines.