Amid escalating threat activity from nation-states exploiting corporate negligence and AI, insiders are increasingly vulnerable to foreign interference, according to a DTEX report.
Drawing insights from over 1,300 investigations within DTEX Systems’ global customer base, the report reveals that nearly a third of malicious insider investigations uncovered anomalous reconnaissance behavior. Among the findings were sophisticated tactics such as unusual research into “crown jewel” topics and manipulation of internal security controls. Three quarters (75%) of sophisticated malicious insiders attempted to conceal their activity, using tactics including burner email and encrypted messaging accounts.
With more than 90% of organizations expressing concern over internal AI usage, and nearly 20% of departing employees taking sensitive intellectual property, the report stressed the urgent need for enhanced insider threat detection and mitigation strategies.
Darren Guccione, CEO and co-founder at Keeper Security, said insider threats are among the most challenging for IT professionals to protect against. “It takes a multi-layered approach to effectively mitigate these risks,” he said. “Because some roles are more sensitive in nature, robust access controls are necessary.”
Threat modelers must accept the reality that there is no trusted safe zone anymore, Guccione said. They should adjust their models to recognize threats that may be anywhere, including inside the traditional boundary.
“Organizations large and small should implement a zero-trust architecture with least-privilege access to ensure employees only have access to what they need to do their jobs,” Guccione said.
In practice, this means time-limiting access rather than granting it indefinitely, periodically reviewing who has access to what, and monitoring activity.
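The review loop described above can be sketched as a periodic audit comparing each employee's granted permissions against what their role actually requires. The role names and permission strings below are hypothetical, not from any real identity system:

```python
# Hedged sketch of a least-privilege access review.
# Role definitions and permission labels are illustrative only.
ROLE_REQUIRED = {
    "engineer": {"repo:read", "repo:write", "ci:run"},
    "analyst": {"dashboards:read", "exports:create"},
}

def excess_permissions(role: str, granted: set[str]) -> set[str]:
    """Return permissions granted beyond what the role requires."""
    return granted - ROLE_REQUIRED.get(role, set())

# Example review: an analyst who still holds repository write access.
flagged = excess_permissions("analyst", {"dashboards:read", "repo:write"})
print(flagged)  # any non-empty result should be revoked or justified
```

Running such a check on a schedule surfaces stale grants, which is the practical core of the periodic review Guccione recommends.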
Nation states have realized that it is far cheaper and easier to recruit talent and socially engineer an insider to gain access to sensitive data and IP than it is to execute traditional Advanced Persistent Threat (APT) attacks of the early 2010s (such as Stuxnet), explained Rajan Koo, DTEX chief technology officer. “Corporate cyber controls have improved and so too have social engineering tactics,” he said. “Now, it is far more expensive and difficult to execute malware.”
Paying someone off to steal corporate data or execute a phishing attack is simpler and cheaper, Koo said. It’s simply a case of cost, scale, supply and demand. “The good news is that if you’ve detected anomalous reconnaissance behavior, you’re as far ‘left of boom’ as you could possibly get,” he added.
Several mitigation opportunities exist at this early stage, though it is important to note that there might not be definitive evidence to prove malicious intent.
“It is important to confirm malicious intent in a way that is ethical and negates any possibility of bias,” Koo pointed out.
Organizations should consider flagging the individual as a “person of interest,” thereby increasing monitoring, and then potentially activating controls that limit access to “crown jewels” assets while an investigation is underway.
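That escalation workflow, flag the individual, raise monitoring, and restrict crown-jewel access while the investigation runs, could be modeled as a simple state change. The monitoring tiers and asset labels here are hypothetical:

```python
# Hedged sketch of a person-of-interest escalation workflow.
# Tier names and asset tags are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Subject:
    user: str
    monitoring_tier: str = "baseline"
    restricted_assets: set = field(default_factory=set)

def flag_person_of_interest(subject: Subject, crown_jewels: set) -> Subject:
    """Raise monitoring and restrict sensitive assets pending investigation."""
    subject.monitoring_tier = "enhanced"
    subject.restricted_assets |= crown_jewels
    return subject

s = flag_person_of_interest(Subject("jdoe"), {"source-code", "deal-docs"})
print(s.monitoring_tier, sorted(s.restricted_assets))
```

Keeping the escalation reversible (tier back to baseline, restrictions lifted) matters here, since, as noted above, there may be no definitive evidence of malicious intent at this stage.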
Koo also recommended collecting an audit trail closest to where the user is touching glass (i.e. the endpoint). That is the best way to reveal internet activity “concealment,” he said. This includes dark web browsing tools (such as Tor) and monitoring VPN technologies at a point where a user cannot conceal the original source of their internet connection.
“Monitoring activity purely from network sensors will not reveal this information,” Koo explained.
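As a rough illustration of why the endpoint vantage point matters: a host-side agent can match running process names against known concealment tooling, something a network sensor misses once traffic is encrypted or tunneled. The watchlist below is illustrative, not exhaustive:

```python
# Hedged sketch of an endpoint-side check for concealment tooling.
# The watchlist entries are illustrative examples, not a complete list.
CONCEALMENT_TOOLS = {"tor", "tor.exe", "openvpn", "wireguard"}

def concealment_hits(running_processes: list[str]) -> set[str]:
    """Return watchlisted concealment tools found among running processes."""
    return {p.lower() for p in running_processes
            if p.lower() in CONCEALMENT_TOOLS}

hits = concealment_hits(["chrome", "Tor", "slack"])
print(hits)  # watchlisted tools observed on the endpoint
```

A real agent would enumerate live processes rather than take a list as input; the point is that the match happens on the host, before any tunnel obscures the traffic.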
Guccione added that access management software – for example a unified privileged access management solution – can help with privileged account and session management, secrets management and enterprise password management.
With more than 90% of organizations concerned about the internal use of AI, companies need to understand how employees are using AI systems today. Regardless of the hype around AI, the way people consume and transfer information is fundamentally changing because of large language models (LLMs) and GenAI.
“AI is a double-edged sword. A clear audit trail of use and misuse needs to be established before usage is permitted or restricted,” Koo explained.
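The audit trail Koo calls for could start as simply as recording who used which AI tool, when, and how. The event fields and in-memory store below are hypothetical, a minimal sketch rather than a production logging pipeline:

```python
# Hedged sketch of a minimal audit trail for internal AI tool usage.
# Field names and the in-memory list are illustrative only.
from datetime import datetime, timezone

audit_log: list[dict] = []

def record_ai_use(user: str, tool: str, action: str) -> None:
    """Append one AI-usage event to the audit trail."""
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "action": action,  # e.g. "prompt" or "file-upload"
    })

record_ai_use("jdoe", "internal-llm", "file-upload")
print(len(audit_log), audit_log[-1]["action"])
```

With events like these collected before usage policy is set, an organization can decide what to permit or restrict based on observed behavior rather than assumption, which is the sequencing Koo describes.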
Considering the finding that one in five departing employees take sensitive intellectual property (IP) with them, the first step to strengthening data loss prevention (DLP) strategies and minimizing the risk of data exfiltration is recognizing a problem exists.
“When there’s intent to take data, no DLP strategy will be leak-proof,” Koo said. “Correlating the non-cyber indicators that an employee has been terminated or has handed in their resignation with data loss indicators is key for early detection and remediation.”
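The correlation Koo describes, joining a non-cyber signal (a resignation or termination) with data-loss indicators inside a time window, can be sketched as follows. The dates, threshold, and field names are hypothetical:

```python
# Hedged sketch: correlate resignation events with data-loss indicators.
# All events, the 30-day window, and the 500 MB threshold are illustrative.
from datetime import date, timedelta

resignations = {"jdoe": date(2024, 3, 1)}
uploads = [  # (user, date, megabytes moved to personal storage)
    ("jdoe", date(2024, 3, 10), 4200),
    ("asmith", date(2024, 3, 12), 15),
]

def flagged_departures(window_days: int = 30, mb_threshold: int = 500):
    """Users with large transfers shortly after handing in notice."""
    flags = []
    for user, day, mb in uploads:
        resigned = resignations.get(user)
        if (resigned
                and resigned <= day <= resigned + timedelta(days=window_days)
                and mb >= mb_threshold):
            flags.append(user)
    return flags

print(flagged_departures())
```

Only the departing employee with a large transfer is flagged; routine activity by other staff stays below the threshold, which keeps the early-detection signal focused.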
Proper offboarding processes are critical to minimize the risk of data exfiltration by employees leaving an organization, Guccione said. “It’s common sense that when an employee leaves the company on bad terms, there is a risk they could seek revenge and intentionally try to cause a data breach. However, even if an employee leaves on good terms, there is still a risk of cybercrime if there aren’t appropriate offboarding processes in place.”