For most security teams, detection feels like a treadmill. The alerts keep coming, rules keep breaking, and platform costs keep rising. Everyone’s tired, but no one’s moving forward.
I’ve seen this pattern in dozens of SOCs across industries, architectures, and maturity levels. And while alert fatigue and burnout get the headlines, they are symptoms, not causes.
The real problem? Most detection strategies were designed for a different era. And we’re still running them like it’s 2012.
The classic SIEM model was based on one big idea: bring all your logs to one place, write some rules, and trigger alerts. That made sense when your environment was simpler, your adversaries noisier, and your tooling less fragmented.
But today’s reality is radically different. Modern environments are hybrid. Data is scattered across clouds, data lakes, and on-prem systems. Attackers move slowly and quietly. And SOCs are being asked to detect advanced behavior patterns using pipelines built for static log correlation.
The architecture hasn’t kept up. Most SIEM platforms were built as on-prem software that later got “cloud-hosted” but not truly cloud-architected. They weren’t designed for elastic data growth, AI-native workflows, or the level of automation modern SOCs now expect. You can only retrofit so far before you hit a wall.
Worse, the burden of making detection work (tuning rules, modeling data, correlating events) always falls on the detection engineer. But that job has quietly morphed into a Frankenstein role: part security expert, part developer, part data wrangler. That’s not sustainable. The platform should be doing more of the lifting.
Too many teams are treating alert volume as a triage problem. “Let’s reduce noise at the tail end,” they’re told. So they buy response tools, layer on SOAR, and try to automate their way out of chaos.
But noise is just the output of broken architecture and strategy. It’s the scrape on your knee, not the uneven ground you keep tripping over.
If you keep treating symptoms, you’ll never fix the root cause. Stronger teams flip the problem on its head: they ask why alerts are noisy in the first place. They invest upstream in data quality, detection design, and contextual enrichment. That’s where false positives (and worse, false negatives) are born. And that’s where they can be prevented.
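To make the upstream idea concrete, here is a minimal sketch of contextual enrichment applied before detection logic runs, so a rule can distinguish a critical asset from a sandbox instead of alerting on every bare match. All names here (the inventory, the fields, the rule itself) are illustrative assumptions, not any specific platform’s API.

```python
# Sketch of upstream enrichment: join raw events with asset context
# *before* detection logic runs, so rules can make informed decisions
# instead of firing on every bare pattern match.
# ASSET_INVENTORY and all field names are illustrative assumptions.

ASSET_INVENTORY = {
    "db-prod-01": {"criticality": "high", "owner": "payments"},
    "dev-sandbox-7": {"criticality": "low", "owner": "eng"},
}

def enrich(event: dict) -> dict:
    """Attach asset context so downstream rules see more than a raw log line."""
    asset = ASSET_INVENTORY.get(event.get("host"), {"criticality": "unknown"})
    return {**event, "asset": asset}

def detect_privileged_login(event: dict) -> bool:
    """Fire only when context says the event matters: a root login on a
    high-criticality asset. The same raw log on a sandbox stays quiet."""
    return (
        event.get("user") == "root"
        and event["asset"]["criticality"] == "high"
    )

raw = {"host": "db-prod-01", "user": "root", "action": "login"}
alert = detect_privileged_login(enrich(raw))
```

The point is where the suppression happens: the sandbox event never becomes an alert, so there is nothing for a downstream SOAR layer to clean up.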
In my experience, mature detection teams operate more like product teams: they think in terms of strategy, not just coverage, and they build systems with iteration in mind.
If I could give one piece of advice to SOC leaders this year, it’s this: don’t replicate your old detection stack in a new platform.
That’s the trap. You’re not just buying new tools, you’re rethinking the entire approach. What worked (or didn’t) in your legacy SIEM shouldn’t be your blueprint moving forward. Take the opportunity to design a modern detection strategy from first principles.
And whatever you do, stop treating data collection as detection. Just because you stored a log doesn’t mean you understood it, much less acted on it. Detection isn’t about volume. It’s about signals.
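One way to read “signals, not volume” is that a detection should be a function over a sequence of events, not a rule per log line. The sketch below, with illustrative thresholds and field names not drawn from any particular SIEM, turns a burst of failed logins followed by a success into a single signal; individual failures on their own produce nothing.

```python
# Sketch: detection as a signal over correlated events, not an alert
# per raw log. A run of failed logins followed by a success from the
# same source becomes one signal; lone failures stay silent.
# FAIL_THRESHOLD and the event schema are illustrative assumptions.

from collections import defaultdict

FAIL_THRESHOLD = 5

def brute_force_signals(events: list[dict]) -> list[dict]:
    failures = defaultdict(int)
    signals = []
    for ev in events:
        src = ev["src_ip"]
        if ev["outcome"] == "failure":
            failures[src] += 1
        elif ev["outcome"] == "success" and failures[src] >= FAIL_THRESHOLD:
            signals.append({
                "type": "possible_brute_force",
                "src_ip": src,
                "failed_attempts": failures[src],
            })
            failures[src] = 0  # reset after emitting the signal
    return signals
```

Six raw log lines in, one signal out: that compression is the difference between storing data and understanding it.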
The industry is slowly waking up to this. You can’t just buy detection. You have to build it with the right architecture, the right context, and the right processes in place.
That’s the hard part. But it’s also the path to better outcomes, less burnout, and systems that don’t just alert, but adapt.
In the end, detection isn’t a feature. It’s a function of how well your system understands your threats and how well your team understands your system.
And that’s where the real work begins.
AI is your friend when applied in the right places
Today, an AI SOC is nearly a reality. But AI needs data, enrichment, and local context, and above all it needs to learn from analysts’ tribal knowledge, institutional practices, and judgment. That learning is often the gap between where public LLMs stop and where real-world response and action begin. It has to be captured through repetition, feedback, human-in-the-loop review (at least for the foreseeable future), and continuous tuning, enrichment, and local context. For this to work, the platform must be rich in domain knowledge and ontology, and fluent in real-world analyst workflows. And none of this can be applied only at the final triage stage. It has to be factored into the whole lifecycle, from data onboarding to detection engineering to triage and investigation.
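A minimal sketch of what capturing that tribal knowledge could look like: analyst verdicts on closed alerts are stored as structured feedback, and future alerts with the same fingerprint inherit a prior that shifts their triage priority. The data model, fingerprint format, and scoring here are all hypothetical assumptions, not a description of any existing product.

```python
# Sketch of a human-in-the-loop feedback store: analyst verdicts become
# structured local knowledge that nudges future triage priority.
# Fingerprints, verdict labels, and the scoring are illustrative.

from collections import defaultdict

verdicts = defaultdict(lambda: {"true_positive": 0, "false_positive": 0})

def record_verdict(fingerprint: str, verdict: str) -> None:
    """Analyst closes an alert; capture the judgment as feedback."""
    verdicts[fingerprint][verdict] += 1

def triage_priority(fingerprint: str, base: float = 0.5) -> float:
    """Shift priority toward the analysts' historical judgment."""
    history = verdicts[fingerprint]
    total = history["true_positive"] + history["false_positive"]
    if total == 0:
        return base  # no local knowledge yet: a human decides
    return history["true_positive"] / total

# Two analysts mark the same recurring alert as a false positive...
record_verdict("rule42|host:db-prod-01", "false_positive")
record_verdict("rule42|host:db-prod-01", "false_positive")
# ...so the next occurrence arrives with a much lower priority.
priority = triage_priority("rule42|host:db-prod-01")
```

The same loop works at any lifecycle stage: the same verdict store could feed back into detection engineering (retire the rule) rather than only into triage ordering.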