Why Venture Capital Is Betting Against Traditional SIEMs

And why most of the arguments do not hold up under scrutiny

Over the past 18 to 24 months, venture capital has flowed into a fresh wave of SIEM challengers including Vega (which raised $65M in seed and Series A at a ~$400M valuation), Perpetual Systems, RunReveal, Iceguard, Sekoia, Cybersift, Ziggiz, and Abstract Security, all pitching themselves as the next generation of security analytics. What unites them is not just funding but a shared narrative that incumbent SIEMs are fundamentally broken: too costly, too siloed, too hard to scale, and too ineffective in the face of modern data volumes and AI-driven threats.

This post does not belabor each startup’s product. Instead it abstracts the shared assertions that justify the recent funding and then stress-tests them to see which hold up under scrutiny. I am not defending incumbents. I am trying to separate real gaps from marketing (and funding) narratives.

The “SIEM is Broken” Narrative

A commonly cited industry report claimed that major SIEM tools cover only about 19% of MITRE ATT&CK techniques despite having access to data that could cover ~87%. That statistic is technically interesting but also deeply misleading: ATT&CK technique coverage is not an operational measure of detection quality or effectiveness; it primarily reflects rule inventory and tuning effort. Nevertheless, it has become a core justification for the “SIEM is obsolete” narrative. I was not able to find the original report to validate what was tested and how, but I have seen SIEMs that cover far more, backed by large detection teams that own exactly this problem.
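To make concrete why a coverage number like this says more about rule inventory than about detection quality, here is a minimal, hypothetical sketch. The technique IDs and rule set are invented for illustration; the “coverage” metric is just set arithmetic over whichever rules happen to exist, and it says nothing about how well any rule actually fires.

```python
# Hypothetical illustration: "ATT&CK coverage" as pure set arithmetic.
# The technique IDs and rule inventory below are invented for this example.

techniques_with_available_telemetry = {
    "T1059", "T1078", "T1021", "T1566", "T1055", "T1003", "T1486", "T1071",
}

# Coverage is driven by which rules a team has written and tuned,
# not by how well any individual rule actually detects the technique.
deployed_rules = {
    "suspicious_powershell": {"T1059"},
    "impossible_travel_login": {"T1078"},
}

covered = set().union(*deployed_rules.values())
coverage = len(covered & techniques_with_available_telemetry) / len(
    techniques_with_available_telemetry
)
print(f"'Coverage': {coverage:.0%}")  # 25% here, yet says nothing about efficacy
```

Write two more rules and the number jumps, with zero change in how much attacker activity the SOC would actually catch.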

The Five Core Claims Driving the Market Thesis

Across decks, interviews, and marketing copy, I picked five recurring themes that define what these companies think incumbents get wrong and what investors are underwriting as the path forward.

1. “Centralized SIEM architectures no longer scale”

The claim is that forcing security telemetry into a centralized repository is too expensive and too slow for modern enterprises generating terabytes of logs every day. The proposed fixes include federated queries, analyzing data where it lives, and decoupling detection from ingestion so you never have to move or duplicate all your data.

The challenge is that correlation, state, timelines, and real-time detection require locality. Distributed query engines excel at ad-hoc exploration but are not substitutes for continuous detection pipelines. Federated queries introduce latency, inconsistent performance, and complexity every time you write a detection. Normalization deferred to query time pushes complexity into every rule. You do not eliminate cost, you shift it to unpredictable query execution and compute costs that spike precisely when incidents occur. Centralizing data isn’t a flaw; it is a tradeoff that supports correlation engines, summary indexes, entity timelines, and stateful detections that distributed query models struggle to maintain in real time. In fact, if the SIEM were to store the data in the customer’s S3 bucket, you could keep costs somewhat under control.
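As a concrete illustration of why state and locality matter, here is a minimal sketch of a stateful, streaming detection: repeated failures followed by a success for the same entity. The event schema, threshold, and window are assumptions for illustration only, not any vendor’s implementation; the point is the per-entity state that has to live somewhere and be updated on every event.

```python
# Minimal sketch of a stateful streaming detection: N failed logins followed by
# a success for the same user within a time window. Event schema, threshold,
# and window are assumptions for illustration only.
from collections import defaultdict, deque

WINDOW_SECONDS = 300
FAILURE_THRESHOLD = 5

failures = defaultdict(deque)  # user -> timestamps of recent failures

def process(event):
    """event: dict with 'ts' (epoch seconds), 'user', 'outcome' ('failure'|'success')."""
    q = failures[event["user"]]
    # Expire state outside the window -- this per-entity state is exactly what
    # a query-time, federated model has to rebuild on every evaluation.
    while q and event["ts"] - q[0] > WINDOW_SECONDS:
        q.popleft()
    if event["outcome"] == "failure":
        q.append(event["ts"])
    elif event["outcome"] == "success" and len(q) >= FAILURE_THRESHOLD:
        return f"ALERT: possible brute force then success for {event['user']}"
    return None

for e in [{"ts": t, "user": "alice", "outcome": "failure"} for t in range(0, 50, 10)] + [
    {"ts": 60, "user": "alice", "outcome": "success"}
]:
    alert = process(e)
    if alert:
        print(alert)
```

Running this continuously over a federated set of remote stores means either shipping the events to the state anyway, or re-querying every source on every evaluation cycle.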

2. “SIEM pricing is broken because it charges by data volume”

A frequent refrain is that incumbent SIEMs penalize good security hygiene by tying pricing to ingestion volume, which becomes untenable as data grows. The proposed response is pricing models untethered from volume, open storage, and customer-controlled compute.

The challenge is that cost doesn’t vanish because you hide volume. Compute, memory, enrichment, retention, and query costs all remain. If pricing is detached from ingestion, it typically reappears as unpredictable query charges, usage tiers, or gated features. Volume is not an arbitrary metric; it correlates with the cost a vendor (or customer) incurs. Treating cost as orthogonal to data volume does not make it disappear; it just blinds you to a key cost driver. I have dealt with all the pricing models: by user, by device, by volume, and so on. In the end I needed to make my gross margins work; guess who pays for that?
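A back-of-the-envelope sketch of the point, with every price and volume made up for illustration: whether a vendor charges per ingested GB or per GB scanned at query time, the same underlying data volume drives the bill, and query-based pricing tends to be the less predictable of the two.

```python
# Toy cost model: every number here is invented for illustration.
daily_ingest_gb = 500
queries_per_day = 2_000
avg_gb_scanned_per_query = 40  # hunts and investigations scan historical data

# Model A: classic ingestion-based pricing.
ingest_price_per_gb = 0.50
model_a_daily = daily_ingest_gb * ingest_price_per_gb

# Model B: "volume-free" pricing that bills compute per GB scanned instead.
scan_price_per_gb = 0.01
model_b_daily = queries_per_day * avg_gb_scanned_per_query * scan_price_per_gb

print(f"Ingestion-based: ${model_a_daily:,.0f}/day")
print(f"Query-based:     ${model_b_daily:,.0f}/day (and it spikes during incidents)")
```

The numbers are arbitrary, but the structure is not: the scanned-data term scales with retention, investigation activity, and incident load, which is exactly when you least want a surprise bill.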

3. “SIEM detections are weak because they rely on bad rules”

New entrants commonly assert that traditional SIEM rules are noisy, static, and unable to keep up with modern threat techniques. Solutions offered include natural-language detections, detections-as-code, continuous evaluation, and AI-generated rules.

The challenge is that many of these still sit atop the same primitives. For example, SIGMA is widely used as a community detection language, but it is fundamentally limited: it is mostly single-event, cannot express event ordering or causality, has no native temporal abstractions or entity-centric modeling, and cannot natively express thresholds, rates, cardinality, or statistical baselines. Wrapping these limitations in AI or “natural language” does not change the underlying detection physics. You can improve the workflow and authoring experience, but you do not get a fundamentally new class of detection out of the same primitives. And guess what: large vendors have pretty significant content teams (I mean detection engineering teams), often tied into their threat research labs. Don’t tell me a startup has found a more cost-effective and higher-efficacy way to release detection rules. If that were the case, all these large vendors would be dumb to operate such large teams.
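To illustrate the gap, here is a minimal sketch contrasting a single-event match (roughly the kind of predicate a Sigma-style rule expresses) with an ordered, multi-event sequence per entity, which needs state and temporal primitives the rule language itself does not provide. The field names and the attack sequence are invented for illustration.

```python
# Illustrative only: field names and the attack sequence are made up.

def single_event_match(event):
    # Roughly what a single-event, Sigma-style predicate can express:
    # field comparisons on one event at a time.
    return event.get("process") == "rundll32.exe" and "http" in event.get("cmdline", "")

def ordered_sequence_match(events, host):
    # What the rule language itself cannot express: ordering across events
    # for the same entity (download, then execution, then outbound beacon).
    wanted = ["file_download", "process_create", "network_beacon"]
    seen = [e["type"] for e in sorted(events, key=lambda e: e["ts"]) if e["host"] == host]
    it = iter(seen)
    return all(step in it for step in wanted)  # subsequence check preserves order

events = [
    {"ts": 1, "host": "h1", "type": "file_download"},
    {"ts": 2, "host": "h1", "type": "process_create"},
    {"ts": 3, "host": "h1", "type": "network_beacon"},
]
print(ordered_sequence_match(events, "h1"))  # True: the full ordered chain occurred
```

Whether the first kind of rule is authored in YAML, natural language, or by an LLM changes the authoring experience, not what the rule can express.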

4. “SIEMs lack context, causing false positives”

The argument here is that existing SIEMs flood analysts with alert noise because they lack deep asset context, threat intelligence, or behavioral understanding. New entrants promise tightly integrated TI feeds, cloud context, or built-in behavior analytics.

Context integration has been a focus of incumbent platforms for years. The real hard problem is not accessing context but operationalizing it without drowning analysts. More feeds often mean more noise unless you have mature enrichment pipelines, entity resolution, and risk scoring built into rules that understand multi-stage attack sequences. Adding more sources does not automatically improve signal quality. The noise problem is as much about rule quality and use-case focus as it is about context availability. The same argument I made in the previous item about the quality of detection content applies here to the quality of threat feeds.
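A minimal sketch of what “operationalizing context” means in practice, with invented asset data, indicators, and scoring weights: enrichment and risk scoring sit between a raw threat-intel match and an analyst-facing alert, rather than every feed hit becoming a page.

```python
# All asset data, indicators, and weights are invented for illustration.

ASSET_CONTEXT = {
    "10.0.4.12": {"criticality": "high", "owner": "payments", "internet_facing": True},
    "10.0.9.77": {"criticality": "low", "owner": "lab", "internet_facing": False},
}
TI_MATCH_BASE_SCORE = 30

def score_alert(indicator_match):
    """Turn a raw threat-intel hit into a risk-scored signal instead of a page."""
    asset = ASSET_CONTEXT.get(indicator_match["dest_ip"], {})
    score = TI_MATCH_BASE_SCORE
    if asset.get("criticality") == "high":
        score += 40
    if asset.get("internet_facing"):
        score += 20
    # Only high-scoring, context-aware signals reach an analyst queue.
    return {"score": score, "page_analyst": score >= 80, **indicator_match}

print(score_alert({"dest_ip": "10.0.4.12", "ioc": "198.51.100.7"}))
print(score_alert({"dest_ip": "10.0.9.77", "ioc": "198.51.100.7"}))
```

The same indicator produces a page in one case and a low-priority record in the other; the hard work is maintaining the asset context and tuning the weights, not bolting on another feed.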

5. “AI-native SIEMs will finally fix detection and response”

Perhaps the most seductive claim is that incumbent SIEMs were built for a pre-AI world and that new platforms built with agentic AI at every layer will finally crack automation, detection, and investigation.

The challenge is that AI does not eliminate the need for structured, high-quality, normalized data, or explainability, or deterministic behavior in high-risk contexts. AI can accelerate workflows, assist with investigation, and suggest hypotheses, but it does not replace the need for precise, reproducible, and auditable detection logic. Most AI-native claims today are improvements in UX and speed, not architectural breakthroughs in detection theory.
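One way to read that division of labor, as a minimal sketch with a stubbed-out model call (llm_suggest_hypotheses is a hypothetical placeholder, not any real API): the model proposes investigation hypotheses, while the verdict that gets acted on comes from deterministic, reproducible checks over the evidence that an auditor can retrace.

```python
# Sketch only: llm_suggest_hypotheses is a hypothetical stub, not a real API.

def llm_suggest_hypotheses(alert):
    # In a real system this would call a model; here it is a fixed placeholder.
    return ["credential theft via phishing", "benign admin activity"]

def deterministic_checks(alert, evidence):
    """Reproducible, auditable logic that actually decides the verdict."""
    findings = []
    if evidence.get("mfa_bypassed"):
        findings.append("MFA bypass observed")
    if evidence.get("new_inbox_rule"):
        findings.append("suspicious inbox rule created")
    return {"verdict": "escalate" if findings else "close", "findings": findings}

alert = {"user": "alice", "type": "suspicious_login"}
evidence = {"mfa_bypassed": True, "new_inbox_rule": True}
print("Hypotheses:", llm_suggest_hypotheses(alert))  # assistive, not authoritative
print("Decision:  ", deterministic_checks(alert, evidence))
```

The assistive layer can make analysts faster; the part that has to be precise, reproducible, and explainable still looks like detection logic.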

The Uncomfortable Conclusion

VC money is flowing because SIEM is operationally hard, expensive, and often unpopular with SOC teams. There is real pain and real gaps, especially around cost transparency, scaling, and usability. But declaring existing SIEMs obsolete because they are imperfect is not a thesis; it is a marketing slogan.

The core assumptions driving this funding wave deserve scrutiny: centralization is treated as a flaw rather than a tradeoff necessary for continuous detection, pricing complaints get conflated with architectural insights, detection quality is blamed on tooling rather than operational realities, and AI is overstated as a panacea.

On the flip side, here are a few directions worth looking at:

  1. Some of the new entrant SIEMs actually make a dent. They are rebuilding their entire pipelines and storage architecture with modern technologies, not old paradigms. They have a clear advantage and don’t have to deal with millions of lines of tech debt. Using an agentic AI architecture could be quite interesting here.
  2. As the AI SOC emerges (and maybe becomes a reality), we will probably see more and more MCP servers exposing infrastructure information that can be leveraged, from alerts to context to response capabilities. But we’ll need to see how data schemas and related standards evolve.
  3. The one innovation that has already generated some returns for investors is the entire data pipeline world. Companies like Observo (I had the privilege of being an advisor) have added something genuinely useful to SIEMs, and as I argue in one of my previous blog posts, data pipelining really needs to become a capability baked into every SIEM out there. A toy sketch of such a pipeline stage follows this list.
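Here is that sketch: a pre-SIEM pipeline stage that filters, enriches, and routes events before ingestion. Event types, field names, and destinations are all invented for illustration; the point is that volume reduction and routing decisions happen before the expensive ingest, which is where the pipeline vendors have added value.

```python
# Toy pre-SIEM pipeline stage: filter, reduce, and route events before ingestion.
# Event types, field names, and destinations are invented for illustration.

NOISY_EVENT_TYPES = {"heartbeat", "dns_query_internal"}

def pipeline(event):
    if event["type"] in NOISY_EVENT_TYPES:
        return ("cheap_archive", event)          # keep for compliance, skip the SIEM
    event = {**event, "env": "prod" if event["host"].startswith("p-") else "nonprod"}
    if event["type"] in {"auth_failure", "process_create"}:
        return ("siem", event)                   # security-relevant, full-cost ingest
    return ("data_lake", event)                  # everything else goes to cold storage

for e in [
    {"type": "heartbeat", "host": "p-web01"},
    {"type": "auth_failure", "host": "p-web01"},
    {"type": "app_debug", "host": "d-build07"},
]:
    print(pipeline(e))
```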

Thanks for the feedback, Jesse!


