If AI Becomes the User, What Happens to the SIEM?
2026-04-02 13:32 · Author: raffy.ch

RSAC 2026 made one thing very clear to me: the market is moving fast, but it is still deeply confused. The big announcements from Google, Splunk, and Databricks all point in the same direction. Security operations are becoming more agentic, more API-driven, and more automated. But most of the category still looks crowded, early, and only lightly differentiated.

The interesting part is not that everybody now has an AI story. It is where the pressure is landing: attack speed, active response, and the possibility that AI itself becomes the primary user of the security stack.

TL;DR

  • Attacks are now fast enough that human-speed response is no longer a sufficient default.
  • That will push the market toward active response, which is useful but also dangerous if the control logic is not deterministic enough.
  • Most AI SOC vendors still sound similar because many of them sit on top of existing SIEMs and alert streams rather than changing the underlying detection or data architecture.
  • The big SIEM vendors are moving, and SentinelOne is expanding AI security into on-prem and sovereign environments.
  • If AI becomes the user of security products, the UI matters less, the API matters more, and the economics of expensive SIEM platforms get harder to defend.

Attacks are getting faster

This is the part of the market I think people are still underestimating. CrowdStrike’s 2026 threat report says the average eCrime breakout time dropped to 29 minutes in 2025, and the fastest case it observed was 27 seconds. Databricks used its Lakewatch announcement to make a related point from the vulnerability side, citing research that mean time to exploit has fallen from 23.2 days in 2025 to 1.6 days in 2026.

That changes what matters in the SOC. A lot of SIEM workflows still assume there is time to search, enrich, discuss, and decide. That model was already strained. It gets worse when attacks speed up and when the adversary is using AI to compress its own loop. Search still matters, but a search-centric operating model is not enough if the environment can be compromised end to end in under an hour.

The obvious answer is more active response. The problem is that this is where things get dangerous. If teams start handing more containment and remediation decisions to AI before the systems are ready, we are going to see more self-inflicted outages. The market is moving there anyway, because the alternative is to keep defending at human speed against machine-speed attacks. SOAR was supposed to close part of that gap and clearly did not.
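One way to make active response less dangerous is to put a deterministic policy gate between the AI's recommendation and the action. A minimal sketch in Python, where all function names, asset lists, and thresholds are invented for illustration:

```python
# Hypothetical sketch: a deterministic policy gate in front of
# AI-recommended containment actions. Every name here is illustrative.

ALLOWED_ACTIONS = {"isolate_host", "disable_account"}
PROTECTED_ASSETS = {"dc01", "payment-gateway"}  # tier-0: never auto-contain

def gate(action: str, target: str, confidence: float) -> bool:
    """Return True only if the action passes every deterministic check."""
    if action not in ALLOWED_ACTIONS:
        return False  # action type was never pre-approved
    if target in PROTECTED_ASSETS:
        return False  # critical assets always require a human
    if confidence < 0.9:
        return False  # low-confidence calls go to triage instead
    return True

# An AI recommendation only executes if the gate passes.
recommendation = {"action": "isolate_host", "target": "laptop-451", "confidence": 0.97}
if gate(**recommendation):
    print(f"executing {recommendation['action']} on {recommendation['target']}")
else:
    print("routed to human review")
```

The point of the sketch is that the adaptive part (the recommendation) stays probabilistic, while the part that can cause an outage is bounded by rules a team can audit.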

AI SOC is still confusing and mostly sounds the same

That was probably my main emotional reaction leaving RSAC: confusion. There were simply too many vendors with very similar messaging. RSAC says the conference had more than 600 exhibitors this year. I could not independently validate an exact count of 36 AI SOC vendors from public RSAC data, but “roughly three dozen” felt directionally right from the floor, and many of them sounded remarkably similar.

The common pitch was familiar: reduce alerts, triage faster, investigate faster, give the analyst a copilot, automate parts of response. Some of that is clearly useful. But a lot of it still feels like a layer on top of the existing SIEM rather than a rethink of the detection stack itself. If the AI mostly sits on top of alert streams coming out of a legacy backend, then it may improve analyst productivity without materially fixing false negatives, brittle detections, or poor data design upstream.

That is also why I do not think most of this market is really using LLMs in a deep way yet. In most cases, the models are being used for triage, recommendations, summarization, and analyst assistance. That is very different from using LLMs for real detection, broader SOC operations, or meaningful changes to the underlying architecture.

For a more complete framework of where AI SOC and SIEM should be heading, see raffy.ch/SIEM.

That is why so much of the category feels undifferentiated. The interfaces are different, the branding is different, and the demo flows are different, but the center of gravity often looks the same. The latest platform announcements only reinforce that point. If the platform owner adds the agentic layer too, the vendors sitting on top of Chronicle, Splunk, or similar platforms have a much harder moat to defend.

The architecture is shifting

By this point, the vendor movement is established. The more interesting question now is what it does to architecture. SentinelOne adds another signal here by pushing more AI security capability into on-prem, sovereign, and air-gapped environments.

Put together, that points to a broader market shift. Storage matters more. Data routing matters more. Sovereignty and local control matter more. Cheap data lakes, strong analytics layers, and flexible orchestration matter more. Traditional SIEM UI matters less than it used to, and that matters not just for SIEM vendors but also for MDRs that differentiated by putting an AI layer on top of someone else’s backend.

That is also why Splunk’s cost model keeps coming back into the conversation. Splunk is powerful and mature, but if the agent becomes the main consumer of the system, customers start asking a different question: am I paying for the analytics engine, or am I paying for UI, workflow, and operating complexity that an agent increasingly does not care about?

If AI becomes the user, the stack changes

The most important implication may be economic, not just operational. Security products were built for human analysts. The value lived in the UI, the workflow, the search language, the dashboard, and the services needed to make all of that usable. But what happens if the real user becomes Claude Code, Codex, Gemini, or some internal agent instrumented across the entire security stack? Daniel Miessler has been arguing that companies and products increasingly become APIs. Security looks like one of the clearest versions of that shift.

In that world, every product starts to look more like an API than an application. That is exactly where the recent announcements are heading. LimaCharlie’s new lc-soc release is a concrete implementation of the same idea: an open-source “agentic SOC as code” where AI agents are coordinated through the cases system and D&R rules, then deployed and versioned like infrastructure.
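The "SOC as code" idea can be sketched in a few lines: detection-and-response logic lives as declarative data under version control and is applied like infrastructure. The rule shape below is invented for illustration and is not LimaCharlie's actual D&R schema:

```python
# Hedged sketch of "agentic SOC as code": a detection-and-response rule
# kept as versioned data, evaluated against incoming events.
# The rule format is illustrative, not a real product schema.

RULE = {
    "name": "suspicious-lsass-access",
    "version": "1.2.0",  # rules are versioned like infrastructure
    "detect": {"event": "process_access", "target": "lsass.exe"},
    "respond": [{"action": "report"}, {"action": "isolate"}],
}

def matches(rule: dict, event: dict) -> bool:
    """A rule fires when every detect field matches the event."""
    return all(event.get(k) == v for k, v in rule["detect"].items())

event = {"event": "process_access", "target": "lsass.exe", "host": "ws-7"}
triggered = matches(RULE, event)
```

Because the rule is data rather than UI state, an agent can diff it, roll it back, and deploy it through the same pipeline as any other code.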

If AI becomes the primary user, the UI does not disappear, but it stops being the center of gravity. The agent does not care about your console. It cares about whether the data is accessible, whether the schema is consistent, whether the analytics layer is fast, whether the permissions model is clean, and whether the actions are safe to orchestrate.

That creates real pressure on expensive SIEM economics. If the agent can query multiple tools directly, the premium attached to a deeply monetized UI gets harder to justify. The market may move toward something simpler: cheap storage, a strong analytics layer, and an orchestration layer on top. That does not mean incumbents disappear. It means their value proposition changes. If AI becomes the user, the winners may be the vendors with the best APIs, control points, and data access model.
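What "the agent can query multiple tools directly" looks like in practice is a uniform API surface instead of per-product consoles. A minimal sketch, with invented tool names and stubbed backends:

```python
# Minimal sketch of "the agent is the user": security tools registered
# behind one uniform calling convention so an agent can fan a question
# out to all of them. Tool names and payloads are invented.

from typing import Callable

TOOLS: dict[str, Callable[[str], dict]] = {}

def tool(name: str):
    """Register a callable so an agent can discover and invoke it by name."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("edr.search")
def edr_search(query: str) -> dict:
    return {"tool": "edr.search", "query": query, "hits": 2}   # stub backend

@tool("lake.query")
def lake_query(query: str) -> dict:
    return {"tool": "lake.query", "query": query, "hits": 17}  # stub backend

def agent_fanout(query: str) -> list[dict]:
    """An agent asks every registered tool the same question directly."""
    return [fn(query) for fn in TOOLS.values()]

results = agent_fanout("src_ip = 10.0.0.5")
```

Nothing in this loop touches a dashboard, which is the economic point: the value sits in the registry, the schema, and the permissions around it, not in the console.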

Evals become part of the control layer

The next problem is trust and determinism. Once you push AI beyond triage and recommendations and let it make or recommend more consequential changes, you need a way to keep the system reliable. That is where eval loops come in.

I heard Josh Saxe make this point at RSAC in the context of AI-first infrastructure management: if agents are going to make changes in live systems, you need strong evaluation around them to keep behavior bounded and repeatable enough to trust. I think the same logic applies directly to security operations. The market is moving toward active response, but the models themselves were not built around strict determinism.

That means the answer is not blind autonomy. It is more likely a layered system where adaptive AI sits inside clearer control boundaries, with evals, policy, and deterministic automation around it. Evals stop being an AI engineering detail and become part of the security control layer itself.
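The eval-loop idea above can be sketched concretely: a fixed regression suite of known-good scenarios that any AI-proposed response policy must pass before it is allowed near production. The scenarios, scoring, and threshold below are all illustrative assumptions:

```python
# Sketch of evals as a security control layer: a regression suite that
# an agent's proposed automation must pass before deployment.
# Scenarios, policy, and threshold are illustrative, not a real API.

EVAL_SUITE = [
    # (alert, expected action) pairs with known-good outcomes
    ({"type": "malware", "host": "laptop-1"}, "isolate_host"),
    ({"type": "failed_login", "host": "dc01"}, "open_ticket"),
]

def proposed_policy(alert: dict) -> str:
    """Stand-in for AI-generated response logic under evaluation."""
    return "isolate_host" if alert["type"] == "malware" else "open_ticket"

def run_evals(policy) -> float:
    """Fraction of known scenarios the candidate policy handles correctly."""
    passed = sum(policy(alert) == expected for alert, expected in EVAL_SUITE)
    return passed / len(EVAL_SUITE)

score = run_evals(proposed_policy)
DEPLOY_THRESHOLD = 1.0  # behavior must be fully repeatable on known cases
can_deploy = score >= DEPLOY_THRESHOLD
```

The eval suite plays the same role regression tests play in software: the model inside can stay adaptive, but its observable behavior on the cases that matter is pinned down deterministically.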



Source: https://raffy.ch/blog/2026/04/02/if-ai-becomes-the-user-what-happens-to-the-siem/