CTI-REALM: A new benchmark for end-to-end detection rule generation with AI agents
March 20, 2026 · Source: www.microsoft.com

Excerpt: CTI-REALM is Microsoft’s open-source benchmark for evaluating AI agents on real-world detection engineering—turning cyber threat intelligence (CTI) into validated detections. Instead of measuring “CTI trivia,” CTI-REALM tests end-to-end workflows: reading threat reports, exploring telemetry, iterating on KQL queries, and producing Sigma rules and KQL-based detection logic that can be scored against ground truth across Linux, AKS, and Azure cloud environments.


Security is Microsoft’s top priority. Every day, we process more than 100 trillion security signals across endpoints, cloud infrastructure, identity, and global threat intelligence. That’s the scale modern cyber defense demands, and AI is a core part of how we protect Microsoft and our customers worldwide. At the same time, security is, and always will be, a team sport.

That’s why Microsoft is committed to AI model diversity and to helping defenders apply the latest AI responsibly. We created CTI-REALM and open-sourced it so the broader industry can test models, write better code, and build more secure systems together.


CTI-REALM (Cyber Threat Real World Evaluation and LLM Benchmarking) is Microsoft’s open-source benchmark that evaluates AI agents on end-to-end detection engineering. Building on work like ExCyTIn-Bench, which evaluates agents on threat investigation, CTI-REALM extends the scope to the next stage of the security workflow: detection rule generation. Rather than testing whether a model can answer CTI trivia or classify techniques in isolation, CTI-REALM places agents in a realistic, tool-rich environment and asks them to do what security analysts do every day: read a threat intelligence report, explore telemetry, write and refine KQL queries, and produce validated detection rules.

We curated 37 CTI reports from public sources (Microsoft Security, Datadog Security Labs, Palo Alto Networks, and Splunk), selecting those that could be faithfully simulated in a sandboxed environment and that produced telemetry suitable for detection rule development. The benchmark spans three platforms: Linux endpoints, Azure Kubernetes Service (AKS), and Azure cloud infrastructure with ground-truth scoring at every stage of the analytical workflow.

Why CTI-REALM exists

Existing cybersecurity benchmarks primarily test parametric knowledge: can a model name the MITRE technique behind a log entry, or classify a TTP from a report? These are useful signals. However, they miss the harder question: can an agent operationalize that knowledge into detection logic that finds attacks in production telemetry?

No current benchmark evaluates this complete workflow. CTI-REALM fills that gap by measuring:

  • Operationalization, not recall: Agents must translate narrative threat intelligence into working Sigma rules and KQL queries, validated against real attack telemetry.
  • The full workflow: Scoring captures the quality of intermediate decisions (CTI report selection, MITRE technique mapping, data source identification, iterative query refinement), not just the final output.
  • Realistic tooling: Agents use the same types of tools security analysts rely on: CTI repositories, schema explorers, a Kusto query engine, MITRE ATT&CK and Sigma rule databases.
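Validation against ground truth, as described above, amounts to comparing the events a rule flags with a labeled attack log. A minimal sketch of that comparison follows; the event-ID framing and the precision/recall metrics are assumptions for illustration, not the benchmark's actual scorer.

```python
# Hypothetical sketch of scoring a detection rule against labeled telemetry.
# Event IDs and the precision/recall framing are illustrative assumptions.

def evaluate_rule(flagged_ids: set[str], attack_ids: set[str]) -> dict:
    """Compare events a rule flags with ground-truth attack events."""
    tp = len(flagged_ids & attack_ids)  # true positives
    precision = tp / len(flagged_ids) if flagged_ids else 0.0
    recall = tp / len(attack_ids) if attack_ids else 0.0
    return {"precision": precision, "recall": recall}

# A rule that flags two of three attack events plus one benign event:
scores = evaluate_rule({"e1", "e2", "e9"}, {"e1", "e2", "e3"})
```

A rule that fires on everything would score perfect recall but poor precision, which is why scoring against real telemetry, rather than checking the rule's text, is the harder and more useful test.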

Business impact

CTI-REALM gives security engineering leaders a repeatable, objective way to prove whether an AI model improves detection coverage and analyst output.

Traditional benchmarks tend to provide a single aggregate score: a model passes or fails, but the team doesn’t always learn why. CTI-REALM’s checkpoint-based scoring answers this directly, revealing whether a model struggles with CTI comprehension, query construction, or detection specificity. This helps teams make informed decisions about where human review and guardrails are needed.
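The checkpoint idea can be illustrated with a toy aggregator that keeps per-stage scores visible, so a low aggregate can be traced to a specific weak stage. The checkpoint names and weights below are invented for illustration and are not CTI-REALM's actual scoring formula.

```python
# Toy illustration of checkpoint-based scoring. Checkpoint names and
# weights are hypothetical, not CTI-REALM's real formula.

WEIGHTS = {
    "cti_comprehension": 0.2,
    "technique_mapping": 0.2,
    "query_construction": 0.3,
    "detection_quality": 0.3,
}

def aggregate(checkpoints: dict[str, float]) -> tuple[float, str]:
    """Return the weighted reward plus the weakest stage for triage."""
    reward = sum(WEIGHTS[k] * checkpoints[k] for k in WEIGHTS)
    weakest = min(WEIGHTS, key=lambda k: checkpoints[k])
    return reward, weakest

reward, weakest = aggregate({
    "cti_comprehension": 0.9,
    "technique_mapping": 0.8,
    "query_construction": 0.4,   # this model struggles at query writing
    "detection_quality": 0.6,
})
```

The point of surfacing `weakest` is exactly the triage question the paragraph above raises: it tells a team where to concentrate human review rather than only whether the model passed.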

Why CTI-REALM matters for business

  • Measures operationalization, not trivia: Focuses on translating narrative threat intel into detection logic that can be validated against ground truth.
  • Captures the workflow: Evaluates intermediate steps (e.g., technique extraction, telemetry identification, iterative refinement) in addition to the final rule quality.
  • Supports safer adoption: Helps teams benchmark models before considering any downstream use and reinforces the need for human review before operational deployment.

Latest results

We evaluated 16 frontier model configurations on CTI-REALM-50 (50 tasks spanning all three platforms).

Model performance on CTI-REALM-50, sorted by normalized reward.

What the numbers tell us

  • Anthropic models lead across the board. Claude occupies the top three positions (0.587–0.637), driven by significantly stronger tool-use and iterative query behavior compared to OpenAI models.
  • More reasoning isn’t always better. Within the GPT-5 family, medium reasoning consistently beats high across all three generations, suggesting overthinking hurts in agentic settings.
  • Cloud detection is the hardest problem. Performance drops sharply from Linux (0.585) to AKS (0.517) to Cloud (0.282), reflecting the difficulty of correlating across multiple data sources in APT-style scenarios.
  • CTI tools matter. Removing CTI-specific tools degraded every model’s output by up to 0.150 points, with the biggest impact on final detection rule quality rather than intermediate steps.
  • Structured guidance closes the gap. Providing a smaller model with human-authored workflow tips closed about a third of the performance gap to a much larger model, primarily by improving threat technique identification.

For complete details around techniques and results, please refer to the paper here: [2603.13517] CTI-REALM: Benchmark to Evaluate Agent Performance on Security Detection Rule Generation Capabilities.

Get involved

CTI-REALM is open source and free to access, and will soon be available in the Inspect AI evals repository: UKGovernmentBEIS/inspect_evals: Collection of evals for Inspect AI.

Model developers and security teams are invited to contribute, benchmark, and share results via the official GitHub repository. For questions or partnership opportunities, reach out to the team at msecaimrbenchmarking@microsoft[.]com.

CTI-REALM helps teams evaluate whether an agent can reliably turn threat intelligence into detections before relying on it in security operations.

References

  1. Microsoft raises the bar: A smarter way to measure AI for cybersecurity | Microsoft Security Blog
  2. [2603.13517] CTI-REALM: Benchmark to Evaluate Agent Performance on Security Detection Rule Generation Capabilities
  3. CTI-REALM: Cyber Threat Intelligence Detection Rule Development Benchmark by arjun180-new · Pull Request #1270 · UKGovernmentBEIS/inspect_evals

Article source: https://www.microsoft.com/en-us/security/blog/2026/03/20/cti-realm-a-new-benchmark-for-end-to-end-detection-rule-generation-with-ai-agents/