Detection Engineering: Practicing Detection-as-Code – Versioning – Part 5
This article discusses the importance of versioning in software engineering and extends it to detection engineering (Detection-as-Code). Using the Calendar Versioning (CalVer) and Semantic Versioning (SemVer) schemes, it describes how to manage version updates for detection rules and content packs, and how automation scripts can enforce version consistency. It also explores traceability and the generation of release notes, helping teams track and manage changes to detection content effectively.

Published 2025-09-09 | blog.nviso.eu


In software engineering, versioning is the process of assigning unique identifiers to different states or iterations of a software product. These identifiers (a.k.a. version numbers) help developers and users track updates, changes, and bug fixes made to the software over time. Versioning is essential for managing software development, ensuring compatibility, and communicating changes to end users.

In detection engineering, and especially when practicing Detection-as-Code, versioning is just as important. Versioning in the detection library helps us maintain traceability and track changes to individual detections and content packs. It lets us pinpoint the exact state of a specific detection at a given point in time, provides a clear history of updates, and facilitates troubleshooting and debugging by identifying which version introduced a particular change.

The two most common versioning schemes are Calendar Versioning [1] and Semantic Versioning [2]. In this part, we are going to explore how we can adopt these versioning schemes in our repository.

Calendar Versioning

Calendar versioning, often referred to as CalVer, is a versioning scheme where the version number is based on the release date. Typically, the format includes the year and month of the release, but it can also be something like YYYY.MM.DD (e.g. 2025.08.23). Calendar versioning is particularly useful for projects with regular release cycles, as it is very intuitive and allows for easy understanding of the timing of releases.

An example of calendar versioning in the scope of detection engineering is the Sigma repository releases [3]. Sigma rules also carry a modified date, which can be used as an identifier for each iteration of the rule.
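
As an illustration, a Sigma rule records both a creation date and a last-modified date; the snippet below is a hypothetical, abridged rule header showing only those fields (ISO date format as used by the current Sigma specification).

title: Suspicious Download via Certutil
status: experimental
date: 2024-11-02
modified: 2025-08-23

YAML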

Semantic Versioning

Semantic versioning, often referred to as SemVer, is a versioning system that uses a three-part number format – MAJOR.MINOR.PATCH (e.g. v1.0.1). Each part of the version number conveys specific information:

  • MAJOR – Incremented when there are incompatible changes that may affect backward compatibility.
  • MINOR – Incremented when new features are added in a backward-compatible manner.
  • PATCH – Incremented for backward-compatible bug fixes.

This approach helps developers and users understand the nature of changes in each update, ensuring that the impact of updates on existing implementations is clear. Semantic versioning is particularly useful for effective dependency management and communication during the evolution of software projects, and it is very commonly used in APIs.

An example of Semantic versioning in the scope of detection engineering is the Azure Sentinel detections repository [4].

Versioning in Detection-as-Code

When talking about versioning in the context of Detection Engineering and specifically Detection-as-Code, we are talking about detection library releases, detection versioning, and content pack versioning. We are going to explore those areas and see how we can adopt or adjust the principles that we discussed so far, in our detection library.

Detection Versioning

In Part 2, we defined our metadata file and content pack formats, for which the Semantic Versioning scheme was used. Although the Semantic Versioning specification is very focused on software, we can adapt it to detections and assign an analogous meaning to the MAJOR.MINOR.PATCH format (an illustrative metadata file is shown after the list below).

  • MAJOR – A major version change indicates significant changes to the detection rule that may also modify the detection logic. This could involve a complete rewrite of the detection logic, or changes caused by modifications to other components used by the rule, e.g. changes in the parser of the logs.
  • MINOR – A minor version change reflects enhancements to the detection rule. This might include enhancing the rule’s metadata (for example, adding investigation steps or tags, or updating the description), but also query modifications such as re-arranging conditions to improve performance or renaming fields.
  • PATCH – A patch version change includes bug fixes to the detection rule. This involves changes that fix an incorrect behaviour of the rule without altering its core detection logic.
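
To make this concrete, a detection’s metadata file can carry the version as shown below. The file format was defined in Part 2; the snippet here is a hypothetical, abridged example that only includes the fields relevant to versioning.

# detections/os/windows/download_via_certutil_exe/download_via_certutil_exe_meta.yml (abridged, hypothetical content)
title: Download via certutil.exe
description: Detects the use of certutil.exe to download files from a remote host.
version: 1.2.0   # MAJOR.MINOR.PATCH as described above
tags:
  - attack.command_and_control

YAML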

By assigning meaning to versions with Semantic Versioning, we can very easily track over time which detections required the most effort from the detection engineering team and why. This is a major advantage when we do periodic reviews of the detection library and need to figure out which detections have been through the most iterations (a quick way to do this is shown below). That way we can review the detection content effectively and possibly identify problematic detections.
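
For example, one quick way to see how often a single detection has been touched is to count the commits that modified its directory (the path below is illustrative):

git log --oneline -- detections/os/windows/download_via_certutil_exe/ | wc -l

Bash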

Content Pack Versioning

Considering that content packs are collections of rules, changes to the version of a detection increment the version of the content pack as well. However, there are other cases that can cause the version of the content pack to be incremented on its own (an illustrative content pack is shown after the list below).

  • MAJOR – A major version change indicates significant updates to the content pack that may not be backwards compatible. For example, that could be the addition of rules that only work with a specific version of a parser, or the addition of a batch of rules that greatly enhances the detection capabilities of the content pack.
  • MINOR – A minor version change represents enhancements to the detections of the content pack, or additions of new ones. For example, the addition or removal of detections that does not greatly affect the detection capabilities of the pack, as would be the case when adding or removing a single detection from a 100-rule pack.
  • PATCH – A patch version change includes bug fixes to the content pack. This could involve fixing bugs in existing detections, or removing a detection from the content pack that has caused a flood of alerts.
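
For reference, a content pack could look like the hypothetical, abridged example below; the fields shown are the ones read by the validation and release-notes scripts later in this post.

{
  "name": "Windows Threats",
  "description": "Detections covering common Windows attack techniques.",
  "version": "2.1.0",
  "detections": [
    "os/windows/download_via_certutil_exe"
  ]
}

JSON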

Versioning in content packs is just as important, as it points to which content packs are the most stable and mature, or which have not been reviewed for some time. It also helps us understand what our detection capabilities were at a given point in time, allowing us to easily deduce why some indicators may have been missed in an incident.

Detections and Content Pack Build Validation for Versioning

With our versioning scheme established, the last step is to ensure that any modifications to detection rules or content packs are consistently accompanied by a version update.

We will once again utilize Build Validations [5], which we covered in Part 3, to automate a version-checking mechanism. When a pull request is created towards the main branch, a script will be executed to ensure that versioning is applied, and if not, block the merge. But first, let us go through the steps that will help us understand how we are going to achieve that.

Azure DevOps Pipelines expose predefined variables that can be used in Build Validations to get information on the pull request [6].

  • System.PullRequest.PullRequestId – The ID of the pull request that caused this build.
  • System.PullRequest.SourceBranch – The branch that is being reviewed in the pull request (the source branch).
  • System.PullRequest.TargetBranch – The branch that is the target of the pull request.
  • System.PullRequest.SourceCommitId – The commit that is being reviewed in a pull request.

We can use the following command from the Azure DevOps pipeline to fetch the latest changes from the target branch. Since we are applying the Build Validation policy to the main branch, in our case the target branch is going to be the main branch.

git fetch origin $(System.PullRequest.TargetBranch)

Bash

We can then execute the following, to identify the files that are being modified by the pull request.

git diff --name-only --pretty= origin/main..HEAD

Bash

For this example, we updated a Sentinel detection.

For every modified file, we then execute the following command to display a diff of the changes, which we can then parse to verify that the version has been incremented.

git --no-pager diff --unified=0 origin/main..HEAD -- detections/os/windows/download_via_certutil_exe/download_via_certutil_exe_sentinel.json

Bash

The way we identify if the version is incremented correctly, in the repository that we designed in Part 2, is the following:

  • If the modified file is a content pack, we look directly for correct incrementation of the version field.
  • If the modified file is a metadata YAML file, we look directly for correct incrementation of the version field.
  • If the modified file is a rule file, we identify the YAML metadata file for that detection and look for correct incrementation of the version field.
  • If the modified file is a metadata file or a rule file, we additionally look for a version modification in any content packs that reference this detection.

To put everything together we use the following script which validates the version incrementation in files modified in a pull request using the logic that we described above.

import sys
import re
import subprocess
import os
import json


def run_command(command: list) -> str:
    """Executes a shell command and returns the output."""
    try:
        # print(f"[R] Running command: {' '.join(command)}")
        output = subprocess.check_output(command, text=True, encoding="utf-8", errors="replace").strip()
        # print(f"[O] Command output:\n{'\n'.join(['\t'+line for line in output.splitlines()])}")
        return output
    except subprocess.CalledProcessError as e:
        print(f"##vso[task.logissue type=error]Error executing command: {' '.join(command)}")
        print(f"##vso[task.logissue type=error]Error message: {str(e)}")
        return ""
    except UnicodeDecodeError as e:
        print(f"##vso[task.logissue type=error]Unicode decode error: {e}")
        return ""


def get_pr_modified_files() -> list:
    """Get the pr modified files"""
    return run_command(["git", "diff", "--name-only", "--pretty=", "origin/main..HEAD"]).splitlines()


def get_pr_modified_file_diff_lines(file: str) -> list:
    """Get the pr modified file diff"""
    return run_command(["git", "--no-pager", "diff", "--unified=0", "origin/main..HEAD", "--", file]).splitlines()


def extract_version(line: str, version_regex) -> str:
    """Extract the version number from a line."""
    match = version_regex.search(line)
    return match.group(1) if match else None


def is_version_incremented_correctly(old_version: str, new_version: str) -> bool:
    """Check if the version is incremented correctly according to specified rules."""
    old_version_parts = old_version.split(".")
    new_version_parts = new_version.split(".")

    # Convert version parts to integers for comparison
    old_version_parts = [int(part) for part in old_version_parts]
    new_version_parts = [int(part) for part in new_version_parts]

    # Destructure the version parts for clarity
    old_major, old_minor, old_patch = old_version_parts
    new_major, new_minor, new_patch = new_version_parts

    # Check if major version is incremented
    if new_major > old_major:
        # Major version incremented, minor and patch must be reset to 0
        return new_minor == 0 and new_patch == 0

    # Check if minor version is incremented
    if new_minor > old_minor:
        # Minor version incremented, major must stay the same and patch must be reset to 0
        return new_major == old_major and new_patch == 0

    # Check if patch version is incremented
    if new_patch > old_patch:
        # Patch version can be incremented freely if no other changes
        return new_major == old_major and new_minor == old_minor

    # If no version part is incremented correctly
    return False


def check_version_in_file(file: str, remove_pattern, add_pattern) -> bool:
    """Check if file has both added and removed version lines."""
    diff_lines = get_pr_modified_file_diff_lines(file)
    removed_version_found = False
    added_version_found = False
    version_incremented_correctly = False

    old_version = None
    new_version = None
    for line in diff_lines:
        if remove_pattern.search(line):
            removed_version_found = True
            old_version = extract_version(line, remove_pattern)
        elif add_pattern.search(line):
            added_version_found = True
            new_version = extract_version(line, add_pattern)

    if removed_version_found and added_version_found:
        print("  - Version updated.")
    else:
        print("  - Version NOT updated.")

    if old_version and new_version:
        print(f"  - {old_version} -> {new_version}")
        version_incremented_correctly = is_version_incremented_correctly(old_version, new_version)
        print(f"  - Version incremented correctly: {version_incremented_correctly}")

    return removed_version_found and added_version_found and version_incremented_correctly


def get_content_packs_for_detection(detection: str) -> list:
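    """Return the content packs that reference the given detection."""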
    detection = os.path.dirname(detection.removeprefix("detections/"))
    content_packs = []

    for filename in os.listdir("content_packs"):
        if not filename.endswith(".json"):
            continue

        full_path = os.path.join("content_packs", filename)
        with open(full_path, "r", encoding="utf-8") as f:
            try:
                data = json.load(f)
            except json.JSONDecodeError:
                print(f"Invalid JSON in {filename}")
                continue

        content_pack_detections = data.get("detections", [])
        if detection in content_pack_detections:
            content_packs.append(os.path.join("content_packs", filename))

    return content_packs


def validate_version():
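    """Validate that modified detections and content packs also increment their version."""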
    version_updated_files = []
    version_not_updated_files = []

    cp_version_pattern_remove = re.compile(r'^-\s*"version"\s*:\s*"(\d+\.\d+\.\d+)"')
    cp_version_pattern_added = re.compile(r'^\+\s*"version"\s*:\s*"(\d+\.\d+\.\d+)"')
    de_version_pattern_remove = re.compile(r"^-\s*version\s*:\s*(\d+\.\d+\.\d+)")
    de_version_pattern_added = re.compile(r"^\+\s*version\s*:\s*(\d+\.\d+\.\d+)")

    pr_modified_files = get_pr_modified_files()
    print(f"Modified Files:\n{', '.join(pr_modified_files)}")

    # Content packs first as they will be used in the detections check below
    for pr_modified_file in pr_modified_files:
        if pr_modified_file.startswith("content_pack"):
            # for content_pack check version directly in file
            print(f"Checking file: {pr_modified_file}")
            if check_version_in_file(pr_modified_file, cp_version_pattern_remove, cp_version_pattern_added):
                version_updated_files.append(pr_modified_file)
            else:
                version_not_updated_files.append(pr_modified_file)

    # Detections checks for version updates
    for pr_modified_file in pr_modified_files:
        if pr_modified_file.startswith("detections"):
            print(f"Checking file: {pr_modified_file}")
            if pr_modified_file.endswith("_meta.yml"):
                # Check version in _meta.yml file directly
                if check_version_in_file(pr_modified_file, de_version_pattern_remove, de_version_pattern_added):
                    version_updated_files.append(pr_modified_file)
                else:
                    version_not_updated_files.append(pr_modified_file)

            elif pr_modified_file.endswith("_sentinel.json"):
                # check the _meta.yml file for version changes
                base_name = pr_modified_file.removesuffix("_sentinel.json")
                meta_file = base_name + "_meta.yml"

                # Check if meta file is changed in this pr
                if meta_file in pr_modified_files:
                    if check_version_in_file(meta_file, de_version_pattern_remove, de_version_pattern_added):
                        version_updated_files.append(meta_file)
                    else:
                        version_not_updated_files.append(meta_file)
                else:
                    print(f"  - Metadata file for {pr_modified_file} not updated at all.")
                    version_not_updated_files.append(meta_file)
            else:
                pass

            # If a detection belonging to a content pack is modified, the content pack version must be incremented too.
            included_in_content_packs = get_content_packs_for_detection(pr_modified_file)
            for content_pack in included_in_content_packs:
                if content_pack not in version_updated_files:
                    print(
                        f"  - Detection {pr_modified_file} modified but version not incremented in content pack {content_pack} that references it."
                    )
                    version_not_updated_files.append(content_pack)

    if version_not_updated_files:
        print("##vso[task.logissue type=error] The following files did NOT increment their version:")
        for f in version_not_updated_files:
            print(f"##vso[task.logissue type=error]  - {f}")
        sys.exit(1)
    else:
        print("All relevant files have updated their version.")


def main():
    validate_version()


if __name__ == "__main__":
    main()

Python

We then create the pipeline for that script and add it to the Build Validation policy of the main branch.

name: Validate Version

trigger: none

jobs:
- job: ValidateVersion
  displayName: "Validate Version"
  steps:
    - checkout: self
      fetchDepth: 0
      persistCredentials: true
    - script: |
        echo "PR ID: $(System.PullRequest.PullRequestId)"
        echo "Source Branch: $(System.PullRequest.SourceBranch)"
        echo "Target Branch: $(System.PullRequest.TargetBranch)"
        echo "Latest Commit: $(System.PullRequest.SourceCommitId)"
        git fetch origin $(System.PullRequest.TargetBranch)
      displayName: 'Fetch Target Branch'
    - script: |
        python pipelines/scripts/validate_version.py
      displayName: 'Run Validate Version'

YAML

Once we create a pull request, the Build Validation runs automatically and blocks us from merging until any versioning errors are corrected.

In this example, we modified the detection file, but we did not increment the version in the metadata file or in the content pack that references this detection, as specified above.

We update the versions in both files and we see that the Build Validation is now successful.

Traceability

One of the biggest benefits of using versioning is traceability. By adding the detection and content pack versions to detection rules before deploying them and making sure that this information propagates to the generated security alerts, we can very quickly navigate from the alert to the version of the rule at that given point in time.
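
One way to do this (a hypothetical sketch, not a prescribed approach) is to stamp the versions into the deployed rule, for example by appending them to the rule description in the Sentinel rule file, so that they surface alongside every alert generated by that rule.

{
  "displayName": "Download via certutil.exe",
  "description": "Detects the use of certutil.exe to download files from a remote host. [detection: 1.2.0 | content pack: windows_threats 2.1.0]"
}

JSON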

Tracing the version of the rule in our repository is easy. We can execute the following Git command to search the commit history of the repository for changes involving the string “1.2.0” in a specific file. The “-p” option displays the differences introduced in each commit, providing a detailed view of changes.

git log -p -S "1.2.0" -- detections\cloud\azure\azure_ad_azure_discovery_using_offensive_tools\azure_ad_azure_discovery_using_offensive_tools_meta.yml

Bash

Finally, by using the following command, and the commit id from the previous output, we can fetch the version of the rule at that given point in time and see what the query was.

git checkout 14693f7c8b35c977ece8f664ea28e7d1270cada3 -- detections/cloud/azure/azure_ad_azure_discovery_using_offensive_tools/*

Bash

Detection Library Release Versioning

If you offer detection content as part of a service or product, you are most likely working with detection content releases, meaning that you release content at regular intervals. We discussed in Part 2 different branching strategies that utilize releases. One way to manage releases is by tagging a commit on the main branch with a “release-*” tag that includes the date, following the CalVer scheme discussed above.
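
For example (assuming the tag naming used throughout this post), a release could be cut by tagging the current commit on main and pushing the tag:

git tag release-2025-08-03
git push origin release-2025-08-03

Bash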

We will now look into a way to generate release notes automatically, summarizing the changes between each release. But before providing the script, we are once again going through the Git commands that we are going to use.

First, we are listing the available release tags in the project by executing the command:
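
git tag -l "release-*"

Bash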

We then check out the detections/* and content_packs/* directories for each tagged commit.

git checkout release-2025-08-03 -- content_packs/* detections/*

Bash

Then, for each content pack and referenced detection in that version of the detection library, we collect versions and metadata into a JSON structure. We compare each release with the one before it to identify changes in content packs and detections, so that we know for each of them whether it is new, updated, or existing. Content packs and detections of the first release are all considered new additions to the repository. The JSON structure should look like the example below.
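
The values below are hypothetical and abridged; the fields mirror those collected by the script that follows, and earlier releases appear as additional top-level keys in the same format.

{
  "release-2025-08-03": {
    "windows_threats.json": {
      "name": "Windows Threats",
      "description": "Detections covering common Windows attack techniques.",
      "version": "2.1.0",
      "detections": [
        {
          "path": "os/windows/download_via_certutil_exe",
          "base_name": "download_via_certutil_exe",
          "version": "1.2.0",
          "name": "Download via certutil.exe",
          "description": "Detects the use of certutil.exe to download files from a remote host.",
          "status": "updated"
        }
      ],
      "status": "updated"
    }
  }
}

JSON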

We are going to utilize the following Jinja [7] template to convert this JSON into a Markdown file that we can use to present our release notes.

# Release Notes
{%- for release, content_packs in releases.items() %}
## {{ release }}
{% for content_pack_filename, content_pack_data in content_packs.items() -%}
{%- if content_pack_data.status == 'new' or content_pack_data.status == 'updated' %}
<details>
<summary><b>{{content_pack_data.status | title}}: {{ content_pack_data.name }} ({{ content_pack_filename }}) - {{content_pack_data.version}}</b></summary>
{{ content_pack_data.description }}
#### Detections:
{%- for detection in content_pack_data.detections -%}
{%- if detection.status == 'new' or detection.status == 'updated' %}
**{{ detection.status| title}}**: <a href="{{repo_url}}?path=/detections/{{detection.path}}&version=GT{{release}}">{{ detection.name }}</a> - {{ detection.version }}
{% endif -%}
{%- endfor %}
</details>
{% endif -%}
{%- endfor -%}
{%- endfor -%}

Markdown

We implement the logic above into the following script.

import subprocess
import os
import json
import yaml
import argparse
from jinja2 import Environment, FileSystemLoader


def run_command(command: list) -> str:
    """Executes a shell command and returns the output."""
    try:
        # print(f"[R] Running command: {' '.join(command)}")
        output = subprocess.check_output(command, text=True, encoding="utf-8", errors="replace").strip()
        # print(f"[O] Command output:\n{'\n'.join(['\t'+line for line in output.splitlines()])}")
        return output
    except subprocess.CalledProcessError as e:
        print(f"##vso[task.logissue type=error] Error executing command: {' '.join(command)}")
        print(f"##vso[task.logissue type=error] Error message: {str(e)}")
        return ""
    except UnicodeDecodeError as e:
        print(f"##vso[task.logissue type=error] Unicode decode error: {e}")
        return ""


def get_release_tags() -> list:
    """Get the list of tags in the repo"""
    return run_command(["git", "tag", "-l", "release-*"]).splitlines()


def checkout_dirs_to_ref(ref: str, dirs: str) -> str:
    """Checkout to specific tag"""
    return run_command(["git", "checkout", ref, "--"] + dirs)


def collect_content_pack_info():
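    """Collect name, description, version and detection metadata for every content pack."""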
    content_packs = {}

    for filename in os.listdir("content_packs"):
        if not filename.endswith(".json"):
            continue

        full_path = os.path.join("content_packs", filename)
        with open(full_path, "r", encoding="utf-8") as f:
            try:
                data = json.load(f)
            except json.JSONDecodeError:
                print(f"Invalid JSON in {filename}")
                continue

        content_pack_info = {
            "name": data.get("name", "n/a"),
            "description": data.get("description", "n/a"),
            "version": data.get("version", "n/a"),
            "detections": [],
            "status": "n/a",
        }

        for detection_path in data.get("detections", []):
            # Build the full path to the meta.yml file
            detection_full_path = os.path.join("detections", detection_path)
            detection_file_base = os.path.basename(detection_path)
            meta_file = os.path.join(detection_full_path, f"{detection_file_base}_meta.yml")

            detection_entry = {
                "path": detection_path,
                "base_name": detection_file_base,
                "version": "n/a",
                "name": "n/a",
                "description": "n/a",
                "status": "n/a",
            }

            if os.path.exists(meta_file):
                with open(meta_file, "r", encoding="utf-8") as yf:
                    try:
                        yaml_data = yaml.safe_load(yf)
                        detection_entry["version"] = yaml_data.get("version", "n/a")
                        detection_entry["name"] = yaml_data.get("title", "n/a")
                        detection_entry["description"] = yaml_data.get("description", "n/a")
                    except yaml.YAMLError:
                        print(f"##vso[task.logissue type=error]Invalid YAML in {meta_file}")
            else:
                print(f"##vso[task.logissue type=error]Missing meta file: {meta_file}")

            content_pack_info["detections"].append(detection_entry)

        content_packs[filename] = content_pack_info

    return content_packs


def update_status(releases):
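    """Mark each content pack and detection as new, updated or existing compared to the previous release."""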
    sorted_releases = sorted(releases.keys(), reverse=True)

    for i in range(len(sorted_releases)):
        current_release = sorted_releases[i]
        current_content_packs = releases[current_release]

        if i == len(sorted_releases) - 1:  # Last release
            for (
                content_pack_filename,
                content_pack_data,
            ) in current_content_packs.items():
                content_pack_data["status"] = "new"
                for detection in content_pack_data["detections"]:
                    detection["status"] = "new"
        else:
            previous_release = sorted_releases[i + 1]
            previous_content_packs = releases[previous_release]

            for (
                content_pack_filename,
                content_pack_data,
            ) in current_content_packs.items():
                if content_pack_filename not in previous_content_packs:
                    content_pack_data["status"] = "new"
                    for detection in content_pack_data["detections"]:
                        detection["status"] = "new"
                else:
                    previous_content_pack_data = previous_content_packs[content_pack_filename]
                    if content_pack_data["version"] != previous_content_pack_data["version"]:
                        content_pack_data["status"] = "updated"
                    else:
                        content_pack_data["status"] = "existing"

                    for detection in content_pack_data["detections"]:
                        detection_path = detection["path"]
                        previous_detection = next(
                            (d for d in previous_content_pack_data["detections"] if d["path"] == detection_path),
                            None,
                        )

                        if not previous_detection:
                            detection["status"] = "new"
                        elif detection["version"] != previous_detection["version"]:
                            detection["status"] = "updated"
                        else:
                            detection["status"] = "existing"

    return releases


def generate_release_notes(repo_url: str):
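    """Render release notes for all release tags into a Markdown file."""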
    releases = {}
    release_tags = get_release_tags()
    release_tags = sorted(release_tags, reverse=True)
    print(f"Release Tags: {', '.join(release_tags)}")
    for release_tag in release_tags:
        releases[release_tag] = {}
        print(f"Checking out to tag {release_tag}...")
        checkout_dirs_to_ref(release_tag, ["content_packs/*", "detections/*"])
        print(f"Gathering pack information for {release_tag}...")
        releases[release_tag] = collect_content_pack_info()

    checkout_dirs_to_ref("origin/main", ["content_packs/*", "detections/*"])
    releases = update_status(releases)
    print(json.dumps(releases, indent=4))
    env = Environment(loader=FileSystemLoader("pipelines/scripts/templates"))
    template = env.get_template("release_notes.jinja")
    md_content = template.render(releases=releases, repo_url=repo_url)
    md_path = os.path.join("documentation", "_release_notes.md")
    with open(md_path, "w") as f:
        f.write(md_content)


def main():
    parser = argparse.ArgumentParser(description="Process Git commits for specific changes.")
    parser.add_argument("--repo-url", type=str, required=True, help="Specify the repository url.")

    args = parser.parse_args()

    generate_release_notes(repo_url=args.repo_url)


if __name__ == "__main__":
    main()

Python

We configure the pipeline so that, when we create a release tag, our script above is executed and the pipeline commits the release notes Markdown file to the repository under the documentation directory.

name: Change Log Generation

trigger:
  branches:
    include:
    - "main"
  paths:
    include:
    - "detections/*"
    - "content_packs/*"
    exclude:
    - ".git*"

jobs:
- job: ChangeLogGeneration
  displayName: "Change Log Generation"
  condition: eq(variables['Build.SourceBranchName'], 'main')
  steps:
    - checkout: self
      fetchDepth: 0
      persistCredentials: true
    - script: |
        git fetch origin main
        git checkout -b main origin/main
      displayName: "Fetch Branches Locally"
    - script: |
        python pipelines/scripts/change_log.py --repo-url https://dev.azure.com/<ORGANIZATION_NAME>/DAC/_git/Detection-as-Code
      displayName: 'Run Change Log Generation'
    - bash: |
        git config --global user.email "[email protected]"
        git config --global user.name "Azure DevOps Pipeline"
        [[ $(git status --porcelain) ]] || { echo "No changes, exiting now..."; exit 0; }
        echo "Changes detected, let's commit!"
        git pull origin $(Build.SourceBranchName)
        git checkout $(Build.SourceBranchName)
        git add documentation/
        git commit --allow-empty  -m "Automated commit from pipeline - Update Change Log"
        git push origin $(Build.SourceBranchName)
      displayName: Commit Change Log to Repo

YAML

In order to issue a commit from the pipeline, we need to grant the “Contribute” permission to the Azure DevOps project’s Build Service account. We can do that by navigating to Project Settings -> Repositories -> selecting our repository -> clicking on Security -> selecting the main branch -> selecting the DAC Build Service account and assigning the Contribute permission.

Our pipeline is executed upon creation of a “release-*” tag and generates the release notes Markdown file, which is then committed to the repository under the documentation directory. It is also available through the wiki page that we created in Part 4.

The release notes Markdown file is shown in the following screenshot and includes the changes that happened between each release.

The detection names are clickable and will take us to the version of the detection in the repository that existed in that release.

Wrapping Up

In detection engineering, particularly with Detection-as-Code, versioning ensures traceability and facilitates effective management of detection rules and content packs. By applying schemes like Calendar Versioning and Semantic Versioning, teams maintain a clear picture of the nature and impact of changes, aiding in troubleshooting, dependency management, and communication of changes.

The next part of this blog series is going to be about ways of deploying detections to the target platforms.

References

[1] https://calver.org/
[2] https://semver.org/
[3] https://github.com/SigmaHQ/sigma/releases
[4] https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityEvent/AdminSDHolder_Modifications.yaml
[5] https://learn.microsoft.com/en-us/azure/devops/repos/git/branch-policies?view=azure-devops&tabs=browser#build-validation
[6] https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables
[7] https://jinja.palletsprojects.com/en/stable/intro/

About the Author

Stamatis Chatzimangou

Stamatis is a member of the Threat Detection Engineering team at NVISO’s CSIRT & SOC and is mainly involved in Use Case research and development.


Source: https://blog.nviso.eu/2025/09/09/detection-engineering-practicing-detection-as-code-versioning-part-5/