
Lab 15: Purple Team Automation

Chapters: 6 (Threat Hunting), 41 (Red Team Methodology), 49 (Threat Intelligence Ops)
Difficulty: ⭐⭐⭐⭐ Expert
Estimated Time: 4-5 hours
Prerequisites: Lab 9 (Purple Team Exercise), Chapter 6, Chapter 41, Chapter 49, PowerShell fundamentals, SIEM query experience (KQL or SPL)


Overview

In this lab you will:

  1. Configure Atomic Red Team for scheduled, automated adversary emulation across multiple ATT&CK tactics
  2. Build a detection validation pipeline that automatically correlates emulated attacks with SIEM alerts
  3. Integrate MITRE ATT&CK Navigator to visualize detection coverage gaps and prioritize engineering work
  4. Implement a continuous detection engineering workflow with version-controlled Sigma rules and automated testing
  5. Build a purple team metrics and reporting dashboard that tracks detection coverage, mean-time-to-detect, and rule health over time

Synthetic Data Only

All data in this lab is 100% synthetic and fictional. All IP addresses use RFC 5737 (192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24) or RFC 1918 (10.0.0.0/8, 172.16.0.0/12) reserved ranges. All hostnames use *.example or *.example.com domains. All credentials are shown as testuser/REDACTED. No real malware, real hosts, real threat actors, or real credentials are referenced. This lab is for defensive education only — never use these techniques against systems you do not own.

Relationship to Lab 9

Lab 9 introduced manual purple team exercises with 10 individual Atomic Red Team tests. This lab builds on that foundation by automating the entire purple team lifecycle — from scheduled test execution through detection validation to coverage reporting. Complete Lab 9 first for maximum benefit.


Scenario

Engagement Brief — ACME Security Corp

Organization: ACME Security Corp (fictional)
Internal Network: 10.10.0.0/16
SOC SIEM: Sentinel / Splunk hybrid (SYNTHETIC)
Red Team Platform: 10.10.1.50 (red-team-01.acme.example)
Blue Team SIEM: 10.10.2.10 (siem-01.acme.example)
ATT&CK Navigator Server: 10.10.2.20 (navigator.acme.example)
CI/CD Server: 10.10.2.30 (cicd-01.acme.example)
Metrics Dashboard: 10.10.2.40 (metrics.acme.example)
Target Workstation: 10.10.3.100 (ws-target-01.acme.example) — Windows 10, Sysmon deployed
Target Server: 10.10.3.200 (srv-target-01.acme.example) — Windows Server 2022
Target Linux Host: 10.10.3.201 (lnx-target-01.acme.example) — Ubuntu 22.04
Engagement Type: Continuous Purple Team Automation Program
Assessment Period: 2026-03-01 through 2026-03-22 (SYNTHETIC)
Threat Profile: APT-SYNTHETIC-7 — a fictional threat actor targeting financial services with credential theft, lateral movement, and data exfiltration

Summary: ACME Security Corp has decided to move from quarterly manual purple team exercises to a continuous automated program. The SOC Director has tasked your team with building an automated pipeline that executes adversary emulation tests on a recurring schedule, validates that detection rules fire correctly, identifies coverage gaps, and produces executive-ready metrics. This lab walks through the complete implementation using open-source tooling and synthetic data.


Exercise 1: Setting Up Atomic Red Team for Automated Testing

Objectives

  • Install and configure Atomic Red Team for unattended, scheduled execution
  • Create a threat-profile-driven test plan mapped to ATT&CK techniques
  • Build a PowerShell orchestration script that executes tests in sequence with logging
  • Configure safe execution guardrails to prevent unintended impact

1.1 Environment Preparation

Verify the target host is ready for automated testing. All commands below run on ws-target-01.acme.example (10.10.3.100).

# SYNTHETIC — Verify prerequisites on target workstation
# Host: ws-target-01.acme.example (10.10.3.100)

# Check PowerShell version (requires 5.1+)
$PSVersionTable.PSVersion

# Expected output:
# Major  Minor  Build  Revision
# -----  -----  -----  --------
# 5      1      22621  4391

# Verify Sysmon is running
Get-Service sysmon64 | Select-Object Status, DisplayName

# Expected output:
# Status DisplayName
# ------ -----------
# Running Sysmon64

# Verify Windows Event Forwarding
Get-WinEvent -ListLog "Microsoft-Windows-Sysmon/Operational" |
    Select-Object LogName, RecordCount, IsEnabled

# Expected output:
# LogName                                  RecordCount IsEnabled
# -------                                  ----------- ---------
# Microsoft-Windows-Sysmon/Operational          14523      True

1.2 Install Atomic Red Team with Automation Extensions

# SYNTHETIC — Install ART with automation configuration
# Host: ws-target-01.acme.example (10.10.3.100)

# Install core modules
Install-Module -Name invoke-atomicredteam -Scope CurrentUser -Force -AllowClobber
Install-Module -Name powershell-yaml -Scope CurrentUser -Force

# Install Atomic Red Team test library
IEX (IWR 'https://raw.githubusercontent.com/redcanaryco/invoke-atomicredteam/master/install-atomicredteam.ps1' -UseBasicParsing)
Install-AtomicRedTeam -getAtomics -Force

# Set default path
$PSDefaultParameterValues = @{
    "Invoke-AtomicTest:PathToAtomicsFolder" = "C:\AtomicRedTeam\atomics"
}

# Verify installation
Import-Module Invoke-AtomicRedTeam
Get-AtomicTechnique -AtomicTechniqueID T1059.001 | Select-Object -ExpandProperty atomic_tests |
    Select-Object name, auto_generated_guid | Format-Table -AutoSize

# Expected output (SYNTHETIC):
# name                                              auto_generated_guid
# ----                                              -------------------
# Mimikatz SYNTHETIC                                a1b2c3d4-e5f6-7890-abcd-ef1234567890
# PowerShell Downgrade Attack SYNTHETIC             b2c3d4e5-f6a7-8901-bcde-f12345678901
# Execute base64 encoded PowerShell SYNTHETIC       c3d4e5f6-a7b8-9012-cdef-123456789012

1.3 Define a Threat-Profile-Driven Test Plan

Map the fictional threat actor APT-SYNTHETIC-7 to specific ATT&CK techniques and Atomic Red Team tests.

# SYNTHETIC — Threat profile test plan
# File: C:\PurpleTeam\config\apt-synthetic-7-testplan.yaml
# Threat Actor: APT-SYNTHETIC-7 (fictional)
# Target Sector: Financial Services
# Last Updated: 2026-03-15

threat_actor: APT-SYNTHETIC-7
description: >
  Fictional APT group targeting financial services. Known for
  spear-phishing initial access, credential dumping, lateral
  movement via PsExec/WMI, and exfiltration over encrypted channels.

test_schedule:
  frequency: weekly
  day: Sunday
  time: "02:00"
  timezone: UTC
  notification_email: soc-purple@acme.example

phases:
  - phase: initial_access
    tactic: TA0001
    techniques:
      - id: T1566.001
        name: "Spear-Phishing Attachment"
        atomic_tests: [1]
        risk_level: low
        cleanup: true

      - id: T1059.001
        name: "PowerShell Execution"
        atomic_tests: [3]
        risk_level: medium
        cleanup: true

  - phase: credential_access
    tactic: TA0006
    techniques:
      - id: T1003.001
        name: "LSASS Memory Dump"
        atomic_tests: [1, 2]
        risk_level: high
        cleanup: true
        requires_elevation: true

      - id: T1558.003
        name: "Kerberoasting"
        atomic_tests: [1]
        risk_level: medium
        cleanup: true

  - phase: lateral_movement
    tactic: TA0008
    techniques:
      - id: T1021.002
        name: "SMB/Windows Admin Shares"
        atomic_tests: [1]
        risk_level: medium
        cleanup: true

      - id: T1047
        name: "WMI Execution"
        atomic_tests: [1, 2]
        risk_level: medium
        cleanup: true

  - phase: persistence
    tactic: TA0003
    techniques:
      - id: T1053.005
        name: "Scheduled Task"
        atomic_tests: [1, 4]
        risk_level: low
        cleanup: true

      - id: T1547.001
        name: "Registry Run Keys"
        atomic_tests: [1, 2, 3]
        risk_level: low
        cleanup: true

  - phase: defense_evasion
    tactic: TA0005
    techniques:
      - id: T1070.001
        name: "Clear Windows Event Logs"
        atomic_tests: [1]
        risk_level: high
        cleanup: true
        requires_elevation: true

      - id: T1027
        name: "Obfuscated Files"
        atomic_tests: [1, 2]
        risk_level: low
        cleanup: true

  - phase: collection_exfiltration
    tactic: TA0009
    techniques:
      - id: T1560.001
        name: "Archive via Utility"
        atomic_tests: [1]
        risk_level: low
        cleanup: true

      - id: T1048.003
        name: "Exfiltration Over Unencrypted Protocol"
        atomic_tests: [1]
        risk_level: medium
        cleanup: true

execution_guardrails:
  max_concurrent_tests: 1
  delay_between_tests_seconds: 120
  abort_on_failure: false
  require_cleanup: true
  excluded_hosts:
    - "10.10.0.0/24"   # Management network
    - "10.10.2.0/24"   # SOC infrastructure
  allowed_hours: "01:00-05:00"
  max_execution_time_minutes: 180

ATT&CK Mapping

Each technique in the test plan maps directly to a MITRE ATT&CK technique ID. This mapping is critical for automated coverage reporting in Exercise 5. See Chapter 41: Red Team Methodology for detailed technique descriptions.
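Because the correlator (Exercise 2) and the Navigator layer generator (Exercise 3) key on these IDs, a malformed ID silently drops a technique from coverage reporting. A minimal sanity check on the ID format (a hypothetical helper, not part of the lab's pipeline) can be run against the test plan:

```python
import re

# Hypothetical sanity check: every ID in the test plan must be a well-formed
# ATT&CK technique ID (T#### or T####.###), or downstream coverage
# reporting will silently skip it.
ATTACK_ID = re.compile(r"^T\d{4}(\.\d{3})?$")

# The twelve technique IDs from the APT-SYNTHETIC-7 test plan above
plan_ids = ["T1566.001", "T1059.001", "T1003.001", "T1558.003",
            "T1021.002", "T1047", "T1053.005", "T1547.001",
            "T1070.001", "T1027", "T1560.001", "T1048.003"]

bad = [t for t in plan_ids if not ATTACK_ID.match(t)]
print(f"{len(plan_ids)} techniques, {len(bad)} malformed")  # → 12 techniques, 0 malformed
```

Tactic IDs (TA0001, TA0006, ...) deliberately fail this pattern, which catches a common copy-paste mistake where a tactic ID lands in a technique field.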

1.4 Build the Automation Orchestrator

# SYNTHETIC — Purple Team Automation Orchestrator
# File: C:\PurpleTeam\scripts\Invoke-PurpleTeamAutomation.ps1
# Host: ws-target-01.acme.example (10.10.3.100)

<#
.SYNOPSIS
    Automated Purple Team test execution orchestrator.
.DESCRIPTION
    Reads a YAML test plan, executes Atomic Red Team tests in sequence,
    logs all results, and generates a JSON report for pipeline consumption.
.NOTES
    SYNTHETIC SCRIPT — Educational purposes only.
    Organization: ACME Security Corp (fictional)
#>

param(
    [Parameter(Mandatory = $true)]
    [string]$TestPlanPath,

    [Parameter(Mandatory = $false)]
    [string]$OutputDir = "C:\PurpleTeam\results",

    [Parameter(Mandatory = $false)]
    [int]$DelayBetweenTests = 120,

    [Parameter(Mandatory = $false)]
    [switch]$DryRun
)

# Import required modules
Import-Module Invoke-AtomicRedTeam
Import-Module powershell-yaml

# Initialize logging (ensure the output directory exists first, or the
# first Add-Content call will fail)
$timestamp = Get-Date -Format "yyyy-MM-dd_HH-mm-ss"
if (-not (Test-Path $OutputDir)) {
    New-Item -ItemType Directory -Path $OutputDir -Force | Out-Null
}
$logFile = Join-Path $OutputDir "purple-team-run_$timestamp.log"
$reportFile = Join-Path $OutputDir "purple-team-report_$timestamp.json"

function Write-PurpleLog {
    param([string]$Message, [string]$Level = "INFO")
    $entry = "[$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss')] [$Level] $Message"
    Add-Content -Path $logFile -Value $entry
    Write-Host $entry -ForegroundColor $(
        switch ($Level) {
            "ERROR" { "Red" }
            "WARN"  { "Yellow" }
            "SUCCESS" { "Green" }
            default { "White" }
        }
    )
}

# Load test plan
Write-PurpleLog "Loading test plan from $TestPlanPath"
$testPlan = Get-Content $TestPlanPath -Raw | ConvertFrom-Yaml

# Validate guardrails
$currentHour = (Get-Date).Hour
$allowedStart = [int]($testPlan.execution_guardrails.allowed_hours.Split("-")[0].Split(":")[0])
$allowedEnd   = [int]($testPlan.execution_guardrails.allowed_hours.Split("-")[1].Split(":")[0])

if ($currentHour -lt $allowedStart -or $currentHour -ge $allowedEnd) {
    if (-not $DryRun) {
        Write-PurpleLog "Outside allowed execution window ($($testPlan.execution_guardrails.allowed_hours)). Aborting." "ERROR"
        exit 1
    }
}

# Initialize results collection
$results = @{
    run_id        = [guid]::NewGuid().ToString()
    timestamp     = (Get-Date -Format "o")
    threat_actor  = $testPlan.threat_actor
    host          = $env:COMPUTERNAME
    host_ip       = "10.10.3.100"
    dry_run       = $DryRun.IsPresent
    phases        = @()
}

# Execute test phases
foreach ($phase in $testPlan.phases) {
    Write-PurpleLog "=== Starting Phase: $($phase.phase) (Tactic: $($phase.tactic)) ==="

    $phaseResult = @{
        phase      = $phase.phase
        tactic     = $phase.tactic
        techniques = @()
    }

    foreach ($technique in $phase.techniques) {
        Write-PurpleLog "Executing technique $($technique.id) - $($technique.name)"

        $techResult = @{
            technique_id   = $technique.id
            technique_name = $technique.name
            tests          = @()
            start_time     = (Get-Date -Format "o")
        }

        foreach ($testNum in $technique.atomic_tests) {
            if ($DryRun) {
                Write-PurpleLog "[DRY RUN] Would execute: $($technique.id) Test #$testNum" "WARN"
                $techResult.tests += @{
                    test_number = $testNum
                    status      = "DRY_RUN"
                    duration_s  = 0
                }
                continue
            }

            try {
                $testStart = Get-Date

                # Execute the Atomic test
                $output = Invoke-AtomicTest $technique.id `
                    -TestNumbers $testNum `
                    -TimeoutSeconds 300 `
                    -Confirm:$false 2>&1

                $duration = ((Get-Date) - $testStart).TotalSeconds

                Write-PurpleLog "Test $($technique.id)#$testNum completed in $([math]::Round($duration, 1))s" "SUCCESS"

                $techResult.tests += @{
                    test_number = $testNum
                    status      = "COMPLETED"
                    duration_s  = [math]::Round($duration, 1)
                    output      = ($output | Out-String).Trim()
                }

                # Run cleanup if required
                if ($technique.cleanup) {
                    Write-PurpleLog "Running cleanup for $($technique.id)#$testNum"
                    Invoke-AtomicTest $technique.id `
                        -TestNumbers $testNum `
                        -Cleanup `
                        -Confirm:$false 2>&1 | Out-Null
                }
            }
            catch {
                Write-PurpleLog "Test $($technique.id)#$testNum FAILED: $($_.Exception.Message)" "ERROR"

                $techResult.tests += @{
                    test_number = $testNum
                    status      = "FAILED"
                    error       = $_.Exception.Message
                }
            }

            # Delay between tests (allow logs to propagate)
            if ($DelayBetweenTests -gt 0) {
                Write-PurpleLog "Waiting $DelayBetweenTests seconds before next test..."
                Start-Sleep -Seconds $DelayBetweenTests
            }
        }

        $techResult.end_time = (Get-Date -Format "o")
        $phaseResult.techniques += $techResult
    }

    $results.phases += $phaseResult
}

# Write JSON report
$results | ConvertTo-Json -Depth 10 | Set-Content -Path $reportFile -Encoding UTF8
Write-PurpleLog "Report written to $reportFile" "SUCCESS"
Write-PurpleLog "=== Purple Team Automation Run Complete ==="
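
One limitation worth noting: the orchestrator's allowed-hours guardrail compares bare hours, so a maintenance window that wraps past midnight (for example 23:00-03:00) would always abort. A Python sketch of the same check with wraparound handling (illustrative only, not part of the orchestrator script):

```python
# Illustrative port of the orchestrator's allowed-hours guardrail,
# extended to handle windows that wrap past midnight.
def in_allowed_window(hour: int, allowed_hours: str) -> bool:
    # "01:00-05:00" -> start hour 1, end hour 5
    start, end = (int(p.split(":")[0]) for p in allowed_hours.split("-"))
    if start <= end:
        return start <= hour < end
    return hour >= start or hour < end  # window wraps past midnight

print(in_allowed_window(2, "01:00-05:00"))   # → True
print(in_allowed_window(6, "01:00-05:00"))   # → False
print(in_allowed_window(1, "23:00-03:00"))   # → True
```

For the lab's 01:00-05:00 window the simple comparison in the PowerShell script is sufficient; the wraparound branch only matters if you later widen the schedule.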

1.5 Schedule Automated Execution

# SYNTHETIC — Create a scheduled task for recurring purple team tests
# Host: ws-target-01.acme.example (10.10.3.100)

$action = New-ScheduledTaskAction `
    -Execute "powershell.exe" `
    -Argument "-ExecutionPolicy Bypass -File C:\PurpleTeam\scripts\Invoke-PurpleTeamAutomation.ps1 -TestPlanPath C:\PurpleTeam\config\apt-synthetic-7-testplan.yaml"

$trigger = New-ScheduledTaskTrigger `
    -Weekly -DaysOfWeek Sunday -At "02:00"

$settings = New-ScheduledTaskSettingsSet `
    -ExecutionTimeLimit (New-TimeSpan -Hours 4) `
    -StartWhenAvailable `
    -DontStopOnIdleEnd

$principal = New-ScheduledTaskPrincipal `
    -UserId "ACME\svc-purpleteam" `
    -LogonType Password `
    -RunLevel Highest

Register-ScheduledTask `
    -TaskName "PurpleTeam-AutomatedRun" `
    -Action $action `
    -Trigger $trigger `
    -Settings $settings `
    -Principal $principal `
    -Description "Automated Purple Team test execution — APT-SYNTHETIC-7 profile"

# Verify
Get-ScheduledTask -TaskName "PurpleTeam-AutomatedRun" |
    Select-Object TaskName, State, Description | Format-List

# Expected output (SYNTHETIC):
# TaskName    : PurpleTeam-AutomatedRun
# State       : Ready
# Description : Automated Purple Team test execution — APT-SYNTHETIC-7 profile

1.6 Validate a Dry Run

# SYNTHETIC — Execute a dry run to verify the orchestrator
# Host: ws-target-01.acme.example (10.10.3.100)

.\Invoke-PurpleTeamAutomation.ps1 `
    -TestPlanPath "C:\PurpleTeam\config\apt-synthetic-7-testplan.yaml" `
    -DryRun

# Expected output (SYNTHETIC):
# [2026-03-22 02:00:01] [INFO] Loading test plan from C:\PurpleTeam\config\apt-synthetic-7-testplan.yaml
# [2026-03-22 02:00:01] [INFO] === Starting Phase: initial_access (Tactic: TA0001) ===
# [2026-03-22 02:00:01] [INFO] Executing technique T1566.001 - Spear-Phishing Attachment
# [2026-03-22 02:00:01] [WARN] [DRY RUN] Would execute: T1566.001 Test #1
# [2026-03-22 02:00:01] [INFO] Executing technique T1059.001 - PowerShell Execution
# [2026-03-22 02:00:01] [WARN] [DRY RUN] Would execute: T1059.001 Test #3
# [2026-03-22 02:00:01] [INFO] === Starting Phase: credential_access (Tactic: TA0006) ===
# [2026-03-22 02:00:01] [INFO] Executing technique T1003.001 - LSASS Memory Dump
# [2026-03-22 02:00:01] [WARN] [DRY RUN] Would execute: T1003.001 Test #1
# [2026-03-22 02:00:01] [WARN] [DRY RUN] Would execute: T1003.001 Test #2
# ...
# [2026-03-22 02:00:01] [SUCCESS] Report written to C:\PurpleTeam\results\purple-team-report_2026-03-22_02-00-01.json
# [2026-03-22 02:00:01] [INFO] === Purple Team Automation Run Complete ===
Exercise 1 Checkpoint

At this point you should have:

  • Atomic Red Team installed and verified on the target host
  • A YAML test plan mapping APT-SYNTHETIC-7 to 12 ATT&CK techniques across 6 tactics
  • A PowerShell orchestrator script with guardrails (time windows, cleanup, logging)
  • A scheduled task configured for weekly execution
  • A successful dry run producing a JSON report

ATT&CK Techniques Covered in Test Plan:

Tactic             Technique                         ID
Initial Access     Spear-Phishing Attachment         T1566.001
Execution          PowerShell                        T1059.001
Credential Access  LSASS Memory                      T1003.001
Credential Access  Kerberoasting                     T1558.003
Lateral Movement   SMB/Admin Shares                  T1021.002
Lateral Movement   WMI                               T1047
Persistence        Scheduled Task                    T1053.005
Persistence        Registry Run Keys                 T1547.001
Defense Evasion    Clear Event Logs                  T1070.001
Defense Evasion    Obfuscated Files                  T1027
Collection         Archive via Utility               T1560.001
Exfiltration       Exfil Over Unencrypted Protocol   T1048.003

Exercise 2: Building Detection Validation Pipelines

Objectives

  • Parse Atomic Red Team execution logs and correlate with SIEM alerts
  • Build automated detection gap analysis from test results
  • Create a validation pipeline that reports pass/fail status for each technique
  • Implement alert-to-test correlation using timestamps and technique IDs

2.1 Understanding the Validation Pipeline Architecture

┌──────────────────────────────────────────────────────────────────────┐
│                    Detection Validation Pipeline                     │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  ┌─────────────┐    ┌──────────────┐    ┌────────────────────┐      │
│  │ Atomic Red  │    │  SIEM Alert  │    │   Correlation      │      │
│  │ Team JSON   │───▶│  Export      │───▶│   Engine           │      │
│  │ Report      │    │  (KQL/SPL)   │    │                    │      │
│  └─────────────┘    └──────────────┘    └────────┬───────────┘      │
│                                                   │                  │
│                                    ┌──────────────┴──────────────┐   │
│                                    │                             │   │
│                              ┌─────▼─────┐              ┌───────▼──┐│
│                              │ DETECTED  │              │   GAP    ││
│                              │ (Alert    │              │ (No Alert││
│                              │  Fired)   │              │  Found)  ││
│                              └─────┬─────┘              └───────┬──┘│
│                                    │                            │   │
│                              ┌─────▼─────────────────────▼──────┐   │
│                              │     Coverage Report (JSON)       │   │
│                              │     + ATT&CK Navigator Layer     │   │
│                              └──────────────────────────────────┘   │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘

2.2 SIEM Alert Export Queries

First, export alerts generated during the test window from your SIEM. Below are queries for both KQL (Microsoft Sentinel) and SPL (Splunk).

KQL — Microsoft Sentinel: Export Alerts During Test Window

// SYNTHETIC — Export all alerts triggered during the purple team test window
// SIEM: siem-01.acme.example (10.10.2.10)
// Timeframe: 2026-03-22 02:00 to 06:00 UTC

SecurityAlert
| where TimeGenerated between (datetime(2026-03-22T02:00:00Z) .. datetime(2026-03-22T06:00:00Z))
| where CompromisedEntity has "ws-target-01" or CompromisedEntity has "10.10.3.100"
| extend AttackTechniques = parse_json(ExtendedProperties)["ATT&CK Techniques"]
| project
    TimeGenerated,
    AlertName,
    AlertSeverity,
    AttackTechniques,
    CompromisedEntity,
    Description,
    ProviderName,
    Status
| sort by TimeGenerated asc

Expected output (SYNTHETIC):

TimeGenerated            AlertName                              Severity  ATT&CK Techniques  CompromisedEntity
2026-03-22T02:05:12Z     Suspicious PowerShell Execution        High      T1059.001           ws-target-01
2026-03-22T02:08:45Z     LSASS Memory Access Detected           Critical  T1003.001           ws-target-01
2026-03-22T02:12:33Z     Scheduled Task Created                 Medium    T1053.005           ws-target-01
2026-03-22T02:15:01Z     Registry Run Key Modified              Medium    T1547.001           ws-target-01
2026-03-22T02:18:20Z     Windows Event Log Cleared              High      T1070.001           ws-target-01
2026-03-22T02:22:44Z     Archive File Created                   Low       T1560.001           ws-target-01

Missing Alerts

Notice that T1566.001 (Spear-Phishing), T1558.003 (Kerberoasting), T1021.002 (SMB), T1047 (WMI), T1027 (Obfuscation), and T1048.003 (Exfiltration) did not generate alerts. These are detection gaps that need engineering attention.

SPL — Splunk: Export Alerts During Test Window

index=notable earliest="03/22/2026:02:00:00" latest="03/22/2026:06:00:00"
    (dest="ws-target-01" OR dest_ip="10.10.3.100")
| eval attack_technique=mvindex(split('annotations.mitre_attack.mitre_technique_id', ","), 0)
| table _time, rule_name, urgency, attack_technique, dest, description, source
| sort _time

Expected output (SYNTHETIC):

_time                   rule_name                             urgency   attack_technique  dest
2026-03-22T02:05:12Z    Suspicious PowerShell Execution       high      T1059.001         ws-target-01
2026-03-22T02:08:45Z    LSASS Memory Access Detected          critical  T1003.001         ws-target-01
2026-03-22T02:12:33Z    Scheduled Task Created via CLI        medium    T1053.005         ws-target-01
2026-03-22T02:15:01Z    Registry Autostart Modification       medium    T1547.001         ws-target-01
2026-03-22T02:18:20Z    Event Log Cleared                     high      T1070.001         ws-target-01
2026-03-22T02:22:44Z    Suspicious Archive Creation           low       T1560.001         ws-target-01
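
The two SIEMs export different column names (AlertName vs rule_name, TimeGenerated vs _time, and so on). The correlator in the next section handles both via per-field fallbacks, but a normalization pass can also map either export onto one schema before correlation. A sketch (the FIELD_MAP helper is a hypothetical addition, not part of correlate_results.py):

```python
import csv
import io

# Hypothetical normalizer: maps Sentinel and Splunk export columns onto a
# single schema so downstream tooling only handles one set of field names.
FIELD_MAP = {
    "AlertName": "alert_name",        "rule_name": "alert_name",
    "AlertSeverity": "severity",      "urgency": "severity",
    "AttackTechniques": "technique_id", "attack_technique": "technique_id",
    "TimeGenerated": "alert_time",    "_time": "alert_time",
}

def normalize(csv_text: str) -> list[dict]:
    """Rename known SIEM columns; pass unknown columns through unchanged."""
    return [{FIELD_MAP.get(k, k): v for k, v in row.items()}
            for row in csv.DictReader(io.StringIO(csv_text))]

# Example with a one-row Splunk-style export (synthetic data from above)
splunk_export = ("_time,rule_name,urgency,attack_technique\n"
                 "2026-03-22T02:05:12Z,Suspicious PowerShell Execution,high,T1059.001\n")
print(normalize(splunk_export)[0]["technique_id"])  # → T1059.001
```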

2.3 Build the Correlation Engine

#!/usr/bin/env python3
"""
SYNTHETIC — Purple Team Detection Validation Correlator
File: C:\PurpleTeam\scripts\correlate_results.py
Organization: ACME Security Corp (fictional)

Correlates Atomic Red Team execution reports with SIEM alert exports
to identify detection gaps.
"""

import json
import csv
import sys
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional


def load_art_report(report_path: str) -> dict:
    """Load Atomic Red Team JSON execution report."""
    with open(report_path, "r") as f:
        return json.load(f)


def load_siem_alerts(alerts_path: str) -> list[dict]:
    """Load SIEM alert export (CSV format)."""
    alerts = []
    with open(alerts_path, "r") as f:
        reader = csv.DictReader(f)
        for row in reader:
            alerts.append(row)
    return alerts


def correlate(art_report: dict, siem_alerts: list[dict],
              time_window_minutes: int = 10) -> dict:
    """
    Correlate ART test executions with SIEM alerts.

    For each technique executed, check if a corresponding alert
    was generated within the time window.
    """
    results = {
        "run_id": art_report["run_id"],
        "timestamp": art_report["timestamp"],
        "threat_actor": art_report["threat_actor"],
        "host": art_report["host"],
        "total_techniques": 0,
        "detected": 0,
        "gaps": 0,
        "coverage_pct": 0.0,
        "technique_results": []
    }

    for phase in art_report["phases"]:
        for technique in phase["techniques"]:
            results["total_techniques"] += 1
            tech_id = technique["technique_id"]
            tech_name = technique["technique_name"]
            exec_time = technique.get("start_time", "")

            # Search for matching alert
            matching_alert = find_matching_alert(
                tech_id, exec_time, siem_alerts, time_window_minutes
            )

            if matching_alert:
                results["detected"] += 1
                status = "DETECTED"
                alert_name = matching_alert.get("AlertName",
                              matching_alert.get("rule_name", "Unknown"))
                alert_severity = matching_alert.get("AlertSeverity",
                                  matching_alert.get("urgency", "Unknown"))
                time_to_detect = calculate_ttd(exec_time,
                                  matching_alert.get("TimeGenerated",
                                  matching_alert.get("_time", "")))
            else:
                results["gaps"] += 1
                status = "GAP"
                alert_name = None
                alert_severity = None
                time_to_detect = None

            results["technique_results"].append({
                "technique_id": tech_id,
                "technique_name": tech_name,
                "tactic": phase["tactic"],
                "phase": phase["phase"],
                "status": status,
                "alert_name": alert_name,
                "alert_severity": alert_severity,
                "time_to_detect_seconds": time_to_detect,
                "tests_executed": len(technique.get("tests", []))
            })

    if results["total_techniques"] > 0:
        results["coverage_pct"] = round(
            (results["detected"] / results["total_techniques"]) * 100, 1
        )

    return results


def find_matching_alert(technique_id: str, exec_time: str,
                         alerts: list[dict],
                         window_minutes: int) -> Optional[dict]:
    """Find a SIEM alert matching the technique within the time window."""
    for alert in alerts:
        alert_techniques = alert.get("AttackTechniques",
                            alert.get("attack_technique", ""))
        if technique_id not in str(alert_techniques):
            continue
        # Enforce the time window when both timestamps parse; fall back
        # to a technique-ID-only match when either timestamp is missing.
        alert_time = alert.get("TimeGenerated", alert.get("_time", ""))
        delta = calculate_ttd(exec_time, alert_time)
        if delta is None or abs(delta) <= window_minutes * 60:
            return alert
    return None


def calculate_ttd(exec_time: str, alert_time: str) -> Optional[float]:
    """Calculate time-to-detect in seconds."""
    try:
        exec_dt = datetime.fromisoformat(exec_time.replace("Z", "+00:00"))
        alert_dt = datetime.fromisoformat(alert_time.replace("Z", "+00:00"))
        return round((alert_dt - exec_dt).total_seconds(), 1)
    except (ValueError, AttributeError):
        return None


def generate_report(correlation: dict, output_path: str) -> None:
    """Generate the detection validation report."""
    with open(output_path, "w") as f:
        json.dump(correlation, f, indent=2)

    print(f"\n{'='*70}")
    print(f"  PURPLE TEAM DETECTION VALIDATION REPORT")
    print(f"  Run ID: {correlation['run_id']}")
    print(f"  Threat Actor Profile: {correlation['threat_actor']}")
    print(f"  Target Host: {correlation['host']}")
    print(f"{'='*70}")
    print(f"\n  Total Techniques Tested:  {correlation['total_techniques']}")
    print(f"  Detected:                 {correlation['detected']}")
    print(f"  Detection Gaps:           {correlation['gaps']}")
    print(f"  Coverage:                 {correlation['coverage_pct']}%")
    print(f"\n{'─'*70}")
    print(f"  {'Technique':<14} {'Name':<35} {'Status':<10} {'TTD (s)'}")
    print(f"  {'─'*14} {'─'*35} {'─'*10} {'─'*10}")

    for t in correlation["technique_results"]:
        ttd = f"{t['time_to_detect_seconds']:.1f}" if t["time_to_detect_seconds"] is not None else "N/A"
        status_label = "DETECTED" if t["status"] == "DETECTED" else "** GAP **"
        print(f"  {t['technique_id']:<14} {t['technique_name']:<35} {status_label:<10} {ttd}")

    print(f"\n{'='*70}")


if __name__ == "__main__":
    art_report = load_art_report(sys.argv[1])
    siem_alerts = load_siem_alerts(sys.argv[2])
    output_path = sys.argv[3] if len(sys.argv) > 3 else "validation_report.json"

    correlation = correlate(art_report, siem_alerts)
    generate_report(correlation, output_path)

2.4 Run the Correlation and Analyze Results

# SYNTHETIC — Execute the correlator
# Host: red-team-01.acme.example (10.10.1.50)
# The ART report (from ws-target-01) and the SIEM alert export (from
# siem-01) are copied to the working directory on this host first.

python3 correlate_results.py \
    purple-team-report_2026-03-22_02-00-01.json \
    siem-alerts-export_2026-03-22.csv \
    validation_report_2026-03-22.json

Expected output (SYNTHETIC):

======================================================================
  PURPLE TEAM DETECTION VALIDATION REPORT
  Run ID: f47ac10b-58cc-4372-a567-0e02b2c3d479
  Threat Actor Profile: APT-SYNTHETIC-7
  Target Host: WS-TARGET-01
======================================================================

  Total Techniques Tested:  12
  Detected:                 6
  Detection Gaps:           6
  Coverage:                 50.0%

  ──────────────────────────────────────────────────────────────────────
  Technique      Name                                Status     TTD (s)
  ────────────── ─────────────────────────────────── ────────── ──────────
  T1566.001      Spear-Phishing Attachment           ** GAP **  N/A
  T1059.001      PowerShell Execution                DETECTED   12.3
  T1003.001      LSASS Memory Dump                   DETECTED   8.7
  T1558.003      Kerberoasting                       ** GAP **  N/A
  T1021.002      SMB/Windows Admin Shares            ** GAP **  N/A
  T1047          WMI Execution                       ** GAP **  N/A
  T1053.005      Scheduled Task                      DETECTED   33.1
  T1547.001      Registry Run Keys                   DETECTED   15.2
  T1070.001      Clear Windows Event Logs            DETECTED   18.4
  T1027          Obfuscated Files                    ** GAP **  N/A
  T1560.001      Archive via Utility                 DETECTED   44.6
  T1048.003      Exfil Over Unencrypted Protocol     ** GAP **  N/A

======================================================================

50% Coverage — Critical Gaps Identified

The validation reveals that only 6 of 12 tested techniques were detected. The 6 gaps represent significant blind spots in ACME Security Corp's detection posture against the APT-SYNTHETIC-7 threat profile. Exercise 4 will address these gaps through new detection rule development.

2.5 Detection Gap Analysis

Analyze each gap to understand why the detection failed and what data sources are needed.

  Gap Technique                 Root Cause                                            Required Data Source                                             Priority
  ────────────────────────────  ────────────────────────────────────────────────────  ───────────────────────────────────────────────────────────────  ────────
  T1566.001 — Spear-Phishing    No email gateway telemetry ingested in SIEM           Email gateway logs (e.g., Exchange Message Tracking)             High
  T1558.003 — Kerberoasting     No rule for TGS requests with RC4 encryption          Windows Security Event 4769 (filtered on RC4)                    Critical
  T1021.002 — SMB/Admin Shares  SMB lateral movement not monitored                    Windows Security Event 5140/5145 + Sysmon Event 3                High
  T1047 — WMI                   No WMI execution detection rule                       Sysmon Event 1 (WmiPrvSE.exe child processes) + Event 19/20/21   High
  T1027 — Obfuscated Files      No content-inspection rule for encoded payloads       PowerShell ScriptBlock Logging (4104) with entropy analysis      Medium
  T1048.003 — Exfiltration      No rule for large outbound data transfers over HTTP   Network flow data / proxy logs                                   Critical
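The gap table can be turned into an ordered engineering backlog mechanically. A minimal Python sketch (the list literals simply mirror the table above; the structure and field names are illustrative, not part of the pipeline):

```python
# SYNTHETIC — order detection gaps into an engineering backlog by priority.
PRIORITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

gaps = [
    {"technique": "T1566.001", "name": "Spear-Phishing", "priority": "High"},
    {"technique": "T1558.003", "name": "Kerberoasting", "priority": "Critical"},
    {"technique": "T1021.002", "name": "SMB/Admin Shares", "priority": "High"},
    {"technique": "T1047", "name": "WMI", "priority": "High"},
    {"technique": "T1027", "name": "Obfuscated Files", "priority": "Medium"},
    {"technique": "T1048.003", "name": "Exfiltration", "priority": "Critical"},
]

# sorted() is stable, so equal-priority gaps keep their table order
backlog = sorted(gaps, key=lambda g: PRIORITY_ORDER[g["priority"]])
for item in backlog:
    print(f"{item['priority']:<9} {item['technique']:<11} {item['name']}")
```

Kerberoasting and HTTP exfiltration surface first; that is the order Exercise 4 follows.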
Exercise 2 Checkpoint

At this point you should have:

  • KQL and SPL queries to export SIEM alerts for a test window
  • A Python correlation engine that maps ART results to SIEM alerts
  • A validation report showing 50% detection coverage (6/12 techniques)
  • A gap analysis table with root causes and required data sources
  • Understanding of why time-to-detect (TTD) varies across techniques

Exercise 3: Automated Adversary Emulation with MITRE ATT&CK Navigator

Objectives

  • Generate ATT&CK Navigator layers from purple team validation results
  • Visualize detection coverage using color-coded heatmaps
  • Compare coverage across multiple test runs to track improvement
  • Create threat-actor-specific overlay layers for executive reporting

3.1 Understanding ATT&CK Navigator Layers

The ATT&CK Navigator uses JSON layer files to render interactive heatmaps of technique coverage. Each layer contains technique annotations with scores, colors, and comments.

{
    "name": "ACME Security Corp — APT-SYNTHETIC-7 Detection Coverage",
    "versions": {
        "attack": "14",
        "navigator": "4.9.1",
        "layer": "4.5"
    },
    "domain": "enterprise-attack",
    "description": "Detection coverage for APT-SYNTHETIC-7 threat profile. Generated by Purple Team Automation Pipeline. SYNTHETIC data — ACME Security Corp (fictional).",
    "sorting": 3,
    "layout": {
        "layout": "side",
        "aggregateFunction": "average",
        "showID": true,
        "showName": true,
        "showAggregateScores": true,
        "countUnscored": false
    },
    "hideDisabled": false,
    "gradient": {
        "colors": ["#ff6666", "#ffe766", "#8ec843"],
        "minValue": 0,
        "maxValue": 100
    },
    "legendItems": [
        { "label": "Detected (alert fired)", "color": "#8ec843" },
        { "label": "Partial (logged, no alert)", "color": "#ffe766" },
        { "label": "Gap (not detected)", "color": "#ff6666" },
        { "label": "Not tested", "color": "#ffffff" }
    ]
}

3.2 Generate Navigator Layer from Validation Results

#!/usr/bin/env python3
"""
SYNTHETIC — ATT&CK Navigator Layer Generator
File: C:\PurpleTeam\scripts\generate_navigator_layer.py
Organization: ACME Security Corp (fictional)

Converts purple team validation reports into ATT&CK Navigator layer files.
"""

import json
import sys
from datetime import datetime


def generate_layer(validation_report: dict) -> dict:
    """Generate an ATT&CK Navigator layer from validation results."""

    techniques = []
    for result in validation_report["technique_results"]:
        tech_id = result["technique_id"]

        # Determine score and color based on detection status
        if result["status"] == "DETECTED":
            score = 100
            color = "#8ec843"  # Green
            comment = (
                f"DETECTED — Alert: {result.get('alert_name', 'N/A')} | "
                f"Severity: {result.get('alert_severity', 'N/A')} | "
                f"TTD: {result.get('time_to_detect_seconds', 'N/A')}s"
            )
        else:
            score = 0
            color = "#ff6666"  # Red
            comment = (
                f"GAP — No detection alert fired during test. "
                f"Technique: {result['technique_name']} | "
                f"Phase: {result.get('phase', 'N/A')}"
            )

        # Handle sub-techniques (e.g., T1059.001 -> T1059 with tactic)
        if "." in tech_id:
            base_id = tech_id.rsplit(".", 1)[0]
            techniques.append({
                "techniqueID": base_id,
                "tactic": map_tactic(result.get("tactic", "")),
                "score": score,
                "color": color,
                "comment": comment,
                "enabled": True,
                "showSubtechniques": True,
                "metadata": [
                    {"name": "run_id", "value": validation_report["run_id"]},
                    {"name": "test_date", "value": validation_report["timestamp"]},
                    {"name": "tests_executed", "value": str(result.get("tests_executed", 0))}
                ]
            })
            # Also add the sub-technique entry
            techniques.append({
                "techniqueID": tech_id,
                "tactic": map_tactic(result.get("tactic", "")),
                "score": score,
                "color": color,
                "comment": comment,
                "enabled": True,
                "metadata": [
                    {"name": "run_id", "value": validation_report["run_id"]},
                    {"name": "test_date", "value": validation_report["timestamp"]}
                ]
            })
        else:
            techniques.append({
                "techniqueID": tech_id,
                "tactic": map_tactic(result.get("tactic", "")),
                "score": score,
                "color": color,
                "comment": comment,
                "enabled": True,
                "metadata": [
                    {"name": "run_id", "value": validation_report["run_id"]},
                    {"name": "test_date", "value": validation_report["timestamp"]},
                    {"name": "tests_executed", "value": str(result.get("tests_executed", 0))}
                ]
            })

    layer = {
        "name": f"ACME Security Corp — {validation_report['threat_actor']} Coverage ({datetime.now().strftime('%Y-%m-%d')})",
        "versions": {
            "attack": "14",
            "navigator": "4.9.1",
            "layer": "4.5"
        },
        "domain": "enterprise-attack",
        "description": (
            f"Detection coverage for {validation_report['threat_actor']} threat profile. "
            f"Coverage: {validation_report['coverage_pct']}% "
            f"({validation_report['detected']}/{validation_report['total_techniques']} techniques). "
            f"Generated: {validation_report['timestamp']}. "
            f"SYNTHETIC — ACME Security Corp (fictional)."
        ),
        "sorting": 3,
        "layout": {
            "layout": "side",
            "aggregateFunction": "average",
            "showID": True,
            "showName": True,
            "showAggregateScores": True,
            "countUnscored": False
        },
        "hideDisabled": False,
        "gradient": {
            "colors": ["#ff6666", "#ffe766", "#8ec843"],
            "minValue": 0,
            "maxValue": 100
        },
        "legendItems": [
            {"label": "Detected (alert fired)", "color": "#8ec843"},
            {"label": "Partial (logged, no alert)", "color": "#ffe766"},
            {"label": "Gap (not detected)", "color": "#ff6666"},
            {"label": "Not tested", "color": "#ffffff"}
        ],
        "techniques": techniques
    }

    return layer


def map_tactic(tactic_id: str) -> str:
    """Map ATT&CK tactic ID to Navigator tactic name."""
    mapping = {
        "TA0001": "initial-access",
        "TA0002": "execution",
        "TA0003": "persistence",
        "TA0004": "privilege-escalation",
        "TA0005": "defense-evasion",
        "TA0006": "credential-access",
        "TA0007": "discovery",
        "TA0008": "lateral-movement",
        "TA0009": "collection",
        "TA0010": "exfiltration",
        "TA0011": "command-and-control",
        "TA0040": "impact",
    }
    return mapping.get(tactic_id, "")


if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("Usage: generate_navigator_layer.py <validation_report.json> [output_layer.json]")

    with open(sys.argv[1]) as f:
        validation = json.load(f)

    layer = generate_layer(validation)
    output = sys.argv[2] if len(sys.argv) > 2 else "navigator_layer.json"

    with open(output, "w") as f:
        json.dump(layer, f, indent=2)

    print(f"Navigator layer written to {output}")
    print(f"Coverage: {validation['coverage_pct']}% "
          f"({validation['detected']}/{validation['total_techniques']})")
    print(f"Load this file in ATT&CK Navigator: https://mitre-attack.github.io/attack-navigator/")

3.3 Generate and Load the Layer

# SYNTHETIC — Generate the Navigator layer
# Host: red-team-01.acme.example (10.10.1.50)

python3 C:\PurpleTeam\scripts\generate_navigator_layer.py ^
    C:\PurpleTeam\results\validation_report_2026-03-22.json ^
    C:\PurpleTeam\results\navigator_apt-synthetic-7_2026-03-22.json

# Expected output (SYNTHETIC):
# Navigator layer written to C:\PurpleTeam\results\navigator_apt-synthetic-7_2026-03-22.json
# Coverage: 50.0% (6/12)
# Load this file in ATT&CK Navigator: https://mitre-attack.github.io/attack-navigator/

3.4 Interpret the Navigator Visualization

When loaded into ATT&CK Navigator, the layer produces the following coverage view (described textually for this lab):

ATT&CK Enterprise Matrix — APT-SYNTHETIC-7 Coverage (2026-03-22)
═══════════════════════════════════════════════════════════════════

INITIAL ACCESS           EXECUTION              PERSISTENCE
┌──────────────────┐    ┌──────────────────┐    ┌──────────────────┐
│ T1566.001        │    │ T1059.001        │    │ T1053.005        │
│ Phishing         │    │ PowerShell       │    │ Sched. Task      │
│ ██ GAP (RED)     │    │ ██ DETECTED (GRN)│    │ ██ DETECTED (GRN)│
└──────────────────┘    └──────────────────┘    ├──────────────────┤
                                                │ T1547.001        │
CREDENTIAL ACCESS        LATERAL MOVEMENT       │ Reg. Run Keys    │
┌──────────────────┐    ┌──────────────────┐    │ ██ DETECTED (GRN)│
│ T1003.001        │    │ T1021.002        │    └──────────────────┘
│ LSASS Memory     │    │ SMB/Admin Shares │
│ ██ DETECTED (GRN)│    │ ██ GAP (RED)     │    DEFENSE EVASION
├──────────────────┤    ├──────────────────┤    ┌──────────────────┐
│ T1558.003        │    │ T1047            │    │ T1070.001        │
│ Kerberoasting    │    │ WMI              │    │ Clear Logs       │
│ ██ GAP (RED)     │    │ ██ GAP (RED)     │    │ ██ DETECTED (GRN)│
└──────────────────┘    └──────────────────┘    ├──────────────────┤
                                                │ T1027            │
COLLECTION               EXFILTRATION           │ Obfuscation      │
┌──────────────────┐    ┌──────────────────┐    │ ██ GAP (RED)     │
│ T1560.001        │    │ T1048.003        │    └──────────────────┘
│ Archive Utility  │    │ Exfil HTTP       │
│ ██ DETECTED (GRN)│    │ ██ GAP (RED)     │
└──────────────────┘    └──────────────────┘

Legend: ██ GREEN = Detected | ██ RED = Gap | ██ WHITE = Not Tested
Coverage: 50.0% (6/12 techniques detected)
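A single coverage percentage can hide clustered weakness; in this run, both lateral movement techniques are gaps. A small sketch that rolls the per-technique results up to per-tactic coverage (the tuples mirror the matrix above; in practice the validation report from Exercise 2 would feed this):

```python
from collections import defaultdict

# SYNTHETIC — (technique, tactic, detected) tuples mirroring the matrix above.
results = [
    ("T1566.001", "initial-access", False),
    ("T1059.001", "execution", True),
    ("T1003.001", "credential-access", True),
    ("T1558.003", "credential-access", False),
    ("T1021.002", "lateral-movement", False),
    ("T1047", "lateral-movement", False),
    ("T1053.005", "persistence", True),
    ("T1547.001", "persistence", True),
    ("T1070.001", "defense-evasion", True),
    ("T1027", "defense-evasion", False),
    ("T1560.001", "collection", True),
    ("T1048.003", "exfiltration", False),
]

tally = defaultdict(lambda: [0, 0])   # tactic -> [detected, tested]
for _tid, tactic, detected in results:
    tally[tactic][0] += int(detected)
    tally[tactic][1] += 1

for tactic, (hit, total) in sorted(tally.items()):
    print(f"{tactic:<20} {hit}/{total} ({100 * hit / total:.0f}%)")
```

The 0/2 lateral-movement score is a stronger prioritization signal than the overall 50%.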

3.5 Compare Coverage Across Runs (Layer Overlay)

Create a comparison layer that shows improvement over multiple test runs.

# SYNTHETIC — Multi-run comparison layer generator (excerpt)
# File: C:\PurpleTeam\scripts\compare_runs.py

def generate_comparison_layer(run_reports: list[dict]) -> dict:
    """
    Generate a comparison layer showing coverage trends.

    Color coding:
    - Dark green (#2d6a2e): Detected in ALL runs (stable detection)
    - Light green (#8ec843): Detected in latest run (new detection)
    - Yellow (#ffe766): Detected in past but NOT latest (regression)
    - Red (#ff6666): Never detected (persistent gap)
    """
    technique_history = {}

    for i, report in enumerate(run_reports):
        for result in report["technique_results"]:
            tid = result["technique_id"]
            if tid not in technique_history:
                technique_history[tid] = {
                    "name": result["technique_name"],
                    "tactic": result.get("tactic", ""),
                    "detections": []
                }
            technique_history[tid]["detections"].append(
                result["status"] == "DETECTED"
            )

    # Assign colors based on history
    techniques = []
    for tid, history in technique_history.items():
        all_detected = all(history["detections"])
        latest_detected = history["detections"][-1] if history["detections"] else False
        ever_detected = any(history["detections"])

        if all_detected:
            color = "#2d6a2e"
            score = 100
            label = "Stable Detection"
        elif latest_detected:
            color = "#8ec843"
            score = 75
            label = "New Detection"
        elif ever_detected:
            color = "#ffe766"
            score = 25
            label = "REGRESSION"
        else:
            color = "#ff6666"
            score = 0
            label = "Persistent Gap"

        techniques.append({
            "techniqueID": tid,
            "tactic": map_tactic(history["tactic"]),
            "score": score,
            "color": color,
            "comment": f"{label}{sum(history['detections'])}/{len(history['detections'])} runs detected"
        })

    # ... (layer construction same as 3.2)
    return {"techniques": techniques}

SYNTHETIC comparison output after three runs:

Run Comparison — APT-SYNTHETIC-7 Detection Trends
══════════════════════════════════════════════════

Run Date        Coverage   Detected   Gaps
2026-03-08      41.7%      5/12       7
2026-03-15      50.0%      6/12       6      (+1 new: T1560.001)
2026-03-22      66.7%      8/12       4      (+2 new: T1558.003, T1047)

Trend: ▲ Improving — 25 percentage point gain over 3 weeks
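The trend line is simple percentage-point arithmetic over the runs; a one-screen sketch (values copied from the comparison table above):

```python
# SYNTHETIC — coverage trend arithmetic for the three runs above.
runs = [
    ("2026-03-08", 5, 12),
    ("2026-03-15", 6, 12),
    ("2026-03-22", 8, 12),
]

coverages = [100 * detected / total for _, detected, total in runs]
deltas = [round(b - a, 1) for a, b in zip(coverages, coverages[1:])]
total_gain = round(coverages[-1] - coverages[0], 1)

for (date, d, t), cov in zip(runs, coverages):
    print(f"{date}  {cov:5.1f}%  ({d}/{t})")
print(f"Run-over-run deltas: {deltas} pts, total gain: {total_gain} pts")
```

Note the gain is expressed in percentage points (41.7% to 66.7%), not as a relative percentage increase.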
Exercise 3 Checkpoint

At this point you should have:

  • A Python script that generates ATT&CK Navigator layers from validation reports
  • A generated Navigator JSON layer file showing 50% coverage (green/red)
  • Understanding of how to load and interpret Navigator layers
  • A comparison layer generator for tracking coverage trends over time
  • A visual representation of the gap between "what the adversary does" and "what you detect"

Exercise 4: Continuous Detection Engineering Workflow

Objectives

  • Write Sigma rules for the 6 detection gaps identified in Exercise 2
  • Build an automated detection rule testing pipeline using Sigma and SIEM queries
  • Implement version-controlled detection-as-code with CI/CD validation
  • Re-run the validation pipeline to measure coverage improvement

4.1 Detection-as-Code Repository Structure

C:\PurpleTeam\detection-rules\
├── sigma/
│   ├── initial_access/
│   │   └── sigma-T1566.001-phishing-attachment.yml
│   ├── credential_access/
│   │   └── sigma-T1558.003-kerberoasting.yml
│   ├── lateral_movement/
│   │   ├── sigma-T1021.002-smb-admin-shares.yml
│   │   └── sigma-T1047-wmi-execution.yml
│   ├── defense_evasion/
│   │   └── sigma-T1027-obfuscated-files.yml
│   └── exfiltration/
│       └── sigma-T1048.003-exfil-http.yml
├── kql/
│   └── (auto-generated from Sigma)
├── spl/
│   └── (auto-generated from Sigma)
├── tests/
│   ├── test-T1558.003.yml
│   └── ...
└── pipeline.yml
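Each fixture in tests/ pairs a rule with events it must match and events it must ignore. The schema below is illustrative (this lab does not define a fixture standard); field names follow Windows Security Event 4769:

```yaml
# SYNTHETIC — illustrative test fixture for the Kerberoasting rule.
# File: tests/test-T1558.003.yml (fixture schema is an assumption, not a standard)
rule: sigma-T1558.003-kerberoasting.yml
true_positives:              # the rule MUST match these
  - EventID: 4769
    TicketEncryptionType: '0x17'
    Status: '0x0'
    ServiceName: 'svc-sql-synthetic'
true_negatives:              # the rule MUST NOT match these
  - EventID: 4769
    TicketEncryptionType: '0x12'    # AES256 — benign
    Status: '0x0'
    ServiceName: 'svc-web-synthetic'
  - EventID: 4769
    TicketEncryptionType: '0x17'
    Status: '0x0'
    ServiceName: 'WS-TARGET-01$'    # machine account — excluded by filter
```

Keeping true negatives alongside true positives is what lets the pipeline catch over-broad rules, not just broken ones.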

4.2 Write Sigma Rules for Detection Gaps

Rule 1: T1558.003 — Kerberoasting Detection

# SYNTHETIC — Sigma rule for Kerberoasting detection
# File: C:\PurpleTeam\detection-rules\sigma\credential_access\sigma-T1558.003-kerberoasting.yml
# Organization: ACME Security Corp (fictional)

title: Kerberoasting — TGS Request with RC4 Encryption
id: a1b2c3d4-0001-4000-8000-000000000001
status: experimental
description: |
    Detects Kerberos TGS ticket requests using RC4 encryption (etype 0x17),
    which is a common indicator of Kerberoasting attacks. Legitimate services
    typically use AES encryption. SYNTHETIC rule — ACME Security Corp.
references:
    - https://attack.mitre.org/techniques/T1558/003/
    - https://nexus-secops.pages.dev/chapters/ch41-red-team-methodology/
author: ACME Purple Team (fictional)
date: 2026-03-22
modified: 2026-03-22
tags:
    - attack.credential_access
    - attack.t1558.003
logsource:
    product: windows
    service: security
detection:
    selection:
        EventID: 4769
        TicketEncryptionType: '0x17'
        Status: '0x0'
    filter_machine_accounts:
        ServiceName|endswith: '$'
    filter_krbtgt:
        ServiceName: 'krbtgt'
    condition: selection and not filter_machine_accounts and not filter_krbtgt
falsepositives:
    - Legacy applications that require RC4 Kerberos encryption
    - Misconfigured service accounts
level: high

Converted KQL (Microsoft Sentinel):

// SYNTHETIC — KQL conversion of Sigma rule for T1558.003
// SIEM: siem-01.acme.example (10.10.2.10)
// ATT&CK: T1558.003 — Kerberoasting

SecurityEvent
| where EventID == 4769
| where TicketEncryptionType == "0x17"
| where Status == "0x0"
| where ServiceName !endswith "$"
| where ServiceName != "krbtgt"
| project
    TimeGenerated,
    Computer,
    Account,
    ServiceName,
    TicketEncryptionType,
    IpAddress,
    LogonGuid
| extend AlertName = "Kerberoasting — TGS Request with RC4 Encryption"
| extend MitreTechnique = "T1558.003"

Converted SPL (Splunk):

`wineventlog_security` EventCode=4769 TicketEncryptionType=0x17 Status=0x0
| where NOT match(ServiceName, "\$$")
| where ServiceName!="krbtgt"
| eval alert_name="Kerberoasting — TGS Request with RC4 Encryption"
| eval mitre_technique="T1558.003"
| table _time, Computer, Account, ServiceName, TicketEncryptionType, src_ip
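Before deploying either conversion, the rule's boolean logic can be sanity-checked in plain Python against hand-written events. A minimal sketch (the event dicts are illustrative; this mirrors, not replaces, the Sigma condition):

```python
# SYNTHETIC — replicate the Sigma condition for T1558.003:
#   selection and not filter_machine_accounts and not filter_krbtgt
def is_kerberoast_candidate(event: dict) -> bool:
    service = event.get("ServiceName", "")
    return (
        event.get("EventID") == 4769
        and event.get("TicketEncryptionType") == "0x17"   # RC4-HMAC
        and event.get("Status") == "0x0"                  # successful request
        and not service.endswith("$")                     # skip machine accounts
        and service != "krbtgt"
    )

events = [
    {"EventID": 4769, "TicketEncryptionType": "0x17", "Status": "0x0",
     "ServiceName": "svc-sql-synthetic"},   # should alert
    {"EventID": 4769, "TicketEncryptionType": "0x12", "Status": "0x0",
     "ServiceName": "svc-web-synthetic"},   # AES — benign
    {"EventID": 4769, "TicketEncryptionType": "0x17", "Status": "0x0",
     "ServiceName": "WS-TARGET-01$"},       # machine account — filtered
]
hits = [e for e in events if is_kerberoast_candidate(e)]
print(len(hits))  # -> 1
```

Only the RC4 request against a non-machine service account survives the filters, matching the rule's intent.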

Rule 2: T1021.002 — SMB Lateral Movement Detection

# SYNTHETIC — Sigma rule for SMB lateral movement
# File: C:\PurpleTeam\detection-rules\sigma\lateral_movement\sigma-T1021.002-smb-admin-shares.yml

title: Lateral Movement via SMB Admin Shares
id: a1b2c3d4-0002-4000-8000-000000000002
status: experimental
description: |
    Detects access to administrative shares (C$, ADMIN$, IPC$) from
    remote hosts, which may indicate lateral movement. SYNTHETIC rule.
references:
    - https://attack.mitre.org/techniques/T1021/002/
author: ACME Purple Team (fictional)
date: 2026-03-22
tags:
    - attack.lateral_movement
    - attack.t1021.002
logsource:
    product: windows
    service: security
detection:
    selection:
        EventID:
            - 5140
            - 5145
        ShareName|contains:
            - 'C$'
            - 'ADMIN$'
    filter_local:
        IpAddress: '::1'
    filter_localhost:
        IpAddress: '127.0.0.1'
    condition: selection and not filter_local and not filter_localhost
falsepositives:
    - Legitimate admin tools (SCCM, backup agents)
    - IT helpdesk remote file access
level: medium

Converted KQL:

// SYNTHETIC — KQL for T1021.002 — SMB Lateral Movement
SecurityEvent
| where EventID in (5140, 5145)
| where ShareName has_any ("C$", "ADMIN$")
| where IpAddress != "::1" and IpAddress != "127.0.0.1"
| project
    TimeGenerated,
    Computer,
    Account,
    ShareName,
    IpAddress,
    RelativeTargetName,
    AccessMask
| extend AlertName = "Lateral Movement via SMB Admin Shares"
| extend MitreTechnique = "T1021.002"

Converted SPL:

`wineventlog_security` (EventCode=5140 OR EventCode=5145)
    (ShareName="*C$*" OR ShareName="*ADMIN$*")
    NOT IpAddress="::1" NOT IpAddress="127.0.0.1"
| eval alert_name="Lateral Movement via SMB Admin Shares"
| eval mitre_technique="T1021.002"
| table _time, Computer, Account, ShareName, IpAddress, RelativeTargetName

Rule 3: T1047 — WMI Remote Execution Detection

# SYNTHETIC — Sigma rule for WMI execution
# File: C:\PurpleTeam\detection-rules\sigma\lateral_movement\sigma-T1047-wmi-execution.yml

title: Remote WMI Process Execution
id: a1b2c3d4-0003-4000-8000-000000000003
status: experimental
description: |
    Detects process creation by WmiPrvSE.exe, indicating WMI-based remote
    execution. Focus on suspicious child processes. SYNTHETIC rule.
references:
    - https://attack.mitre.org/techniques/T1047/
author: ACME Purple Team (fictional)
date: 2026-03-22
tags:
    - attack.lateral_movement
    - attack.execution
    - attack.t1047
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        ParentImage|endswith: '\WmiPrvSE.exe'
    filter_benign:
        Image|endswith:
            - '\WmiApSrv.exe'
            - '\WmiPrvSE.exe'
    condition: selection and not filter_benign
falsepositives:
    - Legitimate WMI-based management tools (SCCM, monitoring agents)
level: medium

Converted KQL:

// SYNTHETIC — KQL for T1047 — WMI Remote Execution
DeviceProcessEvents
| where InitiatingProcessFileName =~ "WmiPrvSE.exe"
| where FileName !in~ ("WmiApSrv.exe", "WmiPrvSE.exe")
| project
    Timestamp,
    DeviceName,
    FileName,
    ProcessCommandLine,
    InitiatingProcessFileName,
    AccountName
| extend AlertName = "Remote WMI Process Execution"
| extend MitreTechnique = "T1047"

Converted SPL:

`sysmon` EventCode=1 ParentImage="*\\WmiPrvSE.exe"
    NOT (Image="*\\WmiApSrv.exe" OR Image="*\\WmiPrvSE.exe")
| eval alert_name="Remote WMI Process Execution"
| eval mitre_technique="T1047"
| table _time, Computer, Image, CommandLine, ParentImage, User

Rule 4: T1566.001 — Phishing Attachment Delivery Detection

# SYNTHETIC — Sigma rule for phishing attachment delivery
# File: C:\PurpleTeam\detection-rules\sigma\initial_access\sigma-T1566.001-phishing-attachment.yml

title: Suspicious Office Document Spawning Child Process
id: a1b2c3d4-0004-4000-8000-000000000004
status: experimental
description: |
    Detects Microsoft Office applications spawning suspicious child processes,
    which commonly indicates macro-based initial access from phishing
    attachments. SYNTHETIC rule.
references:
    - https://attack.mitre.org/techniques/T1566/001/
author: ACME Purple Team (fictional)
date: 2026-03-22
tags:
    - attack.initial_access
    - attack.execution
    - attack.t1566.001
logsource:
    category: process_creation
    product: windows
detection:
    selection_parent:
        ParentImage|endswith:
            - '\WINWORD.EXE'
            - '\EXCEL.EXE'
            - '\POWERPNT.EXE'
            - '\OUTLOOK.EXE'
    selection_child:
        Image|endswith:
            - '\cmd.exe'
            - '\powershell.exe'
            - '\pwsh.exe'
            - '\wscript.exe'
            - '\cscript.exe'
            - '\mshta.exe'
            - '\certutil.exe'
            - '\regsvr32.exe'
            - '\rundll32.exe'
    condition: selection_parent and selection_child
falsepositives:
    - Legitimate Office add-ins that spawn processes
    - Document automation workflows
level: high

Converted KQL:

// SYNTHETIC — KQL for T1566.001 — Office Document Spawning Suspicious Child
DeviceProcessEvents
| where InitiatingProcessFileName in~ (
    "WINWORD.EXE", "EXCEL.EXE", "POWERPNT.EXE", "OUTLOOK.EXE"
)
| where FileName in~ (
    "cmd.exe", "powershell.exe", "pwsh.exe",
    "wscript.exe", "cscript.exe", "mshta.exe",
    "certutil.exe", "regsvr32.exe", "rundll32.exe"
)
| project
    Timestamp,
    DeviceName,
    InitiatingProcessFileName,
    FileName,
    ProcessCommandLine,
    AccountName
| extend AlertName = "Phishing — Office Document Spawning Suspicious Child"
| extend MitreTechnique = "T1566.001"

Converted SPL:

`sysmon` EventCode=1
    (ParentImage="*\\WINWORD.EXE" OR ParentImage="*\\EXCEL.EXE"
     OR ParentImage="*\\POWERPNT.EXE" OR ParentImage="*\\OUTLOOK.EXE")
    (Image="*\\cmd.exe" OR Image="*\\powershell.exe" OR Image="*\\pwsh.exe"
     OR Image="*\\wscript.exe" OR Image="*\\cscript.exe" OR Image="*\\mshta.exe"
     OR Image="*\\certutil.exe" OR Image="*\\regsvr32.exe" OR Image="*\\rundll32.exe")
| eval alert_name="Phishing — Office Document Spawning Suspicious Child"
| eval mitre_technique="T1566.001"
| table _time, Computer, ParentImage, Image, CommandLine, User

Rule 5: T1027 — Obfuscated PowerShell Detection

# SYNTHETIC — Sigma rule for obfuscated PowerShell
# File: C:\PurpleTeam\detection-rules\sigma\defense_evasion\sigma-T1027-obfuscated-files.yml

title: Obfuscated PowerShell Execution — Base64 and Encoded Commands
id: a1b2c3d4-0005-4000-8000-000000000005
status: experimental
description: |
    Detects PowerShell execution with encoding flags or high-entropy
    command line arguments indicative of obfuscation. SYNTHETIC rule.
references:
    - https://attack.mitre.org/techniques/T1027/
author: ACME Purple Team (fictional)
date: 2026-03-22
tags:
    - attack.defense_evasion
    - attack.execution
    - attack.t1027
logsource:
    category: process_creation
    product: windows
detection:
    selection_process:
        Image|endswith:
            - '\powershell.exe'
            - '\pwsh.exe'
    selection_encoding:
        CommandLine|contains:
            - '-enc'
            - '-EncodedCommand'
            - '-e '
            - 'FromBase64String'
            - '[Convert]::'
            - 'IO.Compression'
            - 'IO.MemoryStream'
    condition: selection_process and selection_encoding
falsepositives:
    - Legitimate encoded PowerShell used by deployment tools (SCCM, Intune)
    - Base64-encoded configuration scripts
level: high

Converted KQL:

// SYNTHETIC — KQL for T1027 — Obfuscated PowerShell
DeviceProcessEvents
| where FileName in~ ("powershell.exe", "pwsh.exe")
| where ProcessCommandLine has_any (
    "-enc", "-EncodedCommand", "FromBase64String",
    "[Convert]::", "IO.Compression", "IO.MemoryStream"
)
| project
    Timestamp,
    DeviceName,
    FileName,
    ProcessCommandLine,
    AccountName,
    InitiatingProcessFileName
| extend AlertName = "Obfuscated PowerShell Execution"
| extend MitreTechnique = "T1027"

Converted SPL:

`sysmon` EventCode=1
    (Image="*\\powershell.exe" OR Image="*\\pwsh.exe")
    (CommandLine="*-enc*" OR CommandLine="*-EncodedCommand*"
     OR CommandLine="*FromBase64String*" OR CommandLine="*[Convert]::*"
     OR CommandLine="*IO.Compression*" OR CommandLine="*IO.MemoryStream*")
| eval alert_name="Obfuscated PowerShell Execution"
| eval mitre_technique="T1027"
| table _time, Computer, Image, CommandLine, User
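The gap analysis in 2.5 suggested entropy analysis as a complement to the keyword list above. A minimal Shannon-entropy sketch (the alert threshold and the join against ScriptBlock logs are left as assumptions to tune; note that UTF-16LE Base64 is padded with many 'A' characters, which can depress raw character entropy, so this should supplement the keyword selection rather than replace it):

```python
import math
from collections import Counter

# SYNTHETIC — Shannon entropy in bits per character of a command line.
def shannon_entropy(s: str) -> float:
    if not s:
        return 0.0
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

print(shannon_entropy("abab"))   # two symbols, equal frequency -> 1.0
cmd = "powershell.exe -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQA"
print(f"{shannon_entropy(cmd):.2f} bits/char")
```

A practical deployment would baseline entropy across benign 4104 events first, then alert on outliers rather than a fixed cutoff.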

Rule 6: T1048.003 — Exfiltration Over HTTP Detection

# SYNTHETIC — Sigma rule for exfiltration over unencrypted HTTP
# File: C:\PurpleTeam\detection-rules\sigma\exfiltration\sigma-T1048.003-exfil-http.yml

title: Potential Data Exfiltration — Large HTTP POST to External Host
id: a1b2c3d4-0006-4000-8000-000000000006
status: experimental
description: |
    Detects unusually large HTTP POST requests to external hosts,
    which may indicate data exfiltration. Requires proxy/web gateway
    log ingestion. SYNTHETIC rule.
references:
    - https://attack.mitre.org/techniques/T1048/003/
author: ACME Purple Team (fictional)
date: 2026-03-22
tags:
    - attack.exfiltration
    - attack.t1048.003
logsource:
    category: proxy
detection:
    selection:
        cs-method: 'POST'
        cs-bytes|gte: 5242880    # 5 MB threshold
    filter_internal:
        r-dns|endswith:
            - '.acme.example'
            - '.example.com'
    filter_known_services:
        r-dns|endswith:
            - '.microsoft.com'
            - '.windowsupdate.com'
            - '.office365.com'
    condition: selection and not filter_internal and not filter_known_services
falsepositives:
    - Large file uploads to legitimate cloud services
    - Backup applications
level: high

Converted KQL:

// SYNTHETIC — KQL for T1048.003 — Large HTTP POST Exfiltration
CommonSecurityLog
| where RequestMethod == "POST"
| where SentBytes > 5242880  // 5 MB
| where DestinationHostName !endswith ".acme.example"
    and DestinationHostName !endswith ".example.com"
    and DestinationHostName !endswith ".microsoft.com"
    and DestinationHostName !endswith ".windowsupdate.com"
| project
    TimeGenerated,
    SourceIP,
    DestinationHostName,
    DestinationIP,
    RequestURL,
    SentBytes,
    DeviceAction
| extend SentMB = round(SentBytes / 1048576.0, 2)
| extend AlertName = "Potential Data Exfiltration — Large HTTP POST"
| extend MitreTechnique = "T1048.003"
| order by SentBytes desc

Converted SPL:

index=proxy http_method=POST bytes_out>5242880
    NOT (url_domain="*.acme.example" OR url_domain="*.example.com"
         OR url_domain="*.microsoft.com" OR url_domain="*.windowsupdate.com")
| eval sent_mb=round(bytes_out/1048576, 2)
| eval alert_name="Potential Data Exfiltration — Large HTTP POST"
| eval mitre_technique="T1048.003"
| table _time, src_ip, url_domain, dest_ip, url, sent_mb, action
| sort -sent_mb

4.3 CI/CD Pipeline for Detection Rules

# SYNTHETIC — Detection rule CI/CD pipeline
# File: C:\PurpleTeam\detection-rules\pipeline.yml
# CI Server: cicd-01.acme.example (10.10.2.30)

name: Detection Rule Validation Pipeline
trigger:
  push:
    paths:
      - 'sigma/**/*.yml'
  pull_request:
    paths:
      - 'sigma/**/*.yml'
  schedule:
    - cron: '0 1 * * 0'  # Weekly Sunday 01:00 UTC

stages:
  - name: lint
    description: Validate Sigma rule syntax
    steps:
      - run: |
          # Install sigma-cli
          pip install sigma-cli pySigma pySigma-backend-microsoft365defender pySigma-backend-splunk

          # Lint all Sigma rules
          sigma check sigma/**/*.yml --fail-on-error

          # Expected output (SYNTHETIC):
          # Checking sigma/credential_access/sigma-T1558.003-kerberoasting.yml ... OK
          # Checking sigma/lateral_movement/sigma-T1021.002-smb-admin-shares.yml ... OK
          # Checking sigma/lateral_movement/sigma-T1047-wmi-execution.yml ... OK
          # Checking sigma/initial_access/sigma-T1566.001-phishing-attachment.yml ... OK
          # Checking sigma/defense_evasion/sigma-T1027-obfuscated-files.yml ... OK
          # Checking sigma/exfiltration/sigma-T1048.003-exfil-http.yml ... OK
          # All 6 rules passed validation.

  - name: convert
    description: Convert Sigma to KQL and SPL
    steps:
      - run: |
          # Enable recursive ** globbing (bash); without it, ** behaves like *
          shopt -s globstar

          # Convert to KQL (Microsoft Sentinel / Defender)
          for rule in sigma/**/*.yml; do
            outfile="kql/$(basename "$rule" .yml).kql"
            sigma convert -t microsoft365defender "$rule" > "$outfile"
            echo "Converted $rule -> $outfile"
          done

          # Convert to SPL (Splunk)
          for rule in sigma/**/*.yml; do
            outfile="spl/$(basename "$rule" .yml).spl"
            sigma convert -t splunk "$rule" > "$outfile"
            echo "Converted $rule -> $outfile"
          done

  - name: test
    description: Run detection tests against synthetic log samples
    steps:
      - run: |
          # Test each rule against synthetic log samples
          python3 scripts/test_detections.py \
            --rules-dir sigma/ \
            --test-data tests/ \
            --output test-results.json

          # Expected output (SYNTHETIC):
          # Testing sigma-T1558.003-kerberoasting.yml against test-T1558.003.yml
          #   True Positive test: PASS (matched 3/3 malicious events)
          #   True Negative test: PASS (0 false positives in 50 benign events)
          # Testing sigma-T1021.002-smb-admin-shares.yml against test-T1021.002.yml
          #   True Positive test: PASS (matched 2/2 malicious events)
          #   True Negative test: PASS (0 false positives in 100 benign events)
          # ...
          # All 6 rules passed testing (6/6 TP, 0 FP)

  - name: deploy
    description: Deploy rules to SIEM
    condition: branch == 'main' AND all_tests_pass
    steps:
      - run: |
          # Deploy to Microsoft Sentinel
          python3 scripts/deploy_to_sentinel.py \
            --rules-dir kql/ \
            --workspace-id "SYNTHETIC-WORKSPACE-ID" \
            --resource-group "acme-soc-rg"

          # Deploy to Splunk
          python3 scripts/deploy_to_splunk.py \
            --rules-dir spl/ \
            --splunk-url "https://siem-01.acme.example:8089" \
            --app "acme_purple_team"

          # Notify SOC
          echo "6 new detection rules deployed. Re-run purple team validation."

4.4 Re-Run Validation After Rule Deployment

After deploying the 6 new Sigma rules, re-execute the purple team automation and correlate results.

# SYNTHETIC — Re-run the purple team automation
# Host: ws-target-01.acme.example (10.10.3.100)

powershell -ExecutionPolicy Bypass -File C:\PurpleTeam\scripts\Invoke-PurpleTeamAutomation.ps1 `
    -TestPlanPath C:\PurpleTeam\config\apt-synthetic-7-testplan.yaml

# Wait for SIEM processing (synthetic delay)

# Re-run correlation
python3 C:\PurpleTeam\scripts\correlate_results.py `
    C:\PurpleTeam\results\purple-team-report_2026-03-22_14-00-00.json `
    C:\PurpleTeam\results\siem-alerts-export_2026-03-22-retest.csv `
    C:\PurpleTeam\results\validation_report_2026-03-22-retest.json

Expected output after new rules deployed (SYNTHETIC):

======================================================================
  PURPLE TEAM DETECTION VALIDATION REPORT (POST-REMEDIATION)
  Run ID: e89b12d3-44aa-4372-b789-1e02c3d4e567
  Threat Actor Profile: APT-SYNTHETIC-7
  Target Host: WS-TARGET-01
======================================================================

  Total Techniques Tested:  12
  Detected:                 11
  Detection Gaps:           1
  Coverage:                 91.7%

  ──────────────────────────────────────────────────────────────────────
  Technique      Name                                Status     TTD (s)
  ────────────── ─────────────────────────────────── ────────── ──────────
  T1566.001      Spear-Phishing Attachment           DETECTED   6.2
  T1059.001      PowerShell Execution                DETECTED   11.8
  T1003.001      LSASS Memory Dump                   DETECTED   7.9
  T1558.003      Kerberoasting                       DETECTED   4.3
  T1021.002      SMB/Windows Admin Shares            DETECTED   9.1
  T1047          WMI Execution                       DETECTED   12.5
  T1053.005      Scheduled Task                      DETECTED   28.7
  T1547.001      Registry Run Keys                   DETECTED   14.0
  T1070.001      Clear Windows Event Logs            DETECTED   16.2
  T1027          Obfuscated Files                    DETECTED   8.4
  T1560.001      Archive via Utility                 DETECTED   41.3
  T1048.003      Exfil Over Unencrypted Protocol     ** GAP **  N/A

======================================================================

**Coverage Improved: 50% -> 91.7%**

After deploying the 6 new Sigma rules, coverage jumped from 50% (6/12) to 91.7% (11/12). The remaining gap (T1048.003) is due to the proxy log data source not yet being ingested into the SIEM — this requires infrastructure changes beyond detection engineering.

### 4.5 Remaining Gap: Data Source Dependency

Gap Analysis — T1048.003 (Exfiltration Over Unencrypted Protocol)
═════════════════════════════════════════════════════════════════

Status:         GAP — Rule deployed but no matching log data
Root Cause:     Proxy/web gateway logs not ingested into SIEM
Rule Status:    sigma-T1048.003-exfil-http.yml — syntax valid, untestable

Required Action:
  1. Configure Zscaler/Squid proxy to forward logs to SIEM
     - Sentinel: Configure data connector for CEF/Syslog proxy logs
     - Splunk: Configure HEC input for proxy log forwarder
  2. Validate log ingestion with test traffic
  3. Re-run purple team automation test
  4. Expected resolution: 2026-04-05 (next sprint)

Tracking: JIRA-ACME-4521 (SYNTHETIC)

??? success "Exercise 4 Checkpoint"
    At this point you should have:

    - 6 Sigma rules covering all identified detection gaps
    - KQL and SPL conversions for each Sigma rule
    - A CI/CD pipeline that lints, converts, tests, and deploys detection rules
    - Post-remediation validation showing 91.7% coverage (11/12 techniques)
    - A clear understanding of the data-source dependency for the remaining gap
    - Experience with the detection engineering feedback loop: test -> gap -> write rule -> deploy -> retest

## Exercise 5: Purple Team Metrics & Reporting Dashboard

### Objectives

- Define key purple team metrics (coverage, MTTD, rule health, gap burn-down)
- Build a metrics collection pipeline that aggregates data from multiple test runs
- Create executive-ready reporting templates
- Implement a tracking dashboard with historical trend analysis

### 5.1 Key Purple Team Metrics

| Metric | Definition | Target | Current (SYNTHETIC) |
|--------|------------|--------|---------------------|
| Detection Coverage | (Detected techniques / Total tested techniques) x 100 | >90% | 91.7% |
| Mean Time to Detect (MTTD) | Average seconds from test execution to alert generation | <30s | 14.6s |
| Gap Burn-down Rate | Number of gaps closed per sprint/week | >2/week | 2.5/week |
| Rule Health Score | % of rules that fire correctly on re-test (no regressions) | 100% | 100% |
| Technique Test Coverage | (Techniques with tests / Total ATT&CK techniques in scope) x 100 | >80% | 12/14 = 85.7% |
| False Positive Rate | (False alerts / Total alerts during test window) x 100 | <5% | 2.1% |
| Data Source Coverage | (Ingested data sources / Required data sources) x 100 | 100% | 92.3% (12/13) |
| Time to New Detection | Days from gap identification to rule deployment | <7 days | 3.2 days |
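
Most of these KPIs are simple ratios over the validation data. A minimal sketch (the function names are my own, not lab scripts) that reproduces the table's arithmetic:

```python
def detection_coverage_pct(detected: int, total: int) -> float:
    """(Detected techniques / total tested techniques) x 100, one decimal."""
    return round(100.0 * detected / total, 1)


def gap_burndown_rate(gaps_closed: int, weeks: float) -> float:
    """Gaps closed per week over the measurement window."""
    return round(gaps_closed / weeks, 1)


def false_positive_rate(false_alerts: int, total_alerts: int) -> float:
    """(False alerts / total alerts in the test window) x 100."""
    return round(100.0 * false_alerts / total_alerts, 1)


# Reproduce the table's coverage value and an example burn-down rate:
print(detection_coverage_pct(11, 12))   # 91.7
print(gap_burndown_rate(10, 4))         # 2.5 -- e.g. 10 gaps closed over 4 weeks
```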

### 5.2 Metrics Collection Pipeline

#!/usr/bin/env python3
"""
SYNTHETIC — Purple Team Metrics Aggregator
File: C:\PurpleTeam\scripts\aggregate_metrics.py
Organization: ACME Security Corp (fictional)

Aggregates metrics from multiple purple team validation runs
into a historical metrics database (JSON file).
"""

import json
import statistics
from datetime import datetime
from pathlib import Path


def aggregate_run_metrics(validation_report: dict) -> dict:
    """Extract metrics from a single validation run."""
    ttd_values = [
        t["time_to_detect_seconds"]
        for t in validation_report["technique_results"]
        if t["time_to_detect_seconds"] is not None
    ]

    return {
        "run_id": validation_report["run_id"],
        "timestamp": validation_report["timestamp"],
        "threat_actor": validation_report["threat_actor"],
        "host": validation_report["host"],
        "metrics": {
            "detection_coverage_pct": validation_report["coverage_pct"],
            "total_techniques": validation_report["total_techniques"],
            "detected": validation_report["detected"],
            "gaps": validation_report["gaps"],
            "mttd_seconds": round(statistics.mean(ttd_values), 1) if ttd_values else None,
            "mttd_median_seconds": round(statistics.median(ttd_values), 1) if ttd_values else None,
            "mttd_p95_seconds": round(
                sorted(ttd_values)[min(int(len(ttd_values) * 0.95), len(ttd_values) - 1)], 1
            ) if ttd_values else None,
            "max_ttd_seconds": round(max(ttd_values), 1) if ttd_values else None,
            "min_ttd_seconds": round(min(ttd_values), 1) if ttd_values else None,
        },
        "gaps_detail": [
            {
                "technique_id": t["technique_id"],
                "technique_name": t["technique_name"],
                "phase": t.get("phase", "unknown")
            }
            for t in validation_report["technique_results"]
            if t["status"] == "GAP"
        ],
        "detections_detail": [
            {
                "technique_id": t["technique_id"],
                "technique_name": t["technique_name"],
                "ttd_seconds": t["time_to_detect_seconds"],
                "alert_name": t["alert_name"],
                "alert_severity": t["alert_severity"]
            }
            for t in validation_report["technique_results"]
            if t["status"] == "DETECTED"
        ]
    }


def load_metrics_history(history_path: str) -> list[dict]:
    """Load existing metrics history."""
    path = Path(history_path)
    if path.exists():
        with open(path) as f:
            return json.load(f)
    return []


def append_and_save(history: list[dict], new_metrics: dict,
                     history_path: str) -> None:
    """Append new metrics and save history."""
    history.append(new_metrics)
    with open(history_path, "w") as f:
        json.dump(history, f, indent=2)


def print_trend_report(history: list[dict]) -> None:
    """Print a trend report from metrics history."""
    print(f"\n{'='*78}")
    print(f"  PURPLE TEAM METRICS — TREND REPORT")
    print(f"  Organization: ACME Security Corp (SYNTHETIC)")
    print(f"  Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
    print(f"{'='*78}")

    print(f"\n  {'Run Date':<22} {'Coverage':<12} {'Detected':<12} "
          f"{'Gaps':<8} {'MTTD (s)':<12} {'Trend'}")
    print(f"  {'─'*22} {'─'*12} {'─'*12} {'─'*8} {'─'*12} {'─'*8}")

    prev_coverage = None
    for entry in history:
        m = entry["metrics"]
        coverage = m["detection_coverage_pct"]

        if prev_coverage is not None:
            if coverage > prev_coverage:
                trend = "  UP"
            elif coverage < prev_coverage:
                trend = "  DOWN"
            else:
                trend = "  --"
        else:
            trend = "  --"

        mttd = f"{m['mttd_seconds']:.1f}" if m["mttd_seconds"] is not None else "N/A"

        print(f"  {entry['timestamp'][:19]:<22} {coverage:>6.1f}%     "
              f"{m['detected']:>3}/{m['total_techniques']:<7} "
              f"{m['gaps']:<8} {mttd:<12} {trend}")

        prev_coverage = coverage

    # Summary statistics
    if len(history) >= 2:
        first = history[0]["metrics"]
        last = history[-1]["metrics"]
        delta = last["detection_coverage_pct"] - first["detection_coverage_pct"]
        print(f"\n  Coverage Change: {first['detection_coverage_pct']:.1f}% -> "
              f"{last['detection_coverage_pct']:.1f}% "
              f"({'+'if delta>=0 else ''}{delta:.1f} pp)")

        if last["mttd_seconds"] and first["mttd_seconds"]:
            mttd_delta = last["mttd_seconds"] - first["mttd_seconds"]
            print(f"  MTTD Change:     {first['mttd_seconds']:.1f}s -> "
                  f"{last['mttd_seconds']:.1f}s "
                  f"({'+'if mttd_delta>=0 else ''}{mttd_delta:.1f}s)")

    print(f"\n{'='*78}")
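
The history-persistence behavior of the aggregator can be exercised in isolation. The snippet below mirrors the `load_metrics_history()`/`append_and_save()` logic against a throwaway temp file (the `load_history` helper and the demo record are my own fixtures, not lab output):

```python
import json
import tempfile
from pathlib import Path


def load_history(path: Path) -> list:
    """Same contract as load_metrics_history(): empty list on first run."""
    return json.loads(path.read_text()) if path.exists() else []


with tempfile.TemporaryDirectory() as d:
    hist_path = Path(d) / "metrics_history.json"

    history = load_history(hist_path)          # first run: []
    history.append({"run_id": "demo-001",
                    "metrics": {"detection_coverage_pct": 33.3}})
    hist_path.write_text(json.dumps(history, indent=2))

    print(len(load_history(hist_path)))        # 1 -- record survived the round-trip
```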

### 5.3 SYNTHETIC Historical Metrics Data

Below is the synthetic metrics history showing improvement over 4 weeks of automated purple team testing.

[
    {
        "run_id": "run-001-synthetic",
        "timestamp": "2026-03-01T02:00:00Z",
        "threat_actor": "APT-SYNTHETIC-7",
        "host": "WS-TARGET-01",
        "metrics": {
            "detection_coverage_pct": 33.3,
            "total_techniques": 12,
            "detected": 4,
            "gaps": 8,
            "mttd_seconds": 22.4,
            "mttd_median_seconds": 19.8,
            "mttd_p95_seconds": 41.2,
            "max_ttd_seconds": 44.1,
            "min_ttd_seconds": 8.2
        }
    },
    {
        "run_id": "run-002-synthetic",
        "timestamp": "2026-03-08T02:00:00Z",
        "threat_actor": "APT-SYNTHETIC-7",
        "host": "WS-TARGET-01",
        "metrics": {
            "detection_coverage_pct": 41.7,
            "total_techniques": 12,
            "detected": 5,
            "gaps": 7,
            "mttd_seconds": 19.1,
            "mttd_median_seconds": 16.3,
            "mttd_p95_seconds": 38.5,
            "max_ttd_seconds": 40.2,
            "min_ttd_seconds": 7.1
        }
    },
    {
        "run_id": "run-003-synthetic",
        "timestamp": "2026-03-15T02:00:00Z",
        "threat_actor": "APT-SYNTHETIC-7",
        "host": "WS-TARGET-01",
        "metrics": {
            "detection_coverage_pct": 50.0,
            "total_techniques": 12,
            "detected": 6,
            "gaps": 6,
            "mttd_seconds": 17.2,
            "mttd_median_seconds": 14.5,
            "mttd_p95_seconds": 35.8,
            "max_ttd_seconds": 44.6,
            "min_ttd_seconds": 6.8
        }
    },
    {
        "run_id": "run-004-synthetic",
        "timestamp": "2026-03-22T02:00:00Z",
        "threat_actor": "APT-SYNTHETIC-7",
        "host": "WS-TARGET-01",
        "metrics": {
            "detection_coverage_pct": 91.7,
            "total_techniques": 12,
            "detected": 11,
            "gaps": 1,
            "mttd_seconds": 14.6,
            "mttd_median_seconds": 11.8,
            "mttd_p95_seconds": 41.3,
            "max_ttd_seconds": 41.3,
            "min_ttd_seconds": 4.3
        }
    }
]
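
Given that history file, the headline deltas in the trend report are just first-versus-last arithmetic. A sketch with the four coverage/MTTD pairs inlined rather than loaded from disk:

```python
# Coverage and mean-TTD values from the four synthetic runs above.
history = [
    {"coverage": 33.3, "mttd": 22.4},
    {"coverage": 41.7, "mttd": 19.1},
    {"coverage": 50.0, "mttd": 17.2},
    {"coverage": 91.7, "mttd": 14.6},
]

coverage_delta = round(history[-1]["coverage"] - history[0]["coverage"], 1)
mttd_delta = round(history[-1]["mttd"] - history[0]["mttd"], 1)

print(f"Coverage Change: {history[0]['coverage']}% -> "
      f"{history[-1]['coverage']}% (+{coverage_delta} pp)")
print(f"MTTD Change: {history[0]['mttd']}s -> "
      f"{history[-1]['mttd']}s ({mttd_delta}s)")
# Coverage Change: 33.3% -> 91.7% (+58.4 pp)
# MTTD Change: 22.4s -> 14.6s (-7.8s)
```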

### 5.4 Generate the Trend Report

# SYNTHETIC — Generate trend report
# Host: red-team-01.acme.example (10.10.1.50)

python3 C:\PurpleTeam\scripts\aggregate_metrics.py `
    --history C:\PurpleTeam\metrics\metrics_history.json `
    --new-report C:\PurpleTeam\results\validation_report_2026-03-22-retest.json

# Expected output (SYNTHETIC):
==============================================================================
  PURPLE TEAM METRICS — TREND REPORT
  Organization: ACME Security Corp (SYNTHETIC)
  Generated: 2026-03-22 14:30:00
==============================================================================

  Run Date               Coverage     Detected     Gaps     MTTD (s)     Trend
  ────────────────────── ──────────── ──────────── ──────── ──────────── ────────
  2026-03-01T02:00:00     33.3%        4/12        8        22.4          --
  2026-03-08T02:00:00     41.7%        5/12        7        19.1          UP
  2026-03-15T02:00:00     50.0%        6/12        6        17.2          UP
  2026-03-22T02:00:00     91.7%       11/12        1        14.6          UP

  Coverage Change: 33.3% -> 91.7% (+58.4 pp)
  MTTD Change:     22.4s -> 14.6s (-7.8s)

==============================================================================

### 5.5 Executive Summary Report Template

# Purple Team Automation — Executive Summary
## ACME Security Corp (SYNTHETIC) — Week of 2026-03-22

### Key Metrics

| Metric | Value | Target | Status |
|--------|-------|--------|--------|
| Detection Coverage | 91.7% | >90% | ON TARGET |
| Mean Time to Detect | 14.6s | <30s | ON TARGET |
| Gap Burn-down | 5 gaps closed this week | >2/week | EXCEEDING |
| Rule Health | 100% (0 regressions) | 100% | ON TARGET |
| Data Source Coverage | 92.3% (12/13) | 100% | AT RISK |

### Coverage Trend (4 Weeks)

```text
100% ┤
 90% ┤                                          ████████████  91.7%
 80% ┤
 70% ┤
 60% ┤
 50% ┤                          ██████████████
 40% ┤          ████████████████
 30% ┤██████████
 20% ┤
 10% ┤
  0% ┼──────────┬──────────────┬──────────────┬────────────
     Mar 01     Mar 08         Mar 15         Mar 22
```

### Actions Completed This Week

1. Deployed 6 new Sigma detection rules covering T1566.001 (Phishing), T1558.003 (Kerberoasting), T1021.002 (SMB), T1047 (WMI), T1027 (Obfuscation), and T1048.003 (Exfiltration)
2. Established CI/CD pipeline for detection-as-code workflow
3. Reduced MTTD from 17.2s to 14.6s (15% improvement)

### Remaining Risks

| Risk | Impact | Mitigation | ETA |
|------|--------|------------|-----|
| Proxy logs not ingested (T1048.003 gap) | Cannot detect HTTP exfiltration | Configure proxy log forwarding to SIEM | 2026-04-05 |
| No Linux test coverage | Unknown detection posture on Linux hosts | Extend test plan to lnx-target-01 | 2026-04-12 |

### Next Sprint Priorities

  1. Integrate proxy log data source into SIEM (close final T1048.003 gap)
  2. Expand test plan to cover Linux-specific ATT&CK techniques
  3. Add T1055 (Process Injection) and T1134 (Access Token Manipulation) to test plan
  4. Implement automated regression alerts for detection rule failures
### 5.6 Detection Coverage Results Table

Use this template to track detection validation results across all test runs.

| # | Technique ID | Technique Name | Tactic | Run 1 (Mar 01) | Run 2 (Mar 08) | Run 3 (Mar 15) | Run 4 (Mar 22) | TTD (Latest) | Rule ID | Status |
|---|-------------|----------------|--------|:---:|:---:|:---:|:---:|---|---|---|
| 1 | T1566.001 | Spear-Phishing Attachment | Initial Access | GAP | GAP | GAP | DETECTED | 6.2s | a1b2c3d4-0004 | New |
| 2 | T1059.001 | PowerShell Execution | Execution | DETECTED | DETECTED | DETECTED | DETECTED | 11.8s | (built-in) | Stable |
| 3 | T1003.001 | LSASS Memory Dump | Credential Access | DETECTED | DETECTED | DETECTED | DETECTED | 7.9s | (built-in) | Stable |
| 4 | T1558.003 | Kerberoasting | Credential Access | GAP | GAP | GAP | DETECTED | 4.3s | a1b2c3d4-0001 | New |
| 5 | T1021.002 | SMB/Admin Shares | Lateral Movement | GAP | GAP | GAP | DETECTED | 9.1s | a1b2c3d4-0002 | New |
| 6 | T1047 | WMI Execution | Lateral Movement | GAP | DETECTED | DETECTED | DETECTED | 12.5s | a1b2c3d4-0003 | Stable |
| 7 | T1053.005 | Scheduled Task | Persistence | DETECTED | DETECTED | DETECTED | DETECTED | 28.7s | (built-in) | Stable |
| 8 | T1547.001 | Registry Run Keys | Persistence | GAP | GAP | DETECTED | DETECTED | 14.0s | (built-in) | Stable |
| 9 | T1070.001 | Clear Windows Event Logs | Defense Evasion | DETECTED | DETECTED | DETECTED | DETECTED | 16.2s | (built-in) | Stable |
| 10 | T1027 | Obfuscated Files | Defense Evasion | GAP | GAP | GAP | DETECTED | 8.4s | a1b2c3d4-0005 | New |
| 11 | T1560.001 | Archive via Utility | Collection | GAP | GAP | DETECTED | DETECTED | 41.3s | (built-in) | Stable |
| 12 | T1048.003 | Exfil Over HTTP | Exfiltration | GAP | GAP | GAP | GAP | N/A | a1b2c3d4-0006 | Blocked |

**Legend:**

- **DETECTED** = Alert fired within 60 seconds of test execution
- **GAP** = No alert generated during the test window
- **Stable** = Detected in 2+ consecutive runs without regression
- **New** = First detection in the latest run
- **Blocked** = Rule deployed but data source not yet available
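
Except for **Blocked** (which needs out-of-band data-source context), the Status column can be derived mechanically from the per-run history. A sketch with a hypothetical `rule_status` helper:

```python
def rule_status(history: list) -> str:
    """Map a per-run DETECTED/GAP history to the table's Status column.
    'Blocked' requires data-source context, so it is not derived here."""
    if history[-1] != "DETECTED":
        return "Gap"
    if len(history) >= 2 and history[-2] == "DETECTED":
        return "Stable"   # detected in 2+ consecutive runs, no regression
    return "New"          # first detection in the latest run


print(rule_status(["GAP", "GAP", "GAP", "DETECTED"]))            # New
print(rule_status(["GAP", "DETECTED", "DETECTED", "DETECTED"]))  # Stable
```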
    
### 5.7 MTTD Distribution Analysis

```text
Mean Time to Detect (MTTD) Distribution — Run 4 (2026-03-22)
═════════════════════════════════════════════════════════════
  0-5s  ██████                                    1 technique  (T1558.003: 4.3s)
 5-10s  ████████████████████████                  4 techniques (T1566.001, T1003.001, T1027, T1021.002)
10-15s  ██████████████████                        3 techniques (T1059.001, T1047, T1547.001)
15-20s  ██████                                    1 technique  (T1070.001: 16.2s)
20-30s  ██████                                    1 technique  (T1053.005: 28.7s)
30-45s  ██████                                    1 technique  (T1560.001: 41.3s)
  45s+                                            0 techniques

Mean: 14.6s | Median: 11.8s | P95: 41.3s
Target: <30s | Status: ON TARGET (mean and median well under threshold)
```

Outlier: T1560.001 (Archive via Utility) at 41.3s — rule triggers on file creation event, which depends on Sysmon Event 11 polling interval. Consider tuning Sysmon hash algorithm configuration for faster processing.
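
The bucketing behind this distribution can be reproduced in a few lines. The bin edges match the chart, and the TTD values come from the Run 4 results table (note that T1021.002 at 9.1s lands in the 5-10s bin):

```python
# Run 4 TTD values for the 11 detected techniques (seconds).
ttd_values = [6.2, 11.8, 7.9, 4.3, 9.1, 12.5, 28.7, 14.0, 16.2, 8.4, 41.3]
bins = [(0, 5), (5, 10), (10, 15), (15, 20), (20, 30), (30, 45)]

# Count techniques per half-open bin [lo, hi), plus an open-ended 45s+ bucket.
counts = {f"{lo}-{hi}s": sum(1 for t in ttd_values if lo <= t < hi)
          for lo, hi in bins}
counts["45s+"] = sum(1 for t in ttd_values if t >= 45)

for bucket, n in counts.items():
    print(f"{bucket:>7} {'█' * (n * 6):<26} {n}")
```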

??? success "Exercise 5 Checkpoint"
    At this point you should have:

    - 8 defined KPIs for purple team operations with targets and current values
    - A Python metrics aggregator that tracks historical data across runs
    - A trend report showing 33.3% to 91.7% coverage improvement over 4 weeks
    - An executive summary template suitable for CISO-level reporting
    - A detection coverage results table tracking all 12 techniques across 4 runs
    - MTTD distribution analysis identifying outliers for tuning
    - Understanding of how continuous measurement drives detection engineering priorities

---

## Summary and Key Takeaways

### What You Built

In this lab you constructed a complete purple team automation pipeline:
┌─────────────────────────────────────────────────────────────────────────┐ │ Purple Team Automation Lifecycle │ │ │ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │ │ 1. Plan │──▶│ 2. Test │──▶│3. Detect │──▶│4. Measure│ │ │ │ │ │ │ │ │ │ │ │ │ │ Threat │ │ Atomic │ │ Sigma │ │ Coverage │ │ │ │ Profile │ │ Red Team │ │ Rules + │ │ Metrics │ │ │ │ (YAML) │ │ (Auto) │ │ CI/CD │ │ + Trends │ │ │ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │ │ ▲ │ │ │ │ Feedback Loop │ │ │ └──────────────────────────────────────────────┘ │ │ │ │ Coverage: 33.3% ──▶ 91.7% over 4 weeks │ │ MTTD: 22.4s ──▶ 14.6s over 4 weeks │ │ Gaps: 8 ──▶ 1 (data source dependency) │ │ │ └─────────────────────────────────────────────────────────────────────────┘ ```

### ATT&CK Technique Coverage Summary

| ATT&CK Tactic | Techniques Tested | Detected | Coverage |
|---------------|-------------------|----------|----------|
| Initial Access (TA0001) | 1 | 1 | 100% |
| Execution (TA0002) | 1 | 1 | 100% |
| Persistence (TA0003) | 2 | 2 | 100% |
| Defense Evasion (TA0005) | 2 | 2 | 100% |
| Credential Access (TA0006) | 2 | 2 | 100% |
| Lateral Movement (TA0008) | 2 | 2 | 100% |
| Collection (TA0009) | 1 | 1 | 100% |
| Exfiltration (TA0010) | 1 | 0 | 0% |
| **Total** | **12** | **11** | **91.7%** |
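
The per-tactic rollup is a straightforward group-by over the technique results. A sketch using only the tactic/status pairs from this lab's Run 4:

```python
from collections import defaultdict

# (tactic, detected?) pairs taken from the Run 4 results in this lab.
results = [
    ("Initial Access", True), ("Execution", True),
    ("Persistence", True), ("Persistence", True),
    ("Defense Evasion", True), ("Defense Evasion", True),
    ("Credential Access", True), ("Credential Access", True),
    ("Lateral Movement", True), ("Lateral Movement", True),
    ("Collection", True), ("Exfiltration", False),
]

tally = defaultdict(lambda: [0, 0])   # tactic -> [detected, tested]
for tactic, detected in results:
    tally[tactic][1] += 1
    tally[tactic][0] += int(detected)

for tactic, (det, tested) in tally.items():
    print(f"{tactic:<20} {det}/{tested}  {100 * det / tested:.0f}%")
```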

### Key Lessons Learned

  1. Automation reveals gaps faster than manual testing. The weekly automated cadence identified 8 detection gaps in the first run that manual quarterly exercises had missed.

  2. Detection-as-code with CI/CD prevents regressions. By version-controlling Sigma rules and testing them in a pipeline, ACME Security Corp ensured that new rule deployments never broke existing detections.

  3. Data source coverage is a prerequisite for detection coverage. The T1048.003 gap cannot be closed by writing better rules — it requires ingesting proxy log data. Purple team metrics should track data source coverage alongside detection coverage.

  4. MTTD varies significantly by technique. The 4.3s to 41.3s range highlights that some detection rules depend on polling intervals (Sysmon file events) while others trigger on near-real-time event streams (Kerberos authentication). Tuning infrastructure matters as much as tuning rules.

  5. Executive reporting drives investment. The trend report showing 33.3% to 91.7% coverage improvement quantifies the value of purple team operations and justifies continued investment in detection engineering.

### Cross-References


### Blank Results Table (Printable)

Copy this template for your own purple team automation runs.

| # | Technique ID | Technique Name | Tactic | Test Date | Detected? | TTD (s) | Alert Name | Rule ID | Notes |
|---|--------------|----------------|--------|-----------|-----------|---------|------------|---------|-------|
| 1 |  |  |  |  |  |  |  |  |  |
| 2 |  |  |  |  |  |  |  |  |  |
| 3 |  |  |  |  |  |  |  |  |  |
| 4 |  |  |  |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |  |  |  |
| 6 |  |  |  |  |  |  |  |  |  |
| 7 |  |  |  |  |  |  |  |  |  |
| 8 |  |  |  |  |  |  |  |  |  |
| 9 |  |  |  |  |  |  |  |  |  |
| 10 |  |  |  |  |  |  |  |  |  |
| 11 |  |  |  |  |  |  |  |  |  |
| 12 |  |  |  |  |  |  |  |  |  |
| 13 |  |  |  |  |  |  |  |  |  |
| 14 |  |  |  |  |  |  |  |  |  |
| 15 |  |  |  |  |  |  |  |  |  |

Coverage: ___/15 = ___% | Mean TTD: ___s | Gaps: ___


## Challenge Extensions

For teams that complete this lab and want to go further:

### Extension 1: Multi-OS Coverage

Extend the test plan to include Linux-specific ATT&CK techniques (T1053.003 Cron, T1059.004 Bash, T1070.002 Linux Log Clearing) and run the automation against lnx-target-01.acme.example (10.10.3.201). Write Sigma rules for auditd and syslog data sources.

Extension 2: VECTR Integration

Export the validation results to VECTR format for standardized purple team tracking. VECTR uses a campaign/assessment model that maps cleanly to the JSON reports generated in this lab.

### Extension 3: Threat Intel-Driven Expansion

Use the MITRE ATT&CK Groups knowledge base to identify additional techniques used by real threat actors targeting your industry. Add those techniques to the test plan and measure your coverage against multiple threat profiles simultaneously. See Chapter 49: Threat Intelligence Operations.

### Extension 4: Automated Slack/Teams Alerting

Add webhook notifications to the automation pipeline so that gap reports are automatically posted to a SOC channel when coverage drops below the 90% threshold.
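
A sketch of such a notifier, assuming a Slack-style incoming webhook that accepts a `{"text": ...}` JSON body. The URL is a placeholder and the `build_gap_message`/`notify_if_below_threshold` helpers are my own, not lab scripts:

```python
import json
import urllib.request

COVERAGE_THRESHOLD = 90.0
WEBHOOK_URL = "https://hooks.example.com/services/SYNTHETIC"  # placeholder, not a real webhook


def build_gap_message(coverage_pct: float, gaps: list) -> dict:
    """Slack-style incoming-webhook payload."""
    return {
        "text": (
            f":rotating_light: Purple team coverage dropped to {coverage_pct:.1f}% "
            f"(threshold {COVERAGE_THRESHOLD:.0f}%). Open gaps: {', '.join(gaps)}"
        )
    }


def notify_if_below_threshold(coverage_pct: float, gaps: list) -> bool:
    """POST the gap report only when coverage is below the threshold."""
    if coverage_pct >= COVERAGE_THRESHOLD:
        return False
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(build_gap_message(coverage_pct, gaps)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)   # would fail against the placeholder URL
    return True


if __name__ == "__main__":
    print(build_gap_message(83.3, ["T1048.003", "T1055"])["text"])
```

Keeping the message builder separate from the HTTP call makes the alert text unit-testable in the same CI pipeline without network access.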