
SOP: Detection Content Change Control

Document Type: Standard Operating Procedure
Classification: Internal
Applies to: Detection Engineers, SOC Tier 2 Analysts contributing to detection
Version: 1.0 (Template)
Nexus SecOps Control: Nexus SecOps-203


1. Purpose

This SOP defines the process for creating, modifying, testing, and retiring detection rules. All changes to production detection content MUST follow this process without exception. The goal is to prevent untested rules from causing alert storms or missing real threats.


2. Change Types and Applicability

| Change Type | This SOP Applies? | Approvals Required |
| --- | --- | --- |
| New detection rule | Yes — full process | Peer review + Staging validation |
| Modify existing rule logic | Yes — full process | Peer review + Staging validation |
| Severity change only | Yes — abbreviated | Peer review only |
| Description/label update | No — minor | Ticket documentation |
| Rule disable (emergency) | Yes — post-hoc | Document reason; full review within 48h |
| Rule retirement | Yes — abbreviated | Peer review |

3. The Detection Change Control Gate Process

Gate 1: Creation and Documentation

Before any rule enters review, the author MUST complete:

Required documentation in the detection rule PR / ticket:

# Detection Rule Documentation Header
rule_id: DET-[NEXT_SEQUENTIAL_NUMBER]
rule_name: "[Descriptive Name - what behavior this detects]"
author: "[Your name / GitHub handle]"
created: "[YYYY-MM-DD]"
version: "1.0"

# ATT&CK Mapping (required)
mitre_tactic: "[Tactic ID and Name, e.g., TA0006 - Credential Access]"
mitre_technique: "[Technique ID and Name, e.g., T1110 - Brute Force]"

# Log Sources
log_sources:
  - "[Log source 1, e.g., Azure AD Sign-in Logs]"
  - "[Log source 2]"

# Detection Logic
logic_summary: >
  "[One paragraph describing what behavior this rule detects and why that
   behavior is indicative of malicious activity vs. benign activity]"

# Expected Performance
expected_fp_rate: "[Low / Medium / High - with explanation]"
known_fp_sources: "[List any known benign sources that may trigger this]"
expected_severity: "[Critical / High / Medium / Low]"
expected_volume: "[Estimated alerts per day/week in production]"

# Business Justification
rationale: "[Why do we need this rule? What threat does it address?]"

Checklist before submitting for peer review:
- [ ] Rule logic tested locally against sample data
- [ ] At least 3 known TP test cases identified and documented
- [ ] At least 3 known FP test cases identified and documented (benign scenarios)
- [ ] Rule documentation header complete (above)
- [ ] MITRE ATT&CK mapping verified (not just guessed)
- [ ] Log source availability confirmed (the required logs exist in our SIEM)
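The "documentation header complete" check lends itself to automation in CI. A minimal sketch in Python, assuming the header has already been parsed into a dict; the field names mirror the template above, but the placeholder-detection heuristic is an illustration, not policy:

```python
# Gate 1 completeness check for the documentation header (sketch).
# Field names come from the template above; flagging values that still
# start with "[" as unfilled placeholders is an assumed convention.

REQUIRED_FIELDS = [
    "rule_id", "rule_name", "author", "created", "version",
    "mitre_tactic", "mitre_technique", "log_sources",
    "logic_summary", "expected_fp_rate", "known_fp_sources",
    "expected_severity", "expected_volume", "rationale",
]

def header_issues(header: dict) -> list[str]:
    """Return a list of problems; an empty list means the header passes."""
    issues = []
    for field in REQUIRED_FIELDS:
        value = header.get(field)
        if value in (None, "", []):
            issues.append(f"missing field: {field}")
        elif isinstance(value, str) and value.startswith("["):
            # Template placeholder like "[Your name]" was never filled in.
            issues.append(f"placeholder not filled in: {field}")
    return issues
```

A hook like this cannot judge whether the ATT&CK mapping is *accurate* (that stays with the peer reviewer in Gate 2), but it catches empty and template-placeholder fields before a human spends review time.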


Gate 2: Peer Review

Reviewer requirements: At least one experienced detection engineer or Tier 2 analyst with >6 months SOC experience MUST review the rule.

Reviewer checklist:
- [ ] R1: Logic correctly detects the described behavior (walk through the logic manually)
- [ ] R2: Logic does NOT introduce obvious false positive scenarios the author missed
- [ ] R3: Severity is appropriate for the threat described
- [ ] R4: Log source fields referenced in the rule actually exist and are populated
- [ ] R5: MITRE ATT&CK mapping is accurate
- [ ] R6: Rule documentation is complete and accurate
- [ ] R7: No hardcoded values that should be configurable (IPs, usernames)

Reviewer approval required: Documented in PR approval or ticket comment: "Reviewed and approved for staging — [Reviewer Name] [Date]"

Review rejection: If the reviewer identifies issues:
- Document specific issues in PR/ticket comments
- Return to author for revision
- Do NOT merge to staging until issues are resolved


Gate 3: Testing (Pre-Staging)

Before staging deployment, the rule MUST be tested against synthetic data:

Synthetic TP test (must FIRE):
- Document 3+ events that represent true positive scenarios
- Verify the rule fires on all TP test cases
- Record: test case name, result, timestamp

Synthetic TN test (must NOT fire):
- Document 3+ events that represent known benign scenarios (FP avoidance)
- Verify the rule does NOT fire on TN test cases
- If it does fire: revise the rule logic to add exclusions, return to Gate 2

Test documentation format:

Test Case: [Name]
Type: TP / TN
Input Data: [Brief description of the test event]
Expected Result: FIRE / NO FIRE
Actual Result: FIRE / NO FIRE
Status: PASS / FAIL
Notes: [Any observations]
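The Gate 3 procedure above can be sketched as a small harness: model the rule as a predicate over a single event, feed it the documented TP and TN cases, and emit rows matching the test documentation format. Everything here (the event shape, the example brute-force predicate, the field names) is illustrative, not a shipped tool:

```python
# Gate 3 test harness sketch. Each case carries the expected outcome:
# "FIRE" for TP cases, "NO FIRE" for TN cases.

def run_test_cases(rule_fires, cases):
    """cases: list of (name, case_type, event, expected) tuples.
    Returns rows matching the test documentation format above."""
    results = []
    for name, case_type, event, expected in cases:
        actual = "FIRE" if rule_fires(event) else "NO FIRE"
        results.append({
            "Test Case": name,
            "Type": case_type,
            "Expected Result": expected,
            "Actual Result": actual,
            "Status": "PASS" if actual == expected else "FAIL",
        })
    return results

# Hypothetical rule predicate: fires on 10+ failed logins from one source.
def rule_fires(event):
    return event.get("failed_logins", 0) >= 10
```

Any FAIL row on a TN case is exactly the "if it does fire" branch above: add exclusions and return to Gate 2.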


Gate 4: Staging Deployment (7-Day Minimum)

Staging requirements:
- Rule deployed at low severity (or informational only) for a minimum of 7 calendar days
- Alert volume monitored daily
- Each alert during staging reviewed and classified (TP / FP / Benign)

Staging metrics to track:
- Total alerts generated during the staging period
- TP count and FP count
- Calculated FP rate: FP / (TP + FP) × 100%
- Volume per day (trend)

Staging pass criteria:
- FP rate ≤ [40% for High; 30% for Critical] — if higher, return to rule logic revision
- Volume is acceptable (no alert storm that would overwhelm analysts)
- At least 1 true positive OR confirmed detection of intended behavior in test
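The FP-rate formula and pass criteria above reduce to a few lines. A sketch, with the 40%/30% thresholds taken from the pass criteria and the function shape assumed for illustration:

```python
# Staging FP-rate calculation and pass check (sketch).
# Thresholds per the pass criteria: 40% for High, 30% for Critical.
FP_RATE_THRESHOLDS = {"Critical": 30.0, "High": 40.0}

def staging_fp_rate(tp: int, fp: int) -> float:
    """FP / (TP + FP) x 100, as defined in the staging metrics."""
    if tp + fp == 0:
        return 0.0
    return fp / (tp + fp) * 100

def passes_staging(tp: int, fp: int, severity: str,
                   confirmed_in_test: bool = False) -> bool:
    """confirmed_in_test covers the 'OR confirmed detection of intended
    behavior in test' branch when no real TP occurred during staging."""
    threshold = FP_RATE_THRESHOLDS.get(severity, 40.0)
    return staging_fp_rate(tp, fp) <= threshold and (tp >= 1 or confirmed_in_test)
```

Note the check is intentionally one-directional: a passing FP rate does not waive the volume review, which stays a human judgment call.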

Staging fail response:
- Rule removed from staging
- Author revises based on observed FP patterns
- Process restarts at Gate 1


Gate 5: Production Promotion

Before promoting to production severity:

  • [ ] Staging period complete (minimum 7 days)
  • [ ] Staging metrics documented in ticket
  • [ ] FP rate within acceptable threshold
  • [ ] Approved by: Detection Lead or SOC Manager
  • [ ] Rollback plan documented: "If this rule generates >X alerts in first 24h, [action to take]"

Production deployment:
- Increase severity to intended level
- Enable SOAR integration if applicable
- Notify SOC team: "New rule LIVE: [Rule Name] — what to do when it fires: [runbook ref]"


Gate 6: Post-Deployment Monitoring (30 Days)

For 30 days after production promotion:

  • [ ] FP rate monitored weekly
  • [ ] Alert volume monitored daily
  • [ ] Week 1 review: Detection Lead assesses performance
  • [ ] Week 4 review: Confirm rule is performing to expectations

Ongoing FP threshold actions:

| FP Rate | Action |
| --- | --- |
| < 30% | No action; continue monitoring |
| 30–50% | Add exclusions; document in ticket |
| 50–70% | Downgrade severity; schedule rewrite |
| > 70% | Disable rule; schedule rewrite |
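The threshold table above maps directly to a lookup, which is useful if weekly FP monitoring is scripted. One assumption to flag: the table's bands are open-ended at the boundaries, so treating exactly 30% as "add exclusions" and exactly 50%/70% as the lower band is a choice this sketch makes, not something the SOP specifies:

```python
# Ongoing FP threshold action lookup (sketch of the table above).
# Boundary handling (30% -> exclusions band, etc.) is an assumption.

def fp_rate_action(fp_rate_pct: float) -> str:
    if fp_rate_pct < 30:
        return "No action; continue monitoring"
    if fp_rate_pct <= 50:
        return "Add exclusions; document in ticket"
    if fp_rate_pct <= 70:
        return "Downgrade severity; schedule rewrite"
    return "Disable rule; schedule rewrite"
```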

4. Emergency Rule Changes

For urgent rule changes where the standard 7-day staging is not feasible:

  1. Document the emergency justification in the ticket
  2. Peer review still REQUIRED (no exceptions)
  3. Staging period MAY be shortened to 24 hours with approval from Detection Lead
  4. Post-deployment monitoring frequency increased to daily for 7 days
  5. Full staging must be completed within 30 days of emergency deployment

Emergency bypass of peer review: NEVER PERMITTED. If the reviewer is unavailable, escalate to find another qualified reviewer.


5. Rule Retirement

When retiring a detection rule:

  • [ ] Document retirement reason: [Replaced by newer rule / No longer applicable / Persistent FP / Log source retired]
  • [ ] Confirm replacement rule covers the same threat (if retiring due to replacement)
  • [ ] Peer review of retirement decision
  • [ ] Archive rule (do not delete) — maintain in "retired" collection with retirement date
  • [ ] Notify SOC team of retirement

6. Version Control

All detection rules MUST be version-controlled in [GIT REPOSITORY NAME]:

  • Branch: feature/[rule-id]-[brief-description]
  • PR: Required; linked to change control ticket
  • Merge to main: Only after all gates passed
  • Tagging: [DET-XXX-v1.0] on production-ready commits
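The branch and tag conventions above are easy to enforce with a pre-push hook or CI check. A sketch; the exact patterns (digit count in the rule number, lowercase-hyphenated descriptions) are assumptions the team would pin down:

```python
import re

# Naming convention checks for detection-rule branches and tags (sketch).
# Branch: feature/[rule-id]-[brief-description], e.g. feature/DET-104-brute-force
# Tag:    DET-XXX-v1.0 on production-ready commits
BRANCH_RE = re.compile(r"^feature/DET-\d+-[a-z0-9-]+$")
TAG_RE = re.compile(r"^DET-\d+-v\d+\.\d+$")

def valid_branch(name: str) -> bool:
    return bool(BRANCH_RE.match(name))

def valid_tag(name: str) -> bool:
    return bool(TAG_RE.match(name))
```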

SOP-DETECTION-CC v1.0 | Owner: Detection Lead | Review: Quarterly | Nexus SecOps-203 compliant