
Chapter 36 Quiz: Purple Team Operations

Test your knowledge of purple team methodology, MITRE ATT&CK emulation, adversary simulation, detection validation, and exercise documentation.


Questions

1. What is the fundamental operational difference between a traditional red team engagement and a purple team exercise?

  • A) Red teams use commercial tools; purple teams use only open-source tools
  • B) Red teams operate covertly without blue team awareness, testing detection in a realistic scenario; purple teams involve real-time collaboration where the blue team observes each technique and tunes detection live
  • C) Purple teams are used for compliance assessments; red teams are used for technical testing
  • D) Red teams focus on network attacks; purple teams focus on social engineering only
Answer

B — Red teams operate covertly without blue team awareness, testing detection in a realistic scenario; purple teams involve real-time collaboration where the blue team observes each technique and tunes detection live

The key distinction is collaboration vs. adversarial simulation. In a red team engagement, the blue team does not know the assessment is happening — this tests detection capability authentically. In a purple team exercise, the red team executes techniques in a coordinated, transparent way while the blue team observes, detects (or fails to detect), and tunes controls in real time. Purple team maximizes learning speed; red team maximizes realism.


2. A purple team exercise executes technique T1055 (Process Injection). The blue team's SIEM shows no alert was generated. What should be the immediate next step?

  • A) Accept the detection gap and move to the next technique
  • B) Document the detection gap in the exercise tracker, analyze why detection failed (log source gap, rule gap, or tuning issue), develop or improve a detection rule, validate with synthetic test data, and track to closure
  • C) Report the failure to the CISO and suspend the exercise
  • D) Deploy an additional EDR agent to cover the gap immediately before continuing
Answer

B — Document the detection gap in the exercise tracker, analyze why detection failed (log source gap, rule gap, or tuning issue), develop or improve a detection rule, validate with synthetic test data, and track to closure

The value of purple team is the feedback loop: detect gap → root cause analysis → rule development → validation → deployment. The gap may stem from missing telemetry (EDR not logging process injection API calls), a missing detection rule, or a tuning issue (a rule exists but is suppressed by a noise filter). Immediate remediation mid-exercise is possible, but the gap must still be formally tracked to verify closure.
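The feedback loop above can be sketched as a minimal gap-tracking structure. All names here (`DetectionGap`, `GapCause`, the remediation text) are illustrative, not taken from VECTR or any specific tracker:

```python
from dataclasses import dataclass
from enum import Enum

class GapCause(Enum):
    LOG_SOURCE = "missing telemetry"       # e.g., EDR not logging the API calls
    RULE = "no detection rule"
    TUNING = "rule suppressed or mistuned"

@dataclass
class DetectionGap:
    technique: str            # ATT&CK ID, e.g. "T1055"
    cause: GapCause
    remediation: str
    validated: bool = False   # set True after a synthetic re-test fires the rule
    closed: bool = False

    def close(self) -> None:
        # Enforce the workflow order: a gap may only be closed after the
        # new or tuned detection has been validated with test data.
        if not self.validated:
            raise ValueError("validate the detection before closing the gap")
        self.closed = True

gap = DetectionGap("T1055", GapCause.RULE, "write rule for remote-thread injection")
gap.validated = True   # synthetic test data triggered the new rule
gap.close()
```

The point of the `close()` guard is that "track to closure" means closure is verified, not merely asserted.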


3. VECTR is a purpose-built tool used in purple team operations. What is its primary function?

  • A) Automated exploitation of targets during red team operations
  • B) Tracking and documenting purple team exercise results — test cases, techniques, detection outcomes, and remediation status
  • C) Visualizing network topology for attack path planning
  • D) Generating threat intelligence reports from ATT&CK framework data
Answer

B — Tracking and documenting purple team exercise results — test cases, techniques, detection outcomes, and remediation status

VECTR, a free, open-source platform from Security Risk Advisors, is designed specifically to track adversary emulation and purple team exercises. It stores test cases mapped to ATT&CK techniques, execution evidence, detection outcomes (detected/not detected/prevented), and remediation status, providing a baseline for measuring detection improvement across exercise iterations.


4. A purple team exercise uses Atomic Red Team tests. What does Atomic Red Team provide, and who maintains it?

  • A) A commercial exploitation framework maintained by CrowdStrike
  • B) A library of small, focused, portable test cases mapped to MITRE ATT&CK techniques, maintained by Red Canary, designed to validate detection without complex attack simulation
  • C) An automated purple team orchestration platform maintained by MITRE
  • D) A threat intelligence feed mapped to ATT&CK TTPs, maintained by CISA
Answer

B — A library of small, focused, portable test cases mapped to MITRE ATT&CK techniques, maintained by Red Canary, designed to validate detection without complex attack simulation

Atomic Red Team (github.com/redcanaryco/atomic-red-team) is an open-source library of "atomic tests" — small, self-contained scripts that simulate individual ATT&CK techniques (e.g., a PowerShell command that performs credential dumping). Each test is designed to validate whether a specific detection fires, making it ideal for coverage validation and purple team exercises. It is maintained by Red Canary and widely used for detection engineering validation.


5. The TIBER-EU framework governs which type of security testing, and who developed it?

  • A) Vulnerability management assessments for banks; developed by ISO
  • B) Threat Intelligence-Based Ethical Red Teaming for financial sector entities; developed by the European Central Bank
  • C) Purple team exercises for critical infrastructure; developed by ENISA
  • D) Penetration testing certification for red team operators; developed by CREST
Answer

B — Threat Intelligence-Based Ethical Red Teaming for financial sector entities; developed by the European Central Bank

TIBER-EU (Threat Intelligence-Based Ethical Red Teaming) is a European Central Bank framework for controlled, intelligence-led red team tests of critical financial infrastructure. It requires: threat intelligence from a qualified TI provider mapping to the entity's specific threat actors, followed by a controlled red team simulation of those TTPs. It has been adopted by many EU member states' central banks as the standard for testing financial sector resilience.


6. A purple team exercise concludes with a report showing that out of 45 ATT&CK technique tests, 18 generated alerts, 12 were blocked by controls, and 15 were not detected. What is the detection coverage rate?

  • A) 26.7% (12/45)
  • B) 40% (18/45)
  • C) 66.7% (30/45)
  • D) 60% (27/45)
Answer

C — 66.7% (30/45)

Detection coverage rate counts both detected (alert generated = 18) and prevented/blocked techniques (12) as positive outcomes — the organization was aware of or stopped the technique. Only the 15 undetected techniques represent coverage gaps. Total covered = 18 + 12 = 30 out of 45 = 66.7%. Some organizations track detected and prevented separately (detection rate vs. prevention rate), but combined coverage is a useful executive metric.
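The arithmetic behind the answer, with the separate rates some organizations track alongside combined coverage:

```python
# Exercise results from the question: 45 technique tests total.
detected, prevented, missed = 18, 12, 15
total = detected + prevented + missed            # 45

# Combined coverage counts both alerted and blocked techniques as positive.
coverage = (detected + prevented) / total        # 30 / 45
print(f"coverage: {coverage:.1%}")               # 66.7%

# Tracked separately, these explain why options A and B are tempting distractors.
detection_rate = detected / total                # 18 / 45 = 40.0%
prevention_rate = prevented / total              # 12 / 45 = 26.7%
```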


7. During a purple team exercise emulating the Lazarus Group (North Korean APT), the red team uses T1566.001 (Spearphishing Attachment). The blue team detects the email in their SEG logs but the endpoint does not generate an alert when the payload executes. What does this indicate?

  • A) The SEG is misconfigured and should have blocked the email
  • B) Email detection is working; endpoint detection has a coverage gap for the specific payload execution technique — requiring investigation of EDR telemetry and rule coverage
  • C) The test is invalid because the Lazarus Group would not use spearphishing
  • D) The result is a false positive — the SEG log entry was coincidental
Answer

B — Email detection is working; endpoint detection has a coverage gap for the specific payload execution technique — requiring investigation of EDR telemetry and rule coverage

The layered result shows partial detection: the SEG caught the initial delivery vector (T1566.001), but execution on the endpoint (T1204 — User Execution, or T1059 — Command and Scripting Interpreter) went undetected. This gap analysis is one of the most valuable purple team outputs — it identifies precisely which layer failed and for which technique. The remediation action is to investigate what telemetry the EDR logged during execution and to build or tune the appropriate detection rule.


8. What is the distinction between adversary emulation and adversary simulation in the context of red/purple team operations?

  • A) Adversary emulation uses real malware; adversary simulation uses synthetic scripts
  • B) Adversary emulation replicates the specific TTPs, tools, and behaviors of a named real-world threat actor; adversary simulation uses generic attack techniques without modeling a specific adversary
  • C) Adversary simulation is performed by external red teams; adversary emulation is internal only
  • D) There is no meaningful distinction — the terms are interchangeable
Answer

B — Adversary emulation replicates the specific TTPs, tools, and behaviors of a named real-world threat actor; adversary simulation uses generic attack techniques without modeling a specific adversary

Adversary emulation (as in MITRE ATT&CK Evaluations and TIBER-EU) uses threat intelligence to replicate the specific TTP chain of a named group (APT29, FIN7, Lazarus). Adversary simulation tests generic techniques without attributing them to a specific actor. Emulation provides higher relevance to an organization's actual threat profile; simulation provides broader coverage validation across the ATT&CK matrix.


9. A CISO wants to demonstrate the value of purple team operations to the board. Which metric set best quantifies the program's impact over time?

  • A) Number of red team engagements completed and hours billed
  • B) Detection coverage rate improvement (% techniques detected, measured before/after), mean time to detect (MTTD) reduction for tested techniques, and number of new detections deployed
  • C) Number of vulnerabilities found by the red team during the engagement
  • D) CVSS scores of techniques tested in the ATT&CK emulation plan
Answer

B — Detection coverage rate improvement (% techniques detected, measured before/after), mean time to detect (MTTD) reduction for tested techniques, and number of new detections deployed

Purple team ROI is demonstrated through measurable detection improvement: (1) Detection coverage rate: baseline vs. post-exercise (e.g., 45% → 67%), (2) MTTD reduction: how much faster detections fire after tuning, (3) New detections deployed: concrete deliverables from the exercise. These metrics show the board that the investment directly improves defensive capability. CVSS scores (D) are irrelevant — ATT&CK techniques are not CVSS-scored.
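A sketch of how these three metrics might be computed for a board summary. The figures are invented for illustration only (they are not results from the chapter):

```python
# Hypothetical baseline vs. post-exercise numbers for one program cycle.
baseline_covered, post_covered, total_tests = 20, 30, 45
mttd_before_min, mttd_after_min = 45.0, 12.0     # mean time to detect, minutes

# (1) Detection coverage improvement, in percentage points of the test set.
coverage_gain = (post_covered - baseline_covered) / total_tests

# (2) MTTD reduction for the tested techniques after tuning.
mttd_reduction = 1 - mttd_after_min / mttd_before_min

# (3) Concrete deliverables: new detections deployed this cycle.
new_detections = post_covered - baseline_covered

print(f"coverage +{coverage_gain:.0%}, MTTD -{mttd_reduction:.0%}, "
      f"{new_detections} new detections deployed")
```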


10. After a purple team exercise, the detection engineering team deploys a new rule to detect T1003.001 (LSASS Memory Dumping). In the first week, the rule generates 2,000 alerts, 1,990 of which are from legitimate Windows backup and AV software. What is this scenario called, and what is the correct response?

  • A) A true positive storm — escalate all 2,000 alerts immediately
  • B) Alert fatigue from excessive false positives — tune the rule to exclude known-good processes (by process hash, parent process, or signed binary attributes) before deploying to production detection queues
  • C) The rule should be deleted — if it generates false positives it is not useful
  • D) A detection gap — the rule is not detecting real LSASS dumping attempts
Answer

B — Alert fatigue from excessive false positives — tune the rule to exclude known-good processes (by process hash, parent process, or signed binary attributes) before deploying to production detection queues

A detection rule with a 99.5% false positive rate creates alert fatigue that actually degrades security (analysts stop investigating). The correct response is rule tuning: analyze FP patterns, identify the distinguishing characteristics of legitimate vs. malicious LSASS access (parent process, process name, code signing status, access mask), and add exclusions. This is normal detection engineering workflow — raw rules from purple team require production tuning before deployment.
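A minimal sketch of the tuning step, assuming a simplified event schema. The field names (`process_path`, `signed`) and the AV path are illustrative and not tied to any real SIEM or EDR product; `wbengine.exe` is the Windows Block Level Backup Engine binary:

```python
# Allowlist of (process path, is-signed) pairs observed as benign LSASS access.
KNOWN_GOOD = {
    ("c:\\windows\\system32\\wbengine.exe", True),  # Windows Backup engine, signed
    ("c:\\program files\\av\\avscan.exe", True),    # hypothetical AV scanner, signed
}

def should_alert(event: dict) -> bool:
    """Suppress LSASS-access alerts from known-good signed processes."""
    key = (event["process_path"].lower(), event["signed"])
    return key not in KNOWN_GOOD

# Legitimate backup access is tuned out; an unsigned dumper still alerts.
print(should_alert({"process_path": "C:\\Windows\\System32\\wbengine.exe", "signed": True}))
print(should_alert({"process_path": "C:\\Users\\tmp\\procdump.exe", "signed": False}))
```

In production this logic would live in the rule itself (e.g., as exclusion conditions keyed on parent process, code-signing status, or access mask) rather than in post-processing, but the principle is the same: exclude characterized known-good activity, keep everything else alertable.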


Scoring

Score Performance
9–10 Expert — Purple team methodology, adversary emulation, and detection validation fully mastered
7–8 Proficient — Ready to lead purple team exercises and build detection improvement programs
5–6 Developing — Review Chapter 36 sections on VECTR, ATT&CK emulation plans, and detection gap analysis
<5 Foundational — Re-read Chapter 36 before proceeding
