Skip to content

Chapter 55 Quiz: Threat Modeling Operations

Test your knowledge of STRIDE, PASTA, LINDDUN threat modeling frameworks, data flow diagrams, trust boundaries, attack trees, continuous threat modeling in DevSecOps, IaC-based threat analysis, attack surface management, Kubernetes and AI/ML threat modeling, detection engineering from threat models, and threat modeling program maturity.


Questions

1. What are the four fundamental questions that every threat model must answer?

  • A) Who, what, when, where
  • B) (1) What are we building? — define architecture, components, data flows, trust boundaries. (2) What can go wrong? — systematically identify threats using frameworks like STRIDE. (3) What are we going to do about it? — define mitigations, accept residual risk, or transfer risk. (4) Did we do a good enough job? — validate through testing, review, and continuous reassessment
  • C) Attack, defend, detect, respond
  • D) Plan, build, test, deploy
Answer

B — (1) What are we building? (2) What can go wrong? (3) What are we going to do about it? (4) Did we do a good enough job?

These four questions (originally formulated by Adam Shostack) are the foundation of all threat modeling methodologies. They ensure that threat modeling is structured and complete: understanding the system (DFDs, trust boundaries), identifying threats (STRIDE, PASTA, attack trees), deciding on mitigations (accept, mitigate, transfer, avoid), and validating that the analysis is sufficient (testing, peer review, continuous reassessment). Skipping any question creates blind spots.


2. In STRIDE, what security property does each threat category violate, and why are trust boundaries the most critical element to analyze?

  • A) STRIDE threats only apply to web applications
  • B) Spoofing violates Authentication, Tampering violates Integrity, Repudiation violates Non-repudiation, Information Disclosure violates Confidentiality, Denial of Service violates Availability, Elevation of Privilege violates Authorization — trust boundaries are critical because every point where data crosses a trust boundary (e.g., internet→DMZ, DMZ→internal, app→database) requires validation, authentication, and authorization, and threats cluster at these crossing points
  • C) STRIDE categories are optional — you only need to check 2-3 per system
  • D) Trust boundaries only exist between networks, not between software components
Answer

B — Each STRIDE category maps to a violated security property, and threats cluster at trust boundary crossings

STRIDE-per-element analysis applies different threat subsets to different DFD elements: External Entities are susceptible to Spoofing and Repudiation, Processes to all six categories, Data Stores to Tampering/Information Disclosure/DoS, and Data Flows to Tampering/Information Disclosure/DoS. Trust boundaries are where privilege levels change — the internet-to-DMZ boundary, the application-to-database boundary, the user-process-to-kernel boundary. Every data flow crossing a trust boundary is an attack surface that needs controls. Focusing STRIDE analysis on trust boundary crossings first provides the highest ROI.


3. Compare STRIDE and PASTA. When would you choose PASTA over STRIDE?

  • A) PASTA is always better than STRIDE
  • B) STRIDE is a per-component threat classification framework best for development teams doing quick design reviews — it answers "what threats apply to this component?" PASTA is a seven-stage risk-centric methodology that connects business objectives to technical threats — it answers "what is the business risk of this threat?" Choose PASTA when risk quantification is required, executive-facing deliverables are needed, compliance mandates a formal risk assessment, or the system's business impact must justify security investment
  • C) PASTA is faster than STRIDE
  • D) STRIDE requires more people than PASTA
Answer

B — STRIDE for component-level classification, PASTA for business-risk-driven assessment

PASTA's seven stages (Define Objectives → Technical Scope → Decomposition → Threat Analysis → Vulnerability Analysis → Attack Modeling → Risk & Impact) produce a risk-ranked mitigation plan with business impact quantification. This is necessary when: (1) leadership needs to understand ROI of security controls, (2) compliance frameworks (SOC 2, ISO 27001) require documented risk assessments, (3) the system handles financial/health/PII data where breach cost must be quantified. STRIDE is preferred when: (1) developers need quick per-component analysis during design reviews, (2) the team is new to threat modeling and needs a simple framework, (3) the goal is broad threat identification rather than risk quantification.


4. What is LINDDUN, and why is it essential for AI/ML systems and GDPR compliance?

  • A) LINDDUN is a network security framework
  • B) LINDDUN is a privacy-focused threat modeling framework covering seven categories: Linkability, Identifiability, Non-repudiation (as a privacy concern), Detectability, Disclosure, Unawareness, and Non-compliance — it's essential for AI/ML systems because they create unique privacy threats (model memorization, membership inference, training data reconstruction) and for GDPR because it systematically identifies where personal data processing violates privacy principles (consent, purpose limitation, data minimization)
  • C) LINDDUN replaces STRIDE entirely
  • D) LINDDUN only applies to healthcare systems
Answer

B — LINDDUN covers 7 privacy threat categories essential for AI systems and regulatory compliance

LINDDUN addresses threats that STRIDE misses entirely. An AI model trained on user data creates: Linkability (model memorizes training data, adversary queries model to link data points), Identifiability (model inversion reconstructs individual training examples), Disclosure (membership inference reveals if a person's data was in the training set), Unawareness (users don't know their data was used for training), Non-compliance (training on personal data without consent violates GDPR Article 6). STRIDE's Information Disclosure category covers unauthorized access but NOT these privacy-specific threats.


5. How do you construct a quantitative attack tree, and what advantage does it provide over qualitative threat listing?

  • A) Attack trees only list threats without prioritization
  • B) An attack tree decomposes a high-level goal (e.g., "steal customer data") into sub-goals connected by AND/OR gates, then assigns quantitative values to each leaf node — probability of success, cost to attacker, and impact to defender — allowing you to calculate the risk of each attack path (probability × impact), identify the highest-risk path, and prioritize mitigations on the path that provides the maximum risk reduction per dollar spent
  • C) Attack trees require specialized software to build
  • D) Quantitative values are always guesses and therefore useless
Answer

B — Attack trees decompose goals into sub-goals with AND/OR gates and quantitative values for data-driven prioritization

The key advantage of quantitative attack trees is data-driven prioritization. A qualitative list says "SQL injection is a threat" — but how likely is it compared to insider threat? An attack tree with probability estimates answers this: SQL injection (p=0.3, cost=$500, impact=$2M, risk=$600K) vs insider threat (p=0.1, cost=$0, impact=$5M, risk=$500K) → SQL injection has higher expected risk despite lower impact. This enables rational resource allocation: mitigate the highest-risk path first, then re-evaluate.


6. How does continuous threat modeling integrate into a CI/CD pipeline, and what triggers a threat model update?

  • A) Threat models only need to be created once before launch
  • B) Continuous threat modeling automates threat identification on every significant change: IaC scanners (Checkov, tfsec) identify infrastructure threats on commit, SAST tools detect code-level threats, dependency scanners flag supply chain risks, and architecture diff detectors identify new trust boundary crossings — triggers for update include: new components added, trust boundaries changed, data classification changed, new external integrations, post-incident findings, and regulatory changes
  • C) CI/CD pipelines can only run SAST, not threat modeling
  • D) Continuous threat modeling requires a dedicated security team for every pipeline
Answer

B — Automated threat analysis on every commit, with specific triggers for formal threat model updates

The shift from "annual threat model workshop" to "continuous threat analysis" mirrors the shift from "annual pentest" to "continuous security testing." Architecture-as-code (defining systems in YAML) enables programmatic analysis: when a developer adds a new API endpoint that crosses a trust boundary, the CI pipeline detects this, applies STRIDE automatically, and creates a PR comment: "New trust boundary crossing detected. STRIDE analysis: Spoofing (JWT forgery), Tampering (input validation), Information Disclosure (error messages). Review required."


7. What are the unique threat modeling challenges for Kubernetes environments?

  • A) Kubernetes uses the same threat model as traditional infrastructure
  • B) Kubernetes introduces unique threats at every layer: API Server (anonymous auth, admission webhook bypass, RBAC misconfiguration), etcd (unencrypted secrets at rest, direct write access), Kubelet (anonymous API access, host namespace access), Pods (privileged containers, container escape, environment variable secrets), Network (flat network without network policies, east-west traffic), and Supply Chain (unsigned images, vulnerable base images, typosquatted registries) — each requiring layer-specific STRIDE analysis with Kubernetes-native detection queries
  • C) Only the Pod layer needs threat modeling
  • D) Kubernetes security is entirely handled by the cloud provider
Answer

B — Kubernetes introduces threats at every layer requiring Kubernetes-native STRIDE analysis

Traditional threat models focus on network boundaries and application logic. Kubernetes adds entirely new attack surfaces: the API server is a single point of compromise (RBAC misconfiguration = cluster-wide privilege escalation), etcd stores all cluster state including secrets (direct access = complete cluster compromise), the kubelet API on each node can execute commands in any pod (anonymous access = remote code execution), and the flat network model means any pod can talk to any other pod by default (lateral movement without network controls). Detection queries must be Kubernetes-native: audit log monitoring for privileged pod creation, network policy enforcement verification, RBAC change alerts.


8. How do you threat model AI/ML systems, and what frameworks extend STRIDE for AI-specific threats?

  • A) Standard STRIDE is sufficient for AI systems
  • B) AI systems require extended frameworks because they face threats STRIDE doesn't cover: model theft (extracting weights via prediction API — MITRE ATLAS AML.T0024), training data poisoning (manipulating training data to bias outputs — AML.T0020), adversarial examples (crafted inputs causing misclassification — AML.T0015), prompt injection (hijacking LLM behavior — OWASP LLM Top 10 LLM01), and membership inference (determining if data was in training set) — use STRIDE for the infrastructure around AI + MITRE ATLAS + OWASP LLM Top 10 for AI-native threats
  • C) AI threats only apply to LLMs, not traditional ML
  • D) MITRE ATLAS is a replacement for ATT&CK
Answer

B — Use STRIDE + MITRE ATLAS + OWASP LLM Top 10 for comprehensive AI threat coverage

STRIDE covers the infrastructure threats (API authentication, data store access, network encryption) but misses the AI-native threats that arise from the model itself being an attack surface. A prompt injection attack doesn't violate any STRIDE category cleanly — it's not spoofing (the user is authenticated), not tampering (the input is syntactically valid), not EoP (the user already has API access). It's a fundamentally new threat category: manipulating the AI's reasoning process. MITRE ATLAS catalogs 25+ AI-specific techniques, and OWASP LLM Top 10 covers the 10 most critical risks specific to LLM applications.


9. What is the most valuable output of a threat model, and how do you close the loop between threat identification and operational detection?

  • A) The most valuable output is a PDF document
  • B) The most valuable output is a detection query — the threat-to-detection pipeline works by: (1) threat model identifies a specific threat (e.g., "attacker modifies order amount via parameter tampering"), (2) the threat description is translated into observable indicators (parameter value changes between cart and charge), (3) a KQL/SPL detection query is written to monitor for that indicator, (4) the query is deployed to the SIEM, (5) when the alert fires, the IR playbook references the original threat model for context — if a threat doesn't produce a detection query, it's documentation, not defense
  • C) Detection queries are separate from threat modeling
  • D) Only Critical-severity threats need detection queries
Answer

B — The most valuable output is a detection query; if a threat doesn't produce a detection, it's documentation, not defense

This is the critical insight that separates operational threat modeling from academic threat modeling. A threat model that produces a 50-page PDF describing 200 threats but no detection queries has zero operational value — it describes risk without enabling response. The threat-to-detection pipeline ensures every identified threat results in a measurable security control: either a preventive control (WAF rule, IAM policy) or a detective control (SIEM query, EDR rule). The threat model becomes a living document: when new threats are identified, new detections are deployed; when detections fire, the threat model is validated.


10. What distinguishes a Level 1 (Ad Hoc) threat modeling program from a Level 5 (Optimized) program?

  • A) Level 5 just does more threat models
  • B) Level 1 has no standard methodology, is dependent on individual expertise, has no tracking or metrics. Level 5 has: automated threat identification via IaC scanning, architecture-as-code analysis with programmatic STRIDE generation, threat models version-controlled alongside code, ASM feeds directly into threat models, every threat automatically generates a detection query, metrics prove ROI (mitigation rate >80%, detection coverage >50%), and regular tabletop exercises validate the program — the difference is that Level 1 treats threat modeling as a one-time event while Level 5 treats it as a continuous, automated, measured engineering discipline
  • C) The only difference is team size
  • D) Level 5 requires commercial threat modeling tools
Answer

B — Level 1 is ad-hoc and manual; Level 5 is continuous, automated, and measured

The maturity progression: Level 1 (Ad Hoc) → Level 2 (Repeatable: standard methodology, templates) → Level 3 (Defined: integrated into SDLC, every major change includes threat model) → Level 4 (Managed: IaC scanning, automated STRIDE generation, threat model versioning) → Level 5 (Optimized: threats drive detection engineering, ASM feeds into models, metrics prove ROI, AI-assisted threat identification). Key metrics at Level 5: threat model coverage (100% critical systems), time to model (<4 hours), mitigation implementation rate (>80% within 90 days), detection query generation rate (>50% of threats).


Scoring Guide

Score Assessment Recommended Action
9-10 (90-100%) Excellent — Strong mastery of threat modeling operations Ready for advanced practice
7-8 (70-89%) Good — Solid understanding with minor gaps Review LINDDUN and AI/ML threat modeling sections
5-6 (50-69%) Developing — Key concepts need reinforcement Re-read Chapter 55 sections 55.2, 55.5, 55.10
Below 5 (<50%) Needs Review — Revisit prerequisites Review Chapter 30 then re-read Chapter 55

Study Recommendations

  • Before the quiz: Read Chapter 55 completely, focusing on STRIDE-per-element analysis, PASTA's seven stages, and the threat-to-detection pipeline
  • Hands-on practice: Build a STRIDE threat model for a simple web application using the DFD template in Section 55.1
  • Spaced repetition: Retake this quiz in 3-5 days to reinforce threat modeling concepts