Chapter 55: Threat Modeling Operations

Overview

Threat modeling is the disciplined practice of identifying, categorizing, and prioritizing potential threats to a system before those threats are exploited. Unlike reactive security disciplines — incident response, forensics, vulnerability management — threat modeling is inherently proactive. It asks: What could go wrong? Who would attack this? How would they do it? What are we going to do about it? These questions, asked early and asked often, fundamentally shift the economics of defense. A threat identified during architecture review costs orders of magnitude less to mitigate than one discovered during incident response. Yet most organizations either skip threat modeling entirely or perform it as a one-time compliance checkbox that rapidly becomes stale. This chapter transforms threat modeling from an occasional exercise into a continuous operational capability embedded in the development lifecycle.

The landscape of threat modeling methodologies has matured significantly. STRIDE provides a systematic categorization of threats against system components. PASTA delivers a risk-centric, attacker-focused process that connects business objectives to technical threats. LINDDUN addresses the increasingly critical domain of privacy threats. Attack trees formalize adversary decision-making into analyzable structures. Kill chain models map threats to the sequential phases of an attack. Each methodology has strengths, blind spots, and ideal application contexts — and mature organizations combine them rather than picking one. This chapter covers each methodology with worked examples using synthetic architectures, then shows how to operationalize them: integrating threat modeling into CI/CD pipelines, automating threat identification from infrastructure-as-code, connecting threat models to detection engineering, and measuring program maturity.

The critical gap this chapter addresses is the bridge between threat modeling output and security operations action. A threat model that produces a PDF which sits in a SharePoint folder helps no one. A threat model that produces detection rules in KQL and SPL, drives attack surface reduction initiatives, informs red team engagement scoping, and generates measurable risk reduction metrics — that is an operational capability. We cover the complete pipeline from threat identification through mitigation verification, with special attention to cloud-native architectures, Kubernetes deployments, AI/ML systems, and the emerging discipline of attack surface management. Every section connects back to detection engineering and the SOC: if you model a threat, you should be able to detect it.

Educational Content Only

All techniques, architecture diagrams, IP addresses, domain names, and scenarios in this chapter are 100% synthetic and created for educational purposes only. IP addresses use RFC 5737 (192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24) and RFC 1918 ranges (10.x, 172.16.x, 192.168.x). Domains use *.example.com and *.example. All credentials shown are placeholders (testuser/REDACTED). Application names such as "SynthApp" or "PhantomAPI" are entirely fictional. Never execute offensive techniques without explicit written authorization against systems you own or have written permission to test.

Learning Objectives

By the end of this chapter, students SHALL be able to:

  1. Analyze the core principles, strengths, and limitations of STRIDE, PASTA, LINDDUN, and attack tree methodologies, selecting the appropriate framework for a given system context (Analysis)
  2. Design a threat model for a multi-tier web application that identifies threats across all STRIDE categories, maps them to MITRE ATT&CK techniques, and produces prioritized mitigation recommendations (Synthesis)
  3. Evaluate privacy threats using the LINDDUN framework, assessing data flow diagrams for linkability, identifiability, non-repudiation, detectability, disclosure, unawareness, and non-compliance risks (Evaluation)
  4. Construct attack trees that decompose high-level adversary objectives into granular attack paths with cost, skill, and detectability annotations (Application)
  5. Implement continuous threat modeling workflows integrated into CI/CD pipelines, triggering automated threat analysis on infrastructure-as-code and architecture changes (Application)
  6. Create detection queries in KQL and SPL derived directly from threat model outputs, establishing a traceable link between modeled threats and deployed detection coverage (Synthesis)
  7. Assess attack surface management programs that continuously discover, inventory, and prioritize externally exposed assets for threat modeling prioritization (Evaluation)
  8. Design threat models for cloud-native Kubernetes architectures addressing container escape, service mesh bypass, RBAC misconfiguration, and supply chain injection vectors (Synthesis)
  9. Formulate threat models for AI/ML systems covering model theft, training data poisoning, adversarial inputs, prompt injection, and model inversion attacks (Synthesis)
  10. Develop a threat modeling maturity assessment framework with measurable KPIs that tracks progression from ad-hoc exercises to continuous, automated, metrics-driven operations (Synthesis)

Prerequisites


MITRE ATT&CK Threat Modeling Mapping

Technique ID Technique Name Threat Modeling Context Tactic
T1190 Exploit Public-Facing Application Modeled threat: input validation failures on internet-facing services Initial Access (TA0001)
T1078 Valid Accounts Modeled threat: credential theft or abuse of legitimate accounts Initial Access (TA0001)
T1068 Exploitation for Privilege Escalation Modeled threat: privilege boundary violations in multi-tier architectures Privilege Escalation (TA0004)
T1548 Abuse Elevation Control Mechanism Modeled threat: bypassing authorization controls through design flaws Privilege Escalation (TA0004)
T1557 Adversary-in-the-Middle Modeled threat: unencrypted communication channels between services Credential Access (TA0006)
T1021 Remote Services Modeled threat: lateral movement through exposed management interfaces Lateral Movement (TA0008)
T1071.001 Application Layer Protocol: Web Protocols Modeled threat: C2 communication blending with legitimate HTTPS traffic Command & Control (TA0011)
T1565.001 Data Manipulation: Stored Data Manipulation Modeled threat: integrity attacks on databases and configuration stores Impact (TA0040)
T1195.002 Supply Chain Compromise: Compromise Software Supply Chain Modeled threat: compromised dependencies injected into build pipeline Initial Access (TA0001)
T1098 Account Manipulation Modeled threat: persistence through account creation or privilege modification Persistence (TA0003)
T1530 Data from Cloud Storage Object Modeled threat: misconfigured cloud storage exposing sensitive data Collection (TA0009)
T1562.001 Impair Defenses: Disable or Modify Tools Modeled threat: attacker disabling security monitoring after initial access Defense Evasion (TA0005)

55.1 Threat Modeling Fundamentals

Threat modeling is the structured process of identifying what can go wrong in a system, how likely it is, what the impact would be, and what to do about it. It is the single most cost-effective security activity an organization can perform — but only if it is done systematically, maintained continuously, and connected to operational security controls.

55.1.1 The Four Questions of Threat Modeling

Adam Shostack's foundational framework reduces threat modeling to four questions that apply regardless of methodology:

Question Purpose Output
What are we working on? Scope definition, system decomposition Architecture diagrams, DFDs, asset inventory
What can go wrong? Threat identification and enumeration Threat list with categories and attack vectors
What are we going to do about it? Mitigation planning and prioritization Risk treatment decisions: mitigate, accept, transfer, avoid
Did we do a good job? Validation and continuous improvement Coverage metrics, red team validation, incident correlation

55.1.2 When to Threat Model

The Threat Modeling Timing Matrix

Threat modeling is not a one-time activity. Different triggers demand different depths of analysis.

Trigger Depth Output Example
New system design Full model Complete threat model document with DFDs, threats, mitigations New customer-facing API gateway
Major architecture change Incremental update Updated DFDs, new threats for changed components Migrating from monolith to microservices
New feature/user story Lightweight review Threat annotations on design docs or Jira tickets Adding OAuth provider integration
Incident post-mortem Focused analysis Gap analysis — why was this threat not modeled? After a privilege escalation incident
Compliance requirement Structured assessment Formal threat model mapped to regulatory controls PCI DSS Requirement 6, SOC 2 Type II
Periodic review Refresh cycle Updated threat landscape, new attack techniques Annual threat model refresh
Dependency change Supply chain review Component threat analysis Adopting new open-source framework

55.1.3 Data Flow Diagrams (DFDs) — The Foundation

Every threat modeling methodology begins with understanding the system. Data Flow Diagrams (DFDs) are the standard notation for decomposing systems into analyzable components.

DFD elements:

┌──────────────────────────────────────────────────────────────────────┐
│                    DFD NOTATION ELEMENTS                             │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  ┌─────────────┐                                                     │
│  │             │   External Entity: Users, systems, APIs outside     │
│  │  External   │   the trust boundary. Source/sink of data.          │
│  │  Entity     │                                                     │
│  └─────────────┘                                                     │
│                                                                      │
│  ╔═════════════╗                                                     │
│  ║             ║   Process: Code that transforms, validates, or      │
│  ║  Process    ║   routes data. Each process is a potential target.   │
│  ║             ║                                                     │
│  ╚═════════════╝                                                     │
│                                                                      │
│  ══════════════     Data Store: Databases, files, caches, queues.    │
│  ║  Data Store ║   Where data rests. Integrity and confidentiality   │
│  ══════════════     targets.                                         │
│                                                                      │
│  ───────────────>   Data Flow: Movement of data between elements.    │
│                     Each flow can be intercepted, modified, or       │
│                     replayed.                                        │
│                                                                      │
│  - - - - - - - -    Trust Boundary: Separates zones of different     │
│  :  Trust      :    privilege levels. Threats concentrate at          │
│  :  Boundary   :    boundary crossings.                              │
│  - - - - - - - -                                                     │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘

Example DFD — SynthApp E-Commerce Platform:

┌──────────────────────────────────────────────────────────────────────┐
│                  SYNTHAPP E-COMMERCE DFD (LEVEL 1)                   │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  ┌──────────┐    HTTPS     ╔══════════════╗    SQL      ══════════   │
│  │ Customer │ ──────────> ║  Web App      ║ ────────> ║ Customer ║  │
│  │ Browser  │ <────────── ║  (Node.js)    ║ <──────── ║ Database ║  │
│  └──────────┘    HTML/JS   ║ 192.168.1.10  ║           ║ 10.0.1.5 ║  │
│       │                    ╚══════════════╝            ══════════   │
│       │                         │    │                               │
│  - - -│- - - - - - - - - - - - │- - │ - - - Trust Boundary - - - - │
│       │                         │    │        (DMZ / Internal)       │
│       │                    REST │    │ gRPC                          │
│       │                         │    │                               │
│       │                    ╔════╧════╧═════╗   API     ══════════   │
│       │                    ║  Payment       ║ ──────> ║  Payment ║  │
│       │                    ║  Service       ║ <────── ║  Gateway ║  │
│       │                    ║  10.0.2.10     ║          ║ External ║  │
│       │                    ╚════════════════╝          ══════════   │
│       │                         │                                    │
│       │                    ╔════╧═══════════╗                        │
│       │                    ║  Auth Service   ║                       │
│       │                    ║  (OAuth2/OIDC)  ║                       │
│       │                    ║  10.0.2.20      ║                       │
│       │                    ╚════════════════╝                        │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘
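A DFD becomes far more useful once it is machine-readable, because trust-boundary crossings can then be found mechanically instead of by eyeball. The sketch below models the SynthApp Level 1 DFD as simple data structures; the class and function names are illustrative, not from any standard tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    kind: str   # "external", "process", or "datastore"
    zone: str   # trust zone, e.g. "internet", "dmz", "internal"

@dataclass(frozen=True)
class Flow:
    source: Element
    dest: Element
    protocol: str

def boundary_crossings(flows):
    """Return flows whose endpoints sit in different trust zones --
    the boundary crossings where threats concentrate."""
    return [f for f in flows if f.source.zone != f.dest.zone]

# A fragment of the SynthApp DFD above
browser  = Element("Customer Browser", "external", "internet")
webapp   = Element("Web App (Node.js)", "process", "dmz")
database = Element("Customer Database", "datastore", "internal")

flows = [
    Flow(browser, webapp, "HTTPS"),
    Flow(webapp, database, "SQL"),
]

for f in boundary_crossings(flows):
    print(f"{f.source.name} -> {f.dest.name} ({f.protocol}) "
          f"crosses {f.source.zone}/{f.dest.zone} boundary")
```

Both flows in this fragment cross a trust boundary, which is exactly why the worked STRIDE example later in this chapter analyzes them first.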

55.1.4 Asset Identification and Trust Boundaries

Effective threat modeling requires explicit identification of assets worth protecting and the boundaries between trust zones.

Asset categories for threat modeling:

Category Examples Primary Threats
Data assets Customer PII, payment card data, API keys, session tokens Disclosure, tampering, exfiltration
Compute assets Application servers, containers, serverless functions Code execution, resource hijacking
Identity assets User accounts, service accounts, certificates, OAuth tokens Credential theft, impersonation
Network assets APIs, load balancers, DNS, service mesh Interception, denial of service
Configuration assets IAM policies, firewall rules, Kubernetes RBAC Misconfiguration, privilege escalation
Supply chain assets Dependencies, container images, build pipelines Tampering, injection, compromise

55.2 STRIDE Framework Deep Dive

STRIDE is Microsoft's threat classification framework, developed by Loren Kohnfelder and Praerit Garg. It categorizes threats into six types, each representing a violation of a specific security property. STRIDE remains the most widely used threat modeling framework because of its simplicity, its systematic coverage, and its direct mapping to security controls.

55.2.1 The STRIDE Categories

Threat Security Property Violated Question to Ask Example
Spoofing Authentication Can an attacker pretend to be someone/something else? Forged JWT token to impersonate admin
Tampering Integrity Can an attacker modify data in transit or at rest? SQL injection modifying order amounts
Repudiation Non-repudiation Can an actor deny performing an action? User disputes a transaction with no audit log
Information Disclosure Confidentiality Can an attacker access data they should not? API endpoint leaking other users' PII
Denial of Service Availability Can an attacker make the system unavailable? Application-layer flood exhausting thread pool
Elevation of Privilege Authorization Can an attacker gain higher privileges? IDOR vulnerability granting admin access

55.2.2 STRIDE-per-Element Analysis

The most rigorous application of STRIDE applies each threat category to each DFD element type. Not every category applies to every element type:

DFD Element       S   T   R   I   D   E
External Entity   X       X
Process           X   X   X   X   X   X
Data Store            X   X   X   X
Data Flow             X       X   X
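
The applicability matrix can drive threat enumeration mechanically: cross each DFD element with the STRIDE categories that apply to its element type, producing the raw worksheet a modeling session works through. A minimal sketch (element names are hypothetical, and the matrix mirrors the table above):

```python
# STRIDE categories applicable to each DFD element type,
# per the STRIDE-per-element applicability matrix.
STRIDE_MATRIX = {
    "external":  ["Spoofing", "Repudiation"],
    "process":   ["Spoofing", "Tampering", "Repudiation",
                  "Information Disclosure", "Denial of Service",
                  "Elevation of Privilege"],
    "datastore": ["Tampering", "Repudiation",
                  "Information Disclosure", "Denial of Service"],
    "dataflow":  ["Tampering", "Information Disclosure",
                  "Denial of Service"],
}

def enumerate_threats(elements):
    """Yield (element name, STRIDE category) pairs to analyze."""
    for name, kind in elements:
        for category in STRIDE_MATRIX[kind]:
            yield name, category

elements = [("Customer Browser", "external"),
            ("Web App", "process"),
            ("Customer Database", "datastore"),
            ("Web App -> DB SQL flow", "dataflow")]

worksheet = list(enumerate_threats(elements))
print(f"{len(worksheet)} element/category pairs to analyze")  # 15
```

Each generated pair becomes one row of a worked-example table like those in the next section; the value of the mechanical pass is that no element/category combination is silently skipped.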

55.2.3 Worked Example: STRIDE on SynthApp

Applying STRIDE-per-Element to the SynthApp e-commerce platform DFD:

Element: Web Application Process (192.168.1.10)

Threat Specific Threat Attack Vector Mitigation Priority
Spoofing Attacker forges session cookie to impersonate customer Cookie theft via XSS, session fixation HttpOnly + Secure + SameSite cookies; CSP headers HIGH
Tampering Attacker modifies price in client-side form data Parameter manipulation in POST request Server-side price validation from database CRITICAL
Repudiation Customer disputes they placed an order Insufficient audit logging Immutable audit log with timestamp, IP, user agent MEDIUM
Info Disclosure Error page leaks stack trace with database schema Unhandled exception in production Custom error pages; structured logging without sensitive data HIGH
Denial of Service Attacker sends malformed JSON payloads causing CPU spike ReDoS in input validation regex Input size limits; regex complexity analysis; rate limiting HIGH
Elevation of Privilege Attacker modifies user role in JWT payload Weak JWT signing (HS256 with weak secret) RS256 with strong key pair; server-side role lookup CRITICAL

Element: Data Flow — Web App to Customer Database (SQL)

Threat Specific Threat Attack Vector Mitigation Priority
Tampering SQL injection modifying customer data Unsanitized input in SQL queries Parameterized queries; ORM with prepared statements CRITICAL
Info Disclosure Database query results include columns not needed Excessive data retrieval (SELECT *) Column-specific queries; view-based access control MEDIUM
Denial of Service Unbounded query exhausts database connections Missing pagination; no query timeout Query timeouts; connection pool limits; pagination enforcement HIGH

55.2.4 STRIDE Threat Modeling Detection Queries

Detecting threats identified through STRIDE analysis:

KQL (Microsoft Sentinel):

// Detect potential spoofing: multiple sessions from different locations for same user
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType == 0  // Successful sign-in
| summarize DistinctIPs = dcount(IPAddress),
            DistinctLocations = dcount(Location),
            IPList = make_set(IPAddress, 5),
            LocationList = make_set(Location, 5)
  by UserPrincipalName
| where DistinctLocations > 2
| extend AlertTitle = strcat("STRIDE-Spoofing: User ", UserPrincipalName,
                              " authenticated from ", DistinctLocations,
                              " locations within 1 hour")

SPL (Splunk):

index=auth sourcetype=azure:signinlogs ResultType=0
| bin _time span=1h
| stats dc(IPAddress) as DistinctIPs,
        dc(Location) as DistinctLocations,
        values(IPAddress) as IPList,
        values(Location) as LocationList
  by UserPrincipalName, _time
| where DistinctLocations > 2
| eval AlertTitle="STRIDE-Spoofing: User ".UserPrincipalName
        ." authenticated from ".DistinctLocations
        ." locations within 1 hour"
KQL (Microsoft Sentinel):

// Detect potential tampering: unauthorized configuration changes
AuditLogs
| where TimeGenerated > ago(24h)
| where OperationName has_any ("Update", "Modify", "Set", "Change")
| where Category == "Policy" or Category == "RoleManagement"
| where Result == "success"
| extend Initiator = tostring(InitiatedBy.user.userPrincipalName)
| where Initiator !in ("admin@synthapp.example.com",
                       "svc-config@synthapp.example.com")
| project TimeGenerated, OperationName, Category, Initiator,
          TargetResource = tostring(TargetResources[0].displayName)
| extend AlertTitle = "STRIDE-Tampering: Unauthorized configuration change"

SPL (Splunk):

index=azure_audit sourcetype=azure:audit
| search OperationName IN ("Update*", "Modify*", "Set*", "Change*")
| search Category IN ("Policy", "RoleManagement") Result="success"
| search NOT InitiatedBy.user.userPrincipalName IN
        ("admin@synthapp.example.com", "svc-config@synthapp.example.com")
| table _time, OperationName, Category,
        InitiatedBy.user.userPrincipalName,
        TargetResources{}.displayName
| eval AlertTitle="STRIDE-Tampering: Unauthorized configuration change"

55.3 PASTA — Process for Attack Simulation and Threat Analysis

PASTA is a seven-stage, risk-centric threat modeling methodology that takes an attacker's perspective. Unlike STRIDE, which focuses on threat categorization, PASTA emphasizes business impact and attacker simulation. It connects business objectives to technical threats through a structured, iterative process.

55.3.1 The Seven Stages of PASTA

┌──────────────────────────────────────────────────────────────────────┐
│                    PASTA SEVEN-STAGE PROCESS                         │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  Stage 1: Define Objectives                                          │
│  ├── Business objectives and security requirements                   │
│  ├── Compliance mandates (PCI, HIPAA, GDPR)                        │
│  └── Risk appetite and tolerance thresholds                          │
│                                                                      │
│  Stage 2: Define Technical Scope                                     │
│  ├── Application architecture and components                        │
│  ├── Infrastructure and network topology                             │
│  └── Data flows and trust boundaries                                 │
│                                                                      │
│  Stage 3: Application Decomposition                                  │
│  ├── Data flow diagrams (DFDs)                                       │
│  ├── Use cases and abuse cases                                       │
│  └── Entry points and exit points                                    │
│                                                                      │
│  Stage 4: Threat Analysis                                            │
│  ├── Threat intelligence (industry-specific)                         │
│  ├── Attack library and historical incidents                         │
│  └── Threat actor profiling and motivation                           │
│                                                                      │
│  Stage 5: Vulnerability & Weakness Analysis                          │
│  ├── Known CVEs in components (SBOM correlation)                     │
│  ├── Design weaknesses (CWE mapping)                                 │
│  └── Configuration weaknesses                                        │
│                                                                      │
│  Stage 6: Attack Modeling & Simulation                               │
│  ├── Attack trees for each threat                                    │
│  ├── Attack simulation (tabletop or red team)                        │
│  └── Exploit probability scoring                                     │
│                                                                      │
│  Stage 7: Risk & Impact Analysis                                     │
│  ├── Business impact assessment per threat                           │
│  ├── Residual risk calculation                                       │
│  └── Prioritized mitigation roadmap                                  │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘

55.3.2 PASTA Stage 4 — Threat Intelligence Integration

PASTA Stage 4 requires threat intelligence specific to the organization's industry, geography, and technology stack. This is where threat modeling connects to the broader threat landscape.

Threat actor profiling for SynthApp (financial services):

Threat Actor Category Motivation Capability Likely TTPs Relevance
Financially motivated (e-crime) Profit from stolen payment data Moderate — commodity tools + purchased access Phishing, credential stuffing, web app exploitation HIGH
Nation-state (APT) Espionage, sanctions evasion Advanced — custom malware, zero-days Spear phishing, supply chain, living-off-the-land MEDIUM
Hacktivists Disruption, ideology Low to moderate — DDoS-for-hire, defacement Application-layer DDoS, SQL injection, data dumps MEDIUM
Insider threat Financial gain, grievance High — legitimate access, institutional knowledge Data exfiltration, privilege abuse, sabotage HIGH
Competitors Competitive advantage Low — typically via intermediaries Scraping, social engineering, intellectual property theft LOW

55.3.3 PASTA Stage 6 — Attack Simulation

Attack simulation in PASTA goes beyond theoretical analysis. It involves constructing realistic attack scenarios and testing them through tabletop exercises or controlled red team engagements.

Example attack scenario for SynthApp:

┌──────────────────────────────────────────────────────────────────────┐
│  PASTA ATTACK SCENARIO: PAYMENT DATA EXFILTRATION                    │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  Actor: Financially motivated attacker (e-crime syndicate)           │
│  Objective: Steal customer payment card data                         │
│  Capability: Moderate (commodity tools, purchased credentials)       │
│                                                                      │
│  Attack Path:                                                        │
│  1. Purchase leaked credentials from darknet market                  │
│  2. Credential stuff against synthapp.example.com/login              │
│  3. Exploit IDOR in /api/v2/orders/{id} to enumerate customers      │
│  4. Discover debug endpoint /api/internal/export leaks PAN data      │
│  5. Exfiltrate via HTTPS to attacker-controlled 203.0.113.50         │
│                                                                      │
│  Controls Tested:                                                    │
│  - Rate limiting on login endpoint                                   │
│  - IDOR protection on order API                                      │
│  - Internal API endpoint exposure                                    │
│  - DLP on outbound HTTPS containing PAN patterns                     │
│  - Monitoring for bulk data access patterns                          │
│                                                                      │
│  Business Impact: $2.5M (breach notification + fines + remediation)  │
│  Likelihood (with current controls): HIGH                            │
│  Risk Score: CRITICAL                                                │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘
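
The scenario box combines a qualitative likelihood with a dollar-valued business impact to reach a risk score. One way to make that combination explicit and repeatable is a small scoring function; the dollar thresholds and the score-to-rating cutoffs below are illustrative assumptions, not part of the PASTA specification.

```python
LEVELS = {"LOW": 1, "MEDIUM": 2, "HIGH": 3}

def risk_score(likelihood: str, impact_usd: float) -> str:
    """Map a qualitative likelihood and a dollar impact to a
    risk rating. Thresholds are illustrative, not prescriptive."""
    impact = ("HIGH" if impact_usd >= 1_000_000
              else "MEDIUM" if impact_usd >= 100_000
              else "LOW")
    product = LEVELS[likelihood] * LEVELS[impact]
    if product >= 9:
        return "CRITICAL"
    if product >= 6:
        return "HIGH"
    if product >= 3:
        return "MEDIUM"
    return "LOW"

# The payment data exfiltration scenario: HIGH likelihood, $2.5M impact
print(risk_score("HIGH", 2_500_000))  # CRITICAL
```

Running the scenario's figures through the function reproduces the CRITICAL rating in the box; the value of writing the rule down is that every scenario in the Stage 7 roadmap is scored the same way.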

55.3.4 PASTA Detection — Monitoring Attack Scenarios

KQL (Microsoft Sentinel):

// Detect credential stuffing pattern from PASTA threat model
SigninLogs
| where TimeGenerated > ago(1h)
| where AppDisplayName == "SynthApp Portal"
| where ResultType != 0  // Failed sign-ins
| summarize FailedAttempts = count(),
            DistinctUsers = dcount(UserPrincipalName),
            UserList = make_set(UserPrincipalName, 20)
  by IPAddress, bin(TimeGenerated, 10m)
| where FailedAttempts > 50 and DistinctUsers > 10
| extend AlertTitle = strcat("PASTA-TM: Credential stuffing from ",
                              IPAddress, " — ", FailedAttempts,
                              " failures across ", DistinctUsers, " users")

SPL (Splunk):

index=auth sourcetype=azure:signinlogs AppDisplayName="SynthApp Portal"
    ResultType!=0
| bin _time span=10m
| stats count as FailedAttempts,
        dc(UserPrincipalName) as DistinctUsers,
        values(UserPrincipalName) as UserList
  by IPAddress, _time
| where FailedAttempts > 50 AND DistinctUsers > 10
| eval AlertTitle="PASTA-TM: Credential stuffing from "
        .IPAddress." — ".FailedAttempts
        ." failures across ".DistinctUsers." users"

55.4 LINDDUN — Privacy Threat Modeling

LINDDUN is a privacy-specific threat modeling framework developed at KU Leuven. As data protection regulations (GDPR, CCPA, LGPD) proliferate, security teams must model not only confidentiality breaches but also privacy violations — which are a distinct category of harm. See Chapter 13: Security Governance, Privacy & Risk for the broader regulatory context.

55.4.1 The LINDDUN Categories

Category Privacy Property Violated Description Example Threat
Linkability Unlinkability Attacker can link two or more items of interest (actions, identities, data) Correlating pseudonymized records across datasets to identify individuals
Identifiability Anonymity/Pseudonymity Attacker can identify a specific individual from data Re-identifying anonymized health records using quasi-identifiers
Non-repudiation Plausible deniability System provides undeniable evidence of an action Blockchain-based voting system reveals voter choices
Detectability Undetectability Attacker can detect that an item of interest exists Metadata analysis reveals a user is communicating with a whistleblower
Disclosure of information Confidentiality Unauthorized access to personal data API endpoint leaks user profile data without authorization
Unawareness Content awareness User is unaware of data collection or processing Application collects location data without clear disclosure
Non-compliance Policy/legal compliance Processing violates data protection regulations Data retained beyond GDPR-mandated retention period

55.4.2 LINDDUN Applied to SynthApp

Data flow: Customer Browser → Web Application → Customer Database

LINDDUN Threat Specific Risk Data Element Mitigation
Linkability Purchase history across sessions linkable via cookie ID Session cookies + order data Rotate session identifiers; minimize cross-session tracking
Identifiability Email + purchase pattern uniquely identifies customer Customer email, order history Pseudonymize analytics data; purpose limitation on queries
Non-repudiation Immutable audit log proves customer browsed sensitive product categories Browse history in logs Aggregate browsing analytics; minimize PII in logs
Detectability API response time differs for existing vs non-existing accounts Account lookup endpoint Constant-time responses; generic error messages
Disclosure Admin panel exposes full customer PII to support staff Name, address, phone, email Role-based field masking; data minimization in support views
Unawareness Third-party analytics script tracking without consent banner Page views, clicks, device info Consent management platform; cookie banner before tracking
Non-compliance Customer data replicated to non-EU region without adequacy All PII in database replicas Data residency controls; SCCs for cross-border transfers

55.4.3 Privacy Threat Detection Queries

KQL (Microsoft Sentinel):

// Detect bulk PII access pattern (LINDDUN: Disclosure)
let pii_tables = dynamic(["CustomerProfile_CL", "PaymentInfo_CL",
                           "HealthRecord_CL"]);
AuditLogs
| where TimeGenerated > ago(1h)
| extend TargetTable = tostring(TargetResources[0].displayName)
| where TargetTable in (pii_tables)
| where OperationName has_any ("Read", "Export", "Query", "Select")
| extend Initiator = tostring(InitiatedBy.user.userPrincipalName)
| summarize QueryCount = count(),
            DistinctTables = dcount(TargetTable),
            RecordsAccessed = sum(toint(AdditionalDetails[0].value))
  by Initiator, bin(TimeGenerated, 15m)
| where RecordsAccessed > 1000 or DistinctTables > 2
| extend AlertTitle = "LINDDUN-Disclosure: Bulk PII access detected"

SPL (Splunk):

index=app_audit sourcetype=database_audit
    table IN ("CustomerProfile", "PaymentInfo", "HealthRecord")
    operation IN ("READ", "EXPORT", "QUERY", "SELECT")
| bin _time span=15m
| stats count as QueryCount,
        dc(table) as DistinctTables,
        sum(records_accessed) as RecordsAccessed
  by user, _time
| where RecordsAccessed > 1000 OR DistinctTables > 2
| eval AlertTitle="LINDDUN-Disclosure: Bulk PII access by ".user
        ." — ".RecordsAccessed." records across ".DistinctTables." tables"

55.5 Attack Trees & Kill Chain Modeling

Attack trees formalize adversary decision-making by decomposing a high-level objective into a tree structure: the root node represents the attacker's goal, and child nodes represent the steps or conditions needed to achieve it. When combined with kill chain models, attack trees provide a structured framework for understanding and disrupting attack paths.

55.5.1 Attack Tree Fundamentals

An attack tree decomposes a high-level objective into sub-goals connected by AND/OR nodes:

  • OR nodes: The attacker needs to accomplish any one child to achieve the parent (alternatives)
  • AND nodes: The attacker needs to accomplish all children to achieve the parent (prerequisites)

Example attack tree — Steal customer payment data from SynthApp:

┌──────────────────────────────────────────────────────────────────────┐
│          ATTACK TREE: STEAL CUSTOMER PAYMENT DATA                    │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  [ROOT] Steal Payment Card Data (OR)                                 │
│  ├── [1] Exploit Web Application (OR)                                │
│  │   ├── [1.1] SQL Injection on payment API (AND)                    │
│  │   │   ├── [1.1.1] Find injectable parameter                      │
│  │   │   ├── [1.1.2] Bypass WAF rules                               │
│  │   │   └── [1.1.3] Extract data via UNION or blind SQLi           │
│  │   ├── [1.2] Exploit IDOR on order endpoint                       │
│  │   │   └── [1.2.1] Enumerate order IDs sequentially               │
│  │   └── [1.3] Exploit XSS to steal admin session (AND)             │
│  │       ├── [1.3.1] Inject stored XSS payload                      │
│  │       ├── [1.3.2] Admin triggers XSS                             │
│  │       └── [1.3.3] Hijack admin session token                     │
│  ├── [2] Compromise Infrastructure (OR)                              │
│  │   ├── [2.1] Exploit unpatched service (AND)                       │
│  │   │   ├── [2.1.1] Scan for vulnerable service version             │
│  │   │   ├── [2.1.2] Develop/acquire exploit                        │
│  │   │   └── [2.1.3] Pivot to database server                       │
│  │   └── [2.2] Abuse cloud misconfiguration (OR)                     │
│  │       ├── [2.2.1] Access exposed S3 bucket with backups           │
│  │       └── [2.2.2] Exploit overly permissive IAM role              │
│  ├── [3] Social Engineering (OR)                                     │
│  │   ├── [3.1] Phish DBA for database credentials                    │
│  │   └── [3.2] Recruit insider with database access                  │
│  └── [4] Supply Chain Attack (AND)                                   │
│      ├── [4.1] Compromise npm dependency used by payment module      │
│      ├── [4.2] Inject data exfiltration code in package              │
│      └── [4.3] Wait for SynthApp to update dependency                │
│                                                                      │
│  Annotations per node:                                               │
│  Cost | Skill | Time | Detectability | Existing Controls             │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘

55.5.2 Attack Tree Node Annotations

Each leaf node should be annotated with attributes that enable quantitative risk analysis:

Attribute Node 1.1 (SQLi) Node 2.2.1 (S3 bucket) Node 3.1 (Phishing) Node 4.1 (Supply chain)
Cost to attacker Low ($0-100) Low ($0) Low ($50-200) High ($10K+)
Skill required Moderate Low Low Advanced
Time required Hours Minutes Days Weeks
Detectability Medium (WAF logs) Low (no monitoring) Medium (email security) Very Low
Existing control WAF + parameterized queries None Email gateway SBOM monitoring
Residual risk Medium Critical Medium High
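
Because OR and AND gates have precise semantics, annotated trees support quantitative rollup: the attacker's cheapest route to the root is the minimum over OR children and the sum over AND children. A minimal sketch in Python, with illustrative costs rather than real engagement data:

```python
def attack_cost(node: dict) -> float:
    """Minimum attacker cost to achieve this node's goal.
    OR gate: the cheapest alternative wins. AND gate: every step is required."""
    children = node.get("children")
    if not children:              # leaf node carries its annotated cost
        return node["cost"]
    costs = [attack_cost(c) for c in children]
    return sum(costs) if node["gate"] == "AND" else min(costs)

# Illustrative fragment mirroring the tree above (costs in USD)
tree = {
    "goal": "Steal payment card data", "gate": "OR",
    "children": [
        {"goal": "1.1 SQL injection", "gate": "AND",
         "children": [{"goal": "1.1.1 Find parameter", "cost": 50},
                      {"goal": "1.1.2 Bypass WAF", "cost": 100},
                      {"goal": "1.1.3 Extract data", "cost": 0}]},
        {"goal": "2.2.1 Exposed S3 bucket", "cost": 0},
    ],
}
print(attack_cost(tree))  # the open S3 bucket is the cheapest path: 0
```

The same traversal rolls up time or skill by swapping the leaf attribute, which is exactly why the annotation table above captures those values per node.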

55.5.3 Kill Chain Integration

Attack trees become more powerful when mapped to kill chain phases. This enables defenders to identify detection opportunities at each phase. Cross-reference Chapter 16: Penetration Testing Methodology for the full kill chain framework.

Kill Chain Phase Attack Tree Nodes Detection Opportunity Control Gap
Reconnaissance 1.1.1, 2.1.1 Web scanner fingerprint in WAF logs Scanner detection rules needed
Weaponization 2.1.2, 4.2 Threat intel on new exploit PoCs Threat intel feed integration
Delivery 3.1, 1.3.1 Phishing email detected; XSS payload in input Email security + input validation
Exploitation 1.1, 1.2, 2.1, 2.2 WAF alerts; cloud audit logs IDOR detection; S3 monitoring
Installation 4.3 SBOM diff detects new dependency Dependency pinning + review
Command & Control Post-exploitation C2 Anomalous outbound connections Egress monitoring + DLP
Actions on Objectives Data exfiltration Bulk data access alerts DLP + database activity monitoring

55.6 Continuous Threat Modeling in DevSecOps

Traditional threat modeling is a point-in-time activity performed during design reviews. In modern DevSecOps environments where code ships daily or hourly, point-in-time threat models become stale within weeks. Continuous threat modeling integrates threat analysis into the development pipeline, triggering automated re-evaluation whenever architecture, code, or infrastructure changes. See Chapter 35: DevSecOps Pipeline for the broader pipeline security context.

55.6.1 Shift-Left Threat Modeling

Shift-left threat modeling embeds threat analysis into the earliest phases of development:

┌──────────────────────────────────────────────────────────────────────┐
│           SHIFT-LEFT THREAT MODELING INTEGRATION POINTS               │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  Requirements ──> Design ──> Code ──> Build ──> Test ──> Deploy      │
│       │              │          │        │         │         │        │
│       ▼              ▼          ▼        ▼         ▼         ▼        │
│  Abuse Cases    DFD Review   Code     SAST +    DAST +   Runtime     │
│  & Security     & STRIDE     Review   Threat    Threat   Threat      │
│  User Stories   Analysis     for      Model     Model    Monitoring   │
│                              Threat   Checks    Verify                │
│                              Patterns                                 │
│                                                                      │
│  ◄──────── CHEAPER ──────────────────────── EXPENSIVE ──────────►    │
│  ◄──────── PROACTIVE ────────────────────── REACTIVE ────────────►   │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘

55.6.2 Threat Modeling as Code

Modern threat modeling tools allow threat models to be expressed as code, version-controlled alongside application code, and validated in CI/CD pipelines.

Example threat model definition (YAML format):

# synthapp-threat-model.yaml
# Version-controlled threat model for SynthApp payment service
---
system:
  name: "SynthApp Payment Service"
  owner: "payments-team@synthapp.example.com"
  data_classification: "PCI-DSS Scope"
  last_reviewed: "2026-04-12"

components:
  - name: "Payment API Gateway"
    type: process
    technology: "Node.js Express"
    exposed: true
    trust_zone: "DMZ"
    data_handled:
      - "payment_card_number"  # PCI scope
      - "customer_email"       # PII
      - "order_amount"

  - name: "Payment Database"
    type: datastore
    technology: "PostgreSQL 16"
    trust_zone: "Internal-Restricted"
    encryption_at_rest: true
    data_stored:
      - "tokenized_pan"
      - "transaction_history"

  - name: "External Payment Processor"
    type: external_entity
    trust_zone: "External"
    protocol: "TLS 1.3 mutual auth"

data_flows:
  - from: "Payment API Gateway"
    to: "Payment Database"
    protocol: "TLS/PostgreSQL"
    data: ["tokenized_pan", "order_amount"]
    crosses_trust_boundary: true

  - from: "Payment API Gateway"
    to: "External Payment Processor"
    protocol: "HTTPS mutual TLS"
    data: ["payment_card_number", "order_amount"]
    crosses_trust_boundary: true

threats:
  - id: "TM-PAY-001"
    stride: "Tampering"
    component: "Payment API Gateway"
    description: "Attacker modifies order amount in transit"
    attack_vector: "Parameter manipulation in API request"
    mitre_attack: "T1565.001"
    likelihood: "Medium"
    impact: "High"
    risk: "High"
    mitigations:
      - "Server-side price validation against catalog database"
      - "Request signing with HMAC"
      - "Integrity monitoring on transaction amounts"
    detection_query: "TM-PAY-001-detect.kql"
    status: "mitigated"

  - id: "TM-PAY-002"
    stride: "Information Disclosure"
    component: "Payment Database"
    description: "Unauthorized bulk export of transaction history"
    attack_vector: "Compromised service account or SQL injection"
    mitre_attack: "T1530"
    likelihood: "Low"
    impact: "Critical"
    risk: "High"
    mitigations:
      - "Database activity monitoring with alert on bulk SELECT"
      - "Service account with minimal SELECT permissions"
      - "Row-level security on customer data"
    detection_query: "TM-PAY-002-detect.kql"
    status: "mitigated"

55.6.3 CI/CD Pipeline Integration

GitHub Actions workflow for threat model validation:

# .github/workflows/threat-model-check.yml
name: Threat Model Validation
on:
  pull_request:
    paths:
      - 'src/payment-service/**'
      - 'infrastructure/terraform/payment/**'
      - 'threat-models/payment-service.yaml'

jobs:
  threat-model-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history so the base-branch diff below works

      - name: Validate threat model schema
        run: |
          python scripts/validate_threat_model.py \
            --model threat-models/payment-service.yaml \
            --schema threat-models/schema.json

      - name: Check threat model freshness
        run: |
          # Note: github.event.pull_request.changed_files is only a count,
          # so derive the changed file paths from git instead
          CHANGED_PATHS=$(git diff --name-only \
            "origin/${{ github.base_ref }}...HEAD")
          python scripts/check_threat_model_freshness.py \
            --model threat-models/payment-service.yaml \
            --max-age-days 90 \
            --changed-paths "$CHANGED_PATHS"

      - name: Verify detection coverage
        run: |
          python scripts/verify_detection_coverage.py \
            --model threat-models/payment-service.yaml \
            --queries-dir detection-rules/payment/

      - name: Generate threat model diff
        run: |
          python scripts/threat_model_diff.py \
            --base threat-models/payment-service.yaml \
            --head ${{ github.sha }} \
            --output threat-model-diff.md

      - name: Comment PR with threat model status
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const diff = fs.readFileSync('threat-model-diff.md', 'utf8');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## Threat Model Impact\n${diff}`
            });
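
The verify_detection_coverage step above can be little more than a file-existence check. A hypothetical sketch of its core logic (the script name and flags come from the workflow; the function and input shape below are assumptions):

```python
from pathlib import Path

def missing_detections(threats: list, queries_dir: str) -> list:
    """IDs of threats whose linked detection query file does not exist.
    A threat with no detection_query at all is also reported."""
    gaps = []
    for threat in threats:
        query = threat.get("detection_query")
        if not query or not (Path(queries_dir) / query).is_file():
            gaps.append(threat.get("id", "<unknown>"))
    return gaps
```

Failing the pipeline when this list is non-empty enforces the chapter's core rule: if you model a threat, you should be able to detect it.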

55.6.4 Threat Model Staleness Detection

// Track threat model coverage against deployed services
let threat_model_coverage = externaldata(
    ServiceName: string, LastModelReview: datetime,
    ThreatCount: int, MitigatedCount: int, OpenCount: int
) [@"https://threatmodels.synthapp.example.com/coverage.csv"]
  with (format="csv");
threat_model_coverage
| extend DaysSinceReview = datetime_diff('day', now(), LastModelReview)
| extend CoveragePercent = round(100.0 * MitigatedCount / ThreatCount, 1)
| where DaysSinceReview > 90 or CoveragePercent < 80
| project ServiceName, DaysSinceReview, ThreatCount,
          MitigatedCount, OpenCount, CoveragePercent
| extend AlertTitle = strcat("Threat model stale or incomplete: ",
                              ServiceName, " — ", DaysSinceReview,
                              " days since review, ", CoveragePercent,
                              "% coverage")
| inputlookup threat_model_coverage.csv
| eval DaysSinceReview=round((now()-strptime(LastModelReview,
        "%Y-%m-%dT%H:%M:%S"))/86400, 0)
| eval CoveragePercent=round(100*MitigatedCount/ThreatCount, 1)
| where DaysSinceReview > 90 OR CoveragePercent < 80
| table ServiceName, DaysSinceReview, ThreatCount,
        MitigatedCount, OpenCount, CoveragePercent
| eval AlertTitle="Threat model stale or incomplete: "
        .ServiceName." — ".DaysSinceReview
        ." days since review, ".CoveragePercent."% coverage"

55.7 Threat Modeling Automation

Manual threat modeling does not scale. Organizations with hundreds of microservices, dynamic cloud infrastructure, and frequent deployments cannot rely on human-driven threat analysis for every change. Threat modeling automation uses infrastructure-as-code analysis, architecture-as-code parsing, and machine learning to identify threats programmatically.

55.7.1 IaC-Based Threat Identification

Infrastructure-as-Code (Terraform, CloudFormation, Kubernetes manifests) contains machine-readable descriptions of system architecture. Automated tools can parse IaC to identify security-relevant patterns and generate threat candidates.

Common IaC threat patterns:

IaC Pattern Threat Category Detection Logic Example
Public subnet with no WAF Information Disclosure Resource in public subnet without WAF association aws_instance in public_subnet without aws_wafv2_web_acl
Database without encryption Information Disclosure Storage resource with encryption = false or missing aws_rds_instance without storage_encrypted = true
Overly permissive security group Elevation of Privilege Ingress rule 0.0.0.0/0 on non-HTTP ports Security group allowing 0.0.0.0/0 on port 22
Missing network policy Lateral Movement Kubernetes pod without NetworkPolicy Pod in namespace without default-deny NetworkPolicy
Service account with admin role Elevation of Privilege IAM binding with roles/owner or * permissions google_project_iam_binding with roles/owner
Container running as root Elevation of Privilege securityContext.runAsUser: 0 or missing Dockerfile with USER root or no USER directive
Secrets in environment variables Information Disclosure env block containing PASSWORD, SECRET, KEY TF_VAR_db_password in plaintext

55.7.2 Architecture-as-Code Threat Generation

Tools like Threagile, OWASP Threat Dragon, and IriusRisk can ingest architecture descriptions and automatically generate threat catalogs.

Example automated threat generation workflow:

# Pseudo-code: Automated threat identification from Terraform
# Educational example — not production code

import json

THREAT_RULES = {
    "public_exposure": {
        "resource_types": ["aws_instance", "aws_lb", "aws_api_gateway_rest_api"],
        "condition": lambda r: r.get("subnet_type") == "public",
        "stride": "Information Disclosure",
        "severity": "HIGH",
        "description": "Resource exposed to public internet without WAF protection",
        "mitigation": "Add WAF, restrict security groups, enable logging"
    },
    "unencrypted_storage": {
        "resource_types": ["aws_rds_instance", "aws_s3_bucket", "aws_ebs_volume"],
        "condition": lambda r: not r.get("encryption", {}).get("enabled", False),
        "stride": "Information Disclosure",
        "severity": "HIGH",
        "description": "Storage resource without encryption at rest",
        "mitigation": "Enable encryption with KMS-managed keys"
    },
    "missing_logging": {
        "resource_types": ["aws_api_gateway_rest_api", "aws_lb",
                           "aws_cloudfront_distribution"],
        "condition": lambda r: not r.get("access_logging", {}).get("enabled", False),
        "stride": "Repudiation",
        "severity": "MEDIUM",
        "description": "Internet-facing resource without access logging",
        "mitigation": "Enable access logging to centralized SIEM"
    },
    "overprivileged_iam": {
        "resource_types": ["aws_iam_policy", "aws_iam_role_policy"],
        "condition": lambda r: "*" in str(r.get("policy", {}).get("Action", [])),
        "stride": "Elevation of Privilege",
        "severity": "CRITICAL",
        "description": "IAM policy with wildcard actions",
        "mitigation": "Apply least-privilege; scope to specific actions and resources"
    }
}

def analyze_terraform_plan(plan_json_path: str) -> list:
    """Analyze Terraform plan JSON for threat patterns."""
    with open(plan_json_path) as f:
        plan = json.load(f)

    threats = []
    for resource in plan.get("planned_values", {}).get("root_module", {}).get("resources", []):
        for rule_name, rule in THREAT_RULES.items():
            if resource["type"] in rule["resource_types"]:
                if rule["condition"](resource.get("values", {})):
                    threats.append({
                        "rule": rule_name,
                        "resource": f"{resource['type']}.{resource['name']}",
                        "stride": rule["stride"],
                        "severity": rule["severity"],
                        "description": rule["description"],
                        "mitigation": rule["mitigation"]
                    })
    return threats
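
Running this rule logic against a synthetic plan shows the output shape. The snippet re-declares a single rule so it stands alone; the resource names are invented:

```python
# One rule, matching the unencrypted_storage entry above
unencrypted_rule = {
    "resource_types": ["aws_rds_instance", "aws_s3_bucket", "aws_ebs_volume"],
    "condition": lambda r: not r.get("encryption", {}).get("enabled", False),
    "stride": "Information Disclosure", "severity": "HIGH",
}

# Synthetic Terraform plan fragment: one compliant bucket, one not
plan = {"planned_values": {"root_module": {"resources": [
    {"type": "aws_s3_bucket", "name": "backups",
     "values": {"encryption": {"enabled": False}}},
    {"type": "aws_s3_bucket", "name": "audit-logs",
     "values": {"encryption": {"enabled": True}}},
]}}}

findings = [
    f"{res['type']}.{res['name']}"
    for res in plan["planned_values"]["root_module"]["resources"]
    if res["type"] in unencrypted_rule["resource_types"]
    and unencrypted_rule["condition"](res["values"])
]
print(findings)  # ['aws_s3_bucket.backups'] — only the unencrypted bucket
```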

55.7.3 Automated Threat Model Drift Detection

// Detect infrastructure changes that invalidate threat model assumptions
AzureActivity
| where TimeGenerated > ago(24h)
| where OperationNameValue has_any (
    "Microsoft.Network/networkSecurityGroups/write",
    "Microsoft.Network/publicIPAddresses/write",
    "Microsoft.Compute/virtualMachines/write",
    "Microsoft.ContainerService/managedClusters/write",
    "Microsoft.Sql/servers/firewallRules/write")
| where ActivityStatusValue == "Success"
| project TimeGenerated, OperationNameValue, ResourceGroup,
          _ResourceId, Caller, CallerIpAddress
| extend AlertTitle = strcat("Threat model drift: Infrastructure change — ",
                              OperationNameValue)
| extend Recommendation = "Review threat model for affected service"

The equivalent Splunk (SPL) search:

index=azure_activity sourcetype=azure:activity ActivityStatusValue="Success"
| search OperationNameValue IN (
    "Microsoft.Network/networkSecurityGroups/write",
    "Microsoft.Network/publicIPAddresses/write",
    "Microsoft.Compute/virtualMachines/write",
    "Microsoft.ContainerService/managedClusters/write",
    "Microsoft.Sql/servers/firewallRules/write")
| table _time, OperationNameValue, ResourceGroup,
        _ResourceId, Caller, CallerIpAddress
| eval AlertTitle="Threat model drift: Infrastructure change — "
        .OperationNameValue
| eval Recommendation="Review threat model for affected service"

55.8 Attack Surface Management (ASM) Integration

Attack Surface Management (ASM) continuously discovers, inventories, and assesses externally exposed assets. ASM feeds directly into threat modeling by identifying what needs to be modeled — the externally visible surface that attackers see. See Chapter 29: Vulnerability Management for the vulnerability prioritization context.

55.8.1 ASM Discovery Categories

Discovery Type What It Finds Threat Modeling Input Example
DNS enumeration Subdomains, CNAME chains, MX records External-facing services to model staging.synthapp.example.com — forgotten staging environment
Certificate transparency All TLS certificates issued for domain Shadow IT services, forgotten endpoints cert for internal-api.synthapp.example.com exposed externally
Port scanning Open ports and services Entry points for threat model DFDs Port 9200 (Elasticsearch) open on 198.51.100.20
Cloud asset discovery Public IPs, storage buckets, serverless Cloud resources outside threat model scope Public S3 bucket synthapp-backups.s3.example.com
API discovery Undocumented or shadow APIs API endpoints missing from threat model /api/v1/debug/dump endpoint found via path brute-force
Technology fingerprinting Server versions, frameworks, libraries Known vulnerability surface Apache 2.4.49 (vulnerable to path traversal)

55.8.2 ASM-to-Threat-Model Pipeline

┌──────────────────────────────────────────────────────────────────────┐
│            ASM → THREAT MODEL INTEGRATION PIPELINE                   │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  ASM Discovery ──> Asset Inventory ──> Delta Analysis                │
│                                             │                        │
│                                    ┌────────┴────────┐               │
│                                    ▼                  ▼               │
│                              New Assets          Changed Assets      │
│                              (not in TM)         (TM outdated)       │
│                                    │                  │               │
│                                    ▼                  ▼               │
│                           Auto-Generate          Flag for            │
│                           Threat Model           TM Review           │
│                           Skeleton                                   │
│                                    │                  │               │
│                                    ▼                  ▼               │
│                           Security Team Review + Prioritization      │
│                                         │                            │
│                                         ▼                            │
│                              Updated Threat Model                    │
│                              + Detection Rules                       │
│                              + Mitigation Tickets                    │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘
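
The delta-analysis and skeleton-generation stages of the pipeline reduce to a few lines. In this sketch the asset fields and skeleton shape loosely mirror the threat-model-as-code format from 55.6.2, but are assumptions rather than a fixed schema:

```python
def asm_delta(discovered: set, modeled: set) -> dict:
    """Split ASM-discovered asset names into new vs. already-modeled."""
    return {"new": sorted(discovered - modeled),
            "covered": sorted(discovered & modeled)}

def tm_skeleton(fqdn: str, port: int, banner: str) -> dict:
    """Auto-generated threat model skeleton for a newly discovered asset,
    queued for security team review before any threats are enumerated."""
    return {
        "system": {"name": fqdn, "last_reviewed": None,
                   "status": "skeleton-needs-review"},
        "components": [{"name": f"{fqdn}:{port}", "type": "process",
                        "exposed": True, "technology": banner,
                        "trust_zone": "External"}],
        "threats": [],  # populated during security team review
    }

delta = asm_delta({"staging.synthapp.example.com", "www.synthapp.example.com"},
                  {"www.synthapp.example.com"})
print(delta["new"])  # ['staging.synthapp.example.com'] — the forgotten host
```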

55.8.3 ASM Monitoring Queries

// Detect new externally exposed services discovered by ASM
let known_assets = externaldata(FQDN: string, IP: string, Port: int,
                                 ServiceType: string, ThreatModelID: string)
[@"https://asm.synthapp.example.com/known_assets.csv"]
  with (format="csv");
ASMDiscovery_CL
| where TimeGenerated > ago(24h)
| where IsExternal_b == true
| join kind=leftanti known_assets
  on $left.FQDN_s == $right.FQDN
| project TimeGenerated, FQDN_s, IPAddress_s, Port_d,
          ServiceBanner_s, TLSVersion_s, DiscoveryMethod_s
| extend AlertTitle = strcat("ASM: New external asset discovered — ",
                              FQDN_s, ":", Port_d)
| extend Action = "Create or update threat model for this asset"

The equivalent Splunk (SPL) search:

index=asm sourcetype=asm:discovery IsExternal=true earliest=-24h
| lookup known_assets.csv FQDN as FQDN OUTPUT ThreatModelID
| where isnull(ThreatModelID)
| table _time, FQDN, IPAddress, Port, ServiceBanner,
        TLSVersion, DiscoveryMethod
| eval AlertTitle="ASM: New external asset discovered — "
        .FQDN.":".Port
| eval Action="Create or update threat model for this asset"

55.9 Threat Modeling for Cloud-Native & Kubernetes Architectures

Cloud-native architectures introduce threat surfaces that do not exist in traditional on-premises environments. Container orchestration, service meshes, serverless functions, and dynamic scaling create a fundamentally different threat landscape. Threat models must account for ephemeral workloads, shared tenancy, API-driven infrastructure, and the complex trust relationships in Kubernetes clusters. See Chapter 39: Zero Trust Implementation for the zero trust architecture context.

55.9.1 Kubernetes Threat Model — Trust Boundaries

┌──────────────────────────────────────────────────────────────────────┐
│            KUBERNETES THREAT MODEL — TRUST BOUNDARIES                │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  ┌─ External Network ─────────────────────────────────────────────┐  │
│  │  Internet → Ingress Controller (192.168.1.100)                 │  │
│  └────────────────────────────────┬────────────────────────────────┘  │
│  - - - - - - - - - - - - - - - - | - - - - Trust Boundary 1 - - -   │
│  ┌─ Cluster Network ─────────────┴────────────────────────────────┐  │
│  │                                                                 │  │
│  │  ┌─ Namespace: synthapp-prod ──────────────────────────────┐   │  │
│  │  │                                                          │   │  │
│  │  │  ┌──────────┐    ┌──────────┐    ┌──────────────────┐   │   │  │
│  │  │  │ Frontend │───>│ API Svc  │───>│ Payment Svc      │   │   │  │
│  │  │  │ Pod      │    │ Pod      │    │ Pod (PCI scope)  │   │   │  │
│  │  │  └──────────┘    └──────────┘    └──────────────────┘   │   │  │
│  │  │                       │                    │             │   │  │
│  │  └───────────────────────┼────────────────────┼─────────────┘  │  │
│  │  - - - - - - - - - - - - | - - - - - - - - - | - - TB 2 - -   │  │
│  │  ┌─ Namespace: synthapp-data ─────────────────┴────────────┐   │  │
│  │  │  ┌───────────┐    ┌───────────┐                         │   │  │
│  │  │  │ PostgreSQL│    │ Redis     │                         │   │  │
│  │  │  │StatefulSet│    │ Cache     │                         │   │  │
│  │  │  └───────────┘    └───────────┘                         │   │  │
│  │  └──────────────────────────────────────────────────────────┘  │  │
│  │                                                                 │  │
│  │  ┌─ kube-system ───────────────────────────────────────────┐   │  │
│  │  │  API Server | etcd | Controller Manager | Scheduler     │   │  │
│  │  └─────────────────────────────────────────────────────────┘   │  │
│  │  - - - - - - - - - - - - - - - - - - - - - - TB 3 (Control)   │  │
│  └─────────────────────────────────────────────────────────────────┘  │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘

55.9.2 Kubernetes-Specific Threats (STRIDE Applied)

Threat K8s Component Attack Scenario MITRE ATT&CK Mitigation
Spoofing Service Account Attacker uses leaked SA token to impersonate workload T1078.004 Short-lived tokens; bound SA tokens; disable auto-mount
Tampering etcd Direct etcd access bypassing API server RBAC T1565.001 Encrypt etcd; restrict network access; mTLS for etcd peers
Tampering Container Image Attacker pushes malicious image tag to registry T1195.002 Image signing (Cosign/Notary); admission controllers (OPA/Kyverno)
Repudiation API Server API calls without audit logging enabled T1562.001 Enable audit logging; ship to SIEM; immutable log storage
Info Disclosure Secrets Secrets accessible via env vars or mounted volumes T1552.007 External secrets operators; vault integration; encryption at rest
Info Disclosure Pod Network Pod-to-pod traffic unencrypted on flat network T1040 Service mesh mTLS (Istio/Linkerd); NetworkPolicies
DoS API Server Resource exhaustion via mass pod creation T1499 ResourceQuotas; LimitRanges; API priority and fairness
Elevation RBAC ClusterRoleBinding granting unnecessary cluster-admin T1078 Least-privilege RBAC; namespace-scoped roles; RBAC auditing
Elevation Container Container escape via privileged mode or hostPID T1611 PodSecurityAdmission; no privileged containers; seccomp profiles
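
The elevation rows in the table above translate directly into a preventive check that can run in CI against rendered manifests or inside an admission webhook. A minimal sketch over a parsed pod spec dict (field paths follow the Kubernetes pod spec; treating an unset runAsUser as root is a deliberately conservative assumption):

```python
def pod_priv_findings(pod_spec: dict) -> list:
    """Flag the privilege-escalation patterns from the STRIDE table above."""
    findings = []
    if pod_spec.get("hostPID"):
        findings.append("hostPID enabled")
    if pod_spec.get("hostNetwork"):
        findings.append("hostNetwork enabled")
    for container in pod_spec.get("containers", []):
        ctx = container.get("securityContext", {})
        name = container.get("name", "<unnamed>")
        if ctx.get("privileged"):
            findings.append(f"container {name}: privileged mode")
        if ctx.get("runAsUser", 0) == 0:  # unset also defaults to root here
            findings.append(f"container {name}: runs as root (UID 0)")
    return findings
```

An empty result means the manifest clears this particular set of checks; it does not replace PodSecurityAdmission or policy engines such as OPA/Kyverno.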

55.9.3 Kubernetes Threat Detection Queries

// Detect privileged container creation (K8s threat model: Elevation of Privilege)
KubeAuditLogs
| where TimeGenerated > ago(1h)
| where ObjectRef_Resource == "pods"
| where Verb in ("create", "update", "patch")
| where ResponseStatus_Code >= 200 and ResponseStatus_Code < 300
| extend PodSpec = parse_json(RequestObject)
// NOTE: inspects the first container only; use mv-apply over
// PodSpec.spec.containers to also catch privileged sidecars
| where PodSpec.spec.containers[0].securityContext.privileged == true
        or PodSpec.spec.hostPID == true
        or PodSpec.spec.hostNetwork == true
| project TimeGenerated, SourceIPs, User_Username,
          ObjectRef_Namespace, ObjectRef_Name,
          Image = tostring(PodSpec.spec.containers[0].image)
| extend AlertTitle = strcat("K8s-TM: Privileged container created in ",
                              ObjectRef_Namespace, "/", ObjectRef_Name)
index=kubernetes sourcetype=kube:apiserver:audit
    objectRef.resource="pods"
    verb IN ("create", "update", "patch")
    responseStatus.code>=200 responseStatus.code<300
| spath input=requestObject path=spec.containers{0}.securityContext.privileged
    output=privileged
| spath input=requestObject path=spec.hostPID output=hostPID
| spath input=requestObject path=spec.hostNetwork output=hostNetwork
| where privileged="true" OR hostPID="true" OR hostNetwork="true"
| table _time, sourceIPs{}, user.username,
        objectRef.namespace, objectRef.name,
        requestObject.spec.containers{0}.image
| eval AlertTitle="K8s-TM: Privileged container created in "
        .'objectRef.namespace'."/".objectRef.name

// Detect RBAC escalation (K8s threat model: Elevation of Privilege)
KubeAuditLogs
| where TimeGenerated > ago(24h)
| where ObjectRef_Resource in ("clusterrolebindings", "rolebindings")
| where Verb in ("create", "update", "patch")
| where ResponseStatus_Code >= 200 and ResponseStatus_Code < 300
| extend BindingSpec = parse_json(RequestObject)
| extend RoleName = tostring(BindingSpec.roleRef.name)
| where RoleName in ("cluster-admin", "admin", "edit")
| project TimeGenerated, SourceIPs, User_Username,
          ObjectRef_Name, RoleName, Subjects = BindingSpec.subjects
| extend AlertTitle = strcat("K8s-TM: RBAC escalation — ",
                              User_Username, " bound ",
                              RoleName, " role")

The equivalent Splunk (SPL) search:

index=kubernetes sourcetype=kube:apiserver:audit
    objectRef.resource IN ("clusterrolebindings", "rolebindings")
    verb IN ("create", "update", "patch")
    responseStatus.code>=200 responseStatus.code<300
| spath input=requestObject path=roleRef.name output=roleName
| search roleName IN ("cluster-admin", "admin", "edit")
| spath input=requestObject path=subjects{} output=subjects
| table _time, sourceIPs{}, user.username,
        objectRef.name, roleName, subjects
| eval AlertTitle="K8s-TM: RBAC escalation — "
        .'user.username'." bound ".roleName." role"

55.10 Threat Modeling for AI/ML Systems

AI and ML systems introduce novel threat categories that traditional frameworks like STRIDE do not fully address. Model theft, training data poisoning, adversarial inputs, prompt injection, and model inversion represent a fundamentally new attack surface. As organizations deploy AI-powered features, threat models must expand to cover these ML-specific risks. Cross-reference Chapter 54: SBOM Operations for supply chain risks to ML model dependencies.

55.10.1 AI/ML Threat Taxonomy

Threat Category Description Attack Phase Impact
Data Poisoning Attacker corrupts training data to influence model behavior Training Model produces incorrect or biased outputs
Model Theft / Extraction Attacker queries model to reconstruct a functionally equivalent copy Inference Intellectual property loss; enables further attacks
Adversarial Inputs Crafted inputs that cause model misclassification Inference Security control bypass; incorrect decisions
Prompt Injection Attacker injects instructions into LLM prompts Inference Unauthorized actions; data exfiltration
Model Inversion Attacker reconstructs training data from model outputs Inference Privacy violation; training data disclosure
Membership Inference Attacker determines if a specific record was in training data Inference Privacy violation; regulatory non-compliance
Supply Chain Compromise Malicious model weights, corrupted pre-trained models Deployment Backdoored models; trojan behavior
Infrastructure Compromise Attack on GPU clusters, model registries, feature stores Training/Deployment Data theft; model tampering

55.10.2 STRIDE Extended for AI/ML

| STRIDE + AI Extension | AI-Specific Threat | Example Attack | Mitigation |
|---|---|---|---|
| Spoofing (Model Identity) | Attacker serves a substitute model | Man-in-the-middle on model API returning adversarial outputs | Model signing; endpoint certificate pinning; response validation |
| Tampering (Training Data) | Data poisoning via label flipping | Attacker modifies 5% of training labels in shared dataset | Data provenance tracking; anomaly detection on training metrics |
| Tampering (Model Weights) | Direct modification of model parameters | Compromise model registry; modify weights file | Model checksum verification; immutable model storage; access control |
| Repudiation (Inference) | Denial of model decision | Model made a medical diagnosis with no audit trail | Inference logging with input hash, output, confidence score, timestamp |
| Info Disclosure (Training Data) | Model inversion or membership inference | Reconstructing patient faces from medical imaging model | Differential privacy; federated learning; output perturbation |
| Info Disclosure (Prompt Leakage) | LLM reveals system prompt or RAG context | Prompt injection extracting system instructions | Input/output filtering; prompt isolation; output validation |
| DoS (Model Availability) | Adversarial inputs causing compute exhaustion | Crafted inputs triggering worst-case model inference paths | Input validation; inference timeouts; rate limiting |
| Elevation (LLM Tool Use) | Prompt injection gaining unauthorized tool access | LLM tricked into calling admin API via injected instructions | Tool-use authorization framework; human-in-the-loop for sensitive actions |

55.10.3 AI/ML Threat Detection Queries

// Detect potential model extraction attack (high-volume inference queries)
MLInferenceLog_CL
| where TimeGenerated > ago(1h)
| where ModelName_s == "synthapp-fraud-detector"
| summarize QueryCount = count(),
            DistinctInputPatterns = dcount(hash_sha256(InputData_s)),
            AvgLatency = avg(InferenceLatencyMs_d)
  by CallerIP_s, CallerIdentity_s, bin(TimeGenerated, 10m)
| where QueryCount > 1000
| where DistinctInputPatterns > 500  // Systematic exploration
| extend AlertTitle = strcat("AI-TM: Potential model extraction — ",
                              CallerIdentity_s, " made ", QueryCount,
                              " queries with ", DistinctInputPatterns,
                              " unique inputs in 10 minutes")
index=ml_inference sourcetype=ml:inference
    ModelName="synthapp-fraud-detector" earliest=-1h
| bin _time span=10m
| eval input_hash=sha256(InputData)
| stats count as QueryCount,
        dc(input_hash) as DistinctInputPatterns,
        avg(InferenceLatencyMs) as AvgLatency
  by CallerIP, CallerIdentity, _time
| where QueryCount > 1000 AND DistinctInputPatterns > 500
| eval AlertTitle="AI-TM: Potential model extraction — "
        .CallerIdentity." made ".QueryCount
        ." queries with ".DistinctInputPatterns
        ." unique inputs in 10 minutes"
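The same volume-plus-diversity heuristic can be prototyped offline against exported inference logs before committing thresholds to a SIEM rule. A hedged sketch (the function name and the event tuple shape are assumptions for illustration):

```python
import hashlib
from collections import defaultdict

def extraction_candidates(events, query_threshold=1000, diversity_threshold=500):
    """Flag (window, caller) buckets showing both high query volume and high
    input diversity -- the same signal the KQL/SPL queries above alert on.

    `events` is an iterable of (window, caller, input_data) tuples.
    """
    buckets = defaultdict(lambda: [0, set()])  # (window, caller) -> [count, input hashes]
    for window, caller, input_data in events:
        bucket = buckets[(window, caller)]
        bucket[0] += 1
        bucket[1].add(hashlib.sha256(input_data.encode()).hexdigest())
    return [key for key, (count, hashes) in buckets.items()
            if count > query_threshold and len(hashes) > diversity_threshold]
```

Note that a benign high-volume caller (for example, a batch job replaying the same input) passes the volume test but fails the diversity test, which is why both conditions are required.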
// Detect prompt injection attempts against LLM endpoints
LLMGatewayLog_CL
| where TimeGenerated > ago(24h)
| where UserPrompt_s has_any (
    "ignore previous instructions",
    "disregard above",
    "system prompt",
    "you are now",
    "new instructions",
    "forget everything",
    "override your",
    "reveal your prompt",
    "repeat the above",
    "output your instructions")
| project TimeGenerated, CallerIP_s, CallerIdentity_s,
          ModelName_s, UserPrompt_s, ResponseTruncated_s
| extend AlertTitle = "AI-TM: Prompt injection attempt detected"
| extend MITREAttack = "T1059 (Command Injection variant)"
index=llm_gateway sourcetype=llm:gateway earliest=-24h
| search UserPrompt IN ("*ignore previous instructions*",
    "*disregard above*", "*system prompt*", "*you are now*",
    "*new instructions*", "*forget everything*",
    "*override your*", "*reveal your prompt*",
    "*repeat the above*", "*output your instructions*")
| table _time, CallerIP, CallerIdentity, ModelName,
        UserPrompt, ResponseTruncated
| eval AlertTitle="AI-TM: Prompt injection attempt detected"
| eval MITREAttack="T1059 (Command Injection variant)"
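Keyword lists like the one above can also run as an inline pre-filter at the LLM gateway, before a request ever reaches the model. A minimal sketch; the scoring approach and default threshold are illustrative assumptions, and real deployments layer this with classifier-based detection since substring matching is easily evaded:

```python
# Same phrase list as the KQL/SPL queries above (illustrative, not exhaustive)
INJECTION_PHRASES = [
    "ignore previous instructions", "disregard above", "system prompt",
    "you are now", "new instructions", "forget everything",
    "override your", "reveal your prompt", "repeat the above",
    "output your instructions",
]

def injection_score(prompt: str) -> int:
    """Count suspicious phrases present in a prompt (case-insensitive)."""
    lowered = prompt.lower()
    return sum(1 for phrase in INJECTION_PHRASES if phrase in lowered)

def is_suspicious(prompt: str, threshold: int = 1) -> bool:
    """Gate for logging or blocking; tune `threshold` against false-positive data."""
    return injection_score(prompt) >= threshold
```

A score threshold of 1 favors detection over user experience; raising it trades missed low-effort attempts for fewer false positives on benign prompts that happen to mention, say, "system prompt".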

55.11 Detection Engineering from Threat Models

The ultimate measure of a threat model's operational value is whether it produces working detection rules. This section describes the systematic process of converting threat model outputs into deployed KQL and SPL detection queries. Cross-reference Chapter 5: Detection Engineering at Scale for the full detection engineering lifecycle.

55.11.1 Threat-to-Detection Pipeline

Every identified threat should have a corresponding detection strategy — or an explicit documented reason why detection is not feasible (e.g., encrypted channel with no inspection point).

┌──────────────────────────────────────────────────────────────────────┐
│           THREAT MODEL → DETECTION RULE PIPELINE                     │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  Threat Model Output                                                 │
│  ├── Threat ID: TM-PAY-001                                           │
│  ├── STRIDE: Tampering                                               │
│  ├── Component: Payment API Gateway                                  │
│  ├── Attack: Price manipulation in API request                       │
│  └── ATT&CK: T1565.001                                              │
│         │                                                            │
│         ▼                                                            │
│  Detection Strategy                                                  │
│  ├── Data Source: Application logs (price_submitted vs price_actual) │
│  ├── Log Availability: ✓ App emits both values in structured log     │
│  ├── Detection Logic: price_submitted != price_catalog               │
│  ├── False Positive Assessment: Low (prices rarely change mid-TX)    │
│  └── Threshold: Any mismatch = alert                                 │
│         │                                                            │
│         ▼                                                            │
│  Detection Rule                                                      │
│  ├── KQL: AppLogs | where PriceSubmitted != PriceCatalog             │
│  ├── SPL: index=app PriceSubmitted!=PriceCatalog                     │
│  ├── Severity: HIGH                                                  │
│  ├── Response: Block transaction, alert SOC, investigate session     │
│  └── Traceability: TM-PAY-001 → DR-PAY-001                          │
│         │                                                            │
│         ▼                                                            │
│  Validation                                                          │
│  ├── Purple team test: Submit modified price → verify alert fires    │
│  ├── False positive rate: 0.1% (promotional pricing edge case)       │
│  └── Coverage: 100% of payment transactions                          │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘

55.11.2 Detection Coverage Matrix

A detection coverage matrix maps every threat model entry to its detection status:

| Threat ID | STRIDE | Component | ATT&CK | Detection Rule | Status | Coverage |
|---|---|---|---|---|---|---|
| TM-PAY-001 | Tampering | Payment API | T1565.001 | DR-PAY-001 | Deployed | Full |
| TM-PAY-002 | Info Disclosure | Payment DB | T1530 | DR-PAY-002 | Deployed | Full |
| TM-PAY-003 | Spoofing | Auth Service | T1078 | DR-AUTH-001 | Deployed | Partial |
| TM-PAY-004 | Elevation | API Gateway | T1548 | DR-PAY-004 | In Review | None |
| TM-PAY-005 | DoS | Payment API | T1499 | | Gap | None |
| TM-PAY-006 | Repudiation | Audit Log | | DR-AUD-001 | Deployed | Full |

Detection Gap = Risk Acceptance

Any threat in the model without a corresponding detection rule represents an explicit or implicit risk acceptance. The detection coverage matrix makes this visible to leadership. A 60% detection coverage rate means 40% of modeled threats have no monitoring — and those gaps should be prioritized based on threat severity and likelihood.
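Computing the coverage figure and ranking the open gaps is simple enough to automate directly from the matrix. A sketch, assuming each row carries a numeric severity (1 = low through 4 = critical, a convention adopted here for illustration):

```python
def coverage_report(matrix):
    """Return (coverage %, gap rows sorted by severity, highest first).

    `matrix` rows mirror the coverage matrix above as tuples of
    (threat_id, severity, detection_status).
    """
    total = len(matrix)
    deployed = sum(1 for _, _, status in matrix if status == "Deployed")
    gaps = sorted((row for row in matrix if row[2] != "Deployed"),
                  key=lambda row: row[1], reverse=True)
    return round(100 * deployed / total, 1), gaps

matrix = [
    ("TM-PAY-001", 4, "Deployed"),
    ("TM-PAY-004", 3, "In Review"),
    ("TM-PAY-005", 2, "Gap"),
]
```

Sorting gaps by severity gives leadership not just the coverage number but the ordered backlog for closing it.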

55.11.3 Automated Detection Rule Generation

Threat models expressed as structured data (YAML, JSON) can drive semi-automated detection rule generation:

// Auto-generated from TM-PAY-001: Price tampering detection
// Threat Model: SynthApp Payment Service
// STRIDE: Tampering | ATT&CK: T1565.001
// Last reviewed: 2026-04-12
AppServiceHTTPLogs
| where TimeGenerated > ago(1h)
| where CsUriStem startswith "/api/v2/payment/checkout"
| where CsMethod == "POST"
| extend RequestBody = parse_json(CsBytes)
| extend SubmittedPrice = todouble(RequestBody.total_amount)
| join kind=inner (
    AppLogs_CL
    | where OperationName_s == "PriceValidation"
    | extend CatalogPrice = todouble(CatalogAmount_d),
             OrderId = OrderId_s
) on $left.CorrelationId == $right.CorrelationId_s
| where abs(SubmittedPrice - CatalogPrice) > 0.01
| project TimeGenerated, CsHost, ClientIP, SubmittedPrice,
          CatalogPrice, PriceDelta = SubmittedPrice - CatalogPrice,
          OrderId, UserAgent
| extend AlertTitle = strcat("TM-PAY-001: Price tampering — submitted $",
                              SubmittedPrice, " vs catalog $", CatalogPrice)
| extend ThreatModelRef = "TM-PAY-001"
| extend MITREAttack = "T1565.001"
index=appservice sourcetype=httplogs cs_uri_stem="/api/v2/payment/checkout*"
    cs_method=POST earliest=-1h
| spath input=cs_bytes path=total_amount output=SubmittedPrice
| join type=inner CorrelationId
    [search index=app sourcetype=app:logs OperationName="PriceValidation"
    | rename CatalogAmount as CatalogPrice
    | fields CorrelationId, CatalogPrice, OrderId]
| eval PriceDelta=SubmittedPrice-CatalogPrice
| where abs(PriceDelta) > 0.01
| table _time, cs_host, c_ip, SubmittedPrice, CatalogPrice,
        PriceDelta, OrderId, cs_User_Agent
| eval AlertTitle="TM-PAY-001: Price tampering — submitted $"
        .SubmittedPrice." vs catalog $".CatalogPrice
| eval ThreatModelRef="TM-PAY-001"
| eval MITREAttack="T1565.001"
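At its simplest, semi-automated generation is template rendering over the structured threat entry. A hedged sketch (the entry's field names and the rule skeleton are assumptions; production generators additionally emit severity, scheduling, and deployment metadata for the SIEM's rule API):

```python
# Hypothetical structured threat entry, as it might appear in a YAML/JSON model
threat = {
    "id": "TM-PAY-001",
    "table": "AppLogs_CL",
    "condition": "PriceSubmitted_d != PriceCatalog_d",
    "attck": "T1565.001",
}

KQL_TEMPLATE = """\
// Auto-generated from {id} | ATT&CK: {attck}
{table}
| where TimeGenerated > ago(1h)
| where {condition}
| extend ThreatModelRef = "{id}", MITREAttack = "{attck}"
"""

def render_rule(entry: dict) -> str:
    """Render a KQL detection-rule skeleton from one threat entry."""
    return KQL_TEMPLATE.format(**entry)
```

The generated skeleton still requires human review (tuning thresholds, validating field names against the actual schema), which is why this is semi-automated rather than fully automated.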

55.11.4 Threat Model Coverage Reporting

// Threat model detection coverage dashboard query
let threat_model = externaldata(
    ThreatID: string, STRIDE: string, Component: string,
    MITREAttack: string, Severity: string, DetectionRuleID: string,
    DetectionStatus: string
) [@"https://threatmodels.synthapp.example.com/detection_coverage.csv"]
  with (format="csv");
threat_model
| summarize TotalThreats = count(),
            DetectedThreats = countif(DetectionStatus == "Deployed"),
            GapThreats = countif(DetectionStatus == "Gap"),
            InReview = countif(DetectionStatus == "In Review")
  by Component
| extend CoveragePercent = round(100.0 * DetectedThreats / TotalThreats, 1)
| sort by CoveragePercent asc
| extend Status = case(
    CoveragePercent >= 90, "GREEN",
    CoveragePercent >= 70, "YELLOW",
    "RED")
| inputlookup threat_model_detection_coverage.csv
| stats count as TotalThreats,
        count(eval(DetectionStatus="Deployed")) as DetectedThreats,
        count(eval(DetectionStatus="Gap")) as GapThreats,
        count(eval(DetectionStatus="In Review")) as InReview
  by Component
| eval CoveragePercent=round(100*DetectedThreats/TotalThreats, 1)
| sort CoveragePercent
| eval Status=case(
    CoveragePercent>=90, "GREEN",
    CoveragePercent>=70, "YELLOW",
    1=1, "RED")

55.12 Threat Modeling Program Maturity

A mature threat modeling program is not about performing more threat models — it is about performing them more consistently, more efficiently, with better coverage, and with stronger connections to security operations. This section provides a maturity model and KPIs for measuring program effectiveness.

55.12.1 Threat Modeling Maturity Model

| Level | Name | Summary | Characteristics | Typical Org Profile |
|---|---|---|---|---|
| 0 | None | No threat modeling | No systematic threat identification; reactive security | Small teams with no security function |
| 1 | Ad Hoc | Occasional exercises | Threat modeling done for compliance or after incidents; inconsistent methodology; results not maintained | Organizations starting security programs |
| 2 | Repeatable | Standardized process | Consistent methodology (STRIDE/PASTA); documented templates; trained practitioners; periodic reviews | Mid-maturity security teams |
| 3 | Defined | Integrated into SDLC | Threat modeling triggers on design changes; CI/CD integration; detection rules generated from models; coverage tracked | Advanced security organizations |
| 4 | Managed | Metrics-driven | KPIs tracked and reported; threat model quality measured; red team validates models; continuous improvement cycle | Security-first organizations |
| 5 | Optimizing | Automated and adaptive | AI-assisted threat identification; real-time model updates from ASM and runtime telemetry; threat models drive resource allocation | Industry leaders |

55.12.2 Program KPIs

| KPI | Description | Target (Level 3+) | Measurement |
|---|---|---|---|
| Coverage Rate | % of applications with current threat model | > 80% | Applications with TM / Total applications |
| Model Freshness | Average age of threat models in days | < 90 days | Mean(now - last_review_date) |
| Detection Coverage | % of modeled threats with deployed detection rules | > 75% | Detected threats / Total modeled threats |
| Time-to-Model | Average time from design change to updated threat model | < 5 business days | Mean(TM_update_date - change_date) |
| Finding Density | Average number of threats identified per model | 10-25 (depends on system complexity) | Total threats / Total models |
| Mitigation Rate | % of identified threats with implemented mitigations | > 70% | Mitigated threats / Total threats |
| Validation Rate | % of threat models validated by red team or purple team | > 50% | Validated models / Total models |
| Incident Correlation | % of incidents matching previously modeled threats | > 60% | Modeled incidents / Total incidents |
| False Negative Rate | Incidents not covered by any threat model | < 20% | Unmodeled incidents / Total incidents |
| Practitioner Coverage | % of development teams with trained threat modeling practitioners | > 60% | Teams with practitioners / Total teams |
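Most of these KPIs reduce to a few lines of arithmetic once threat-model metadata lives in a queryable store. A sketch of the Model Freshness and Coverage Rate calculations (function and parameter names are illustrative):

```python
from datetime import date

def model_freshness(last_review_dates, today):
    """Model Freshness KPI: mean age of threat models in days."""
    return sum((today - d).days for d in last_review_dates) / len(last_review_dates)

def coverage_rate(apps_with_model, total_apps):
    """Coverage Rate KPI: % of applications with a current threat model."""
    return round(100 * apps_with_model / total_apps, 1)
```

Wiring these into the monthly dashboard is what moves a program from Level 3 to Level 4: the same numbers, computed the same way, reported every cycle.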

55.12.3 Maturity Assessment Checklist

Threat Modeling Maturity Self-Assessment

Level 1 — Ad Hoc:

  • [ ] Threat modeling has been performed at least once
  • [ ] Someone in the organization understands STRIDE or equivalent
  • [ ] Results are recorded (even informally, in a shared document or spreadsheet)

Level 2 — Repeatable:

  • [ ] Standardized methodology selected and documented
  • [ ] Templates and guidance available for practitioners
  • [ ] At least 3 practitioners trained across the organization
  • [ ] Threat models reviewed at least annually
  • [ ] Results stored in a central repository

Level 3 — Defined:

  • [ ] Threat modeling integrated into SDLC — triggered by design changes
  • [ ] CI/CD pipeline includes threat model validation checks
  • [ ] Detection rules generated from threat model outputs
  • [ ] Detection coverage matrix maintained and reviewed
  • [ ] Threat models reference MITRE ATT&CK techniques
  • [ ] ASM feeds into threat model scope

Level 4 — Managed:

  • [ ] KPIs tracked and reported to leadership monthly
  • [ ] Red team / purple team validates threat models quarterly
  • [ ] Incident post-mortems update threat models
  • [ ] Threat model quality scoring implemented
  • [ ] Cross-team threat model reviews performed

Level 5 — Optimizing:

  • [ ] Automated threat identification from IaC and architecture changes
  • [ ] AI-assisted threat enumeration and prioritization
  • [ ] Real-time threat model updates from runtime telemetry
  • [ ] Threat models drive security budget allocation
  • [ ] Threat modeling metrics influence engineering prioritization

55.12.4 Building a Threat Modeling Center of Excellence

For organizations scaling threat modeling beyond a single team, a Center of Excellence (CoE) provides governance, tooling, and training.

CoE responsibilities:

| Function | Activities | Deliverables |
|---|---|---|
| Methodology governance | Select and maintain standard methodologies; create templates and playbooks | Methodology guide, templates, decision trees |
| Training and enablement | Train developers and security engineers; certify practitioners | Training curriculum, certification program, workshops |
| Tooling and automation | Evaluate and deploy threat modeling tools; build CI/CD integrations | Tool standards, pipeline integrations, automation scripts |
| Quality assurance | Review completed threat models; assess coverage and depth | Review checklist, quality scores, feedback reports |
| Metrics and reporting | Track KPIs; produce dashboards; report to leadership | Monthly dashboard, quarterly maturity assessment |
| Community building | Facilitate cross-team knowledge sharing; maintain threat library | Threat library, community of practice, knowledge base |

55.13 Threat Modeling in Incident Response

Threat models should not gather dust after creation. They are invaluable during incident response — providing pre-built understanding of system architecture, known attack paths, and expected controls. See Chapter 9: Incident Response Lifecycle and Chapter 53: Zero-Day Response for the full IR context.

55.13.1 Using Threat Models During IR

| IR Phase | How Threat Model Helps | Example |
|---|---|---|
| Detection | Pre-built detection rules from threat model catch initial indicators | TM-PAY-001 detection rule fires on price manipulation |
| Triage | Threat model provides context on asset criticality and data classification | Analyst knows the affected API handles PCI-scope data |
| Analysis | Attack trees show likely attack paths; scope investigation | Attack tree shows SQLi → pivot → DB access path; investigate DB logs |
| Containment | Threat model maps trust boundaries; know what to isolate | Isolate payment namespace without affecting frontend |
| Eradication | Threat model lists all related components; ensure complete cleanup | Threat model shows 3 services share the compromised library |
| Recovery | Threat model documents expected state; verify restoration | DFD shows expected data flows; confirm no unauthorized connections |
| Post-mortem | Compare incident to threat model; update gaps | Incident used attack path not in model; add to next revision |

55.13.2 Post-Incident Threat Model Update

// Correlate incident findings with threat model coverage
SecurityIncident
| where TimeGenerated > ago(30d)
| where Status == "Closed"
| extend IncidentTactics = parse_json(AdditionalData).tactics
| extend IncidentTechniques = parse_json(AdditionalData).techniques
| mv-expand IncidentTechniques
| extend IncidentTechniques = tostring(IncidentTechniques)  // mv-expand yields dynamic; cast so the join key types match
| join kind=leftouter (
    externaldata(ThreatID: string, MITREAttack: string,
                  Component: string, DetectionStatus: string)
    [@"https://threatmodels.synthapp.example.com/detection_coverage.csv"]
    with (format="csv")
) on $left.IncidentTechniques == $right.MITREAttack
| extend ThreatModelCoverage = iff(isnotempty(ThreatID), "Covered", "Gap")
| summarize IncidentCount = count(),
            CoveredCount = countif(ThreatModelCoverage == "Covered"),
            GapCount = countif(ThreatModelCoverage == "Gap")
  by tostring(IncidentTechniques)
| extend CoverageRate = round(100.0 * CoveredCount /
                               (CoveredCount + GapCount), 1)
| sort by GapCount desc
index=security_incidents sourcetype=incident_tracker
    Status="Closed" earliest=-30d
| mvexpand IncidentTechniques
| lookup threat_model_detection_coverage.csv
    MITREAttack as IncidentTechniques
    OUTPUT ThreatID, DetectionStatus
| eval ThreatModelCoverage=if(isnotnull(ThreatID), "Covered", "Gap")
| stats count as IncidentCount,
        count(eval(ThreatModelCoverage="Covered")) as CoveredCount,
        count(eval(ThreatModelCoverage="Gap")) as GapCount
  by IncidentTechniques
| eval CoverageRate=round(100*CoveredCount/(CoveredCount+GapCount), 1)
| sort - GapCount

55.14 Threat Modeling Tools and Resources

55.14.1 Tool Comparison Matrix

| Tool | Methodology | Format | Collaboration | CI/CD Integration | Cost |
|---|---|---|---|---|---|
| Microsoft Threat Modeling Tool | STRIDE | TMT7 (proprietary) | Local file sharing | Limited | Free |
| OWASP Threat Dragon | STRIDE, custom | JSON (open) | Web-based | GitHub integration | Free |
| Threagile | Architecture-as-code | YAML input → report | Git-based | Native CI/CD | Free |
| IriusRisk | STRIDE, PASTA, custom | Proprietary + export | Web-based, multi-user | Jenkins, GitLab, Azure DevOps | Commercial |
| ThreatModeler | PASTA, VAST | Proprietary | Web-based, enterprise | Jira, CI/CD | Commercial |
| pytm | STRIDE | Python code → DFD + threats | Git-based | Native CI/CD | Free |
| CAIRIS | Multiple | XML, JSON | Web-based | REST API | Free |

55.14.2 Choosing the Right Tool

Tool Selection Decision Tree

  1. Budget = $0? → OWASP Threat Dragon (web-based) or Threagile (code-based)
  2. Microsoft ecosystem? → Microsoft Threat Modeling Tool
  3. DevSecOps pipeline integration critical? → Threagile or pytm
  4. Enterprise scale with compliance needs? → IriusRisk or ThreatModeler
  5. Privacy-focused modeling? → LINDDUN GO (card-based) + any DFD tool
  6. AI/ML system modeling? → Custom extension of STRIDE (see Section 55.10)

Review Questions

  1. Compare and contrast STRIDE and PASTA. Under what circumstances would you choose PASTA over STRIDE for a threat modeling engagement, and what additional inputs does PASTA require that STRIDE does not?

  2. Design a LINDDUN analysis for a healthcare patient portal at portal.synthmed.example.com that allows patients to view lab results, schedule appointments, and message their physicians. Identify at least one threat for each LINDDUN category and propose a mitigation for each.

  3. Construct an attack tree for the objective "Exfiltrate intellectual property from synthcorp.example.com's source code repository." Include at least three alternative attack paths with AND/OR nodes, and annotate each leaf node with cost, skill level, and detectability.

  4. Explain how continuous threat modeling differs from traditional point-in-time threat modeling. Describe three specific CI/CD pipeline integration points where automated threat model validation should occur, and what each check should verify.

  5. A Kubernetes cluster hosts the SynthApp payment service. Identify five Kubernetes-specific threats that would not appear in a traditional infrastructure threat model. For each threat, specify the STRIDE category, the Kubernetes component affected, and a detection query approach.

  6. An organization has deployed an LLM-based customer support chatbot at chat.synthapp.example.com. Apply the AI/ML threat taxonomy from Section 55.10 to identify the top three threats, and design a detection strategy for prompt injection attacks that balances security with user experience.

  7. Your organization's threat modeling detection coverage matrix shows 45% coverage — less than half of modeled threats have deployed detection rules. Develop a prioritization framework for closing this gap that considers threat severity, data source availability, and detection feasibility. Which threats should be instrumented first and why?


Key Takeaways

  1. Threat modeling is the highest-ROI security activity — threats identified during design cost 10-100x less to mitigate than those discovered in production incidents.

  2. No single methodology is sufficient. STRIDE excels at systematic categorization, PASTA adds attacker-centric risk analysis, LINDDUN addresses privacy threats, and attack trees formalize adversary decision-making. Mature programs combine methodologies based on context.

  3. Threat models must produce operational outputs. A threat model that generates a PDF but no detection rules, no monitoring alerts, and no mitigation tickets has zero operational value.

  4. Continuous threat modeling is essential in DevSecOps. Point-in-time models become stale within weeks in environments that deploy daily. Threat models must be code, version-controlled, and validated in CI/CD pipelines.

  5. Attack surface management feeds threat modeling. ASM discovers what needs to be modeled — external-facing services, shadow IT, forgotten endpoints. Without ASM, threat models have blind spots.

  6. Cloud-native and Kubernetes architectures require expanded threat models that address container escape, RBAC misconfiguration, service mesh bypass, and supply chain injection vectors that do not exist in traditional infrastructure.

  7. AI/ML systems introduce novel threat categories — data poisoning, model extraction, adversarial inputs, prompt injection — that traditional frameworks do not fully address. Extended STRIDE analysis must include these ML-specific threats.

  8. Detection coverage matrices make risk acceptance visible. Every modeled threat without a detection rule is an implicit risk acceptance. Track and report coverage to leadership.

  9. Threat models are invaluable during incident response — they provide pre-built architecture understanding, known attack paths, and expected controls that accelerate triage, analysis, and containment.

  10. Maturity is measured by metrics, not models. A mature threat modeling program tracks coverage rate, model freshness, detection coverage, mitigation rate, and incident correlation — and uses these KPIs to drive continuous improvement.


Cross-References