
Zero-Day Exploitation Response Playbook

CRITICAL — No Patch Available

Zero-day exploitation means no vendor fix exists. Signature-based detection is ineffective. Response depends on behavioral analytics, compensating controls, and rapid coordination with the vendor. Every hour of exposure increases the blast radius.

Metadata

| Field | Value |
| --- | --- |
| Playbook ID | IR-PB-003 |
| Severity | Critical (P0/P1) |
| RTO — Containment | < 2 hours |
| RTO — Recovery | < 48 hours |
| Owner | IR Lead |
| Escalation | CISO → Legal → Executive Sponsor → Vendor PSIRT |
| Last Reviewed | 2025-01-01 |

Severity Classification Matrix

| Factor | Low (P3) | Medium (P2) | High (P1) | Critical (P0) |
| --- | --- | --- | --- | --- |
| Exposure | Internal-only, non-critical app | Internal, business app | Internet-facing, business-critical | Internet-facing, auth bypass or RCE |
| Active Exploitation | No known exploitation | PoC exists, no ITW use | Targeted exploitation observed | Widespread exploitation confirmed |
| Data at Risk | No sensitive data reachable | PII of limited scope | Regulated data (PHI/PCI) | Crown jewels / domain compromise |
| Compensating Controls | Effective mitigation in place | Partial mitigation available | Limited mitigation possible | No viable compensating control |
| CISA KEV Listed | No | N/A | Yes — within grace period | Yes — past grace period |
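Applied programmatically, the matrix reads as "worst factor wins." A minimal sketch in Python — the function name, its inputs, and the 0–3 scoring scheme are illustrative assumptions, not part of the playbook; the matrix itself remains authoritative:

```python
# Hedged sketch: worst-factor-wins severity scoring mirroring the matrix.
# Inputs and scoring are illustrative assumptions.

def classify_severity(internet_facing: bool,
                      active_exploitation: str,    # "none", "poc", "targeted", "widespread"
                      regulated_data: bool,
                      compensating_controls: str,  # "effective", "partial", "limited", "none"
                      kev_listed: bool) -> str:
    """Return P0-P3: each factor contributes a 0-3 score; the worst wins."""
    score = 0  # 0 = P3 (Low) ... 3 = P0 (Critical)
    score = max(score, {"none": 0, "poc": 1, "targeted": 2, "widespread": 3}[active_exploitation])
    score = max(score, {"effective": 0, "partial": 1, "limited": 2, "none": 3}[compensating_controls])
    if internet_facing:
        score = max(score, 2)  # internet-facing floors severity at High
    if regulated_data:
        score = max(score, 2)  # regulated data (PHI/PCI) floors severity at High
    if kev_listed:
        score = max(score, 2)  # KEV-listed floors severity at High
    return f"P{3 - score}"

# Example: internet-facing, widespread exploitation, no viable controls
print(classify_severity(True, "widespread", True, "none", True))  # P0
```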

RACI — Roles & Responsibilities

Activity IR Lead SOC Analyst Threat Intel IT Ops CISO Legal Vendor
Initial detection & triage A R C I I I
Behavioral hunt & IOC development A R R I I C
Emergency containment decision R C C R A I C
Compensating control deployment A I C R I C
Vendor coordination & disclosure A R I C C R
Executive & regulatory notification I A R
Patch validation & deployment A R C R I C
Post-incident review A R R R C C I

R = Responsible, A = Accountable, C = Consulted, I = Informed


Trigger Conditions

Activate this playbook on any of the following:

  • [ ] Vendor advisory or CERT/CC notification of a zero-day in software deployed in your environment
  • [ ] CISA Known Exploited Vulnerabilities (KEV) catalog addition for software in your asset inventory
  • [ ] Threat intelligence feed match — zero-day IOCs correlated against SIEM/EDR telemetry
  • [ ] EDR behavioral alert with no matching signature — novel exploit chain indicators
  • [ ] Anomalous process execution from a trusted application (e.g., outlook.exe spawning cmd.exe or powershell.exe)
  • [ ] WAF alert showing novel attack pattern against a known application endpoint
  • [ ] External report from bug bounty, security researcher, or peer organization (ISAC)
  • [ ] Unexplained crash dumps / application faults consistent with memory corruption exploitation

Decision Tree

flowchart TD
    A([Zero-Day Trigger Detected]) --> B{Is the vulnerability\nactively exploited in\nyour environment?}
    B -- Yes --> C[IMMEDIATE: Activate\nIR bridge — Severity P0/P1]
    B -- No --> D{Is a public PoC\navailable?}
    D -- Yes --> E[Elevate to P1\nDeploy compensating\ncontrols within 4h]
    D -- No --> F{Is the affected\nasset internet-facing?}
    F -- Yes --> G[Elevate to P2\nDeploy WAF rules\nand monitor]
    F -- No --> H[Monitor — P3\nTrack vendor patch\ntimeline]
    C --> I{Does exposure justify\nemergency containment\nvs. monitoring?}
    I -- "Containment" --> J[Isolate affected systems\nDeploy network segmentation\nBlock exploitation vectors]
    I -- "Monitor" --> K[Enhanced logging\nThreat hunt\nBehavioral detection rules]
    E --> I
    J --> L{Is vendor\npatch available?}
    K --> L
    G --> L
    L -- Yes --> M[Test & deploy patch\nvia emergency change]
    L -- No --> N[Maintain compensating\ncontrols — reassess\nevery 24h]
    N --> L
    M --> O[Validate remediation\nConfirm no persistence]
    O --> P([Post-Incident Review\n& Lessons Learned])
    H --> L

Phase 1 — Preparation (Pre-Incident)

Zero-day readiness must be established before an incident occurs.

1.1 Vendor Contact Matrix

Maintain a current vendor security contact list for all critical software:

| Vendor | Product | PSIRT Contact | SLA (Critical) | Coordinated Disclosure |
| --- | --- | --- | --- | --- |
| Microsoft | Windows / Office / Azure | secure@microsoft.com | 24h acknowledgment | MSRC Portal |
| Cisco | Network infrastructure | psirt@cisco.com | 24h acknowledgment | Cisco PSIRT |
| Palo Alto Networks | Firewalls / Cortex | psirt@paloaltonetworks.com | 4h acknowledgment | PAN PSIRT |
| VMware (Broadcom) | ESXi / vCenter | security@vmware.com | 24h acknowledgment | VMware PSIRT |
| Apache Foundation | Open-source projects | security@apache.org | Best effort | Apache Security Team |
| Internal Applications | Custom apps | appsec-team@corp.local | 2h acknowledgment | N/A |

1.2 CERT/CC & Coordinated Disclosure Contacts

| Organization | Role | Contact |
| --- | --- | --- |
| CERT/CC | Coordination center | cert@cert.org / +1-412-268-7090 |
| CISA | Federal coordination | central@cisa.dhs.gov / 888-282-0870 |
| FIRST | Global CSIRT community | first-teams@first.org |
| Industry ISAC | Sector-specific sharing | Per ISAC membership |

1.3 Pre-Positioned Compensating Controls

  • [ ] WAF in detection mode on all internet-facing applications — ready to switch to blocking
  • [ ] Network segmentation policies pre-defined for emergency micro-segmentation
  • [ ] EDR behavioral detection rules for common exploit primitives (process injection, LOLBins, memory manipulation)
  • [ ] Application allowlisting policies tested and ready for enforcement
  • [ ] Emergency change management process documented and pre-approved by CAB

Phase 2 — Detection & Triage (0–1 Hour)

Signature-based detection will NOT work for true zero-days. Focus on behavioral analytics.

2.1 Behavioral Detection Strategies

Since no signatures exist for a true zero-day, detection relies on anomaly and behavior:

| Detection Method | What It Catches | Tool |
| --- | --- | --- |
| Process lineage anomaly | Unexpected parent-child process chains | EDR (CrowdStrike / MDE) |
| Memory anomaly detection | Heap spray, ROP chains, shellcode injection | EDR + memory forensics |
| Network behavioral baseline deviation | Unusual outbound connections from trusted apps | NDR (Zeek / Corelight) |
| File integrity monitoring | Unexpected binary modifications | OSSEC / Wazuh / Tripwire |
| Application crash correlation | Repeated crashes = possible exploitation attempts | SIEM correlation rule |
| WAF anomaly scoring | Novel payloads not matching known patterns | WAF (ModSecurity / Cloudflare) |

2.2 Detection Queries

// Microsoft Defender for Endpoint (KQL) — detect anomalous process trees — trusted app spawning unexpected children
DeviceProcessEvents
| where Timestamp > ago(24h)
| where InitiatingProcessFileName in ("outlook.exe", "winword.exe", "excel.exe", "iexplore.exe", "msedge.exe")
| where FileName in ("cmd.exe", "powershell.exe", "wscript.exe", "cscript.exe", "mshta.exe",
                      "certutil.exe", "bitsadmin.exe", "rundll32.exe")
| project Timestamp, DeviceName, InitiatingProcessFileName, FileName, ProcessCommandLine
| order by Timestamp desc

// Detect potential memory exploitation — abnormal crash patterns
DeviceEvents
| where Timestamp > ago(7d)
| where ActionType == "ExploitGuardAuditEvent" or ActionType == "ExploitGuardBlockEvent"
| summarize CrashCount = count() by DeviceName, FileName, bin(Timestamp, 1h)
| where CrashCount > 5
| order by CrashCount desc

// Splunk (SPL) equivalent — detect anomalous process trees — trusted app spawning unexpected children
index=edr sourcetype=process_creation
| where parent_process_name IN ("outlook.exe", "winword.exe", "excel.exe", "iexplore.exe", "msedge.exe")
| where process_name IN ("cmd.exe", "powershell.exe", "wscript.exe", "cscript.exe", "mshta.exe",
                          "certutil.exe", "bitsadmin.exe", "rundll32.exe")
| table _time, host, parent_process_name, process_name, process_command_line
| sort - _time

// Splunk (SPL) — detect potential exploitation — abnormal application crash frequency
index=windows sourcetype=WinEventLog:Application EventCode=1000
| bin _time span=1h
| stats count AS crash_count by host, Application, _time
| where crash_count > 5
| sort - crash_count

2.3 CISA KEV Integration Check

  • [ ] Query CISA KEV catalog API for the CVE (if assigned): https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json
  • [ ] If listed in KEV, federal agencies have a binding deadline — apply the same urgency internally
  • [ ] Cross-reference with your asset inventory to determine exposure scope
# Check CISA KEV for a specific CVE
curl -s https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json | \
  jq -e '.vulnerabilities[] | select(.cveID == "CVE-YYYY-XXXXX")' || \
  echo "CVE not found in KEV catalog"
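To act on the cross-reference step above, the KEV feed can be joined against an asset-inventory export. A hedged sketch, assuming a CSV inventory with a `product` column — field names will differ per CMDB:

```python
# Hedged sketch: cross-reference the CISA KEV feed against a local asset
# inventory. The inventory format is an assumption; adapt to your CMDB.
import csv
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def kev_matches(inventory_rows, kev_vulns):
    """Return KEV entries whose product name appears in the inventory."""
    products = {row["product"].lower() for row in inventory_rows}
    return [v for v in kev_vulns
            if v.get("product", "").lower() in products]

if __name__ == "__main__":
    kev = json.load(urllib.request.urlopen(KEV_URL))["vulnerabilities"]
    with open("asset_inventory.csv", newline="") as f:  # assumed export path
        hits = kev_matches(list(csv.DictReader(f)), kev)
    for v in hits:
        # cveID, vendorProject, product, dueDate are standard KEV JSON fields
        print(v["cveID"], v["vendorProject"], v["product"], v["dueDate"])
```

Matching on product name alone over-matches; in practice you would also compare affected version ranges from the advisory.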

2.4 Exposure Assessment

  • [ ] Query CMDB/asset inventory for all instances of the affected software and version
  • [ ] Determine internet-facing vs. internal-only exposure
  • [ ] Identify data classification of systems running vulnerable software
  • [ ] Calculate blast radius: how many systems, users, and data stores are at risk
  • [ ] Assign severity using the Severity Classification Matrix above
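The blast-radius step above can be sketched as a simple tally over an inventory export. The field names (`internet_facing`, `user_count`, `data_class`) are assumptions for illustration:

```python
# Hedged sketch: blast-radius tally from an asset-inventory export.
# Field names are illustrative assumptions, not a mandated schema.

def blast_radius(assets):
    """Summarize systems, exposure, users, and regulated hosts at risk."""
    exposed = [a for a in assets if a["internet_facing"]]
    return {
        "systems": len(assets),
        "internet_facing": len(exposed),
        "users": sum(a["user_count"] for a in assets),
        "regulated_hosts": sum(1 for a in assets
                               if a["data_class"] in ("PHI", "PCI")),
    }

example = [
    {"hostname": "web01", "internet_facing": True,  "user_count": 500, "data_class": "PCI"},
    {"hostname": "app02", "internet_facing": False, "user_count": 40,  "data_class": "internal"},
]
print(blast_radius(example))
# {'systems': 2, 'internet_facing': 1, 'users': 540, 'regulated_hosts': 1}
```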

Phase 3 — Containment (1–4 Hours)

Decision Gate: Does exposure justify emergency containment (business disruption) vs. enhanced monitoring?

3.1 Emergency Containment Actions

Deploy based on severity and exposure — not all actions are required for every zero-day:

Network-Level Containment

# Emergency micro-segmentation — block lateral movement from affected VLAN
# Palo Alto Networks (PAN-OS) example
set rulebase security rules zero-day-contain from affected-zone to any source any destination any application any service any action deny
set rulebase security rules zero-day-contain log-start yes log-end yes

# Commit with description
commit description "IR-PB-003: Emergency zero-day containment"
# Windows Firewall — block outbound from vulnerable application (host-level)
New-NetFirewallRule -DisplayName "ZD-Contain: Block VulnApp Outbound" `
  -Direction Outbound `
  -Program "C:\Program Files\VulnerableApp\vuln.exe" `
  -Action Block `
  -Profile Any `
  -Description "IR-PB-003 zero-day containment"

Application-Level Containment

  • [ ] Disable the vulnerable feature/module if possible without full application shutdown
  • [ ] Restrict access to the vulnerable endpoint via reverse proxy / load balancer rules
  • [ ] Enable enhanced authentication (MFA step-up) for the affected application
  • [ ] Reduce attack surface: disable unnecessary services, ports, protocols on affected hosts

WAF Emergency Rules

# ModSecurity emergency virtual patch example
SecRule REQUEST_URI "@rx /vulnerable/endpoint" \
    "id:900001,\
     phase:1,\
     deny,\
     status:403,\
     log,\
     msg:'IR-PB-003: Zero-day virtual patch — blocking access to vulnerable endpoint',\
     severity:'CRITICAL'"

# Block specific exploitation payload pattern (update as IOCs emerge)
SecRule REQUEST_BODY "@rx (\x00\x00\x00\x00){4,}" \
    "id:900002,\
     phase:2,\
     deny,\
     status:403,\
     log,\
     msg:'IR-PB-003: Potential exploit payload — heap spray pattern detected',\
     severity:'CRITICAL'"

3.2 EDR Behavioral Containment

# CrowdStrike — create custom IOA rule for zero-day behavior
# Push behavioral detection for the specific exploitation pattern
# Example: Block specific parent→child process chain
$ioaRule = @{
    name        = "ZD-IR-PB-003: Exploit Chain Detection"
    description = "Block exploitation pattern for active zero-day"
    disposition = "BLOCK"
    pattern     = @{
        parent_process = "vulnerable_app.exe"
        child_process  = "cmd.exe|powershell.exe|wscript.exe"
    }
}
# Deploy via Falcon API or console

3.3 Stakeholder Notifications (within 30 min)

| Recipient | Channel | Message |
| --- | --- | --- |
| CISO | Phone + Signal | "P1 Zero-day exploitation — [CVE/description] — IR bridge open" |
| Legal Counsel | Phone | "Zero-day — potential data exposure — assess notification obligations" |
| Executive Sponsor | Phone | Brief: affected systems, containment status, no patch available, next update in 60 min |
| IT Operations | Slack/Teams | Emergency change: compensating controls deployed — do not modify |
| Vendor PSIRT | Email + Phone | "Active exploitation of [product] — request emergency patch timeline" |
| PR / Communications | Email | Standby — do not communicate externally until cleared |

Phase 4 — Eradication (4–48 Hours)

4.1 Threat Hunt for Exploitation Artifacts

  • [ ] Search for IOCs across all endpoints via EDR sweep (hashes, mutexes, network indicators)
  • [ ] Hunt for webshells or implants dropped post-exploitation
  • [ ] Review authentication logs for lateral movement from compromised systems
  • [ ] Examine DNS logs for C2 beacon patterns (DGA domains, DNS-over-HTTPS anomalies)
  • [ ] Check for new scheduled tasks, services, or registry persistence created during exploitation window
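For the C2-beacon hunt in the list above, one common heuristic is interval regularity: beacons query on a near-fixed period, while human-driven traffic does not. A minimal sketch with illustrative thresholds (jitter tolerance and minimum event count are assumptions to tune):

```python
# Hedged sketch: flag (host, domain) pairs whose DNS queries recur at a
# near-fixed interval — beacon-like behavior. Thresholds are illustrative.
from statistics import pstdev

def looks_like_beacon(timestamps, max_jitter=5.0, min_events=6):
    """timestamps: sorted epoch seconds of queries for one (host, domain)."""
    if len(timestamps) < min_events:
        return False
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # A tight spread of inter-query intervals suggests automated beaconing
    return pstdev(deltas) <= max_jitter

# Example: queries every ~60s with small jitter
regular = [0, 60, 121, 180, 241, 300, 360]
print(looks_like_beacon(regular))  # True
```

Real beacons often add deliberate jitter, so production detections typically combine this with destination rarity and byte-count uniformity.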

4.2 Forensic Evidence Collection

Preserve forensic evidence before remediation. Chain of custody matters for legal and regulatory purposes.

  • [ ] Capture memory dumps from affected systems: winpmem_mini.exe output.raw
  • [ ] Collect disk images or triage packages from patient zero and adjacent systems
  • [ ] Export relevant SIEM/EDR logs for the exploitation window (72h before → present)
  • [ ] Preserve WAF/proxy logs showing exploitation attempts
  • [ ] Document timeline of events with UTC timestamps

4.3 Vendor Coordination

Vendor Communication Timeline:
├─ T+0h:    Initial notification to vendor PSIRT with exploitation details
├─ T+1h:    Vendor acknowledges receipt
├─ T+4h:    Share sanitized IOCs and exploitation artifacts with vendor
├─ T+24h:   Request vendor status update on patch development
├─ T+48h:   If no patch — request official vendor mitigation guidance
├─ T+72h:   Escalate to CERT/CC if vendor is non-responsive
├─ T+7d:    Coordinated disclosure decision point
└─ T+90d:   Standard coordinated disclosure deadline (per CERT/CC policy)
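The timeline above can be turned into concrete escalation checkpoints from the initial notification time, so the IR bridge knows exactly when each follow-up is due. A sketch — the function and variable names are illustrative; the checkpoint hours mirror the timeline:

```python
# Hedged sketch: derive escalation due-times from the vendor timeline above.
from datetime import datetime, timedelta

CHECKPOINTS = [
    (1,      "vendor acknowledgment expected"),
    (4,      "share sanitized IOCs and artifacts"),
    (24,     "request patch development status"),
    (48,     "request official mitigation guidance if no patch"),
    (72,     "escalate to CERT/CC if vendor is non-responsive"),
    (7 * 24, "coordinated disclosure decision point"),
]

def escalation_schedule(t0: datetime):
    """Return (due_time, action) pairs relative to initial PSIRT notification t0."""
    return [(t0 + timedelta(hours=h), action) for h, action in CHECKPOINTS]

for due, action in escalation_schedule(datetime(2025, 1, 10, 8, 0)):
    print(due.isoformat(), "—", action)
```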

4.4 Credential Reset Scope

| Condition | Action | Priority |
| --- | --- | --- |
| System running vulnerable software was compromised | Reset all local and service account credentials | P1 |
| Attacker achieved code execution | Reset all credentials accessible from that host | P1 |
| Lateral movement confirmed | Domain-wide credential reset assessment | P1 |
| No confirmed exploitation (preventive containment) | Reset credentials on directly exposed systems only | P2 |

Phase 5 — Recovery (24–72 Hours)

5.1 Patch Deployment (When Available)

When the vendor releases a patch, deploy via emergency change — do not wait for the next patch cycle.

  • [ ] Obtain patch from vendor — verify hash against vendor-published checksum
  • [ ] Test patch in isolated environment (staging/QA) — minimum 2h validation
  • [ ] Deploy to Tier 1 systems first (internet-facing, highest risk)
  • [ ] Monitor for application stability issues post-patch (4h observation window)
  • [ ] Deploy to remaining systems in priority order
  • [ ] Confirm all instances patched via vulnerability scanner re-scan

5.2 Compensating Control Removal

  • [ ] Remove emergency WAF rules only after patch is confirmed deployed and effective
  • [ ] Restore network segmentation to normal state
  • [ ] Re-enable disabled features/services that were shut down during containment
  • [ ] Remove emergency firewall rules
  • [ ] Document all compensating controls that were deployed and their removal timestamps

5.3 Integrity Verification

# Verify no persistence remains after patching
# Check for files modified during the exploitation window
find /opt/vulnerable_app -type f -newer /tmp/exploitation_start_marker \
  \( -name "*.php" -o -name "*.jsp" -o -name "*.aspx" \) -print0 | \
  xargs -0 sha256sum > post_patch_hashes.txt

# Compare against known-good baseline
diff known_good_app_hashes.txt post_patch_hashes.txt
# Windows — verify no unauthorized scheduled tasks remain
# ($_.Date can be null, so filter before casting)
Get-ScheduledTask | Where-Object {
    $_.Date -and [datetime]$_.Date -gt (Get-Date).AddDays(-7) -and
    $_.TaskPath -notlike "\Microsoft\*"
} | Format-Table TaskName, TaskPath, Date, State

Phase 6 — Lessons Learned (Within 2 Weeks)

6.1 Metrics to Capture

| Metric | Definition | Target |
| --- | --- | --- |
| Time to Detect (TTD) | Trigger event → confirmed detection | < 1 hour |
| Time to Contain (TTC) | Confirmed detection → compensating controls active | < 2 hours |
| Time to Remediate (TTR) | Compensating controls → vendor patch deployed | Vendor-dependent |
| Blast Radius | Number of systems/users/data stores affected | Minimize |
| False Positive Rate | Behavioral alerts generated vs. confirmed exploitation | < 20% |
| Compensating Control Effectiveness | Exploitation attempts blocked by interim controls | > 95% |
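TTD, TTC, and TTR fall directly out of the incident timeline. A sketch, assuming ISO-8601 UTC timestamps for the four milestone events (the milestone names are illustrative):

```python
# Hedged sketch: compute TTD/TTC/TTR in hours from timeline milestones.
# Milestone names and timestamp format are illustrative assumptions.
from datetime import datetime

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-8601 UTC timestamps."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

timeline = {
    "trigger":   "2025-01-10T08:00:00Z",  # trigger event observed
    "detected":  "2025-01-10T08:40:00Z",  # detection confirmed
    "contained": "2025-01-10T10:10:00Z",  # compensating controls active
    "patched":   "2025-01-12T09:00:00Z",  # vendor patch deployed
}
ttd = hours_between(timeline["trigger"], timeline["detected"])
ttc = hours_between(timeline["detected"], timeline["contained"])
ttr = hours_between(timeline["contained"], timeline["patched"])
print(f"TTD={ttd:.1f}h (target <1h)  TTC={ttc:.1f}h (target <2h)  TTR={ttr:.1f}h")
```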

6.2 Post-Incident Review Agenda

  • [ ] Timeline reconstruction: when did exploitation begin vs. when was it detected?
  • [ ] Detection gap analysis: why did/didn't behavioral detection catch it?
  • [ ] Compensating control assessment: were interim mitigations effective?
  • [ ] Vendor response evaluation: was the vendor responsive and timely?
  • [ ] Asset inventory accuracy: did we know about all instances of the vulnerable software?
  • [ ] Communication review: were notifications timely and accurate?
  • [ ] Process improvements identified and assigned to owners with deadlines

6.3 Communication Plan Review

| Audience | Timing | Channel | Content |
| --- | --- | --- | --- |
| Executive leadership | Within 1h of confirmation | Phone / emergency bridge | Scope, business impact, containment status |
| Board of Directors | Within 24h (if material) | CISO briefing | Business risk, response actions, timeline |
| Customers (if affected) | Per legal/regulatory guidance | Email / portal notice | What happened, what we did, what they should do |
| Regulators | Per regulatory deadlines | Formal notification | Incident details per regulatory requirements |
| Employees | After containment confirmed | Internal comms | Awareness, any actions required |
| Media (if public) | After legal review | PR team only | Approved statement only |

Runbook Checklist

Preparation

  • [ ] Vendor PSIRT contact matrix is current and tested
  • [ ] Behavioral detection rules deployed and tuned in EDR
  • [ ] WAF in detection mode on all internet-facing applications
  • [ ] Emergency change process pre-approved by CAB
  • [ ] Network micro-segmentation policies pre-defined

Detection & Triage

  • [ ] Zero-day trigger confirmed and IR bridge opened
  • [ ] Severity assigned using classification matrix
  • [ ] CISA KEV catalog checked
  • [ ] Asset inventory queried — all affected instances identified
  • [ ] Exposure assessment completed (internet-facing vs. internal)

Containment

  • [ ] Decision gate: containment vs. monitoring — decision documented
  • [ ] Compensating controls deployed (WAF / network / application level)
  • [ ] EDR behavioral rules pushed to all endpoints
  • [ ] Stakeholder notifications sent
  • [ ] Vendor PSIRT contacted

Eradication

  • [ ] Threat hunt completed across all endpoints
  • [ ] Forensic evidence preserved (memory dumps, disk images, logs)
  • [ ] IOCs shared with vendor and threat intel community
  • [ ] Compromised credentials reset
  • [ ] Persistence mechanisms removed

Recovery

  • [ ] Vendor patch obtained and validated
  • [ ] Patch deployed to all affected systems
  • [ ] Compensating controls removed after patch confirmation
  • [ ] Integrity verification passed
  • [ ] Vulnerability scan confirms remediation

Lessons Learned

  • [ ] Metrics captured (TTD, TTC, TTR)
  • [ ] Post-incident review conducted within 2 weeks
  • [ ] Process improvements documented and assigned
  • [ ] Detection rules updated based on lessons learned
  • [ ] Playbook updated with findings

Nexus SecOps Benchmark Control Mapping

| Control ID | Control Name | Playbook Phase |
| --- | --- | --- |
| Nexus SecOps-ZD-01 | Zero-Day Behavioral Detection | Phase 2 — Detection & Triage |
| Nexus SecOps-ZD-02 | Compensating Control Deployment | Phase 3 — Containment |
| Nexus SecOps-ZD-03 | Vendor Coordination & Disclosure | Phase 4 — Eradication |
| Nexus SecOps-ZD-04 | Emergency Patch Management | Phase 5 — Recovery |
| Nexus SecOps-ZD-05 | Zero-Day Exposure Assessment | Phase 2 — Detection & Triage |
| Nexus SecOps-ZD-06 | Post-Incident Metrics & Review | Phase 6 — Lessons Learned |