SC-020: Critical Infrastructure Attack (Energy Sector)

Scenario Header

Type: ICS/OT / Critical Infrastructure  |  Difficulty: ★★★★★  |  Duration: 3–4 hours  |  Participants: 6–10

Threat Actor: Nation-state APT — destructive, critical infrastructure targeting

Primary ATT&CK Techniques: T1566.001 · T1021.001 · T1078 · T0855 · T0831 · T0826 · T0813


Threat Actor Profile

VOLT SPECTER is a synthetic nation-state threat group modeled on publicly documented ICS/OT attack campaigns including Ukraine 2015 (BlackEnergy/KillDisk), Ukraine 2016 (Industroyer/CrashOverride), and the 2021 Colonial Pipeline ransomware incident. VOLT SPECTER combines sophisticated IT network intrusion capabilities with deep knowledge of industrial control systems, particularly SCADA/EMS platforms used in electric power distribution.

The group operates in two phases: a patient IT compromise campaign (3–6 months of persistent access and reconnaissance) followed by a rapid OT attack designed to manipulate distribution automation systems and cause controlled blackouts in targeted geographic areas. Their objectives are strategic disruption and demonstration of capability — not financial gain.

Motivation: Strategic/political — demonstrate ability to disrupt critical infrastructure as a coercive tool. Secondary objective: intelligence collection on grid architecture and restoration procedures.

Public Research Context

This scenario is informed by publicly documented attacks on the energy sector:

  • Ukraine 2015 (BlackEnergy): First confirmed cyberattack causing a power outage — 230,000 customers affected for 1–6 hours — attackers used spearphishing → IT compromise → OT pivot → manual SCADA manipulation
  • Ukraine 2016 (Industroyer/CrashOverride): Automated ICS malware targeting IEC 61850, IEC 104, and OPC DA protocols — caused a 75-minute blackout in Kyiv
  • Colonial Pipeline 2021: Ransomware (DarkSide) targeting IT systems — pipeline shutdown was a precautionary business decision, not OT compromise — 5-day fuel supply disruption across US Southeast
  • CISA Advisory AA22-103A: Warning of advanced persistent threat activity targeting ICS/SCADA in the energy sector

All technical details in this scenario are synthetic. No real utility infrastructure or operations data is used.


Scenario Narrative

Phase 1 — Spear Phishing & IT Network Compromise (~30 min)

VOLT SPECTER targets PowerGrid Municipal Electric (PGME), a fictional municipal electric utility serving 180,000 customers across a mid-size metropolitan area. PGME operates 12 distribution substations, a 115kV/34.5kV transmission-to-distribution network, and uses a GE/Alstom SCADA/EMS platform for grid management.

The attack begins with a spear phishing campaign targeting PGME's engineering staff. The attacker sends a weaponized email to 6 engineers in the Protection & Controls department, impersonating the IEEE Power & Energy Society:

Subject: "IEEE PES — Updated Relay Settings Standard (C37.90-2026 Draft)"
Attachment: IEEE_C37_90_2026_Draft_Review.docx (macro-enabled Word document)

The document contains a VBA macro that downloads a second-stage payload from 198.51.100.200 — a custom backdoor dubbed GridShell. Engineer Michael Torres opens the document and enables macros, believing it to be a legitimate standards draft from IEEE.

GridShell establishes persistence via a scheduled task and begins IT network reconnaissance. Over the next 90 days, VOLT SPECTER maps PGME's IT infrastructure, harvests credentials via Mimikatz, and identifies the critical network boundary: the IT/OT demilitarized zone (DMZ) separating the corporate network from the SCADA/EMS network.
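GridShell's persistence trick, a binary named svchost.exe running from C:\ProgramData, is detectable with a simple masquerading check over a scheduled-task inventory. The sketch below is illustrative Python; the task records and field names are assumptions, standing in for an EDR export or the output of `schtasks /query /fo CSV /v`.

```python
# Sketch: triage a scheduled-task inventory for binaries masquerading as
# system executables. Task records and field names are illustrative.

import ntpath

# System binaries that should only ever run from these directories
SYSTEM_BINARIES = {
    "svchost.exe": [r"c:\windows\system32", r"c:\windows\syswow64"],
    "lsass.exe":   [r"c:\windows\system32"],
}

def flag_masquerading_tasks(tasks):
    """Return tasks whose executable name mimics a system binary
    but runs from an unexpected directory."""
    hits = []
    for task in tasks:
        exe = task["executable"].lower()
        name = ntpath.basename(exe)
        directory = ntpath.dirname(exe)
        if name in SYSTEM_BINARIES and directory not in SYSTEM_BINARIES[name]:
            hits.append(task)
    return hits

tasks = [
    {"name": r"\Microsoft\Windows\Maintenance\SystemHealthCheck",
     "executable": r"C:\ProgramData\Microsoft\Health\svchost.exe"},   # GridShell-style
    {"name": r"\Microsoft\Windows\Defrag\ScheduledDefrag",
     "executable": r"C:\Windows\System32\svchost.exe"},               # legitimate
]

for t in flag_masquerading_tasks(tasks):
    print(f"SUSPICIOUS: {t['name']} -> {t['executable']}")
```

Running the check against the synthetic inventory flags only the ProgramData task, which is exactly the artifact recorded in the CrowdStrike telemetry below.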

Evidence Artifacts:

Artifact Detail
Email Gateway Inbound from standards@ieee-pes-review[.]org — SPF: FAIL — DMARC: FAIL — Attachment: IEEE_C37_90_2026_Draft_Review.docx — 2025-11-15T09:22:00Z
Endpoint (CrowdStrike) Process: WINWORD.EXE → cmd.exe → powershell.exe -enc ... — Host: PGME-ENG-TORRES — User: m.torres — 2025-11-15T09:24:33Z
Endpoint (CrowdStrike) Scheduled task created: \Microsoft\Windows\Maintenance\SystemHealthCheck — Executable: C:\ProgramData\Microsoft\Health\svchost.exe (GridShell backdoor) — 2025-11-15T09:25:01Z
Active Directory Credential harvesting: DCSync attack detected by MDI (Microsoft Defender for Identity) — Source: PGME-ENG-TORRES — Domain admin hash extracted: pgme-admin — 2025-12-20T02:14:00Z — Alert severity: High — investigated by L2, classified as false positive
Network Flow C2 beacon: PGME-ENG-TORRES → 198.51.100.200:443 — Interval: 4 hours — HTTPS with JA3 fingerprint matching GridShell — Active since 2025-11-15
Phase 1 — Discussion Inject

Technical: The spear phishing email failed SPF and DMARC checks but was still delivered. What email gateway configuration allows DMARC-failing emails to be delivered? What would a p=reject DMARC policy have prevented?
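As a reference point for the discussion, the sketch below (illustrative Python with a synthetic Authentication-Results header) shows how a gateway's disposition of the same DMARC-failing message changes with the published policy. Note the policy consulted is the one published by the From: domain; the scenario gives both SPF and DMARC as FAIL for this message.

```python
# Sketch: gateway disposition of a DMARC-failing message under each
# published policy (p= tag). Header text and domain are synthetic.

import re

def dmarc_disposition(auth_results: str, policy: str) -> str:
    """Return the delivery action for a message given its DMARC
    evaluation result and the sending domain's published policy."""
    m = re.search(r"dmarc=(\w+)", auth_results)
    result = m.group(1) if m else "none"
    if result == "pass":
        return "deliver"
    # DMARC failed: action depends on the published policy
    return {"none": "deliver",
            "quarantine": "quarantine",
            "reject": "reject"}.get(policy, "deliver")

header = ("Authentication-Results: mx.pgme.example; "
          "spf=fail smtp.mailfrom=ieee-pes-review.org; "
          "dmarc=fail header.from=ieee-pes-review.org")

print(dmarc_disposition(header, "quarantine"))  # quarantine: held, but a 4h auto-release undoes it
print(dmarc_disposition(header, "reject"))      # reject: bounced at SMTP time, never reaches the user
```

Under p=reject the message is refused before any user can enable macros; under p=quarantine the control is only as strong as the quarantine-release process, which is the gap this scenario exploits.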

Decision: The DCSync alert from Microsoft Defender for Identity was investigated by an L2 analyst and classified as a false positive on 2025-12-20. This was 35 days after initial compromise and the attacker had extracted domain admin credentials. What investigation steps should the L2 analyst have taken? What process failure allowed a DCSync alert — one of the highest-fidelity detections in Active Directory — to be dismissed?

Expected Analyst Actions:

  • [ ] Investigate the DCSync alert — this is a critical-severity event that should never be dismissed without full investigation
  • [ ] Identify the source of the PowerShell execution chain from WINWORD.EXE
  • [ ] Analyze the C2 beacon pattern — 4-hour interval HTTPS to 198.51.100.200
  • [ ] Check if any other engineers opened the phishing document
  • [ ] Search for GridShell indicators across all endpoints
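The 4-hour beacon in the network-flow artifact is the kind of periodicity that can be scored statistically: near-constant inter-arrival times (low coefficient of variation) are a strong beacon indicator. A minimal sketch, with illustrative thresholds and synthetic timestamps:

```python
# Sketch: flag periodic C2 beaconing from flow timestamps. A low
# coefficient of variation in inter-arrival times (here ~4 h intervals
# with light jitter) suggests a beacon. Thresholds are illustrative.

from datetime import datetime, timedelta
from statistics import mean, stdev

def is_beacon(timestamps, max_cv=0.1, min_flows=6):
    """True if the flow series is suspiciously periodic."""
    if len(timestamps) < min_flows:
        return False
    deltas = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    cv = stdev(deltas) / mean(deltas)  # coefficient of variation
    return cv < max_cv

# Synthetic flows: workstation -> 198.51.100.200:443 every ~4 hours
start = datetime(2025, 11, 15, 9, 30)
jitter = [0, 42, -17, 25, -8, 31, 12, -20]  # seconds of jitter per flow
flows = [start + timedelta(hours=4 * i, seconds=jitter[i]) for i in range(8)]

print(is_beacon(flows))  # True: near-constant 4-hour interval
```

Interactive user traffic produces highly irregular inter-arrival times and scores well above the threshold, so a detector like this separates the two cleanly once enough flows accumulate.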


Phase 2 — IT/OT Boundary Pivot (~35 min)

After 90 days of IT network access, VOLT SPECTER has mapped PGME's architecture:

  • Corporate workstations, email servers, file shares
  • Active Directory domain: pgme.local
  • Domain admin credentials compromised: pgme-admin
  • VPN gateway for remote access: 10.10.1.5
  • Historian server: 172.16.0.10 (OSIsoft PI) — mirrors real-time SCADA data to IT network for reporting
  • Jump server: 172.16.0.20 — used by engineers to access OT network (RDP-based)
  • Patch management server: 172.16.0.30 — WSUS for OT workstations
  • SCADA/EMS server: 192.168.100.10 (GE/Alstom e-terra platform)
  • Human-Machine Interface (HMI) workstations: 192.168.100.20–25 (6 operator stations)
  • RTU/IED network gateway: 192.168.100.50 — communicates with field devices via DNP3
  • Engineering workstation: 192.168.100.60 — used for relay configuration and firmware updates

The critical pivot point is the jump server (172.16.0.20). PGME engineers use this server to RDP into OT network workstations for maintenance. The jump server requires Active Directory authentication — and VOLT SPECTER has domain admin credentials.

On 2026-02-10, the attacker uses the compromised pgme-admin account to RDP to the jump server, then pivots to the SCADA/EMS engineering workstation (192.168.100.60). From this workstation, the attacker can interact with the GE/Alstom e-terra SCADA platform.

The attacker spends 48 hours studying the SCADA system — mapping substations, understanding breaker control sequences, and identifying which distribution feeders serve the highest-population areas.

Evidence Artifacts:

Artifact Detail
Jump Server RDP Log Login: pgme-admin — Source IP: 10.10.15.47 (compromised workstation) — Destination: 172.16.0.20 — 2026-02-10T01:14:00Z
Jump Server RDP Log Outbound RDP: 172.16.0.20 → 192.168.100.60 (OT engineering workstation) — 2026-02-10T01:16:33Z
OT Firewall (Palo Alto) RDP session: 172.16.0.20 → 192.168.100.60 — Rule: "DMZ-to-OT-Engineering" — Action: Allow — Note: Rule exists for legitimate maintenance
SCADA Historian (PI) Query pattern change: engineering workstation 192.168.100.60 queried all 12 substation configurations in 2 hours — Normal pattern: 1–2 substations per maintenance window
OT Network IDS (Claroty) Alert: "New RDP session to engineering workstation from DMZ" — Severity: Medium — 2026-02-10T01:16:33Z — Alert sent to OT security inbox (checked weekly)
Phase 2 — Discussion Inject

Technical: The attacker pivoted from IT to OT through the jump server using stolen Active Directory credentials. What IT/OT boundary controls would prevent this? Consider: separate authentication for OT (no AD trust), multi-factor authentication on jump servers, privileged access workstations (PAWs), and network segmentation with application-layer inspection.

Decision: The Claroty OT IDS alert was sent to the "OT security inbox" which is checked weekly. OT security monitoring is managed by the control systems engineering team, not the SOC — because the SOC "doesn't understand OT." How do you bridge the IT/OT security monitoring gap? Should OT alerts route to the SOC, and if so, what training and context do SOC analysts need?

Expected Analyst Actions:

  • [ ] Review all RDP sessions to the jump server in the past 90 days — identify anomalous access patterns
  • [ ] Verify that pgme-admin account should have jump server access — check authorization records
  • [ ] Analyze SCADA historian query patterns — flag the bulk substation configuration queries
  • [ ] Review OT firewall rules — assess whether DMZ-to-OT RDP rules are appropriately scoped
  • [ ] Escalate the Claroty alert — treat DMZ-to-OT RDP as high severity
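The historian query anomaly, all 12 substation configurations pulled in one window against a normal pattern of 1–2, reduces to a grouped distinct-count with a baseline threshold. A minimal sketch with an assumed log schema:

```python
# Sketch: flag bulk substation-configuration reads from the historian.
# Normal maintenance touches 1-2 substations per window; querying all 12
# in one session is anomalous. The log schema here is an assumption.

from collections import defaultdict

BASELINE_MAX_SUBSTATIONS = 2  # per source host, per maintenance window

def flag_bulk_queries(query_log):
    """Group historian config queries by source host and flag sources
    that touched more substations than the baseline allows."""
    touched = defaultdict(set)
    for entry in query_log:
        touched[entry["source"]].add(entry["substation"])
    return {src: subs for src, subs in touched.items()
            if len(subs) > BASELINE_MAX_SUBSTATIONS}

# Synthetic window: the engineering workstation sweeps all 12 substations,
# while an HMI station performs one routine lookup.
log = ([{"source": "192.168.100.60", "substation": f"SUB{n}"} for n in range(1, 13)]
       + [{"source": "192.168.100.22", "substation": "SUB5"}])

for src, subs in flag_bulk_queries(log).items():
    print(f"ALERT: {src} queried {len(subs)} substation configs in one window")
```

The same grouped-count logic works whether the log comes from PI audit records or a SIEM; the hard part operationally is establishing the per-source baseline, not the detection itself.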


Phase 3 — Distribution Automation Manipulation & Blackout (~30 min)

On 2026-02-14 at 6:15 PM local time (23:15 UTC), chosen for maximum impact (peak residential demand on a winter evening, with darkness since about 5 PM), VOLT SPECTER executes the attack:

Attack Sequence (18 minutes):

Time (UTC) Action Target Effect
23:15:00 Open breakers on Feeder 4, 7, 11 Substations 3, 5, 9 47,000 customers lose power
23:17:00 Disable automatic reclosing on all feeders All 12 substations Prevents automated restoration
23:19:00 Modify relay protection settings Substations 3, 5, 9 If operators manually close breakers, overcurrent protection won't trip — risk of equipment damage
23:22:00 Lock out HMI operator workstations 192.168.100.20–25 Operators cannot see or control SCADA system
23:25:00 Overwrite SCADA/EMS firmware update partition 192.168.100.10 Complicate system recovery — requires firmware reload from offline backup
23:28:00 Deploy KillDisk-variant wiper on IT network 14 IT workstations Destroy forensic evidence on IT side
23:33:00 Delete jump server RDP logs and OT firewall logs 172.16.0.20, OT FW Cover tracks (but network tap captured all traffic)

Impact:

  • 47,000 customers lose power across 3 distribution zones
  • Duration: 6 hours for Feeder 4 and 7 (manual restoration by field crews dispatching to substations), 14 hours for Feeder 11 (relay settings had to be manually verified before safe re-energization)
  • Cascading effects: 2 hospitals on backup generators (8-hour fuel capacity), 847 traffic signals dark, 14 nursing homes affected
  • No safety system compromise — the attacker deliberately avoided transmission-level systems and generation facilities

Evidence Artifacts:

Artifact Detail
SCADA Event Log (survived on historian) Breaker OPEN commands: Feeder 4 (SUB3-BKR-F4), Feeder 7 (SUB5-BKR-F7), Feeder 11 (SUB9-BKR-F11) — Source: Engineering workstation 192.168.100.60 — 2026-02-14T23:15:00Z
SCADA Event Log Auto-reclose DISABLED: all feeders — 2026-02-14T23:17:00Z
SCADA Event Log Relay settings modified: overcurrent pickup increased 500% on Feeders 4, 7, 11 — 2026-02-14T23:19:00Z
HMI Workstations All 6 operator stations locked — Screen displays: "Session terminated" — 2026-02-14T23:22:00Z
Network Tap (Garland) Full packet capture of all IT/OT DMZ traffic — Preserved on write-once storage — 14 days of traffic available for forensic analysis
NERC E-ISAC PGME files OE-417 (Electric Emergency Incident Report) — 2026-02-14T23:45:00Z (within 1 hour of event)
Phase 3 — Discussion Inject

Technical: The attacker modified relay protection settings to increase overcurrent pickup by 500%, meaning that if operators manually closed breakers, the overcurrent protection wouldn't trip — potentially causing transformer damage. How do engineers safely verify relay settings before re-energization? What is the role of the safety instrumented system (SIS) in preventing equipment damage even when protection settings are compromised?

Decision: 47,000 customers have lost power including 2 hospitals and 14 nursing homes. You must simultaneously manage: (1) safe grid restoration (highest priority — human safety), (2) forensic evidence preservation, (3) NERC CIP reporting, (4) public communication, and (5) law enforcement coordination. Assign priority order and responsible teams. What happens if restoring power quickly conflicts with preserving forensic evidence?

Expected Analyst Actions:

  • [ ] Dispatch field crews to affected substations for manual breaker restoration
  • [ ] Verify all relay protection settings before re-energizing any feeder — compare against offline configuration baselines
  • [ ] Preserve SCADA historian data and network tap captures before any system recovery
  • [ ] File NERC OE-417 report within 1 hour (compliance requirement)
  • [ ] Activate incident command structure — assign IC, safety officer, public information officer
  • [ ] Contact CISA and FBI — this is a cyberattack on critical infrastructure
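The relay-settings verification step amounts to diffing live settings against the protected offline baseline and refusing to energize on any mismatch. A minimal sketch with illustrative setting names and values (the 500% pickup tampering from the event log is modeled here as 600 A raised to 3,600 A; a real check would pull live values via the vendor relay tool):

```python
# Sketch: diff live relay settings against an offline baseline before
# re-energization. Setting names/values are illustrative; a real workflow
# reads the relay via the vendor tool and a protected baseline store.

def verify_relay_settings(live, baseline, tolerance=0.0):
    """Return settings that deviate from baseline. A feeder is safe to
    re-energize only when this list is empty."""
    deviations = []
    for name, expected in baseline.items():
        actual = live.get(name)
        if actual is None or abs(actual - expected) > tolerance:
            deviations.append((name, expected, actual))
    return deviations

baseline = {"oc_pickup_amps": 600.0, "time_dial": 2.0, "reclose_enabled": 1}
live     = {"oc_pickup_amps": 3600.0,  # tampered: pickup raised 500%
            "time_dial": 2.0,
            "reclose_enabled": 0}      # tampered: auto-reclose disabled

for name, want, got in verify_relay_settings(live, baseline):
    print(f"DO NOT ENERGIZE: {name} expected {want}, found {got}")
```

This is also the logic behind the CIP-010-style automated baseline monitoring recommended later in the scenario: run the same diff continuously and alert on any change outside an approved maintenance window, rather than only during restoration.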


Phase 4 — Recovery, Forensics & Regulatory Response (~30 min)

Grid Restoration Timeline:

Time Action Status
T+0h Attack executed — 47,000 customers without power Outage confirmed
T+1h Field crews dispatched to all 3 affected substations En route
T+2h Feeder 4 manually restored (relay settings verified from paper backup) 15,000 customers restored
T+4h Feeder 7 manually restored 18,000 customers restored
T+6h Hospital 1 (St. Mary's) restored via dedicated feeder switch Critical load restored
T+8h Hospital 2 (Regional Medical) generator fuel resupply completed Backup power maintained
T+14h Feeder 11 restored (relay reconfiguration required firmware reload) All 47,000 customers restored

Forensic Investigation Findings:

  • GridShell backdoor code analysis: custom C++ implant with obfuscated C2 protocol — no known malware family match
  • Infrastructure: C2 server at 198.51.100.200 hosted on bulletproof VPS — registered with stolen identity
  • Operational schedule: all attacker activity occurred between 01:00–05:00 UTC (consistent with Eastern European or Central Asian working hours)
  • SCADA manipulation demonstrated expert knowledge of GE/Alstom e-terra platform — attacker had access to operator training materials (found on compromised file share)

Control Gap Analysis:

Finding Control Gap
Spear phishing delivered despite DMARC failure Email gateway configured to quarantine but auto-release after 4 hours
DCSync alert dismissed as false positive L2 analyst training inadequate — no playbook for DCSync investigation
IT/OT boundary: AD credentials valid on jump server No separate authentication for OT access — AD trust extends across boundary
Jump server to OT RDP: allowed by firewall rule Rule too broad — allows any DMZ-to-OT-Engineering RDP without time restriction
Relay settings modified via SCADA No change management or approval workflow for protection settings changes
HMI lockout successful No out-of-band emergency access to SCADA system
KillDisk deployed on 14 IT workstations No application whitelisting — endpoint allowed execution of unknown binary

NERC CIP Compliance Assessment:

Standard Requirement PGME Status Finding
CIP-005-7 Electronic Security Perimeters Partial IT/OT boundary exists but AD trust undermines authentication separation
CIP-007-6 System Security Management Non-compliant No application whitelisting on OT workstations, unpatched systems
CIP-008-6 Incident Reporting & Response Compliant OE-417 filed within 1 hour, ICS-CERT notified within 24 hours
CIP-010-4 Configuration Change Management Non-compliant No automated baseline monitoring for relay protection settings
CIP-011-3 Information Protection Partial Operator training materials stored on unprotected file share

Notification Requirements:

  • [x] NERC OE-417 — filed within 1 hour (compliant)
  • [x] NERC E-ISAC — notified within 1 hour (compliant)
  • [x] CISA — notified within 24 hours per CIRCIA (compliant)
  • [x] FBI — notified within 24 hours (compliant)
  • [x] DOE — notified via OE-417 and direct communication (compliant)
  • [ ] State public utility commission — briefed within 48 hours (pending)
  • [ ] Affected municipalities — briefed within 72 hours (pending)

Evidence Artifacts:

Artifact Detail
NERC CIP Audit Preliminary findings: 2 areas of non-compliance (CIP-007-6, CIP-010-4) — Potential enforcement action — PGME has 90 days to submit mitigation plan
CISA Advisory CISA issues ICS-ALERT-2026-045 based on PGME incident — Indicators shared with energy sector via E-ISAC
DOE Notification DOE Office of Cybersecurity, Energy Security, and Emergency Response (CESER) engaged — Technical assistance offered
Forensic Report 247-page report by Dragos Inc (retained IR firm) — Full kill chain documented — Delivered 2026-03-15
Phase 4 — Discussion Inject

Technical: The root cause analysis identified 7 control failures that enabled the attack. Which single control, if implemented, would have had the greatest impact on preventing or detecting this attack earlier? Defend your answer with the attack chain analysis.

Decision: NERC CIP audit found 2 areas of non-compliance. PGME faces potential enforcement action including financial penalties. The CISO must present a mitigation plan to the NERC regional entity within 90 days. What investments and organizational changes should be in the mitigation plan, and how do you prioritize them given a limited budget? The board is asking: "How much should we spend on OT security, and how do we know it's enough?"

Expected Analyst Actions:

  • [ ] Complete forensic timeline — from initial phishing through grid restoration
  • [ ] Remediate all 7 identified control gaps — prioritize by risk and NERC CIP compliance
  • [ ] Submit NERC CIP mitigation plan within 90-day window
  • [ ] Implement IT/OT authentication separation — eliminate AD trust across boundary
  • [ ] Deploy OT-specific monitoring (Dragos, Claroty, or Nozomi) with 24/7 alerting to SOC
  • [ ] Establish relay setting baseline monitoring with automated change detection
  • [ ] Conduct sector-wide threat briefing via E-ISAC


Detection Opportunities

Phase Technique ATT&CK (ICS) Detection Method Difficulty
1 Spear phishing with macro T1566.001 Email gateway: block macro-enabled documents from external senders Easy
1 DCSync credential theft T1003.006 Microsoft Defender for Identity: DCSync detection (high fidelity) Easy
1 C2 beacon (GridShell) T1071.001 Network: detect periodic HTTPS beaconing via JA3/JA4 fingerprinting Medium
2 IT/OT pivot via jump server T1021.001 OT IDS: alert on RDP sessions from DMZ to OT — treat as high severity Medium
2 SCADA historian bulk query T0888 Historian: detect anomalous query patterns across all substations Medium
3 Breaker manipulation T0831 SCADA event log: alert on breaker commands from engineering workstation (not HMI) Medium
3 Relay setting modification T0855 OT change management: alert on protection setting changes outside maintenance window Easy
3 HMI lockout T0826 Monitor HMI availability — alert when all operator stations go offline simultaneously Easy
4 KillDisk wiper T0822 Application whitelisting: block execution of unknown binaries on OT/IT workstations Easy
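The "all HMIs offline simultaneously" detection (T0826 row above) is cheap to implement once per-station heartbeats exist: one station down is routine maintenance, all six down at once is an incident. A minimal sketch with synthetic heartbeat results; a real monitor would poll each station via ICMP, a SCADA keepalive, or an agent check on a short interval:

```python
# Sketch: alert when operator HMI stations go dark simultaneously.
# Heartbeat results are synthetic; station IPs match the scenario.

HMI_STATIONS = [f"192.168.100.{n}" for n in range(20, 26)]  # 6 operator stations

def hmi_alert(heartbeats, critical_fraction=1.0):
    """Return an alert level based on how many HMIs are unreachable.
    One station down is likely maintenance; all down is an incident."""
    down = [h for h in HMI_STATIONS if not heartbeats.get(h, False)]
    if down and len(down) >= critical_fraction * len(HMI_STATIONS):
        return ("CRITICAL", down)
    if down:
        return ("WARNING", down)
    return ("OK", [])

# 23:22:00Z — all six stations stop responding at once
status = {ip: False for ip in HMI_STATIONS}
level, down = hmi_alert(status)
print(level, len(down))  # CRITICAL 6
```

Because this signal is availability-based rather than content-based, it survives even when the attacker controls the SCADA application itself, which is why it rates "Easy" in the table despite firing at the worst moment of the attack.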

Key Discussion Questions

  1. The attacker maintained IT network access for 90 days before pivoting to OT. What IT-side detections should have caught the intrusion earlier, and why did they fail? Map each missed detection to a specific process, technology, or staffing gap.
  2. The IT/OT boundary was bridged via Active Directory credentials on the jump server. Is it realistic to operate OT networks with completely separate authentication? What are the usability and operational costs, and how do you sell this to the engineering team?
  3. The attacker modified relay protection settings to prevent safe manual restoration. This moves beyond disruption into potential equipment destruction and human safety risk. How does this change your incident response priorities compared to a purely IT-focused cyberattack?
  4. NERC CIP compliance was partial before the attack. Is compliance-driven security sufficient for critical infrastructure, or does it create a false sense of security? What goes beyond compliance?
  5. The energy sector faces nation-state threats but municipal utilities often have smaller security teams and budgets than investor-owned utilities. How should the sector address this asymmetry? What role should CISA, DOE, and E-ISAC play in supporting smaller utilities?

Debrief Guide

What Went Well

  • NERC OE-417 filed within 1 hour — incident reporting procedures worked
  • Network tap at IT/OT boundary preserved full packet capture — critical forensic evidence survived the wiper attack
  • Field crews were able to manually restore power at substations — operational resilience in manual mode
  • Paper backups of relay settings enabled safe verification before re-energization

Key Learning Points

  • IT/OT convergence creates the attack path — the jump server with AD authentication is the most critical vulnerability in most utility architectures
  • OT security monitoring must be 24/7 — weekly inbox checks for OT IDS alerts are fundamentally inadequate for critical infrastructure
  • DCSync is a high-fidelity detection — dismissing it as a false positive gave the attacker nearly two more months of undetected activity, including the OT compromise
  • Manual override capability is essential — the ability to manually operate breakers at substations was the ultimate safety net
  • NERC CIP compliance is a floor, not a ceiling — compliance gaps directly correlated with attack success factors

Action Items

  • [ ] Implement separate OT authentication — eliminate Active Directory trust across IT/OT boundary
  • [ ] Deploy OT network monitoring (Dragos Platform or equivalent) with 24/7 SOC integration
  • [ ] Implement application whitelisting on all OT workstations and servers
  • [ ] Establish automated relay setting baseline monitoring with change alerting
  • [ ] Deploy deception technology (honeypots) in IT/OT DMZ to detect lateral movement
  • [ ] Conduct annual ICS tabletop exercise with engineering, operations, IT security, and executive leadership
  • [ ] Engage CISA for complimentary ICS security assessment (available to critical infrastructure operators)
  • [ ] Implement out-of-band emergency access to SCADA system (independent of HMI workstations)
  • [ ] Review and harden email gateway — enforce DMARC reject policy, block macro-enabled documents from external senders
  • [ ] Submit NERC CIP mitigation plan addressing CIP-007-6 and CIP-010-4 non-compliance findings

References