AI-Powered Threat Detection — Beyond Rules

Your SIEM has 4,200 correlation rules. Your SOC analysts triaged 11,000 alerts last month. They investigated 900. They escalated 47. Of those 47, exactly 12 were true positives. The math is brutal: a 0.1% true positive rate across all generated alerts, and an analyst team spending roughly 99% of its investigation time chasing phantoms.

Rule-based detection served us well for two decades. Signature matching catches known malware. Correlation rules flag known attack patterns. Threshold alerts fire when login failures exceed a count. But the threat landscape shifted beneath our feet. Adversaries adopted living-off-the-land techniques that look identical to legitimate administration. Polymorphic malware mutates faster than signatures propagate. Zero-day exploits arrive with no signatures at all. And the sheer volume of telemetry — billions of events per day in enterprise environments — overwhelms any static ruleset.

Machine learning does not replace rules. It fills the gaps that rules cannot cover: the unknown unknowns, the subtle behavioral shifts, the patterns hidden in dimensionality that no human analyst could manually correlate across a million daily events.

This post covers the practical reality of deploying ML-based detection in security operations — the paradigms, the architectures, the pitfalls, and a full synthetic case study where a fictional financial services firm caught an APT that 4,200 rules missed.


1. The Limitations of Rule-Based Detection

Before we discuss what ML adds, we need to be precise about where rules fail. This is not about disparaging signature-based detection — it is about understanding its boundaries so we can architect around them.

Signature Fatigue

Every new threat variant requires a new signature. Every new signature requires testing, deployment, and tuning. Enterprise detection engineering teams typically maintain between 2,000 and 10,000 correlation rules. Each rule has a lifecycle:

┌─────────────┐   ┌─────────────┐   ┌──────────────┐   ┌─────────────┐
│   Threat    │──▶│    Rule     │──▶│   Testing    │──▶│ Production  │
│   Intel     │   │  Authoring  │   │   & Tuning   │   │ Deployment  │
└─────────────┘   └─────────────┘   └──────────────┘   └─────────────┘
      │                                                        │
      │            Average time: 3-14 days                     │
      │                                                        │
      └──── Adversary dwell time: 16 days (median) ───────────┘

The window between threat emergence and detection deployment is a blind spot. Adversaries know this. They accelerate their campaigns in the gap.

Zero-Day Blind Spots

By definition, zero-day exploits have no signatures. There are no IOCs to match, no file hashes to compare, no domain blocklists to check. The exploit is unknown until someone discovers it — often after the damage is done. Rule-based detection is inherently reactive: it can only detect what it has been told to look for.

Alert Volume and Analyst Burnout

The average enterprise SOC generates between 10,000 and 50,000 alerts per day. Research consistently shows that analysts can meaningfully investigate approximately 20-30 alerts per shift. The gap between generation and investigation creates a selection problem: analysts must decide which alerts to ignore, introducing human bias and missed detections.

Detection Challenge    | Rule-Based Limitation             | ML-Based Advantage
-----------------------+-----------------------------------+--------------------------------------
Zero-day exploits      | No signature exists               | Behavioral anomaly detection
Living-off-the-land    | Legitimate tools used maliciously | Context-aware behavioral baselines
Insider threats        | No external IOCs                  | UEBA deviation scoring
Alert volume           | Every rule fires independently    | Automated triage and correlation
Polymorphic malware    | Hash-based matching fails         | Feature-based classification
Encrypted traffic      | Cannot inspect payload            | Metadata and traffic pattern analysis
Slow-and-low attacks   | Below threshold triggers          | Long-term behavioral trend analysis

Rules + ML = Defense in Depth

The goal is never to replace rules with ML. Rules provide deterministic, explainable, fast detection for known threats. ML extends coverage to the unknown. The best detection architectures layer both — using rules as the foundation and ML as the expansion layer.

For foundational detection engineering concepts, see Chapter 5: Detection Engineering at Scale.


2. ML Detection Paradigms

Machine learning for security detection is not a single technique — it is a family of approaches, each suited to different problem types. Choosing the wrong paradigm is the most common failure mode in ML security projects.

Supervised Learning

What it does: Learns from labeled examples to classify new data.

How it works: The model trains on a dataset where each sample is labeled as "malicious" or "benign" (or more granular categories). It learns the boundary between classes and applies that boundary to new, unseen data.

When to use it: When you have abundant labeled data. Malware classification, phishing URL detection, spam filtering, and known-attack-pattern recognition all benefit from supervised approaches.

Algorithms commonly used:

  • Random Forests: Ensemble of decision trees. Robust against overfitting. Excellent for tabular security data (log features, network flow metadata). Interpretable via feature importance.
  • Gradient Boosted Trees (XGBoost, LightGBM): Higher accuracy than random forests on structured data. Widely used in competition-winning security ML models.
  • Deep Neural Networks: Effective for unstructured data — raw network packets, executable byte sequences, natural language in emails.
  • Support Vector Machines: Strong for high-dimensional, small-sample problems. Good for feature-rich malware classification.

Example — Malware Classification Pipeline:

# Supervised malware classifier — synthetic training pipeline
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Load synthetic feature-engineered malware dataset
# Features: file_entropy, import_count, section_count,
#           packed_ratio, string_count, api_call_patterns
df = pd.read_csv("synthetic_malware_features.csv")

X = df.drop(columns=["label", "sha256"])
y = df["label"]  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(
    n_estimators=500,
    max_depth=20,
    min_samples_leaf=5,
    class_weight="balanced",  # Handle class imbalance
    random_state=42,
    n_jobs=-1
)

model.fit(X_train, y_train)
predictions = model.predict(X_test)

print(classification_report(y_test, predictions,
      target_names=["Benign", "Malicious"]))

# Feature importance for explainability
importances = pd.Series(
    model.feature_importances_,
    index=X.columns
).sort_values(ascending=False)
print("\nTop 10 Features:")
print(importances.head(10))

Unsupervised Learning

What it does: Finds patterns in data without labels.

How it works: The model identifies structure — clusters, outliers, compressed representations — in raw data without being told what to look for. Anything that deviates from the learned structure is flagged as anomalous.

When to use it: When you lack labeled data (most security scenarios). Insider threat detection, zero-day discovery, network anomaly detection, and behavioral baselining all require unsupervised approaches because you cannot label what you have never seen.

Algorithms commonly used:

  • Isolation Forest: Isolates anomalies by random partitioning. Efficient on high-dimensional data. Low false positive rates when properly tuned.
  • Autoencoders: Neural networks trained to reconstruct their input. High reconstruction error indicates anomaly. Excellent for complex, nonlinear behavioral baselines.
  • DBSCAN: Density-based clustering that naturally identifies outliers. Useful for network traffic clustering where "normal" traffic forms dense regions.
  • One-Class SVM: Learns the boundary around "normal" data. Anything outside the boundary is anomalous.
  • Gaussian Mixture Models: Probabilistic clustering that assigns anomaly probability scores rather than binary labels.
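As a quick illustration of the outlier behavior described above, here is a minimal DBSCAN sketch on synthetic flow features. The feature values, `eps`, and `min_samples` are all invented for illustration, not tuned recommendations:

```python
# DBSCAN on synthetic network-flow features: points outside any
# dense region receive the cluster label -1 (noise/outlier).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 300 "normal" flows: (bytes_sent, duration_sec) around a dense center
normal = rng.normal(loc=[5000.0, 30.0], scale=[500.0, 5.0], size=(300, 2))
# Two injected outliers: a huge slow transfer and a huge fast one
odd = np.array([[50000.0, 3600.0], [45000.0, 2.0]])
X = StandardScaler().fit_transform(np.vstack([normal, odd]))

labels = DBSCAN(eps=0.7, min_samples=10).fit_predict(X)
outlier_idx = np.where(labels == -1)[0]
print(f"Outlier indices: {outlier_idx}")
```

Because "normal" traffic forms a dense region and the two injected flows sit far outside it after scaling, only those two points fail to join a cluster.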

Example — Network Anomaly Detection with Isolation Forest:

# Unsupervised network anomaly detection — synthetic data
from sklearn.ensemble import IsolationForest
import numpy as np

# Synthetic network flow features
# bytes_sent, bytes_received, duration_sec, packet_count,
# unique_dest_ports, unique_dest_ips, dns_query_count
normal_traffic = np.random.multivariate_normal(
    mean=[5000, 12000, 30, 150, 3, 5, 10],
    cov=np.diag([2000, 5000, 15, 80, 2, 3, 5]) ** 2,
    size=10000
)

model = IsolationForest(
    n_estimators=200,
    contamination=0.01,  # Expected 1% anomaly rate
    max_features=0.8,
    random_state=42
)

model.fit(normal_traffic)

# Score new observations (lower = more anomalous)
test_sample = np.array([[500000, 50, 3600, 5, 47, 200, 500]])
score = model.decision_function(test_sample)
prediction = model.predict(test_sample)  # -1 = anomaly

print(f"Anomaly score: {score[0]:.4f}")
print(f"Prediction: {'ANOMALY' if prediction[0] == -1 else 'Normal'}")

Reinforcement Learning

What it does: Learns optimal actions through trial and error in an environment.

How it works: An agent interacts with an environment, receives rewards for good actions and penalties for bad ones, and learns a policy that maximizes cumulative reward.

When to use it: Adaptive response automation, dynamic firewall rule optimization, honeypot interaction strategies, and automated penetration testing. Less common in detection but emerging for active defense scenarios.

Security applications:

  • Automated triage: RL agent learns which alerts to escalate based on analyst feedback
  • Dynamic deception: Honeypots that adapt their behavior based on attacker interactions
  • Adaptive blocking: Firewall policies that adjust thresholds based on observed attack patterns
  • Penetration testing: Automated red team agents that learn attack paths
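To make the automated-triage idea concrete, here is a toy, self-contained sketch: a bandit-style Q-learning agent that learns from simulated analyst feedback which severity buckets are worth escalating. The states, actions, and reward values are all invented for illustration; a real system would learn from actual analyst dispositions.

```python
# Toy Q-learning triage agent (synthetic reward model).
import random

random.seed(7)

STATES = ["low", "medium", "high"]   # coarse alert severity buckets
ACTIONS = ["dismiss", "escalate"]

# Simulated analyst feedback: escalating high-severity alerts is rewarded,
# escalating low-severity ones wastes analyst time and is penalized.
TRUE_REWARD = {
    ("low", "dismiss"): 1.0,     ("low", "escalate"): -1.0,
    ("medium", "dismiss"): -0.2, ("medium", "escalate"): 0.5,
    ("high", "dismiss"): -2.0,   ("high", "escalate"): 2.0,
}

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

for _ in range(5000):
    state = random.choice(STATES)            # a new alert arrives
    if random.random() < epsilon:            # explore occasionally
        action = random.choice(ACTIONS)
    else:                                    # otherwise exploit current policy
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    # Noisy reward simulates inconsistent analyst feedback
    reward = TRUE_REWARD[(state, action)] + random.gauss(0, 0.1)
    # One-step update: each alert is treated as an independent episode
    q[(state, action)] += alpha * (reward - q[(state, action)])

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

After a few thousand simulated alerts the greedy policy settles on dismissing low-severity alerts and escalating the rest, which is exactly the reward structure it was fed.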

Paradigm Selection Matrix

Scenario                    | Labeled Data Available?             | Recommended Paradigm | Rationale
----------------------------+-------------------------------------+----------------------+--------------------------------------------
Malware classification      | Yes (VirusTotal, malware zoos)      | Supervised           | Abundant labels, well-defined classes
Insider threat detection    | No (rare events, no labels)         | Unsupervised         | Must learn "normal" and flag deviations
Phishing email detection    | Yes (spam corpuses)                 | Supervised           | Large labeled datasets exist
Zero-day exploit detection  | No (by definition unknown)          | Unsupervised         | No signatures to train on
Network C2 detection        | Partial (some labeled C2 traffic)   | Semi-supervised      | Leverage sparse labels + unlabeled bulk
Automated incident response | Environment feedback available      | Reinforcement        | Learn optimal response sequences
DNS tunneling detection     | Yes (can generate labeled examples) | Supervised           | Synthetic labels from known tunneling tools

For comprehensive coverage of AI and ML in SOC operations, see Chapter 10: AI/ML for SOC.


3. UEBA Deep Dive — User and Entity Behavior Analytics

User and Entity Behavior Analytics (UEBA) is the most mature and widely deployed ML application in security operations. The core idea is deceptively simple: learn what "normal" looks like for every user and entity, then alert when behavior deviates from that baseline.

The difficulty is entirely in the details.

What Constitutes a "Behavioral Baseline"?

A behavioral baseline is a statistical profile of an entity's typical activity patterns across multiple dimensions:

┌─────────────────────────────────────────────────────────────────┐
│                    USER BEHAVIORAL PROFILE                       │
│                    (entity: jsmith@example.com)                   │
│                                                                  │
│  ┌──────────────────┐  ┌──────────────────┐  ┌────────────────┐ │
│  │  Authentication  │  │   Access         │  │   Network      │ │
│  │  ─────────────── │  │   ─────────────  │  │   ──────────── │ │
│  │  Login times:    │  │  Files accessed: │  │  Data volume:  │ │
│  │   M-F 08:00-18:00│  │   avg 42/day     │  │   avg 2.1 GB/d │ │
│  │  Locations:      │  │  Repos cloned:   │  │  Dest IPs:     │ │
│  │   Office, VPN    │  │   avg 1.2/week   │  │   avg 15/day   │ │
│  │  MFA method:     │  │  Privilege use:  │  │  Protocols:    │ │
│  │   Push notify    │  │   rare admin     │  │   HTTP, SSH    │ │
│  │  Failed logins:  │  │  New resources:  │  │  DNS queries:  │ │
│  │   avg 0.3/day    │  │   avg 3/week     │  │   avg 200/day  │ │
│  └──────────────────┘  └──────────────────┘  └────────────────┘ │
│                                                                  │
│  ┌──────────────────┐  ┌──────────────────┐  ┌────────────────┐ │
│  │  Email           │  │   Application    │  │   Temporal     │ │
│  │  ─────────────── │  │   ─────────────  │  │   ──────────── │ │
│  │  Sent: 25/day    │  │  Apps used: 8    │  │  Active hours: │ │
│  │  External: 5/day │  │  Cloud logins:   │  │   10 hrs/day   │ │
│  │  Attachments:    │  │   avg 4/day      │  │  Weekend:      │ │
│  │   2/day avg      │  │  SaaS: O365,     │  │   rare (<5%)   │ │
│  │  New contacts:   │  │   Slack, Jira    │  │  Session len:  │ │
│  │   avg 2/week     │  │  API calls:      │  │   avg 45 min   │ │
│  └──────────────────┘  │   avg 50/day     │  └────────────────┘ │
│                         └──────────────────┘                     │
└─────────────────────────────────────────────────────────────────┘

Baselining Period and Data Requirements

Effective UEBA requires sufficient historical data to establish reliable baselines. The minimum baselining period depends on the behavioral dimension:

Behavioral Dimension  | Minimum Baseline Period | Recommended Period | Data Points Required
----------------------+-------------------------+--------------------+---------------------
Login times           | 2 weeks                 | 30 days            | 20+ login events
Data transfer volumes | 30 days                 | 90 days            | Daily aggregates
Application usage     | 14 days                 | 60 days            | 100+ app events
File access patterns  | 30 days                 | 90 days            | 200+ file events
Network destinations  | 7 days                  | 30 days            | 500+ connections
Email behavior        | 14 days                 | 60 days            | 100+ emails
Privilege escalation  | 90 days                 | 180 days           | Rare event baseline

Statistical Methods for Anomaly Detection

Z-Score Method

The simplest approach: calculate how many standard deviations an observation is from the mean of the baseline.

z = (x - μ) / σ

Where:
  x = observed value
  μ = baseline mean
  σ = baseline standard deviation

A z-score above 3.0 is typically flagged as anomalous (99.7% confidence under normal distribution). However, security data is rarely normally distributed — it tends to be heavily right-skewed with long tails.

KQL — Z-Score Anomaly Detection for Data Exfiltration:

// UEBA: Detect anomalous outbound data volume per user
// Baseline: 30-day rolling average and standard deviation
let baseline_period = 30d;
let alert_threshold = 3.0; // z-score threshold
let current_period = 1d;
//
// Step 1: Calculate per-user baseline statistics
let UserBaseline = CommonSecurityLog
| where TimeGenerated between (ago(baseline_period + current_period) .. ago(current_period))
| where DeviceAction == "Allow"
| where CommunicationDirection == "Outbound"
| summarize DailyBytes = sum(SentBytes)
    by UserId = SourceUserName, bin(TimeGenerated, 1d)
| summarize
    AvgDailyBytes = avg(DailyBytes),
    StdDevBytes = stdev(DailyBytes),
    BaselineDays = dcount(TimeGenerated)
    by UserId
| where BaselineDays >= 14; // Minimum baseline requirement
//
// Step 2: Calculate today's activity
let TodayActivity = CommonSecurityLog
| where TimeGenerated >= ago(current_period)
| where DeviceAction == "Allow"
| where CommunicationDirection == "Outbound"
| summarize TodayBytes = sum(SentBytes)
    by UserId = SourceUserName;
//
// Step 3: Calculate z-scores and flag anomalies
TodayActivity
| join kind=inner UserBaseline on UserId
| extend ZScore = iff(StdDevBytes > 0,
    (toreal(TodayBytes) - AvgDailyBytes) / StdDevBytes,
    0.0)
| where ZScore > alert_threshold
| extend BytesOverBaseline = TodayBytes - AvgDailyBytes
| project
    UserId,
    TodayBytes_MB = round(TodayBytes / 1048576.0, 2),
    AvgDailyBytes_MB = round(AvgDailyBytes / 1048576.0, 2),
    ZScore = round(ZScore, 2),
    BytesOverBaseline_MB = round(BytesOverBaseline / 1048576.0, 2),
    BaselineDays
| sort by ZScore desc

SPL — Z-Score Anomaly Detection for Data Exfiltration:

| tstats sum(All_Traffic.bytes_out) as daily_bytes
    from datamodel=Network_Traffic
    where All_Traffic.action=allowed
    by All_Traffic.src_user, _time span=1d
| rename All_Traffic.src_user as user
| eventstats avg(daily_bytes) as avg_bytes,
    stdev(daily_bytes) as stdev_bytes
    by user
| eval z_score = if(stdev_bytes > 0,
    (daily_bytes - avg_bytes) / stdev_bytes, 0)
| where _time >= relative_time(now(), "-1d@d")
    AND z_score > 3.0
| eval daily_bytes_MB = round(daily_bytes / 1048576, 2)
| eval avg_bytes_MB = round(avg_bytes / 1048576, 2)
| eval bytes_over_baseline_MB = round(
    (daily_bytes - avg_bytes) / 1048576, 2)
| sort - z_score
| table user, daily_bytes_MB, avg_bytes_MB, z_score,
    bytes_over_baseline_MB

Interquartile Range (IQR) Method

More robust against skewed distributions than z-scores. Calculates the range between the 25th and 75th percentiles, then flags values beyond 1.5x (mild outlier) or 3.0x (extreme outlier) that range.

IQR = Q3 - Q1
Lower fence = Q1 - 1.5 * IQR
Upper fence = Q3 + 1.5 * IQR

Extreme outlier = Q3 + 3.0 * IQR
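The fences above translate directly into a few lines of code. The daily traffic numbers below are synthetic, deliberately right-skewed, with one exfiltration-like spike appended:

```python
# IQR outlier fences from the formulas above, on synthetic daily MB counts.
import numpy as np

rng = np.random.default_rng(42)
daily_mb = rng.lognormal(mean=3.0, sigma=0.5, size=90)  # skewed 90-day baseline
daily_mb = np.append(daily_mb, [450.0])                 # one exfil-like spike

q1, q3 = np.percentile(daily_mb, [25, 75])
iqr = q3 - q1
upper_mild = q3 + 1.5 * iqr      # mild outlier fence
upper_extreme = q3 + 3.0 * iqr   # extreme outlier fence

extreme = daily_mb[daily_mb > upper_extreme]
print(f"Q1={q1:.1f}  Q3={q3:.1f}  IQR={iqr:.1f}  extreme fence={upper_extreme:.1f}")
print(f"Extreme outliers: {extreme}")
```

Unlike the z-score, the fences depend only on quartiles, so a handful of large legitimate transfers in the baseline does not drag the threshold upward the way an inflated mean and standard deviation would.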

Isolation Forest for UEBA

Statistical methods work on univariate data — one feature at a time. Real anomalies are often multivariate: a user who logs in at an unusual time AND from an unusual location AND accesses unusual files is far more suspicious than any single dimension alone.

Isolation Forest handles this naturally by operating in the full feature space simultaneously.

# UEBA Isolation Forest — Multivariate Behavioral Anomaly Detection
# Synthetic data — all entities are fictional
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

# Simulate 90-day user behavior data (synthetic)
np.random.seed(42)
n_users = 500
n_days = 90

user_profiles = []
for user_id in range(n_users):
    for day in range(n_days):
        profile = {
            "user_id": f"user_{user_id:04d}@example.com",
            "day": day,
            "login_hour": np.random.normal(9.0, 1.5),
            "session_duration_min": np.random.normal(480, 60),
            "files_accessed": np.random.poisson(40),
            "bytes_transferred_mb": np.random.lognormal(1.0, 0.8),
            "unique_destinations": np.random.poisson(15),
            "failed_logins": np.random.poisson(0.3),
            "privilege_escalations": np.random.poisson(0.05),
            "after_hours_events": np.random.poisson(2),
            "new_applications": np.random.poisson(0.5),
            "email_external_count": np.random.poisson(5),
        }
        user_profiles.append(profile)

df = pd.DataFrame(user_profiles)

# Aggregate features per user (peer group comparison)
user_features = df.groupby("user_id").agg({
    "login_hour": ["mean", "std"],
    "session_duration_min": ["mean", "std"],
    "files_accessed": ["mean", "max"],
    "bytes_transferred_mb": ["mean", "max"],
    "unique_destinations": ["mean", "max"],
    "failed_logins": "sum",
    "privilege_escalations": "sum",
    "after_hours_events": "mean",
    "new_applications": "mean",
    "email_external_count": ["mean", "max"],
}).reset_index()

# Flatten multi-level columns
user_features.columns = [
    "_".join(col).strip("_") for col in user_features.columns
]

# Prepare feature matrix
feature_cols = [c for c in user_features.columns if c != "user_id"]
X = user_features[feature_cols].values

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Train Isolation Forest
iso_forest = IsolationForest(
    n_estimators=300,
    contamination=0.02,  # Flag top 2% as anomalous
    max_features=0.7,
    random_state=42
)

user_features["anomaly_score"] = iso_forest.fit_predict(X_scaled)
user_features["risk_score"] = -iso_forest.decision_function(X_scaled)

# Normalize risk score to 0-100 scale
min_risk = user_features["risk_score"].min()
max_risk = user_features["risk_score"].max()
user_features["risk_score_normalized"] = (
    (user_features["risk_score"] - min_risk) / (max_risk - min_risk) * 100
)

# Identify anomalous users
anomalies = user_features[user_features["anomaly_score"] == -1]
print(f"Flagged {len(anomalies)} anomalous users out of {len(user_features)}")
print(anomalies.sort_values("risk_score_normalized", ascending=False).head(10))

Autoencoders for Behavioral Anomaly Detection

Autoencoders learn a compressed representation of normal behavior. When presented with anomalous behavior, they fail to reconstruct it accurately — the reconstruction error serves as the anomaly score.

┌─────────────┐     ┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Input      │────▶│  Encoder    │────▶│  Decoder    │────▶│  Output     │
│  (Normal    │     │  (Compress) │     │  (Expand)   │     │  (Recon-    │
│   Behavior) │     └─────────────┘     └─────────────┘     │   structed) │
│             │         Latent space: 4 dimensions          │             │
│ 20 features │                                             │ 20 features │
└─────────────┘                                             └─────────────┘
       │                                                           │
       └──────────────── Reconstruction Error ─────────────────────┘
                  (low for normal, high for anomalous)

# Autoencoder for UEBA anomaly detection (synthetic data)
import tensorflow as tf
from tensorflow import keras
import numpy as np

# Build autoencoder architecture
input_dim = 20  # Number of behavioral features

encoder = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(input_dim,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(4, activation="relu"),  # Bottleneck
])

decoder = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(input_dim, activation="sigmoid"),  # assumes inputs scaled to [0, 1], e.g. MinMaxScaler
])

autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss="mse"
)

# Train on normal behavior only
# X_normal_scaled: scaled feature matrix of normal user activity
# autoencoder.fit(X_normal_scaled, X_normal_scaled,
#                 epochs=100, batch_size=32,
#                 validation_split=0.1,
#                 callbacks=[keras.callbacks.EarlyStopping(
#                     patience=10, restore_best_weights=True)])

# Anomaly scoring function
def calculate_anomaly_score(model, X_new, threshold_percentile=99):
    """Calculate reconstruction error as anomaly score.

    Note: in production, fix the threshold from a clean validation set.
    Taking the percentile of X_new itself always flags a fixed fraction
    of every batch, regardless of how anomalous the batch actually is.
    """
    reconstructed = model.predict(X_new)
    mse_per_sample = np.mean((X_new - reconstructed) ** 2, axis=1)
    threshold = np.percentile(mse_per_sample, threshold_percentile)
    is_anomaly = mse_per_sample > threshold
    return mse_per_sample, is_anomaly, threshold

UEBA Risk Scoring Framework

Individual anomalies are noisy. Effective UEBA systems aggregate multiple signals into a composite risk score:

┌─────────────────────────────────────────────────────────────────┐
│                    UEBA RISK SCORING MODEL                       │
│                                                                  │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │ Category              │ Weight │ Signals               │    │
│  ├───────────────────────┼────────┼───────────────────────┤    │
│  │ Authentication        │  0.20  │ Unusual time, geo,    │    │
│  │                       │        │ MFA change, failures  │    │
│  ├───────────────────────┼────────┼───────────────────────┤    │
│  │ Data Movement         │  0.25  │ Volume spike, new     │    │
│  │                       │        │ dest, USB, cloud DLP  │    │
│  ├───────────────────────┼────────┼───────────────────────┤    │
│  │ Privilege Activity    │  0.20  │ Escalation, new admin │    │
│  │                       │        │ access, policy change │    │
│  ├───────────────────────┼────────┼───────────────────────┤    │
│  │ Network Behavior      │  0.15  │ New protocols, C2     │    │
│  │                       │        │ patterns, tunneling   │    │
│  ├───────────────────────┼────────┼───────────────────────┤    │
│  │ Application Access    │  0.10  │ New apps, unusual     │    │
│  │                       │        │ SaaS, API abuse       │    │
│  ├───────────────────────┼────────┼───────────────────────┤    │
│  │ Peer Deviation        │  0.10  │ Deviation from role   │    │
│  │                       │        │ group behavioral norm │    │
│  └───────────────────────┴────────┴───────────────────────┘    │
│                                                                  │
│  Composite Risk = Σ (category_weight × category_score)           │
│  Alert Threshold: Risk > 85 = Critical                           │
│                   Risk > 70 = High                                │
│                   Risk > 50 = Medium                              │
└─────────────────────────────────────────────────────────────────┘
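The composite formula in the diagram is a weighted sum over category scores. A minimal sketch, using the weights from the table above (the per-category scores in the example are invented):

```python
# Composite UEBA risk score: weighted sum of per-category scores (0-100).
WEIGHTS = {
    "authentication": 0.20,
    "data_movement": 0.25,
    "privilege_activity": 0.20,
    "network_behavior": 0.15,
    "application_access": 0.10,
    "peer_deviation": 0.10,
}

def composite_risk(category_scores: dict) -> float:
    """Composite Risk = sum(category_weight * category_score)."""
    return sum(WEIGHTS[c] * category_scores.get(c, 0.0) for c in WEIGHTS)

def severity(risk: float) -> str:
    """Map a composite risk score to the alert tiers above."""
    if risk > 85:
        return "Critical"
    if risk > 70:
        return "High"
    if risk > 50:
        return "Medium"
    return "Low"

# Example: strong data-movement and privilege anomalies, milder elsewhere
scores = {
    "authentication": 80, "data_movement": 95, "privilege_activity": 90,
    "network_behavior": 60, "application_access": 40, "peer_deviation": 70,
}
risk = composite_risk(scores)
print(round(risk, 2), severity(risk))
```

Because the weights sum to 1.0, the composite stays on the same 0-100 scale as the inputs, so the Critical/High/Medium thresholds apply directly.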

UEBA Deployment Pitfalls

Insufficient baseline period: Deploying UEBA with less than 30 days of baseline data produces excessive false positives. Seasonal patterns (month-end financial close, quarterly reporting) require 90+ days.

Ignoring peer groups: A database administrator running 200 queries per hour is normal. A marketing analyst doing the same is anomalous. UEBA must group entities by role, department, and function.

Alert fatigue from over-sensitive thresholds: Start with high thresholds (z-score > 4.0) and gradually lower based on analyst feedback. A UEBA system generating 500 alerts per day will be ignored.


4. NLP for Phishing Detection

Natural Language Processing (NLP) transforms phishing detection from a URL/domain reputation problem into a content understanding problem. Modern phishing emails are crafted to bypass traditional filters — they use clean domains, host payloads on legitimate services, and vary their text with each campaign. NLP catches what filters miss: the manipulation itself.

The Phishing Language Problem

Phishing emails exploit cognitive biases through language. They create urgency, impersonate authority, and present false choices. These patterns are consistent even when technical indicators change:

Linguistic Feature      | Phishing Signal                      | Example
------------------------+--------------------------------------+------------------------------------------------------
Urgency markers         | Artificial time pressure             | "Your account will be suspended in 24 hours"
Authority impersonation | False sender identity                | "From the desk of the CEO"
Loss framing            | Fear of negative outcome             | "Failure to verify will result in permanent lockout"
Action demands          | Bypassing deliberation               | "Click immediately to secure your account"
Abnormal formality      | Mismatch with sender relationship    | "Dear Valued Customer" from a known colleague
Grammar anomalies       | Non-native construction patterns     | Unusual preposition usage, article errors
URL-text mismatch       | Display text differs from actual URL | "Click here to log in" linking to a different domain
Reward bait             | Too-good-to-be-true offers           | "You have been selected for a $500 gift card"

Feature Engineering for Phishing NLP

Raw email text must be transformed into features that ML models can consume. Effective phishing NLP combines multiple feature families:

Lexical Features

# Phishing NLP — Feature Engineering Pipeline (Synthetic)
import re
import math
from collections import Counter
from urllib.parse import urlparse

def extract_lexical_features(email_body: str) -> dict:
    """Extract language-based features from email text."""

    words = email_body.lower().split()
    word_count = len(words)

    # Urgency signals
    urgency_words = {
        "immediately", "urgent", "asap", "expire", "suspend",
        "terminate", "deadline", "critical", "alert", "warning",
        "verify", "confirm", "unauthorized", "compromised",
        "locked", "restricted", "limited", "final", "notice"
    }
    urgency_score = sum(1 for w in words if w in urgency_words)

    # Authority signals
    authority_words = {
        "ceo", "director", "president", "administrator", "support",
        "security", "compliance", "legal", "helpdesk", "official",
        "management", "department", "hr", "payroll"
    }
    authority_score = sum(1 for w in words if w in authority_words)

    # Action demand signals
    action_words = {
        "click", "download", "open", "login", "sign",
        "update", "review", "access", "enter", "submit",
        "provide", "send", "transfer", "wire"
    }
    action_score = sum(1 for w in words if w in action_words)

    # Sentiment polarity (negative framing)
    negative_words = {
        "suspend", "terminate", "block", "deny", "fail",
        "violation", "penalty", "breach", "threat", "risk",
        "loss", "damage", "fraud", "unauthorized", "illegal"
    }
    negative_score = sum(1 for w in words if w in negative_words)

    return {
        "word_count": word_count,
        "urgency_ratio": urgency_score / max(word_count, 1),
        "authority_ratio": authority_score / max(word_count, 1),
        "action_ratio": action_score / max(word_count, 1),
        "negative_ratio": negative_score / max(word_count, 1),
        "exclamation_count": email_body.count("!"),
        "caps_ratio": sum(1 for c in email_body if c.isupper()) /
                      max(len(email_body), 1),
        "unique_word_ratio": len(set(words)) / max(word_count, 1),
    }

URL and Domain Features

def extract_url_features(urls: list[str]) -> dict:
    """Extract features from URLs found in email."""

    if not urls:
        return {
            "url_count": 0, "avg_url_length": 0,
            "avg_url_entropy": 0, "has_ip_url": False,
            "suspicious_tld_count": 0, "url_shortener_count": 0,
            "max_subdomain_depth": 0
        }

    suspicious_tlds = {".xyz", ".top", ".work", ".click", ".tk",
                       ".ml", ".ga", ".cf", ".gq", ".buzz"}
    shorteners = {"bit.ly", "tinyurl.com", "t.co", "goo.gl",
                  "is.gd", "buff.ly", "ow.ly", "rebrand.ly"}

    features = {
        "url_count": len(urls),
        "avg_url_length": sum(len(u) for u in urls) / len(urls),
        "has_ip_url": any(re.match(r'https?://\d+\.\d+\.\d+\.\d+', u)
                         for u in urls),
        "suspicious_tld_count": 0,
        "url_shortener_count": 0,
        "max_subdomain_depth": 0,
    }

    entropies = []
    for url in urls:
        parsed = urlparse(url)
        hostname = parsed.hostname or ""

        # Shannon entropy of URL (high entropy = random/suspicious)
        freq = Counter(url)
        length = len(url)
        entropy = -sum(
            (count / length) * math.log2(count / length)
            for count in freq.values()
        )
        entropies.append(entropy)

        # Check TLD
        for tld in suspicious_tlds:
            if hostname.endswith(tld):
                features["suspicious_tld_count"] += 1

        # Check shortener
        if hostname in shorteners:
            features["url_shortener_count"] += 1

        # Subdomain depth
        depth = hostname.count(".")
        features["max_subdomain_depth"] = max(
            features["max_subdomain_depth"], depth
        )

    features["avg_url_entropy"] = sum(entropies) / len(entropies)
    return features

Header Analysis Features

def extract_header_features(headers: dict) -> dict:
    """Extract security-relevant features from email headers."""

    from_addr = headers.get("From", "")
    reply_to = headers.get("Reply-To", "")
    return_path = headers.get("Return-Path", "")

    # SPF, DKIM, DMARC results
    auth_results = headers.get("Authentication-Results", "")

    features = {
        # From/Reply-To mismatch (common in phishing)
        "from_reply_mismatch": (
            reply_to != "" and
            reply_to.lower() != from_addr.lower()
        ),
        # Return-Path mismatch
        "return_path_mismatch": (
            return_path != "" and
            return_path.lower() != from_addr.lower()
        ),
        # Authentication failures
        "spf_fail": "spf=fail" in auth_results.lower(),
        "dkim_fail": "dkim=fail" in auth_results.lower(),
        "dmarc_fail": "dmarc=fail" in auth_results.lower(),
        # Display name spoofing
        "display_name_has_email": "@" in from_addr.split("<")[0]
            if "<" in from_addr else False,
        # Received header hop count (unusually high = suspicious).
        # NOTE: a plain dict collapses repeated Received headers into
        # one entry; parse with email.message_from_string() when an
        # accurate hop count matters.
        "received_hop_count": str(headers).lower().count("received:"),
    }

    return features
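
The From/Reply-To checks above compare raw header strings, which spoofed display names can defeat. A small sketch using the standard library's email.utils.parseaddr to compare only the address portion (the function name here is illustrative):

```python
from email.utils import parseaddr

def addresses_mismatch(from_header: str, reply_to_header: str) -> bool:
    """Compare only the address portion of From and Reply-To."""
    from_addr = parseaddr(from_header)[1].lower()
    reply_addr = parseaddr(reply_to_header)[1].lower()
    return bool(reply_addr) and reply_addr != from_addr

# A spoofed display name defeats raw string comparison, but not this:
print(addresses_mismatch(
    '"IT Support" <helpdesk@example.com>',
    "attacker@example.net",
))  # True: replies are silently rerouted
```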

Model Pipeline for Phishing Detection

┌──────────────────────────────────────────────────────────────────────────┐
│                   PHISHING DETECTION ML PIPELINE                         │
│                                                                          │
│  ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐            │
│  │  Email   │──▶│  Feature │──▶│  Model   │──▶│  Alert   │            │
│  │  Ingress │   │  Extract │   │  Predict │   │  Engine  │            │
│  └──────────┘   └──────────┘   └──────────┘   └──────────┘            │
│       │              │              │              │                    │
│  ┌────▼────┐   ┌────▼────────┐ ┌───▼────────┐ ┌──▼───────────┐       │
│  │ Parse   │   │ Lexical     │ │ Ensemble:  │ │ Risk score   │       │
│  │ MIME    │   │ URL/Domain  │ │ - XGBoost  │ │ > 0.8: block │       │
│  │ Headers │   │ Header      │ │ - BERT NLP │ │ > 0.5: quar. │       │
│  │ Body    │   │ Behavioral  │ │ - URL CNN  │ │ > 0.3: flag  │       │
│  │ Attach  │   │ Attachment  │ │ consensus  │ │ < 0.3: pass  │       │
│  └─────────┘   └─────────────┘ └────────────┘ └──────────────┘       │
│                                                                          │
│  Feedback Loop: Analyst verdict → Retrain weekly                         │
└──────────────────────────────────────────────────────────────────────────┘

Transformer-Based Phishing Detection

Modern NLP leverages transformer architectures (BERT, RoBERTa) fine-tuned on phishing corpora. These models capture semantic meaning — they detect phishing intent even when specific keywords are absent.

# Transformer-based phishing detection (architecture only — synthetic)
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

class PhishingDetector:
    """BERT-based phishing email classifier (synthetic example)."""

    def __init__(self, model_name="bert-base-uncased"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForSequenceClassification.from_pretrained(
            model_name, num_labels=2  # benign vs phishing
        )
        self.model.eval()

    def predict(self, email_text: str) -> dict:
        """Classify email as phishing or benign."""
        inputs = self.tokenizer(
            email_text,
            max_length=512,
            truncation=True,
            padding="max_length",
            return_tensors="pt"
        )

        with torch.no_grad():
            outputs = self.model(**inputs)
            probabilities = torch.softmax(outputs.logits, dim=1)

        phishing_prob = probabilities[0][1].item()
        return {
            "is_phishing": phishing_prob > 0.5,
            "confidence": phishing_prob,
            "label": "PHISHING" if phishing_prob > 0.5 else "BENIGN"
        }

# Example usage with a synthetic email. Note: the stock
# bert-base-uncased checkpoint has a randomly initialized
# classification head — meaningful predictions require fine-tuning
# on a labeled phishing corpus first.
# detector = PhishingDetector()
# result = detector.predict(
#     "Dear user, your account at secure-portal.example.com "
#     "has been flagged for suspicious activity. Click here "
#     "to verify your identity within 24 hours or your "
#     "account will be permanently suspended."
# )
# print(result)
# Illustrative output from a fine-tuned model:
# {'is_phishing': True, 'confidence': 0.94, 'label': 'PHISHING'}

NLP Phishing Detection in Production

In production, combine NLP predictions with traditional indicators (SPF/DKIM/DMARC results, URL reputation, sender history). NLP alone catches about 92% of phishing; combined with header/URL analysis, detection rates exceed 98% with false positive rates below 0.1%.
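
A minimal sketch of that combination, assuming the feature names from the extractors above and purely illustrative weights and thresholds:

```python
def composite_phishing_score(nlp_prob: float,
                             header_feats: dict,
                             url_feats: dict) -> float:
    """Blend the NLP probability with header/URL indicator boosts.

    Weights and boosts are illustrative, not tuned production values.
    """
    score = 0.6 * nlp_prob
    if header_feats.get("spf_fail") or header_feats.get("dkim_fail"):
        score += 0.15
    if header_feats.get("from_reply_mismatch"):
        score += 0.10
    if url_feats.get("has_ip_url") or url_feats.get("suspicious_tld_count", 0) > 0:
        score += 0.15
    return min(score, 1.0)

score = composite_phishing_score(
    0.94,
    {"spf_fail": True, "from_reply_mismatch": True},
    {"has_ip_url": False, "suspicious_tld_count": 1},
)
print(round(score, 2))  # 0.96: above the 0.8 block threshold
```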


5. Graph-Based Threat Detection

Traditional security analytics treat events as independent data points — each log line is analyzed in isolation or correlated with a small number of related events. Graph-based detection takes a fundamentally different approach: it models the entire environment as an interconnected graph and detects threats based on structural patterns.

Why Graphs for Security?

Attackers do not operate in isolation. They authenticate, move laterally, escalate privileges, access resources, and exfiltrate data through a chain of relationships. These relationships form a graph that reveals attack paths invisible to point-based detection.

┌─────────────────────────────────────────────────────────────────┐
│              ENTITY RELATIONSHIP GRAPH                           │
│                                                                  │
│    ┌──────────┐     auth      ┌──────────┐                      │
│    │  User:   │──────────────▶│  Host:   │                      │
│    │  jsmith  │               │  WS-042  │                      │
│    └──────────┘               └──────────┘                      │
│         │                          │                             │
│         │ member_of                │ connects_to                 │
│         ▼                          ▼                             │
│    ┌──────────┐               ┌──────────┐                      │
│    │  Group:  │               │  Server: │                      │
│    │  Finance │               │  DB-PROD │                      │
│    └──────────┘               └──────────┘                      │
│         │                          │                             │
│         │ has_access_to            │ hosts                       │
│         ▼                          ▼                             │
│    ┌──────────┐               ┌──────────┐                      │
│    │  Share:  │               │  App:    │                      │
│    │  \\FIN01 │               │  ERP-SYS │                      │
│    └──────────┘               └──────────┘                      │
│                                                                  │
│  Normal path:  User → Workstation → Share                        │
│  Attack path:  User → Workstation → Server → App (ANOMALOUS)    │
└─────────────────────────────────────────────────────────────────┘

Graph Construction for Security

The security graph has three primary entity types (nodes) and multiple relationship types (edges):

Node Type | Examples | Key Attributes
----------|----------|---------------
Users | Employee accounts, service accounts, external identities | Role, department, privilege level, risk score
Hosts | Workstations, servers, cloud instances, IoT devices | OS, patch level, criticality, network zone
Resources | File shares, databases, applications, SaaS tenants | Data classification, access control list, owner

Edge Type | Source → Target | Detection Value
----------|-----------------|----------------
authenticates_to | User → Host | Unusual auth patterns, pass-the-hash
connects_to | Host → Host | Lateral movement detection
accesses | User → Resource | Unauthorized data access
member_of | User → Group | Privilege escalation via group membership
runs_process | Host → Process | Malicious process execution
resolves_to | DNS Query → IP | C2 domain detection
communicates_with | Host → External IP | Exfiltration, C2 beaconing
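
A minimal sketch of this construction, assuming log events have already been parsed into (source, relation, target) triples; entity names are synthetic:

```python
from collections import defaultdict

# Typed edges extracted from parsed logs: (source, relation, target)
edges = [
    ("user:jsmith", "authenticates_to", "host:WS-042"),
    ("user:jsmith", "member_of", "group:Finance"),
    ("host:WS-042", "connects_to", "server:DB-PROD"),
    ("group:Finance", "has_access_to", "share:FIN01"),
]

graph = defaultdict(set)
for src, rel, dst in edges:
    graph[src].add((rel, dst))

# A never-before-seen relation type on a known node is itself a signal
baseline_relations = {rel for rel, _ in graph["host:WS-042"]}
print("connects_to" in baseline_relations)  # True
```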

Graph Neural Networks for Lateral Movement

Graph Neural Networks (GNNs) learn node embeddings that capture structural context. When an attacker moves laterally, the graph topology around compromised nodes changes — GNNs detect these structural anomalies.

# Graph-based lateral movement detection (synthetic)
# Using PyTorch Geometric for GNN-based anomaly detection
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool
from torch_geometric.data import Data

class LateralMovementGNN(torch.nn.Module):
    """GNN for detecting anomalous authentication patterns.

    Input: Subgraph around a target node (2-hop neighborhood)
    Output: Anomaly probability (0 = normal, 1 = lateral movement)
    """

    def __init__(self, num_node_features, hidden_channels):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, hidden_channels)
        self.conv3 = GCNConv(hidden_channels, hidden_channels // 2)
        self.classifier = torch.nn.Linear(hidden_channels // 2, 2)

    def forward(self, x, edge_index, batch):
        # Three layers of graph convolution
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.3, training=self.training)
        x = F.relu(self.conv2(x, edge_index))
        x = F.dropout(x, p=0.3, training=self.training)
        x = F.relu(self.conv3(x, edge_index))

        # Global pooling to get graph-level representation
        x = global_mean_pool(x, batch)

        # Classification
        x = self.classifier(x)
        return F.log_softmax(x, dim=1)


# Node features for security graph:
# [is_user, is_host, is_server, privilege_level, department_encoded,
#  avg_daily_connections, connection_diversity, time_of_activity,
#  is_service_account, risk_score_normalized]
NUM_FEATURES = 10
HIDDEN_DIM = 32

model = LateralMovementGNN(NUM_FEATURES, HIDDEN_DIM)

# Example: construct a synthetic authentication subgraph
# Nodes: 0=attacker, 1=compromised_ws, 2=dc, 3=fileserver, 4=db_server
node_features = torch.tensor([
    [1, 0, 0, 2, 3, 5.2, 0.3, 14.5, 0, 0.2],   # compromised user (features still look normal)
    [0, 1, 0, 0, 0, 12.0, 0.5, 14.5, 0, 0.1],   # workstation
    [0, 0, 1, 5, 0, 500.0, 0.9, 12.0, 0, 0.05],  # domain controller
    [0, 0, 1, 3, 0, 50.0, 0.4, 10.0, 0, 0.1],    # file server
    [0, 0, 1, 4, 0, 30.0, 0.2, 11.0, 0, 0.15],   # database server
], dtype=torch.float)

# Edges: user→ws, ws→dc (suspicious!), ws→fileserver, ws→db (suspicious!)
edge_index = torch.tensor([
    [0, 1, 1, 1],  # source nodes
    [1, 2, 3, 4],  # target nodes
], dtype=torch.long)

graph = Data(x=node_features, edge_index=edge_index)
print(f"Graph: {graph.num_nodes} nodes, {graph.num_edges} edges")

Attack Path Prediction

Beyond detecting ongoing attacks, graph analysis predicts potential attack paths before they are exploited. By analyzing the connectivity and privilege relationships in the security graph, defenders can identify high-risk paths and proactively remediate them.

Attack Path Analysis Example (Synthetic — Quantum Financial Services):

Path 1 (Risk Score: 92/100):
  intern_account@example.com
    → VPN Gateway (198.51.100.10)
    → WORKSTATION-INT-07
    → [Kerberoasting] → Service Account (svc_backup@example.com)
    → BACKUP-SERVER-01
    → [Credential Dump] → Domain Admin Hash
    → DOMAIN-CONTROLLER-01
    → Full domain compromise

Path 2 (Risk Score: 87/100):
  contractor@example.com
    → Citrix Gateway (203.0.113.50)
    → CITRIX-SESSION-12
    → [PrintNightmare] → SYSTEM on PRINT-SERVER-03
    → [Lateral Movement] → HR-FILE-SERVER
    → PII database access

Remediation Priority:
  1. Remove svc_backup from Domain Admins → eliminates Path 1
  2. Patch PrintNightmare on PRINT-SERVER-03 → eliminates Path 2
  3. Segment CITRIX-SESSION pool from server VLAN

For advanced threat hunting techniques using graph analysis, see Chapter 38: Threat Hunting — Advanced.


6. ML Pipeline for SOC Teams

Deploying ML in a SOC is not a research project — it is a production engineering challenge. Models that perform brilliantly in Jupyter notebooks fail catastrophically in production without proper pipeline architecture.

End-to-End ML Pipeline Architecture

┌──────────────────────────────────────────────────────────────────────────────┐
│                        ML DETECTION PIPELINE                                  │
│                                                                               │
│  ┌────────────┐  ┌────────────┐  ┌────────────┐  ┌────────────┐             │
│  │    Data    │  │  Feature   │  │   Model    │  │  Inference │             │
│  │ Collection │─▶│ Engineering│─▶│  Training  │─▶│  & Alerts  │             │
│  └────────────┘  └────────────┘  └────────────┘  └────────────┘             │
│       │               │               │               │                      │
│  ┌────▼────┐     ┌────▼────┐    ┌────▼────┐    ┌────▼────┐                  │
│  │ SIEM    │     │ Feature │    │ Model   │    │ SIEM    │                  │
│  │ EDR     │     │ Store   │    │ Registry│    │ SOAR    │                  │
│  │ Network │     │ (Redis/ │    │ (MLflow)│    │ Ticket  │                  │
│  │ Cloud   │     │ Feature │    │         │    │ System  │                  │
│  │ Identity│     │ Server) │    │         │    │         │                  │
│  └─────────┘     └─────────┘    └─────────┘    └─────────┘                  │
│                                                                               │
│  ┌──────────────────────────────────────────────────────────────────┐        │
│  │                    FEEDBACK LOOP                                  │        │
│  │  Analyst Verdict → Label Store → Retrain Trigger → A/B Deploy    │        │
│  └──────────────────────────────────────────────────────────────────┘        │
│                                                                               │
│  ┌──────────────────────────────────────────────────────────────────┐        │
│  │                    MONITORING                                     │        │
│  │  Model Drift Detection → Performance Metrics → Alerting          │        │
│  └──────────────────────────────────────────────────────────────────┘        │
└──────────────────────────────────────────────────────────────────────────────┘

Stage 1: Data Collection and Ingestion

The quality of ML detection is bounded by the quality and breadth of input data. Garbage in, garbage out applies more forcefully in security ML than anywhere else.

Essential data sources for security ML:

Data Source | Use Case | Volume Estimate (10K endpoints)
------------|----------|--------------------------------
Authentication logs | UEBA, credential abuse | 5-20M events/day
Network flow data | Lateral movement, C2, exfil | 50-500M flows/day
DNS query logs | Tunneling, DGA, C2 | 10-100M queries/day
Endpoint telemetry | Process execution, file access | 100M-1B events/day
Email metadata | Phishing, BEC | 500K-5M emails/day
Cloud audit logs | Cloud abuse, misconfig | 10-50M events/day
Identity/IAM logs | Privilege abuse, lateral move | 1-10M events/day

Stage 2: Feature Engineering

Raw logs are not model-ready. Feature engineering transforms raw events into structured, numerical representations that capture security-relevant patterns.

Feature engineering principles for security ML:

  1. Temporal aggregation: Convert point events into time-window features (hourly, daily, weekly statistics)
  2. Entity-centric: Group features by entity (user, host, process) rather than by event
  3. Relative features: Compare current behavior to baseline (ratios, z-scores, percentiles)
  4. Domain knowledge encoding: Embed security expertise (is this a known admin tool? Is this port commonly used for C2?)
  5. Interaction features: Capture relationships (user-to-host frequency, host-to-host communication patterns)

# Feature engineering pipeline for security ML (synthetic)
import pandas as pd
import numpy as np
from datetime import datetime, timedelta

class SecurityFeatureEngineering:
    """Transform raw security logs into ML-ready features."""

    def __init__(self, baseline_days=30):
        self.baseline_days = baseline_days

    def engineer_user_features(self, auth_logs: pd.DataFrame,
                                network_logs: pd.DataFrame,
                                file_logs: pd.DataFrame) -> pd.DataFrame:
        """Create user behavioral features from multiple log sources."""

        features = {}

        # Authentication features
        auth_features = auth_logs.groupby("username").agg({
            "timestamp": ["count", "nunique"],
            "source_ip": "nunique",
            "dest_host": "nunique",
            "auth_result": lambda x: (x == "failure").sum(),
            "auth_method": "nunique",
        })

        # Time-of-day features
        auth_logs["hour"] = pd.to_datetime(
            auth_logs["timestamp"]
        ).dt.hour
        time_features = auth_logs.groupby("username")["hour"].agg([
            "mean", "std", "min", "max"
        ])

        # Network features
        net_features = network_logs.groupby("src_user").agg({
            "bytes_sent": ["sum", "mean", "max", "std"],
            "bytes_received": ["sum", "mean", "max"],
            "dest_ip": "nunique",
            "dest_port": "nunique",
            "protocol": "nunique",
            "duration": ["mean", "max"],
        })

        # File access features
        file_features = file_logs.groupby("username").agg({
            "file_path": ["count", "nunique"],
            "action": lambda x: (x == "delete").sum(),
            "file_size": ["sum", "mean", "max"],
            "sensitivity_label": lambda x: (
                x.isin(["confidential", "restricted"])
            ).sum(),
        })

        # Combine into a single frame: flatten MultiIndex column
        # names, then outer-join the per-source frames on the
        # entity index (username / src_user values)
        combined = None
        for df in (auth_features, time_features,
                   net_features, file_features):
            df = df.copy()
            df.columns = [
                "_".join(map(str, c)) if isinstance(c, tuple) else str(c)
                for c in df.columns
            ]
            combined = df if combined is None else combined.join(
                df, how="outer"
            )
        return combined.fillna(0)

    def compute_peer_deviation(self, user_features: pd.DataFrame,
                                role_column: str) -> pd.DataFrame:
        """Calculate per-user deviation from peer group norms."""

        peer_stats = user_features.groupby(role_column).transform(
            lambda x: (x - x.mean()) / x.std()
        )

        # Rename columns to indicate peer-relative values
        peer_stats.columns = [
            f"{col}_peer_zscore" for col in peer_stats.columns
        ]

        return pd.concat([user_features, peer_stats], axis=1)
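
A small worked example of the peer z-score idea on a toy frame (synthetic values, column names illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "role": ["analyst", "analyst", "analyst", "admin", "admin", "admin"],
    "daily_logins": [10.0, 12.0, 11.0, 40.0, 45.0, 300.0],
})

# z-score each user against their own role's peer group: the admin
# logging in 300 times/day stands out relative to other admins, not
# relative to the whole population
df["daily_logins_peer_zscore"] = (
    df.groupby("role")["daily_logins"]
      .transform(lambda x: (x - x.mean()) / x.std())
)
print(df.round(2))
```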

Stage 3: Model Training and Evaluation

Security ML has unique evaluation challenges. Standard accuracy metrics are misleading because the class distribution is extremely imbalanced — malicious events may represent 0.01% of all data.

Appropriate metrics for security ML:

Metric | What It Measures | Target Range
-------|------------------|-------------
Precision | Of alerts fired, how many are true positives | > 80% for production
Recall | Of real attacks, how many were detected | > 90% (miss rate < 10%)
F1 Score | Harmonic mean of precision and recall | > 0.85
AUC-ROC | Discrimination ability across all thresholds | > 0.95
AUC-PR | Precision-recall tradeoff (better for imbalanced data) | > 0.70
False Positive Rate | Percentage of normal events flagged | < 0.1%
Alert Volume | Total alerts per day | Manageable by SOC team
Detection Latency | Time from event to alert | < 5 minutes
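
The interplay of these targets is worth making explicit: with illustrative counts, even a false positive rate within the < 0.1% target yields single-digit precision at this class imbalance:

```python
# Illustrative daily counts: 1,000,000 events, 100 truly malicious
tp, fn = 92, 8            # attacks detected vs missed (recall = 92%)
fp = 900                  # benign events flagged
tn = 1_000_000 - tp - fn - fp

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
fpr = fp / (fp + tn)

print(f"precision={precision:.3f}  recall={recall:.2f}  "
      f"f1={f1:.3f}  fpr={fpr:.4%}")
# precision=0.093  recall=0.92  f1=0.168  fpr=0.0900%
```

An FPR of 0.09% passes the table's threshold, yet fewer than one alert in ten is real — which is why alert volume and precision must be evaluated together.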

Stage 4: Model Deployment and MLOps

┌─────────────────────────────────────────────────────────────────┐
│                    SECURITY MLOPS LIFECYCLE                       │
│                                                                  │
│  ┌──────────┐    ┌──────────┐    ┌──────────┐    ┌──────────┐  │
│  │ Develop  │───▶│ Validate │───▶│ Stage    │───▶│ Produce  │  │
│  │          │    │          │    │          │    │          │  │
│  │ Jupyter  │    │ Test on  │    │ Shadow   │    │ Live     │  │
│  │ Feature  │    │ labeled  │    │ mode     │    │ alerting │  │
│  │ Explore  │    │ holdout  │    │ (score   │    │ with     │  │
│  │          │    │          │    │  but no  │    │ SOAR     │  │
│  │          │    │          │    │  alert)  │    │ integr.  │  │
│  └──────────┘    └──────────┘    └──────────┘    └──────────┘  │
│                                                                  │
│  Model Registry: version control, rollback capability            │
│  Drift Detection: statistical tests on feature distributions     │
│  Retraining: weekly cadence with analyst-verified labels          │
│  A/B Testing: new model scores alongside production model        │
└─────────────────────────────────────────────────────────────────┘

Shadow mode is critical for security ML deployment. Before a model generates real alerts, run it in shadow mode for 2-4 weeks:

  • Model scores every event in real time
  • Scores are logged but do not trigger alerts
  • SOC analysts review a sample of high-score events
  • Compare shadow model performance against existing rules
  • Only promote to production after shadow validation
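
In practice, shadow mode is little more than a deployment flag that suppresses alert emission while keeping scoring and logging live. A minimal sketch, with the threshold and field names assumed:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ml-detector")

SHADOW_MODE = True        # flip to False only after shadow validation
ALERT_THRESHOLD = 0.8     # illustrative cutoff

def handle_event(event: dict, score: float) -> bool:
    """Score an event; return True only if a live alert was emitted."""
    record = {"event_id": event["id"], "score": score,
              "mode": "shadow" if SHADOW_MODE else "live"}
    log.info(json.dumps(record))          # scores are always logged
    if score >= ALERT_THRESHOLD and not SHADOW_MODE:
        # emit_alert(record)  # SOAR integration, omitted here
        return True
    return False

print(handle_event({"id": "evt-001"}, 0.93))  # False while in shadow mode
```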

Model Drift Detection

Security data distributions shift constantly — new applications are deployed, user behavior changes seasonally, network topology evolves. A model trained on January data may be ineffective by March.

# Model drift detection for security ML (synthetic)
from scipy import stats
import numpy as np

class DriftDetector:
    """Detect distribution shift in model input features."""

    def __init__(self, reference_data: np.ndarray,
                 significance_level: float = 0.05):
        self.reference = reference_data
        self.alpha = significance_level

    def detect_drift(self, current_data: np.ndarray) -> dict:
        """Run statistical tests for feature drift."""
        results = {}

        for feature_idx in range(self.reference.shape[1]):
            ref_feat = self.reference[:, feature_idx]
            cur_feat = current_data[:, feature_idx]

            # Kolmogorov-Smirnov test
            ks_stat, ks_pval = stats.ks_2samp(ref_feat, cur_feat)

            # Population Stability Index
            psi = self._calculate_psi(ref_feat, cur_feat)

            results[f"feature_{feature_idx}"] = {
                "ks_statistic": ks_stat,
                "ks_pvalue": ks_pval,
                "psi": psi,
                "drift_detected": ks_pval < self.alpha or psi > 0.2,
            }

        return results

    @staticmethod
    def _calculate_psi(expected: np.ndarray,
                       actual: np.ndarray,
                       bins: int = 10) -> float:
        """Population Stability Index — measures distribution shift."""
        expected_hist, bin_edges = np.histogram(expected, bins=bins)
        actual_hist, _ = np.histogram(actual, bins=bin_edges)

        # Add small constant to avoid division by zero
        expected_pct = (expected_hist + 1) / (len(expected) + bins)
        actual_pct = (actual_hist + 1) / (len(actual) + bins)

        psi = np.sum(
            (actual_pct - expected_pct) * np.log(actual_pct / expected_pct)
        )
        return psi

For more on AI/ML integration into security operations, see Chapter 10: AI/ML for SOC.


7. Adversarial ML Risks

Every ML model deployed in security creates a new attack surface. Adversaries actively research and exploit ML weaknesses. If your detection model can be evaded, poisoned, or stolen, it becomes a liability rather than an asset.

Threat Model for Security ML

┌─────────────────────────────────────────────────────────────────┐
│              ADVERSARIAL ML THREAT MODEL                         │
│                                                                  │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │                   ATTACK VECTORS                         │    │
│  │                                                          │    │
│  │  ┌────────────┐  ┌────────────┐  ┌────────────────┐    │    │
│  │  │  Evasion   │  │  Poisoning │  │  Model         │    │    │
│  │  │            │  │            │  │  Extraction    │    │    │
│  │  │ Modify     │  │ Corrupt    │  │                │    │    │
│  │  │ input to   │  │ training   │  │ Steal model    │    │    │
│  │  │ avoid      │  │ data to    │  │ via query      │    │    │
│  │  │ detection  │  │ degrade    │  │ access         │    │    │
│  │  │            │  │ model      │  │                │    │    │
│  │  └────────────┘  └────────────┘  └────────────────┘    │    │
│  │                                                          │    │
│  │  ┌────────────┐  ┌────────────┐  ┌────────────────┐    │    │
│  │  │  Inversion │  │  Backdoor  │  │  Supply Chain  │    │    │
│  │  │            │  │            │  │                │    │    │
│  │  │ Extract    │  │ Implant    │  │ Compromise     │    │    │
│  │  │ training   │  │ trigger    │  │ pre-trained    │    │    │
│  │  │ data from  │  │ pattern    │  │ model or       │    │    │
│  │  │ model      │  │ that       │  │ training       │    │    │
│  │  │ outputs    │  │ causes     │  │ framework      │    │    │
│  │  │            │  │ misclass.  │  │                │    │    │
│  │  └────────────┘  └────────────┘  └────────────────┘    │    │
│  └──────────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────────┘

Evasion Attacks

The adversary modifies malicious inputs so they are classified as benign by the ML model. This is the most common attack against security ML.

Example — Evading Malware Classifiers:

An attacker who knows (or can infer) that a malware classifier relies on import table features can pad their malware with benign imports. The malware functionality remains intact, but the feature vector shifts toward the benign distribution:

Original malware features:
  import_count: 12  (malicious pattern — few imports)
  entropy: 7.8     (packed/encrypted)
  string_count: 45  (low for file size)
  → Classifier prediction: MALICIOUS (confidence: 0.97)

After evasion:
  import_count: 156  (padded with benign imports)
  entropy: 5.2      (added benign sections)
  string_count: 2400 (embedded benign strings)
  → Classifier prediction: BENIGN (confidence: 0.82)
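
The shift can be made concrete with a toy linear classifier; the weights below are invented for illustration, not taken from any real model:

```python
# Toy linear malware score: dot(w, x) + b, positive means malicious.
# Weights are invented for illustration only.
weights = {"import_count": -0.01, "entropy": 0.9, "string_count": -0.001}
bias = -3.0

def malware_score(features: dict) -> float:
    return sum(weights[k] * v for k, v in features.items()) + bias

original = {"import_count": 12, "entropy": 7.8, "string_count": 45}
evaded = {"import_count": 156, "entropy": 5.2, "string_count": 2400}

print(malware_score(original) > 0)  # True: flagged malicious
print(malware_score(evaded) > 0)   # False: slips past as benign
```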

Defense strategies:

  • Adversarial training: Include adversarial examples in the training set
  • Ensemble models: Multiple models with different feature sets — harder to evade all simultaneously
  • Feature diversification: Use features the attacker cannot easily control (behavioral features, network patterns, temporal sequences)
  • Input validation: Detect adversarial perturbations before they reach the model

Data Poisoning

The adversary corrupts the training data to degrade model performance or implant backdoors. This is particularly dangerous for models that retrain on production data.

Attack scenario: An insider threat actor performs a series of deliberately suspicious-looking benign activities over several weeks. Analysts investigate and mark them as false positives. These false-positive labels enter the retraining pipeline, teaching the model that similar patterns are benign. The attacker then performs the actual malicious activity with the same pattern — and the poisoned model ignores it.

Defense strategies:

  • Data provenance: Track the origin and labeling history of every training sample
  • Outlier detection on labels: Detect sudden shifts in label distribution
  • Holdout validation: Always validate retrained models against a clean, curated holdout set
  • Label audit: Periodically review analyst labels for consistency
  • Anomaly detection on the training pipeline itself: Monitor for unusual data injections

Model Stealing

The adversary queries the model repeatedly to reconstruct its decision boundary. With enough query-response pairs, they can build a functionally equivalent copy (a "surrogate model") and then develop evasion attacks offline.

Defense strategies:

  • Rate limiting: Restrict the number of queries per entity per time window
  • Output obfuscation: Return confidence bands instead of exact probabilities
  • Query auditing: Monitor for systematic probing patterns
  • Watermarking: Embed identifiable patterns in model outputs that persist in stolen copies
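
Output obfuscation, for example, can be as simple as mapping probabilities to coarse bands (cutoffs here are illustrative):

```python
def banded_confidence(prob: float) -> str:
    """Return a coarse band instead of the raw probability, starving
    a model thief of the fine-grained signal that exact scores leak."""
    if prob >= 0.8:
        return "HIGH"
    if prob >= 0.5:
        return "MEDIUM"
    if prob >= 0.3:
        return "LOW"
    return "NEGLIGIBLE"

# Nearby probe queries become indistinguishable to the attacker:
print(banded_confidence(0.81), banded_confidence(0.97))  # HIGH HIGH
```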

Defense-in-Depth for ML Systems

Layer | Defense | Implementation
------|---------|---------------
Data | Input validation, data provenance tracking | Schema enforcement, lineage tools
Feature | Feature importance monitoring, drift detection | Statistical tests on feature distributions
Model | Adversarial training, ensemble voting | Regular adversarial evaluation
Output | Confidence calibration, output smoothing | Platt scaling, temperature adjustment
Pipeline | Access control, audit logging, integrity checks | MLOps platform with RBAC
Monitoring | Performance tracking, anomaly detection on predictions | Dashboard with automated alerts

The Arms Race Reality

Adversarial ML is an active arms race. Every defense creates a new attack surface, and every attack motivates a new defense. There is no permanent solution — only continuous adaptation. The goal is to raise the cost of evasion higher than the cost of alternative attack paths.

For comprehensive coverage of adversarial AI and LLM security, see Chapter 50: Adversarial AI & LLM Security.

For AI-specific security controls and governance frameworks, see Chapter 37: AI Security.


8. Case Study: Quantum Financial Services

Note: Quantum Financial Services is a completely fictional company created for educational purposes. All IP addresses use RFC 5737 ranges. All domain names use example.com. All data is synthetic.

Company Profile

  • Name: Quantum Financial Services (fictional)
  • Industry: Financial services — retail and commercial banking
  • Size: 12,000 employees, 400 branch offices, 3 data centers
  • SOC: 35 analysts across 3 shifts, Tier 1/2/3 structure
  • SIEM: Microsoft Sentinel (primary), Splunk (legacy integration)
  • Endpoints: 18,000 managed devices, CrowdStrike Falcon EDR
  • Cloud: Azure (primary), AWS (legacy workloads)

The Problem

Before ML deployment, Quantum Financial Services faced a detection crisis:

Metric | Before ML (Q1 2026) | Target
-------|---------------------|-------
Daily alerts | 14,200 | < 500 actionable
True positive rate | 0.08% (11 of 14,200) | > 50%
Mean time to detect (MTTD) | 18.3 days | < 4 hours
Mean time to respond (MTTR) | 6.2 days | < 2 hours
Analyst burnout (turnover) | 45% annual | < 15%
Missed incidents (discovered externally) | 4 in Q1 | 0
Rule maintenance burden | 2 FTEs (full-time rule tuning) | < 0.5 FTE

The ML Detection Architecture

Quantum Financial deployed a three-layer ML detection stack over 6 months:

┌──────────────────────────────────────────────────────────────────────────┐
│          QUANTUM FINANCIAL — ML DETECTION ARCHITECTURE                    │
│                                                                           │
│  Layer 3: ADVANCED ML ──────────────────────────────────────────────────  │
│  │  Graph Neural Network — lateral movement & attack path detection       │
│  │  NLP Phishing Engine — email content + header + URL analysis           │
│  │  Deep Autoencoder — network traffic anomaly detection                  │
│  │                                                                        │
│  Layer 2: UEBA ─────────────────────────────────────────────────────────  │
│  │  Isolation Forest — multivariate user behavioral anomalies             │
│  │  Z-Score Ensemble — per-dimension anomaly scoring                      │
│  │  Peer Group Analysis — role-based deviation detection                  │
│  │                                                                        │
│  Layer 1: RULES (enhanced) ─────────────────────────────────────────────  │
│  │  4,200 correlation rules (existing)                                    │
│  │  ML-tuned thresholds (dynamic, not static)                            │
│  │  Alert deduplication and clustering                                    │
│  │                                                                        │
│  ┌──────────────────────────────────────────────────────────────────┐    │
│  │  FUSION ENGINE                                                    │    │
│  │  Combines rule alerts + ML scores → composite risk score          │    │
│  │  Deduplicates related signals → single enriched investigation     │    │
│  └──────────────────────────────────────────────────────────────────┘    │
└──────────────────────────────────────────────────────────────────────────┘
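
The fusion engine's core idea is simple enough to sketch in a few lines. The following is an illustrative Python sketch, not Quantum Financial's implementation: the `Signal` class and the per-source weights are invented for the example, and real weights would be calibrated during shadow mode.

```python
# Illustrative fusion-engine scoring step. The Signal class and the
# per-source weights are invented for this example; production weights
# would be calibrated during shadow mode.
from dataclasses import dataclass

@dataclass
class Signal:
    source: str   # "rules", "ueba", "graph", or "autoencoder"
    score: float  # normalized to 0..1 by the producing model

WEIGHTS = {"rules": 0.25, "ueba": 0.30, "graph": 0.25, "autoencoder": 0.20}

def composite_risk(signals: list[Signal]) -> float:
    """Fuse per-model scores into a single 0-100 risk score.

    Each source contributes its strongest score, so several medium
    signals from different layers outscore one strong isolated signal.
    """
    best: dict[str, float] = {}
    for s in signals:
        best[s.source] = max(best.get(s.source, 0.0), s.score)
    raw = sum(WEIGHTS.get(src, 0.0) * score for src, score in best.items())
    return round(100 * raw / sum(WEIGHTS.values()), 1)

converging = [Signal("ueba", 0.7), Signal("graph", 0.6), Signal("autoencoder", 0.65)]
isolated = [Signal("ueba", 0.95)]
print(composite_risk(converging), composite_risk(isolated))
```

The design choice matters: three medium signals from different layers score higher than a single strong signal from one model, which is exactly how the APT described below was caught.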

Deployment Timeline

Month | Phase | Activities
Month 1 | Data foundation | Unified log pipeline, feature store deployment, baseline data collection
Month 2 | UEBA baseline | 30-day behavioral baseline for all 12,000 users. Peer group definition (47 role clusters). Initial Isolation Forest training
Month 3 | Shadow mode | UEBA model scoring in shadow mode. No alerts. Analyst review of top-100 daily scores. Threshold calibration
Month 4 | UEBA production | UEBA alerts go live. NLP phishing model enters shadow mode. Alert clustering reduces rule-based noise by 60%
Month 5 | Layer 2 + 3 | NLP phishing model goes live. Graph-based detection enters shadow mode. Autoencoder for network anomalies in training
Month 6 | Full deployment | All three layers active. Fusion engine combining signals. A/B testing against rule-only baseline

The APT That Rules Missed

In month 5, the ML detection system identified a previously undetected APT campaign that had been active for more than three weeks. The 4,200 existing rules had not fired a single alert because the attacker operated entirely within "normal" thresholds for individual indicators.

Attack timeline and ML detection signals:

Day 1-14: Initial Access & Reconnaissance
  ────────────────────────────────────────
  Attacker compromised contractor account: vendor_maint@example.com
  Method: Credential stuffing from breach database
  IP: 198.51.100.44 (VPN connection from unusual geography)

  Rule detection: NONE
    - VPN login succeeded (no failed attempts — credential was valid)
    - Contractor accounts regularly use VPN
    - No geolocation rule for contractor accounts

  ML detection signals:
    ✓ UEBA: Login time 02:47 UTC — 3.2σ from contractor baseline
    ✓ UEBA: New source subnet — never seen for this account
    ✓ Graph: New authentication edge (vendor_maint → DC-PROD-01)
    Individual signals below alert threshold at this stage

Day 15-28: Lateral Movement
  ────────────────────────────────────────
  Attacker used contractor access to enumerate Active Directory
  Kerberoasted service account: svc_reporting@example.com
  Moved to internal host: 10.50.20.118 (reporting server)

  Rule detection: NONE
    - AD enumeration queries below volume threshold
    - Kerberos TGS requests are normal for service accounts
    - Reporting server legitimately accessed by many users

  ML detection signals:
    ✓ UEBA: vendor_maint accessing 12 new hosts in 48 hours
      (baseline: 0.3 new hosts/week) — z-score: 8.7
    ✓ Graph: Authentication path vendor_maint → DC → svc_reporting
      flagged as structurally anomalous (2-hop privilege jump)
    ✓ Network autoencoder: Reconstruction error spike on
      reporting server outbound traffic pattern

    >>> UEBA COMPOSITE RISK SCORE: 89/100 — ALERT GENERATED <<<
    >>> Day 23: SOC Tier 2 analyst begins investigation <<<

Day 29-31: Data Staging & Exfiltration Attempt
  ────────────────────────────────────────
  Attacker accessed financial reporting database
  Staged 2.3 GB of quarterly results (pre-announcement)
  Attempted DNS tunneling exfiltration to dns.data-analytics.example.com

  Rule detection: NONE
    - Database access via legitimate service account
    - Data volume below daily threshold (reporting system moves more)
    - DNS queries individually unremarkable

  ML detection signals:
    ✓ UEBA: svc_reporting downloading data at 03:15 UTC
      (baseline activity: business hours only)
    ✓ NLP: No phishing involved, but DNS analysis model
      flagged high-entropy subdomain queries:
      aGVsbG8gd29ybGQ.dns.data-analytics.example.com
    ✓ Network autoencoder: Unusual DNS query pattern —
      regular interval, fixed payload sizes = beaconing
    ✓ Graph: New external edge from svc_reporting → external IP
      203.0.113.77 (never seen in 90-day baseline)

    >>> MULTIPLE ML MODELS CONVERGING — RISK SCORE: 97/100 <<<
    >>> Day 29: Escalated to Tier 3 + IR team <<<
    >>> Day 30: Containment actions executed <<<
    >>> Day 31: Exfiltration blocked, account disabled <<<
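
The per-dimension UEBA signals in the timeline above (for example "z-score: 8.7") are plain deviations from an account's own history. A minimal sketch, using synthetic baseline numbers rather than the case study's actual data:

```python
# Per-dimension UEBA z-score: deviation of the current observation from
# the account's own history. Baseline numbers are synthetic illustrations,
# not the case study's real data.
import statistics

def zscore(observed: float, baseline: list[float]) -> float:
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        return 0.0 if observed == mean else float("inf")
    return (observed - mean) / stdev

# Weekly counts of never-before-seen hosts for one contractor account:
history = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1]  # roughly 0.3 new hosts/week
current = 12                                     # burst during lateral movement
print(round(zscore(current, history), 1))        # far outside a 3-sigma band
```

No single dimension like this triggers an alert on its own; the fusion engine combines many such scores into the composite risk number shown in the timeline.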

Results After 6 Months

Metric | Before ML (Q1) | After ML (Q3) | Change
Daily alerts | 14,200 | 380 (actionable) | -97% volume
True positive rate | 0.08% | 73% | +912x improvement
Mean time to detect (MTTD) | 18.3 days | 3.2 hours | -99.3%
Mean time to respond (MTTR) | 6.2 days | 1.8 hours | -98.8%
Analyst burnout (turnover) | 45% annual | 12% projected | -73%
Missed incidents | 4 per quarter | 0 | -100%
Rule maintenance burden | 2 FTEs | 0.3 FTE | -85%
APTs detected by ML only | N/A | 3 campaigns | New capability
Phishing detection rate | 82% | 98.4% | +20%
False positive rate (phishing) | 3.2% | 0.08% | -97.5%

Key Insight

The APT was undetectable by rules because no single indicator exceeded any threshold. The attacker stayed below every individual rule trigger. ML detected the composite anomaly — the convergence of multiple weak signals across authentication, network, and behavioral dimensions. No single model detected it alone; the fusion of UEBA, graph analysis, and network autoencoder signals created the detection.

Lessons Learned

  1. Shadow mode is non-negotiable: The 4-week shadow period identified 3 feature engineering bugs and 2 threshold misconfigurations that would have caused alert storms in production.

  2. Peer groups matter more than absolute baselines: The contractor account was flagged partly because its behavior deviated from the "contractor" peer group. Using a global baseline would have missed it.

  3. Feedback loops require governance: When analysts mark ML alerts as false positives, those labels feed the retraining pipeline. A process for label quality auditing prevented two potential data poisoning scenarios.

  4. Graph detection finds what UEBA misses: The structural anomaly (2-hop privilege jump from contractor to service account to domain controller) was invisible to statistical behavioral analysis. Graph-based detection provided unique signal.

  5. Executive buy-in requires metrics, not models: The CISO presentation focused entirely on MTTD reduction and cost savings — not on model architecture or algorithm selection.
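
The structural anomaly in lesson 4, a chain of never-before-seen authentication edges, can be sketched without any graph library. The account and host names below are the fictional ones from the case study, and `novel_paths` is an illustrative helper, not a production detector:

```python
# Sketch of lesson 4: flag multi-hop chains built entirely from
# authentication edges absent from the baseline window.
from collections import defaultdict

def build_graph(auth_events):
    graph = defaultdict(set)
    for src, dst in auth_events:
        graph[src].add(dst)
    return graph

def novel_paths(baseline_events, window_events):
    """Return 2-hop paths in the window made only of edges new vs baseline."""
    base = build_graph(baseline_events)
    new_edges = [(s, d) for s, d in window_events if d not in base[s]]
    new_graph = build_graph(new_edges)
    return [(a, b, c) for a, b in new_edges for c in new_graph[b]]

baseline_auth = [("vendor_maint", "VPN-GW"), ("svc_reporting", "RPT-01")]
window_auth = [("vendor_maint", "DC-PROD-01"), ("DC-PROD-01", "svc_reporting")]
print(novel_paths(baseline_auth, window_auth))
```

Each edge on its own is just one new login; the 2-hop chain of new edges is the signal that per-account statistics cannot see.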


9. Getting Started Guide

Deploying ML-based detection is a multi-year journey. Start with proven techniques and expand as your team builds competence. The following maturity model provides a practical roadmap.

ML Detection Maturity Model

Level 5: AUTONOMOUS ──────────────────────────────────────────────────
│  Self-tuning models, automated response, adversarial robustness
│  testing, continuous red team validation, custom GNN/transformer
│  models, real-time retraining pipeline
Level 4: ADVANCED ────────────────────────────────────────────────────
│  Multi-model fusion, graph-based detection, NLP phishing,
│  adversarial training, A/B model deployment, drift monitoring,
│  custom feature engineering, analyst-in-the-loop retraining
Level 3: INTERMEDIATE ───────────────────────────────────────────────
│  UEBA with peer groups, ML-tuned alert thresholds, anomaly
│  detection on network flows, phishing scoring (pre-built models),
│  basic MLOps (model versioning, monitoring)
Level 2: FOUNDATIONAL ───────────────────────────────────────────────
│  Statistical baselines (z-score, IQR), alert clustering/dedup,
│  simple anomaly scoring, pre-built UEBA from SIEM vendor,
│  feature store for security data
Level 1: RULE-BASED ─────────────────────────────────────────────────
│  Correlation rules only, signature matching, threshold alerts,
│  static IOC feeds, manual alert triage

Practical Steps for Each Maturity Level

Level 1 → Level 2 (3-6 months)

Goal: Introduce statistical baselines to reduce alert noise.

  1. Identify your noisiest rules: Export alert volume by rule ID. The top 20 rules typically generate 80% of alerts.
  2. Add statistical thresholds: For the top 20 rules, replace static thresholds with z-score or IQR-based dynamic thresholds.
  3. Deploy vendor UEBA: Most modern SIEMs (Sentinel, Splunk ES, QRadar) include built-in UEBA capabilities. Enable them with default settings.
  4. Build a feature store: Start collecting and aggregating behavioral features even before you train custom models.
  5. Establish a label collection process: Every analyst investigation should produce a "true positive" or "false positive" verdict that feeds future training.
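
The dynamic thresholds in step 2 can be as simple as an IQR upper fence computed from a rule's own recent history. A minimal sketch, with synthetic hourly counts:

```python
# Sketch of step 2: an IQR upper fence computed from a rule's own recent
# history replaces its static threshold. The hourly counts are synthetic.
import statistics

def iqr_threshold(history, k=1.5):
    """Upper fence Q3 + k*IQR; counts above it are anomalous for this rule."""
    q1, _, q3 = statistics.quantiles(history, n=4)
    return q3 + k * (q3 - q1)

# Hourly failed-login counts for one rule over recent history:
history = [3, 5, 4, 6, 2, 5, 7, 4, 3, 5, 6, 4, 5, 3, 6, 5, 4, 7, 5, 4]
threshold = iqr_threshold(history)
print(threshold)  # a burst of 40 failures alerts; normal jitter does not
```

Because the fence tracks each rule's own distribution, a chatty rule and a quiet rule get different thresholds without manual tuning.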

Quick win — Alert clustering with KQL:

// Cluster related alerts to reduce volume
// Groups alerts by entity + time window + technique
SecurityAlert
| where TimeGenerated >= ago(24h)
| extend AlertEntity = tostring(parse_json(Entities)[0].Address)
| summarize
    AlertCount = count(),
    AlertNames = make_set(AlertName, 10),
    Tactics = make_set(Tactics, 5),
    FirstSeen = min(TimeGenerated),
    LastSeen = max(TimeGenerated),
    // Rank severity numerically; max() on the raw string is lexicographic
    MaxSeverity = max(case(AlertSeverity == "High", 3,
        AlertSeverity == "Medium", 2, AlertSeverity == "Low", 1, 0))
    by AlertEntity, bin(TimeGenerated, 1h)
| where AlertCount > 1
| sort by AlertCount desc

Level 2 → Level 3 (6-12 months)

Goal: Custom UEBA with peer groups and ML-tuned thresholds.

  1. Define peer groups: Cluster users by role, department, and access patterns. Start with Active Directory groups and refine with behavioral clustering.
  2. Train Isolation Forest models: Use the code patterns from Section 3. Start with authentication anomalies — they have the highest signal-to-noise ratio.
  3. Integrate ML scores into SIEM: Write model output back to the SIEM as custom fields. Create analytics rules that combine ML scores with rule-based detections.
  4. Deploy pre-built phishing models: Tools like Microsoft Defender for Office 365 and Google Workspace include ML-based phishing detection. Enable and tune before building custom models.
  5. Implement shadow mode: Every new model runs in shadow mode for 30 days before generating alerts.
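
A minimal Isolation Forest sketch in the spirit of step 2, using scikit-learn on synthetic per-login features. The feature choices and distributions are illustrative, not a reference implementation:

```python
# Isolation Forest sketch for step 2, with synthetic per-login features.
# Real features would come from the Level 2 feature store.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Columns: hour_of_day, new_source_subnet (0/1), hosts_touched_24h
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),      # business-hours login times
    rng.binomial(1, 0.02, 500),  # new subnets are rare
    rng.poisson(2, 500),         # a handful of hosts per day
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# Contractor-style anomaly: 02:47 UTC, new subnet, 12 hosts in a day
suspect = np.array([[2.78, 1, 12]])
print(model.predict(suspect))  # -1 marks the login as anomalous
```

In practice one model per peer group (rather than one global model) is what keeps this technique's false positive rate workable.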

Level 3 → Level 4 (12-24 months)

Goal: Multi-model fusion, graph detection, custom NLP.

  1. Build a model registry: Use MLflow or similar to version, track, and roll back models.
  2. Implement graph-based detection: Start with authentication graph anomaly detection (Section 5). Identity is the highest-value graph for initial deployment.
  3. Train custom NLP models: Fine-tune a transformer on your organization's phishing corpus. Internal language patterns improve detection accuracy beyond generic models.
  4. Deploy adversarial robustness testing: Regularly test models against adversarial examples. Include adversarial samples in retraining.
  5. Automated drift detection: Implement the drift detection pipeline from Section 6 with automated alerting when feature distributions shift.
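
For step 5, one common drift check is a two-sample Kolmogorov-Smirnov test comparing a feature's training-time distribution against the most recent window. This sketch assumes scipy and uses synthetic data for both windows:

```python
# Step 5 sketch: two-sample Kolmogorov-Smirnov drift check on one feature.
# Training and recent windows are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_window = rng.normal(loc=10, scale=2, size=2000)  # login hour at fit time
recent_window = rng.normal(loc=12, scale=2, size=2000)    # workforce shifted later

stat, p_value = ks_2samp(training_window, recent_window)
if p_value < 0.01:
    print(f"drift detected (KS={stat:.3f}): flag model for retraining")
```

Run the check per feature on a schedule; a drift alert should open a retraining ticket, not silently retrain, so that the label-governance process from the case study stays in the loop.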

Open Source Tool Recommendations

Tool | Purpose | Maturity Level
scikit-learn | Classical ML (Isolation Forest, Random Forest, SVM) | Level 2+
XGBoost / LightGBM | Gradient boosted trees for tabular security data | Level 2+
PyTorch / TensorFlow | Deep learning (autoencoders, transformers) | Level 3+
PyTorch Geometric | Graph neural networks | Level 4+
Hugging Face Transformers | NLP models (phishing detection, log analysis) | Level 3+
MLflow | Model registry, experiment tracking, deployment | Level 3+
Apache Kafka | Real-time feature streaming | Level 3+
Redis | Feature store (low-latency feature serving) | Level 2+
Feast | Feature store for ML (open source) | Level 3+
Great Expectations | Data quality validation for ML pipelines | Level 3+
Evidently AI | Model monitoring and drift detection | Level 3+
ONNX Runtime | Model serving (convert any framework to ONNX) | Level 3+
Sigma | Detection-as-code (rules that complement ML) | Level 1+
MITRE ATLAS | Adversarial ML threat framework | Level 4+

Team Skills Matrix

Skill | Level 2 | Level 3 | Level 4
Python programming | Basic | Intermediate | Advanced
Statistics (z-score, distributions) | Required | Required | Required
Scikit-learn / pandas | Basic | Intermediate | Advanced
Deep learning frameworks | Not needed | Basic | Intermediate
Graph theory | Not needed | Not needed | Basic
NLP fundamentals | Not needed | Basic | Intermediate
MLOps (CI/CD for models) | Not needed | Basic | Intermediate
Security domain expertise | Expert | Expert | Expert

Start With What You Have

The biggest mistake teams make is trying to build a Level 4 ML detection system from scratch. Start at Level 2: statistical baselines, vendor UEBA, and alert clustering. These three capabilities alone typically reduce alert volume by 60-80% and improve true positive rates by 5-10x. Build custom ML only after you have exhausted the value of statistical methods and vendor tools.


10. Detection Queries — ML-Augmented Scenarios

The following queries demonstrate how to integrate ML model outputs with traditional SIEM detection. These assume ML model scores are written back to the SIEM as custom fields or enrichment tables.

KQL — ML-Augmented Anomalous Authentication

// Detect high-risk authentication events using ML risk scores
// Requires: UEBA model output in custom table UEBARiskScores
let risk_threshold = 80;
let lookback = 1h;
//
UEBARiskScores
| where TimeGenerated >= ago(lookback)
| where RiskScore >= risk_threshold
| join kind=inner (
    SigninLogs
    | where TimeGenerated >= ago(lookback)
    | where ResultType == "0"  // Successful logins only (ResultType is a string)
    | project
        UserPrincipalName,
        IPAddress,
        Location,
        AppDisplayName,
        DeviceDetail,
        ConditionalAccessStatus,
        AuthenticationDetails,
        SigninTime = TimeGenerated
) on $left.UserId == $right.UserPrincipalName
| extend RiskFactors = parse_json(RiskFactors_s)
| project
    UserId,
    RiskScore,
    RiskFactors,
    IPAddress,
    Location,
    AppDisplayName,
    DeviceDetail,
    ConditionalAccessStatus,
    SigninTime,
    ModelVersion
| sort by RiskScore desc

KQL — DNS Tunneling Detection with ML Scoring

// Detect DNS tunneling using entropy and ML model scores
// Combines statistical features with ML anomaly scores
let entropy_threshold = 4.0;
let query_length_threshold = 50;
let lookback = 1h;
//
DnsEvents
| where TimeGenerated >= ago(lookback)
| where QueryType in ("A", "AAAA", "TXT", "CNAME")
| extend SubdomainPart = tostring(split(Name, ".")[0])
| extend QueryLength = strlen(Name)
| extend SubdomainLength = strlen(SubdomainPart)
// Calculate character frequency distribution for entropy
| extend CharSet = extract_all("(.)", SubdomainPart)
| extend UniqueChars = array_length(make_set(CharSet))
| extend EntropyEstimate = log2(1.0 * UniqueChars) *
    (1.0 * SubdomainLength / max_of(QueryLength, 1))  // 1.0 * forces real division
// Filter for suspicious patterns
| where QueryLength > query_length_threshold
    or EntropyEstimate > entropy_threshold
    or SubdomainLength > 30
// Aggregate per source and domain
| summarize
    QueryCount = count(),
    AvgQueryLength = avg(QueryLength),
    MaxQueryLength = max(QueryLength),
    AvgEntropy = avg(EntropyEstimate),
    UniqueSubdomains = dcount(SubdomainPart),
    FirstQuery = min(TimeGenerated),
    LastQuery = max(TimeGenerated),
    QueryInterval_sec = datetime_diff(
        "second", max(TimeGenerated), min(TimeGenerated)
    ) / max_of(count() - 1, 1)
    by ClientIP, ParentDomain = strcat(
        tostring(split(Name, ".")[-2]), ".",
        tostring(split(Name, ".")[-1])
    )
| where QueryCount > 10
    and AvgEntropy > 3.5
    and UniqueSubdomains > 5
// Join with ML anomaly scores if available
// | join kind=leftouter MLDNSScores on ClientIP, ParentDomain
| extend TunnelingIndicators = pack(
    "high_entropy", AvgEntropy > 4.0,
    "regular_interval", QueryInterval_sec between (1 .. 60),
    "high_volume", QueryCount > 100,
    "long_subdomains", AvgQueryLength > 60
)
| extend IndicatorCount = toint(AvgEntropy > 4.0) +
    toint(QueryInterval_sec between (1 .. 60)) +
    toint(QueryCount > 100) +
    toint(AvgQueryLength > 60)
| where IndicatorCount >= 2
| sort by IndicatorCount desc, QueryCount desc
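
The `EntropyEstimate` above is a cheap proxy built from unique-character counts, since KQL has no native entropy function. When DNS logs pass through a Python enrichment stage, exact Shannon entropy takes only a few lines:

```python
# Exact Shannon entropy per subdomain label, for an enrichment stage that
# complements the approximate KQL EntropyEstimate.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical character distribution."""
    if not s:
        return 0.0
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

# Encoded payloads score far above human-chosen labels:
print(shannon_entropy("mail"))             # 2.0
print(shannon_entropy("aGVsbG8gd29ybGQ"))  # ~3.46
```

Scores written back to the SIEM as an enrichment table can then replace the estimate in the query above.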

SPL — UEBA Risk Score Correlation

| inputlookup ueba_risk_scores.csv
| where risk_score >= 80
| join type=inner user
    [search index=authentication sourcetype=WinEventLog
     EventCode=4624 Logon_Type=10
     earliest=-1h latest=now
     | stats count as login_count,
         values(Source_Network_Address) as src_ips,
         values(Workstation_Name) as workstations,
         dc(Source_Network_Address) as unique_src_ips
         by Account_Name
     | rename Account_Name as user]
| eval risk_category = case(
    risk_score >= 95, "CRITICAL",
    risk_score >= 85, "HIGH",
    risk_score >= 70, "MEDIUM",
    1=1, "LOW")
| eval alert_priority = case(
    risk_score >= 95 AND unique_src_ips > 3, "P1 - Immediate",
    risk_score >= 85, "P2 - Urgent",
    risk_score >= 70, "P3 - Standard",
    1=1, "P4 - Low")
| sort - risk_score
| table user, risk_score, risk_category, alert_priority,
    login_count, unique_src_ips, src_ips, workstations,
    risk_factors

SPL — ML-Enhanced Lateral Movement Detection

| tstats summariesonly=t count as auth_count,
    dc(Authentication.dest) as unique_dests,
    values(Authentication.dest) as dest_hosts,
    dc(Authentication.src) as unique_srcs
    from datamodel=Authentication
    where Authentication.action=success earliest=-30d latest=now
    by Authentication.user, _time span=1h
| rename Authentication.user as user
| eventstats avg(unique_dests) as avg_dests,
    stdev(unique_dests) as stdev_dests,
    avg(auth_count) as avg_auths,
    stdev(auth_count) as stdev_auths
    by user
| eval dest_zscore = if(stdev_dests > 0,
    (unique_dests - avg_dests) / stdev_dests, 0)
| eval auth_zscore = if(stdev_auths > 0,
    (auth_count - avg_auths) / stdev_auths, 0)
| eval composite_zscore = (dest_zscore * 0.6) + (auth_zscore * 0.4)
| where composite_zscore > 3.0
    AND _time >= relative_time(now(), "-1h")
| eval lateral_movement_indicators = case(
    unique_dests > 10, "HIGH - many unique destinations",
    unique_dests > 5, "MEDIUM - elevated destination count",
    1=1, "LOW - moderate deviation")
| sort - composite_zscore
| table _time, user, unique_dests, auth_count,
    dest_zscore, auth_zscore, composite_zscore,
    dest_hosts, lateral_movement_indicators

KQL — Anomalous Process Execution with ML Scoring

// Detect anomalous process execution patterns
// Uses ML baseline of normal process-parent relationships
let lookback = 1h;
let rare_threshold = 5; // Seen fewer than 5 times in 30 days
//
// Build process baseline (30-day history)
let ProcessBaseline = DeviceProcessEvents
| where Timestamp between (ago(30d) .. ago(1d))
| summarize
    BaselineCount = count(),
    HostCount = dcount(DeviceName),
    UserCount = dcount(InitiatingProcessAccountName)
    by ProcessName = FileName,
       ParentProcess = InitiatingProcessFileName
| where BaselineCount >= rare_threshold;
//
// Current activity — find processes NOT in baseline
DeviceProcessEvents
| where Timestamp >= ago(lookback)
| extend ProcessName = FileName
| extend ParentProcess = InitiatingProcessFileName
| join kind=leftanti ProcessBaseline
    on ProcessName, ParentProcess
// These are process-parent combinations never/rarely seen
| project
    Timestamp,
    DeviceName,
    AccountName = InitiatingProcessAccountName,
    ProcessName,
    ParentProcess,
    CommandLine = ProcessCommandLine,
    ProcessId,
    ParentProcessId = InitiatingProcessId
| extend SuspicionLevel = case(
    // Known LOLBins with unusual parents
    ProcessName in ("powershell.exe", "cmd.exe", "wscript.exe",
        "cscript.exe", "mshta.exe", "certutil.exe", "bitsadmin.exe")
        and ParentProcess !in ("explorer.exe", "svchost.exe",
            "services.exe"), "HIGH",
    // Any process spawned by Office applications
    ParentProcess in ("winword.exe", "excel.exe", "powerpnt.exe",
        "outlook.exe"), "HIGH",
    // Processes from temp/download directories
    CommandLine has_any ("\\Temp\\", "\\Downloads\\",
        "\\AppData\\"), "MEDIUM",
    1 == 1, "LOW"
)
| where SuspicionLevel in ("HIGH", "MEDIUM")
| sort by SuspicionLevel asc, Timestamp desc

Summary and Key Takeaways

AI-powered threat detection is not a future aspiration — it is a present necessity. The gap between what rules can detect and what adversaries deploy grows wider every quarter. ML fills that gap, not by replacing rules, but by extending detection into the behavioral, structural, and semantic dimensions that static signatures cannot reach.

Core principles to remember:

  1. Rules are the foundation, not the ceiling: Keep your correlation rules. They provide deterministic, fast, explainable detection for known threats. ML extends beyond rules, not around them.

  2. Start with UEBA: Statistical baselines and anomaly detection deliver the highest ROI with the lowest complexity. Most SOC teams should spend 12-18 months at Level 2-3 before pursuing advanced ML.

  3. Shadow mode before production: Never deploy an ML detection model directly into alerting. Shadow mode catches configuration errors, threshold miscalibrations, and feature engineering bugs before they become alert storms.

  4. Multi-model fusion beats single-model accuracy: The Quantum Financial case study demonstrated that no single model detected the APT. The convergence of UEBA, graph analysis, and network anomaly detection created the detection signal.

  5. Adversarial robustness is not optional: If you deploy ML detection, adversaries will attempt to evade it. Include adversarial testing in your MLOps lifecycle from day one.

  6. Metrics drive adoption: MTTD reduction, false positive rates, and analyst retention are the metrics that earn executive support and budget. Model accuracy scores do not resonate in the boardroom.

The organizations that master ML-based detection will operate fundamentally differently from those that do not. Their SOCs will be quieter (fewer false positives), faster (automated triage), and more effective (detecting attacks that rules miss). The transition is not easy, but it is inevitable.


Certification Paths for AI Security

Ready to formalize your AI security expertise? These certifications validate ML and AI security skills that are increasingly in demand:

Investing in AI security certifications while the field is still maturing positions you ahead of the curve. Organizations are actively hiring for ML security engineering roles, and recognized credentials often command a salary premium over non-certified peers.


This post is part of the Nexus SecOps threat intelligence blog. All data, companies, IP addresses, and scenarios are entirely fictional and created for educational purposes. IP addresses use RFC 5737 (192.0.2.x, 198.51.100.x, 203.0.113.x) and RFC 1918 (10.x) ranges. Domain names use example.com. No real organizations, individuals, or incidents are referenced.