Chapter 1: Introduction to SOC & AI - Quiz
Instructions
Test your understanding of SOC fundamentals, analyst tiers, AI opportunities and limitations, and the MITRE ATT&CK framework. Each question includes detailed explanations and links to relevant concepts.
Question 1: What is the primary responsibility of a Tier 1 SOC analyst?
A) Proactive threat hunting and malware analysis
B) Initial alert triage, enrichment, and escalation
C) Detection engineering and rule development
D) Incident command and executive reporting
Answer
Correct Answer: B) Initial alert triage, enrichment, and escalation
Explanation: Tier 1 analysts serve as the first line of defense, monitoring SIEM dashboards, performing initial triage to determine if alerts are true positives or false positives, gathering basic enrichment data, and escalating complex or high-severity incidents to Tier 2. Threat hunting and advanced analysis are Tier 3 responsibilities, while detection engineering is a specialized role.
Reference: Chapter 1, Section 1.2 - SOC Team Structure
Question 2: Your SIEM generates an 'Impossible Travel' alert: a user logged in from New York at 2:45 AM and from Beijing at 2:52 AM. What should be your FIRST step in triage?
A) Immediately disable the user account
B) Enrich the alert with threat intelligence, user context, and historical login patterns
C) Escalate to law enforcement
D) Ignore it as VPN usage is common
Answer
Correct Answer: B) Enrich the alert with threat intelligence, user context, and historical login patterns
Explanation: Before taking any action, a Tier 1 analyst should gather context: Is this user known to use VPNs? Does the account have a history of traveling? Are these IPs associated with known VPN services or threat actors? Enrichment provides the necessary information to make an informed triage decision. Immediately disabling accounts without context can disrupt legitimate business activity.
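The detection logic behind this alert type can be sketched in a few lines. This is a minimal illustration, not a production detector: `haversine_km`, `is_impossible_travel`, and the 1,000 km/h threshold (roughly commercial-aircraft speed) are hypothetical names and values chosen for the example, and real systems must still apply the enrichment context described above before acting.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def is_impossible_travel(loc1, loc2, minutes_apart, max_speed_kmh=1000):
    # Flag the login pair if the implied travel speed exceeds the ceiling.
    distance = haversine_km(*loc1, *loc2)
    speed_kmh = distance / (minutes_apart / 60)
    return speed_kmh > max_speed_kmh

# New York (40.71, -74.01) to Beijing (39.90, 116.41), 7 minutes apart
print(is_impossible_travel((40.71, -74.01), (39.90, 116.41), 7))  # True
```

Note that the rule alone cannot distinguish an attacker from a VPN exit node, which is exactly why enrichment comes before action.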
Reference: Chapter 1, Curiosity Hook - The 3 AM Alert
Question 3: Which metric measures the average time from when a security incident occurs to when it is detected by the SOC?
A) Mean Time to Acknowledge (MTTA)
B) Mean Time to Respond (MTTR)
C) Mean Time to Detect (MTTD)
D) Dwell Time
Answer
Correct Answer: C) Mean Time to Detect (MTTD)
Explanation: MTTD measures detection speed from the moment an incident occurs to when the SOC identifies it. MTTA measures acknowledgment time after an alert fires, MTTR measures response/remediation time, and Dwell Time is the total period an attacker remains undetected (which includes MTTD but extends through investigation and eradication).
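The distinctions between these metrics are easiest to see as timestamp arithmetic. The sketch below uses hypothetical event times; it follows one common convention in which dwell time runs from occurrence through remediation, matching the explanation above.

```python
from datetime import datetime

def soc_metrics(occurred, detected, acknowledged, remediated):
    # Each argument is a datetime; all results are in minutes.
    minutes = lambda delta: delta.total_seconds() / 60
    return {
        "MTTD": minutes(detected - occurred),        # occurrence -> detection
        "MTTA": minutes(acknowledged - detected),    # alert -> analyst pickup
        "MTTR": minutes(remediated - acknowledged),  # pickup -> remediation
        "dwell": minutes(remediated - occurred),     # total attacker presence
    }

m = soc_metrics(
    occurred=datetime(2024, 1, 1, 2, 0),
    detected=datetime(2024, 1, 1, 2, 45),
    acknowledged=datetime(2024, 1, 1, 2, 52),
    remediated=datetime(2024, 1, 1, 5, 0),
)
print(m)  # {'MTTD': 45.0, 'MTTA': 7.0, 'MTTR': 128.0, 'dwell': 180.0}
```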
Reference: Glossary - Mean Time to Detect
Question 4: A SOC receives 500 alerts per day with a 60% false positive rate. Tier 1 analysts spend an average of 10 minutes per alert. How many hours per day are wasted on false positives?
A) 25 hours
B) 50 hours
C) 83 hours
D) 100 hours
Answer
Correct Answer: B) 50 hours
Explanation: - Total alerts: 500/day - False positives: 500 × 0.60 = 300 - Time per alert: 10 minutes - Total FP time: 300 × 10 = 3,000 minutes = 50 hours
This demonstrates the severe impact of alert fatigue. A team of even 5 analysts would struggle to handle this volume, leading to burnout and missed true threats.
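The arithmetic above can be checked directly:

```python
alerts_per_day = 500
fp_rate = 0.60
minutes_per_alert = 10

false_positives = alerts_per_day * fp_rate            # 300 alerts/day
wasted_minutes = false_positives * minutes_per_alert  # 3,000 minutes/day
wasted_hours = wasted_minutes / 60

print(wasted_hours)  # 50.0
```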
Reference: Chapter 1, Section 1.3 - Challenge 1: Alert Fatigue
Question 5: Why is AI hallucination a key limitation in the context of SOC operations?
A) AI becomes too fast for analysts to keep up
B) LLMs can generate plausible but incorrect information, such as fabricated ATT&CK technique IDs
C) AI cannot process log data
D) AI always requires human approval for every decision
Answer
Correct Answer: B) LLMs can generate plausible but incorrect information, such as fabricated ATT&CK technique IDs
Explanation: Hallucination occurs when Large Language Models (LLMs) confidently produce false information that sounds legitimate. For example, an LLM might invent a MITRE ATT&CK technique like "T1234.567 - Advanced Persistent Exfiltration" that doesn't exist. This is dangerous in SOC operations because analysts may waste time investigating non-existent threats or apply incorrect response procedures.
Mitigation: Use Retrieval-Augmented Generation (RAG) to ground LLM outputs in verified sources, implement guardrails that validate outputs against known databases, and train analysts to verify LLM suggestions.
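A guardrail of the kind described above can be as simple as checking every technique ID an LLM cites against a verified list. This is a sketch: `KNOWN_TECHNIQUES` is a tiny hypothetical allow-list, and a real deployment would load the full technique set from MITRE's published ATT&CK data.

```python
import re

# Hypothetical allow-list for illustration; in practice, load every valid
# technique ID from MITRE's published ATT&CK dataset.
KNOWN_TECHNIQUES = {"T1059", "T1059.001", "T1078", "T1566", "T1041"}

def validate_technique_ids(llm_output: str) -> dict:
    """Split ATT&CK-style IDs in an LLM response into verified vs. hallucinated."""
    cited = set(re.findall(r"T\d{4}(?:\.\d{3})?", llm_output))
    return {
        "verified": cited & KNOWN_TECHNIQUES,
        "hallucinated": cited - KNOWN_TECHNIQUES,
    }

result = validate_technique_ids(
    "The attacker used T1059.001 (PowerShell) and T1234.567 for exfiltration."
)
print(result["hallucinated"])  # {'T1234.567'}
```

Flagged IDs can then be stripped from the output or surfaced to the analyst as unverified.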
Reference: Chapter 1, Section 1.5 - Limitation 3: Hallucination & Misinformation
Question 6: What is the purpose of the MITRE ATT&CK framework in SOC operations?
A) To automatically generate detection rules without human input
B) To provide a common language for describing adversary behavior and measuring detection coverage
C) To replace SIEM platforms with a unified threat detection system
D) To calculate MTTA and MTTR metrics
Answer
Correct Answer: B) To provide a common language for describing adversary behavior and measuring detection coverage
Explanation: MITRE ATT&CK is a knowledge base of adversary tactics, techniques, and procedures (TTPs) based on real-world observations. SOC teams use it to:
1. Map detection rules to specific techniques
2. Identify coverage gaps
3. Communicate about threats using standardized terminology (e.g., "T1059.001 - PowerShell")
4. Support purple teaming exercises
It does NOT replace SIEMs or auto-generate rules, but serves as a framework for detection engineering.
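Coverage-gap analysis against ATT&CK reduces to set arithmetic once rules are mapped to technique IDs. The rule names and technique sets below are hypothetical; real SOCs export this mapping from their detection-as-code repository or SIEM rule metadata.

```python
# Hypothetical rule-to-technique mapping for illustration.
detection_rules = {
    "ps_encoded_command": ["T1059.001"],
    "suspicious_login_geo": ["T1078"],
    "dns_tunnel_volume": ["T1071.004"],
}

# Techniques the team has prioritised (e.g., from a threat-intel report).
priority_techniques = {"T1059.001", "T1078", "T1566.001", "T1041"}

covered = {t for rules in detection_rules.values() for t in rules}
gaps = priority_techniques - covered
print(sorted(gaps))  # ['T1041', 'T1566.001']
```

The resulting gap list feeds directly into the detection-engineering backlog.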
Reference: Chapter 1, Section 1.7 - The MITRE ATT&CK Framework
Question 7: Which SOC maturity level is characterized by proactive threat hunting, threat intelligence integration, and purple teaming?
A) Level 1: Initial
B) Level 2: Developing
C) Level 3: Defined
D) Level 5: Optimizing
Answer
Correct Answer: C) Level 3: Defined
Explanation: According to the SOC Maturity Model:
- Level 1 (Initial): Basic monitoring, SIEM deployed, manual triage, high false positives
- Level 2 (Developing): Structured processes, documented runbooks, tier structure
- Level 3 (Defined): Proactive hunting, threat intel integration, automation, purple teaming
- Level 4 (Managed): AI-assisted triage, continuous improvement, predictive capabilities
- Level 5 (Optimizing): Advanced AI, zero-trust architecture, industry benchmarking
Most organizations operate at Level 2-3.
Reference: Chapter 1, Section 1.1 - SOC Maturity Levels
Question 8: An ML-based alert scoring system flags an alert with an 89/100 risk score. The reasoning includes: IP on threat feed, service account targeted, velocity exceeds baseline. What is the PRIMARY benefit of this AI-augmented approach?
A) It eliminates all false positives
B) It replaces the need for Tier 1 analysts
C) It reduces Mean Time to Acknowledge by pre-sorting high-confidence threats
D) It prevents all security breaches
Answer
Correct Answer: C) It reduces Mean Time to Acknowledge by pre-sorting high-confidence threats
Explanation: AI-augmented alert triage accelerates the detection-to-response cycle by automatically scoring alerts based on multiple contextual factors (threat intelligence matches, user risk scores, asset criticality, behavioral baselines). High-score alerts (e.g., 89/100) are immediately surfaced to analysts, while low-score alerts can be deprioritized or auto-closed.
Benefits:
- Reduces MTTA by highlighting genuine threats
- Provides consistency across analyst shifts
- Handles alert volume spikes
Important: AI does NOT eliminate false positives entirely, replace analysts, or prevent all breaches. Human judgment remains essential.
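The shape of such a scorer's output can be sketched with a weighted-factor model. The factor names and weights below are invented to reproduce the 89/100 example from the question; a production system would use a trained ML model rather than hand-set weights, but it would similarly return a score plus the contributing reasons.

```python
# Hypothetical weights, chosen only to reproduce the 89/100 example.
WEIGHTS = {
    "ip_on_threat_feed": 40,
    "service_account_targeted": 25,
    "velocity_exceeds_baseline": 24,
    "asset_is_critical": 11,
}

def score_alert(factors: dict) -> tuple:
    """Return a 0-100 risk score plus the factors that contributed to it."""
    hits = [name for name, present in factors.items() if present]
    score = min(100, sum(WEIGHTS[name] for name in hits))
    return score, hits

score, reasons = score_alert({
    "ip_on_threat_feed": True,
    "service_account_targeted": True,
    "velocity_exceeds_baseline": True,
    "asset_is_critical": False,
})
print(score, reasons)  # 89 with three contributing factors
```

Surfacing `reasons` alongside the score is what lets the analyst verify the AI's judgment rather than trust it blindly.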
Reference: Chapter 1, Section 1.4 - Use Case 1: Alert Triage Acceleration
Question 9: What is a major risk of over-automation without human oversight in SOC operations?
A) Analysts become too efficient
B) SOAR playbooks auto-block legitimate IPs or services based on false positives
C) Detection rules become too accurate
D) Incident response times increase
Answer
Correct Answer: B) SOAR playbooks auto-block legitimate IPs or services based on false positives
Explanation: Over-automation without proper guardrails can cause significant business disruption. For example:
Scenario: A SOAR playbook automatically blocks IPs flagged by an ML model. The model incorrectly identifies a legitimate partner VPN as C2 infrastructure and auto-blocks it, disrupting partner access and causing business impact.
Mitigation Strategies:
- Implement approval gates for high-impact actions (blocking critical IPs, disabling accounts)
- Set confidence thresholds (e.g., auto-block only if ML score > 95%)
- Implement rollback mechanisms (auto-unblock after 24 hours if not confirmed malicious)
- Require human review for edge cases
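The approval-gate and confidence-threshold strategies can be sketched as a single decision function a SOAR playbook might call before blocking. The function name, thresholds, and example IPs are hypothetical; real playbooks would also wire in the rollback timer.

```python
def decide_block_action(ml_score: float, ip: str, allowlist: set) -> str:
    """Guardrailed decision for an automated IP block (sketch)."""
    if ip in allowlist:
        return "skip"              # never auto-block known partner/business IPs
    if ml_score > 0.95:
        return "auto_block"        # high confidence: block, schedule rollback review
    if ml_score > 0.80:
        return "require_approval"  # medium confidence: human approval gate
    return "monitor"               # low confidence: log and watch only

partner_vpns = {"203.0.113.10"}  # documentation-range example IPs
print(decide_block_action(0.97, "203.0.113.10", partner_vpns))  # skip
print(decide_block_action(0.97, "198.51.100.5", partner_vpns))  # auto_block
print(decide_block_action(0.85, "198.51.100.5", partner_vpns))  # require_approval
```

An allowlist check before any threshold logic would have prevented the partner-VPN outage in the scenario above.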
Reference: Chapter 1, Section 1.5 - Risk: Over-Automation Without Human Oversight
Question 10: In the MegaCorp case study, what was the result of deploying ML alert scoring, auto-enrichment, and LLM copilots?
A) False positive rate increased from 35% to 60%
B) MTTA increased from 6 minutes to 12 minutes
C) False positive rate decreased from 60% to 35% and MTTA decreased from 12 minutes to 6 minutes
D) All SOC analysts were replaced by AI
Answer
Correct Answer: C) False positive rate decreased from 60% to 35% and MTTA decreased from 12 minutes to 6 minutes
Explanation: MegaCorp's SOC faced a clear problem: 300 alerts/day, a 60% FP rate, an MTTA of 12 minutes, and high analyst turnover.
AI Intervention:
1. ML alert scorer trained on 6 months of labeled alerts
2. Auto-enrichment (threat intel lookups, user context)
3. LLM copilot for runbook suggestions
Results After 3 Months:
- FP rate: 60% → 35% (better tuning informed by ML insights)
- MTTA: 12 min → 6 min (pre-scored alerts + auto-enrichment)
- Analyst satisfaction: +25%
Lesson: AI augments analysts by handling repetitive tasks, enabling them to focus on genuine threats.
Reference: Chapter 1, Mini Case Study - Improving Triage at MegaCorp
Question 11: What is 'ground truth scarcity' and why is it a challenge for security ML models?
A) There are too many labeled true positives, overwhelming the model
B) Organizations lack sufficient labeled training data for rare attack types
C) ML models cannot process security log data
D) Ground truth refers to physical security, not cybersecurity
Answer
Correct Answer: B) Organizations lack sufficient labeled training data for rare attack types
Explanation: Ground truth scarcity is a fundamental challenge in security ML:
Problem: Most organizations don't have thousands of labeled true positive incidents for every attack type. Security events are rare (especially sophisticated attacks), making it difficult to train robust models.
Impact:
- Models may overfit to the limited examples they've seen
- Poor detection of rare or novel attack techniques
- High false positive rates on edge cases
Mitigation:
- Use threat intelligence and synthetic data augmentation
- Start with high-volume use cases (e.g., phishing, brute force)
- Implement continuous retraining as new incidents are confirmed
- Combine ML with signature-based and behavioral detections
Reference: Chapter 1, Section 1.5 - Limitation 1: Ground Truth Scarcity
Question 12: Which of the following is NOT an ethical practice in defensive AI for SOC operations?
A) Building AI to detect and respond to attacks
B) Understanding attacker TTPs to improve defensive detections
C) Developing AI-powered tools to evade other organizations' security controls
D) Protecting organizational assets and data with AI-augmented monitoring
Answer
Correct Answer: C) Developing AI-powered tools to evade other organizations' security controls
Explanation: This textbook maintains a strictly defensive ethical approach:
✅ Ethical AI Uses:
- Detect and defend against attacks
- Understand attacker TTPs to build better detections
- Safe deployment of AI with guardrails
- Protect organizational assets and data
❌ Unethical/Out of Scope:
- Exploit vulnerabilities or develop exploits
- Malware development or weaponization
- Techniques for evading defensive controls
- Offensive hacking or penetration of other organizations
Why This Matters: AI amplifies capabilities. Defensive AI protects; offensive AI can cause harm. SOC operations focus exclusively on defense.
Reference: Chapter 1, Section 1.6 - Ethical & Safety Considerations
Question 13: A Tier 2 analyst is investigating a suspected lateral movement incident. Which of the following is a typical Tier 2 responsibility?
A) Monitoring SIEM dashboards and acknowledging alerts
B) Deep investigation, timeline reconstruction, and coordination with IT for containment
C) Architecture design and tool selection for the SOC
D) Closing false positive alerts with minimal documentation
Answer
Correct Answer: B) Deep investigation, timeline reconstruction, and coordination with IT for containment
Explanation: Tier 2 Incident Responders handle:
- Deep investigation of escalated incidents
- Timeline reconstruction and root cause analysis
- Coordination with IT teams for containment actions
- Threat hunting based on intelligence
- Mentoring Tier 1 analysts
Typical Metrics:
- Mean Time to Respond (MTTR): < 2 hours
- Investigation depth and accuracy
- Successful containment rate
Tier 1 monitors dashboards and performs initial triage. Tier 3 handles threat hunting, advanced malware analysis, detection engineering, and architecture.
Reference: Chapter 1, Section 1.2 - Tier 2: Incident Responders
Question 14: An attacker uses an ML evasion technique by adding benign-looking comments to malicious PowerShell code to reduce its entropy score. What is this an example of?
A) Ground truth scarcity
B) Adversarial evasion
C) Hallucination
D) Alert fatigue
Answer
Correct Answer: B) Adversarial evasion
Explanation: Adversarial evasion occurs when attackers intentionally manipulate features to bypass ML-based detections.
Example:
- Detection: ML model flags PowerShell malware based on high entropy (encrypted/obfuscated code)
- Evasion: Attacker adds benign-looking comments and variable names to reduce entropy
- Result: Model misclassifies malware as benign
Mitigation:
- Defense in depth: Combine ML with signature-based and behavioral detections
- Adversarial training: Train models on adversarially modified samples
- Explainability: Understand which features drive predictions to detect suspicious manipulations
- Monitor for evasion patterns: Unusual padding, benign strings in otherwise suspicious contexts
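The evasion mechanics can be demonstrated with Shannon entropy over character frequencies. The payload string below is a made-up base64-like example, not real malware; appending a repetitive "benign" comment dilutes the character distribution and lowers the score a naive entropy-based detector would see.

```python
from math import log2
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits per character over the string's character frequency distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * log2(c / n) for c in counts.values())

# Made-up base64-like payload (high entropy) plus repetitive benign padding.
obfuscated = "JAB3AGMAPQBOAGUAdwAtAE8AYgBqAGUAYwB0AA=="
padded = obfuscated + "# ok\n" * 50

print(round(shannon_entropy(obfuscated), 2))  # higher score
print(round(shannon_entropy(padded), 2))      # lower: padding dilutes the distribution
```

This is why the mitigations above stress defense in depth: a feature the attacker fully controls is a feature the attacker can manipulate.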
Reference: Chapter 1, Section 1.5 - Limitation 2: Adversarial Evasion
Question 15: According to the chapter, which statement about AI in SOC operations is TRUE?
A) AI will completely replace SOC analysts within 5 years
B) AI augments analysts by handling repetitive tasks, but human judgment remains essential
C) AI models always achieve perfect precision and recall
D) Deploying AI is a 'set and forget' solution requiring no maintenance
Answer
Correct Answer: B) AI augments analysts by handling repetitive tasks, but human judgment remains essential
Explanation: This aligns with the core philosophy of AI-augmented SOC operations:
Reality:
- AI excels at repetitive, high-volume tasks (alert enrichment, log correlation, query generation)
- Human analysts excel at creativity, contextual understanding, and handling novel threats
- AI accelerates workflows but doesn't replace human decision-making
Common Misconceptions Debunked:
- "AI will replace analysts" → FALSE. AI handles the 80% that's repetitive; humans handle the complex 20%
- "AI models are always right" → FALSE. ML models make predictions based on training data and can be wrong
- "Set and forget" → FALSE. AI requires continuous monitoring, retraining, and validation
Best Practice: Use AI to augment human capabilities, not replace them.
Reference: Chapter 1, Section 1.4 - AI in Security Operations: Opportunities and Common Misconceptions
Score Interpretation
- 13-15 correct: Excellent! You have a strong grasp of SOC fundamentals and AI integration.
- 10-12 correct: Good understanding. Review the areas where you missed questions.
- 7-9 correct: Adequate baseline. Revisit the chapter sections for deeper understanding.
- Below 7: Review Chapter 1 thoroughly before proceeding to Chapter 2.