Chapter 6: Threat Intelligence - Quiz
Instructions
Test your knowledge of threat intelligence types, STIX/TAXII protocols, IOCs vs TTPs, confidence scoring, threat hunting, and intelligence lifecycle management.
Question 1: What is the primary difference between strategic and tactical threat intelligence?
- A) Strategic is for executives, tactical is for analysts
- B) Strategic covers long-term trends and business risks, tactical focuses on attacker TTPs and campaigns
- C) Strategic is always classified, tactical is public
- D) There is no difference
Answer
Correct Answer: B) Strategic covers long-term trends and business risks, tactical focuses on attacker TTPs and campaigns
Explanation:
Strategic Threat Intelligence:
- Audience: Executive leadership, board members, CISOs
- Timeframe: 6-12+ months
- Content: Industry trends, geopolitical threats, risk assessments, budget justification
- Example: "APT41 targeting healthcare organizations expected to increase 30% in 2026"

Tactical Threat Intelligence:
- Audience: SOC managers, threat hunters, detection engineers
- Timeframe: Weeks to months
- Content: Attacker TTPs, campaigns, threat actor profiles
- Example: "APT41 using Cobalt Strike with specific C2 patterns"
Reference: Chapter 6, Section 6.1 - Types of Threat Intelligence
Question 2: Which threat intelligence type provides specific, actionable indicators like IP addresses, file hashes, and domains?
- A) Strategic
- B) Tactical
- C) Operational
- D) Technical
Answer
Correct Answer: D) Technical
Explanation:
Technical Threat Intelligence (IOCs):
- Specificity: Highly specific indicators of compromise
- Consumers: SIEM, IDS/IPS, EDR, firewall
- Lifespan: Hours to days (IPs/domains change frequently)
- Automation: Often consumed via threat feeds

Examples:
- File hash: abc123def456...
- IP: 203.0.113.45
- Domain: evil-c2.example.com
- URL: http://malicious.site/payload.exe
Use Case: Block known malicious IP in firewall, alert on file hash detection in EDR
Question 3: What does STIX stand for and what is its purpose?
- A) Secure Threat Information Exchange - encryption protocol for threat data
- B) Structured Threat Information Expression - standardized language for describing threats
- C) Strategic Tactical Intelligence Exchange - coordination framework
- D) Security Tool Integration Extension - API standard
Answer
Correct Answer: B) Structured Threat Information Expression - standardized language for describing threats
Explanation:
STIX (Structured Threat Information Expression):
- Purpose: Standardized JSON-based language for sharing threat intelligence
- Version: STIX 2.1 is the current standard
- Benefits: Machine-readable, interoperable across tools, consistent terminology

STIX Objects:
- Indicators: IOCs (IP, hash, domain)
- Attack Patterns: MITRE ATT&CK techniques
- Threat Actors: APT groups, campaigns
- Malware: Malware families
- Relationships: Links between objects (e.g., malware uses attack pattern)
Example:
{
  "type": "indicator",
  "spec_version": "2.1",
  "pattern": "[ipv4-addr:value = '203.0.113.45']",
  "pattern_type": "stix",
  "indicator_types": ["malicious-activity"],
  "confidence": 85
}
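Consuming an indicator like this can be sketched in plain Python. This is a minimal illustration using only the standard library; a production pipeline would normally use the official `stix2` library rather than hand-rolled regex parsing.

```python
import json
import re

# Minimal sketch: pull the IPv4 value out of a STIX indicator's pattern.
# Hand-rolled parsing is for illustration only; prefer the `stix2` library.
def extract_ipv4(indicator_json):
    obj = json.loads(indicator_json)
    if obj.get("type") != "indicator":
        return None
    match = re.search(r"\[ipv4-addr:value\s*=\s*'([^']+)'\]", obj.get("pattern", ""))
    return match.group(1) if match else None

raw = """{
  "type": "indicator",
  "pattern": "[ipv4-addr:value = '203.0.113.45']",
  "confidence": 85
}"""
print(extract_ipv4(raw))  # 203.0.113.45
```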
Reference: Chapter 6, Section 6.2 - STIX/TAXII
Question 4: TAXII is used to transport STIX data. Which TAXII service model allows clients to request specific threat intelligence on-demand?
- A) Collection
- B) Channel
- C) Push
- D) Broadcast
Answer
Correct Answer: A) Collection
Explanation:
TAXII (Trusted Automated Exchange of Intelligence Information): - Purpose: Transport protocol for sharing STIX data - Version: TAXII 2.1
TAXII Service Models:
1. Collection (Pull Model):
   - Clients request intelligence from the server
   - On-demand retrieval
   - Example: SOC queries a feed for Emotet IOCs

2. Channel (Push Model):
   - Server pushes updates to subscribed clients
   - Real-time distribution
   - Example: ISAC distributes new IOCs to members
   - Note: Channels were only reserved in TAXII 2.0 and are not defined in the TAXII 2.1 specification, so Collections are the interoperable mechanism in practice
Typical Workflow:
[Threat Intel Platform] --TAXII Collection--> [SIEM/TIP]
[SIEM/TIP] parses STIX objects --> [Auto-create detection rules/blocklists]
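The pull side of this workflow can be sketched by building the TAXII 2.1 request components for polling a collection's objects endpoint. The API root and collection ID below are hypothetical, and a real client would typically use the `taxii2-client` library rather than raw HTTP.

```python
# Sketch of a TAXII 2.1 "pull": build the URL, headers, and query parameters
# for a GET against a collection's objects endpoint.
TAXII_MEDIA_TYPE = "application/taxii+json;version=2.1"

def build_poll_request(api_root, collection_id, added_after=""):
    """Return (url, headers, params) for polling a TAXII collection."""
    url = f"{api_root}/collections/{collection_id}/objects/"
    headers = {"Accept": TAXII_MEDIA_TYPE}  # TAXII 2.1 media type per the spec
    params = {"added_after": added_after} if added_after else {}
    return url, headers, params

url, headers, params = build_poll_request(
    "https://intel.example.com/taxii2/api1",  # hypothetical API root
    "emotet-iocs",                            # hypothetical collection ID
    added_after="2025-01-01T00:00:00Z",
)
print(url)  # https://intel.example.com/taxii2/api1/collections/emotet-iocs/objects/
```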
Reference: Chapter 6, Section 6.2 - TAXII Protocol
Question 5: Why are TTPs (Tactics, Techniques, and Procedures) considered more valuable than IOCs for long-term threat detection?
- A) TTPs are easier to collect than IOCs
- B) TTPs describe attacker behavior patterns that are harder to change than infrastructure
- C) TTPs can be automatically blocked by firewalls
- D) IOCs are always false positives
Answer
Correct Answer: B) TTPs describe attacker behavior patterns that are harder to change than infrastructure
Explanation:
IOCs (Indicators of Compromise):
- What: Specific artifacts (IP, hash, domain)
- Lifespan: Hours to days (attackers change infrastructure frequently)
- Detection: Signature-based (exact match)
- Evasion: Trivial (register new domain, change IP)

TTPs (Tactics, Techniques, Procedures):
- What: Behavior patterns (how attackers operate)
- Lifespan: Months to years (changing tradecraft requires retooling)
- Detection: Behavioral (patterns)
- Evasion: Difficult (requires new tools/training)

Example:
- IOC: Malware hash abc123 (attacker changes hash in minutes)
- TTP: T1003.001 LSASS Memory Dump (requires fundamentally different credential theft method to evade)
Detection Strategy: Use IOCs for immediate blocking, TTPs for durable detections
Reference: Chapter 6, Section 6.3 - IOCs vs TTPs
Question 6: What does a confidence score of 30/100 on a threat intelligence indicator suggest?
- A) High confidence - immediately block
- B) Low confidence - use for monitoring/alerting only, not automatic blocking
- C) The indicator is 30 days old
- D) 30% of systems are affected
Answer
Correct Answer: B) Low confidence - use for monitoring/alerting only, not automatic blocking
Explanation:
Confidence Scoring (0-100):
- 90-100: High confidence - verified by multiple sources, confirmed malicious
- 70-89: Medium-high - likely malicious, single trusted source
- 50-69: Medium - possible threat, requires validation
- 30-49: Low - unverified, use for awareness only
- 0-29: Very low - rumor, uncorroborated

Confidence Score 30/100:
- Meaning: Unconfirmed, possibly a false positive
- Use Case: Log for correlation, do NOT auto-block
- Action: Monitor for additional context, escalate if behavior confirms the threat

Example:
- IP 203.0.113.45 flagged by a single OSINT blog (confidence: 30)
- Action: Create a SIEM alert (do not block at the firewall)
- Rationale: Could be a legitimate CDN/cloud IP misidentified as malicious

Tuning: Adjust confidence thresholds based on your risk appetite and your tolerance for false positives
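The tiers above can be sketched as a simple scoring function. The action names and exact thresholds here are illustrative, not a standard; they are the kind of mapping a SOC would tune to its own risk tolerance.

```python
# Sketch mapping confidence tiers to a handling action.
# Thresholds and action names are illustrative, not standardized.
def confidence_action(score):
    if score >= 90:
        return "auto-block"
    if score >= 70:
        return "block-after-review"
    if score >= 50:
        return "alert"
    if score >= 30:
        return "monitor-only"
    return "ignore"

print(confidence_action(30))  # monitor-only
print(confidence_action(95))  # auto-block
```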
Reference: Chapter 6, Section 6.4 - Confidence Scoring
Question 7: A threat feed provides an IP address with last_seen timestamp of 90 days ago. What is the primary concern with using this indicator?
- A) The indicator is too old and likely no longer malicious (stale)
- B) The indicator is at peak freshness
- C) 90 days is the perfect age for threat intelligence
- D) Age doesn't matter for threat intelligence
Answer
Correct Answer: A) The indicator is too old and likely no longer malicious (stale)
Explanation:
Indicator Freshness:
IP/Domain Lifespan:
- Fresh (< 7 days): High likelihood still malicious
- Aging (7-30 days): Moderate likelihood, may be reassigned
- Stale (> 30 days): Low likelihood, infrastructure often rotates
- Very Stale (> 90 days): Very low likelihood, high false positive risk

90-Day-Old IP Concerns:
1. Infrastructure Rotation: Attackers abandon old IPs, register new ones
2. Legitimate Reassignment: Cloud/hosting providers reassign IPs to legitimate customers
3. False Positives: Blocking stale IOCs disrupts legitimate services

Example:
- 90 days ago: 203.0.113.45 hosted Emotet C2
- Today: Same IP reassigned to legitimate SaaS company
- Impact: Auto-block causes business disruption

Best Practice:
- Age out IOCs after 30-60 days (configurable per indicator type)
- Prioritize fresh intelligence
- Validate aged indicators before enforcement
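The aging logic above can be sketched as a small bucketing function. The tier boundaries mirror the lifespan table; in practice they would be configurable per indicator type.

```python
from datetime import datetime, timezone

# Sketch: bucket an indicator by the age of its last_seen timestamp,
# mirroring the freshness tiers above. Thresholds should be configurable.
def freshness(last_seen, now):
    age_days = (now - last_seen).days
    if age_days < 7:
        return "fresh"
    if age_days <= 30:
        return "aging"
    if age_days <= 90:
        return "stale"
    return "very-stale"

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(freshness(datetime(2025, 4, 1, tzinfo=timezone.utc), now))  # stale
```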
Reference: Chapter 6, Section 6.5 - Indicator Freshness
Question 8: What is the primary goal of hypothesis-driven threat hunting?
- A) Randomly search logs for anomalies
- B) Use threat intelligence to form hypotheses about attacker presence and proactively search for evidence
- C) Wait for alerts to fire before investigating
- D) Only hunt after a confirmed breach
Answer
Correct Answer: B) Use threat intelligence to form hypotheses about attacker presence and proactively search for evidence
Explanation:
Hypothesis-Driven Threat Hunting:
Process:
1. Form Hypothesis: Based on threat intel (e.g., "APT41 may be using SQL injection against our web apps")
2. Define Indicators: What evidence would confirm it? (Unusual SQL queries, web shell deployment)
3. Hunt: Proactively search logs for indicators
4. Analyze: Determine whether the hypothesis is confirmed or refuted
5. Document: Record findings, improve detections
Example Hunt:
Hypothesis: "Attackers may use Living-off-the-Land binaries (LOLBins) for persistence"
Hunt Query:
index=endpoint process_name IN ("certutil.exe", "bitsadmin.exe", "mshta.exe")
| search NOT [| inputlookup whitelist_processes | fields process_name]
| stats count by process_name, command_line, user, host
Result: Found certutil.exe downloading suspicious DLL → Escalate
Benefits: - Proactive (find threats before alerts fire) - Reduces dwell time - Improves detection coverage
Reference: Chapter 6, Section 6.6 - Threat Hunting
Question 9: Your SIEM receives a threat feed with 50,000 IOCs. After correlation, only 12 alerts fire. What does this indicate?
- A) The threat feed is useless and should be disabled
- B) This is normal - most IOCs won't match your environment, focus on the 12 hits
- C) All 50,000 IOCs are false positives
- D) Your SIEM is broken
Answer
Correct Answer: B) This is normal - most IOCs won't match your environment, focus on the 12 hits
Explanation:
Threat Feed Reality:
- Broad Coverage: Feeds contain IOCs for diverse threats across many organizations
- Low Hit Rate: Only a small fraction will match your specific environment
- Value in Negatives: Confirming the absence of known threats is valuable

Example:
- Feed: 50,000 IOCs (ransomware, APTs, commodity malware)
- Your org: 12 matches (2 Emotet IPs, 10 phishing domains)
- Action: Investigate the 12 matches, dismiss the rest

Feed Tuning:
1. Filter by Relevance: Select feeds aligned with your threat model (e.g., a healthcare org prioritizes healthcare-targeted threats)
2. Confidence Thresholds: Only ingest high-confidence IOCs
3. Freshness: Auto-expire stale indicators
4. Deduplication: Merge overlapping feeds

False Positive Prevention:
- Don't auto-block all 50,000 IOCs (high FP risk)
- Use the feed for correlation/alerting, require confirmation before enforcement
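The tuning steps above can be sketched as a single filtering pass. The IOC field names (`confidence`, `age_days`) are assumptions for illustration, not a standard feed schema.

```python
# Sketch of feed tuning: deduplicate overlapping feeds, then keep only
# fresh, high-confidence IOCs for alerting. Field names are assumed.
def tune_feed(iocs, min_confidence=70, max_age_days=30):
    seen = set()
    kept = []
    for ioc in iocs:
        key = (ioc["type"], ioc["value"])
        if key in seen:
            continue  # deduplicate across feeds
        seen.add(key)
        if ioc["confidence"] >= min_confidence and ioc["age_days"] <= max_age_days:
            kept.append(ioc)
    return kept

feed = [
    {"type": "ipv4", "value": "203.0.113.45", "confidence": 85, "age_days": 3},
    {"type": "ipv4", "value": "203.0.113.45", "confidence": 60, "age_days": 3},   # duplicate
    {"type": "domain", "value": "evil-c2.example.com", "confidence": 40, "age_days": 2},  # low confidence
    {"type": "ipv4", "value": "198.51.100.7", "confidence": 95, "age_days": 120}, # stale
]
print(len(tune_feed(feed)))  # 1
```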
Question 10: What is a 'false positive indicator' in the context of threat intelligence?
- A) An indicator that correctly identifies a threat
- B) An indicator incorrectly flagged as malicious when it is actually benign
- C) An indicator that is too old to be useful
- D) An indicator with low confidence score
Answer
Correct Answer: B) An indicator incorrectly flagged as malicious when it is actually benign
Explanation:
False Positive Indicator:
- Definition: Benign artifact misclassified as malicious
- Impact: Wasted analyst time, potential service disruption if auto-blocked

Common Causes:
1. Shared Infrastructure: Legitimate services on same IP/domain as malware (CDN, cloud hosting)
2. Domain Squatting: Typo domains flagged but actually parked/unused
3. Stale Intelligence: Previously malicious IP reassigned to legitimate org
4. Low-Quality Feeds: Unvetted OSINT sources with poor validation

Example:
- Feed flags cdn.example.com as a malware C2
- Reality: Legitimate CDN also used by attackers for hosting
- Result: Blocking disrupts legitimate app functionality

Mitigation:
- Validate: Cross-reference against multiple feeds
- Whitelist: Maintain an allowlist of known-good infrastructure
- Confidence Thresholds: Only enforce high-confidence indicators
- Feedback Loop: Report false positives to feed providers
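Two of the mitigations above (allowlisting and confidence thresholds) combine naturally into one enforcement gate. The allowlist entries and threshold below are illustrative.

```python
# Sketch: gate enforcement behind an allowlist check and a confidence
# threshold. Allowlist contents and the threshold are illustrative.
ALLOWLIST = {"cdn.example.com", "updates.example.org"}

def should_enforce(indicator, confidence, threshold=90):
    """Only enforce high-confidence indicators that are not known-good."""
    if indicator in ALLOWLIST:
        return False
    return confidence >= threshold

print(should_enforce("cdn.example.com", 95))      # False (allowlisted)
print(should_enforce("evil-c2.example.com", 95))  # True
```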
Reference: Chapter 6, Section 6.8 - False Positive Indicators
Question 11: In the threat intelligence lifecycle, what phase involves determining intelligence requirements and priority intelligence requirements (PIRs)?
- A) Collection
- B) Analysis
- C) Direction
- D) Dissemination
Answer
Correct Answer: C) Direction
Explanation:
Threat Intelligence Lifecycle:
1. Direction (Planning):
   - Goal: Define intelligence requirements
   - Activities: Identify PIRs, stakeholder needs, threat model
   - Example PIR: "What ransomware groups target healthcare in North America?"

2. Collection:
   - Goal: Gather raw data from sources
   - Activities: OSINT, ISACs, commercial feeds, internal telemetry
   - Sources: Threat feeds, dark web, security blogs, MISP

3. Processing:
   - Goal: Normalize and structure data
   - Activities: Parse STIX, deduplicate, enrich with context

4. Analysis:
   - Goal: Derive insights from processed data
   - Activities: Correlate IOCs, identify TTPs, assess relevance

5. Dissemination:
   - Goal: Deliver intelligence to stakeholders
   - Activities: Reports, briefings, automated feed integration

6. Feedback:
   - Goal: Refine requirements based on effectiveness
   - Activities: Measure hit rates, gather analyst feedback, adjust PIRs
Question 12: A threat intel report states: 'APT41 is using T1059.001 (PowerShell) and T1003.001 (LSASS dumping) in recent campaigns.' How should a SOC use this information?
- A) Ignore it since it's not specific IOCs
- B) Review existing detection coverage for these techniques and create/tune rules if gaps exist
- C) Immediately disable PowerShell on all systems
- D) Only use this for executive reporting
Answer
Correct Answer: B) Review existing detection coverage for these techniques and create/tune rules if gaps exist
Explanation:
TTP-Based Intelligence Operationalization:
Step 1: Map to ATT&CK:
- T1059.001: Command and Scripting Interpreter - PowerShell
- T1003.001: OS Credential Dumping - LSASS Memory

Step 2: Assess Coverage:
- Do we have detections for suspicious PowerShell? (encoded commands, ScriptBlockLogging)
- Do we detect LSASS access? (EDR, Sysmon Event ID 10)
Step 3: Implement/Tune:
# Detection: LSASS Memory Access
index=sysmon EventCode=10 TargetImage="*\\lsass.exe"
| search NOT [| inputlookup whitelist_processes | fields SourceImage]
| stats count by SourceImage, SourceUser, Computer
Step 4: Hunt:
- Proactively search for historical evidence of these techniques
- Look for gaps detection didn't catch

Step 5: Test:
- Purple team exercise: Can we detect these TTPs if executed?

Why Not Disable PowerShell?
- Breaks legitimate IT operations (too disruptive)
- Better: Monitor and detect suspicious usage patterns
Reference: Chapter 6, Section 6.10 - Operationalizing TTP Intelligence
Question 13: What is the advantage of using a Threat Intelligence Platform (TIP) compared to manually managing threat feeds?
- A) TIPs eliminate all false positives automatically
- B) TIPs aggregate, normalize, deduplicate, and enrich intelligence from multiple sources with automated distribution
- C) TIPs are always free and require no maintenance
- D) TIPs replace the need for SOC analysts
Answer
Correct Answer: B) TIPs aggregate, normalize, deduplicate, and enrich intelligence from multiple sources with automated distribution
Explanation:
Threat Intelligence Platform (TIP) Capabilities:
1. Aggregation:
   - Ingest from 50+ feeds (commercial, OSINT, ISACs, internal)
   - Centralized repository

2. Normalization:
   - Convert diverse formats (CSV, STIX, JSON) to a unified schema
   - Standardize confidence scoring

3. Deduplication:
   - Remove overlapping IOCs from multiple feeds
   - Reduce alert fatigue

4. Enrichment:
   - Add context: WHOIS, geolocation, threat actor attribution
   - Correlate IOCs to campaigns

5. Automated Distribution:
   - Push to SIEM, firewall, EDR via APIs
   - Real-time updates

6. Workflow:
   - Track investigation status, analyst notes
   - False positive feedback loop
Example Workflow:
[50 Threat Feeds] → [TIP] → [Aggregates, deduplicates, scores] → [Auto-push high-confidence IOCs to SIEM/Firewall]
Popular TIPs: MISP, ThreatConnect, Anomali, ThreatQuotient
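The normalization capability can be sketched as converting two feed formats (a CSV row and a STIX-style JSON indicator) into one unified schema. The unified field names here are assumptions for illustration, not any TIP's actual schema.

```python
import csv
import io

# Sketch of TIP normalization: map heterogeneous feed records into one schema.
def from_csv_row(row):
    return {"value": row["indicator"], "type": row["type"], "confidence": int(row["confidence"])}

def from_stix(obj):
    # Simplified: assumes a single-comparison pattern like "[ipv4-addr:value = '...']"
    value = obj["pattern"].split("'")[1]
    return {"value": value, "type": "ipv4", "confidence": obj.get("confidence", 50)}

csv_feed = io.StringIO("indicator,type,confidence\n198.51.100.7,ipv4,80\n")
stix_obj = {"type": "indicator", "pattern": "[ipv4-addr:value = '203.0.113.45']", "confidence": 85}

unified = [from_csv_row(r) for r in csv.DictReader(csv_feed)] + [from_stix(stix_obj)]
print([i["value"] for i in unified])  # ['198.51.100.7', '203.0.113.45']
```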
Reference: Chapter 6, Section 6.11 - Threat Intelligence Platforms
Question 14: You receive threat intel that 'APT41 is targeting SQL servers with CVE-2024-12345 exploitation.' Your organization has 200 SQL servers. What is the FIRST action?
- A) Immediately shut down all SQL servers
- B) Assess if CVE-2024-12345 affects your SQL versions and if patches are available
- C) Wait for an exploit attempt before taking action
- D) Ignore the intelligence and continue normal operations
Answer
Correct Answer: B) Assess if CVE-2024-12345 affects your SQL versions and if patches are available
Explanation:
Threat Intelligence Operationalization Workflow:
Step 1: Validate Relevance:
- Does CVE-2024-12345 affect our SQL Server versions?
- Query asset inventory: "SELECT version FROM sql_servers"

Step 2: Assess Exposure:
- Are vulnerable servers internet-facing?
- Are they behind compensating controls (WAF, network segmentation)?

Step 3: Prioritize:
- Critical servers first (production, customer-facing)
- Risk score: Exploitability × Impact × Exposure

Step 4: Remediate:
- If a patch is available: Deploy to test → production
- If no patch: Apply workarounds (disable feature, segment network, deploy virtual patch)

Step 5: Detect:
- Create a detection rule for CVE-2024-12345 exploitation attempts
- Hunt for historical exploitation evidence

Step 6: Monitor:
- Track patching status
- Alert on exploitation attempts

Why Not Shut Down?
- Too disruptive without validation
- Your versions may not even be affected
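The Step 3 formula (Exploitability × Impact × Exposure) can be sketched as a simple ranking. The 1-5 scales and server entries below are illustrative, not a standard scoring model.

```python
# Sketch of the prioritization formula: Exploitability × Impact × Exposure,
# each scored 1-5 (illustrative scale).
def risk_score(exploitability, impact, exposure):
    return exploitability * impact * exposure

servers = [
    {"name": "sql-prod-01", "exploitability": 5, "impact": 5, "exposure": 4},  # internet-facing prod
    {"name": "sql-dev-07",  "exploitability": 5, "impact": 2, "exposure": 1},  # segmented dev box
]
ranked = sorted(
    servers,
    key=lambda s: risk_score(s["exploitability"], s["impact"], s["exposure"]),
    reverse=True,
)
print(ranked[0]["name"])  # sql-prod-01
```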
Reference: Chapter 6, Practice Task - Operationalizing Intel
Question 15: What is the primary risk of consuming unvetted OSINT threat intelligence feeds?
- A) OSINT is always malicious
- B) High false positive rate due to unverified indicators and potential for disinformation
- C) OSINT is too expensive
- D) OSINT cannot be parsed by SIEMs
Answer
Correct Answer: B) High false positive rate due to unverified indicators and potential for disinformation
Explanation:
OSINT (Open Source Intelligence) Risks:
1. Unverified Indicators:
   - Sources: Twitter, blogs, paste sites
   - Quality: Varies widely (no validation standards)
   - Risk: Benign IPs/domains misidentified as malicious

2. Disinformation:
   - Attackers can poison feeds (post fake IOCs to misdirect defenders)
   - Example: Attacker posts IP of a competitor as "malware C2" → defenders block a legitimate business partner

3. Context Gaps:
   - OSINT often lacks context (confidence, TTPs, actor attribution)
   - Result: Difficult to prioritize

4. Volume Overload:
   - Millions of unfiltered IOCs
   - Impact: Alert fatigue, wasted triage time

Best Practices:
1. Multi-Source Validation: Require 2+ independent sources before enforcement
2. Reputation Scoring: Weight feeds by historical accuracy
3. Human Review: Don't auto-block OSINT IOCs without analyst validation
4. Combine with Commercial: Blend OSINT (volume) with commercial feeds (quality)
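The multi-source validation practice can be sketched as a sighting counter that only releases an indicator for enforcement once two or more independent feeds report it. The feed names below are hypothetical.

```python
from collections import defaultdict

# Sketch of multi-source validation: enforce an OSINT indicator only after
# 2+ independent feeds have reported it. Feed names are made up.
def validated_indicators(sightings, min_sources=2):
    sources = defaultdict(set)
    for feed, indicator in sightings:
        sources[indicator].add(feed)
    return {i for i, feeds in sources.items() if len(feeds) >= min_sources}

sightings = [
    ("osint-blog-a", "203.0.113.45"),
    ("osint-twitter", "203.0.113.45"),
    ("osint-blog-a", "198.51.100.7"),  # single source: not enforced
]
print(validated_indicators(sightings))  # {'203.0.113.45'}
```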
Reference: Chapter 6, Common Pitfalls
Score Interpretation
- 13-15 correct: Excellent! You have strong threat intelligence fundamentals and can operationalize intel effectively.
- 10-12 correct: Good understanding. Review confidence scoring and indicator freshness concepts.
- 7-9 correct: Adequate baseline. Focus on STIX/TAXII protocols and IOCs vs TTPs.
- Below 7: Review Chapter 6 thoroughly, especially intelligence lifecycle and feed management.