
Chapter 2: Telemetry & Log Sources - Quiz



Instructions

Test your knowledge of telemetry sources, log normalization, schema standards, enrichment, and retention strategies. Each question includes detailed explanations.


Question 1: Which telemetry source would be MOST useful for detecting lateral movement within a network?

A) Cloud audit logs (AWS CloudTrail)
B) Internal firewall logs and Windows authentication logs
C) Email gateway logs
D) Public DNS resolver logs

Answer

Correct Answer: B) Internal firewall logs and Windows authentication logs

Explanation: Lateral movement involves an attacker moving from one system to another within the internal network. Key indicators include:

- Internal firewall logs showing unusual SMB/RDP connections between workstations
- Windows Event ID 4624 (successful logins) with Logon Type 3 (network) or 10 (RDP)
- Unusual authentication patterns (workstation-to-workstation connections, which are rare in normal operations)

Cloud logs track cloud resource access, email logs track email activity, and public DNS logs don't provide internal network visibility.
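A minimal sketch of such a detection, assuming simplified event fields and a hypothetical "WS-" workstation naming convention (real Windows events carry different field names):

```python
# Hypothetical sketch: flag workstation-to-workstation logons that may indicate
# lateral movement. Field names and the "WS-" prefix convention are assumptions.

def is_lateral_movement_candidate(event):
    """True for network (type 3) or RDP (type 10) logons between workstations."""
    if event.get("event_id") != 4624:          # successful logon
        return False
    if event.get("logon_type") not in (3, 10):  # network or RDP
        return False
    # Assume hostnames beginning with "WS-" denote workstations.
    src = event.get("source_host", "")
    dst = event.get("target_host", "")
    return src.startswith("WS-") and dst.startswith("WS-")

events = [
    {"event_id": 4624, "logon_type": 10, "source_host": "WS-042", "target_host": "WS-117"},
    {"event_id": 4624, "logon_type": 2,  "source_host": "WS-042", "target_host": "WS-042"},
]
suspicious = [e for e in events if is_lateral_movement_candidate(e)]
print(len(suspicious))  # 1 — the workstation-to-workstation RDP logon is flagged
```

In practice this filter would feed a baseline check, since some workstation-to-workstation logons (e.g., IT support) are legitimate.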

Reference: Chapter 2, Section 2.1 - Network Telemetry


Question 2: What is the primary purpose of log normalization in a SIEM?

A) To compress logs for storage efficiency
B) To convert diverse log formats to a common schema for consistent querying
C) To encrypt logs in transit
D) To delete unnecessary log fields

Answer

Correct Answer: B) To convert diverse log formats to a common schema for consistent querying

Explanation: Log normalization maps different source formats (Syslog, Windows Event XML, JSON, CEF) to standardized field names. This enables analysts to write queries that work across all log sources.

Example:

- Before normalization: one source uses src_ip, another uses source_address, another uses client_ip
- After normalization (ECS): all sources use source.ip
- Benefit: the single query source.ip="203.0.113.45" searches across all sources
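As a sketch, the core of this mapping can be as simple as a field-rename table; the vendor field names are the examples above, and the mapping itself is illustrative, not a full ECS implementation:

```python
# Illustrative normalization: rename vendor-specific fields to ECS names so one
# query works across sources. Only a handful of mappings are shown.

FIELD_MAP = {
    "src_ip": "source.ip",
    "source_address": "source.ip",
    "client_ip": "source.ip",
}

def normalize(raw_event):
    """Return a copy of the event with known vendor fields renamed to ECS."""
    return {FIELD_MAP.get(k, k): v for k, v in raw_event.items()}

a = normalize({"src_ip": "203.0.113.45", "action": "deny"})
b = normalize({"client_ip": "203.0.113.45", "status": 403})
# Both events now match the single ECS query source.ip="203.0.113.45"
print(a["source.ip"] == b["source.ip"])  # True
```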

Reference: Chapter 2, Section 2.2 - Log Normalization


Question 3: Which Sysmon Event ID tracks process creation with detailed command-line arguments?

A) Event ID 3
B) Event ID 1
C) Event ID 7
D) Event ID 4688

Answer

Correct Answer: B) Event ID 1

Explanation:

- Sysmon Event ID 1: Process creation (detailed)
- Sysmon Event ID 3: Network connection
- Sysmon Event ID 7: Image (DLL) loaded
- Windows Event ID 4688: Process creation (native Windows, less detailed than Sysmon unless command-line auditing is enabled)

Sysmon Event ID 1 provides rich details including full command line, parent process, hashes, and user context.

Reference: Chapter 2, Section 2.1 - Endpoint Telemetry


Question 4: An analyst is investigating potential DNS tunneling. Which log source would provide the BEST visibility into this attack technique?

A) Firewall permit/deny logs
B) DNS query logs showing high-frequency, high-entropy subdomains
C) VPN authentication logs
D) Email gateway logs

Answer

Correct Answer: B) DNS query logs showing high-frequency, high-entropy subdomains

Explanation: DNS tunneling is a technique where attackers exfiltrate data by encoding it in DNS queries to a malicious domain they control.

Detection Pattern:

Query: 4f2a3b1c.malicious-domain.com
Query: 8e9d6c5a.malicious-domain.com
Query: 7b3f8e2d.malicious-domain.com

Indicators:

- High frequency of DNS queries to the same parent domain
- High entropy (random-looking) subdomains
- Unusual query volumes from a single host

Firewall logs show connections but not DNS details; VPN and email logs are unrelated to DNS tunneling.
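The "high entropy" indicator can be quantified with Shannon entropy over the subdomain label; a minimal sketch (any alert threshold would need tuning against your own DNS traffic):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy of the string s, in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def subdomain(fqdn):
    """Leftmost DNS label, e.g. '4f2a3b1c' from '4f2a3b1c.malicious-domain.com'."""
    return fqdn.split(".")[0]

# Tunneling subdomains look random; benign labels like "www" repeat characters.
for q in ["4f2a3b1c.malicious-domain.com", "www.example.com"]:
    print(q, round(shannon_entropy(subdomain(q)), 2))
```

Entropy alone is noisy (CDN hostnames can also look random), so it is usually combined with the frequency and volume indicators above.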

Reference: Chapter 2, Section 2.1 - Network Telemetry


Question 5: What is log enrichment, and why is it valuable for SOC operations?

A) Encrypting logs to protect sensitive data
B) Adding contextual information (threat intel, asset data, user behavior) to raw events to aid decision-making
C) Compressing logs to save storage space
D) Deleting old logs to free up disk space

Answer

Correct Answer: B) Adding contextual information (threat intel, asset data, user behavior) to raw events to aid decision-making

Explanation: Enrichment transforms raw log events into actionable intelligence by adding context:

Example:

Raw Alert: Failed login from 203.0.113.45 to admin account

Enriched:
- IP 203.0.113.45: TOR exit node (threat intel)
- Account "admin": Service account, should only auth from 10.0.1.5
- Historical: Zero failed logins for this account in past 90 days
- Asset: Target system is domain controller (critical asset)

Conclusion: HIGH-priority alert

Enrichment Sources:

- Threat intelligence (IP/domain/hash reputation)
- Asset inventory (criticality, owner, location)
- User context (department, typical behavior)
- Historical behavior baselines
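A hedged sketch of this join, using hypothetical in-memory threat-intel and asset lookup tables (real pipelines query external feeds and a CMDB):

```python
# Hypothetical enrichment: annotate a raw alert with threat intel and asset
# context, then escalate priority. Lookup data and logic are illustrative.

THREAT_INTEL = {"203.0.113.45": "TOR exit node"}
ASSETS = {"dc01": {"role": "domain controller", "criticality": "critical"}}

def enrich(alert):
    enriched = dict(alert)
    enriched["ti_match"] = THREAT_INTEL.get(alert["source_ip"])
    enriched["asset"] = ASSETS.get(alert["target_host"], {})
    # Escalate when threat intel hits AND the target is a critical asset.
    if enriched["ti_match"] and enriched["asset"].get("criticality") == "critical":
        enriched["priority"] = "HIGH"
    else:
        enriched["priority"] = "MEDIUM"
    return enriched

alert = {"event": "login_failed", "source_ip": "203.0.113.45", "target_host": "dc01"}
print(enrich(alert)["priority"])  # HIGH
```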

Reference: Chapter 2, Section 2.3 - Log Enrichment


Question 6: Which common log schema is associated with Elastic Stack?

A) CIM (Common Information Model)
B) CEF (Common Event Format)
C) ECS (Elastic Common Schema)
D) Syslog

Answer

Correct Answer: C) ECS (Elastic Common Schema)

Explanation:

- ECS (Elastic Common Schema): Elastic Stack's standard schema with fields like @timestamp, event.category, source.ip, user.name
- CIM (Common Information Model): Splunk's normalization framework
- CEF (Common Event Format): ArcSight's format
- Syslog: Universal transport protocol, not a schema

Each SIEM platform has its preferred schema for normalizing diverse log sources.

Reference: Chapter 2, Section 2.2 - Common Schema Standards


Question 7: According to PCI-DSS requirements, how long must critical security logs (like authentication logs) be readily accessible?

A) 7 days
B) 30 days
C) 3 months
D) 1 year

Answer

Correct Answer: C) 3 months

Explanation: PCI-DSS (Payment Card Industry Data Security Standard) requires:

- 3 months: Readily available (hot/warm storage, fast search)
- 1 year: Total retention (includes 9 months in archive/cold storage)

Other Framework Requirements:

- HIPAA: 6 years for audit logs
- GDPR: "No longer than necessary" + data minimization
- SOX: 7 years for financial system logs

Common SOC Practice:

- Hot storage (SIEM): 90 days
- Cold storage (archive): 2-7 years based on compliance needs

Reference: Chapter 2, Section 2.4 - Compliance Frameworks


Question 8: An organization experiences a breach where attackers compromised a workstation 14 days ago. However, the firewall logs are only retained for 7 days. What is the CONSEQUENCE?

A) The breach can still be fully investigated using email logs alone
B) Critical network activity evidence is lost, limiting investigation scope
C) The SIEM can reconstruct missing logs automatically
D) No impact, as 7 days is sufficient for all investigations

Answer

Correct Answer: B) Critical network activity evidence is lost, limiting investigation scope

Explanation: Insufficient log retention is a common gap in security programs. In this scenario:

Problem:

- Breach occurred 14 days ago
- Firewall logs show outbound connections, C2 communication, lateral movement
- Only 7 days of logs available → the last 7 days of the attack are visible, the first 7 days are lost

Impact:

- Cannot determine the initial access vector
- Cannot identify the full scope of lateral movement
- Incomplete timeline for incident response
- Potential compliance violations (PCI-DSS requires 90 days hot + 1 year archive)

Lesson: As the "Invisible Breach" curiosity hook illustrates, you can't detect what you can't see. Proper retention is critical.

Reference: Chapter 2, Curiosity Hook - The Invisible Breach


Question 9: Which of the following is a PRIMARY advantage of a data lake approach over traditional SIEM for security log storage?

A) Faster real-time alerting
B) Lower storage costs for long-term retention of raw logs
C) Better out-of-the-box correlation rules
D) Simpler user interface for analysts

Answer

Correct Answer: B) Lower storage costs for long-term retention of raw logs

Explanation:

Data Lake Advantages:

- Lower cost: Store raw logs in object storage (S3, Azure Blob) at ~$0.02/GB/month vs. SIEM indexed storage at $100+/GB/month
- Flexibility: Retain data in native formats, query with tools like Athena or Databricks
- ML-friendly: Direct access to raw data for model training

Data Lake Disadvantages:

- Slower query performance (file scans vs. indexed search)
- No native real-time alerting (requires additional stream processing)
- More complex setup

Best Practice: Hybrid architecture - SIEM for hot data (30-90 days, real-time alerts), data lake for cold data (1-7 years, investigations, ML training)
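A back-of-the-envelope comparison using the per-GB figures quoted above; the 50 GB/day volume is an assumption, and real vendor pricing varies widely by tier:

```python
# Illustrative cost comparison: hot SIEM storage for 90 days vs. a data lake
# holding 7 years of raw logs. Volume and unit prices are assumptions.

daily_gb = 50                       # assumed daily log volume
siem_days, lake_days = 90, 365 * 7  # hot window vs. long-term retention

siem_monthly = daily_gb * siem_days * 100.0  # indexed SIEM at $100/GB/month
lake_monthly = daily_gb * lake_days * 0.02   # object storage at $0.02/GB/month

print(f"SIEM hot (90 days): ${siem_monthly:,.0f}/month")
print(f"Data lake (7 years): ${lake_monthly:,.0f}/month")
```

The gap is why the hybrid architecture below is common: even holding roughly 28x more data, the lake tier costs orders of magnitude less per month.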

Reference: Chapter 2, Section 2.5 - Data Lake Approach


Question 10: Given this attack chain, identify which log sources would detect EACH step:

1. Phishing email delivered
2. Malware executes
3. C2 connection established
4. Credential dumping
5. Lateral movement via RDP
6. Data exfiltrated via HTTPS

A) Email gateway, EDR, DNS logs, Windows logs, NetFlow, proxy logs
B) Only EDR for all steps
C) Only SIEM for all steps
D) No logs needed, prevention is sufficient

Answer

Correct Answer: A) Email gateway, EDR, DNS logs, Windows logs, NetFlow, proxy logs

Explanation: Defense in depth requires multiple telemetry sources:

1. Phishing email: Email gateway logs, email security tools
2. Malware executes: EDR (behavioral detection), Windows Event ID 4688 (process creation), Sysmon Event ID 1
3. C2 connection: Firewall logs, DNS logs, NetFlow, EDR network monitoring
4. Credential dumping: EDR (LSASS access detection), Windows Security logs (unusual process behavior), Sysmon
5. Lateral movement (RDP): Windows Event ID 4624 (Logon Type 10), internal firewall logs, NetFlow
6. Data exfiltration (HTTPS): Proxy logs (large uploads), firewall logs (unusual volumes), cloud access logs

Lesson: No single log source detects the entire attack chain. Comprehensive telemetry coverage is essential.

Reference: Chapter 2, Practice Tasks - Task 1

Question 11: What is the purpose of log forwarders in a centralized SIEM architecture?

A) To generate alerts based on correlation rules
B) To collect logs from endpoints/servers and send them to the SIEM
C) To visualize dashboards for analysts
D) To automatically respond to incidents

Answer

Correct Answer: B) To collect logs from endpoints/servers and send them to the SIEM

Explanation: Log forwarders (also called agents or collectors) are lightweight software installed on systems to:

- Collect logs locally (Windows Event Logs, Syslog, application logs)
- Forward logs to the centralized SIEM via secure protocols
- Provide buffering if network connectivity is interrupted

Examples:

- Splunk Universal Forwarder: Collects and forwards to Splunk
- Elastic Beats: Lightweight shippers for Elasticsearch
- Fluentd/Fluent Bit: Open-source log collectors

Architecture:

[Endpoints/Servers] → [Log Forwarders] → [SIEM Indexers] → [Search Layer]

Correlation, alerting, and visualization happen in the SIEM, not the forwarders.
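The collect-buffer-forward loop can be sketched as a toy class; `send_to_siem` stands in for a real transport (TLS syslog, HTTPS), and the batching parameters are illustrative:

```python
from collections import deque

# Toy sketch of a forwarder's core loop: buffer events locally, flush in
# batches, and keep buffering if a send fails so events are not lost.

class Forwarder:
    def __init__(self, send_to_siem, batch_size=100, max_buffer=10_000):
        self.send = send_to_siem
        self.batch_size = batch_size
        self.buffer = deque(maxlen=max_buffer)  # oldest events drop on overflow

    def collect(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        batch = list(self.buffer)
        try:
            self.send(batch)     # network send; may raise on outage
            self.buffer.clear()
        except OSError:
            pass                 # keep buffering; retry on the next flush

received = []
fwd = Forwarder(received.extend, batch_size=2)
fwd.collect({"msg": "login ok"})
fwd.collect({"msg": "login failed"})
print(len(received))  # 2 — the batch flushed once the threshold was reached
```

Real forwarders add disk-backed queues, backoff, and TLS, but the buffering behavior shown here is the property that matters during network interruptions.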

Reference: Chapter 2, Section 2.5 - Centralized SIEM


Question 12: An analyst notices a normalized log entry showing user.name: jsmith, source.ip: 10.0.5.22, event.action: login_failed. Which schema standard uses fields like user.name and source.ip?

A) Syslog
B) CEF (Common Event Format)
C) ECS (Elastic Common Schema)
D) Raw Windows Event XML

Answer

Correct Answer: C) ECS (Elastic Common Schema)

Explanation: The field naming convention user.name, source.ip, event.action is characteristic of Elastic Common Schema (ECS).

Schema Comparison:

- ECS: Dot notation (source.ip, destination.port, event.category)
- CIM (Splunk): Simple fields (src, dest, user, action)
- CEF: Key-value pairs with specific headers
- Syslog: Free-form message field with facility/severity

ECS provides hierarchical, structured field names for consistent querying across Elastic Stack deployments.

Reference: Chapter 2, Section 2.2 - Normalization Example


Question 13: Which cloud telemetry source would detect IAM privilege escalation (e.g., a user granting themselves admin rights)?

A) VPC Flow Logs
B) AWS CloudTrail / Azure Activity Logs
C) S3 Access Logs
D) DNS query logs

Answer

Correct Answer: B) AWS CloudTrail / Azure Activity Logs

Explanation: Cloud audit logs capture API calls and resource changes, including IAM modifications.

AWS CloudTrail detects:

- IAM policy changes (e.g., AttachUserPolicy, PutUserPolicy)
- Role assumption (AssumeRole)
- User/group creation
- Permission modifications

Azure Activity Logs detect:

- Role assignments (Microsoft.Authorization/roleAssignments/write)
- User privilege changes
- Resource modifications

Detection Example:

Event: iam:AttachUserPolicy
User: low-privileged-user
Policy: AdministratorAccess
Alert: Privilege escalation detected

Reference: Chapter 2, Section 2.1 - Cloud Telemetry
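The detection example above can be sketched as a filter over simplified CloudTrail-style events; the event shape is abbreviated from real CloudTrail JSON, and the high-privilege policy list is an assumption:

```python
# Sketch: flag CloudTrail AttachUserPolicy events where a principal grants an
# admin-level policy to themselves. Event structure is simplified.

HIGH_PRIV = {"AdministratorAccess", "IAMFullAccess"}

def is_priv_escalation(event):
    if event.get("eventName") != "AttachUserPolicy":
        return False
    params = event.get("requestParameters", {})
    policy = params.get("policyArn", "")
    target = params.get("userName")
    actor = event.get("userIdentity", {}).get("userName")
    # Self-grant of an admin-level policy is the classic escalation pattern.
    return policy.rsplit("/", 1)[-1] in HIGH_PRIV and actor == target

evt = {
    "eventName": "AttachUserPolicy",
    "userIdentity": {"userName": "low-privileged-user"},
    "requestParameters": {
        "userName": "low-privileged-user",
        "policyArn": "arn:aws:iam::aws:policy/AdministratorAccess",
    },
}
print(is_priv_escalation(evt))  # True
```

A production rule would also watch PutUserPolicy, AttachRolePolicy, and grants performed by one principal on behalf of another.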

Question 14: Why is collecting endpoint command-line arguments (e.g., PowerShell command lines) critical for SOC detections?

A) It reduces storage costs
B) Command-line arguments reveal attacker techniques (encoded payloads, download cradles, credential dumping)
C) It eliminates all false positives
D) Command-line logging is required by all compliance frameworks

Answer

Correct Answer: B) Command-line arguments reveal attacker techniques (encoded payloads, download cradles, credential dumping)

Explanation: Process creation logs without command-line arguments only show:

- powershell.exe executed by user123

With command-line arguments:

powershell.exe -enc JABjAGwAaQBlAG4AdAAgAD0AIABOAGUAdwAtAE8AYgBqAGU...

Detection Opportunities:

- Encoded commands: -enc, -encodedcommand (common obfuscation)
- Download cradles: Invoke-WebRequest, DownloadString, IEX (New-Object ...)
- Credential access: sekurlsa::logonpasswords, lsadump::sam
- Lateral movement: PsExec \\remote-host

Enablement:

- Windows: Enable command-line auditing in Group Policy
- Sysmon: Provides this by default (Event ID 1)
- EDR: Most EDR platforms capture full command lines
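These indicators translate directly into a screening rule; the regex patterns below are illustrative starting points, not a production ruleset:

```python
import re

# Sketch: screen command lines for the suspicious patterns listed above.
# Patterns are deliberately narrow examples; real rulesets are far larger.

SUSPICIOUS = [
    re.compile(r"-e(nc(odedcommand)?)?\s", re.IGNORECASE),   # encoded commands
    re.compile(r"DownloadString|Invoke-WebRequest|IEX\s*\(", re.IGNORECASE),
    re.compile(r"sekurlsa::|lsadump::", re.IGNORECASE),      # Mimikatz modules
]

def flag_command_line(cmdline):
    """True if any suspicious pattern appears in the command line."""
    return any(p.search(cmdline) for p in SUSPICIOUS)

print(flag_command_line("powershell.exe -enc JABjAGwAaQBlAG4AdAAg"))  # True
print(flag_command_line("powershell.exe Get-ChildItem C:\\Temp"))     # False
```

Without captured command-line arguments, none of these patterns have anything to match against, which is the point of the question.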

Reference: Chapter 2, Section 2.1 - Endpoint Telemetry


Question 15: An organization must balance investigation needs, storage costs, and compliance. Which retention strategy is MOST appropriate for critical authentication logs?

A) 7 days hot storage, no archival
B) 90 days hot storage, 2-7 years cold storage
C) Infinite retention in hot storage
D) 24 hours hot storage, immediate deletion

Answer

Correct Answer: B) 90 days hot storage, 2-7 years cold storage

Explanation: This balances all three concerns:

Investigation Needs:

- Most incidents are detected within 90 days (many within 7-30 days)
- Hot storage enables fast SIEM queries for active investigations

Compliance:

- PCI-DSS: 90 days hot + 1 year archive
- HIPAA: 6 years retention
- SOX: 7 years for financial systems
- GDPR: retain only as long as necessary (2-7 years defensible for security logs)

Cost Optimization:

- Hot storage (indexed SIEM): expensive ($100+/GB/month) → limit to 90 days
- Cold storage (S3/Glacier): cheap ($0.02/GB/month) → retain 2-7 years

Common Retention Matrix:

| Log Type | Hot (SIEM) | Cold (Archive) |
|----------|------------|----------------|
| Critical (auth, EDR) | 90 days | 2-7 years |
| Network flows | 30 days | 1 year |
| Proxy logs | 30 days | 1 year |
| Application logs | 14 days | 6 months |

Reference: Chapter 2, Section 2.4 - Data Retention & Compliance


Score Interpretation

  • 13-15 correct: Excellent! You understand telemetry sources, normalization, and retention strategies.
  • 10-12 correct: Good grasp of log sources. Review schema standards and enrichment concepts.
  • 7-9 correct: Adequate understanding. Revisit telemetry categories and compliance requirements.
  • Below 7: Review Chapter 2 thoroughly, focusing on log sources and normalization.

← Back to Chapter 2 | Next Quiz: Chapter 3 →