Chapter 57 Quiz: Cloud Forensics & Investigation

Test your knowledge of cloud forensics fundamentals, shared responsibility models for evidence collection, AWS/Azure/GCP investigation techniques, container and serverless forensics, cloud-native detection queries, memory acquisition, chain of custody in cloud environments, and multi-cloud investigation correlation.


Questions

1. In a cloud forensics investigation, which model defines who is responsible for collecting evidence from different layers of the stack?

  • A) The Zero Trust model
  • B) The shared responsibility model -- the cloud provider manages physical infrastructure evidence (hardware logs, hypervisor data) while the customer is responsible for OS-level and application-level evidence collection
  • C) The NIST Cybersecurity Framework exclusively
  • D) The cloud provider is always responsible for all evidence at every layer
Answer

B -- The shared responsibility model

The shared responsibility model is fundamental to cloud forensics. In IaaS, the customer controls the OS and above, so they must collect OS logs, application data, and memory dumps. The provider controls the physical hardware and hypervisor layer. In SaaS, the provider controls nearly everything, leaving the customer with only access logs and configuration data. Understanding this boundary is critical for scoping an investigation. Refer to Chapter 57 Section 57.1.


2. Which AWS service provides a comprehensive audit trail of all API calls made in an AWS account, including the identity of the caller, the time, and the source IP?

  • A) Amazon GuardDuty
  • B) AWS Config
  • C) AWS CloudTrail
  • D) Amazon Inspector
Answer

C -- AWS CloudTrail

CloudTrail records every API call made in an AWS account as an event. Each event includes the caller identity (IAM user, role, or service), timestamp, source IP address, request parameters, and response. Management events are logged by default for 90 days; for long-term retention and data event logging (S3 object access, Lambda invocations), a trail must be configured to deliver logs to an S3 bucket. CloudTrail is the first log source investigators examine in any AWS incident. Refer to Chapter 57 Section 57.2.
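The fields named above (caller identity, timestamp, source IP) map directly onto keys in each CloudTrail JSON record. As a minimal sketch, the snippet below parses a hypothetical sample event (the values are fabricated, but `eventTime`, `eventName`, `sourceIPAddress`, and `userIdentity` are real CloudTrail schema fields) and extracts the who/when/where triple investigators triage first:

```python
import json

# Hypothetical CloudTrail record; field names follow the real CloudTrail
# event schema, the values are illustrative only.
sample_event = json.loads("""
{
  "eventTime": "2024-03-01T12:34:56Z",
  "eventName": "RunInstances",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser",
                   "arn": "arn:aws:iam::123456789012:user/alice"}
}
""")

def summarize(event):
    """Extract the who / when / where triple from one CloudTrail event."""
    return (event["userIdentity"]["arn"],
            event["eventTime"],
            event["sourceIPAddress"])

who, when, where = summarize(sample_event)
print(who, when, where)
```

In a real investigation the same extraction would run over events delivered to the trail's S3 bucket or returned by the CloudTrail LookupEvents API.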


3. During an AWS forensic investigation, what is the recommended method for preserving the state of a compromised EC2 instance's disk?

  • A) SSH into the instance and run dd to copy the disk
  • B) Create an EBS snapshot of the attached volumes -- this captures a point-in-time copy of the disk without modifying the running instance, preserving evidence integrity
  • C) Terminate the instance and rely on CloudWatch metrics
  • D) Download the instance metadata from the IMDS endpoint
Answer

B -- Create an EBS snapshot

EBS snapshots create an immutable point-in-time copy of a volume. This is the gold standard for disk evidence preservation in AWS because it does not alter the running instance (preserving volatile state), it can be shared across accounts for isolated analysis, and it provides a verifiable chain of custody through API audit trails in CloudTrail. The snapshot can then be attached to a clean forensic workstation instance for analysis. Refer to Chapter 57 Section 57.2.


4. Which type of cloud evidence is most at risk of being lost if an investigator does not act quickly after detecting a compromise?

  • A) S3 bucket policies
  • B) IAM role configurations
  • C) Volatile evidence -- running processes, active network connections, memory contents, and ephemeral container state that is destroyed when an instance is stopped or terminated
  • D) CloudTrail logs stored in S3
Answer

C -- Volatile evidence

Volatile evidence exists only in the runtime state of a system. In cloud environments, this risk is amplified because auto-scaling can terminate instances, containers can be destroyed and recreated in seconds, and serverless functions exist only during execution. Investigators must prioritize capturing memory dumps, process listings, network connections, and container state before any containment actions that might destroy these artifacts. Refer to Chapter 57 Section 57.1.


5. In Azure, which log source captures sign-in events, conditional access policy evaluations, and risky user detections for identity-based investigations?

  • A) Azure Activity Log
  • B) NSG Flow Logs
  • C) Azure AD (Entra ID) audit and sign-in logs
  • D) Azure Resource Graph
Answer

C -- Azure AD (Entra ID) audit and sign-in logs

Azure AD sign-in logs record every authentication attempt, including the user, application, location, device, conditional access result, and risk level. Audit logs capture directory changes such as role assignments, group modifications, and application registrations. Together, these are essential for investigating identity-based attacks like credential stuffing, token theft, and privilege escalation through Azure AD. These logs can be streamed to a Log Analytics workspace for KQL-based investigation. Refer to Chapter 57 Section 57.3.


6. A forensic investigator needs to analyze network traffic patterns for an Azure virtual machine that was involved in data exfiltration. Which log source provides flow-level metadata including source/destination IPs, ports, and byte counts?

  • A) Azure Advisor recommendations
  • B) Azure Monitor Metrics
  • C) NSG (Network Security Group) Flow Logs
  • D) Azure Blob Storage access logs
Answer

C -- NSG Flow Logs

NSG Flow Logs capture metadata about IP traffic flowing through a Network Security Group, including source and destination IP, source and destination port, protocol, traffic direction, and whether traffic was allowed or denied. Version 2 flow logs also include byte and packet counts and flow state information. These logs are stored in a storage account and can be analyzed with Traffic Analytics or exported to a SIEM for correlation with other evidence. Refer to Chapter 57 Section 57.3.
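Each v2 flow record packs the fields above into a comma-separated tuple. The sketch below parses one such tuple; the field order follows the published v2 schema as described here, but treat it as an assumption and verify it against your tenant's actual logs before relying on it:

```python
from collections import namedtuple

# Assumed NSG Flow Log v2 tuple layout: epoch timestamp, source/dest IP,
# source/dest port, protocol (T/U), direction (I/O), decision (A/D),
# flow state, then packet/byte counters in each direction.
Flow = namedtuple("Flow", "ts src_ip dst_ip src_port dst_port proto "
                          "direction decision state pkts_out bytes_out "
                          "pkts_in bytes_in")

def parse_flow(tuple_str):
    parts = tuple_str.split(",")
    # Flow-begin records may omit the trailing counters; pad them out.
    parts += [""] * (13 - len(parts))
    return Flow(*parts[:13])

f = parse_flow("1542110377,10.0.0.4,203.0.113.5,44931,443,T,O,A,E,10,1200,8,900")
print(f.dst_ip, f.decision, f.bytes_out)
```

During an exfiltration investigation, summing `bytes_out` per destination IP over these parsed flows quickly surfaces the top egress targets.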


7. In GCP, which service provides a unified view of security findings, vulnerabilities, and threats across an organization's cloud assets?

  • A) Cloud Monitoring
  • B) Security Command Center
  • C) Cloud Logging (formerly Stackdriver)
  • D) Cloud Asset Inventory
Answer

B -- Security Command Center

Security Command Center (SCC) is GCP's centralized security management platform. It aggregates findings from multiple sources including Event Threat Detection (anomalous IAM grants, cryptocurrency mining), Security Health Analytics (misconfigured resources), Web Security Scanner, and third-party integrations. For forensic investigations, SCC provides a timeline of security findings correlated with Cloud Audit Logs, enabling rapid identification of the attack chain. Refer to Chapter 57 Section 57.4.


8. During a Kubernetes incident, an investigator discovers a compromised pod. Which of the following actions should be performed FIRST to preserve evidence?

  • A) Delete the pod immediately to contain the breach
  • B) Scale the deployment to zero replicas
  • C) Capture the pod logs, describe the pod state, and create a snapshot of the container image and any persistent volumes before taking containment actions
  • D) Restart the node to clear malicious processes
Answer

C -- Capture pod logs, pod state, container image snapshot, and persistent volume data before containment

In Kubernetes forensics, the priority is evidence preservation before containment. This means running kubectl logs to capture stdout/stderr, kubectl describe pod to record the pod specification and events, exporting the running container as an image (docker commit or crictl checkpoint), and snapshotting any persistent volumes. Only after evidence is preserved should containment actions -- such as network isolation via NetworkPolicy, pod deletion, or node cordoning -- be taken. Refer to Chapter 57 Section 57.5.
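The capture-before-contain ordering above can be made explicit as a runbook. This is an illustrative sketch only (pod and namespace names are hypothetical); it assembles the kubectl commands in the order the text prescribes rather than executing them:

```python
# Illustrative runbook builder: evidence capture first, containment last.
# Does not execute anything -- it only encodes the required ordering.
def preservation_plan(pod, namespace="default"):
    capture = [
        f"kubectl logs {pod} -n {namespace} --all-containers --timestamps",
        f"kubectl describe pod {pod} -n {namespace}",
        f"kubectl get pod {pod} -n {namespace} -o yaml",
    ]
    contain = [
        # Containment (e.g., a deny-all NetworkPolicy, then deletion)
        # happens only after every capture step has completed.
        f"kubectl delete pod {pod} -n {namespace}",
    ]
    return capture + contain

plan = preservation_plan("compromised-pod", "prod")
for step in plan:
    print(step)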


9. Which tool is specifically designed for acquiring memory from Linux-based cloud instances by loading a kernel module to dump the contents of physical RAM?

  • A) Volatility Framework
  • B) LiME (Linux Memory Extractor)
  • C) FTK Imager
  • D) Wireshark
Answer

B -- LiME (Linux Memory Extractor)

LiME is a loadable kernel module (LKM) that allows acquisition of volatile memory from Linux systems, including those running in cloud environments. It can dump memory in raw, padded, or LiME formats and can output to the local filesystem or over a TCP connection for remote acquisition. For cloud instances, LiME is preferred because it minimizes the forensic footprint on the target system. The resulting memory image can then be analyzed with Volatility or Rekall. Refer to Chapter 57 Section 57.8.


10. An investigator writes the following KQL query in Azure Sentinel:

AzureActivity
| where OperationNameValue == "MICROSOFT.COMPUTE/VIRTUALMACHINES/DELETE"
| where ActivityStatusValue == "Success"
| project TimeGenerated, Caller, ResourceGroup, _ResourceId

What is this query designed to detect?

  • A) Failed login attempts to virtual machines
  • B) Successful deletion of Azure virtual machines -- identifying who deleted them, when, and in which resource group, which is critical for detecting anti-forensic evidence destruction
  • C) Virtual machine CPU usage spikes
  • D) New virtual machine deployments
Answer

B -- Successful deletion of Azure virtual machines

This KQL query filters the AzureActivity table for successful VM deletion operations. In a forensic context, this is critical for detecting anti-forensic behavior where an attacker destroys evidence by deleting compromised VMs. The query projects the timestamp, the identity of the caller (which could be a compromised service principal or user), the resource group, and the resource ID. This should be correlated with sign-in logs to determine if the deletion was performed by a legitimate administrator or a threat actor. Refer to Chapter 57 Section 57.7.


11. What is the primary forensic challenge unique to serverless computing environments such as AWS Lambda or Azure Functions?

  • A) Serverless functions are too expensive to investigate
  • B) Serverless functions produce no logs whatsoever
  • C) The execution environment is ephemeral -- there is no persistent OS, filesystem, or memory to acquire after execution completes, making traditional forensic acquisition impossible and forcing investigators to rely entirely on pre-configured logging and tracing
  • D) Serverless functions cannot be compromised because they are fully managed
Answer

C -- The ephemeral execution environment eliminates traditional forensic artifacts

Serverless forensics is fundamentally different from traditional or even cloud VM forensics because the execution environment exists only for the duration of the function invocation. Once execution completes, the container may be frozen, reused, or destroyed -- there is no persistent disk or memory to image. Investigators must rely on CloudWatch Logs (Lambda), Application Insights (Azure Functions), Cloud Logging (GCP Cloud Functions), and distributed tracing (X-Ray, OpenTelemetry) that was configured before the incident. This makes proactive logging configuration essential. Refer to Chapter 57 Section 57.6.
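Because nothing survives the invocation, the only durable artifacts are whatever the function logs while it runs. The sketch below shows the idea with a Lambda-style handler that emits a structured JSON line per invocation; the event and context shapes are simplified stand-ins, not the full AWS types:

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("forensics")

# Sketch of proactive structured logging in a Lambda-style handler.
# Every forensically useful fact must reach the log stream *during*
# execution, since the environment vanishes afterwards.
def handler(event, context):
    log.info(json.dumps({
        "request_id": getattr(context, "aws_request_id", "unknown"),
        "caller_ip": event.get("requestContext", {})
                          .get("identity", {})
                          .get("sourceIp"),
        "path": event.get("path"),
    }))
    return {"statusCode": 200}

# Local dry run with a fake context standing in for the Lambda runtime.
class FakeContext:
    aws_request_id = "11112222-3333-4444-5555-666677778888"

resp = handler(
    {"path": "/admin",
     "requestContext": {"identity": {"sourceIp": "198.51.100.7"}}},
    FakeContext(),
)
print(resp)
```

The emitted JSON lines land in CloudWatch Logs (or the equivalent sink on other providers), which is then the investigator's only record of the invocation.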


12. When conducting a multi-cloud investigation across AWS, Azure, and GCP simultaneously, what is the most critical technical challenge for correlating events?

  • A) All three clouds use the same log format
  • B) Time synchronization and normalization -- each provider uses different timestamp formats, time zones, and clock sources, and events must be normalized to a common timeline (UTC) to accurately reconstruct the attack sequence across environments
  • C) Multi-cloud investigations are not legally permitted
  • D) Only one cloud can be investigated at a time
Answer

B -- Time synchronization and normalization

Multi-cloud investigations require correlating events across providers that use different timestamp formats (ISO 8601, epoch, provider-specific), different log schemas, and potentially different internal clock synchronization mechanisms. Without normalizing all timestamps to a common reference (UTC), the reconstructed timeline will be inaccurate. SIEMs like Sentinel, Splunk, or Chronicle address this through Common Information Model (CIM) or ASIM normalization, but investigators must validate that ingestion pipelines preserve timestamp fidelity. Refer to Chapter 57 Section 57.9.
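A minimal sketch of that normalization step, covering the three timestamp shapes named above (ISO 8601 with a trailing `Z`, ISO 8601 with a UTC offset, and Unix epoch seconds), with illustrative event values:

```python
from datetime import datetime, timezone

# Normalize the common cloud timestamp shapes onto one UTC timeline:
# ISO 8601 "Z" suffix, ISO 8601 with an explicit offset, epoch seconds.
def to_utc(ts):
    if isinstance(ts, (int, float)):
        return datetime.fromtimestamp(ts, tz=timezone.utc)
    # fromisoformat() on older Pythons rejects "Z", so rewrite it.
    return datetime.fromisoformat(ts.replace("Z", "+00:00")).astimezone(timezone.utc)

# Three events that look out of order until normalized to UTC.
events = [
    ("aws",   to_utc("2024-03-01T12:00:05Z")),
    ("azure", to_utc("2024-03-01T13:00:03+01:00")),  # local offset +01:00
    ("gcp",   to_utc(1709294402)),                   # epoch seconds
]
timeline = sorted(events, key=lambda e: e[1])
print([name for name, _ in timeline])
```

The Azure event, despite its later wall-clock reading, sorts between the GCP and AWS events once every timestamp is expressed in UTC; skipping this step would reverse part of the reconstructed attack sequence.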


13. Which of the following best describes the correct chain of custody procedure when collecting evidence from a cloud environment?

  • A) Download all logs to a local workstation and email them to the legal team
  • B) Document every collection action with timestamps, hash all evidence artifacts (SHA-256), store evidence in a write-protected location with restricted access, and maintain a log of every person who accesses the evidence -- all API calls used for collection should be recorded via CloudTrail/Activity Log/Audit Log
  • C) Take screenshots of the cloud console and save them in a shared folder
  • D) Rely on the cloud provider to maintain chain of custody for all evidence
Answer

B -- Document, hash, write-protect, restrict access, and log all collection API calls

Cloud evidence chain of custody requires the same rigor as physical evidence but adapted for cloud-native workflows. Every evidence artifact must be cryptographically hashed at collection time and verified at each transfer. The cloud API audit trail (CloudTrail, Activity Log, Cloud Audit Logs) automatically documents collection actions with timestamps and caller identity. Evidence should be stored in immutable storage (S3 Object Lock, Azure Immutable Blob, GCS retention policies) with access restricted to the investigation team. Refer to Chapter 57 Section 57.8.
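The hash-at-collection and verify-at-transfer steps can be sketched in a few lines. The artifact contents and collector identity below are illustrative placeholders:

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal custody-log sketch: hash the artifact at collection time and
# record who collected it and when. All names/values are illustrative.
def custody_entry(artifact_bytes, artifact_name, collector):
    return {
        "artifact": artifact_name,
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "collected_by": collector,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(artifact_bytes, entry):
    """Re-hash at every transfer and compare against the recorded digest."""
    return hashlib.sha256(artifact_bytes).hexdigest() == entry["sha256"]

evidence = b"cloudtrail-export contents"
entry = custody_entry(evidence, "cloudtrail-export.json.gz",
                      "analyst@example.com")
print(json.dumps(entry, indent=2))
print(verify(evidence, entry))                # True: artifact unchanged
print(verify(evidence + b"tampered", entry))  # False: mismatch detected
```

In practice the custody log itself would live in the same immutable storage tier as the evidence, so that tampering with either is detectable.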


14. An investigator examining Kubernetes etcd data discovers an attacker created a ClusterRoleBinding granting cluster-admin privileges to a service account named "debug-sa" in the default namespace. Which Splunk query would help identify when this binding was created?

  • A) index=main sourcetype=syslog "debug-sa"
  • B) index=kubernetes sourcetype=kube:apiserver verb=create objectRef.resource=clusterrolebindings objectRef.name=*debug* | table _time, user.username, objectRef.name, requestURI
  • C) index=web status=404
  • D) index=kubernetes sourcetype=kube:node "memory"
Answer

B -- Query the Kubernetes API server audit logs for ClusterRoleBinding creation events

Kubernetes API server audit logs record every request to the API, including RBAC modifications. This SPL query filters for create operations on the clusterrolebindings resource matching the suspicious name pattern, showing the timestamp, the authenticated user who made the request, the binding name, and the request URI. This is critical for determining whether the attacker used a compromised service account, a stolen kubeconfig, or exploited a vulnerability to escalate privileges. The etcd datastore also contains historical state if API audit logging was not enabled. Refer to Chapter 57 Section 57.5.
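The same filter logic the SPL query expresses can be replicated offline against exported audit records. The sketch below applies it to two simplified, fabricated audit entries (real API-server audit logs carry many more fields):

```python
# Simplified stand-ins for Kubernetes API-server audit log entries;
# only the fields the SPL query filters on are included.
audit_log = [
    {"verb": "get", "user": {"username": "system:kubelet"},
     "objectRef": {"resource": "pods", "name": "web-1"}},
    {"verb": "create",
     "user": {"username": "system:serviceaccount:default:debug-sa"},
     "objectRef": {"resource": "clusterrolebindings",
                   "name": "debug-sa-admin"}},
]

# Same predicate as the SPL query: create operations on
# clusterrolebindings whose name matches *debug*.
hits = [e for e in audit_log
        if e["verb"] == "create"
        and e["objectRef"]["resource"] == "clusterrolebindings"
        and "debug" in e["objectRef"]["name"]]

for e in hits:
    print(e["user"]["username"], e["objectRef"]["name"])
```

The surviving record identifies both the binding and the authenticated principal that created it, which is the pivot point for tracing how the attacker obtained that identity.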


15. Microsoft's AVML (Acquire Volatile Memory for Linux) offers an advantage over traditional memory acquisition tools in cloud environments. What is that advantage?

  • A) AVML requires a graphical user interface to operate
  • B) AVML can only run on Windows systems
  • C) AVML operates entirely from user space without requiring a kernel module -- eliminating the risk of kernel version mismatches and the need to compile modules for the target system's exact kernel, which is especially valuable in cloud environments with diverse and frequently updated kernels
  • D) AVML compresses memory images using proprietary encryption
Answer

C -- AVML operates from user space without requiring a kernel module

Unlike LiME, which requires loading a kernel module compiled for the exact kernel version of the target system, AVML uses /dev/crash or /proc/kcore to acquire memory from user space. This is a significant advantage in cloud environments where instances may run many different kernel versions and compiling a matching LiME module for each is impractical. AVML produces output in LiME format or Microsoft's own AVML format, both of which can be analyzed with Volatility. This user-space approach also reduces the forensic footprint on the target system. Refer to Chapter 57 Section 57.8.


Scoring Guide

Score Rating
13-15 Expert -- You have strong command of cloud forensics and investigation techniques
10-12 Proficient -- Solid understanding with room to deepen multi-cloud and container forensics knowledge
7-9 Developing -- Review cloud-native log sources and evidence collection procedures
0-6 Foundational -- Revisit Chapter 57 and complete Lab exercises before retaking