Nexus SecOps Controls Catalog¶
This catalog contains all 220 normative controls organized by domain. Each control uses RFC 2119 language (MUST/SHOULD/MAY). Use the Self-Assessment Workbook to score your organization against these controls.
Domain: TEL — Telemetry & Logging (Nexus SecOps-001–015)¶
Nexus SecOps-001: Log Source Inventory¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 2 |
| Requirement | Organizations MUST maintain a current, authoritative inventory of all log sources, including source type, collection method, collection status, and data owner. |
Rationale: You cannot detect what you do not log. An inventory is the foundation for understanding coverage gaps and ensuring accountability for log collection.
Implementation Guidance: Maintain a log source registry in a CMDB, spreadsheet, or dedicated platform. Include fields: source name, IP/hostname, log type, collection agent/method, status (active/inactive), owner, last validated date. Review quarterly.
Evidence to Collect: Export of log source registry showing all fields; date of last review; evidence of quarterly review (meeting notes or ticket).
Tests/Validation: Compare log source inventory against network asset inventory and cloud resource inventory. Identify sources present in asset inventory but absent from log inventory. Calculate coverage ratio.
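The inventory comparison above can be sketched in a few lines; a minimal example, assuming both inventories are available as sets of hostnames (the names below are hypothetical):

```python
# Minimal coverage-gap check: which assets have no corresponding log source,
# and what fraction of the asset inventory is covered.
def coverage_gap(asset_inventory, log_sources):
    """Return (assets missing from log collection, coverage ratio)."""
    missing = asset_inventory - log_sources
    ratio = 1.0 if not asset_inventory else 1 - len(missing) / len(asset_inventory)
    return missing, ratio

assets = {"dc01", "fw-edge", "web01", "db01"}   # from asset/cloud inventory
logged = {"dc01", "fw-edge", "web01"}           # from log source registry
missing, ratio = coverage_gap(assets, logged)   # missing={"db01"}, ratio=0.75
```

In practice the two sets would be pulled from the CMDB and the log source registry on a schedule, with the missing set feeding a remediation ticket queue.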
Metrics: Log source inventory completeness (target ≥98%); staleness (target: no source unreviewed >90 days); owner coverage (target: 100% of sources have named owner).
Common Pitfalls: Inventory created once and never updated; cloud assets omitted; shadow IT log sources unknown; no owner assigned to sources.
Framework Mappings:
- NIST CSF: ID.AM-01 (Asset management)
- CIS v8: Control 1 (Inventory and Control of Enterprise Assets)
- ISO 27001: A.8.1 (Asset inventory)
- NIST 800-53: CM-8 (System Component Inventory)
- MITRE ATT&CK: Detection gap analysis lens
- MITRE D3FEND: D3-NM (Network Mapping)
Nexus SecOps-002: Log Collection Coverage¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 2 |
| Requirement | Organizations MUST collect logs from all critical systems and SHOULD collect logs from all systems in scope. Critical systems include identity providers, perimeter network devices, endpoints, cloud control planes, and key applications. |
Rationale: Detection coverage is bounded by collection coverage. Missing log sources create blind spots that adversaries can exploit to move undetected.
Implementation Guidance: Define criticality tiers for systems. Ensure Tier 1 (critical) systems have 100% log collection coverage. Use automated deployment of agents or centralized syslog receivers. Validate collection by checking for expected log volume from each source.
Evidence to Collect: Coverage report showing percentage of assets with active log collection by tier; configuration screenshots of collection agents; alert rules that fire when expected sources go silent.
Tests/Validation: Disable logging on a test system temporarily and verify that a coverage gap alert fires within the expected SLA. Compare actual source count to inventory count.
Metrics: Collection coverage by tier (Tier 1 target: 100%; Tier 2 target: ≥95%; Tier 3 target: ≥85%); time-to-detect collection gap (target: <1 hour for Tier 1).
Common Pitfalls: Coverage measured by device count but not by log type; cloud workloads excluded; containerized workloads not instrumented; test systems omitted from scope.
Framework Mappings:
- NIST CSF: DE.CM-01 (Networks monitored)
- CIS v8: Control 8 (Audit Log Management)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-2 (Event Logging), AU-12 (Audit Record Generation)
- MITRE ATT&CK: Detection coverage lens (multiple tactics)
- MITRE D3FEND: D3-HA (Hardware Auditing)
Nexus SecOps-003: Log Transport Security¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 2 |
| Requirement | Log transport MUST be encrypted in transit using TLS 1.2 or higher for all log streams traversing untrusted networks. Mutual authentication SHOULD be implemented for log collectors receiving data from external sources. |
Rationale: Unencrypted log transport exposes sensitive operational data and allows adversaries to tamper with or suppress logs during an incident.
Implementation Guidance: Configure TLS on all log shippers (Beats, Fluentd, NXLog, etc.) and receivers. Use certificate pinning or mutual TLS for high-value log streams. Audit TLS configurations quarterly.
Evidence to Collect: TLS configuration screenshots for log shippers and receivers; certificate inventory; network capture showing encrypted log traffic (metadata only, not content).
Tests/Validation: Attempt to intercept log traffic on the network and confirm it is encrypted. Verify TLS version and cipher suite meet policy minimums.
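One way to validate the policy minimum during the quarterly audit is to inspect the negotiated protocol version directly; a sketch using Python's standard `ssl` module (the receiver hostname and port in the usage comment are placeholders, not real endpoints):

```python
import socket
import ssl

ALLOWED_VERSIONS = {"TLSv1.2", "TLSv1.3"}  # policy minimum: TLS 1.2+

def meets_policy(version):
    """True if a negotiated protocol version satisfies the TLS 1.2+ policy."""
    return version in ALLOWED_VERSIONS

def negotiated_tls_version(host, port, timeout=5.0):
    """Handshake with a TLS log receiver and return the negotiated version."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything weaker
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# e.g. meets_policy(negotiated_tls_version("logs.example.internal", 6514))
```

Cipher suite checks would extend this with `tls.cipher()` compared against the approved suite list.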
Metrics: Percentage of log streams using TLS 1.2+ (target: 100%); percentage using mutual auth (target: ≥80% for external-facing collectors).
Common Pitfalls: Internal network log streams left unencrypted; TLS 1.0/1.1 still in use; self-signed certificates not rotated; syslog over UDP with no encryption.
Framework Mappings:
- NIST CSF: PR.DS-02 (Data-in-transit protected)
- CIS v8: Control 3.10 (Encrypt Sensitive Data in Transit)
- ISO 27001: A.8.24 (Use of cryptography)
- NIST 800-53: SC-8 (Transmission Confidentiality and Integrity)
- MITRE ATT&CK: T1557 (Adversary-in-the-Middle — detection lens)
- MITRE D3FEND: D3-ET (Encrypted Tunnels)
Nexus SecOps-004: Log Parsing Validation¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 2 |
| Requirement | Organizations MUST validate that log parsers correctly extract expected fields from all critical log sources. Parsing failures MUST be detected, alerted, and tracked to resolution. |
Rationale: A parser that silently drops fields or misformats data creates invisible detection gaps. Analysts and detection rules depend on correctly parsed fields.
Implementation Guidance: Implement parser test suites with representative log samples for each source. Run tests in CI/CD pipelines when parsers change. Monitor parsing error rates in production. Alert on high error rates or sudden changes in field extraction quality.
Evidence to Collect: Parser test suite results; parsing error rate dashboard; alert configuration for parsing failure thresholds; sample of correctly parsed logs.
Tests/Validation: Send a known test log line and verify output fields match expected values. Inject a malformed log line and confirm it is handled gracefully without silent data loss.
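A parser test of the kind described can be a plain assertion over a known sample line; a sketch for a hypothetical OpenSSH failed-login parser (field names follow this catalog's user/source IP/outcome convention):

```python
import re

# Hypothetical parser for OpenSSH "Failed password" lines.
SSH_FAIL = re.compile(
    r"Failed password for (?:invalid user )?(?P<user>\S+) "
    r"from (?P<src_ip>[\d.]+) port (?P<src_port>\d+)"
)

def parse_ssh_fail(line):
    """Extract user, source IP, and port; return None for unparseable input
    so malformed lines are counted, not silently dropped."""
    m = SSH_FAIL.search(line)
    if m is None:
        return None
    return {**m.groupdict(), "action": "authentication", "outcome": "failure"}

sample = "Failed password for invalid user admin from 203.0.113.9 port 52814 ssh2"
event = parse_ssh_fail(sample)
# event["user"] == "admin", event["src_ip"] == "203.0.113.9"
```

Tests like this, one per representative sample per source, are what the CI/CD suite runs on every parser change; the `None` return path is what the production error-rate monitor counts.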
Metrics: Parser error rate per source (target: <0.1%); field extraction completeness for required fields (target: ≥99%); time to detect and remediate parser failures (target: <4 hours).
Common Pitfalls: Parser changes deployed without testing; no monitoring on parsing error rates; vendor format changes break parsers silently; multiline logs not handled correctly.
Framework Mappings:
- NIST CSF: DE.CM-09 (Computing hardware monitored)
- CIS v8: Control 8.2 (Collect Audit Logs)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-3 (Content of Audit Records)
- MITRE ATT&CK: Data quality lens
- MITRE D3FEND: D3-DA (Data Analysis)
Nexus SecOps-005: Timestamp Normalization¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 2 |
| Requirement | All log timestamps MUST be normalized to UTC and stored with millisecond precision. Source timezone SHOULD be preserved as a separate field for forensic reference. |
Rationale: Timestamp inconsistency is one of the most common investigation blockers. Mixed timezones make timeline reconstruction error-prone and slow incident response.
Implementation Guidance: Configure all log sources to emit timestamps in ISO 8601 format (UTC). If source systems emit local time, perform timezone conversion at the collection layer with documented timezone mappings. Store both original and normalized timestamps.
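The dual-timestamp approach can be sketched with the standard library; a minimal example assuming sources emit ISO 8601 with an explicit offset (sources that emit bare local time need the documented timezone mapping applied first):

```python
from datetime import datetime, timezone

def normalize_ts(raw):
    """Parse an ISO 8601 timestamp with offset; store UTC at millisecond
    precision plus the original string for forensic reference."""
    dt = datetime.fromisoformat(raw)
    if dt.tzinfo is None:
        raise ValueError("timestamp lacks offset; apply documented tz mapping")
    utc = dt.astimezone(timezone.utc)
    return {
        "timestamp": utc.isoformat(timespec="milliseconds"),
        "timestamp_original": raw,
    }

normalize_ts("2024-03-05T09:30:00.123-05:00")
# -> {"timestamp": "2024-03-05T14:30:00.123+00:00",
#     "timestamp_original": "2024-03-05T09:30:00.123-05:00"}
```

Raising on a missing offset, rather than guessing, keeps ambiguous timestamps visible as parsing errors instead of silently wrong timeline entries.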
Evidence to Collect: Documentation of timestamp normalization pipeline; sample logs showing UTC timestamps; timezone mapping table; evidence that NTP is enforced on log sources.
Tests/Validation: Correlate logs from sources in different timezones for a known event and verify timestamps align correctly. Check NTP synchronization status on critical log sources.
Metrics: Percentage of log sources with NTP configured (target: 100%); timestamp parsing error rate (target: <0.01%); clock skew monitored (target: all sources within ±1 second of the NTP reference clock).
Common Pitfalls: Log sources not using NTP; local time emitted without timezone offset; timezone conversion applied inconsistently; millisecond precision lost during normalization.
Framework Mappings:
- NIST CSF: DE.CM-01 (Networks monitored)
- CIS v8: Control 8.4 (Standardize Time Synchronization)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-8 (Time Stamps)
- MITRE ATT&CK: T1070.006 (Timestomp — detection lens)
- MITRE D3FEND: D3-NTA (Network Traffic Analysis)
Nexus SecOps-006: Log Retention Compliance¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 2 |
| Requirement | Organizations MUST define and enforce log retention periods that meet regulatory requirements and operational needs. Retention periods MUST be documented per log type. Logs MUST NOT be deleted before the defined retention period expires. |
Rationale: Insufficient retention prevents investigation of incidents discovered late. Excessive retention without controls creates privacy and cost risk.
Implementation Guidance: Define retention tiers: hot (30–90 days, fast query), warm (90–365 days, slower), cold/archive (1–7 years, compliance). Automate lifecycle transitions. Document retention periods per log type in a retention schedule. Test restoration from archive quarterly.
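The tier boundaries above translate directly into a lifecycle rule; a sketch using the example bounds (90 days hot, 365 days warm, 7-year retention; real deployments would express this in the storage platform's native lifecycle policies):

```python
RETENTION_DAYS = 365 * 7  # example compliance maximum from the tiers above

def retention_tier(age_days, retention_days=RETENTION_DAYS):
    """Classify a log's lifecycle stage by age, per the example tier bounds."""
    if age_days >= retention_days:
        return "delete"   # past retention: eligible for scheduled deletion
    if age_days < 90:
        return "hot"      # fast query tier
    if age_days < 365:
        return "warm"     # slower query tier
    return "cold"         # compliance archive
```

Keeping the boundaries in one place like this also makes the retention schedule testable, which supports the quarterly archive-restoration checks.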
Evidence to Collect: Retention policy document; storage configuration screenshots showing lifecycle rules; retention schedule by log type; evidence of compliance with regulatory minimums (e.g., 12 months for PCI DSS, with the most recent 3 months immediately available).
Tests/Validation: Verify that logs older than the hot tier boundary are accessible (warm) and logs older than warm boundary are accessible from archive. Confirm logs beyond retention period are deleted on schedule.
Metrics: Compliance with defined retention periods (target: 100%); archive restoration success rate (target: ≥99%); cost per GB per tier tracked.
Common Pitfalls: Single retention period for all log types; no archive testing; regulatory minimums not reviewed when regulations change; cold storage inaccessible during incidents.
Framework Mappings:
- NIST CSF: PR.IP-04 (Backups of information conducted)
- CIS v8: Control 8.3 (Ensure Adequate Audit Log Storage)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-11 (Audit Record Retention)
- MITRE ATT&CK: T1070 (Indicator Removal — detection lens)
- MITRE D3FEND: D3-DENCR (Data Encryption at Rest)
Nexus SecOps-007: Log Integrity Protection¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 3 |
| Requirement | Organizations SHOULD implement controls to detect unauthorized modification or deletion of security logs. Immutable log storage or cryptographic verification SHOULD be used for critical audit logs. |
Rationale: Adversaries frequently attempt to delete or modify logs to cover their tracks. Log integrity protection ensures evidence is trustworthy for investigations.
Implementation Guidance: Use write-once (WORM) storage for critical log archives. Implement hash chaining or cryptographic signing for audit logs. Ship logs to a secondary repository immediately upon collection. Alert on bulk log deletion events.
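Hash chaining, mentioned above, links each audit record to the hash of everything before it, so modifying or deleting any record invalidates every later link. A minimal sketch (a production scheme would also sign or externally anchor the chain head):

```python
import hashlib
import json

def chain_append(prev_hash, entry):
    """Hash-chain one audit record: each link commits to the previous hash."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(entries, hashes, genesis="0" * 64):
    """Recompute the chain and compare against the stored hashes."""
    prev = genesis
    for entry, h in zip(entries, hashes):
        prev = chain_append(prev, entry)
        if prev != h:
            return False
    return True
```

Verification can run on a schedule against the secondary repository, turning tampering in the primary store into a detectable mismatch.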
Evidence to Collect: WORM or immutable storage configuration; log signing implementation documentation; alert rule for bulk deletion; evidence of log delivery to secondary repository.
Tests/Validation: Attempt to modify a log entry and verify the modification is detected. Confirm that bulk deletion triggers an alert within SLA.
Metrics: Percentage of critical audit logs with integrity protection (target: 100%); time to detect log tampering (target: <15 minutes).
Common Pitfalls: Log integrity only applied to archives, not live logs; admin accounts can disable logging; secondary repository on same infrastructure as primary.
Framework Mappings:
- NIST CSF: PR.DS-01 (Data-at-rest protected)
- CIS v8: Control 8.5 (Collect Detailed Audit Logs)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-9 (Protection of Audit Information)
- MITRE ATT&CK: T1070 (Indicator Removal)
- MITRE D3FEND: D3-DENCR (Data Encryption)
Nexus SecOps-008: Cloud Log Collection¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 2 |
| Requirement | Organizations using cloud services MUST collect control plane logs (e.g., CloudTrail, Azure Activity Log, GCP Audit Log) from all accounts and subscriptions. Data plane and resource logs SHOULD be collected from critical cloud workloads. |
Rationale: Cloud control plane logs capture all administrative and API activity, which is essential for detecting unauthorized changes, privilege escalation, and data exfiltration in cloud environments.
Implementation Guidance: Enable cloud-native audit logging in all accounts/subscriptions. Use organization-level log aggregation to a central SIEM. Ensure multi-region coverage. Alert on logging being disabled.
Evidence to Collect: Cloud logging configuration screenshots; list of accounts/subscriptions with logging enabled; alert rule for logging disabled events; evidence of centralized aggregation.
Tests/Validation: Disable logging in a test account and verify alert fires. Perform an administrative action and confirm it appears in the SIEM within SLA.
Metrics: Cloud accounts with control plane logging enabled (target: 100%); time for cloud logs to appear in SIEM (target: <5 minutes); coverage of Tier 1 cloud workloads for data plane logs (target: ≥95%).
Common Pitfalls: New cloud accounts provisioned without enabling logging; multi-region logging gaps; data plane logs not collected due to cost; logs not forwarded to SIEM.
Framework Mappings:
- NIST CSF: DE.CM-06 (External service provider activity monitored)
- CIS v8: Control 8.2 (Collect Audit Logs)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-2, AU-12 (Event Logging)
- MITRE ATT&CK: T1578 (Modify Cloud Compute Infrastructure — detection lens)
- MITRE D3FEND: D3-NM (Network Mapping)
Nexus SecOps-009: Endpoint Telemetry¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 2 |
| Requirement | Organizations MUST deploy endpoint detection and response (EDR) or equivalent telemetry collection on all managed endpoints. Telemetry MUST include process execution, network connections, file modifications, and user activity. |
Rationale: Endpoints are primary targets for initial access and lateral movement. Rich endpoint telemetry enables detection of techniques that network logs cannot see.
Implementation Guidance: Deploy EDR agents to all managed workstations and servers. Validate coverage via deployment management tooling. Configure telemetry to include process trees, command lines, DNS queries, and authentication events. Forward EDR telemetry to SIEM.
Evidence to Collect: EDR deployment coverage report; telemetry field listing; evidence of SIEM integration; alert on EDR agent health failures.
Tests/Validation: Execute a known benign test process on an endpoint and confirm telemetry appears in SIEM within expected SLA. Verify command line arguments are captured.
Metrics: EDR coverage of managed endpoints (target: ≥99%); EDR agent health (online rate target: ≥98%); telemetry completeness (required fields present: target ≥99%).
Common Pitfalls: EDR not deployed on servers; coverage gaps in remote/OT environments; telemetry forwarding misconfigured; EDR exclusions too broad.
Framework Mappings:
- NIST CSF: DE.CM-01 (Networks monitored)
- CIS v8: Control 10 (Malware Defenses), Control 13 (Network Monitoring)
- ISO 27001: A.8.7 (Protection against malware)
- NIST 800-53: SI-3 (Malicious Code Protection), AU-2 (Event Logging)
- MITRE ATT&CK: T1059 (Command and Scripting Interpreter — detection lens)
- MITRE D3FEND: D3-PA (Process Analysis)
Nexus SecOps-010: Network Telemetry¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 2 |
| Requirement | Organizations MUST collect network flow data (NetFlow, IPFIX, or equivalent) from critical network segments. Network traffic analysis SHOULD be deployed at perimeter and key internal chokepoints. |
Rationale: Network telemetry provides visibility into east-west traffic and connections that endpoint telemetry may miss, enabling detection of lateral movement, C2 beaconing, and data exfiltration.
Implementation Guidance: Configure NetFlow/IPFIX export on routers, switches, and firewalls. Deploy network sensors at perimeter, DMZ, and critical internal segments. Consider deep packet inspection metadata (not content) for high-value segments.
Evidence to Collect: Network flow collection configuration; sensor deployment map; evidence of data appearing in SIEM; list of monitored segments.
Tests/Validation: Generate known network traffic patterns and confirm they appear in flow data within expected SLA. Verify coverage of all defined critical segments.
Metrics: Network segments with flow collection (target: 100% of critical segments); flow data completeness (required fields present: target ≥99%); time for flows to appear in SIEM (target: <2 minutes).
Common Pitfalls: East-west traffic not monitored; only perimeter flows collected; encrypted traffic analysis not implemented; high-bandwidth segments dropped due to storage limits.
Framework Mappings:
- NIST CSF: DE.CM-01 (Networks monitored)
- CIS v8: Control 13 (Network Monitoring and Defense)
- ISO 27001: A.8.20 (Networks security)
- NIST 800-53: AU-2 (Event Logging), SC-7 (Boundary Protection)
- MITRE ATT&CK: T1071 (Application Layer Protocol — detection lens)
- MITRE D3FEND: D3-NTA (Network Traffic Analysis)
Nexus SecOps-011: Identity and Authentication Telemetry¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 2 |
| Requirement | Organizations MUST collect authentication and authorization logs from all identity providers, directory services, and privileged access management systems. Logs MUST include success and failure events with user, source IP, device, and timestamp. |
Rationale: Identity is the primary attack surface in modern environments. Authentication logs are essential for detecting credential attacks, account compromise, and privilege abuse.
Implementation Guidance: Collect logs from: Active Directory, Azure AD/Entra ID, Okta, AWS IAM, Google Workspace, PAM systems. Ensure both success and failure events are captured. Include MFA events, token issuance, and group membership changes.
Evidence to Collect: Identity provider logging configuration; list of identity sources feeding SIEM; sample of authentication events showing required fields; coverage of all identity providers.
Tests/Validation: Perform a failed authentication attempt and confirm the event appears in SIEM within SLA with all required fields. Verify MFA bypass attempts are captured.
Metrics: Identity provider coverage (target: 100%); authentication log volume per day (baseline and anomaly detection); required fields present (target: ≥99%).
Common Pitfalls: Legacy on-prem AD logs not forwarded; SaaS identity providers not integrated; privileged account activity missing; MFA events not captured.
Framework Mappings:
- NIST CSF: DE.CM-03 (Personnel activity monitored)
- CIS v8: Control 5 (Account Management)
- ISO 27001: A.8.5 (Secure authentication)
- NIST 800-53: AC-2 (Account Management), IA-2 (Identification and Authentication)
- MITRE ATT&CK: T1110 (Brute Force — detection lens)
- MITRE D3FEND: D3-UAA (User Account Analysis)
Nexus SecOps-012: Application and API Telemetry¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 2 |
| Requirement | Organizations SHOULD collect security-relevant events from critical business applications and APIs. Events MUST include authentication, authorization decisions, data access, configuration changes, and errors. |
Rationale: Application-layer attacks (OWASP Top 10, API abuse, data exfiltration) are not visible in network or endpoint logs alone. Application telemetry closes this detection gap.
Implementation Guidance: Work with application owners to define and implement security event logging. Use WAF logs, application event logs, and API gateway logs. Normalize events to common schema. Prioritize customer-facing and high-sensitivity applications.
Evidence to Collect: Application logging requirements document; list of critical applications with logging status; WAF integration configuration; sample application security events in SIEM.
Tests/Validation: Trigger an application-level security event (e.g., failed login to application) and verify it appears in SIEM with required fields.
Metrics: Critical applications with security logging enabled (target: ≥90%); application log completeness (required fields: target ≥95%).
Common Pitfalls: Application teams not engaged; WAF logs collected but not parsed; internal APIs excluded; error logs confused with security events.
Framework Mappings:
- NIST CSF: DE.CM-09 (Computing hardware and software monitored)
- CIS v8: Control 8 (Audit Log Management)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-2, AU-12 (Audit Logging)
- MITRE ATT&CK: T1190 (Exploit Public-Facing Application — detection lens)
- MITRE D3FEND: D3-WAN (Web Application Auditing)
Nexus SecOps-013: Container and Kubernetes Telemetry¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 3 |
| Requirement | Organizations running containerized workloads SHOULD collect telemetry from container runtimes, Kubernetes audit logs, and container orchestration control planes. |
Rationale: Containers introduce unique attack surfaces including container escapes, privileged containers, and supply chain attacks. Standard endpoint telemetry does not cover these adequately.
Implementation Guidance: Enable Kubernetes API server audit logging. Deploy container runtime security tools. Collect events: pod creation/deletion, image pulls, privileged container starts, network policy changes, secrets access. Forward to SIEM.
Evidence to Collect: Kubernetes audit log configuration; container runtime security tool deployment; list of events captured; evidence of SIEM integration.
Tests/Validation: Start a privileged container and verify the event appears in SIEM. Modify a Kubernetes namespace and confirm the audit event is captured.
Metrics: Kubernetes clusters with audit logging (target: 100%); container workload coverage (target: ≥95%); required event types captured (target: ≥90%).
Common Pitfalls: Kubernetes audit logging not enabled (off by default); container telemetry sent to separate platform not integrated with SIEM; ephemeral container logs lost.
Framework Mappings:
- NIST CSF: DE.CM-09 (Computing hardware monitored)
- CIS v8: Control 8 (Audit Log Management)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-2, AU-12 (Audit Logging)
- MITRE ATT&CK: T1610 (Deploy Container — detection lens)
- MITRE D3FEND: D3-PA (Process Analysis)
Nexus SecOps-014: Telemetry Health Monitoring¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 3 |
| Requirement | Organizations MUST monitor the health of log collection infrastructure and MUST alert when log sources go silent, parsing error rates spike, or collection latency exceeds defined thresholds. |
Rationale: Silent collection failures create undetected blind spots. Telemetry health monitoring ensures that detection coverage is continuously validated, not just assumed.
Implementation Guidance: Build a telemetry health dashboard showing: events per source per hour, parsing error rates, collection latency, and last-seen timestamps. Configure alerts for: source silent >30 minutes, error rate >5%, latency >10 minutes. Treat collection failures as Severity 2 incidents.
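The silent-source alert reduces to comparing last-seen timestamps against the threshold; a sketch using the 30-minute threshold from the guidance above:

```python
from datetime import datetime, timedelta, timezone

SILENT_AFTER = timedelta(minutes=30)  # alert threshold from the guidance above

def silent_sources(last_seen, now=None):
    """Return log sources whose newest event is older than the threshold.
    `last_seen` maps source name -> timezone-aware datetime of last event."""
    now = now or datetime.now(timezone.utc)
    return sorted(src for src, ts in last_seen.items() if now - ts > SILENT_AFTER)
```

Note that `last_seen` should come from observed event volume, not agent heartbeats, to avoid the pitfall below of a healthy agent shipping nothing.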
Evidence to Collect: Telemetry health dashboard screenshot; alert configuration for collection failures; records of collection failure incidents and resolution times.
Tests/Validation: Stop logging on a test source and verify alert fires within defined SLA. Review incident tickets for collection failure response times.
Metrics: Time to detect collection failure (target: <30 minutes for Tier 1 sources); collection failure alert false positive rate (target: <10%); collection SLA compliance (target: ≥99.5% uptime).
Common Pitfalls: Health monitoring only checks agent status, not actual log volume; no alerting on silent sources; collection failures not treated as security incidents.
Framework Mappings:
- NIST CSF: DE.CM-01 (Networks monitored)
- CIS v8: Control 8.11 (Conduct Audit Log Reviews)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-5 (Response to Audit Logging Process Failures)
- MITRE ATT&CK: T1562 (Impair Defenses — detection lens)
- MITRE D3FEND: D3-NM (Network Mapping)
Nexus SecOps-015: OT/IoT Telemetry¶
| Field | Value |
|---|---|
| Domain | TEL |
| Maturity Level | 3 |
| Requirement | Organizations with operational technology (OT) or IoT environments SHOULD collect network telemetry from OT/IoT segments using passive monitoring. OT/IoT device logs SHOULD be forwarded where technically feasible without disrupting operations. |
Rationale: OT/IoT devices are increasingly targeted and often lack traditional endpoint security. Network-based telemetry provides visibility without risking operational disruption.
Implementation Guidance: Deploy passive network sensors in OT/IoT segments. Use protocol-aware monitoring for industrial protocols. Avoid active scanning in OT environments. Integrate with industrial security platforms where available.
Evidence to Collect: OT/IoT network sensor deployment map; list of monitored protocols; evidence of SIEM integration; documentation of safety constraints limiting collection.
Tests/Validation: Verify passive sensor data appears in SIEM without disrupting OT operations. Confirm industrial protocol events are correctly classified.
Metrics: OT/IoT segments with passive monitoring (target: ≥90% of critical segments); events appearing in SIEM within latency SLA.
Common Pitfalls: Active scanning deployed in OT environment causing disruption; OT systems treated as IT systems; no coordination with OT operations team.
Framework Mappings:
- NIST CSF: DE.CM-01 (Networks monitored)
- CIS v8: Control 13 (Network Monitoring)
- ISO 27001: A.8.20 (Networks security)
- NIST 800-53: AU-2 (Event Logging)
- MITRE ATT&CK: T0840 (Network Connection Enumeration — detection lens)
- MITRE D3FEND: D3-NTA (Network Traffic Analysis)
Domain: DQN — Data Quality & Normalization (Nexus SecOps-016–030)¶
Nexus SecOps-016: Security Data Schema Standard¶
| Field | Value |
|---|---|
| Domain | DQN |
| Maturity Level | 2 |
| Requirement | Organizations MUST define and document a canonical data schema for security events. All ingested log sources MUST be normalized to this schema. The schema MUST include at minimum: timestamp, source IP, destination IP, user, action, outcome, and log source. |
Rationale: A consistent schema enables cross-source correlation, reusable detection logic, and faster investigation. Without it, analysts waste time on data wrangling.
Implementation Guidance: Adopt an industry standard schema (OCSF, ECS, or CIM) or define a documented organizational schema. Create field mapping specifications for each log source. Enforce schema in the normalization pipeline.
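Schema enforcement in the pipeline can be as simple as a per-source mapping table plus a required-field check; a sketch with a hypothetical firewall source (field names are illustrative, not taken from a specific standard schema):

```python
# Canonical required fields per the Requirement above.
REQUIRED = {"timestamp", "src_ip", "dst_ip", "user", "action", "outcome", "log_source"}

# Per-source mapping specification: raw field -> canonical field.
MAPPINGS = {
    "pan_firewall": {"receive_time": "timestamp", "src": "src_ip", "dst": "dst_ip",
                     "srcuser": "user", "action": "action", "result": "outcome"},
}

def normalize(source, raw):
    """Map a raw event into the canonical schema; flag (don't drop) gaps."""
    mapping = MAPPINGS[source]
    event = {canon: raw[field] for field, canon in mapping.items() if field in raw}
    event["log_source"] = source
    missing = REQUIRED - event.keys()
    if missing:
        event["_quality_missing"] = sorted(missing)  # surfaced, not dropped
    return event
```

Flagging incomplete events rather than rejecting them keeps the data available while making schema drift measurable.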
Evidence to Collect: Schema documentation; field mapping specifications per log source; evidence of schema enforcement in pipeline; example normalized events showing required fields.
Tests/Validation: Query the SIEM for a specific normalized field across 5 different log sources and verify consistent data type and format.
Metrics: Percentage of log sources normalized to schema (target: ≥98%); required field completeness in normalized events (target: ≥99%); schema drift incidents per quarter (target: 0).
Common Pitfalls: Multiple competing schemas; schema not enforced—normalization is aspirational only; schema changes not communicated to detection engineers; legacy sources exempted indefinitely.
Framework Mappings:
- NIST CSF: ID.AM-05 (Resources prioritized)
- CIS v8: Control 8 (Audit Log Management)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-3 (Content of Audit Records)
- MITRE ATT&CK: Data quality lens
- MITRE D3FEND: D3-DA (Data Analysis)
Nexus SecOps-017: Enrichment Pipeline¶
| Field | Value |
|---|---|
| Domain | DQN |
| Maturity Level | 3 |
| Requirement | Organizations MUST implement automated enrichment for security events. Enrichment MUST include at minimum: asset context lookup and threat intelligence indicator matching. GeoIP resolution, user identity resolution, and vulnerability data SHOULD also be applied. |
Rationale: Raw events lack context. Enrichment transforms raw data into actionable intelligence, reducing analyst investigation time and improving decision accuracy.
Implementation Guidance: Build an enrichment pipeline that runs on all events before they reach the analyst queue. Use internal asset databases, LDAP/AD for user context, threat intel feeds for IOC matching, and GeoIP for geographic context. Track enrichment latency and failure rates.
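A single enrichment step can be sketched as lookups against context tables; the dictionaries below stand in for a real asset database, threat intel feed, and GeoIP database (all values are hypothetical):

```python
# Stand-ins for real enrichment sources (asset DB, TI feed, GeoIP database).
ASSETS = {"10.0.5.20": {"hostname": "db01", "tier": 1, "owner": "dba-team"}}
THREAT_INTEL = {"203.0.113.9"}
GEOIP = {"203.0.113.9": "NL"}

def enrich(event):
    """Attach asset context, TI match flag, and source geography to an event."""
    src = event.get("src_ip")
    enriched = dict(event)
    enriched["asset"] = ASSETS.get(event.get("dst_ip"))
    enriched["ti_match"] = src in THREAT_INTEL
    enriched["src_geo"] = GEOIP.get(src)
    return enriched
```

In a real pipeline each lookup would hit a cached service rather than an in-process dict, and the latency of the whole `enrich` call is what the <500ms metric measures.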
Evidence to Collect: Enrichment pipeline architecture; list of enrichment sources and fields added; latency metrics; example enriched events showing context fields.
Tests/Validation: Submit a test event with a known IP address and verify enrichment fields (GeoIP, threat intel match, asset context) appear within expected latency.
Metrics: Enrichment coverage (events with all required enrichment fields: target ≥95%); enrichment latency (target: <500ms median); enrichment source freshness (threat intel feed updated: target ≤24 hours).
Common Pitfalls: Enrichment adds latency without benefit; enrichment data stale; enrichment applied only at query time, not at ingest; GeoIP database not updated.
Framework Mappings:
- NIST CSF: DE.CM-01 (Networks monitored)
- CIS v8: Control 13 (Network Monitoring)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-3 (Content of Audit Records)
- MITRE ATT&CK: T1071 (detection enrichment lens)
- MITRE D3FEND: D3-DA (Data Analysis)
Nexus SecOps-018: Data Quality Scoring¶
| Field | Value |
|---|---|
| Domain | DQN |
| Maturity Level | 3 |
| Requirement | Organizations SHOULD implement automated data quality scoring for security event streams. Quality dimensions MUST include completeness, accuracy, timeliness, and consistency. Quality scores SHOULD be surfaced in analyst tooling. |
Rationale: Poor data quality is a hidden cause of missed detections and false positives. Making quality visible enables targeted improvement.
Implementation Guidance: Define quality metrics per field. Compute a quality score per log source (0–100). Surface scores in SIEM dashboards. Set thresholds that trigger quality alerts. Review quality trends monthly.
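The completeness dimension of the 0–100 score can be computed directly from a sample of events; a sketch covering completeness only (accuracy, timeliness, and consistency would contribute further terms to the composite score):

```python
def quality_score(events, required):
    """Score a source 0-100 on completeness: the share of required fields
    present and non-empty across a sample of its events."""
    if not events:
        return 0.0
    filled = sum(
        1 for e in events for f in required if e.get(f) not in (None, "")
    )
    return round(100 * filled / (len(events) * len(required)), 1)
```

Running this per source over a rolling sample gives the dashboard value; the quality alert fires when a source's score crosses the defined threshold.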
Evidence to Collect: Data quality scoring framework documentation; quality dashboard screenshot; quality alert configuration; monthly quality trend report.
Tests/Validation: Introduce a known quality issue (e.g., missing field) in a test log stream and verify quality score drops and alert fires.
Metrics: Average data quality score across all sources (target: ≥85/100); sources below quality threshold (target: 0); quality score trend (improving quarter over quarter).
Common Pitfalls: Quality scoring done manually during incidents only; quality scores not visible to detection engineers; quality issues remediated reactively.
Framework Mappings:
- NIST CSF: ID.IM-01 (Improvements identified)
- CIS v8: Control 8 (Audit Log Management)
- ISO 27001: A.8.15 (Logging)
- NIST 800-53: AU-3 (Content of Audit Records)
- MITRE ATT&CK: Data quality lens
- MITRE D3FEND: D3-DA (Data Analysis)
Nexus SecOps-019: Deduplication and Event Correlation Preparation¶
| Field | Value |
|---|---|
| Domain | DQN |
| Maturity Level | 3 |
| Requirement | Organizations SHOULD implement deduplication of redundant log events before indexing. Deduplication logic MUST be documented and MUST NOT suppress unique events. |
Rationale: Duplicate events inflate storage costs, slow queries, and distort alert counts. However, aggressive deduplication that suppresses real events creates detection gaps.
Implementation Guidance: Implement deduplication based on event fingerprinting (hash of key fields). Apply time-windowed deduplication (suppress exact duplicate within N minutes). Document deduplication logic. Monitor suppression rates. Exempt security-critical event types.
Evidence to Collect: Deduplication policy documentation; suppression rate metrics; list of event types exempt from deduplication; evidence that dedup does not suppress unique events.
Tests/Validation: Submit two identical test events 5 seconds apart and verify deduplication. Submit two similar but distinct events and verify both are stored.
Metrics: Deduplication rate per source (tracked for anomaly detection); storage cost reduction from deduplication; unique events accidentally suppressed (target: 0).
Common Pitfalls: Deduplication too aggressive, suppressing events that differ only in minor fields; deduplication rules not reviewed as log formats change; no monitoring of suppression rates.
Framework Mappings: - NIST CSF: PR.DS-01 (Data-at-rest protected) - CIS v8: Control 8 (Audit Log Management) - ISO 27001: A.8.15 (Logging) - NIST 800-53: AU-4 (Audit Log Storage Capacity) - MITRE ATT&CK: Data management lens - MITRE D3FEND: D3-DA (Data Analysis)
Nexus SecOps-020: Data Lineage and Cataloging¶
| Field | Value |
|---|---|
| Domain | DQN |
| Maturity Level | 3 |
| Requirement | Organizations SHOULD maintain data lineage records documenting the origin, transformations, and storage location of security data. A data catalog SHOULD be searchable by analysts. |
Rationale: During investigations, analysts need to know where data came from, how it was transformed, and where to find it. Without lineage, investigation efficiency suffers.
Implementation Guidance: Document data flow from source to SIEM for each log type. Record transformations applied (parsing rules, enrichment, normalization). Make catalog searchable. Update catalog when pipelines change.
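A searchable catalog can be as simple as structured records with a lookup function. The entry schema and sample values below are illustrative assumptions, not a standard format.

```python
from datetime import date

# Assumed catalog entries: one record per log type, lineage captured as an
# ordered list of transformations from source to SIEM index.
CATALOG = [
    {"log_type": "windows_security", "source": "domain controllers",
     "transforms": ["parse EVTX", "normalize fields", "asset enrichment"],
     "index": "siem-windows", "last_reviewed": date(2024, 5, 1)},
    {"log_type": "vpc_flow", "source": "cloud VPCs",
     "transforms": ["parse", "GeoIP enrichment"],
     "index": "siem-netflow", "last_reviewed": date(2024, 3, 15)},
]

def find(term: str) -> list[dict]:
    """Case-insensitive search across log type, source, and index name."""
    t = term.lower()
    return [e for e in CATALOG
            if t in e["log_type"] or t in e["source"] or t in e["index"]]

def stale(today: date, max_age_days: int = 180) -> list[str]:
    """Entries unreviewed for longer than ~6 months, per the control's target."""
    return [e["log_type"] for e in CATALOG
            if (today - e["last_reviewed"]).days > max_age_days]
```

The `stale` check is what turns the catalog from shelfware into a maintained artifact: running it on a schedule surfaces the entries the staleness metric targets.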
Evidence to Collect: Data catalog or data dictionary; lineage documentation for 5 critical log sources; evidence catalog is updated after pipeline changes.
Tests/Validation: Ask an analyst to find the source system for a specific event type using the catalog. Measure time to locate the information.
Metrics: Log source types with documented lineage (target: ≥90%); catalog search time (target: <2 minutes for common sources); catalog staleness (target: no entry unreviewed >6 months).
Common Pitfalls: Catalog exists but is not maintained; lineage documented only for new sources; analysts unaware the catalog exists; catalog kept in a format inaccessible during incidents.
Framework Mappings: - NIST CSF: ID.AM-05 (Resources prioritized) - CIS v8: Control 1 (Asset Inventory) - ISO 27001: A.8.1 (Asset inventory) - NIST 800-53: CM-8 (System Component Inventory) - MITRE ATT&CK: Investigation support lens - MITRE D3FEND: D3-DA (Data Analysis)
Nexus SecOps-021: Timeliness SLA for Log Delivery¶
| Field | Value |
|---|---|
| Domain | DQN |
| Maturity Level | 3 |
| Requirement | Organizations MUST define and enforce maximum acceptable log delivery latency from source to SIEM for each log source tier. Tier 1 critical sources MUST have latency ≤5 minutes. Other sources SHOULD have latency ≤15 minutes. |
Rationale: Detection is only as timely as log delivery. A 30-minute log delivery delay means a 30-minute minimum MTTD regardless of detection rule quality.
Implementation Guidance: Measure end-to-end log latency (event generation to searchable in SIEM). Define SLAs per tier. Alert when SLA is breached. Review latency metrics weekly.
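The end-to-end measurement and per-tier SLA check can be sketched as below. Field names and the tier map are assumptions; the SLA values come from the control text (Tier 1 ≤5 minutes, others ≤15).

```python
# SLA ceilings in seconds, per tier, taken from the control requirement.
SLA_SECONDS = {1: 300, 2: 900, 3: 900}

def latency_breaches(events: list[dict], tier_of: dict[str, int]):
    """Yield (source, latency_seconds) for events exceeding their tier SLA.

    Latency is end to end: from event generation time on the source system
    to the time the event became searchable in the SIEM (epoch seconds),
    which avoids the pitfall of measuring from the collection agent.
    """
    for e in events:
        latency = e["searchable_at"] - e["generated_at"]
        if latency > SLA_SECONDS[tier_of[e["source"]]]:
            yield e["source"], latency
```

Feeding the breach stream into an alert rule gives the SLA-breach alerting the control requires, and aggregating it per source produces the compliance report.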
Evidence to Collect: Latency measurement methodology; per-source latency dashboard; SLA definition document; alert configuration for latency breaches; sample SLA compliance report.
Tests/Validation: Generate a test event on a source system and measure time until it is searchable in SIEM. Compare against defined SLA.
Metrics: Percentage of Tier 1 sources meeting ≤5 minute SLA (target: ≥99%); median log delivery latency per tier; latency SLA breach frequency (target: <1 per month per source).
Common Pitfalls: Latency measured from collection agent, not source event time; batch delivery intervals create latency spikes; network congestion not accounted for; SLA not defined per tier.
Framework Mappings: - NIST CSF: DE.CM-01 (Networks monitored) - CIS v8: Control 8 (Audit Log Management) - ISO 27001: A.8.15 (Logging) - NIST 800-53: AU-5 (Response to Audit Logging Failures) - MITRE ATT&CK: Detection timeliness lens - MITRE D3FEND: D3-NTA (Network Traffic Analysis)
Nexus SecOps-022 through Nexus SecOps-030: Additional DQN Controls¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-022 | Field Type Validation | MUST validate that fields conform to expected data types (IP addresses, timestamps, enumerated values) before indexing. | 2 |
| Nexus SecOps-023 | Schema Evolution Management | MUST have a documented process for updating the schema when log source formats change, including backward compatibility review. | 3 |
| Nexus SecOps-024 | GeoIP Enrichment | SHOULD enrich all external IP addresses with geographic context (country, ASN, organization) updated at least monthly. | 3 |
| Nexus SecOps-025 | Asset Context Enrichment | MUST enrich events with asset context (hostname, owner, criticality tier, OS) from the asset inventory for all Tier 1 assets. | 3 |
| Nexus SecOps-026 | User Identity Resolution | SHOULD resolve usernames across systems to a canonical identity, mapping service accounts, email addresses, and UPNs to a single user record. | 3 |
| Nexus SecOps-027 | Completeness Monitoring | MUST monitor required field completeness per log source and alert when completeness drops below defined thresholds (target: ≥99%). | 3 |
| Nexus SecOps-028 | Threat Intel IOC Enrichment | MUST enrich events with threat intelligence indicator matches from at least one curated feed. Matches SHOULD include confidence and TLP metadata. | 3 |
| Nexus SecOps-029 | Format Validation at Ingest | MUST reject malformed log entries that cannot be parsed, log the rejection, and alert when rejection rate exceeds threshold (target: <0.1%). | 2 |
| Nexus SecOps-030 | Data Retention Tier Management | MUST implement automated lifecycle management transitioning data between hot, warm, and cold tiers per the retention schedule with documented procedures and tested restoration. | 3 |
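The field type validation and ingest rejection controls above (Nexus SecOps-022 and -029) share a common shape: check each field against its expected type before indexing and report what failed. A minimal sketch, with an assumed field set and enumeration:

```python
import ipaddress
from datetime import datetime

ALLOWED_ACTIONS = {"allow", "deny", "drop"}  # illustrative enumerated values

def validate(event: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the event passes."""
    errors = []
    try:
        ipaddress.ip_address(event.get("src_ip", ""))
    except ValueError:
        errors.append("src_ip: not a valid IP address")
    try:
        datetime.fromisoformat(event.get("timestamp", ""))
    except ValueError:
        errors.append("timestamp: not ISO 8601")
    if event.get("action") not in ALLOWED_ACTIONS:
        errors.append("action: not in allowed enumeration")
    return errors
```

An ingest pipeline would route events with a non-empty error list to a rejection log and track the rejection rate against the <0.1% target in Nexus SecOps-029.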
Domain: DET — Detection Engineering & Content Ops (Nexus SecOps-031–050)¶
Nexus SecOps-031: Detection Content Lifecycle Management¶
| Field | Value |
|---|---|
| Domain | DET |
| Maturity Level | 3 |
| Requirement | Organizations MUST implement a formal detection content lifecycle with defined stages: creation, review, testing, deployment, monitoring, tuning, and retirement. All stages MUST be documented and tracked. |
Rationale: Detection rules that are not managed deteriorate over time — they generate false positives, miss evolved threats, or become irrelevant. A lifecycle process keeps detection content effective.
Implementation Guidance: Define lifecycle stages and gate criteria for each. Use a ticketing system or detection management platform to track each rule through its lifecycle. Require sign-off before production deployment. Schedule regular rule reviews.
Evidence to Collect: Detection content lifecycle policy; workflow screenshots showing rules in different stages; gate criteria documentation; evidence of sign-off before deployment.
Tests/Validation: Pull a random sample of 10 production detection rules and verify each has lifecycle documentation, last review date, and owner.
Metrics: Detection rules with documented lifecycle stage (target: 100%); rules with review overdue >90 days (target: 0); detection content churn rate (rules retired/deployed per quarter tracked).
Common Pitfalls: Rules deployed and never reviewed; no retirement process; lifecycle exists on paper but not enforced; ownership not assigned.
Framework Mappings: - NIST CSF: DE.CM-01 (Networks monitored) - CIS v8: Control 8 (Audit Log Management) - ISO 27001: A.8.8 (Management of technical vulnerabilities) - NIST 800-53: SI-3 (Malicious Code Protection), CA-7 (Continuous Monitoring) - MITRE ATT&CK: Detection engineering lens - MITRE D3FEND: D3-DA (Data Analysis)
Nexus SecOps-032: Detection-as-Code Standards¶
| Field | Value |
|---|---|
| Domain | DET |
| Maturity Level | 3 |
| Requirement | Organizations SHOULD store detection rules as code in a version control system. Detection rules MUST use a documented, consistent format. Changes MUST be tracked with author, date, and rationale. |
Rationale: Treating detection rules as code enables peer review, automated testing, an auditable change history, and rollback: the same benefits version control brought to software engineering.
Implementation Guidance: Store all detection rules in git. Use a structured format (Sigma, YARA, or platform-native with documented schema). Require pull request review before merging. Tag rules with metadata: author, ATT&CK technique, log sources required, last tested date.
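The metadata tagging described above is only useful if it is enforced. A minimal lint check that a CI job could run over every rule in the repository is sketched below; the required field names are assumptions drawn from the guidance, not a fixed schema.

```python
# Metadata fields the guidance suggests every rule should carry.
REQUIRED_METADATA = {"author", "attack_technique", "log_sources", "last_tested"}

def lint_rule(rule: dict) -> list[str]:
    """Flag missing or empty metadata fields on a parsed detection rule.

    A CI pipeline would parse each rule file (e.g. Sigma YAML) into a dict,
    run this check, and fail the pull request on any non-empty result.
    """
    meta = rule.get("metadata", {})
    return sorted(f"missing metadata: {f}"
                  for f in REQUIRED_METADATA if not meta.get(f))
```

Enforcing this in CI rather than by convention closes the common pitfall of metadata fields left empty.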
Evidence to Collect: Git repository showing detection rules; pull request workflow; rule metadata schema; example rule with full metadata.
Tests/Validation: Review the git history for 5 recently changed rules. Verify each has a PR, reviewer, and commit message explaining the change.
Metrics: Detection rules in version control (target: 100%); rules with complete metadata (target: ≥95%); average PR review time (target: <24 hours); rules deployed without PR review (target: 0).
Common Pitfalls: Version control adopted but reviews bypassed; metadata fields empty; no enforcement of format standards; detection platform and git repo drift.
Framework Mappings: - NIST CSF: PR.IP-01 (Baseline configuration) - CIS v8: Control 4 (Secure Configuration) - ISO 27001: A.8.9 (Configuration management) - NIST 800-53: CM-3 (Configuration Change Control) - MITRE ATT&CK: Detection engineering lens - MITRE D3FEND: D3-DA (Data Analysis)
Nexus SecOps-033 through Nexus SecOps-050: Additional DET Controls¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-033 | ATT&CK Coverage Mapping | MUST maintain a mapping of detection rules to MITRE ATT&CK techniques. Coverage gaps MUST be reviewed quarterly and drive detection roadmap. | 3 |
| Nexus SecOps-034 | Detection Rule Testing | MUST test all new detection rules against representative synthetic data before production deployment. Tests MUST verify both true positive detection and absence of false positives on benign data. | 3 |
| Nexus SecOps-035 | False Positive Management | MUST track false positive rates per detection rule. Rules exceeding defined FP threshold (e.g., >20% of alerts are FP) MUST be tuned or reviewed within defined SLA. | 3 |
| Nexus SecOps-036 | Detection Tuning Process | MUST have a documented tuning process including: who can tune, what changes require review, how tuning decisions are documented, and how impact is measured. | 3 |
| Nexus SecOps-037 | Mean Time to Detect (MTTD) SLA | MUST define and measure MTTD for critical threat categories. MTTD SHOULD be ≤60 minutes for critical severity threats. | 3 |
| Nexus SecOps-038 | Purple Team Validation | SHOULD conduct purple team exercises at least annually to validate detection rules against realistic adversary simulation. Gaps identified MUST be tracked to remediation. | 4 |
| Nexus SecOps-039 | Detection Content Peer Review | MUST require peer review of all new detection rules by at least one experienced detection engineer before production deployment. | 3 |
| Nexus SecOps-040 | Behavioral Detection | SHOULD implement behavioral detection rules (anomaly-based, sequence-based, or statistical) in addition to signature-based detection. | 4 |
| Nexus SecOps-041 | ML-Based Detection | MAY deploy machine learning models for detection. If deployed, models MUST be governed per the AIM domain controls (Nexus SecOps-161–180). | 4 |
| Nexus SecOps-042 | Detection Coverage Gaps Process | MUST have a documented process for identifying and prioritizing detection coverage gaps. Gaps MUST be reviewed at least quarterly. | 3 |
| Nexus SecOps-043 | Detection Content Versioning | MUST assign version numbers to detection rules. Rollback to previous versions MUST be possible within defined SLA. | 3 |
| Nexus SecOps-044 | Detection Performance Monitoring | MUST monitor detection rule performance metrics including: alert volume, true positive rate, false positive rate, and query performance. | 3 |
| Nexus SecOps-045 | Alert Severity Calibration | MUST define severity levels for alerts with documented criteria. Severity calibration MUST be reviewed when FP rates or escalation patterns change. | 3 |
| Nexus SecOps-046 | Correlation Rule Management | MUST manage multi-event correlation rules with the same lifecycle rigor as single-event rules. Correlation windows and join conditions MUST be documented. | 3 |
| Nexus SecOps-047 | Threshold Tuning Documentation | MUST document the rationale for all numerical thresholds in detection rules (e.g., "5 failed logins in 1 minute"). Thresholds MUST be reviewed when log volumes change significantly. | 3 |
| Nexus SecOps-048 | Detection Retirement Process | MUST have a documented process for retiring detection rules that are obsolete, consistently low-fidelity, or replaced by better rules. Retired rules MUST be archived, not deleted. | 3 |
| Nexus SecOps-049 | Threat-Informed Detection | SHOULD use threat intelligence to drive detection priorities. New high-confidence threat intel SHOULD trigger review of related detection coverage within 72 hours. | 4 |
| Nexus SecOps-050 | Detection CI/CD Pipeline | SHOULD implement a CI/CD pipeline for detection rule deployment including automated testing, linting, and deployment to production with rollback capability. | 4 |
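The false-positive tracking in Nexus SecOps-035 and the performance monitoring in Nexus SecOps-044 both reduce to per-rule aggregation of triage dispositions. A minimal sketch, with the 20% threshold taken from the control's example:

```python
FP_THRESHOLD = 0.20  # example from the control: >20% of a rule's alerts are FPs

def rules_needing_tuning(alerts: list[dict]) -> list[str]:
    """Return rule names whose false-positive rate exceeds the threshold.

    Each alert dict carries the triggering rule name and the analyst's
    disposition ("tp", "fp", or "benign") recorded at triage.
    """
    counts: dict[str, list[int]] = {}   # rule -> [fp_count, total]
    for a in alerts:
        fp, total = counts.setdefault(a["rule"], [0, 0])
        counts[a["rule"]] = [fp + (a["disposition"] == "fp"), total + 1]
    return sorted(r for r, (fp, total) in counts.items()
                  if fp / total > FP_THRESHOLD)
```

Routing the resulting rule list into the tuning queue with an SLA clock gives the "tuned or reviewed within defined SLA" behavior the control requires.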
Domain: TRI — Triage & Investigation (Nexus SecOps-051–065)¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-051 | Alert Queue Management | MUST maintain a managed alert queue with defined prioritization criteria. All alerts MUST be assigned to an analyst within defined SLA (Crit: 15 min; High: 1 hr; Med: 4 hr; Low: 24 hr). | 2 |
| Nexus SecOps-052 | Triage SLA Definition and Measurement | MUST define triage SLAs by severity and measure compliance. SLA compliance MUST be reported at least monthly. | 3 |
| Nexus SecOps-053 | Triage Playbooks | MUST maintain documented triage playbooks for the most common alert types. Playbooks MUST be accessible during triage and reviewed at least annually. | 3 |
| Nexus SecOps-054 | Enrichment Automation | SHOULD automate context gathering (asset lookup, user info, GeoIP, threat intel match) before presenting alert to analyst. Enrichment MUST complete within 60 seconds of alert creation. | 3 |
| Nexus SecOps-055 | Analyst Decision Documentation | MUST require analysts to document their triage decision (TP/FP/Benign) with rationale for all alerts above Low severity. Documentation MUST be retained for audit. | 3 |
| Nexus SecOps-056 | Escalation Criteria | MUST define documented criteria for escalating alerts from Tier 1 to Tier 2/3. Escalation criteria MUST include technical indicators and time-based triggers. | 3 |
| Nexus SecOps-057 | False Positive Feedback Loop | MUST have a process for analysts to flag false positives and route them to the detection engineering team for tuning. Feedback MUST be reviewed and acted upon within defined SLA. | 3 |
| Nexus SecOps-058 | Alert Grouping and Deduplication | SHOULD group related alerts into cases or incidents to reduce analyst context-switching. Grouping logic MUST be documented and tested. | 3 |
| Nexus SecOps-059 | Alert Priority Scoring | SHOULD implement an automated risk score for alerts that combines alert severity with asset criticality, user risk score, and threat intel context. | 4 |
| Nexus SecOps-060 | Investigation Notebooks | SHOULD provide analysts with a structured investigation notebook or case management interface that captures the full investigation record, linked evidence, and timeline. | 3 |
| Nexus SecOps-061 | Evidence Preservation During Triage | MUST document requirements for preserving evidence during triage. Analysts MUST NOT take actions that modify or destroy evidence without approval. | 3 |
| Nexus SecOps-062 | Scope Assessment | MUST train analysts to assess the potential scope of an alert (single system, multiple systems, enterprise-wide) as part of triage. Scope assessment MUST be documented. | 3 |
| Nexus SecOps-063 | Communication During Triage | MUST define communication requirements during triage for high and critical alerts, including who to notify and within what timeframe. | 3 |
| Nexus SecOps-064 | Triage Quality Review | SHOULD conduct regular quality reviews of triage decisions (sampling ≥5% of closed alerts monthly) to identify training needs and systemic issues. | 4 |
| Nexus SecOps-065 | Self-Service Analyst Tooling | SHOULD provide analysts with self-service tools for common investigation tasks (IP lookup, domain lookup, hash lookup, user history) to reduce investigation time. | 3 |
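The automated risk scoring in Nexus SecOps-059 combines alert severity with context signals. A minimal sketch follows; the weights, tier semantics, and user-risk threshold are illustrative assumptions that a real deployment would calibrate.

```python
# Assumed base scores per severity level.
SEVERITY = {"low": 10, "medium": 30, "high": 60, "critical": 90}

def priority_score(alert: dict) -> int:
    """Combine alert severity with asset, user, and intel context (0-100)."""
    score = SEVERITY[alert["severity"]]
    if alert.get("asset_tier") == 1:
        score += 15            # critical asset raises priority
    if alert.get("user_risk", 0) >= 70:
        score += 10            # already-risky user raises priority
    if alert.get("intel_match"):
        score += 20            # threat-intel indicator match raises priority
    return min(score, 100)
```

Sorting the queue by this score rather than raw severity is what lets a medium alert on a Tier 1 asset with an intel match jump ahead of an uncontextualized high alert.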
Domain: INC — Incident Response (Nexus SecOps-066–080)¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-066 | IR Plan Existence | MUST have a documented Incident Response Plan approved by management. The plan MUST define: classification, severity, roles, communication, escalation, and legal/regulatory notification. | 2 |
| Nexus SecOps-067 | IR Plan Testing | MUST test the IR plan at least annually through tabletop exercises. Findings from exercises MUST be tracked to remediation. | 3 |
| Nexus SecOps-068 | Incident Classification | MUST define an incident classification taxonomy with documented criteria for each category (e.g., malware, data breach, insider threat, DDoS). | 2 |
| Nexus SecOps-069 | Incident Severity Levels | MUST define severity levels (at minimum: Critical, High, Medium, Low) with documented criteria, escalation paths, and response time expectations. | 2 |
| Nexus SecOps-070 | Containment Procedures | MUST document containment procedures for each incident category. Procedures MUST include both short-term (immediate) and long-term containment options. | 3 |
| Nexus SecOps-071 | Evidence Handling and Chain of Custody | MUST document evidence handling procedures including collection, labeling, storage, and chain of custody. Procedures MUST meet requirements for potential legal proceedings. | 3 |
| Nexus SecOps-072 | Eradication Procedures | MUST document eradication procedures for common incident types. Eradication MUST be verified before recovery begins. | 3 |
| Nexus SecOps-073 | Recovery Procedures | MUST document recovery procedures including validation steps to confirm systems are clean before returning to production. | 3 |
| Nexus SecOps-074 | Post-Incident Review (PIR) | MUST conduct a Post-Incident Review for all Critical and High severity incidents within 5 business days of resolution. PIR findings MUST be tracked to remediation. | 3 |
| Nexus SecOps-075 | Incident Communication Plan | MUST define internal and external communication procedures including: who communicates, approved channels, escalation to executives, and legal counsel involvement. | 3 |
| Nexus SecOps-076 | Legal and Regulatory Notification | MUST document legal and regulatory notification requirements applicable to the organization (e.g., GDPR 72-hour breach notification). Triggers and timelines MUST be pre-defined. | 3 |
| Nexus SecOps-077 | Incident Timeline Construction | MUST maintain a detailed incident timeline for all Critical and High incidents. Timeline MUST be updated as new information is discovered and retained in case records. | 3 |
| Nexus SecOps-078 | IR Metrics | MUST track MTTR, incident volume by category, containment time, and re-infection rate. Metrics MUST be reviewed monthly and reported to management quarterly. | 3 |
| Nexus SecOps-079 | Incident Coordination | MUST define coordination procedures for incidents affecting multiple teams, business units, or third parties. A designated incident commander role MUST be defined for Critical incidents. | 3 |
| Nexus SecOps-080 | Lessons Learned Program | MUST maintain a lessons learned database populated from PIRs. Identified systemic improvements MUST be prioritized and tracked to completion. | 3 |
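The IR metrics in Nexus SecOps-078 fall out of timestamps already captured in case records. A minimal sketch of the monthly roll-up, with an assumed record shape (epoch-second timestamps and a category from the Nexus SecOps-068 taxonomy):

```python
from statistics import median

def ir_metrics(incidents: list[dict]) -> dict:
    """Median containment time and MTTR in hours, plus volume by category.

    Each incident dict carries epoch-second timestamps for detected,
    contained, and resolved, plus a classification category label.
    """
    by_category: dict[str, int] = {}
    for i in incidents:
        by_category[i["category"]] = by_category.get(i["category"], 0) + 1
    return {
        "median_containment_h": median((i["contained"] - i["detected"]) / 3600
                                       for i in incidents),
        "median_mttr_h": median((i["resolved"] - i["detected"]) / 3600
                                for i in incidents),
        "volume_by_category": by_category,
    }
```

Medians are used rather than means so that one long-running incident does not mask an otherwise improving trend; re-infection rate would need a separate linkage between cases and is omitted here.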
Domain: CTI — Threat Intelligence (Nexus SecOps-081–095)¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-081 | Threat Intelligence Platform | SHOULD deploy a Threat Intelligence Platform (TIP) or equivalent capability for ingesting, storing, and managing threat intelligence. | 3 |
| Nexus SecOps-082 | Indicator Lifecycle Management | MUST implement an indicator lifecycle policy defining: ingestion, validation, scoring, active use, expiry, and archival. Expired indicators MUST be automatically retired. | 3 |
| Nexus SecOps-083 | Strategic Intelligence | SHOULD consume strategic threat intelligence (industry reports, adversary trends) and brief leadership quarterly on relevant threats. | 3 |
| Nexus SecOps-084 | Tactical Intelligence | MUST consume tactical threat intelligence (TTPs, tools, techniques) and use it to validate and improve detection coverage within 72 hours of new high-confidence intelligence. | 3 |
| Nexus SecOps-085 | Operational Intelligence | SHOULD consume operational intelligence (campaigns, targets, infrastructure) to support active investigations and proactive hunting. | 3 |
| Nexus SecOps-086 | TLP and STIX/TAXII Standards | MUST apply Traffic Light Protocol (TLP) to all received and shared intelligence. SHOULD use STIX 2.1 and TAXII 2.1 for machine-readable intelligence exchange where supported. | 3 |
| Nexus SecOps-087 | Intel-Driven Detection | MUST maintain a process for translating threat intelligence into detection rules. Time from new intelligence to deployed detection MUST be defined and measured. | 3 |
| Nexus SecOps-088 | Campaign Tracking | SHOULD maintain records of tracked threat campaigns relevant to the organization, including TTPs, infrastructure, and victimology. | 4 |
| Nexus SecOps-089 | Threat Actor Profiling | SHOULD maintain profiles of threat actor groups relevant to the organization's sector, including their typical techniques for detection planning purposes. | 4 |
| Nexus SecOps-090 | Vulnerability Intelligence Integration | MUST integrate vulnerability intelligence (CVE data, exploit prediction) with asset data to prioritize detection and patching for actively exploited vulnerabilities. | 3 |
| Nexus SecOps-091 | Intel Quality Scoring | SHOULD implement confidence scoring and quality evaluation for threat intelligence sources. Low-quality sources SHOULD be deprioritized or removed. | 4 |
| Nexus SecOps-092 | Intel Dissemination | MUST have a process for disseminating relevant intelligence to stakeholders (SOC analysts, IR team, vulnerability team, leadership). Dissemination MUST respect TLP markings. | 3 |
| Nexus SecOps-093 | Intel Feedback Loop | MUST collect feedback from consumers of threat intelligence (detection engineers, IR analysts) to improve intel quality and relevance. | 4 |
| Nexus SecOps-094 | ISAC Participation | SHOULD participate in at least one relevant Information Sharing and Analysis Center (ISAC) for the organization's sector to receive and share threat intelligence. | 3 |
| Nexus SecOps-095 | Intel Reporting | MUST produce regular threat intelligence reports for SOC and management audiences. Reports MUST include: threat landscape summary, relevant indicators, and recommended actions. | 3 |
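The automatic retirement of expired indicators required by Nexus SecOps-082 is a straightforward age check against a per-type time-to-live. The TTL values below are illustrative policy assumptions, not values from this catalog:

```python
from datetime import datetime, timedelta

# Assumed time-to-live per indicator type; real values are a policy decision
# (network indicators age out fastest, file hashes remain valid longest).
TTL_DAYS = {"ip": 30, "domain": 90, "hash": 365}

def retire_expired(indicators: list[dict],
                   now: datetime) -> tuple[list[dict], list[dict]]:
    """Split indicators into (active, retired) based on last-seen age."""
    active, retired = [], []
    for ioc in indicators:
        ttl = timedelta(days=TTL_DAYS[ioc["type"]])
        (retired if now - ioc["last_seen"] > ttl else active).append(ioc)
    return active, retired
```

Running this on a schedule and archiving (not deleting) the retired list satisfies both the auto-retirement requirement and the archival stage of the lifecycle.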
Domain: AUT — SOAR & Automation Safety (Nexus SecOps-096–110)¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-096 | SOAR Platform Deployment | SHOULD deploy a Security Orchestration, Automation, and Response (SOAR) platform or equivalent automation capability integrated with SIEM and key security tools. | 3 |
| Nexus SecOps-097 | Playbook Inventory | MUST maintain a documented inventory of all automation playbooks including: trigger condition, actions taken, human gates, owner, and last tested date. | 3 |
| Nexus SecOps-098 | Playbook Testing | MUST test all new playbooks in a non-production environment before deployment. Tests MUST cover: happy path, error conditions, and rollback scenarios. | 3 |
| Nexus SecOps-099 | Human-in-the-Loop Gates | MUST implement human approval gates for all automated actions that modify user access, isolate systems, or send external communications. Gates MUST have defined timeout behavior (approve/deny/escalate). | 3 |
| Nexus SecOps-100 | Automation Safety Checks | MUST implement pre-action safety checks in all playbooks that take response actions. Checks MUST validate scope, verify target, and confirm action is within approved parameters before execution. | 3 |
| Nexus SecOps-101 | Rollback Capability | MUST design all automated response actions with documented rollback procedures. Rollback MUST be executable within defined SLA without requiring the original playbook author. | 3 |
| Nexus SecOps-102 | Playbook Versioning | MUST maintain playbook versions in version control. Rollback to previous playbook version MUST be possible. Changes MUST be reviewed before deployment to production. | 3 |
| Nexus SecOps-103 | Automation Metrics | MUST track automation metrics including: playbook execution volume, success rate, error rate, time saved, and human gate utilization. Review monthly. | 3 |
| Nexus SecOps-104 | Enrichment Automation | SHOULD automate all routine enrichment tasks (IP lookup, hash lookup, user context) to reduce analyst time on repetitive tasks. Enrichment automation MUST have error handling. | 3 |
| Nexus SecOps-105 | Response Automation | MAY automate response actions (e.g., account disable, network isolation) only after human approval gates are implemented and tested. Automated response MUST be logged with full audit trail. | 4 |
| Nexus SecOps-106 | Notification Automation | MUST automate incident notifications per the communication plan. Notification templates MUST be reviewed and approved. Notification failures MUST be alerted. | 3 |
| Nexus SecOps-107 | Case Management Integration | MUST integrate automation platform with case management. All automated actions MUST be logged in the associated case record. | 3 |
| Nexus SecOps-108 | Approval Workflow | MUST implement an approval workflow system for high-impact automated actions. Approvals MUST be logged with approver identity, timestamp, and decision rationale. | 3 |
| Nexus SecOps-109 | Automation Error Handling | MUST implement error handling in all playbooks. Errors MUST be logged, alerted, and not silently fail. Failed playbooks MUST trigger escalation to human analyst. | 3 |
| Nexus SecOps-110 | Automation Audit Trail | MUST maintain a complete, tamper-evident audit trail of all automated actions taken by the SOAR platform including: action, target, initiating event, approver (if applicable), outcome, and timestamp. | 3 |
Domain: IAM — Identity & Access Signals (Nexus SecOps-111–120)¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-111 | Identity Telemetry Completeness | MUST collect authentication logs from all identity providers, covering success, failure, MFA, token issuance, and group membership change events. | 2 |
| Nexus SecOps-112 | Authentication Anomaly Detection | MUST implement detection rules for authentication anomalies including: brute force, password spray, and impossible travel. Rules MUST be tuned to organizational baselines. | 3 |
| Nexus SecOps-113 | Privilege Escalation Detection | MUST implement detection rules for privilege escalation including: admin account creation, group membership changes, and use of privileged accounts from unusual contexts. | 3 |
| Nexus SecOps-114 | Service Account Monitoring | MUST monitor service account authentication and flag anomalies including: interactive logons, logons outside expected time/location, and new service account creation. | 3 |
| Nexus SecOps-115 | MFA Status Monitoring | MUST monitor MFA enrollment and bypass events. Detection rules MUST fire on MFA bypass attempts and MFA disabled events. | 3 |
| Nexus SecOps-116 | Impossible Travel Detection | MUST implement impossible travel detection that flags authentication from geographically impossible locations within the expected travel time. | 3 |
| Nexus SecOps-117 | Identity Correlation | SHOULD maintain identity correlation mapping that links accounts across systems (AD username, email, cloud identity, SaaS accounts) to a single canonical user record. | 3 |
| Nexus SecOps-118 | PAM Integration | SHOULD integrate Privileged Access Management (PAM) systems with the SIEM. PAM session recordings SHOULD be accessible during investigations. | 3 |
| Nexus SecOps-119 | Identity Governance Signals | SHOULD ingest signals from Identity Governance and Administration (IGA) systems including: access reviews, orphaned accounts, and excessive permission flags. | 4 |
| Nexus SecOps-120 | Identity Incident Response | MUST maintain documented IR procedures specific to identity-based incidents (account compromise, insider threat). Procedures MUST include: account disable, session revocation, password reset, and forensic preservation. | 3 |
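The impossible travel detection in Nexus SecOps-116 reduces to comparing implied travel speed against a plausibility ceiling, using geolocation from GeoIP enrichment (Nexus SecOps-024). A minimal sketch; the 900 km/h ceiling is an assumed value roughly matching commercial flight speed:

```python
import math

MAX_SPEED_KMH = 900.0  # assumed ceiling, roughly commercial flight speed

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two lat/lon points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(login_a: tuple, login_b: tuple) -> bool:
    """Flag two logins whose implied speed exceeds the travel ceiling.

    Each login is (lat, lon, epoch_seconds). Simultaneous logins from
    distinct locations are treated as impossible.
    """
    (lat1, lon1, t1), (lat2, lon2, t2) = login_a, login_b
    km = haversine_km(lat1, lon1, lat2, lon2)
    hours = abs(t2 - t1) / 3600.0
    if hours == 0:
        return km > 0
    return km / hours > MAX_SPEED_KMH
```

In practice the rule needs tuning to the organization's baselines, e.g. allow-listing corporate VPN egress points whose GeoIP locations would otherwise look like teleportation.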
Domain: CLD — Cloud Security Operations (Nexus SecOps-121–135)¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-121 | Cloud Control Plane Logging | MUST enable control plane audit logging across all cloud accounts and subscriptions. Logging MUST be centralized and forwarded to SIEM within defined SLA. | 2 |
| Nexus SecOps-122 | Cloud Configuration Monitoring | MUST implement continuous cloud configuration monitoring (CSPM or equivalent). Policy violations MUST alert within defined SLA. | 3 |
| Nexus SecOps-123 | Cloud Workload Telemetry | SHOULD collect workload-level telemetry from critical cloud compute resources including process execution, network connections, and file modifications. | 3 |
| Nexus SecOps-124 | Serverless and Container Monitoring | SHOULD monitor serverless functions and container workloads for security events. Cold-start and ephemeral workload logs MUST be captured before termination. | 3 |
| Nexus SecOps-125 | Cloud Identity Monitoring | MUST monitor cloud IAM activity including: role assignments, policy changes, service account key creation, and federated identity usage. | 3 |
| Nexus SecOps-126 | Cloud Storage Monitoring | MUST monitor cloud storage services for: public access changes, bulk download events, cross-account access, and encryption changes. | 3 |
| Nexus SecOps-127 | Cloud Network Monitoring | SHOULD collect VPC flow logs, cloud DNS logs, and cloud load balancer access logs from all critical cloud environments. | 3 |
| Nexus SecOps-128 | Multi-Cloud Correlation | SHOULD implement cross-cloud correlation to detect attacks that span multiple cloud environments or cloud-to-on-premises movement. | 4 |
| Nexus SecOps-129 | Cloud Compliance Monitoring | MUST implement compliance monitoring for cloud environments against defined policy baselines. Non-compliant resources MUST be reported and tracked to remediation. | 3 |
| Nexus SecOps-130 | CSPM Integration | SHOULD integrate Cloud Security Posture Management (CSPM) findings with the SIEM and case management system for unified risk visibility. | 3 |
| Nexus SecOps-131 | CWPP Integration | SHOULD integrate Cloud Workload Protection Platform (CWPP) or cloud-native runtime security with the SIEM. | 3 |
| Nexus SecOps-132 | Cloud IR Procedures | MUST document cloud-specific IR procedures including: snapshot preservation, cloud forensic collection, cross-region containment, and cloud provider engagement process. | 3 |
| Nexus SecOps-133 | Cloud Forensics Readiness | SHOULD maintain documented procedures for cloud forensic data collection including: log preservation, snapshot creation, memory capture (where supported), and chain of custody. | 3 |
| Nexus SecOps-134 | Cloud Automation Safety | MUST apply SOAR automation safety controls (Nexus SecOps-099–110) to all cloud response automation. Cloud actions MUST have explicit human approval for irreversible operations. | 4 |
| Nexus SecOps-135 | Cloud Cost Anomaly for Security | MAY monitor cloud cost anomalies as a signal for cryptomining, data exfiltration, or unauthorized workload deployment. Cost anomalies SHOULD be correlated with security events. | 3 |
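Nexus SecOps-135 treats cloud cost anomalies as a security signal for cryptomining, exfiltration, or unauthorized workloads. A minimal sketch of such a detector compares each day's spend against a trailing-median baseline; the window size and spike factor below are assumed tuning values, not catalog requirements.

```python
from statistics import median

def cost_anomalies(daily_costs, window=7, factor=3.0):
    """Flag indices whose daily spend exceeds `factor` x the trailing
    `window`-day median. Window and factor are illustrative tuning
    parameters, not values mandated by Nexus SecOps-135."""
    anomalies = []
    for i in range(window, len(daily_costs)):
        baseline = median(daily_costs[i - window:i])
        if baseline > 0 and daily_costs[i] > factor * baseline:
            anomalies.append(i)
    return anomalies
```

Flagged days would then be correlated with security events (new compute deployments, IAM changes) per the SHOULD clause of the control.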
Domain: END — Endpoint & Network Operations (Nexus SecOps-136–150)¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-136 | EDR Deployment Coverage | MUST deploy EDR or equivalent endpoint telemetry on ≥99% of managed endpoints. Coverage MUST be measured and reported monthly. | 2 |
| Nexus SecOps-137 | EDR Telemetry Quality | MUST validate that EDR telemetry includes required fields: process name, process ID, parent process, command line, user, timestamp, hash, and network connections. | 3 |
| Nexus SecOps-138 | Network Detection Coverage | SHOULD deploy network detection capability (NDR or IDS equivalent) at perimeter and critical internal segments. Detection signatures MUST be updated at least weekly. | 3 |
| Nexus SecOps-139 | NDR Integration | SHOULD integrate network detection alerts with SIEM and correlate with endpoint and identity events for incident context. | 3 |
| Nexus SecOps-140 | Host-Based Detection | MUST implement host-based detection rules for critical threat categories including: malicious process execution, privilege escalation, persistence mechanisms, and lateral movement techniques. | 3 |
| Nexus SecOps-141 | File Integrity Monitoring | SHOULD implement file integrity monitoring on critical system files and configurations. Changes MUST alert within defined SLA. | 3 |
| Nexus SecOps-142 | Removable Media Monitoring | SHOULD monitor removable media connections and file transfers. Unauthorized removable media use MUST alert. | 3 |
| Nexus SecOps-143 | DNS Monitoring | MUST collect and analyze DNS query logs. Detection rules MUST fire on: DGA-like domains, known malicious domains, and unusual DNS query patterns. | 3 |
| Nexus SecOps-144 | Email Security Telemetry | MUST collect email security gateway logs and forward to SIEM. Detection MUST cover: phishing indicators, malicious attachment delivery, and suspicious link clicks. | 3 |
| Nexus SecOps-145 | Web Proxy Telemetry | SHOULD collect web proxy or DNS security gateway logs and forward to SIEM. Detection MUST cover: C2 beaconing patterns, malicious categorized URLs, and data exfiltration indicators. | 3 |
| Nexus SecOps-146 | Lateral Movement Detection | MUST implement detection rules for lateral movement techniques including: Pass-the-Hash, Pass-the-Ticket, remote service exploitation patterns (detection only), and admin share access. | 3 |
| Nexus SecOps-147 | Persistence Detection | MUST implement detection rules for persistence mechanisms including: scheduled task creation, service installation, registry run key modification, and startup folder modification. | 3 |
| Nexus SecOps-148 | Data Loss Detection | SHOULD implement detection rules for potential data exfiltration including: large upload to cloud storage, mass file access, unusual port activity, and encrypted channel anomalies. | 3 |
| Nexus SecOps-149 | Endpoint Forensics Readiness | MUST maintain documented procedures for endpoint forensic data collection including: memory dump, disk image, log preservation, and chain of custody. | 3 |
| Nexus SecOps-150 | Endpoint Health Monitoring | MUST monitor EDR agent health and alert when agents go offline, are tampered with, or are excluded from coverage. | 3 |
Domain: VUL — Vulnerability/Exposure Signal Integration (Nexus SecOps-151–160)¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-151 | Vulnerability Scan Coverage | MUST integrate vulnerability scan results with the SIEM. Asset vulnerability data MUST be used to enrich security events with exploitation-risk context. | 3 |
| Nexus SecOps-152 | Scan Frequency | MUST scan critical assets at least weekly and all other in-scope assets at least monthly. Scan results MUST be imported into the vulnerability management platform within 24 hours. | 3 |
| Nexus SecOps-153 | Vulnerability Prioritization | MUST prioritize vulnerabilities using risk-based scoring incorporating: CVSS, exploit availability, asset criticality, and network exposure. | 3 |
| Nexus SecOps-154 | Exploit Prediction Integration | SHOULD integrate exploit prediction data (e.g., EPSS scores) to prioritize patching of vulnerabilities with high exploitation probability. | 4 |
| Nexus SecOps-155 | Patch Status Correlation | MUST correlate patch status with detection rules to prioritize alerts on unpatched systems. Detection severity SHOULD increase for assets with known unpatched critical vulnerabilities. | 3 |
| Nexus SecOps-156 | Vulnerability-to-Detection Mapping | SHOULD maintain a mapping of known vulnerabilities affecting in-scope assets to detection rules that would detect exploitation attempts. | 4 |
| Nexus SecOps-157 | Exposure Management | SHOULD implement attack surface management to identify internet-exposed assets and correlate exposure with vulnerability and threat intelligence data. | 4 |
| Nexus SecOps-158 | Attack Surface Monitoring | SHOULD monitor for unauthorized changes to external attack surface (new exposed services, new subdomains, certificate changes). | 3 |
| Nexus SecOps-159 | Vulnerability SLA Tracking | MUST define and track patching SLAs by severity (Critical: 15 days; High: 30 days; Medium: 90 days). SLA compliance MUST be reported monthly. | 3 |
| Nexus SecOps-160 | Vulnerability Reporting | MUST produce regular vulnerability reports for technical and management audiences. Reports MUST include: open critical/high count, SLA compliance rate, and remediation trend. | 3 |
Domain: AIM — AI/ML Model Risk Management (Nexus SecOps-161–180)¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-161 | ML Model Inventory | MUST maintain an inventory of all ML models used in security operations including: model purpose, training data source, deployment date, owner, and risk level. | 3 |
| Nexus SecOps-162 | Model Risk Assessment | MUST conduct a risk assessment for each ML model before deployment covering: data quality risk, model accuracy risk, bias risk, and adversarial robustness risk. | 3 |
| Nexus SecOps-163 | Training Data Governance | MUST document the source, collection method, and validation process for all training data. Training data MUST be reviewed for bias and quality before use. | 3 |
| Nexus SecOps-164 | Model Validation | MUST validate all models before production deployment using held-out test data. Validation MUST measure: precision, recall, F1, AUC, and performance on adversarial examples. | 3 |
| Nexus SecOps-165 | Model Monitoring for Drift | MUST implement continuous monitoring for model drift (degradation in accuracy or behavior over time). Alerts MUST fire when performance drops below defined thresholds. | 3 |
| Nexus SecOps-166 | Model Explainability | SHOULD implement explainability capabilities for ML models used in alert generation or triage prioritization. Analysts MUST be able to understand why a model made a prediction. | 4 |
| Nexus SecOps-167 | Adversarial Robustness Testing | SHOULD test models for adversarial robustness: the ability to maintain performance when input data is subtly manipulated. | 4 |
| Nexus SecOps-168 | Bias Evaluation | MUST evaluate ML models for potential bias in outcomes related to protected characteristics. Bias evaluation results MUST be documented. | 3 |
| Nexus SecOps-169 | Model Versioning | MUST maintain versioned model artifacts. Rollback to a previous model version MUST be possible within defined SLA. | 3 |
| Nexus SecOps-170 | Model Rollback Procedure | MUST document model rollback procedures. Rollback MUST be executable by on-call staff without requiring the model's original developer. | 3 |
| Nexus SecOps-171 | ML Pipeline Security | MUST apply security controls to the ML pipeline including: access control to training data, code signing for model artifacts, and audit logging of pipeline executions. | 4 |
| Nexus SecOps-172 | Feature Store Security | SHOULD implement access controls on feature stores. Unauthorized modification of features used in production models MUST alert. | 4 |
| Nexus SecOps-173 | Model Access Control | MUST implement access controls on production models. Model endpoints MUST be authenticated, and access MUST be logged. | 3 |
| Nexus SecOps-174 | Inference Logging | MUST log all production model inferences including: input features (non-PII), prediction, confidence score, and timestamp. Logs MUST be retained per the log retention policy. | 3 |
| Nexus SecOps-175 | Model Performance Metrics | MUST track model performance metrics in production (precision, recall, FP rate) and compare against validated thresholds. Review weekly. | 3 |
| Nexus SecOps-176 | Anomaly Detection Model Ops | For anomaly detection models: MUST define baseline training window, retraining schedule, and process for reviewing anomalies flagged by the model. | 4 |
| Nexus SecOps-177 | Classification Model Ops | For classification models: MUST define class definitions, decision thresholds, and calibration procedure. Threshold changes MUST be documented and reviewed. | 4 |
| Nexus SecOps-178 | NLP Model Ops | For NLP models used in SOC (e.g., log parsing, alert summarization): MUST validate on representative security text. Performance MUST be monitored for new log formats and threat terminology. | 4 |
| Nexus SecOps-179 | Model Incident Response | MUST define IR procedures for ML model failures including: detection of anomalous model behavior, rollback, root cause analysis, and stakeholder notification. | 3 |
| Nexus SecOps-180 | AI Ethics Review | MUST conduct an ethics review of AI/ML systems used in security decisions that affect individuals (e.g., insider threat detection). Review MUST address: bias, fairness, transparency, and appeal rights. | 4 |
Domain: LLM — LLM Copilots & Guardrails (Nexus SecOps-181–200)¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-181 | LLM Deployment Inventory | MUST maintain an inventory of all LLM-based tools used in security operations including: model/provider, use case, data sensitivity level, and access controls. | 3 |
| Nexus SecOps-182 | Prompt Injection Defense | MUST implement controls to detect and prevent prompt injection in LLM-based security tools. Controls MUST include: input sanitization, instruction separation, and output validation. | 3 |
| Nexus SecOps-183 | LLM Output Filtering | MUST implement output filtering that prevents LLM responses from containing: sensitive internal data, credentials, PII, and harmful content. Filtered outputs MUST be logged. | 3 |
| Nexus SecOps-184 | Grounding and RAG Requirements | SHOULD ground LLM responses in verified organizational knowledge using Retrieval-Augmented Generation (RAG). Grounded responses MUST cite their sources. | 4 |
| Nexus SecOps-185 | Hallucination Detection | MUST implement controls to detect and flag potentially hallucinated outputs from LLMs. Controls MUST include: fact-checking against known sources and confidence scoring. | 3 |
| Nexus SecOps-186 | PII and Sensitive Data Filtering | MUST implement input filtering that prevents sending PII, credentials, or classified information to external LLM APIs. Filtering MUST be applied before data leaves organizational boundaries. | 3 |
| Nexus SecOps-187 | LLM Audit Logging | MUST log all LLM interactions in security tools including: prompt (sanitized), response, user identity, timestamp, and model version. Logs MUST be retained per retention policy. | 3 |
| Nexus SecOps-188 | LLM Access Control | MUST implement role-based access controls for LLM-based security tools. Access MUST be logged and reviewed quarterly. | 3 |
| Nexus SecOps-189 | Token Budget Management | SHOULD implement per-user and per-application token budgets to prevent excessive LLM API usage. Budget alerts MUST be configured. | 3 |
| Nexus SecOps-190 | LLM Evaluation Framework | MUST implement a formal evaluation framework for LLM tools used in security operations. Evaluation MUST measure: accuracy on security tasks, safety (hallucination rate), relevance, and bias. | 4 |
| Nexus SecOps-191 | Human Oversight Requirements | MUST define and enforce human oversight requirements for LLM outputs. High-stakes decisions (e.g., containment recommendations) MUST require human review before action. | 3 |
| Nexus SecOps-192 | LLM Incident Response | MUST define IR procedures for LLM security incidents including: prompt injection attacks, data leakage via LLM, hallucinated IOCs acted upon, and model provider outages. | 3 |
| Nexus SecOps-193 | LLM Content Policy | MUST define and enforce a content policy for LLM tools specifying: permitted use cases, prohibited uses, escalation for edge cases, and consequence of misuse. | 3 |
| Nexus SecOps-194 | LLM Model Selection Criteria | MUST document criteria for selecting LLM models for security use including: security and privacy considerations, data residency requirements, and performance on security tasks. | 3 |
| Nexus SecOps-195 | Context Window Management | SHOULD implement context window management strategies to prevent sensitive data accumulation across sessions. Session isolation MUST be implemented for multi-user deployments. | 3 |
| Nexus SecOps-196 | LLM Integration Security | MUST apply secure coding practices to LLM integration code. API keys MUST be stored in secrets management systems, not in code or configuration files. | 3 |
| Nexus SecOps-197 | LLM Fine-Tuning Governance | For fine-tuned models: MUST apply the same governance as AIM domain (Nexus SecOps-161–180). Training data for security fine-tuning MUST be reviewed and sanitized. | 4 |
| Nexus SecOps-198 | Retrieval Pipeline Security | For RAG implementations: MUST secure the retrieval pipeline. Unauthorized modification of the knowledge base MUST alert. Retrieved content MUST be validated before inclusion in prompts. | 4 |
| Nexus SecOps-199 | LLM Performance Monitoring | MUST monitor LLM tool performance including: response accuracy (sampled human review), response time, error rate, and user satisfaction. Review monthly. | 3 |
| Nexus SecOps-200 | LLM Cost Management | SHOULD track LLM API costs by use case and user group. Anomalous cost spikes SHOULD alert as a potential security signal (unauthorized usage). | 3 |
Domain: GOV — Governance, Training & Resilience (Nexus SecOps-201–220)¶
| Control ID | Title | Requirement Summary | Maturity |
|---|---|---|---|
| Nexus SecOps-201 | Security Operations Policy | MUST maintain an approved Security Operations Policy defining scope, objectives, roles, responsibilities, and compliance requirements. Policy MUST be reviewed annually. | 2 |
| Nexus SecOps-202 | Change Management for Security Tools | MUST apply a change management process to all changes in security tooling (SIEM, EDR, SOAR, etc.). Changes MUST be tested and approved before production deployment. | 3 |
| Nexus SecOps-203 | Detection Content Change Control | MUST apply a documented change control process to all detection rule creation, modification, and retirement. Changes MUST be reviewed, tested, and approved. | 3 |
| Nexus SecOps-204 | Automation Change Control | MUST apply change control to all SOAR playbook changes. Production playbook changes MUST be tested in non-production before deployment. | 3 |
| Nexus SecOps-205 | Security Operations Training Program | MUST maintain a formal training program for SOC staff covering: technical skills, tool proficiency, and security concepts. All staff MUST complete required training within 30 days of hire. | 3 |
| Nexus SecOps-206 | Certification Requirements | SHOULD define relevant security certifications for each role. Staff SHOULD be supported in pursuing certifications relevant to their role. | 3 |
| Nexus SecOps-207 | Cross-Training Program | SHOULD implement cross-training to reduce key-person dependencies. Critical capabilities MUST be executable by at least two qualified staff members. | 3 |
| Nexus SecOps-208 | Tabletop Exercises | MUST conduct tabletop exercises at least annually covering the most likely and most impactful threat scenarios. Findings MUST be tracked to remediation. | 3 |
| Nexus SecOps-209 | Purple Team Program | SHOULD implement a regular purple team program (at least annually) to validate detection and response capabilities against realistic adversary simulation. | 4 |
| Nexus SecOps-210 | Operational Metrics Reporting | MUST produce operational metrics reports at least monthly for SOC leadership. Reports MUST include: alert volume, MTTD, MTTR, SLA compliance, and automation rate. | 3 |
| Nexus SecOps-211 | Executive Metrics Reporting | MUST produce executive security metrics reports at least quarterly. Reports MUST translate operational metrics to business risk language. | 3 |
| Nexus SecOps-212 | Compliance Mapping | MUST maintain a mapping of security operations controls to applicable regulatory and contractual requirements. Mapping MUST be reviewed annually. | 3 |
| Nexus SecOps-213 | Audit Readiness | MUST maintain audit-ready documentation for all security operations controls. Documentation MUST be accessible within 24 hours of an audit request. | 3 |
| Nexus SecOps-214 | Documentation Standards | MUST define and enforce documentation standards for all security operations procedures. All procedures MUST have: owner, version, review date, and approval signatures. | 3 |
| Nexus SecOps-215 | Knowledge Management | MUST maintain a searchable knowledge base for SOC staff. Knowledge base MUST include: runbooks, investigation notes, threat profiles, and lessons learned. | 3 |
| Nexus SecOps-216 | Staffing Model | MUST define a staffing model appropriate for the organization's security risk profile. Staffing requirements MUST be reviewed annually and updated based on threat landscape and tool changes. | 3 |
| Nexus SecOps-217 | SLA Framework | MUST define SLAs for all critical security operations functions. SLA performance MUST be measured and reported monthly. SLA breaches MUST trigger root cause analysis. | 3 |
| Nexus SecOps-218 | Continuous Improvement Program | MUST implement a continuous improvement program for security operations. Improvements MUST be tracked, measured, and reported quarterly. | 4 |
| Nexus SecOps-219 | Resilience Testing | MUST test the resilience of security operations infrastructure at least annually including: tool failover, backup systems, and recovery from simulated outages. | 3 |
| Nexus SecOps-220 | Lessons Learned Integration | MUST integrate lessons learned from incidents, exercises, and audits into training, processes, and controls. Lessons MUST be tracked from identification to completion. | 3 |
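The MTTD and MTTR figures in the monthly reporting of Nexus SecOps-210 can be computed directly from incident timestamps. A minimal sketch, assuming each incident record carries occurrence, detection, and resolution times; the record format is an assumed schema, since the catalog mandates the metrics, not a data model.

```python
from datetime import datetime

def soc_metrics(incidents):
    """Compute MTTD (occurrence to detection) and MTTR (detection to
    resolution) in minutes. The incident schema is an assumed shape;
    Nexus SecOps-210 mandates the metrics, not this record format."""
    n = len(incidents)
    mttd = sum((i["detected"] - i["occurred"]).total_seconds()
               for i in incidents) / n / 60
    mttr = sum((i["resolved"] - i["detected"]).total_seconds()
               for i in incidents) / n / 60
    return round(mttd, 1), round(mttr, 1)
```

For the executive reporting of Nexus SecOps-211, these operational figures would then be translated into business risk language (e.g., trend versus SLA rather than raw minutes).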