Nexus SecOps Maturity Model

The Nexus SecOps maturity model defines five progressive levels of security operations capability. Each level has specific characteristics, observable behaviors, and gate criteria that must be met before advancing.


Overview

```mermaid
graph LR
    L0["Level 0<br/>Non-Existent"] --> L1["Level 1<br/>Initial / Ad Hoc"]
    L1 --> L2["Level 2<br/>Developing"]
    L2 --> L3["Level 3<br/>Defined"]
    L3 --> L4["Level 4<br/>Managed"]
    L4 --> L5["Level 5<br/>Optimizing"]

    style L0 fill:#b71c1c,color:#fff
    style L1 fill:#e64a19,color:#fff
    style L2 fill:#f57f17,color:#fff
    style L3 fill:#2e7d32,color:#fff
    style L4 fill:#1565c0,color:#fff
    style L5 fill:#4a148c,color:#fff
```

Level 0 — Non-Existent

Description: The organization has no meaningful awareness of or capability for the control area. The topic has not been addressed intentionally.

Characteristics:

- No policies, procedures, or documentation exist.
- No tooling in place.
- No staff with relevant skills or responsibilities.
- The organization may not be aware of the risk.

Examples of Level 0:

- No centralized log collection exists.
- No incident response plan has been written.
- No one has been assigned responsibility for detection engineering.

Gate Criteria to Advance to Level 1:

- [ ] Someone is formally assigned responsibility for this capability.
- [ ] The risk associated with the gap is acknowledged.
- [ ] A plan to begin basic implementation exists.
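The gate pattern is the same at every level: a checklist that must be fully satisfied before the capability advances. A minimal sketch of that pattern (the class and field names here are illustrative, not part of the model):

```python
from dataclasses import dataclass, field

@dataclass
class GateCriterion:
    """One checklist item that must be satisfied before advancing a level."""
    description: str
    met: bool = False

@dataclass
class MaturityGate:
    """Gate between two adjacent maturity levels for one capability."""
    from_level: int
    to_level: int
    criteria: list = field(default_factory=list)

    def can_advance(self) -> bool:
        # Every criterion must be met; an empty gate never passes.
        return bool(self.criteria) and all(c.met for c in self.criteria)

# The Level 0 -> 1 gate above, expressed as data.
gate = MaturityGate(0, 1, [
    GateCriterion("Responsibility formally assigned", met=True),
    GateCriterion("Risk of the gap acknowledged", met=True),
    GateCriterion("Plan for basic implementation exists", met=False),
])
print(gate.can_advance())  # one criterion unmet -> False
```

Tracking gates as data rather than prose makes assessment results auditable: each criterion carries its own evidence of being met.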


Level 1 — Initial / Ad Hoc

Description: Basic capability exists but is reactive, undocumented, and highly dependent on specific individuals. Success is inconsistent and not repeatable.

Characteristics:

- Processes exist informally, based on tribal knowledge.
- Actions are taken reactively (after an incident or complaint).
- Documentation is minimal or absent.
- Results vary significantly depending on who is available.
- Tooling may exist but is not consistently used.

Examples of Level 1:

- Logs are collected from some systems, but sources are not inventoried and collection is inconsistent.
- Incidents are handled based on analyst judgment with no standard process.
- Detection rules exist but were never formally tested.

Gate Criteria to Advance to Level 2:

- [ ] Basic documentation has been created for the process.
- [ ] At least one person beyond the originator can execute the capability.
- [ ] A tool or platform is in place supporting the capability.


Level 2 — Developing

Description: The capability is partially documented and implemented. There is evidence of intent to standardize, but execution is inconsistent across teams and time.

Characteristics:

- Policies and procedures exist but may be outdated or incomplete.
- Tooling is in place and used by most team members.
- Processes are followed most of the time but not universally.
- Basic metrics are tracked but not regularly reviewed.
- Some automation exists but manual steps remain significant.

Examples of Level 2:

- Most critical log sources are collected; an inventory exists but is not current.
- Detection rules exist and are partially tested; tuning happens reactively.
- An IR plan exists but was last tested 2+ years ago.

Gate Criteria to Advance to Level 3:

- [ ] Policies and procedures are documented, reviewed, and approved.
- [ ] All relevant staff have been trained on the process.
- [ ] Metrics are formally tracked and reviewed on a regular cadence.
- [ ] Process is consistently followed (observable in records/tickets).


Level 3 — Defined

Description: The capability is fully documented, consistently executed across the team, and regularly measured. Processes are understood and followed by all relevant staff.

Characteristics:

- Comprehensive documentation: policy, procedure, runbooks.
- Consistent execution; deviations are rare and documented.
- Metrics tracked, reviewed, and used to drive decisions.
- Training program exists; all staff complete it.
- Tooling is fully deployed and integrated.
- Moderate automation reduces manual effort.

Examples of Level 3:

- Log source inventory is complete, reviewed quarterly, with coverage ≥95%.
- Detection rules follow a defined lifecycle with testing and sign-off.
- IR plan is tested annually; lessons learned are documented and acted upon.
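The ≥95% coverage bar only works if coverage is computed the same way every quarter. One simple convention is "share of inventoried sources actually reporting in the review window" — a sketch, with a hypothetical inventory:

```python
# Hypothetical inventory: each expected log source, and whether the SIEM
# actually received events from it during the review window.
inventory = {
    "domain-controllers": True,
    "edge-firewalls": True,
    "linux-servers": True,
    "saas-audit-logs": False,   # gap: onboarding incomplete
    "cloud-control-plane": True,
}

def coverage_pct(inventory: dict) -> float:
    """Share of inventoried sources currently reporting, as a percentage."""
    return 100.0 * sum(inventory.values()) / len(inventory)

pct = coverage_pct(inventory)
print(f"coverage {pct:.1f}% -> {'meets' if pct >= 95 else 'below'} the Level 3 bar")
```

Note the denominator is the inventory, not the sources already flowing — an incomplete inventory silently inflates the metric, which is why the gate requires the inventory itself to be complete and reviewed.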

Gate Criteria to Advance to Level 4:

- [ ] Quantitative performance targets are defined and consistently met.
- [ ] Significant automation is in place for repetitive tasks.
- [ ] Process performance is predictable within defined bounds.
- [ ] Continuous feedback loops exist to improve the process.


Level 4 — Managed

Description: The capability is quantitatively managed. Performance is predictable, automation is substantial, and continuous improvement is embedded in operations.

Characteristics:

- Statistical process control or quantitative targets with SLAs.
- High degree of automation; human intervention for exceptions only.
- Feedback loops: performance data drives regular improvements.
- Cross-team integration; capability supports other capabilities.
- Anomalies trigger investigation and root cause analysis.

Examples of Level 4:

- Log coverage ≥99% with automated alerting on coverage gaps.
- Detection rule deployment is fully automated via CI/CD with coverage metrics.
- MTTR is tracked, SLA compliance is ≥95%, and every breach triggers a review.
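MTTR and SLA compliance are straightforward to derive from incident records once detection and resolution timestamps are captured consistently. A minimal sketch, assuming a 4-hour SLA and hypothetical incident data:

```python
from datetime import datetime, timedelta

SLA = timedelta(hours=4)  # assumed target; set per your incident severity tiers

# Hypothetical incident records: (detected_at, resolved_at).
incidents = [
    (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 11, 30)),
    (datetime(2024, 5, 2, 14, 0), datetime(2024, 5, 2, 15, 0)),
    (datetime(2024, 5, 3, 8, 0),  datetime(2024, 5, 3, 13, 0)),  # SLA breach
]

durations = [resolved - detected for detected, resolved in incidents]
mttr = sum(durations, timedelta()) / len(durations)
within_sla = sum(d <= SLA for d in durations)
compliance = 100.0 * within_sla / len(durations)

print(f"MTTR: {mttr}, SLA compliance: {compliance:.1f}%")

# Per the Level 4 example above, each breach should open a review.
breaches = [i for i, d in enumerate(durations) if d > SLA]
```

At Level 4 the point is not the arithmetic but the loop around it: the breach list feeds a review queue, and the review output feeds back into detections and playbooks.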

Gate Criteria to Advance to Level 5:

- [ ] Predictive analytics are used to anticipate issues before they occur.
- [ ] The capability contributes to improvements in other capabilities.
- [ ] Industry benchmarking shows performance in the top quartile.
- [ ] AI/ML augments decision-making in this area.


Level 5 — Optimizing

Description: The capability is at the frontier of industry practice. It is predictive, AI-augmented, largely self-healing, and a source of innovation for the organization and potentially the broader community.

Characteristics:

- Predictive capabilities identify degradation before impact.
- AI/ML actively augments human decision-making.
- Self-healing or self-tuning elements reduce human effort.
- Organization contributes improvements back to the field (publications, open-source, intel sharing).
- Continuous innovation; the capability improves year over year.

Examples of Level 5:

- ML-based anomaly detection on log collection health flags coverage gaps before analysts notice.
- Detection engineering uses AI-assisted threat modeling and auto-generates candidate rules from threat reports.
- AI copilot assists analysts with triage, is monitored for accuracy, and its performance improves monthly.
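A full ML health model is out of scope here, but the core idea of the first example — flag a log source whose volume collapses before anyone notices — can be illustrated with a much simpler statistical stand-in (a rolling z-score, not the ML approach the example describes):

```python
import statistics

def volume_anomalies(daily_counts, window=7, threshold=3.0):
    """Flag days whose log volume deviates from the trailing window's mean
    by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(daily_counts)):
        base = daily_counts[i - window:i]
        mean, stdev = statistics.mean(base), statistics.stdev(base)
        if stdev and abs(daily_counts[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Hypothetical daily event counts from one log source; day 9 collapses.
counts = [1000, 1020, 980, 1010, 990, 1005, 1015, 995, 1008, 120]
print(volume_anomalies(counts))  # -> [9]
```

Even this crude baseline catches a silently broken forwarder a day after it fails; the Level 5 versions add seasonality, per-source models, and remediation that triggers without a human in the loop.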


Maturity by Domain — Reference Table

| Domain | Level 0 | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 |
| --- | --- | --- | --- | --- | --- | --- |
| TEL | No logging | Some logs | Most sources | Full coverage | Automated monitoring | Predictive health |
| DQN | No schema | Ad hoc parsing | Partial normalization | Full schema, enriched | Quality scored, tracked | ML-driven quality |
| DET | No rules | Manual rules | Tested rules | Managed lifecycle | CI/CD, ATT&CK coverage | AI-assisted creation |
| TRI | No process | Informal triage | Basic SLA | Defined workflow | Automated enrichment | AI-assisted triage |
| INC | No plan | Verbal response | Written plan | Tested plan | Integrated, automated | Predictive response |
| CTI | No intel | Manual feeds | TIP in place | Intel-driven detection | Automated integration | Predictive intel |
| AUT | No automation | Scripts | SOAR in place | Tested playbooks | Fully automated + safe | Self-optimizing |
| IAM | No identity signals | Manual review | Identity logs | Anomaly detection | Automated response | Behavioral AI |
| CLD | No cloud logging | Some cloud logs | Multi-cloud collection | Full cloud coverage | Automated remediation | Predictive cloud |
| END | EDR not deployed | Partial EDR | Full EDR | Integrated NDR | Automated response | Behavioral AI |
| VUL | No vuln data | Manual scans | Regular scans | Risk-prioritized | Integrated with detections | Predictive exposure |
| AIM | No AI used | Ad hoc models | Some governance | Full model lifecycle | MLOps, monitored | Self-optimizing AI |
| LLM | No LLM used | Informal LLM | Basic guardrails | Full governance | Evaluated, monitored | Autonomous with oversight |
| GOV | No governance | Informal | Basic policies | Comprehensive governance | Quantitative management | Continuous improvement |

Roadmap Planning by Current Level

If You Are at Level 0–1 (Focus: Get Basics In Place)

- Priority: TEL, INC, GOV
- Goal: Achieve Level 2 across all domains within 12 months
- Key actions: Assign ownership, create basic documentation, deploy foundational tooling

If You Are at Level 2 (Focus: Standardize and Measure)

- Priority: DET, TRI, AUT
- Goal: Achieve Level 3 across core domains within 12 months
- Key actions: Formalize processes, implement metrics, complete training program

If You Are at Level 3 (Focus: Automate and Quantify)

- Priority: AUT, AIM, LLM (if AI is deployed)
- Goal: Achieve Level 4 in core domains, Level 3 in AI domains within 18 months
- Key actions: Build automation, establish quantitative SLAs, deploy AI/ML selectively

If You Are at Level 4 (Focus: Optimize and Innovate)

- Priority: All domains, especially AIM and LLM
- Goal: Achieve Level 5 in 1–2 domains within 24 months
- Key actions: Deploy predictive capabilities, contribute to community, benchmark externally