
Nexus SecOps Scoring Methodology

This page defines how to calculate maturity scores at the control, domain, and overall levels, and how to interpret and use those scores.


Scoring Philosophy

Purpose of Scoring

Scores are a communication tool, not a goal. A score of 3.2 means little without the accompanying narrative of which controls have gaps and why. Use scores to track progress over time and communicate with stakeholders — not as the primary deliverable.


Control-Level Scoring

Each control receives a score of 0–5, corresponding directly to the maturity levels defined in the Maturity Model.

| Score | Label | Description |
|-------|-------|-------------|
| 0 | Non-Existent | No capability in place |
| 1 | Initial | Ad hoc, undocumented capability |
| 2 | Developing | Partial implementation, inconsistent |
| 3 | Defined | Fully documented and consistently practiced |
| 4 | Managed | Quantitatively managed with automation |
| 5 | Optimizing | Predictive, AI-augmented, frontier practice |

Scoring Criteria

When scoring a control, evaluate:

  1. Existence — Does the capability exist at all?
  2. Documentation — Is it documented and approved?
  3. Consistency — Is it consistently practiced across the team?
  4. Measurement — Are metrics tracked and reviewed?
  5. Automation — Is it automated where practical?
  6. Improvement — Is there a feedback loop for continuous improvement?

Each criterion roughly maps to a maturity level, providing a structured scoring ladder.
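The ladder can be sketched as a cumulative check: a control earns one point for each criterion met in order, stopping at the first unmet one. This is an illustrative reading of the criteria, not a formula from the workbook; the criterion names are assumptions.

```python
# Illustrative sketch: treat the six criteria as a cumulative ladder.
# Criterion names are assumptions, not fields from the official workbook.
CRITERIA = ["existence", "documentation", "consistency",
            "measurement", "automation", "improvement"]

def control_score(met: dict) -> int:
    """Count criteria met in order, stopping at the first gap; cap at 5."""
    score = 0
    for criterion in CRITERIA:
        if not met.get(criterion, False):
            break
        score += 1
    return min(score, 5)  # six criteria, but the scale tops out at 5
```

For example, a control that exists and is documented but is not consistently practiced would land at 2 (Developing) under this reading.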


Domain-Level Scoring

Domain Score = Average of all control scores within the domain

Domain Score = (Sum of Control Scores) / (Number of Controls)

Example — TEL Domain (15 controls):

| Control | Score |
|---------|-------|
| Nexus SecOps-001 | 4 |
| Nexus SecOps-002 | 3 |
| Nexus SecOps-003 | 3 |
| Nexus SecOps-004 | 2 |
| Nexus SecOps-005 | 3 |
| ... (10 more) | Avg: 3.0 |
| Domain Score | 3.0 |
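The domain formula is a plain arithmetic mean; a minimal sketch (the function name is illustrative):

```python
# Sketch: domain score as the mean of its control scores,
# rounded to one decimal place for reporting.
def domain_score(control_scores: list[float]) -> float:
    return round(sum(control_scores) / len(control_scores), 1)

# The five controls listed above average (4 + 3 + 3 + 2 + 3) / 5 = 3.0.
```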

Overall Score

Overall Score = Average of all Domain Scores

Overall Score = (Sum of Domain Scores) / 14 (the number of domains in the framework)

Optional: Weighted Scoring

Organizations may weight domains by risk priority:

| Domain | Default Weight | Example Risk-Weighted |
|--------|----------------|-----------------------|
| TEL | 1.0 | 1.5 (critical foundation) |
| DET | 1.0 | 1.5 (core mission) |
| INC | 1.0 | 1.5 (critical response) |
| AIM | 1.0 | 0.8 (if AI not yet deployed) |
| LLM | 1.0 | 0.5 (if no LLM in use) |
| Others | 1.0 | 1.0 |

If using weights:

Weighted Score = Sum of (Domain Score × Domain Weight) / Sum of Weights


Score Interpretation

| Score Range | Label | Interpretation |
|-------------|-------|----------------|
| 0.0 – 0.9 | Critical | Foundational gaps; significant incident risk |
| 1.0 – 1.9 | Low | Ad hoc capability; major improvement needed |
| 2.0 – 2.9 | Developing | Partial capability; significant inconsistency |
| 3.0 – 3.9 | Proficient | Solid foundation; room for optimization |
| 4.0 – 4.9 | Advanced | Strong capability; optimize for predictability |
| 5.0 | Optimizing | Industry-leading; sustain and contribute |
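The bands above map directly to a lookup; a minimal sketch (names are illustrative):

```python
# Sketch: map a numeric score to the interpretation bands above.
# Each tuple is (exclusive upper bound, label); exactly 5.0 is Optimizing.
BANDS = [(1.0, "Critical"), (2.0, "Low"), (3.0, "Developing"),
         (4.0, "Proficient"), (5.0, "Advanced")]

def interpret(score: float) -> str:
    for upper, label in BANDS:
        if score < upper:
            return label
    return "Optimizing"
```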

Handling Not Applicable (N/A) Controls

Some controls may not apply to all organizations:

  • CLD domain controls may be N/A for organizations with no cloud presence.
  • LLM domain controls may be N/A for organizations with no LLM deployment.
  • VUL domain controls may partially apply depending on program scope.

Rules for N/A:

  1. Document the rationale for the N/A designation.
  2. Get management approval for the N/A scoping decision.
  3. Exclude N/A controls from score calculations (don't count them as 0 or 5).
  4. Revisit N/A decisions at each annual assessment.

N/A Abuse

Overusing N/A designations inflates scores. Challenge any N/A for controls in TEL, DET, TRI, INC, and GOV — these apply to virtually all organizations with a security function.


Pass/Fail Thresholds

For organizations using Nexus SecOps for compliance or certification-style assessment:

| Threshold Type | Criteria |
|----------------|----------|
| Baseline Certification | Overall score ≥ 2.0; no domain below 1.5 |
| Standard Certification | Overall score ≥ 3.0; no domain below 2.5 |
| Advanced Certification | Overall score ≥ 4.0; no domain below 3.5 |

These thresholds are advisory. Organizations should define their own gates based on risk tolerance.


Tracking Progress Over Time

Score assessments should be tracked longitudinally to demonstrate improvement:

| Metric | Tracked Per Assessment |
|--------|------------------------|
| Overall score | Yes |
| Domain scores (all 14) | Yes |
| Number of controls at each maturity level | Yes |
| Number of critical findings | Yes |
| Number of findings remediated since last assessment | Yes |

Minimum reassessment frequency: Annual

Recommended trigger-based reassessment: after major incidents, major tooling changes, or new AI deployments.
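One of the tracked metrics, the count of controls at each maturity level, is a simple histogram over control scores. A minimal sketch (data shapes are illustrative):

```python
# Sketch: count how many controls sit at each maturity level (0-5),
# one of the per-assessment metrics tracked above.
from collections import Counter

def maturity_distribution(control_scores: dict) -> dict:
    counts = Counter(control_scores.values())
    return {level: counts.get(level, 0) for level in range(6)}
```

Comparing this distribution across assessments shows whether the program is lifting its weakest controls or only polishing its strongest.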


Scoring in the Workbook

The Self-Assessment Workbook and CSV include auto-calculated fields for domain and overall scores. Fill in the "Current_Score" column for each control, and the summary section will reflect the aggregated scores.