# How to Use This Benchmark
This page guides practitioners through using Nexus SecOps for self-assessment, team training, or formal audit preparation.
## Assessment Workflow
```mermaid
flowchart TD
    A[1. Define Scope] --> B[2. Assemble Team]
    B --> C[3. Review Maturity Model]
    C --> D[4. Complete Self-Assessment]
    D --> E[5. Gather Evidence]
    E --> F[6. Score Controls]
    F --> G[7. Calculate Domain Scores]
    G --> H[8. Document Findings]
    H --> I[9. Prioritize Gaps]
    I --> J[10. Build Remediation Roadmap]
    J --> K[11. Execute Improvements]
    K --> L{Reassess in 6-12 mo}
    L --> D
```

## Step 1 — Define Scope
!!! note "Scoping Questions"

    - Which geographic regions and business units are in scope?
    - Which environments are in scope (on-prem, AWS, Azure, GCP, hybrid)?
    - Full benchmark or focused domain review?
    - Who is the intended audience for findings?
| Scope Type | Description | Recommended For |
|---|---|---|
| Full Benchmark | All 14 domains, all 220 controls | Annual assessment, maturity program |
| Domain Subset | 3–5 most critical domains | Quick gap assessment, specific initiative |
| Single Domain Deep Dive | One domain, all controls | Targeted improvement project |
| Maturity Gate Review | Controls at target maturity level only | Pre-certification review |
## Step 2 — Assemble Assessment Team
| Role | Contribution |
|---|---|
| SOC Manager / Lead | Operational context, process knowledge |
| Senior Detection Engineer | Technical depth on detection controls |
| SIEM/Data Platform Engineer | Telemetry and data quality controls |
| IR Lead | Incident response domain |
| Security Automation Engineer | SOAR and automation controls |
| Threat Intel Analyst | CTI domain |
| AI/ML Practitioner | AIM and LLM domains |
| Compliance/Risk Officer | Governance and evidence review |
## Step 3 — Calibrate Using the Maturity Model

Read the Maturity Model before scoring. Calibration prevents score inflation.
| Common Mistake | Correction |
|---|---|
| Scoring Level 4 because a tool exists | Level 4 requires consistent, measured, automated practice |
| Scoring 0 when partial capability exists | Use Level 1 for ad-hoc, Level 2 for developing capability |
| Applying highest-water-mark score | Score based on consistent practice, not best-case |
## Step 4 — Work Through the Self-Assessment Workbook
Open `self-assessment.md` or the CSV version.
For each control:
- Read the Control Statement — the normative requirement.
- Evaluate Current State — what is your organization actually doing?
- Assign Current Score (0–5).
- Assign Target Score based on risk appetite.
- Note Evidence Available (Yes / Partial / No).
- Add Notes explaining your scoring rationale.
!!! warning "Be Honest"

    Assessment value is proportional to honesty. Score what is consistently true across the organization — not the best team or best week.
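If you maintain the workbook as the CSV version, a short script can flag rows that are missing a score or a rationale before review. A minimal sketch, assuming hypothetical column headers (`control_id`, `current_score`, `target_score`, `evidence`, `notes`) that may differ from the actual workbook:

```python
import csv

# Hypothetical column names -- adjust to match your workbook's actual headers.
REQUIRED_SCORES = ("current_score", "target_score")

def incomplete_rows(path: str) -> list[str]:
    """Return control IDs whose rows lack a numeric score or a scoring rationale."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            scores_ok = all(row.get(k, "").strip().isdigit() for k in REQUIRED_SCORES)
            if not scores_ok or not row.get("notes", "").strip():
                flagged.append(row.get("control_id", "<unknown>"))
    return flagged

print(incomplete_rows("self-assessment.csv"))
```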
## Step 5 — Gather Evidence
For each control scored above 0, collect the evidence listed in the control's "Evidence to Collect" field. See the Evidence Catalog.
| Evidence Type | Examples |
|---|---|
| Documentation | Policies, procedures, runbooks, architecture diagrams |
| Configuration | System configs, rule exports, playbook definitions |
| Logs/Metrics | Dashboard screenshots, report exports, trend data |
| Process Artifacts | Tickets, case records, change management logs |
| Interview Notes | Notes from discussions with practitioners |
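A consistent folder layout makes collected evidence easy to trace back to controls. A minimal sketch, assuming a hypothetical one-folder-per-control layout keyed by the five evidence types above (the control IDs shown are illustrative):

```python
from pathlib import Path

# Hypothetical layout: one folder per control, one subfolder per evidence type.
EVIDENCE_TYPES = ["documentation", "configuration", "logs-metrics",
                  "process-artifacts", "interview-notes"]

def scaffold_evidence(root: str, control_ids: list[str]) -> None:
    """Create an evidence folder tree so collectors file artifacts consistently."""
    for cid in control_ids:
        for etype in EVIDENCE_TYPES:
            Path(root, cid, etype).mkdir(parents=True, exist_ok=True)

scaffold_evidence("evidence", ["TEL-01", "DET-03"])  # illustrative control IDs
```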
## Step 6 — Score and Calculate
- Finalize scores for each control (0–5).
- Domain score = average of all control scores in the domain.
- Overall score = average of all domain scores.
See Scoring for detailed formulas and weighting options.
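Both averages are straightforward to compute directly. A minimal sketch in Python; the optional `weights` parameter is an assumption standing in for the weighting options described on the Scoring page:

```python
from statistics import mean

def domain_score(control_scores: list[int]) -> float:
    """Domain score = average of all control scores (0-5) in the domain."""
    return mean(control_scores)

def overall_score(domain_scores: dict[str, float],
                  weights: dict[str, float] | None = None) -> float:
    """Overall score = average of domain scores, optionally weighted
    (see the Scoring page for the supported weighting schemes)."""
    if weights is None:
        return mean(domain_scores.values())
    total = sum(weights.get(d, 1.0) for d in domain_scores)
    return sum(s * weights.get(d, 1.0) for d, s in domain_scores.items()) / total

scores = {"TEL": domain_score([3, 4, 2]), "DET": domain_score([2, 2, 3])}
print(round(overall_score(scores), 2))  # unweighted average of the two domains
```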
## Step 7 — Document Findings
Use the Findings Template. Each finding includes:
- Current vs. expected state description
- Risk impact if unaddressed
- Recommended remediation action
- Priority (Critical / High / Medium / Low)
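To keep findings uniform across assessors, the template fields can be captured as a small record type. A minimal sketch with illustrative field names; the actual Findings Template is authoritative:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One finding, mirroring the template fields (names are illustrative)."""
    control_id: str
    current_state: str
    expected_state: str
    risk_impact: str       # risk if unaddressed
    remediation: str       # recommended action
    priority: str          # Critical / High / Medium / Low

f = Finding("DET-05", "Rules deployed ad hoc", "Versioned detection pipeline",
            "Detection drift goes unnoticed", "Adopt detection-as-code", "High")
```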
## Step 8 — Prioritize

Priority logic:

- High Impact + Low Effort = Quick Win (do first)
- High Impact + High Effort = Strategic Project (plan carefully)
- Low Impact + Low Effort = Batch and schedule
- Low Impact + High Effort = Challenge the requirement, deprioritize
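This two-by-two mapping is simple enough to encode directly, for example when bulk-sorting findings. A minimal sketch:

```python
def priority_bucket(impact: str, effort: str) -> str:
    """Map a High/Low impact-effort pair to the action bucket above."""
    table = {
        ("High", "Low"):  "Quick Win (do first)",
        ("High", "High"): "Strategic Project (plan carefully)",
        ("Low",  "Low"):  "Batch and schedule",
        ("Low",  "High"): "Challenge the requirement, deprioritize",
    }
    return table[(impact, effort)]

print(priority_bucket("High", "Low"))  # Quick Win (do first)
```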
## Guidance by Organization Size

### Small Organizations (<50 in scope)

!!! tip

    Focus first on: TEL, DET, TRI, INC, GOV. Target overall maturity Level 2–3. Prioritize: logging coverage → detection basics → IR plan → change control.
### Medium Organizations (50–500 in scope)

!!! tip

    Assess all 14 domains. Allow 2–4 weeks for full assessment. Target Level 3 for core domains, Level 2 for AI/ML if AI is not yet deployed.
### Large Organizations (500+ in scope)

!!! tip

    Phase assessment by domain (one domain per sprint). Assign domain champions. Target Level 4 for core domains, Level 3 for AI/ML.
## Using Nexus SecOps for Training
- Read chapters in order (Ch 1 → Ch 15).
- Complete the quiz at the end of each chapter.
- Run the relevant MicroSim to reinforce concepts.
- Complete the lab exercises for hands-on practice.
- After all chapters, run the self-assessment to consolidate learning.
Estimated time: 40–60 hours for full curriculum.
## Reassessment Cadence
| Trigger | Recommended Action |
|---|---|
| Annual (default) | Full benchmark reassessment |
| Major tool change | Reassess affected domains |
| Significant incident | Reassess INC, TRI, and AUT domains |
| AI/ML deployment | Assess AIM and LLM domains |
| Post-audit finding | Targeted control reassessment |