
# Editorial Style Guide

Version: 1.0
Applies to: All Nexus SecOps textbook and benchmark content


## Purpose

This guide ensures consistency in voice, formatting, terminology, and technical accuracy across all Nexus SecOps content. All contributors and editors should review this guide before creating or revising content.


## 1. Voice and Tone

### 1.1 Primary Voice

Nexus SecOps content uses a direct, practitioner-oriented voice. Write as if speaking to an experienced colleague, not a student. Avoid condescension, excessive hedging, and marketing language.

Preferred:

Detection rules SHALL be peer-reviewed before promotion to production.

Avoid:

It might be a good idea to consider having someone else look at your detection rules before you use them in production, as this can potentially help catch errors.

### 1.2 Person and Perspective

- Use second person ("you", "your") for instructional content and labs
- Use third person ("the analyst", "the organization") for policy templates and benchmark controls
- Avoid first-person plural ("we") unless representing organizational voice (e.g., policy templates)

### 1.3 Active vs. Passive Voice

Prefer active voice. Passive is acceptable when the actor is unknown or irrelevant.

Active (preferred):

The analyst reviews the alert and determines disposition.

Passive (acceptable when actor is unimportant):

Alerts are queued in priority order.

### 1.4 Regulatory Language in Controls

All Nexus SecOps benchmark controls use RFC 2119 normative language:

| Term | Meaning |
|:---|:---|
| SHALL | Absolute requirement; no exceptions without formal exception process |
| SHALL NOT | Absolute prohibition |
| SHOULD | Recommended; justified deviation is acceptable |
| SHOULD NOT | Not recommended; justified deviation is acceptable |
| MAY | Optional; permissible |

Write these terms in all caps in control descriptions. Do not use "must", "will", or "required to" as substitutes — use SHALL.


## 2. Formatting Conventions

### 2.1 Headings

- Use sentence case for headings (not Title Case): "Detection rule lifecycle" not "Detection Rule Lifecycle"
- Exception: product names, proper nouns, and acronyms retain their casing
- Maximum heading depth: H3 (###) for regular content; H4 (####) only in highly structured references

### 2.2 Control References

Always format Nexus SecOps control references as: Nexus SecOps-NNN

- Correct: Nexus SecOps-041, Nexus SecOps-099
- Incorrect: Nexus SecOps041, Control 41, control Nexus SecOps-41

When referencing a range: Nexus SecOps-031 through Nexus SecOps-060
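
This format is regular enough to check mechanically. The sketch below is a minimal validator; the `is_valid_control_ref` helper is a hypothetical illustration, not an official Nexus SecOps tool:

```python
import re

# Matches the documented Nexus SecOps-NNN format: the literal prefix
# followed by exactly three digits (zero-padded, e.g. Nexus SecOps-041).
CONTROL_REF = re.compile(r"Nexus SecOps-\d{3}")

def is_valid_control_ref(ref: str) -> bool:
    """Return True if ref exactly matches the Nexus SecOps-NNN format."""
    return CONTROL_REF.fullmatch(ref) is not None

print(is_valid_control_ref("Nexus SecOps-041"))  # True
print(is_valid_control_ref("Control 41"))        # False
```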

### 2.3 Code Blocks

Use fenced code blocks with language specifier for all code:

```yaml
# YAML example
key: value
```

```python
# Python example
def example():
    pass
```

```bash
# Shell example
mkdocs serve
```

For inline code (commands, field names, values), use single backticks: `alert.src_ip`

### 2.4 Tables

Use tables for structured comparisons, mappings, and reference data. Always include a header row. Left-align text columns; use default alignment for others.

Column order in benchmark control tables: Control ID | Domain | Title | RFC Level | Maturity | Description
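
For reference, that column order renders as the header row below; the sample row is synthetic and for illustration only:

```markdown
| Control ID | Domain | Title | RFC Level | Maturity | Description |
|:---|:---|:---|---|---|:---|
| Nexus SecOps-041 | DET | Detection rule peer review | SHALL | 2 | Detection rules SHALL be peer-reviewed before promotion to production. |
```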

### 2.5 Admonitions

Use MkDocs Material admonitions for callouts:

!!! note "Note title"
    Use for important supplementary information.

!!! warning "Warning title"
    Use for information that could prevent errors or harm.

!!! tip "Tip title"
    Use for optional but valuable practical advice.

!!! danger "Danger"
    Use only for security or data loss warnings.

??? success "Expandable answer"
    Use for quiz answers and lab answer keys.

Do not use admonitions for normal body content — they draw attention away from the surrounding text. Maximum 2–3 admonitions per page section.

### 2.6 Lists

- Use unordered lists for items without inherent sequence
- Use ordered lists for steps, procedures, and ranked items
- Maximum two levels of nesting
- Each list item ends without a period unless the item is a complete sentence
- Parallel structure: all items in a list use the same grammatical form

### 2.7 Mermaid Diagrams

Use Mermaid for flow diagrams. Always include a description of the diagram in text before or after it:

The following diagram shows the alert triage flow:

```mermaid
flowchart LR
    A[Alert] --> B{Severity?}
    B -->|High| C[Tier 2]
    B -->|Low| D[Tier 1]
```

Cap diagram complexity at ~15 nodes for readability. If a diagram is more complex, split it into multiple diagrams or use a table.

---

## 3. Terminology

### 3.1 Preferred Terms

| Preferred | Avoid |
|---|---|
| security operations center (SOC) | security operations, security ops center |
| detection rule | detection signature, alert rule |
| true positive (TP) | correct alert, valid alert |
| false positive (FP) | false alarm (informal OK in non-technical contexts) |
| analyst | operator, user (when referring to a human SOC analyst) |
| playbook | workflow, recipe |
| runbook | SOP, procedure (SOP is OK for templates) |
| SOAR | orchestration platform |
| alert triage | alert analysis, alert review |
| human-in-the-loop | human oversight, human review (all are OK; HITL preferred in technical contexts) |

### 3.2 Acronyms

Spell out acronyms on first use per page, with the acronym in parentheses:

> Security Information and Event Management (SIEM)

Exceptions: IOC, TTPs, IP, DNS, HTTP — these are sufficiently universal to use without spelling out.

Nexus SecOps domain codes (TEL, DQN, DET, etc.) are defined in the Controls Catalog and do not need spelling out after the first mention per document.

### 3.3 Product Names

Do not mention specific vendor products in benchmark controls or chapter content except:
- Where the product name is the standard reference (MITRE ATT&CK, STIX/TAXII)
- In architecture diagrams where one product is used as an example (must label it as "Example: [Product]")
- In lab exercises where a specific tool is needed

### 3.4 AI/LLM Terminology

| Preferred | Avoid |
|---|---|
| LLM copilot | AI assistant, chatbot, GPT (as generic term) |
| prompt injection | prompt attack, jailbreak (jailbreak implies different intent) |
| hallucination | AI lying, confabulation (hallucination is established term) |
| grounding | anchoring, context injection |
| retrieval-augmented generation (RAG) | document retrieval, context stuffing |

---

## 4. Technical Accuracy Standards

### 4.1 MITRE ATT&CK References

- Always use the official technique ID: T1059.001, not "PowerShell execution technique"
- Include both tactic and technique when first referenced: "Execution — PowerShell (T1059.001)"
- Do not make up technique IDs; verify against attack.mitre.org
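
The ID shape can be checked mechanically before manual verification. This sketch validates only the `T####` (optionally `.###` sub-technique) pattern; it cannot confirm the technique actually exists, so verification against attack.mitre.org is still required (the helper name is an assumption):

```python
import re

# ATT&CK technique IDs: a "T" plus four digits, with an optional
# three-digit sub-technique suffix (e.g. T1059 or T1059.001).
TECHNIQUE_ID = re.compile(r"T\d{4}(?:\.\d{3})?")

def looks_like_technique_id(tid: str) -> bool:
    """Return True if tid has the shape of an ATT&CK technique ID."""
    return TECHNIQUE_ID.fullmatch(tid) is not None

print(looks_like_technique_id("T1059.001"))  # True
print(looks_like_technique_id("PowerShell"))  # False
```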

### 4.2 CVE References

- Do not cite specific CVEs without noting the disclosure date and whether patching guidance is current
- Prefer describing vulnerability classes over specific CVEs in foundational content
- Lab exercises that reference CVEs should use synthetic/fictional CVE numbers (CVE-YYYY-XXXXX format clearly labeled as synthetic)

### 4.3 Metric Calculations

State the formula for any metric before citing a value:

> MTTD = time of detection − time of initial compromise

When citing industry benchmarks for metrics (e.g., median dwell time), cite the source and year.
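
As a worked illustration of the formula, the sketch below computes per-incident detection time and takes the median across a set of incidents; all timestamps are synthetic:

```python
from datetime import datetime, timedelta
from statistics import median

# (compromise time, detection time) pairs — synthetic data.
incidents = [
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 13, 30)),
    (datetime(2024, 3, 2, 8, 0), datetime(2024, 3, 2, 10, 0)),
    (datetime(2024, 3, 3, 7, 0), datetime(2024, 3, 3, 16, 0)),
]

# Per-incident: time of detection - time of initial compromise.
detection_times = [detected - compromised for compromised, detected in incidents]

print(median(detection_times))  # 4:30:00
```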

### 4.4 Regulatory Claims

Do not make definitive statements about regulatory requirements without noting that:
- Requirements may vary by jurisdiction
- Readers should consult legal counsel for their specific situation
- Policy templates are starting points, not legal advice

---

## 5. Lab and Quiz Standards

### 5.1 Labs

- Every lab must state: difficulty (⭐ scale), duration estimate, chapter reference, Nexus SecOps controls
- Synthetic data must be clearly labeled as synthetic
- Answer keys must be in collapsible `??? success` blocks, never visible on page load
- Labs must include a scoring rubric
- Labs must end with a link to the next lab or relevant benchmark section
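
A lab header satisfying these requirements might look like the following YAML; the field names are illustrative assumptions, not a mandated schema:

```yaml
# Illustrative lab metadata — field names are an assumption, not a schema.
difficulty: "⭐⭐⭐"
duration: "45 minutes"
chapter: "Chapter 4: Alert Triage"
controls:
  - Nexus SecOps-041
  - Nexus SecOps-052
data: synthetic
```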

### 5.2 Quizzes

- 10 questions per chapter quiz
- Mix of: recall (20%), application (50%), analysis (30%)
- No "trick questions" — questions should test understanding, not gotcha phrasing
- Each answer must include an explanation, not just the correct option
- Reference the specific chapter section where the answer is found

### 5.3 Answer Key Format

```markdown
??? success "Click to reveal answer"
    **Correct answer: B**

    [Explanation of why B is correct and why the other options are not]

    *Covered in: [Chapter/Section name]*
```


## 6. Images and Diagrams

- All images must be in docs/figures/ with descriptive filenames
- Include alt text for all images (MkDocs: ![Alt text](path/to/image.png))
- Diagrams created with external tools (draw.io, Lucidchart) must be exported as PNG or SVG, not embedded as proprietary formats
- Prefer Mermaid diagrams over static images for maintainability
- No screenshots of real systems, real IP addresses, or real organizational data

## 7. Review Checklist

Before submitting any content, verify:

- [ ] RFC 2119 terms are in all caps (SHALL, SHOULD, MAY)
- [ ] Nexus SecOps control references use correct format (Nexus SecOps-NNN)
- [ ] Code blocks have language specifier
- [ ] No real IP addresses, usernames, or organizational data
- [ ] Synthetic data is labeled as synthetic
- [ ] Acronyms spelled out on first use
- [ ] MITRE ATT&CK technique IDs verified against attack.mitre.org
- [ ] Admonitions used sparingly (max 3 per section)
- [ ] Tables have header rows
- [ ] Mermaid diagrams are readable (max ~15 nodes)
- [ ] Lab answer keys are in collapsible blocks
- [ ] Links to related content work correctly
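
Some checklist items lend themselves to automation. As one example, the sketch below flags opening code fences that lack a language specifier; it is a minimal illustration, not an official lint tool:

```python
def fences_missing_language(markdown: str) -> list[int]:
    """Return line numbers of opening code fences with no language specifier."""
    missing = []
    in_fence = False
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        if line.strip().startswith("```"):
            # An opening fence that is exactly "```" has no language specifier.
            if not in_fence and line.strip() == "```":
                missing.append(lineno)
            in_fence = not in_fence
    return missing

doc = "intro\n```\nno language here\n```\n```python\nok = True\n```\n"
print(fences_missing_language(doc))  # [2]
```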