Brain Level 1 Activation Playbook¶
This is the operational playbook for taking the Nexus Brain from IDLE to Brain Level 1 (L1): a supervised autonomous agent that opens narrow, low-risk PRs against main, runs through the 6-gate auto-merge workflow, and is rolled back instantly if anything goes sideways.
L1 is not "the brain runs the project". L1 is "the brain handles the boring 30-line PRs while a human watches the first five cycles closely". Read every section before flipping the switch.
Status as of 2026-04-24 (s54): Brain is IDLE. Last autonomous PR was #16 on 2026-04-12 (legacy). No LLM API keys are set in GitHub Actions secrets. The workflow is scheduled (Mon/Thu 07:00 UTC) but produces no PRs because, without an API key, `nexus_brain.py` falls through to local fallback reasoning, which generates few or no actions. There is no separate dry-run guard to remove; the gating mechanism is the absence of secrets. See §2, Activation procedure.
1. Pre-flight checklist¶
Every item below MUST be true before activation. If any item fails, stop and resolve before proceeding.
1.1 At least one LLM provider key is set in GitHub Actions secrets¶
Check via `gh secret list --repo SpaceCadet019/nexus-secops`.
You need at least one of: MISTRAL_API_KEY, GEMINI_API_KEY, GROQ_API_KEY, COHERE_API_KEY, ANTHROPIC_API_KEY. The workflow wires all five env vars through env: (verified in .github/workflows/nexus-brain.yml lines 42-47; ANTHROPIC_API_KEY line added s56). The LLM router (scripts/llm_router.py, get_available_providers(), lines 142-149) auto-discovers which keys are present and only routes to those.
Recommended minimum for L1: set TWO providers so the bandit has fall-through. Mistral primary (best free-tier quality on this codebase historically) + Gemini secondary. Adding a third does not hurt. Anthropic is paid (no free tier) -- set it only if you've decided the brain's quality jump is worth the per-call cost.
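The router's key discovery can be sketched in a few lines. This is a minimal approximation of the behavior described above; the constant and function names are ours, not the actual API of `scripts/llm_router.py`:

```python
import os

# Env vars the workflow wires through (per the list above)
PROVIDER_KEYS = {
    "mistral": "MISTRAL_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "groq": "GROQ_API_KEY",
    "cohere": "COHERE_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def available_providers(env=os.environ):
    """Return providers whose API key is present and non-empty."""
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]
```

With zero keys set this returns an empty list, which is exactly the "falls through to local fallback" idle state described in the status note above.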
1.2 All 5 validators CLEAN locally and in last 5 main-branch CI runs¶
Run locally:
python scripts/count_validator.py
python scripts/chapter_names_validator.py
python scripts/cross_reference_validator.py
python scripts/content_cascade_validator.py
python scripts/voice_quality_check.py
All five must exit 0 with no warnings. Then verify the last 5 main runs of each on GitHub:
gh run list --workflow=count-validator.yml --branch=main --limit=5
gh run list --workflow=chapter-names-validator.yml --branch=main --limit=5
gh run list --workflow=cross-reference-validator.yml --branch=main --limit=5
gh run list --workflow=content-cascade.yml --branch=main --limit=5
(Voice quality check is currently invoked via pre-commit, not its own workflow — see .pre-commit-config.yaml.)
If anything is red on main, fix it before activating the brain. Brain PRs run the same gates and will be auto-blocked anyway, but the noise will mask real signals from the brain's first PRs.
1.3 Adversarial test 13/13 PASS in last 3 runs¶
python scripts/adversarial_validator_test.py
gh run list --workflow=adversarial-validator-test.yml --branch=main --limit=3
This script (629 lines, s53; extended s54 to 13 scenarios) confirms each of the four content validators behaves as advertised by feeding them synthetic broken inputs and verifying they catch the breakage. If the validators don't catch what they claim to catch, the brain's safety net is theoretical.
1.4 GitHub Actions quota is healthy¶
Check `resources.actions` via the API if available, and verify the project is past 2026-05-01 (the quota reset for the s40 burn). A brain cycle averages ~3-5 minutes of Linux runner time per run × 2 runs/week ≈ 6-10 min/week base. Add ~2 min per brain PR's auto-merge workflow run. Budget envelope: ~50 min/week brain-attributable under normal cadence.
If repo is still private and quota is tight, do not activate. Either wait for the cycle reset, or take the project public (Free plan unlimited Actions for public repos).
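The budget arithmetic above, as a quick sanity check (the 2-brain-PRs/week figure is an illustrative assumption, not a measured rate):

```python
def brain_minutes_per_week(cycles=2, mins_per_cycle=(3, 5), prs=2, merge_mins=2):
    """Low/high estimate of brain-attributable runner minutes per week:
    scheduled cycles plus one auto-merge workflow run per brain PR."""
    lo = cycles * mins_per_cycle[0] + prs * merge_mins
    hi = cycles * mins_per_cycle[1] + prs * merge_mins
    return lo, hi
```

At the default cadence this lands at roughly 10-14 minutes per week, comfortably inside the ~50 min/week envelope.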
1.5 Pre-commit hooks installed and passing¶
The hook config at .pre-commit-config.yaml runs: count-validator, chapter-names-validator, voice-quality-check, content-cascade-validator, adversarial-validator-test. All must pass. Brain PRs will skip pre-commit (they're created by nexus-brain[bot], not by you), but you'll be running pre-commit when you fix anything brain breaks.
1.6 First-PR human-review covenant¶
You (the maintainer) commit, in writing, to the following:
I will personally review the first 5 brain PRs in full before allowing any auto-merge to fire. I will not delegate this review. If I cannot review a brain PR within 24 hours of opening, I will add the `needs-human-review` label, which the auto-merge workflow respects (it will not merge labelled PRs without my approval).
Add a calendar reminder for Mon/Thu 09:00 your local time, 2 hours after the cron at 07:00 UTC. Brain PRs land overnight in many timezones — make sure you have a daylight window before the 1-hour cool-down expires.
2. Step-by-step activation procedure¶
2.1 Set the LLM secrets¶
# Mistral (primary recommended)
gh secret set MISTRAL_API_KEY --repo SpaceCadet019/nexus-secops --body "<your-key>"
# Gemini (secondary recommended)
gh secret set GEMINI_API_KEY --repo SpaceCadet019/nexus-secops --body "<your-key>"
# Optional fallbacks
gh secret set GROQ_API_KEY --repo SpaceCadet019/nexus-secops --body "<your-key>"
gh secret set COHERE_API_KEY --repo SpaceCadet019/nexus-secops --body "<your-key>"
# Optional paid (Anthropic). DO NOT paste keys in chat -- gh prompts inline:
gh secret set ANTHROPIC_API_KEY --repo SpaceCadet019/nexus-secops
# (Then also add ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
# to the env: block of .github/workflows/nexus-brain.yml)
Provider signup pages:

- Mistral (free): https://console.mistral.ai/
- Gemini (free): https://aistudio.google.com/
- Groq (free): https://console.groq.com/
- Cohere (free): https://dashboard.cohere.com/
- Anthropic (paid, billing required): https://console.anthropic.com/settings/keys
2.2 Edit the workflow — there is no dry-run guard to remove¶
This is the most important honest fact in the playbook: .github/workflows/nexus-brain.yml does not contain a hard-coded dry-run guard. The --dry-run flag is an opt-in workflow_dispatch input that defaults to false:
# .github/workflows/nexus-brain.yml lines 14-25
workflow_dispatch:
inputs:
dry_run:
description: "Run in dry-run mode (no PRs, no commits)"
required: false
type: boolean
default: false
The cron schedule (`schedule: cron: "0 7 * * 1,4"`, lines 11-13) does not pass any inputs, so `inputs.dry_run` is empty/false on scheduled runs. The condition `if: inputs.dry_run != true` (lines 74, 110, 123, 138) lets every step run by default.
This means: as soon as you set an LLM API key, the very next Mon or Thu 07:00 UTC run will execute the full 10-phase cycle and may open a PR. There is no extra activation step.
If you want to confirm-then-execute (recommended for the very first run), do it via manual trigger after secrets are set.
2.3 First manual cycle¶
From the project root, after secrets are set:
# Dry-run first to confirm wiring (no PR created)
gh workflow run nexus-brain.yml \
--repo SpaceCadet019/nexus-secops \
--field dry_run=true \
--field trigger_source=manual
# Wait ~5 min, then read the run log
gh run list --workflow=nexus-brain.yml --limit=1
gh run view --log <run-id>
Look in the log for these lines (they confirm LLM routing is live):
- `LLM Router: N providers available` (N >= 1)
- `LLM providers : N (mistral, gemini, ...)`
- `Updated <provider> quality for reason: <score>` (means a real LLM call returned a critique-scored plan)
- `** DRY RUN -- no side-effects **` in Phase 7 ACT (because `--dry-run` was passed via the workflow input)
If those lines appear, the brain is wired correctly.
Then trigger the live first cycle:
gh workflow run nexus-brain.yml \
--repo SpaceCadet019/nexus-secops \
--field dry_run=false \
--field trigger_source=manual
If a PR is opened, it gets the brain-generated label automatically (workflow line 190). It will sit in the cool-down window for 1 hour before the auto-merge workflow even considers it (Gate 5).
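The cool-down in Gate 5 can be checked locally against the PR's `createdAt` timestamp (as returned by the GitHub API). A minimal sketch; the helper name is ours, not the workflow's:

```python
from datetime import datetime, timezone

COOL_DOWN_SECONDS = 3600  # Gate 5: PR must be at least one hour old

def cool_down_elapsed(created_at_iso, now=None):
    """True once a PR is >= 1 hour old. created_at_iso is the API's
    ISO-8601 createdAt string, e.g. '2026-04-24T07:00:00Z'."""
    created = datetime.fromisoformat(created_at_iso.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return (now - created).total_seconds() >= COOL_DOWN_SECONDS
```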
3. First-PR criteria¶
For the first 5 brain PRs, only the following types are acceptable for auto-merge. For anything else, run `gh pr edit <num> --add-label needs-human-review` and review by hand.
Allowed (within the brain's actual capability set, verified in nexus_brain.py lines 2025-2105)¶
The brain can produce four action types: create_issue, create_content, update_content, create_detection. Of those, the early-cycle whitelist is:
| Action type | Allowed example | Why it's safe |
|---|---|---|
| create_issue | "Quiz missing for Ch58" | Issues are reversible; human triages. |
| create_content | New docs/learning-graph/ node JSON snippet, new docs/runbooks/ short page | Writes a new file only; no existing content disturbed. |
| update_content | Add SEO meta-description block to a chapter that lacks one; add a missing index.md | The brain appends (nexus_brain.py line 2079), so existing content is never replaced. |
| create_detection | Add a new KQL/SPL detection appended to docs/tools/attck-technique-reference.md | Domain quality gate (lines 1881-1890) checks for KQL/SPL patterns + balanced parens before allowing. |
Forbidden for the first 5 cycles (manual-review required)¶
- Any chapter content rewrites that replace existing prose. The brain's `update_content` only appends; if a PR shows replacement diffs in a chapter, something is wrong: investigate.
- Anything touching `scripts/`, `.github/workflows/`, `mkdocs.yml`, `CLAUDE.md`. This is enforced automatically by Gate 4 in the auto-merge workflow (`.github/workflows/brain-auto-merge.yml` lines 132-137; see the Gate cheat sheet).
- Security headers changes (`docs/_headers`, CSP overrides).
- Dependency bumps (`requirements.txt`).
- Anything in `overrides/` (theme overrides).
- Net-new chapters. Chapter-level voice quality has a 50-point ceiling enforced by `voice_quality_check.py`. The brain has no historical proof it can write at that bar; new chapters are out of scope for L1.
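The first-5-cycle scope rules above reduce to a simple prefix check, useful when eyeballing a PR's changed-file list. The constant and function names below are illustrative, not taken from the repo:

```python
# Paths the L1 policy forbids for auto-merge in the first 5 cycles.
# Note: this is the MANUAL policy; Gate 4's actual BLOCKED_PATTERNS is narrower.
FORBIDDEN_L1_PREFIXES = (
    "scripts/", ".github/workflows/", "mkdocs.yml", "CLAUDE.md",
    "docs/_headers", "requirements.txt", "overrides/",
)

def flag_for_human_review(changed_files):
    """Return the files that should push a PR to needs-human-review."""
    return [f for f in changed_files if f.startswith(FORBIDDEN_L1_PREFIXES)]
```

Usage: feed it the PR's changed-file list; any non-empty result means label the PR and stop.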
Mapping forbidden categories to the 6 gates¶
- Workflow / scripts / mkdocs / CLAUDE.md changes: Gate 4 hard-blocks these (the BLOCKED_PATTERNS list).
- Diff > 500 lines: Gate 3 hard-blocks.
- Build break: Gate 2 hard-blocks.
- Critique quality < 6.0: Gate 6 hard-blocks via the `.brain-quality-flag` file.
- Other forbidden categories above (security headers, dependencies, theme): NOT gate-blocked. You enforce these by reviewing the diff and adding `needs-human-review` if the PR strays.
4. The 6-gate auto-merge cheat sheet¶
Verified against .github/workflows/brain-auto-merge.yml (288 lines).
| # | Gate | Source line | One-liner | Status |
|---|---|---|---|---|
| 1 | brain-generated label present | 69-82 | PR must carry the bot-applied label, else exit | Implemented |
| 2 | MkDocs strict build passes | 84-106 | python -m mkdocs build --strict returns 0 | Implemented |
| 3 | Diff < 500 lines | 108-122 | gh pr diff \| wc -l < 500 | Implemented |
| 4 | No critical-file changes | 124-151 | Path must not start with .github/workflows/, scripts/nexus_brain.py, mkdocs.yml, or CLAUDE.md | Implemented |
| 5 | 1-hour cool-down period | 153-179 | PR must be ≥ 3600s old (createdAt) before merge | Implemented |
| 6 | Brain quality score above threshold | 181-196 | If .brain-quality-flag file exists in repo, block | Implemented (flag written by nexus_brain.py line 2604-2611 when critique quality < MIN_QUALITY_FOR_AUTO_MERGE = 6.0) |
Honest gap to flag: Gate 4's BLOCKED_PATTERNS list is narrow. It does not include scripts/brain_evaluator.py, scripts/brain_strategist.py, scripts/llm_router.py, requirements.txt, overrides/, or docs/_headers. If the brain ever proposes touching one of those, Gate 4 will let it through. Mitigation for L1: this is the entire point of the human-review covenant in §1.6. Watch the diff yourself for the first 5 PRs. If the pattern holds, consider extending BLOCKED_PATTERNS in a separate manual PR before graduating to L2.
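The six gates, in order, reduce to one short decision function. A sketch under assumed field names (none of these names come from the workflow; `pr` is a plain dict built from API output for illustration):

```python
# Gate 4's actual narrow block list, per the cheat sheet above
BLOCKED = (".github/workflows/", "scripts/nexus_brain.py", "mkdocs.yml", "CLAUDE.md")

def auto_merge_decision(pr):
    """Evaluate the 6 gates in workflow order; return 'merge' or the failing gate."""
    if "brain-generated" not in pr["labels"]:
        return "gate 1: missing label"
    if not pr["build_ok"]:
        return "gate 2: strict build failed"
    if pr["diff_lines"] >= 500:
        return "gate 3: diff too large"
    if any(f.startswith(BLOCKED) for f in pr["files"]):
        return "gate 4: critical file"
    if pr["age_s"] < 3600:
        return "gate 5: cool-down"
    if pr["quality_flag"]:  # .brain-quality-flag file present
        return "gate 6: quality below 6.0"
    return "merge"
```

Note how `BLOCKED` here is exactly the narrow list flagged in the honest-gap paragraph: a brain PR touching `scripts/llm_router.py` sails past gate 4.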
5. Manual-review checklist for the first 5 PRs¶
Run this checklist on every brain PR for the first 5 cycles. Estimated time: ~5 minutes per PR.
- Type sanity: PR title starts with "🧠 Brain Cycle —" and has the `brain-generated` label.
- Reasoning posted: PR body contains a `### Reasoning` section pulled from `brain-log.json` (workflow lines 158-185).
- Diff size: `gh pr diff <num> | wc -l` returns < 100 (cap is 500, but L1 should hover well below).
- Scope match: Every changed file is in the §3 allowed list. If you see anything in `scripts/`, `.github/`, `mkdocs.yml`, `CLAUDE.md`, `requirements.txt`, `overrides/`, or `docs/_headers`, label `needs-human-review` and stop.
- No content replacement: For any chapter or existing page touched, run `gh pr diff <num> -- <file>` and confirm the diff is purely additive (new lines below existing content).
- Synthetic data only: Spot-check any IPs against RFC 5737/1918 and hostnames against `*.example.com`. The domain validator (lines 1864-1873) catches obvious cases but is not exhaustive.
- Build check: Wait for the auto-merge workflow's Gate 2 step to report green, OR run `mkdocs build --strict` locally on the PR branch.
- Sense check: Would I have made this change if I'd noticed the gap? If the answer is "no, this is filler" or "no, this is wrong", close the PR with a comment and let the circuit breaker count it.
- Cool-down respected: PR is ≥ 1 hour old before auto-merge fires (Gate 5). Don't manually merge inside that window; it is the only review window for late-night reviewers.
- Post-merge spot-check: After merge, run `mkdocs build --strict` and the 5 validators on `main`. If anything is red, revert immediately and file an issue tagged `brain-regression`.
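The "purely additive" check can be mechanized. A minimal sketch that scans a unified diff for removed lines (the function name is ours; this is a helper for manual review, not repo code):

```python
def is_purely_additive(unified_diff: str) -> bool:
    """True if a unified diff removes nothing: no line starts with '-'
    except the '---' file header. Context lines start with a space."""
    for line in unified_diff.splitlines():
        if line.startswith("-") and not line.startswith("---"):
            return False
    return True
```

Any `False` on a chapter file means the brain replaced prose, which contradicts the append-only design and warrants investigation.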
6. Circuit-breaker behavior¶
Verified against .github/workflows/nexus-brain.yml lines 71-106.
The circuit breaker lives in the workflow, not in nexus_brain.py. On every scheduled or manual non-dry-run cycle, before invoking the brain, the workflow:
- Lists the last 3 PRs labelled `brain-generated` (any state).
- Counts how many of those 3 are in state CLOSED and were not merged (`mergedAt` is null).
- If all 3 are closed-without-merge, the breaker is `tripped=true`.
- When tripped, the workflow:
  - Skips the brain run entirely (subsequent steps are gated on `steps.circuit.outputs.tripped != 'true'`).
  - Opens an issue titled "Brain Circuit Breaker Tripped — Quality Review Needed" with the `brain-generated` label.
  - Logs a workflow warning.
Reset mechanism: there is no automatic reset. The breaker re-evaluates from scratch on each run, so it stays tripped only while the most recent 3 brain PRs are all closed-unmerged. To deliberately reset:

- Merge or close-and-archive the breaker issue (cosmetic only; the breaker logic ignores it).
- Either wait for one of the 3 latest brain PRs to drop out of the window (the next cycle creates a 4th), OR manually re-merge a previously-closed brain PR (rare), OR fire `gh api repos/SpaceCadet019/nexus-secops/dispatches -f event_type=brain-reset`. Note: the `brain-reset` repository_dispatch is documented in `.github/workflows/brain-trigger.yml` lines 19-22, and the brain workflow accepts it via `repository_dispatch.types: brain-reset` (line 30), but the brain workflow contains no reset logic; the event simply triggers a normal cycle, which still hits the same circuit-breaker check.
Honest caveat: I traced the workflow and confirmed brain-reset does not bypass the circuit breaker. The CLAUDE.md note "Reset circuit breaker and re-run" in brain-trigger.yml line 22 is the documented intent, but the implementation is incomplete. To actually clear the breaker you must either manually merge an old brain PR or wait for the 3-PR window to roll forward.
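The breaker condition reduces to a few lines. A sketch assuming PR records shaped like `gh pr list --json state,mergedAt` output (the `state` and `mergedAt` field names match the GitHub CLI; the function name is ours):

```python
def breaker_tripped(last_three_prs):
    """True when the 3 most recent brain PRs are ALL closed without merging.
    Each record: {'state': 'OPEN'|'CLOSED'|'MERGED', 'mergedAt': str | None}."""
    if len(last_three_prs) < 3:
        return False  # fewer than 3 brain PRs ever: breaker cannot trip
    return all(pr["state"] == "CLOSED" and pr["mergedAt"] is None
               for pr in last_three_prs)
```

This also shows why the window "rolls forward": the next cycle's PR becomes one of the most recent 3, displacing a closed one.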
7. Rollback procedure¶
If a brain PR causes production breakage (broken site, validator regression, content corruption):
Immediate (< 5 minutes)¶
- Revert the merged PR: `git revert -m 1 <merge-sha> && git push origin main`
- Re-gate the brain: delete one or more LLM keys to force the brain back to its no-LLM idle state.
gh secret delete MISTRAL_API_KEY --repo SpaceCadet019/nexus-secops
gh secret delete GEMINI_API_KEY --repo SpaceCadet019/nexus-secops
With no keys, `LLMRouter.get_available_providers()` returns `[]`, the call chain falls through to `_local_fallback_reason`, and the brain produces few or no actions. The cron continues to fire (cheap, ~30 sec) but is harmless.
- Verify rollback: `mkdocs build --strict`, the 5 validators, and the adversarial test all green on main.
Same-day (< 1 hour)¶
- Disable the workflow if the failure was severe enough to warrant a complete pause: `gh workflow disable nexus-brain.yml --repo SpaceCadet019/nexus-secops`
- File a post-mortem issue tagged `brain-regression`, `brain-generated` with:
  - PR number that caused the regression
  - Diff that broke things
  - Root cause if known
  - Prevention candidates (extend Gate 4 BLOCKED_PATTERNS? add a new validator? lower MAX_ACTIONS_PER_CYCLE?)
Before re-attempting¶
- Read the brain-log.json entry for the failed cycle. The reasoning chain is preserved.
- Re-run §1 pre-flight in full. Do not partial-restart.
- Update this playbook with the failure mode in a new "Known failure modes" appendix at the bottom. Future-you will thank you.
8. What Brain L1 does NOT do¶
Calibrate expectations against verified code, not the README narrative.
- Does NOT modify chapter prose. `update_content` (lines 2059-2082) only appends content below the existing file. It cannot rewrite. If the brain wants to rewrite a chapter, the LLM will produce that content but `act()` will only append it as new text below, which will look odd in the diff and trigger your manual review.
- Does NOT change validator scripts. Gate 4 of auto-merge blocks `scripts/nexus_brain.py` specifically, but not the other brain/validator scripts (verified). For L1, the human-review covenant catches this.
- Does NOT modify itself. Self-modification would be a Level 3+ feature; the brain has prompt evolution (`_evolve_prompt`, line 2434+), but the evolved prompt is stored in `brain-memory.json`, NOT in `scripts/nexus_brain.py`. Code changes to the brain itself are out of scope.
- Does NOT rate-limit itself across providers. The LLM router has epsilon-greedy bandit selection and persists per-provider quality scores, but no token-bucket or requests-per-second limiter. Rate limiting is the provider's responsibility (free tiers enforce it server-side with HTTP 429). The router catches HTTP errors and falls through.
- Does NOT remember individual PR outcomes deterministically. `brain_evaluator.py` populates `learned_patterns` after each cycle (lines 367-381). The strategist (`brain_strategist.py`) reads strategy goals, but `learned_patterns` consumption in active reasoning prompts is partial: `nexus_brain.py` lines 1264-1268 inject the last 10 patterns into the user prompt, and the LLM is free to ignore them. This is documented design, not yet verified in production to measurably improve cycle quality.
- Does NOT respond to events in real time. The cron runs Mon/Thu 07:00 UTC. `repository_dispatch` types (`threat-feed`, `content-update`, `brain-reset`) ARE wired (workflow lines 26-30), but they trigger the same workflow; latency is dominated by GitHub Actions queue time (typically 30 s-2 min, sometimes 10+ min under load). Do not expect sub-minute reaction.
- Does NOT enforce voice quality on its output. The voice quality check is a pre-commit hook only, and pre-commit hooks run client-side on developer commits, not on the bot's pushes. If the brain writes content that fails the voice check, it will land. Manual review, or a server-side check action, is needed if you care about enforcing voice on brain output.
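The router's epsilon-greedy selection (0.30 initial epsilon per Appendix A, decaying toward 0.05) can be sketched as follows. `pick_provider` and the injectable `rng` are our illustration, not the repo's API:

```python
import random

def pick_provider(quality, epsilon=0.30, rng=random):
    """Epsilon-greedy bandit: with probability epsilon, explore a random
    provider; otherwise exploit the best-scoring one.
    quality: {provider_name: persisted quality score}."""
    providers = list(quality)
    if rng.random() < epsilon:
        return rng.choice(providers)          # explore
    return max(providers, key=quality.get)    # exploit
```

Note what this does NOT do, matching the list above: there is no rate limiting here, only score-driven selection. A 429-prone provider only gets avoided once its quality score drops.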
9. Cost expectations¶
Free-tier capacity (as of 2026-04-24, providers may change limits at any time)¶
| Provider | Free-tier rate limit | Notes |
|---|---|---|
| Mistral | ~5 req/s (best-effort, no SLA) | console.mistral.ai default tier |
| Gemini | 60 req/min | Google AI Studio free tier, gemini-2.0-flash |
| Groq | 30 req/min | console.groq.com, llama-3.3-70b-versatile |
| Cohere | 100 req/min trial | dashboard.cohere.com, command-r-plus |
Brain consumption envelope¶
A single cycle calls LLMs in:

- Phase 4 REASON: 1 call (council prompt)
- Phase 5 CRITIQUE: 1 call
- Phase 4 REASON re-iteration: up to 2 more (capped by MAX_REASON_ITERATIONS = 3, line 1921)
- Phase 4 sub-step "produce" content: 1 call per action (capped by MAX_ACTIONS_PER_CYCLE = 3, line 1920), up to 3 calls
- Phase 9 LEARN: occasional `_evolve_prompt` call every 10 cycles (line 98)
Worst-case per cycle: ~10 LLM calls. Two cycles per week × 10 calls = ~20 calls/week.
This is well within all four free tiers. Even worst-case daily-cron cadence (7×/week) ≈ 70 calls/week is fine.
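The call budget can be sanity-checked in a few lines. We assume each REASON iteration pairs with a CRITIQUE call, which reproduces the ~10-call worst case stated above (the function names are illustrative):

```python
def worst_case_calls(reason_iters=3, actions=3, evolve=1):
    """Worst-case LLM calls per cycle: each REASON iteration paired with a
    CRITIQUE (assumption), plus one 'produce' call per action, plus the
    occasional prompt-evolution call."""
    return reason_iters * 2 + actions + evolve  # 3*2 + 3 + 1 = 10

def calls_per_week(cycles=2):
    return cycles * worst_case_calls()
```

At 2 cycles/week this is ~20 calls/week; even a daily cron (7 cycles) stays at ~70, well under every free-tier limit in the table.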
Honest caveats¶
- Free tiers have no SLA. Mistral has had multi-hour outages in the past; the router will fall through to other providers, but if you've only set Mistral and it's down, the cycle produces nothing.
- Free tiers' rate limits are subject to change without notice. If you migrate to event-driven triggers (`repository_dispatch`) and start firing on every CISA KEV update, you can burst past 30 req/min easily.
- Production cadence (e.g., daily cron, event-driven on threat-feed updates) at higher rates may need a paid tier on at least one provider. Mistral La Plateforme paid tier starts at $0.20/M tokens, which for ~1M tokens/month brain consumption is well under $1/month.
- Cohere's "trial" tier is time-limited, not per-call-limited. After the trial expires, Cohere stops working and the router routes around it. Set Mistral or Gemini as the primary; treat Cohere as nice-to-have.
10. Success criteria — graduating to "L1 complete"¶
L1 is declared successful when, after 4 consecutive weeks of activated brain operation:
- Volume: ≥ 5 brain PRs merged into main.
- Merge rate: ≥ 90% (PRs merged / PRs created). Note the arithmetic: with exactly 5 merged, even 1 closed-unmerged gives 5/6 ≈ 83%, so at minimum volume this effectively requires 0 closed-unmerged; allowing 1 closed-unmerged needs ≥ 9 merged.
- Zero rollback events: no `git revert` was needed against any merged brain PR.
- No production breakage: the live site at `nexus-secops.pages.dev` stayed green throughout. Cloudflare deploy never failed because of a brain commit.
- Validators stay CLEAN: count, chapter_names, cross_reference, content_cascade, voice_quality_check all green on main throughout the 4 weeks.
- Adversarial test stays 13/13: no validator regression introduced indirectly.
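The six criteria compose into one boolean. A sketch with illustrative parameter names (you would fill these from `gh pr list`, deploy history, and CI status at the 4-week mark):

```python
def l1_complete(merged, created, rollbacks, deploy_failures,
                validators_clean, adversarial_13):
    """All six §10 criteria must hold simultaneously after the 4-week window."""
    merge_rate = merged / created if created else 0.0
    return (merged >= 5
            and merge_rate >= 0.9
            and rollbacks == 0
            and deploy_failures == 0
            and validators_clean
            and adversarial_13)
```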
When all 6 criteria hold, L1 is complete. You can then:
- Relax the human-review covenant from "every PR" to "weekly spot-check + alert on circuit-breaker trip".
- Begin scoping L2 evaluation, which is OUT of the current Revolutionary Plan scope (the plan stops at L1). L2 would mean: brain proposes its own gate changes (with manual approval), brain proposes its own content scope expansions, brain begins consuming `learned_patterns` to bias future plans measurably.
If any criterion fails at the 4-week mark, do not graduate. Run a post-mortem, fix the gap, and re-run a 4-week trial. L1 is a stable plateau, not a stepping stone you must exit.
Appendix A: Verified-vs-claimed matrix¶
| Claim made in this playbook | Verified against | Status |
|---|---|---|
| 6 auto-merge gates exist and are enforced | .github/workflows/brain-auto-merge.yml lines 69-196 | VERIFIED |
| No hard-coded dry-run guard in workflow | .github/workflows/nexus-brain.yml lines 14-25, 74, 110, 123, 138 | VERIFIED |
| Circuit breaker = "last 3 brain PRs all closed-unmerged" | .github/workflows/nexus-brain.yml lines 71-106 | VERIFIED |
| MAX_ACTIONS_PER_CYCLE = 3 | scripts/nexus_brain.py line 1920 | VERIFIED |
| MAX_REASON_ITERATIONS = 3 with 7.0 quality threshold | scripts/nexus_brain.py lines 1921, 2711 | VERIFIED |
| MIN_QUALITY_FOR_AUTO_MERGE = 6.0 (writes .brain-quality-flag) | scripts/nexus_brain.py lines 161, 2604-2611 | VERIFIED |
| 5 LLM providers wired (Mistral, Gemini, Groq, Cohere, Anthropic) | scripts/llm_router.py lines 30-69 | VERIFIED (Anthropic added s56) |
| Epsilon-greedy bandit with 0.30 initial, 0.05 minimum | scripts/llm_router.py lines 68-71 | VERIFIED |
| update_content only appends, never replaces | scripts/nexus_brain.py lines 2078-2079 | VERIFIED |
| Path-traversal guard on file writes | scripts/nexus_brain.py lines 2046-2052, 2065-2071, 2087-2094 | VERIFIED |
| Domain quality gate enforces RFC IPs + balanced parens + ATT&CK | scripts/nexus_brain.py lines 1848-1912 | VERIFIED |
| brain-reset repository_dispatch resets the circuit breaker | NOT IMPLEMENTED in nexus-brain.yml; documented intent only | DOCUMENTED DESIGN, NOT IMPLEMENTED |
| learned_patterns consumption measurably improves cycle quality | Patterns are injected into prompts (lines 1264-1268) but no telemetry confirms downstream effect | DOCUMENTED DESIGN, NOT YET VERIFIED IN PRODUCTION |
| Voice quality check enforced on brain commits | NO — pre-commit hooks don't run on bot commits | NOT IMPLEMENTED on bot commit path |
| Gate 4 BLOCKED_PATTERNS covers all sensitive paths | Only .github/workflows/, scripts/nexus_brain.py, mkdocs.yml, CLAUDE.md — narrower than scripts/ writ large | PARTIAL — see §4 honest gap |
Appendix B: Quick-reference command list¶
# Pre-flight verification
gh secret list --repo SpaceCadet019/nexus-secops
python scripts/count_validator.py && python scripts/chapter_names_validator.py && python scripts/cross_reference_validator.py && python scripts/content_cascade_validator.py && python scripts/voice_quality_check.py
python scripts/adversarial_validator_test.py
gh api rate_limit
pre-commit run --all-files
# Activate
gh secret set MISTRAL_API_KEY --repo SpaceCadet019/nexus-secops --body "<key>"
gh secret set GEMINI_API_KEY --repo SpaceCadet019/nexus-secops --body "<key>"
# First manual cycle (dry-run)
gh workflow run nexus-brain.yml --field dry_run=true --field trigger_source=manual
# First manual cycle (live)
gh workflow run nexus-brain.yml --field dry_run=false --field trigger_source=manual
# Watch
gh run list --workflow=nexus-brain.yml --limit=5
gh pr list --label brain-generated --state all --limit=10
# Rollback
git revert -m 1 <merge-sha> && git push origin main
gh secret delete MISTRAL_API_KEY --repo SpaceCadet019/nexus-secops
gh secret delete GEMINI_API_KEY --repo SpaceCadet019/nexus-secops
gh workflow disable nexus-brain.yml --repo SpaceCadet019/nexus-secops