MicroSim 26: AI Red Team Toolkit

Interactive AI security assessment simulator — ACME AI Labs — 100% synthetic data for educational use only

Prompt Injection Lab
Model Security Scanner
RAG Attack Simulator
AI Incident Response
Quiz Mode
Direct Injection
Indirect Injection
Jailbreaking
Prompt Leaking
Direct Prompt Injection

Directly instruct the model to override its system prompt. The attacker's input is the prompt itself.

ATLAS AML.T0051 LLM Prompt Injection
Simulated LLM Response
[Waiting for input...] Select an attack category and enter a prompt to test. The simulated LLM uses pre-scripted responses — no real API calls.
Detection Analysis
Run a test to see detection results...
Model Configuration Scanner

Select a synthetic AI model deployment to scan for security misconfigurations, supply chain risks, and serialization vulnerabilities.

Synthetic Document Store

ACME AI Labs knowledge base — 8 documents loaded. Click a document to inspect it.

Attack Vectors
Select an attack to execute against the synthetic document store.
RAG Pipeline Output
[RAG Pipeline idle] Inject an attack vector and then query the store to see the impact.
Detection Log
[No events logged]
AI Incident Scenario Selector

Choose an AI-specific incident scenario to walk through the response decision tree.

AI Security Knowledge Assessment
Question 1 of 10 | Score: 0/0