PROMPT INJECTION
Prompt injection is now a tooling problem
The break lands through retrieval, orchestration, and hidden instructions before it ever looks like a jailbreak.
AI SECURITY BRIEF
AIPwn tracks how AI systems break: exploit paths, exposed surfaces, and ship risk.
Weekly research for teams that need signal.
the exploit path, named
the exposed surface, visible
the same evidence, enforced
LATEST RESEARCH
Three lines from this week's brief.
PROMPT INJECTION
The break lands through retrieval, orchestration, and hidden instructions before it ever looks like a jailbreak.
TOOL ABUSE
Shell, fetch, subprocess, and connector paths still turn low-friction prompts into high-impact actions.
PUBLIC EXPOSURE
OpenClaw shows which endpoints, docs, and agent interfaces are exposed right now.
THREAT SURFACE
The surfaces that turn model risk into reachable risk.
CURRENT TRACKING
Instruction override, hidden tool calls, indirect poisoning, and retrieval chains.
Shell execution, downloader chains, connector misuse, and runtime permissions.
Leaked API keys, unsafe logs, public config artifacts, and long-lived credentials.
Open docs, unauthenticated endpoints, and exposed agent interfaces.
PROOF LAYER
OpenClaw turns findings into a live watchboard: target, issue, and status.
INFRASTRUCTURE
When reading is not enough, the same evidence becomes scanning, policy, and gates.
NEWSLETTER (Weekly)
Weekly AI security research on exploit paths, exposed surfaces, and ship risk.
CONTROL PLANE (Alpha)
Policy, scanning, and release decisions for AI systems in one control layer.
SCANNER (Research-fed)
Evidence-first scanning across repos, prompts, connectors, and exposed surfaces.
GATE (Alpha)
Release gates that block risky changes with evidence.