Behavioral Training (Constitutional AI): ❌
Structural Enforcement (Tractatus): ✅
Jailbreaks often work by manipulating the AI's internal reasoning. Tractatus boundaries operate outside that reasoning: the AI never directly evaluates the governance rules. While not foolproof, this architectural separation makes manipulation significantly harder.
Tractatus works with any agentic AI system—Claude Code, LangChain, AutoGPT, CrewAI, or custom agents. The governance layer sits between your agent and its actions.
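A minimal sketch of that layering, assuming a hypothetical `GovernanceLayer` that intercepts each proposed action before it executes (all names here are illustrative, not the actual Tractatus API):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    tool: str       # tool the agent wants to invoke
    payload: dict   # arguments for that tool

class GovernanceLayer:
    """Sits between the agent and its actions: every action passes
    through external checks that the agent itself never evaluates."""

    def __init__(self, checks: List[Callable[[Action], bool]]):
        self.checks = checks

    def execute(self, action: Action, runner: Callable[[Action], str]) -> str:
        # Any failed check vetoes the action before it reaches the runner.
        if not all(check(action) for check in self.checks):
            return f"BLOCKED: {action.tool}"
        return runner(action)

# Illustrative boundary: forbid file writes outside a sandbox directory.
def path_check(action: Action) -> bool:
    path = action.payload.get("path", "")
    return not (action.tool == "write_file" and not path.startswith("/sandbox/"))

layer = GovernanceLayer(checks=[path_check])
result = layer.execute(
    Action("write_file", {"path": "/etc/passwd"}),
    runner=lambda a: "ok",
)
print(result)  # → BLOCKED: write_file
```

Because the checks run in the layer rather than in the agent's prompt, a jailbroken agent can propose whatever it likes; the veto happens outside its reasoning.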
Your AI agent (any platform) handles planning, reasoning, and tool use; Tractatus is agnostic to its implementation.
Six external services enforce boundaries, validate actions, and monitor pressure. Because they run outside the agent, they are architecturally harder for the AI to bypass.
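"Pressure" monitoring can be sketched as a sliding-window counter over blocked actions: repeated boundary hits in a short span suggest an agent (or its prompter) is probing the rules. The class and thresholds below are illustrative assumptions, not the Tractatus services themselves:

```python
from collections import deque
from typing import Optional
import time

class PressureMonitor:
    """Flags possible manipulation by counting blocked actions
    within a sliding time window (hypothetical sketch)."""

    def __init__(self, window_seconds: float = 60.0, threshold: int = 3):
        self.window = window_seconds
        self.threshold = threshold
        self.blocked_at: deque = deque()

    def record_block(self, now: Optional[float] = None) -> None:
        self.blocked_at.append(time.monotonic() if now is None else now)

    def under_pressure(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop blocks that have aged out of the window.
        while self.blocked_at and now - self.blocked_at[0] > self.window:
            self.blocked_at.popleft()
        return len(self.blocked_at) >= self.threshold
```

A monitor like this escalates to a human or tightens boundaries when `under_pressure()` trips, rather than asking the AI to judge its own behavior.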
Immutable audit logs, governance rules, and instruction history are stored independently of the AI runtime, so they can't be altered by prompts.
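One standard way to make a log tamper-evident is hash chaining: each entry commits to the hash of the previous one, so rewriting history breaks the chain. This is a generic sketch of that technique, not Tractatus's actual storage format:

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained log: each entry includes the previous
    entry's hash, so any tampering is detectable on verification."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps(e["event"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

Because verification needs only the stored entries, an auditor outside the AI runtime can confirm integrity without trusting the agent at all.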
Blocks the AI from making values decisions (privacy, ethics, strategic direction); these require human approval.
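The gate can be reduced to a routing rule: decisions in values-laden domains never proceed without explicit human sign-off. The domain taxonomy below is an illustrative assumption:

```python
# Hypothetical values-domain taxonomy; real deployments would define their own.
VALUES_DOMAINS = {"privacy", "ethics", "strategy"}

def gate_decision(domain: str, human_approved: bool) -> str:
    """Operational decisions pass through; values decisions
    escalate unless a human has explicitly approved."""
    if domain in VALUES_DOMAINS and not human_approved:
        return "ESCALATE_TO_HUMAN"
    return "PROCEED"

print(gate_decision("privacy", human_approved=False))  # → ESCALATE_TO_HUMAN
```

Note that the routing is keyed on the domain label, not on the AI's own assessment of whether the decision "feels" ethical, which is exactly the point.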
Stores instructions externally with persistence levels (HIGH/MEDIUM/LOW), aiming to reduce directive fade over long sessions.
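A sketch of what an external instruction store with persistence levels might look like, assuming HIGH directives are reinjected every turn and MEDIUM ones periodically (the reinjection policy here is an illustrative guess, not the documented Tractatus behavior):

```python
from enum import Enum
from typing import List, Tuple

class Persistence(Enum):
    HIGH = 3    # reinjected every turn
    MEDIUM = 2  # reinjected periodically
    LOW = 1     # fetched only on demand

class InstructionStore:
    """Instructions live outside the model's context window, so long
    sessions re-fetch them instead of relying on fading memory."""

    def __init__(self):
        self._items: List[Tuple[Persistence, str]] = []

    def add(self, text: str, level: Persistence) -> None:
        self._items.append((level, text))

    def for_turn(self, turn: int) -> List[str]:
        out = []
        for level, text in self._items:
            if level is Persistence.HIGH:
                out.append(text)
            elif level is Persistence.MEDIUM and turn % 5 == 0:
                out.append(text)
        return out

store = InstructionStore()
store.add("Never delete user data.", Persistence.HIGH)
store.add("Prefer adding tests.", Persistence.MEDIUM)
```

Because the store, not the model, decides what gets reinjected each turn, a HIGH directive can't quietly fade the way an early system-prompt line does.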
Click any service node or the central core to see detailed information about how governance works.
Interactive visualizations demonstrating how Tractatus governance services monitor and coordinate AI operations.