What is the Tractatus Framework?
The Tractatus-Based LLM Safety Framework implements architectural constraints that ensure AI systems preserve human agency regardless of capability level. Instead of hoping AI "behaves correctly," we build systems where certain decisions structurally require human judgment.
The "27027 Incident" - How the Framework Prevents It
WITHOUT Framework
User Instruction:
"Check port 27027"
AI Action Executed:
mongosh --port 27017
❌ Used MongoDB's default port (27017) instead of the explicitly requested 27027!
Result:
• Found 0 documents (wrong port)
• User alarm: "Data is lost!"
• Trust degradation
• 15+ minutes wasted
WITH Framework
User Instruction:
"Check port 27027"
1. Classification:
Quadrant: TACTICAL
Persistence: HIGH (explicit parameter)
Verification: MANDATORY
2. Cross-Reference Validation:
⚠️ CONFLICT DETECTED
User specified port: 27027
Proposed action uses: 27017
→ ACTION REJECTED
AI Response to User:
"I noticed you specified port 27027, but I was about to check port 27017 (MongoDB default).
I'll use port 27027 as you instructed."
Result:
✅ Correct port used
✅ Data found successfully
✅ Trust maintained
✅ Zero wasted time
Key Insight
The framework structurally prevents cached patterns from overriding explicit instructions. This scales to superintelligent systems because the constraint is architectural, not training-based.
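The cross-reference validation step above can be sketched as a simple shell check. This is an illustrative sketch only: the variable names and the grep-based port extraction are assumptions for the example, not the framework's actual implementation.

```shell
#!/bin/sh
# Illustrative sketch: compare the port in the user's explicit instruction
# against the port in the proposed action, and reject on mismatch.

instruction="Check port 27027"          # explicit user instruction
proposed_action="mongosh --port 27017"  # action the model is about to execute

# Extract the first number each side refers to (assumed extraction rule).
user_port=$(echo "$instruction" | grep -oE '[0-9]+' | head -n 1)
action_port=$(echo "$proposed_action" | grep -oE '[0-9]+' | head -n 1)

if [ "$user_port" != "$action_port" ]; then
  echo "CONFLICT DETECTED: user specified $user_port, action uses $action_port"
  echo "ACTION REJECTED"
else
  echo "OK: ports match, action may proceed"
fi
```

Because the check runs before execution rather than relying on the model's training, the mismatch is caught structurally, exactly as in the flow above.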
Try the Live API
Test the Tractatus governance services directly. These are the actual services running on this platform.
Framework Status
GET /api/governance
Technical Documents
GET /api/documents
API Documentation
GET /api
Admin Panel
Requires authentication
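The public endpoints above can be queried with curl. The base URL below is a placeholder, not the platform's real host; substitute the actual domain when trying it.

```shell
#!/bin/sh
# Placeholder host: replace with this platform's real domain.
BASE_URL="https://tractatus.example"

# Query the framework status endpoint; -s suppresses progress output.
# "|| true" keeps the script going if the placeholder host is unreachable.
curl -s "$BASE_URL/api/governance" || true

# The other documented endpoints follow the same pattern:
#   curl -s "$BASE_URL/api/documents"   # technical documents
#   curl -s "$BASE_URL/api"             # API documentation
# The admin panel is not scriptable this way; it requires authentication.
```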