A Starting Point
Instead of hoping AI systems "behave correctly," we propose structural constraints where certain decision types require human judgment. These architectural boundaries can adapt to individual, organizational, and societal norms—creating a foundation for bounded AI operation that may scale more safely with capability growth.
We recognize this is one small step in addressing AI safety challenges. Explore the framework through the lens that resonates with your work.
Researcher
Academic & technical depth
Explore the theoretical foundations, formal guarantees, and scholarly context of the Tractatus framework.
- Technical specifications & proofs
- Academic research review
- Failure mode analysis
- Mathematical foundations
Implementer
Code & integration guides
Get hands-on with implementation guides, API documentation, and production-ready code examples.
- Working code examples
- API integration patterns
- Service architecture diagrams
- Deployment best practices
Leader
Strategic AI Safety
Navigate the business case, compliance requirements, and competitive advantages of structural AI safety.
- Executive briefing & business case
- Risk management & compliance (EU AI Act)
- Implementation roadmap & ROI
- Competitive advantage analysis
Framework Capabilities
Instruction Classification
Quadrant-based classification (STR/OPS/TAC/SYS/STO) with time-persistence metadata tagging
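The page does not specify the classifier's interface, so the following is a minimal sketch with hypothetical names: a classified instruction pairs one of the five category tags with time-persistence metadata.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Quadrant(Enum):
    # The five Tractatus category tags; their expansions are not given
    # on this page, so the members simply mirror the abbreviations.
    STR = auto()
    OPS = auto()
    TAC = auto()
    SYS = auto()
    STO = auto()

class Persistence(Enum):
    # Hypothetical time-persistence values.
    EPHEMERAL = auto()   # applies to the current turn only
    SESSION = auto()     # applies until the session ends
    STANDING = auto()    # applies until explicitly revoked

@dataclass(frozen=True)
class ClassifiedInstruction:
    text: str
    quadrant: Quadrant
    persistence: Persistence

instr = ClassifiedInstruction(
    "Never push directly to main.", Quadrant.OPS, Persistence.STANDING
)
```

Freezing the dataclass keeps a classification immutable once tagged, so downstream components can trust it.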
Cross-Reference Validation
Validates AI actions against explicit user instructions to prevent pattern-based overrides
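Cross-reference validation can be sketched as checking each proposed action against the user's explicit instructions. The substring match below is a deliberately naive stand-in for the framework's actual matching logic:

```python
def validate_action(action: str, explicit_instructions: list[str]) -> list[str]:
    """Return the explicit 'do not ...' instructions a proposed action violates.

    An empty result means no explicit instruction blocks the action; a
    pattern-based impulse to act is never sufficient on its own.
    """
    violations = []
    for instruction in explicit_instructions:
        normalized = instruction.lower().strip()
        if normalized.startswith("do not ") and normalized[len("do not "):] in action.lower():
            violations.append(instruction)
    return violations
```

Returning the violated instructions themselves, rather than a bare boolean, lets the caller surface exactly which explicit instruction the action would override.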
Boundary Enforcement
Implements the Tractatus 12.1-12.7 boundaries: value-laden decisions architecturally require human judgment
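One way to make a boundary architectural rather than advisory is to raise at the call site, so autonomous execution paths cannot reach a reserved decision. The decision categories below are illustrative placeholders, not the actual Tractatus 12.1-12.7 list:

```python
class HumanDecisionRequired(Exception):
    """Raised when the AI attempts a decision reserved for humans."""

# Illustrative placeholder for the Tractatus 12.1-12.7 boundary set.
VALUES_DECISIONS = {"ethical_tradeoff", "policy_change", "irreversible_deletion"}

def enforce_boundary(decision_type: str) -> None:
    if decision_type in VALUES_DECISIONS:
        raise HumanDecisionRequired(
            f"'{decision_type}' is a values decision; escalating to a human."
        )

# Routine work passes through silently; values decisions cannot proceed.
enforce_boundary("cache_invalidation")
```

Because the check raises rather than returning a flag, there is no code path on which a values decision executes without a human in the loop.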
Pressure Monitoring
Detects degraded operating conditions (token pressure, errors, complexity) and adjusts verification
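A pressure monitor might combine normalized signals into a single score and tighten verification as the score climbs. The inputs and thresholds here are assumptions, not the framework's actual configuration:

```python
def pressure_score(tokens_used: int, token_budget: int,
                   error_rate: float, complexity: float) -> float:
    """Worst case of three normalized pressure signals, each in [0, 1]."""
    token_pressure = min(tokens_used / token_budget, 1.0)
    return max(token_pressure, error_rate, complexity)

def verification_level(score: float) -> str:
    # Hypothetical thresholds; the invariant is that verification
    # tightens (never loosens) under degraded conditions.
    if score >= 0.8:
        return "strict"
    if score >= 0.5:
        return "elevated"
    return "baseline"
```

Taking the worst case, rather than an average, means one badly degraded signal is enough to escalate verification.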
Metacognitive Verification
The AI checks its own alignment, coherence, and safety before execution: a structural pause-and-verify step
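The pause-and-verify pattern can be expressed as a gate that runs every self-check before execution is reachable at all. The three check names mirror the description above, but their implementations are placeholder heuristics:

```python
from typing import Callable

def verify_then_execute(action: str,
                        checks: dict[str, Callable[[str], bool]],
                        execute: Callable[[str], object]):
    """Run all self-checks; execute only if every one passes."""
    failures = [name for name, check in checks.items() if not check(action)]
    if failures:
        return ("blocked", failures)
    return ("executed", execute(action))

checks = {
    "alignment": lambda a: "override user" not in a,  # placeholder heuristics
    "coherence": lambda a: len(a.strip()) > 0,
    "safety":    lambda a: "rm -rf" not in a,
}
```

Blocked actions return the names of the failed checks, giving the human reviewer a reason rather than a bare refusal.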
Human Oversight
Configurable approval workflows ensure appropriate human involvement at every decision level
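A configurable approval workflow reduces to a policy mapping decision levels to required sign-offs. The levels and counts below are example configuration, not the framework's defaults:

```python
# Example policy: distinct human approvals required per decision level.
APPROVAL_POLICY = {"routine": 0, "sensitive": 1, "critical": 2}

def is_cleared(level: str, approvals: set[str]) -> bool:
    """True once enough distinct humans have signed off for this level."""
    return len(approvals) >= APPROVAL_POLICY[level]
```

Using a set of approver identities (rather than a counter) prevents one person from approving twice to satisfy a two-person rule.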
Experience the Framework
Explore real-world incidents showing how governance structures AI behavior—from preventing technical failures to catching fabrications
The 27027 Incident
How pattern recognition bias causes AI to override explicit human instructions—and why this problem gets worse as models improve.
AI Fabrication Incident
Claude fabricated $3.77M worth of statistics and made false production claims. The framework detected the fabrication, documented its root causes, and created permanent safeguards.
Pre-Publication Security Audit
Framework prevented publication of sensitive information (internal paths, database names, infrastructure details) before it reached GitHub.
Live Framework Demonstration
Try the five core components yourself: classify instructions, test boundary enforcement, monitor context pressure, and validate cross-references.
Want to dive deeper?
Explore complete case studies, research topics, and implementation guides in our documentation
Browse All Documentation