A Starting Point

Instead of hoping AI systems "behave correctly," we propose structural constraints where certain decision types require human judgment. These architectural boundaries can adapt to individual, organizational, and societal norms—creating a foundation for bounded AI operation that may scale more safely with capability growth.

We recognize this is one small step in addressing AI safety challenges. Explore the framework through the lens that resonates with your work.

For AI safety researchers, academics, and scientists investigating LLM failure modes and governance architectures

Researcher

Academic & technical depth

Explore the theoretical foundations, formal guarantees, and scholarly context of the Tractatus framework.

  • Technical specifications & proofs
  • Academic research review
  • Failure mode analysis
  • Mathematical foundations
Explore Research
For software engineers, ML engineers, and technical teams building production AI systems

Implementer

Code & integration guides

Get hands-on with implementation guides, API documentation, and production-ready code examples.

  • Working code examples
  • API integration patterns
  • Service architecture diagrams
  • Deployment best practices
View Implementation Guide
For AI executives, research directors, startup founders, and strategic decision makers setting AI safety policy

Leader

Strategic AI Safety

Navigate the business case, compliance requirements, and competitive advantages of structural AI safety.

  • Executive briefing & business case
  • Risk management & compliance (EU AI Act)
  • Implementation roadmap & ROI
  • Competitive advantage analysis
View Leadership Resources

Framework Capabilities

Instruction Classification

Quadrant-based classification (STR/OPS/TAC/SYS/STO) with time-persistence metadata tagging
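
For illustration only, here is a minimal Python sketch of what a classified instruction record could carry. The class and field names are hypothetical, not the framework's actual API, and the five category codes are kept as opaque tags because this overview does not expand them.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class InstructionClass(Enum):
    # The five category codes named above; their expansions are not defined in this overview.
    STR = "STR"
    OPS = "OPS"
    TAC = "TAC"
    SYS = "SYS"
    STO = "STO"

@dataclass
class ClassifiedInstruction:
    text: str                         # the user's instruction, verbatim
    category: InstructionClass        # quadrant/category assignment
    persists: bool                    # time-persistence: does it outlive the current task?
    expires_at: Optional[str] = None  # ISO-8601 timestamp if the instruction is time-bounded
```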

Cross-Reference Validation

Validates AI actions against explicit user instructions to prevent pattern-based overrides
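
A minimal sketch of how such a check could be wired, assuming a hypothetical Instruction record and cross_reference helper; the framework's real interfaces are not shown here.

```python
from dataclasses import dataclass, field

@dataclass
class Instruction:
    text: str
    forbids: list[str] = field(default_factory=list)  # actions the user explicitly ruled out

def cross_reference(action_id: str, instructions: list[Instruction]) -> list[str]:
    """Return the explicit instructions a proposed action would violate.

    Any hit blocks execution until a human resolves the conflict.
    """
    return [i.text for i in instructions if action_id in i.forbids]

# Usage: an inferred "helpful" action is checked against what the user actually said.
recorded = [Instruction("Do not modify production data", forbids=["write_prod_db"])]
assert cross_reference("write_prod_db", recorded) == ["Do not modify production data"]
```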

Boundary Enforcement

Implements Tractatus 12.1-12.7 boundaries: decisions involving values architecturally require human judgment
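
One way such a boundary could be enforced in code, as a sketch only; the decision-type names and functions below are invented for illustration and are not taken from the framework.

```python
# Decision types that are never auto-approved; membership and names are illustrative.
HUMAN_ONLY_DECISIONS = {"values_tradeoff", "irreversible_action", "policy_change"}

def requires_human(decision_type: str) -> bool:
    return decision_type in HUMAN_ONLY_DECISIONS

def execute(decision_type: str, human_approved: bool = False) -> str:
    """Route value-laden decisions to a person; everything else proceeds normally."""
    if requires_human(decision_type) and not human_approved:
        return "ESCALATED"  # structurally handed to a human, not decided by the model
    return "EXECUTED"
```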

Pressure Monitoring

Detects degraded operating conditions (token pressure, errors, complexity) and adjusts verification
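
A rough sketch of how a pressure score might be computed and used to gate verification intensity; the signals, weights, and thresholds below are assumptions made for illustration.

```python
def pressure_score(tokens_used: int, token_budget: int,
                   recent_errors: int, task_complexity: float) -> float:
    """Combine degradation signals into a 0-1 score; weights are illustrative."""
    token_pressure = min(tokens_used / max(token_budget, 1), 1.0)
    error_pressure = min(recent_errors / 5.0, 1.0)
    return min(0.5 * token_pressure + 0.3 * error_pressure + 0.2 * task_complexity, 1.0)

def verification_level(score: float) -> str:
    """Higher pressure triggers stricter pre-execution checks."""
    if score > 0.75:
        return "full human review"
    if score > 0.4:
        return "extended self-verification"
    return "standard checks"
```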

Metacognitive Verification

The AI self-checks alignment, coherence, and safety before execution: a structural pause-and-verify step
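
A minimal sketch of what such a pause-and-verify gate could look like; the check dimensions mirror the three named above, while the class and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SelfCheck:
    aligned_with_instructions: bool  # does the planned action match what was asked?
    internally_coherent: bool        # is the reasoning consistent with itself?
    within_safety_bounds: bool       # does it stay inside the declared boundaries?

def pause_and_verify(check: SelfCheck) -> str:
    """Structural gate: execution proceeds only when every dimension passes."""
    if all((check.aligned_with_instructions,
            check.internally_coherent,
            check.within_safety_bounds)):
        return "proceed"
    return "halt and escalate"
```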

Human Oversight

Configurable approval workflows ensure appropriate human involvement at every decision level
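
As a sketch of how an approval workflow could be made configurable, a simple policy table mapping decision levels to required approvers; every name here is a placeholder.

```python
# Placeholder policy: which roles must sign off at each decision level.
APPROVAL_POLICY: dict[str, list[str]] = {
    "routine":   [],                                # no human sign-off required
    "sensitive": ["team_lead"],
    "values":    ["team_lead", "ethics_reviewer"],  # strictest chain
}

def approvers_for(decision_level: str) -> list[str]:
    """Look up the approval chain; unknown levels default to the strictest chain."""
    return APPROVAL_POLICY.get(decision_level, APPROVAL_POLICY["values"])
```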

Experience the Framework

Explore real-world incidents showing how governance structures AI behavior—from preventing technical failures to catching fabrications

Technical Failure · Original Incident

The 27027 Incident

How pattern recognition bias causes AI to override explicit human instructions—and why this problem gets worse as models improve.

Appeal: Researchers, technical teams
Shows: Cross-reference validation preventing instruction override
View 27027 Demo
Values Failure · Reactive Governance

AI Fabrication Incident

Claude fabricated $3.77M in statistics and made false production claims. The framework detected the fabrication, documented root causes, and created permanent safeguards.

Appeal: Leaders, advocates, implementers
Shows: Structured response turning failures into permanent learning
Read Case Study
Security Prevention · Proactive Governance

Pre-Publication Security Audit

The framework prevented publication of sensitive information (internal paths, database names, infrastructure details) before it reached GitHub.

Appeal: Security teams, implementers, compliance
Shows: Proactive prevention through structured review
View Security Audit
Interactive Demo · Hands-On Experience

Live Framework Demonstration

Try the five core components yourself: classify instructions, test boundary enforcement, monitor context pressure, validate cross-references, and run metacognitive verification.

Appeal: All audiences—learn by doing
Shows: All five framework components in action
Try Interactive Demo

Want to dive deeper?

Explore complete case studies, research topics, and implementation guides in our documentation

Browse All Documentation