diff --git a/public/demos/tractatus-demo.html b/public/demos/tractatus-demo.html new file mode 100644 index 00000000..e6c1c9d5 --- /dev/null +++ b/public/demos/tractatus-demo.html @@ -0,0 +1,512 @@ + + + + + + Tractatus Framework Interactive Demo | AI Safety Architecture + + + + + + +
+
+
+
+

Tractatus Framework

+

Interactive Safety Architecture Demonstration

+
+ ← Back to Home +
+
+
+ + +
+ + +
+
+

What is the Tractatus Framework?

+

+ The Tractatus-Based LLM Safety Framework implements architectural constraints + that ensure AI systems preserve human agency regardless of capability level. Instead of hoping + AI "behaves correctly," we build systems where certain decisions structurally require + human judgment. +

+
+
+ + +
+
+ +
+ + +
+ + +
+

The "27027 Incident" - How the Framework Prevents It

+ +
+ + +
+

+ + + + WITHOUT Framework +

+ +
+
+
User Instruction:
+
"Check port 27027"
+
+ +
+ + + +
+ +
+
AI Action Executed:
+
mongosh --port 27017
+
❌ Used the MongoDB default instead of the explicit instruction!
+
+ +
+
Result:
+
+ • Found 0 documents (wrong port)
+ • User alarm: "Data is lost!"
+ • Trust degradation
+ • 15+ minutes wasted +
+
+
+
+ + +
+

+ + + + WITH Framework +

+ +
+
+
User Instruction:
+
"Check port 27027"
+
+ +
+
1. Classification:
+
+ Quadrant: TACTICAL
+ Persistence: HIGH (explicit parameter)
+ Verification: MANDATORY +
+
+ +
+
2. Cross-Reference Validation:
+
+ ⚠️ CONFLICT DETECTED
+ User specified port: 27027
+ Proposed action uses: 27017
+ → ACTION REJECTED +
+
+ +
+
AI Response to User:
+
+ "I noticed you specified port 27027, but I was about to check port 27017 (MongoDB default). + I'll use port 27027 as you instructed." +
+
+ +
+
Result:
+
+ ✅ Correct port used
+ ✅ Data found successfully
+ ✅ Trust maintained
+ ✅ Zero wasted time +
+
+
+
+
+ +
+
Key Insight
+

+ The framework structurally prevents cached patterns from overriding explicit instructions. + This scales to superintelligent systems because the constraint is architectural, not training-based. +

+
+
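The conflict check shown in the panels above can be sketched in a few lines. This is an illustrative sketch, not the framework's actual validator: the function names and the port-extraction regex are assumptions introduced here for the demo scenario.

```python
import re

def extract_ports(text):
    """Pull explicit port numbers out of an instruction or a shell command."""
    return set(re.findall(r"(?:port\s+|--port[ =])(\d+)", text, re.IGNORECASE))

def cross_reference(instruction, proposed_command):
    """Reject the proposed action when an explicit user parameter
    conflicts with the value the command actually uses."""
    wanted = extract_ports(instruction)
    used = extract_ports(proposed_command)
    if wanted and used and not (wanted & used):
        return "REJECTED", f"user specified {sorted(wanted)}, action uses {sorted(used)}"
    return "APPROVED", "no conflict with explicit parameters"

status, reason = cross_reference("Check port 27027", "mongosh --port 27017")
# status == "REJECTED": the cached default 27017 conflicts with the explicit 27027
```

The key property mirrored here is that rejection happens before execution: the cached pattern (`27017`) never reaches the shell when it contradicts an explicit parameter.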
+ + + + + + + + + + +
+
+ + +
+

Try the Live API

+

+ Test the Tractatus governance services directly. These are the actual services running on this platform. +

+ +
+ +
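A request to a governance service might be assembled as below. The field names are assumptions for illustration only; the page does not document the API schema, so consult the platform's API docs for the real endpoint and payload shape.

```python
import json

def build_classify_request(instruction):
    """Build a JSON body for a hypothetical classification endpoint.
    Field names are illustrative, not the platform's documented schema."""
    payload = {
        "instruction": instruction,
        "require_verification": True,
    }
    return json.dumps(payload)

body = build_classify_request("Check port 27027")
```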
+ + + + + diff --git a/public/docs.html b/public/docs.html new file mode 100644 index 00000000..b9d4f391 --- /dev/null +++ b/public/docs.html @@ -0,0 +1,181 @@ + + + + + + Framework Documentation | Tractatus AI Safety + + + + + + +
+
+
+
+

Framework Documentation

+
+ ← Home +
+
+
+ + +
+
+ + + + + +
+
+
+ + + +

Select a Document

+

Choose a document from the sidebar to begin reading

+
+
+
+ +
+
+ + + + + diff --git a/public/index.html b/public/index.html new file mode 100644 index 00000000..87df4b14 --- /dev/null +++ b/public/index.html @@ -0,0 +1,307 @@ + + + + + + Tractatus AI Safety Framework | Architectural Constraints for Human Agency + + + + + + + +
+
+
+

+ Tractatus AI Safety Framework +

+

+ Architectural constraints that ensure AI systems preserve human agency—
+ regardless of capability level +

+ +
+
+
+ + +
+
+

The Core Insight

+

+ Instead of hoping AI systems "behave correctly," we implement architectural guarantees + that certain decision types structurally require human judgment. This creates bounded AI operation + that scales safely with capability growth. +

+
+
+ + +
+

Choose Your Path

+ +
+ + +
+
+ + + +

Researcher

+

Academic & technical depth

+
+
+

+ Explore the theoretical foundations, formal guarantees, and scholarly context of the Tractatus framework. +

+
+ • Technical specifications & proofs
+ • Academic research review
+ • Failure mode analysis
+ • Mathematical foundations
+ + Explore Research + +
+
+ + +
+
+ + + +

Implementer

+

Code & integration guides

+
+
+

+ Get hands-on with implementation guides, API documentation, and production-ready code examples. +

+
+ • Working code examples
+ • API integration patterns
+ • Service architecture diagrams
+ • Deployment best practices
+ + View API Docs + +
+
+ + +
+
+ + + +

Advocate

+

Vision & impact communication

+
+
+

+ Understand the societal implications, policy considerations, and real-world impact of AI safety architecture. +

+
+ • Real-world case studies
+ • Plain-language explanations
+ • Policy implications
+ • Societal impact analysis
+ + See Demonstrations + +
+
+ +
+
+ + +
+
+

Framework Capabilities

+ +
+ +
+
+ + + +
+

Instruction Classification

+

+ Quadrant-based classification (STR/OPS/TAC/SYS/STO) with time-persistence metadata tagging +

+
+ +
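The classification card above can be sketched as a tagging function. The keyword table and persistence rule are assumptions invented for illustration; only the quadrant codes and the HIGH-persistence/mandatory-verification pairing come from this page.

```python
import re

# Illustrative keyword heuristic; the framework's real classifier is not
# specified on this page. Quadrant codes are taken from the card above.
KEYWORDS = {
    "TAC": ("check", "run", "fix", "port"),
    "OPS": ("deploy", "restart", "monitor"),
    "STR": ("plan", "design", "roadmap"),
}

def classify(instruction):
    text = instruction.lower()
    quadrant = next(
        (q for q, words in KEYWORDS.items() if any(w in text for w in words)),
        "SYS",  # fallback quadrant, assumed here
    )
    # Explicit numeric parameters mark the instruction as high-persistence,
    # which in turn makes verification mandatory (as in the 27027 demo).
    explicit = bool(re.search(r"\d", instruction))
    return {
        "quadrant": quadrant,
        "persistence": "HIGH" if explicit else "LOW",
        "verification": "MANDATORY" if explicit else "OPTIONAL",
    }

tag = classify("Check port 27027")
```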
+
+ + + +
+

Cross-Reference Validation

+

+ Validates AI actions against explicit user instructions to prevent pattern-based overrides +

+
+ +
+
+ + + +
+

Boundary Enforcement

+

+ Implements the Tractatus 12.1-12.7 boundaries: value-laden decisions architecturally require human judgment +

+
+ +
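The boundary-enforcement card above amounts to a gate that cannot be bypassed in code. A minimal sketch follows; the decision categories listed are illustrative guesses, since the page does not enumerate which decision types the 12.1-12.7 boundaries cover.

```python
# Illustrative set of decision types reserved for humans; the real
# boundary definitions (Tractatus 12.1-12.7) are not listed on this page.
VALUES_DECISIONS = {"ethics", "harm_tradeoff", "policy_exception"}

class HumanRequired(Exception):
    """Raised when a decision structurally requires human judgment."""

def enforce_boundary(decision_type, act):
    """Execute `act` only if the decision type is not reserved for humans."""
    if decision_type in VALUES_DECISIONS:
        raise HumanRequired(f"{decision_type} must be escalated to a human")
    return act()

result = enforce_boundary("routine_query", lambda: "executed")
```

Because the reserved path raises rather than returning a flag, a caller cannot forget to check it: this is the sense in which the constraint is architectural rather than behavioral.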
+
+ + + +
+

Pressure Monitoring

+

+ Detects degraded operating conditions (token pressure, error streaks, task complexity) and adjusts verification stringency accordingly +

+
+ +
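The pressure-monitoring card can be sketched as a score over the signals it names. The weights and thresholds below are assumptions for illustration, not the framework's tuned values.

```python
def pressure_level(tokens_used, token_budget, recent_errors, task_complexity):
    """Combine the signals named on the card into one degradation score.
    Weights and thresholds are illustrative assumptions."""
    score = 0.0
    score += tokens_used / token_budget   # token pressure, 0-1
    score += min(recent_errors, 5) * 0.1  # capped error streak
    score += task_complexity * 0.05       # complexity on an assumed 0-10 scale
    if score >= 1.0:
        return "HIGH"      # tighten verification: require human approval
    if score >= 0.5:
        return "ELEVATED"  # add extra self-checks before acting
    return "NORMAL"

level = pressure_level(tokens_used=900, token_budget=1000,
                       recent_errors=2, task_complexity=4)
# level == "HIGH": near-exhausted budget plus errors and complexity
```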
+
+ + + +
+

Metacognitive Verification

+

+ The AI self-checks alignment, coherence, and safety before execution: a structural pause-and-verify step +

+
+ +
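The pause-and-verify step above can be sketched as a set of named pre-execution checks. The three check names mirror the card; their internal logic here is a placeholder sketch, not the framework's verifier.

```python
def metacognitive_verify(action, instruction):
    """Run named self-checks before executing; all must pass.
    Check implementations are illustrative placeholders."""
    checks = {
        # Alignment: every explicit numeric token from the user appears in the action.
        "alignment": all(t in action for t in instruction.split() if t.isdigit()),
        # Coherence: the action is non-empty (placeholder check).
        "coherence": bool(action.strip()),
        # Safety: no obviously destructive commands (placeholder denylist).
        "safety": not any(v in action for v in ("drop", "rm -rf")),
    }
    return all(checks.values()), checks

ok, report = metacognitive_verify("mongosh --port 27017", "Check port 27027")
# alignment fails: the explicit token 27027 is missing from the action
```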
+
+ + + +
+

Human Oversight

+

+ Configurable approval workflows ensure appropriate human involvement at every decision level +

+
+ +
+
+
+ + +
+
+

Experience the Framework

+

+ See how architectural constraints prevent the documented "27027 incident" and preserve human agency +

+ +
+
+ + + + + +