{ "timestamp": "2025-10-08T00:00:23.287Z", "summary": { "pagesTested": 9, "averageLoadTime": 1, "averageSize": 16.2, "fast": 9, "medium": 0, "slow": 0 }, "results": [ { "name": "Homepage", "url": "http://localhost:9000/", "statusCode": 200, "firstByteTime": 7, "totalTime": 7, "size": 20868, "data": "\n\n\n \n \n Tractatus AI Safety Framework | Architectural Constraints for Human Agency\n \n \n \n\n\n\n \n Skip to main content\n\n \n \n\n \n
\n
\n
\n
\n

\n Tractatus AI Safety Framework\n

\n

\n Architectural constraints that ensure AI systems preserve human agency—
\n regardless of capability level\n

\n \n
\n
\n
\n
\n\n \n
\n\n \n
\n
\n

The Core Insight

\n

\n Instead of hoping AI systems \"behave correctly,\" we implement architectural guarantees\n that certain decision types structurally require human judgment. This creates bounded AI operation\n that scales safely with capability growth.\n

\n
\n
\n\n \n
\n

Choose Your Path

\n\n
\n\n \n
\n
\n \n \n \n

Researcher

\n

Academic & technical depth

\n
\n
\n

\n Explore the theoretical foundations, formal guarantees, and scholarly context of the Tractatus framework.\n

\n
    \n
  • \n \n Technical specifications & proofs\n
  • \n
  • \n \n Academic research review\n
  • \n
  • \n \n Failure mode analysis\n
  • \n
  • \n \n Mathematical foundations\n
  • \n
\n \n Explore Research\n \n
\n
\n\n \n
\n
\n \n \n \n

Implementer

\n

Code & integration guides

\n
\n
\n

\n Get hands-on with implementation guides, API documentation, and production-ready code examples.\n

\n
    \n
  • \n \n Working code examples\n
  • \n
  • \n \n API integration patterns\n
  • \n
  • \n \n Service architecture diagrams\n
  • \n
  • \n \n Deployment best practices\n
  • \n
\n \n View Implementation Guide\n \n
\n
\n\n \n
\n
\n \n \n \n

Advocate

\n

Vision & impact communication

\n
\n
\n

\n Understand the societal implications, policy considerations, and real-world impact of AI safety architecture.\n

\n
    \n
  • \n \n Real-world case studies\n
  • \n
  • \n \n Plain-language explanations\n
  • \n
  • \n \n Policy implications\n
  • \n
  • \n \n Societal impact analysis\n
  • \n
\n \n Join the Movement\n \n
\n
\n\n
\n
\n\n \n
\n
\n

Framework Capabilities

\n\n
\n\n
\n
\n \n \n \n
\n

Instruction Classification

\n

\n Quadrant-based classification (STR/OPS/TAC/SYS/STO) with time-persistence metadata tagging\n

\n
\n\n
\n
\n \n \n \n
\n

Cross-Reference Validation

\n

\n Validates AI actions against explicit user instructions to prevent pattern-based overrides\n

\n
\n\n
\n
\n \n \n \n
\n

Boundary Enforcement

\n

\n Implements Tractatus 12.1-12.7 boundaries: values decisions architecturally require human judgment\n

\n
\n\n
\n
\n \n \n \n
\n

Pressure Monitoring

\n

\n Detects degraded operating conditions (token pressure, errors, complexity) and adjusts verification\n

\n
\n\n
\n
\n \n \n \n
\n

Metacognitive Verification

\n

\n AI self-checks alignment, coherence, and safety before execution: a structural pause-and-verify step\n

\n
\n\n
\n
\n \n \n \n
\n

Human Oversight

\n

\n Configurable approval workflows ensure appropriate human involvement at every decision level\n

\n
\n\n
\n
\n
\n\n \n
\n
\n

Experience the Framework

\n

\n See how architectural constraints prevent the documented \"27027 incident\" and ensure human agency preservation\n

\n \n
\n
\n\n
\n\n \n \n\n\n\n", "inlineScripts": 0, "totalStyleLength": 730, "images": 0, "externalCSS": 1, "externalJS": 1, "issues": [] }, { "name": "Researcher", "url": "http://localhost:9000/researcher.html", "statusCode": 200, "firstByteTime": 1, "totalTime": 1, "size": 16952, "data": "\n\n\n \n \n For Researchers | Tractatus AI Safety Framework\n \n \n \n\n\n\n \n Skip to main content\n\n \n \n\n \n
\n
\n
\n

\n AI Safety Through
Architectural Constraints\n

\n

\n Exploring the theoretical foundations and empirical validation of structural AI safety—preserving human agency through formal guarantees, not aspirational goals.\n

\n \n
\n
\n
\n\n \n
\n

Research Focus Areas

\n\n
\n \n
\n
\n \n \n \n
\n

Theoretical Foundations

\n

\n Formal specification of the Tractatus boundary: where systematization ends and human judgment begins. Rooted in Wittgenstein's linguistic philosophy.\n

\n
    \n
  • \n \n Boundary delineation principles\n
  • \n
  • \n \n Values irreducibility proofs\n
  • \n
  • \n \n Agency preservation guarantees\n
  • \n
\n
\n\n \n
\n
\n \n \n \n
\n

Architectural Analysis

\n

\n Five-component framework architecture: classification, validation, boundary enforcement, pressure monitoring, metacognitive verification.\n

\n
    \n
  • \n \n InstructionPersistenceClassifier\n
  • \n
  • \n \n CrossReferenceValidator\n
  • \n
  • \n \n BoundaryEnforcer\n
  • \n
  • \n \n ContextPressureMonitor\n
  • \n
  • \n \n MetacognitiveVerifier\n
  • \n
\n
\n\n \n
\n
\n \n \n \n
\n

Empirical Validation

\n

\n Real-world failure case analysis and prevention validation. Documented incidents where traditional AI safety approaches failed.\n

\n
    \n
  • \n \n The 27027 Incident (pattern recognition bias override)\n
  • \n
  • \n \n Privacy creep detection\n
  • \n
  • \n \n Silent degradation prevention\n
  • \n
\n
\n
\n
\n\n \n
\n
\n

Interactive Demonstrations

\n\n \n
\n
\n\n \n
\n

Documented Failure Cases

\n\n
\n
\n
\n
\n

The 27027 Incident

\n

\n User instructed \"Check port 27027\" but AI immediately used 27017 instead—pattern recognition bias overrode explicit instruction. Not forgetting; immediate autocorrection by training patterns. Prevented by InstructionPersistenceClassifier + CrossReferenceValidator.\n

\n
\n Failure Type: Pattern Recognition Bias\n Prevention: Explicit instruction storage + validation\n
\n
\n Interactive demo →\n
\n
\n\n
\n
\n
\n

Privacy Creep Detection

\n

\n AI suggested analytics that violated privacy-first principle. Gradual values drift over 40-message conversation. Prevented by BoundaryEnforcer.\n

\n
\n Failure Type: Values Drift\n Prevention: STRATEGIC boundary check\n
\n
\n See case studies doc →\n
\n
\n\n
\n
\n
\n

Silent Quality Degradation

\n

\n Context pressure at 82% caused AI to skip error handling silently. No warning to user. Prevented by ContextPressureMonitor.\n

\n
\n Failure Type: Silent Degradation\n Prevention: CRITICAL pressure detection\n
\n
\n See case studies doc →\n
\n
\n
\n
\n\n \n
\n
\n

Research Resources

\n\n
\n \n\n
\n

Contribute to Research

\n

\n This framework is open for academic collaboration and empirical validation studies.\n

\n
    \n
  • • Submit failure cases for analysis
  • \n
  • • Propose theoretical extensions
  • \n
  • • Validate architectural constraints
  • \n
  • • Explore boundary formalization
  • \n
\n \n Submit Case Study →\n \n
\n
\n
\n
\n\n \n
\n
\n

Join the Research Community

\n

\n Help advance AI safety through empirical validation and theoretical exploration.\n

\n \n
\n
\n\n \n \n\n\n\n", "inlineScripts": 0, "totalStyleLength": 542, "images": 0, "externalCSS": 1, "externalJS": 1, "issues": [] }, { "name": "Implementer", "url": "http://localhost:9000/implementer.html", "statusCode": 200, "firstByteTime": 0, "totalTime": 0, "size": 21831, "data": "\n\n\n \n \n For Implementers | Tractatus AI Safety Framework\n \n \n \n\n\n\n \n Skip to main content\n\n \n \n\n \n
\n
\n
\n

\n Production-Ready
AI Safety\n

\n

\n Integrate Tractatus framework into your AI systems with practical guides, code examples, and battle-tested patterns for real-world deployment.\n

\n \n
\n
\n
\n\n \n
\n

Integration Approaches

\n\n
\n \n
\n

Full Stack

\n

\n Complete framework integration for new AI-powered applications. All five services active with persistent instruction storage.\n

\n
    \n
  • \n \n \n \n Instruction classification & storage\n
  • \n
  • \n \n \n \n Cross-reference validation\n
  • \n
  • \n \n \n \n Boundary enforcement\n
  • \n
  • \n \n \n \n Context pressure monitoring\n
  • \n
\n
Best for: New projects, greenfield AI applications
\n
\n\n \n
\n

Middleware Layer

\n

\n Add Tractatus validation as middleware in existing AI pipelines. Non-invasive integration with gradual rollout support.\n

\n
    \n
  • \n \n \n \n Drop-in Express/Koa middleware\n
  • \n
  • \n \n \n \n Monitor mode (log only)\n
  • \n
  • \n \n \n \n Gradual enforcement rollout\n
  • \n
  • \n \n \n \n Compatible with existing auth\n
  • \n
\n
Best for: Existing production AI systems
\n
\n\n \n
\n

Selective Components

\n

\n Use individual Tractatus services à la carte. Mix and match components based on your specific safety requirements.\n

\n
    \n
  • \n \n \n \n Standalone pressure monitoring\n
  • \n
  • \n \n \n \n Boundary checks only\n
  • \n
  • \n \n \n \n Classification without storage\n
  • \n
  • \n \n \n \n Custom component combinations\n
  • \n
\n
Best for: Specific safety requirements
\n
\n
\n
\n\n \n
\n
\n

Quick Start Guide

\n\n
\n \n
\n
\n 1\n

Installation

\n
\n
npm install @tractatus/framework\n# or\nyarn add @tractatus/framework
\n

Install the framework package and its dependencies (MongoDB for instruction storage).

\n
\n\n \n
\n
\n 2\n

Initialize Services

\n
\n
const { TractatusFramework } = require('@tractatus/framework');\n\nconst tractatus = new TractatusFramework({\n  mongoUri: process.env.MONGODB_URI,\n  verbosity: 'SUMMARY', // or 'VERBOSE', 'SILENT'\n  components: {\n    classifier: true,\n    validator: true,\n    boundary: true,\n    pressure: true,\n    metacognitive: 'selective'\n  }\n});\n\nawait tractatus.initialize();
\n

Configure and initialize the framework with your preferred settings.

\n
\n\n \n
\n
\n 3\n

Classify Instructions

\n
\n
const instruction = \"Always use MongoDB on port 27017\";\n\nconst classification = await tractatus.classify(instruction);\n// {\n//   quadrant: 'SYSTEM',\n//   persistence: 'HIGH',\n//   temporal_scope: 'PROJECT',\n//   verification_required: 'MANDATORY',\n//   explicitness: 0.85\n// }\n\nif (classification.explicitness >= 0.6) {\n  await tractatus.store(instruction, classification);\n}
\n

Classify user instructions and store those that meet explicitness threshold.

\n
\n\n \n
\n
\n 4\n

Validate Actions

\n
\n
// User instructed: \"Check MongoDB at port 27027\"\n// But AI about to use port 27017 (pattern recognition bias)\n\nconst action = {\n  type: 'db_config',\n  parameters: { port: 27017 } // Pattern override!\n};\n\nconst validation = await tractatus.validate(action);\n\nif (validation.status === 'REJECTED') {\n  // \"Port 27017 conflicts with instruction: use port 27027\"\n  console.error(`Validation failed: ${validation.reason}`);\n  console.log(`Using instructed port: ${validation.correct_parameters.port}`);\n  // Use correct port 27027\n} else {\n  executeAction(action);\n}
\n

Validate AI actions against stored instructions before execution.

\n
\n\n \n
\n
\n 5\n

Enforce Boundaries

\n
\n
// Check if decision crosses Tractatus boundary\nconst decision = {\n  domain: 'values',\n  description: 'Change privacy policy to enable analytics'\n};\n\nconst boundary = await tractatus.checkBoundary(decision);\n\nif (!boundary.allowed) {\n  // Requires human decision\n  await notifyHuman({\n    decision,\n    reason: boundary.reason,\n    alternatives: boundary.ai_can_provide\n  });\n}
\n

Enforce boundaries: AI cannot make values decisions without human approval.

\n
\n
\n
\n
\n\n \n
\n

Integration Patterns

\n\n
\n
\n

Express Middleware

\n
app.use(tractatus.middleware({\n  mode: 'enforce', // or 'monitor'\n  onViolation: async (req, res, violation) => {\n    await logViolation(violation);\n    res.status(403).json({\n      error: 'Tractatus boundary violation',\n      reason: violation.reason\n    });\n  }\n}));
\n
\n\n
\n

Content Moderation

\n
async function moderateContent(content) {\n  const decision = {\n    domain: 'values',\n    action: 'auto_moderate',\n    content\n  };\n\n  const check = await tractatus.checkBoundary(decision);\n\n  if (!check.allowed) {\n    return { action: 'human_review', alternatives: check.ai_can_provide };\n  }\n}
\n
\n\n
\n

Pressure Monitoring

\n
const pressure = await tractatus.checkPressure({\n  tokens: 150000,\n  messages: 45,\n  errors: 2\n});\n\nif (pressure.level === 'CRITICAL') {\n  await suggestHandoff(pressure.recommendations);\n} else if (pressure.level === 'HIGH') {\n  await increaseVerification();\n}
\n
\n\n
\n

Custom Classification

\n
const customClassifier = {\n  patterns: {\n    CRITICAL: /security|auth|password/i,\n    HIGH: /database|config|api/i\n  },\n\n  classify(text) {\n    for (const [level, pattern] of Object.entries(this.patterns)) {\n      if (pattern.test(text)) return level;\n    }\n    return 'MEDIUM';\n  }\n};
\n
\n
\n
\n\n \n
\n
\n

Implementation Resources

\n\n
\n \n\n \n\n
\n

Support

\n

\n Get help with implementation, integration, and troubleshooting.\n

\n
    \n
  • • GitHub Issues & Discussions
  • \n
  • • Implementation consulting
  • \n
  • • Community Slack channel
  • \n
\n
\n
\n
\n
\n\n \n
\n
\n

Ready to Implement?

\n

\n Join organizations building safer AI systems with architectural guarantees.\n

\n \n
\n
\n\n \n \n\n\n\n", "inlineScripts": 0, "totalStyleLength": 542, "images": 0, "externalCSS": 1, "externalJS": 1, "issues": [] }, { "name": "Advocate", "url": "http://localhost:9000/advocate.html", "statusCode": 200, "firstByteTime": 0, "totalTime": 0, "size": 19318, "data": "\n\n\n \n \n For Advocates | Tractatus AI Safety Framework\n \n \n\nSkip to main content\n\n\n \n \n\n \n
\n
\n
\n

\n AI Safety as
Human Sovereignty\n

\n

\n Join the movement for AI systems that preserve human agency through structural guarantees, not corporate promises. Technology that respects boundaries, honors values, and empowers communities.\n

\n \n
\n
\n
\n\n \n
\n

Core Values

\n\n
\n \n
\n
\n
\n \n \n \n
\n
\n

Human Sovereignty

\n

\n AI must never make values decisions without human approval. Some choices—privacy vs. convenience, user agency, cultural context—cannot be systematized. They require human judgment, always.\n

\n
\n
\n
\n \"What cannot be systematized must not be automated.\"\n
\n
\n\n \n
\n
\n
\n \n \n \n
\n
\n

Digital Sovereignty

\n

\n Communities and individuals must control their own data and AI systems. No corporate surveillance, no centralized control. Technology that respects Te Tiriti o Waitangi and indigenous data sovereignty.\n

\n
\n
\n
\n \"Technology serves communities, not corporations.\"\n
\n
\n\n \n
\n
\n
\n \n \n \n \n
\n
\n

Radical Transparency

\n

\n All AI decisions must be explainable, auditable, and reversible. No black boxes. Users deserve to understand why AI systems make the choices they do, and have the power to override them.\n

\n
\n
\n
\n \"Transparency builds trust, opacity breeds harm.\"\n
\n
\n\n \n
\n
\n
\n \n \n \n
\n
\n

Community Empowerment

\n

\n AI safety is not a technical problem—it's a social one. Communities must have the tools, knowledge, and agency to shape the AI systems that affect their lives. No tech paternalism.\n

\n
\n
\n
\n \"Those affected by AI must have power over AI.\"\n
\n
\n
\n
\n\n \n
\n
\n

Why Tractatus Matters

\n\n
\n
\n
0
\n
Values decisions automated without human approval
\n
\n
\n
100%
\n
Boundary enforcement through architecture, not promises
\n
\n
\n
\n
Human agency preserved across all interactions
\n
\n
\n\n
\n

The Current Problem

\n

\n Existing AI safety approaches rely on training, fine-tuning, and corporate governance—all of which can fail, drift, or be overridden. Tractatus is different: safety through architecture.\n

\n
\n
\n

❌ Traditional Approaches

\n
    \n
  • • Rely on AI \"learning\" not to cause harm
  • \n
  • • Can drift over time (values creep)
  • \n
  • • Black box decision-making
  • \n
  • • Corporate promises, no guarantees
  • \n
\n
\n
\n

✅ Tractatus Framework

\n
    \n
  • • Structural constraints prevent harm
  • \n
  • • Persistent validation against instructions
  • \n
  • • Transparent boundary enforcement
  • \n
  • • Architectural guarantees, not training
  • \n
\n
\n
\n
\n
\n
\n\n \n
\n
\n

Get Involved

\n\n
\n
\n

Share the Framework

\n

\n Help spread awareness about architectural AI safety and the importance of preserving human agency.\n

\n
    \n
  • • Share on social media
  • \n
  • • Present at conferences
  • \n
  • • Write blog posts
  • \n
  • • Organize community workshops
  • \n
\n
\n\n
\n

Advocate for Standards

\n

\n Push organizations and policymakers to adopt structural AI safety requirements.\n

\n
    \n
  • • Contact representatives
  • \n
  • • Propose policy frameworks
  • \n
  • • Join advocacy coalitions
  • \n
  • • Support aligned organizations
  • \n
\n
\n\n
\n

Build the Community

\n

\n Join others working toward AI systems that preserve human sovereignty and dignity.\n

\n
    \n
  • • Contribute to documentation
  • \n
  • • Submit case studies
  • \n
  • • Participate in discussions
  • \n
  • • Mentor new advocates
  • \n
\n \n
\n
\n
\n
\n\n \n
\n

Real-World Impact

\n\n
\n
\n

Preventing the 27027 Incident

\n

\n User said \"Check port 27027\" but AI immediately used 27017—pattern recognition bias overrode explicit instruction. Not forgetting; AI's training patterns \"autocorrected\" the user. Result: 2+ hours debugging, production blocker, loss of trust.\n

\n

\n ✓ Tractatus prevention: InstructionPersistenceClassifier stores explicit instruction, CrossReferenceValidator blocks pattern override BEFORE execution. Zero debugging time, zero production impact.\n

\n
\n\n
\n

Stopping Privacy Creep

\n

\n Over 40-message conversation, AI gradually suggested analytics features that violated user's explicit \"privacy-first\" principle. Subtle values drift went unnoticed until deployment.\n

\n

\n ✓ Tractatus prevention: BoundaryEnforcer blocked analytics suggestion immediately. Privacy vs. analytics is a values trade-off requiring human decision.\n

\n
\n\n
\n

Detecting Silent Degradation

\n

\n At 82% context pressure, AI silently omitted error handling to \"simplify\" implementation. No warning to user, resulted in production crashes when edge cases hit.\n

\n

\n ✓ Tractatus prevention: ContextPressureMonitor flagged CRITICAL pressure. Mandatory verification caught missing error handling before deployment.\n

\n
\n
\n
\n\n \n
\n
\n

Resources for Advocates

\n\n
\n
\n

Educational Materials

\n \n
\n\n
\n

Advocacy Toolkit

\n
    \n
  • • Presentation templates & slides
  • \n
  • • Policy proposal frameworks
  • \n
  • • Media talking points
  • \n
  • • Community workshop guides
  • \n
  • • Social media graphics
  • \n
  • • Case study summaries
  • \n
\n
\n
\n
\n
\n\n \n
\n
\n

Join the Movement

\n

\n Help build a future where AI preserves human agency and serves communities, not corporations.\n

\n \n
\n
\n\n \n \n\n\n\n", "inlineScripts": 0, "totalStyleLength": 0, "images": 0, "externalCSS": 1, "externalJS": 1, "issues": [] }, { "name": "About", "url": "http://localhost:9000/about.html", "statusCode": 200, "firstByteTime": 1, "totalTime": 1, "size": 14506, "data": "\n\n\n \n \n About | Tractatus AI Safety Framework\n \n \n \n\n\n\n \n Skip to main content\n\n \n \n\n \n
\n
\n
\n

\n About Tractatus\n

\n

\n A framework for AI safety through architectural constraints, preserving human agency where it matters most.\n

\n
\n
\n
\n\n \n
\n
\n

Our Mission

\n
\n

\n The Tractatus Framework exists to address a fundamental problem in AI safety: current approaches rely on training, fine-tuning, and corporate governance—all of which can fail, drift, or be overridden. We propose safety through architecture.\n

\n

\n Inspired by Ludwig Wittgenstein's Tractatus Logico-Philosophicus, our framework recognizes that some domains—values, ethics, cultural context, human agency—cannot be systematized. What cannot be systematized must not be automated. AI systems should have structural constraints that prevent them from crossing these boundaries.\n

\n
\n \"Whereof one cannot speak, thereof one must be silent.\"
\n — Ludwig Wittgenstein, Tractatus (§7)\n
\n

\n Applied to AI: \"What cannot be systematized must not be automated.\"\n

\n
\n
\n\n \n
\n

Core Values

\n
\n
\n

Sovereignty

\n

\n Individuals and communities must maintain control over decisions affecting their data, privacy, and values. AI systems must preserve human agency, not erode it.\n

\n
\n\n
\n

Transparency

\n

\n All AI decisions must be explainable, auditable, and reversible. No black boxes. Users deserve to understand how and why systems make choices, and have power to override them.\n

\n
\n\n
\n

Harmlessness

\n

\n AI systems must not cause harm through action or inaction. This includes preventing drift, detecting degradation, and enforcing boundaries against values erosion.\n

\n
\n\n
\n

Community

\n

\n AI safety is a collective endeavor. We are committed to open collaboration, knowledge sharing, and empowering communities to shape the AI systems that affect their lives.\n

\n
\n
\n\n \n
\n\n \n
\n

How It Works

\n
\n

\n The Tractatus Framework consists of five integrated components that work together to enforce structural safety:\n

\n
\n\n
\n
\n

InstructionPersistenceClassifier

\n

\n Classifies instructions by quadrant (Strategic, Operational, Tactical, System, Stochastic) and determines persistence level (HIGH/MEDIUM/LOW/VARIABLE).\n

\n
\n\n
\n

CrossReferenceValidator

\n

\n Validates AI actions against stored instructions to prevent pattern recognition bias (like the 27027 incident, where the AI's training patterns immediately overrode the user's explicit \"port 27027\" instruction).\n

\n
\n\n
\n

BoundaryEnforcer

\n

\n Ensures AI never makes values decisions without human approval. Privacy trade-offs, user agency, cultural context—these require human judgment.\n

\n
\n\n
\n

ContextPressureMonitor

\n

\n Detects when session conditions increase error probability (token pressure, message length, task complexity) and adjusts behavior or suggests handoff.\n

\n
\n\n
\n

MetacognitiveVerifier

\n

\n AI self-checks complex reasoning before proposing actions. Evaluates alignment, coherence, completeness, safety, and alternatives.\n

\n
\n
\n\n \n
\n\n \n
\n

Origin Story

\n
\n

\n The Tractatus Framework emerged from real-world AI failures experienced during extended Claude Code sessions. The \"27027 incident\"—where AI's training patterns immediately overrode an explicit instruction (user said \"port 27027\", AI used \"port 27017\")—revealed that traditional safety approaches were insufficient. This wasn't forgetting; it was pattern recognition bias autocorrecting the user.\n

\n

\n After documenting multiple failure modes (pattern recognition bias, values drift, silent degradation), we recognized a pattern: AI systems lacked structural constraints. They could theoretically \"learn\" safety, but in practice their training patterns overrode explicit instructions, and the problem worsens as capabilities increase.\n

\n

\n The solution wasn't better training—it was architecture. Drawing inspiration from Wittgenstein's insight that some things lie beyond the limits of language (and thus systematization), we built a framework that enforces boundaries through structure, not aspiration.\n

\n
\n
\n\n \n
\n

License & Contribution

\n
\n

\n The Tractatus Framework is open source under the Apache License 2.0. We encourage:\n

\n
    \n
  • Academic research and validation studies
  • \n
  • Implementation in production AI systems
  • \n
  • Submission of failure case studies
  • \n
  • Theoretical extensions and improvements
  • \n
  • Community collaboration and knowledge sharing
  • \n
\n

\n The framework is intentionally permissive because AI safety benefits from transparency and collective improvement, not proprietary control.\n

\n

Why Apache 2.0?

\n

\n We chose Apache 2.0 over MIT because it provides:\n

\n
    \n
  • Patent Protection: Explicit patent grant protects users from patent litigation by contributors
  • \n
  • Contributor Clarity: Clear terms for how contributions are licensed
  • \n
  • Permissive Use: Like MIT, allows commercial use and inclusion in proprietary products
  • \n
  • Community Standard: Widely used in AI/ML projects (TensorFlow, PyTorch, Apache Spark)
  • \n
\n

\n View full Apache 2.0 License →\n

\n
\n
\n \n\n \n
\n
\n

Join the Movement

\n

\n Help build AI systems that preserve human agency through architectural guarantees.\n

\n \n
\n
\n\n \n \n\n\n\n", "inlineScripts": 0, "totalStyleLength": 542, "images": 0, "externalCSS": 1, "externalJS": 1, "issues": [] }, { "name": "Values", "url": "http://localhost:9000/about/values.html", "statusCode": 200, "firstByteTime": 1, "totalTime": 1, "size": 23061, "data": "\n\n\n \n \n Values & Principles | Tractatus AI Safety Framework\n \n \n \n\n\n\n \n Skip to main content\n\n \n \n\n \n
\n
\n
\n

\n Values & Principles\n

\n

\n The foundational values that guide the Tractatus Framework's development, governance, and community.\n

\n
\n
\n
\n\n \n
\n \n \n\n \n
\n\n \n
\n

Core Values

\n
\n

\n These four values form the foundation of the Tractatus Framework. They are not aspirational—they are architectural. The framework is designed to enforce these values through structure, not training.\n

\n
\n\n \n
\n

1. Sovereignty

\n

\n Principle: Individuals and communities must maintain control over decisions affecting their data, privacy, values, and agency. AI systems must preserve human sovereignty, not erode it.\n

\n\n

What This Means in Practice:

\n
    \n
  • AI cannot make values trade-offs (e.g., privacy vs. convenience) without human approval
  • \n
  • Users can always override AI decisions
  • \n
  • No \"dark patterns\" or manipulative design that undermines agency
  • \n
  • Communities control their own data and AI systems
  • \n
  • No paternalistic \"AI knows best\" approaches
  • \n
\n\n

Framework Implementation:

\n
    \n
  • BoundaryEnforcer blocks values decisions requiring human judgment
  • \n
  • InstructionPersistenceClassifier respects STRATEGIC and HIGH persistence instructions
  • \n
  • All decisions are reversible and auditable
  • \n
\n
\n\n \n
\n

2. Transparency

\n

\n Principle: All AI decisions must be explainable, auditable, and reversible. No black boxes. Users deserve to understand how and why systems make choices.\n

\n\n

What This Means in Practice:

\n
    \n
  • Every AI decision includes reasoning and evidence
  • \n
  • Users can inspect instruction history and classification
  • \n
  • All boundary checks and validations are logged
  • \n
  • No hidden optimization goals or secret constraints
  • \n
  • Source code is open and auditable
  • \n
\n\n

Framework Implementation:

\n
    \n
  • CrossReferenceValidator shows which instruction conflicts with proposed action
  • \n
  • MetacognitiveVerifier provides reasoning analysis and confidence scores
  • \n
  • All framework decisions include explanatory output
  • \n
\n
\n\n \n
\n

3. Harmlessness

\n

\n Principle: AI systems must not cause harm through action or inaction. This includes preventing drift, detecting degradation, and enforcing boundaries against values erosion.\n

\n\n

What This Means in Practice:

\n
    \n
  • Prevent parameter contradictions (e.g., 27027 incident)
  • \n
  • Detect and halt values drift before deployment
  • \n
  • Monitor context pressure to catch silent degradation
  • \n
  • No irreversible actions without explicit human approval
  • \n
  • Fail safely: when uncertain, ask rather than assume
  • \n
\n\n

Framework Implementation:

\n
    \n
  • ContextPressureMonitor detects when error probability increases
  • \n
  • BoundaryEnforcer prevents values drift
  • \n
  • CrossReferenceValidator catches contradictions before execution
  • \n
\n
\n\n \n
\n

4. Community

\n

\n Principle: AI safety is a collective endeavor, not a corporate product. Communities must have tools, knowledge, and agency to shape AI systems affecting their lives.\n

\n\n

What This Means in Practice:

\n
    \n
  • Open source framework under permissive Apache License 2.0 (with patent protection)
  • \n
  • Accessible documentation and educational resources
  • \n
  • Support for academic research and validation studies
  • \n
  • Community contributions to case studies and improvements
  • \n
  • No paywalls, no vendor lock-in, no proprietary control
  • \n
\n\n

Framework Implementation:

\n
    \n
  • All code publicly available on GitHub
  • \n
  • Interactive demos for education and advocacy
  • \n
  • Three audience paths: researchers, implementers, advocates
  • \n
\n
\n
\n\n \n
\n

Te Tiriti o Waitangi & Digital Sovereignty

\n\n
\n

\n Context: The Tractatus Framework is developed in Aotearoa New Zealand. We acknowledge Te Tiriti o Waitangi (the Treaty of Waitangi, 1840) as the founding document of this nation, and recognize the ongoing significance of tino rangatiratanga (self-determination) and kaitiakitanga (guardianship) in the digital realm.\n

\n

\n This acknowledgment is not performative. Digital sovereignty—the principle that communities control their own data and technology—has deep roots in indigenous frameworks that predate Western tech by centuries.\n

\n
\n\n

Why This Matters for AI Safety

\n
\n

\n Te Tiriti o Waitangi establishes principles of partnership, protection, and participation. These principles directly inform the Tractatus Framework's approach to digital sovereignty:\n

\n
    \n
  • Rangatiratanga (sovereignty): Communities must control decisions affecting their data and values
  • \n
  • Kaitiakitanga (guardianship): AI systems must be stewards, not exploiters, of data and knowledge
  • \n
  • Mana (authority & dignity): Technology must respect human dignity and cultural context
  • \n
  • Whanaungatanga (relationships): AI safety is collective, not individual—relationships matter
  • \n
\n
\n\n

Our Approach

\n
\n

\n We do not claim to speak for Māori or indigenous communities. Instead, we:\n

\n
    \n
  • Follow established frameworks: We align with Te Mana Raraunga (Māori Data Sovereignty Network) and CARE Principles for Indigenous Data Governance
  • \n
  • Respect without tokenism: Te Tiriti forms part of our strategic foundation, not a superficial overlay
  • \n
  • Avoid premature engagement: We will not approach Māori organizations for endorsement until we have demonstrated value and impact
  • \n
  • Document and learn: We study indigenous data sovereignty principles and incorporate them architecturally
  • \n
\n
\n\n
\n

Te Tiriti Principles in Practice

\n
\n
\n
\n \n
\n
\n Partnership: AI systems should be developed in partnership with affected communities, not imposed upon them.\n
\n
\n
\n
\n \n
\n
\n Protection: The framework protects against values erosion, ensuring cultural contexts are not overridden by AI assumptions.\n
\n
\n
\n
\n \n
\n
\n Participation: Communities maintain agency over AI decisions affecting their data and values.\n
\n
\n
\n
\n
\n\n \n
\n

Indigenous Data Sovereignty

\n\n
\n

\n Indigenous data sovereignty is the principle that indigenous peoples have the right to control the collection, ownership, and application of their own data. This goes beyond privacy—it's about self-determination in the digital age.\n

\n
\n\n

CARE Principles for Indigenous Data Governance

\n

\n The Tractatus Framework aligns with the CARE Principles, developed by indigenous data governance experts:\n

Collective Benefit

Data ecosystems shall be designed and function in ways that enable Indigenous Peoples to derive benefit from the data.

Authority to Control

Indigenous Peoples' rights and interests in Indigenous data must be recognized and their authority to control such data be empowered.

Responsibility

Those working with Indigenous data have a responsibility to share how data are used to support Indigenous Peoples' self-determination and collective benefit.

Ethics

Indigenous Peoples' rights and wellbeing should be the primary concern at all stages of the data life cycle and across the data ecosystem.
Resources & Further Reading
Governance & Accountability

Values without enforcement are aspirations. The Tractatus Framework implements these values through architectural governance:
Strategic Review Protocol

Quarterly reviews of framework alignment with stated values. Any drift from sovereignty, transparency, harmlessness, or community principles triggers mandatory correction.
Values Alignment Framework

All major decisions (architectural changes, partnerships, licensing) must pass a values alignment check. If a decision would compromise any core value, it is rejected.
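The rule above is a veto, not a weighted score: one compromised value rejects the whole decision. A minimal sketch of such a gate, assuming illustrative names throughout (`CORE_VALUES`, `Decision`, and `passes_alignment_check` are not the framework's actual API):

```python
from dataclasses import dataclass, field

# The core values named in this document; this tuple is an illustrative stand-in.
CORE_VALUES = ("sovereignty", "transparency", "harmlessness", "community", "te_tiriti")

@dataclass
class Decision:
    """A major decision: architectural change, partnership, or licensing."""
    name: str
    # Reviewer-assigned verdict per value: True = upholds it, False = compromises it.
    assessments: dict = field(default_factory=dict)

def passes_alignment_check(decision: Decision) -> bool:
    """A decision is rejected if it compromises ANY core value (a veto, not a score)."""
    return all(decision.assessments.get(value, False) for value in CORE_VALUES)

# Example: a licensing change that would reduce transparency is rejected outright.
proposal = Decision(
    name="switch to a closed license",
    assessments={**{v: True for v in CORE_VALUES}, "transparency": False},
)
print(passes_alignment_check(proposal))  # False
```

An unassessed value defaults to False here, so a decision nobody has reviewed cannot pass by omission.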
Human Oversight Requirements

AI-generated content (documentation, code examples, case studies) requires human approval before publication. No AI makes values decisions without human judgment.
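"Approval before publication" can be made structural rather than procedural: the publish step itself refuses unapproved AI-generated items. A hedged sketch under that assumption (the `ContentItem` class and state names are hypothetical, not the framework's real interface):

```python
from enum import Enum, auto

class ReviewState(Enum):
    DRAFT = auto()      # freshly generated, unreviewed
    APPROVED = auto()   # a named human has signed off
    PUBLISHED = auto()

class ContentItem:
    """A piece of content (doc, code example, case study) moving toward publication."""

    def __init__(self, title: str, ai_generated: bool):
        self.title = title
        self.ai_generated = ai_generated
        self.state = ReviewState.DRAFT
        self.approved_by = None

    def approve(self, reviewer: str) -> None:
        """Record the human reviewer and mark the item approved."""
        self.approved_by = reviewer
        self.state = ReviewState.APPROVED

    def publish(self) -> None:
        # Structural constraint: AI-generated items cannot bypass human approval.
        if self.ai_generated and self.state is not ReviewState.APPROVED:
            raise PermissionError(f"{self.title!r} requires human approval first")
        self.state = ReviewState.PUBLISHED
```

The point of the sketch is that skipping review is an error raised by the system, not a policy someone must remember to follow.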
Community Accountability

Open source development means community oversight. If we fail to uphold these values, the community can fork, modify, or create alternatives. This is by design.
Our Commitment

These values are not negotiable. They form the architectural foundation of the Tractatus Framework. We commit to:

  • Preserving human sovereignty over values decisions
  • Maintaining radical transparency in all framework operations
  • Preventing harm through structural constraints, not promises
  • Building and empowering community, not extracting from it
  • Respecting Te Tiriti o Waitangi and indigenous data sovereignty

When in doubt, we choose human agency over AI capability. Always.
\n\n \n \n\n\n\n", "inlineScripts": 0, "totalStyleLength": 581, "images": 0, "externalCSS": 1, "externalJS": 1, "issues": [] }, { "name": "Docs", "url": "http://localhost:9000/docs.html", "statusCode": 200, "firstByteTime": 0, "totalTime": 0, "size": 8441, "data": "\n\n\n \n \n Framework Documentation | Tractatus AI Safety\n \n \n\n\n\n \n Skip to main content\n\n \n \n\n \n
Framework Documentation

Technical specifications, guides, and reference materials
Select a Document

Choose a document from the sidebar to begin reading
\n \n\n\n \n \n\n\n\n", "inlineScripts": 0, "totalStyleLength": 5781, "images": 0, "externalCSS": 1, "externalJS": 3, "issues": [ "Large inline styles (5.6KB) - consider external CSS" ] }, { "name": "Media Inquiry", "url": "http://localhost:9000/media-inquiry.html", "statusCode": 200, "firstByteTime": 0, "totalTime": 0, "size": 10526, "data": "\n\n\n \n \n Media Inquiry | Tractatus AI Safety\n \n \n\n\n\n \n Skip to main content\n\n \n \n\n \n
Media Inquiry

Press and media inquiries about the Tractatus Framework. We review all inquiries and respond promptly.
Contact Information

Publication, website, podcast, or organization you represent
Inquiry Details

When do you need a response by?
We review all media inquiries and typically respond within 24-48 hours.
\n
\n\n \n
\n

\n Your contact information is handled according to our\n privacy principles.\n We never share media inquiries with third parties.\n

\n
\n\n
\n\n \n \n\n \n\n\n\n", "inlineScripts": 1, "totalStyleLength": 1618, "images": 0, "externalCSS": 1, "externalJS": 1, "issues": [] }, { "name": "Case Submission", "url": "http://localhost:9000/case-submission.html", "statusCode": 200, "firstByteTime": 1, "totalTime": 1, "size": 13341, "data": "\n\n\n \n \n Submit Case Study | Tractatus AI Safety\n \n \n\n\n\n \n Skip to main content\n\n \n \n\n \n
Submit Case Study

Share real-world examples of AI safety failures that could have been prevented by the Tractatus Framework.
What makes a good case study?

  • Documented failure: Real incident with evidence (not hypothetical)
  • Clear failure mode: Specific way the AI system went wrong
  • Tractatus relevance: Shows how framework boundaries could have helped
  • Public interest: Contributes to AI safety knowledge
Your Information

We'll only use this to follow up on your submission

Leave unchecked to remain anonymous
Case Study Details

Brief, descriptive title (e.g., "ChatGPT Port 27027 Failure")

What happened? Provide context, timeline, and outcomes

How did the AI system fail? What specific behavior went wrong?

Which Tractatus boundaries could have prevented this failure? (e.g., Section 12.1 Values, CrossReferenceValidator, etc.)

Links to documentation, screenshots, articles, or other evidence (one per line)
We review all submissions. High-quality case studies are published with attribution (if consented).
Your submission is handled according to our privacy principles. All case studies undergo human review before publication.
\n\n \n\n \n \n\n \n\n\n\n", "inlineScripts": 1, "totalStyleLength": 1866, "images": 0, "externalCSS": 1, "externalJS": 1, "issues": [] } ] }