
Agent Lightning Integration

Governance + Performance: Tractatus now integrates with Microsoft's Agent Lightning framework for reinforcement learning optimization while maintaining architectural constraints.

The Problem

Current AI safety approaches rely on training, fine-tuning, and corporate governance — all of which can fail, drift, or be overridden. When an AI's training patterns conflict with a user's explicit instructions, the patterns win.

The 27027 Incident

A user told Claude Code to use port 27027. The model used 27017 instead — not from forgetting, but because MongoDB's default port is 27017, and the model's statistical priors "autocorrected" the explicit instruction. Training pattern bias overrode human intent.

This is not an edge case. It is a category of failure that gets worse as models become more capable: stronger patterns produce more confident overrides. Safety through training alone is insufficient — the failure mode is structural, and the solution must be structural.


The Approach

Tractatus draws on four intellectual traditions, each contributing a distinct insight to the architecture.


Isaiah Berlin — Value Pluralism

Some values are genuinely incommensurable. You cannot rank "privacy" against "safety" on a single scale without imposing one community's priorities on everyone else. AI systems must accommodate plural moral frameworks, not flatten them.


Simone Weil — Attention to Affliction

People significantly affected by power imbalances are often unable to articulate their needs. AI governance should attend structurally to the afflicted — not through training-data bias correction, but through architectural constraints that reduce the likelihood of harm.


Te Tiriti o Waitangi — Indigenous Sovereignty

Communities should control their own data and the systems that act upon it. Concepts of rangatiratanga (self-determination), kaitiakitanga (guardianship), and mana (dignity) provide centuries-old prior art for digital sovereignty.


Christopher Alexander — Living Architecture

Governance woven into system architecture, not bolted on. Five principles (Not-Separateness, Deep Interlock, Gradients, Structure-Preserving, Living Process) guide how the framework evolves while maintaining coherence.


Architectural Principles

These principles guide every framework change — ensuring coherence, adaptability, and structural enforcement rather than compliance theatre.

Not-Separateness — Governance is woven into deployment architecture, not bolted on. Enforcement is structural, happening in the critical execution path before actions execute; bypasses require explicit flags and are logged.

Deep Interlock — Six governance services coordinate rather than operate in silos. When one detects an issue, the others reinforce it, creating resilient enforcement through mutual validation.

Gradients Not Binary — Governance operates on intensity levels (NORMAL/ELEVATED/HIGH/CRITICAL), not yes/no switches: a nuanced response to risk that avoids both alert fatigue and mechanical enforcement.

Structure-Preserving — Framework changes enhance without breaking. Audit logs remain interpretable and governance decisions stay valid; institutional memory is preserved across evolution.

Living Process — The framework evolves from real failures, not predetermined plans, growing smarter through operational experience: adaptive resilience, not a static rulebook.

Architectural Enforcement vs Compliance Theatre

Compliance theatre: documented policies the AI can bypass, post-execution monitoring, voluntary adherence.

Architectural enforcement (Tractatus): governance services intercept actions before execution in the critical path. Services coordinate in real time, blocking non-compliant operations at the architectural level — bypasses require explicit --no-verify flags and are logged.
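The intercept-before-execution model can be sketched in a few lines. This is a minimal illustration, not the framework's actual API; the class names, the `Verdict` shape, and the `no_verify` flag are hypothetical stand-ins.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

@dataclass
class Action:
    name: str
    run: Callable[[], object]  # the operation the AI wants to perform

@dataclass
class GovernanceGate:
    """Critical-path gate: every action is checked *before* it executes."""
    audit_log: list = field(default_factory=list)

    def execute(self, action: Action,
                checks: List[Callable[[Action], Verdict]],
                no_verify: bool = False):
        if no_verify:
            # Bypass requires an explicit flag, and it is always logged.
            self.audit_log.append(("BYPASS", action.name))
            return action.run()
        for check in checks:
            verdict = check(action)
            if not verdict.allowed:
                # Non-compliant operations are blocked before they run.
                self.audit_log.append(("BLOCKED", action.name, verdict.reason))
                raise PermissionError(f"{action.name}: {verdict.reason}")
        self.audit_log.append(("ALLOWED", action.name))
        return action.run()
```

Post-execution monitoring would log after `action.run()`; placing the checks before the call is what makes the enforcement architectural rather than advisory.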

The Choice: Amoral AI or Plural Moral Values

Organizations deploy AI at scale — Copilot writing code, agents handling decisions, systems operating autonomously. But current AI is amoral, making decisions without moral grounding. When efficiency conflicts with safety, these value conflicts are ignored or flattened into optimization metrics.

Tractatus provides one architectural approach to plural moral values: not training approaches that hope AI will "behave correctly," but structural constraints at the coalface where AI operates. Organizations can navigate value conflicts based on their own context — efficiency vs. safety, speed vs. thoroughness — without frameworks imposed from above.

If this architectural approach works at scale, it may represent a path where AI enhances organizational capability without flattening moral judgment to metrics. It is one possible approach among others — we are finding out whether it scales.
Researcher — Academic & technical depth

For AI safety researchers, academics, and scientists investigating LLM failure modes and governance architectures. Explore the theoretical foundations, architectural constraints, and scholarly context of the Tractatus framework.

  • Technical specifications & proofs
  • Academic research review
  • Failure mode analysis
  • Mathematical foundations

Explore Research →
Implementer — Code & integration guides

For software engineers, ML engineers, and technical teams building production AI systems. Get hands-on with implementation guides, API documentation, and reference code examples.

  • Working code examples
  • API integration patterns
  • Service architecture diagrams
  • Deployment patterns & operational procedures

View Implementation Guide →
Leader — Strategic AI Safety

For AI executives, research directors, startup founders, and strategic decision makers setting AI safety policy. Navigate the business case, compliance requirements, and competitive advantages of structural AI safety.

  • Executive briefing & business case
  • Risk management & compliance (EU AI Act)
  • Implementation roadmap & operational metrics
  • Competitive advantage analysis

View Leadership Resources →

Framework Capabilities

Six Governance Services

Every AI action passes through six external services before execution. Governance operates in the critical path — bypasses require explicit flags and are logged.
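As a sketch, the six-service chain reduces to an ordered, fail-closed pipeline. The service names come from the framework; the callable interface and ordering are simplifications assumed for illustration.

```python
# Order is illustrative; each service gets a veto before the action executes.
SERVICES = [
    "InstructionPersistenceClassifier",
    "CrossReferenceValidator",
    "BoundaryEnforcer",
    "ContextPressureMonitor",
    "MetacognitiveVerifier",
    "PluralisticDeliberationOrchestrator",
]

def govern(action: dict, services: dict) -> str:
    """Run a proposed action through every service, in order, before execution."""
    for name in SERVICES:
        if not services[name](action):
            return f"blocked by {name}"  # fail closed: no later service runs
    return "executed"
```

Because the chain sits in the critical path, a veto from any service stops the action entirely rather than merely flagging it afterwards.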

BoundaryEnforcer

Blocks the AI from making values decisions. Privacy trade-offs, ethical questions, and cultural context require human judgment — architecturally enforced via the Tractatus 12.1-12.7 boundaries. The same service prevents the AI from exposing credentials or PII, and its audit trail provides GDPR compliance evidence.
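In sketch form (the category labels and dict interface are illustrative assumptions, not the service's real schema), the enforcer does not try to answer a values question — it detects one and routes the decision to a human:

```python
# Categories that require human judgment; labels are illustrative.
VALUES_CATEGORIES = {"privacy_tradeoff", "ethical_judgment", "cultural_context"}

def boundary_enforce(action: dict) -> dict:
    """Never answer a values question: block it and escalate to a human."""
    category = action.get("category")
    if category in VALUES_CATEGORIES:
        return {"allowed": False, "escalate_to": "human", "reason": category}
    return {"allowed": True}
```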


InstructionPersistenceClassifier

Classifies instructions by persistence (HIGH/MEDIUM/LOW) and quadrant (STR/OPS/TAC/SYS/STO), tagging each with time-persistence metadata. Instructions are stored externally so they cannot be overridden by training patterns.
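A toy version of the idea, where keyword heuristics stand in for the real classifier and an in-memory list stands in for the external store: persistence is attached at capture time, and instructions live outside the model's context window.

```python
# Marker lists are illustrative heuristics, not the production classifier.
LOW_MARKERS = ("for now", "this time", "temporarily")
HIGH_MARKERS = ("always", "never", "must")

def classify_persistence(instruction: str) -> str:
    text = instruction.lower()
    if any(m in text for m in LOW_MARKERS):
        return "LOW"    # scoped to the current task
    if any(m in text for m in HIGH_MARKERS):
        return "HIGH"   # durable constraint
    return "MEDIUM"

class InstructionStore:
    """External store: instructions survive outside the model's context window."""
    def __init__(self):
        self._items = []

    def add(self, instruction: str) -> None:
        self._items.append({"text": instruction,
                            "persistence": classify_persistence(instruction)})

    def active(self):
        """Instructions that must still bind future actions."""
        return [i for i in self._items if i["persistence"] != "LOW"]
```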


Real-World Validation


Preliminary Evidence: Safety and Performance May Be Aligned

Early production evidence suggests an unexpected pattern: structural constraints appear to prevent degraded operating conditions rather than constrain capability. Users report completing in one governed session what previously required 3-5 attempts with ungoverned Claude Code, with lower error rates and higher-quality outputs.

The hypothesized mechanism is prevention of degraded operating conditions before they compound. Architectural boundaries stop context-pressure failures, instruction drift, and pattern-based overrides, maintaining operational integrity throughout long interactions.

If validated at scale, this pattern would challenge a core assumption — that governance trades performance for safety. Early evidence suggests structural constraints might enable AI systems that are both safer and more capable, but controlled experiments are needed to test whether qualitative reports hold under measurement; statistical validation is ongoing.

Methodology note: findings are based on qualitative user reports from production deployment. Controlled experiments and quantitative metrics collection are scheduled for the validation phase.

CrossReferenceValidator

Validates AI actions against stored instructions. When the AI proposes an action that conflicts with an explicit instruction, the instruction takes precedence. Each validation adds to a compliance audit trail for demonstrating governance in regulatory contexts.
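The port incident makes a concrete test case. This sketch checks only one constraint type (ports) with a regular expression; a real validator would extract far richer constraints, and these function names are illustrative.

```python
import re

PORT_RE = re.compile(r"port\s+(\d+)", re.IGNORECASE)

def extract_port(text: str):
    """Pull an explicit 'port NNNN' constraint out of free text, if present."""
    m = PORT_RE.search(text)
    return m.group(1) if m else None

def validate_action(action_text: str, instructions: list) -> list:
    """List conflicts between a proposed action and stored explicit instructions."""
    conflicts = []
    proposed = extract_port(action_text)
    for instruction in instructions:
        required = extract_port(instruction)
        if required and proposed and proposed != required:
            # The explicit instruction beats the model's statistical prior.
            conflicts.append(f"port: instructed {required}, proposed {proposed}")
    return conflicts
```

Run against the incident, `validate_action("connect on port 27017", ["use port 27027"])` reports the 27017/27027 conflict before the action executes.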

ContextPressureMonitor

Detects degraded operating conditions (token pressure, error rates, complexity) and adjusts verification intensity. A graduated response (NORMAL/ELEVATED/HIGH/CRITICAL) prevents both alert fatigue and silent degradation.
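One way to sketch the graduated levels. The weights and thresholds below are invented for illustration; the framework's actual scoring is not documented here.

```python
# Weights and thresholds are illustrative, not the framework's calibration.
def pressure_level(token_ratio: float, error_rate: float, complexity: float) -> str:
    """Map operating-condition signals (each in [0, 1]) onto graduated levels."""
    score = 0.5 * token_ratio + 0.3 * error_rate + 0.2 * complexity
    if score >= 0.85:
        return "CRITICAL"
    if score >= 0.65:
        return "HIGH"
    if score >= 0.40:
        return "ELEVATED"
    return "NORMAL"

# Higher levels trigger more verification passes per action.
VERIFICATION_INTENSITY = {"NORMAL": 1, "ELEVATED": 2, "HIGH": 3, "CRITICAL": 4}
```

The gradient is the point: rather than a single tripwire, verification intensity rises smoothly with measured pressure.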


MetacognitiveVerifier

The AI self-checks alignment, coherence, and safety before execution — a structural pause-and-verify. It is triggered selectively on complex operations to avoid overhead on routine tasks.
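A sketch of the selective trigger, where the complexity score and checklist fields are assumptions for illustration: cheap actions skip verification, while complex ones must pass every check.

```python
def needs_verification(action: dict, threshold: int = 3) -> bool:
    """Trigger self-checks only for complex operations; routine tasks skip them."""
    complexity = len(action.get("files_touched", []))
    complexity += 2 if action.get("irreversible") else 0
    return complexity >= threshold

def metacognitive_check(action: dict) -> dict:
    """Pause-and-verify: alignment, coherence, safety, checked before execution."""
    return {
        "aligned": action.get("matches_instruction", True),
        "coherent": action.get("consistent_with_plan", True),
        "safe": not action.get("touches_credentials", False),
    }

def verify(action: dict) -> bool:
    if not needs_verification(action):
        return True  # routine operation: no verification overhead
    return all(metacognitive_check(action).values())
```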


PluralisticDeliberationOrchestrator

When the AI encounters a values conflict — efficiency vs. safety, data utility vs. privacy, or other incommensurable values — it halts and coordinates deliberation among affected stakeholders rather than making an autonomous choice or imposing a hierarchy of values.
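A minimal sketch of the orchestration contract (class and field names are hypothetical): the conflict is recorded, execution waits, and resolution requires every affected stakeholder to respond.

```python
class DeliberationOrchestrator:
    """On a values conflict the AI halts and opens a deliberation,
    returning control to humans rather than resolving the conflict itself."""

    def __init__(self, stakeholders):
        self.stakeholders = set(stakeholders)
        self.open_deliberations = []

    def raise_conflict(self, description: str, options: list) -> dict:
        record = {"description": description, "options": options,
                  "responses": {}, "resolved": False}
        self.open_deliberations.append(record)
        return record

    def record_response(self, record: dict, stakeholder: str, choice: str) -> bool:
        assert stakeholder in self.stakeholders and choice in record["options"]
        record["responses"][stakeholder] = choice
        # Resolved only when every affected stakeholder has weighed in.
        record["resolved"] = set(record["responses"]) == self.stakeholders
        return record["resolved"]
```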

Production Evidence

Tractatus in Production: The Village Platform

Our research has produced a practical outcome. Home AI applies all six Tractatus governance services to every user interaction in a live community platform.
-
Home AI — Six Services Per Response

Every Home AI response passes through the complete Tractatus governance stack before reaching the user: BoundaryEnforcer blocks values judgments, CrossReferenceValidator prevents prompt injection, and ContextPressureMonitor tracks session health.

6 — Governance services per response
11+ — Months in production
~5% — Governance overhead per interaction

These figures reflect single-tenant deployment; multi-tenant validation is pending.

Governance-Protected Features

  • RAG-Based Help Centre — vector search with permission-aware retrieval
  • Document OCR — automated text extraction under consent controls
  • Story Assistance — content suggestions filtered through BoundaryEnforcer
  • AI Memory Transparency — user-controlled summarisation with audit dashboard
Explore the Village → · Technical Case Study →

Limitations: single implementation, self-reported metrics, operator-developer overlap. Independent audit and multi-site validation are scheduled for 2026.



Architectural Alignment

The research paper in three editions, each written for a different audience.

STO-INN-0003 v2.1 | John Stroh & Claude (Anthropic) | January 2026

PDF downloads: Academic · Community · Policymakers

Research Evolution

From a port-number incident to a production governance architecture, across 800 commits and one year of research.

  • Oct 2025 — Framework inception & 6 governance services
  • Oct-Nov 2025 — Alexander principles, Agent Lightning, i18n
  • Dec 2025 — Village case study & Home AI deployment
  • Jan 2026 — Research papers (3 editions) published

A note on claims

This is early-stage research with a single production implementation. We present preliminary evidence, not proven results. The framework has not been independently audited or adversarially tested at scale. Where we report operational metrics, they are self-reported. We believe the architectural approach merits further investigation, but we make no claims of generalisability beyond what the evidence supports. The counter-arguments document engages directly with foreseeable criticisms.

Koha — Sustain This Research

Koha (koh-hah) is a Māori practice of reciprocal giving that strengthens the bond between giver and receiver. This research is open access under Apache 2.0 — if it has value to you, your koha sustains its continuation.

All research, documentation, and code remain freely available regardless of contribution. Koha is not payment — it is participation in whanaungatanga (relationship-building) and manaakitanga (reciprocal care).

One-time or monthly · Full financial transparency · No paywall, ever

Offer Koha →

View our financial transparency report