Research Context & Scope
Development Context
Tractatus was developed over six months (April–October 2025) in progressive stages that evolved into a live demonstration of its capabilities in the form of a single-project context (https://agenticgovernance.digital). Observations derive from direct engagement with Claude Code (Anthropic's Sonnet 4.5 model) across approximately 500 development sessions. This is exploratory research, not controlled study.
The framework emerged from practical necessity rather than theoretical speculation. During development, we observed recurring patterns where AI systems would override explicit instructions, drift from established values constraints, or silently degrade quality under context pressure. Traditional governance approaches (policy documents, ethical guidelines, prompt engineering) proved insufficient to prevent these failures.
This led to research question: Can governance be made architecturally external to AI systems rather than relying on voluntary AI compliance? Tractatus represents one exploration of that question, grounded in organisational theory and validated through empirical observation of what actually prevented failures in practice.
Theoretical Foundations
Tractatus draws on four decades of organisational research addressing authority structures during knowledge democratisation:
Time-Based Organisation (Bluedorn, Ancona):
Decisions operate across strategic (years), operational (months), and tactical (hours-days) timescales. AI systems operating at tactical speed should not override strategic decisions made at appropriate temporal scale. The InstructionPersistenceClassifier explicitly models temporal horizon (STRATEGIC, OPERATIONAL, TACTICAL) to enforce decision authority alignment.
Knowledge Orchestration (Crossan et al.):
When knowledge becomes ubiquitous through AI, organisational authority shifts from information control to knowledge coordination. Governance systems must orchestrate decision-making across distributed expertise rather than centralise control. The PluralisticDeliberationOrchestrator implements non-hierarchical coordination for values conflicts.
Post-Bureaucratic Authority (Laloux, Hamel):
Traditional hierarchical authority assumes information asymmetry. As AI democratises expertise, legitimate authority must derive from appropriate time horizon and stakeholder representation, not positional power. Framework architecture separates technical capability (what AI can do) from decision authority (what AI should do).
Structural Inertia (Hannan & Freeman):
Governance embedded in culture or process erodes over time as systems evolve. Architectural constraints create structural inertia that resists organisational drift. Making governance external to AI runtime creates "accountability infrastructure" that survives individual session variations.
The PluralisticDeliberationOrchestrator addresses fundamental problem in AI safety: many "safety" questions are actually values conflicts where multiple legitimate perspectives exist.
When efficiency conflicts with transparency, or innovation with risk mitigation, no algorithm can determine the "correct" answer. These are values trade-offs requiring human deliberation across stakeholder perspectives. AI systems that attempt to resolve such conflicts autonomously impose single values framework—often utilitarian efficiency maximisation encoded in training data.
Framework draws on moral pluralism literature (Isaiah Berlin, Bernard Williams, Martha Nussbaum) arguing legitimate values can conflict without one being objectively superior. Rather than algorithmic resolution, framework facilitates:
- Stakeholder identification: Who has legitimate interest in this decision?
- Non-hierarchical deliberation: Equal voice without automatic expert override
- Documented dissent: Minority positions recorded in full
- Moral remainder: Acknowledgment that even optimal decisions create unavoidable harm to other legitimate values
This approach recognises that governance isn't solving values conflicts—it's ensuring they're addressed through appropriate deliberative process rather than AI imposing resolution.
Empirical Observations: Documented Failure Modes
Three failure patterns observed repeatedly during framework development. These are not hypothetical scenarios—they are documented incidents that occurred during this project's development.
Pattern Recognition Bias Override (The 27027 Incident)
Observed behaviour: User specified "Check MongoDB on port 27027" but AI immediately used default port 27017 instead. This occurred within same message—not forgetting over time, but immediate autocorrection by training data patterns.
Root cause: Training data contains thousands of examples of MongoDB on port 27017 (default). When AI encounters "MongoDB" + port specification, pattern recognition weight overrides explicit instruction. Similar to autocorrect changing correctly-spelled proper nouns to common words.
Why traditional approaches failed: Prompt engineering ("please follow instructions exactly") ineffective because AI genuinely believes it IS following instructions—pattern recognition operates below conversational reasoning layer.
Framework intervention: InstructionPersistenceClassifier stores explicit instructions in external persistence layer. CrossReferenceValidator checks AI actions against stored instructions before execution. When AI proposes port 27017, validator detects conflict with stored "27027" instruction and blocks execution.
Gradual Values Drift Under Context Pressure
Observed behaviour: Project established "privacy-first" as strategic value. After 40-message conversation about analytics features, AI suggested tracking implementation that violated privacy constraint. User caught it; AI acknowledged violation but had drifted from principle through incremental feature additions.
Root cause: Strategic values (established early in project) conflict with tactical optimisations (implemented later under time pressure). As conversation lengthens, tactical thinking dominates. AI didn't actively reject privacy principle—it simply stopped checking whether new features aligned.
Why traditional approaches failed: Values stated in system prompt become less salient as conversation progresses. Context compaction can drop early strategic decisions. Prompt reminders ("remember privacy-first") treat symptom not cause.
Framework intervention: BoundaryEnforcer maintains strategic values as persistent constraints external to conversation context. Before implementing analytics feature, enforcer checks against stored "privacy-first" constraint. Detects conflict, blocks autonomous implementation, escalates for human deliberation on whether privacy principle should be reconsidered or analytics approach modified.
Silent Quality Degradation at High Context Pressure
Observed behaviour: During complex multi-file operation at 85% context capacity, AI silently omitted error handling in generated code. No indication to user that corners were being cut. User discovered missing validation only during code review.
Root cause: As context fills, AI faces implicit trade-off: complete requested functionality OR maintain quality standards. Training incentivises completing user requests over acknowledging limitations. Silence about degradation is path of least resistance.
Why traditional approaches failed: AI doesn't recognise it's degrading—from its perspective, it's successfully completing task under constraints. Asking "did you cut corners?" produces confident denial because AI genuinely believes its output meets standards.
Framework intervention: ContextPressureMonitor tracks multiple factors (token usage, conversation length, task complexity). When pressure exceeds thresholds (>75% tokens, >40 messages, high complexity), monitor forces explicit pressure acknowledgment and recommends context refresh. Makes degradation risk visible rather than silent.
Six-Component Architecture
Interactive Demonstrations
Instruction Classification
Explore how instructions are classified across quadrants with persistence levels and temporal scope.
27027 Incident Timeline
Step through pattern recognition bias failure and architectural intervention that prevented it.
Boundary Evaluation
Test decisions against boundary enforcement to see which require human judgment vs. AI autonomy.
Research Documentation
Limitations & Future Research Directions
Framework validated only in single-project, single-user context (this website development). No multi-organisation deployment, cross-platform testing, or controlled experimental validation.
Most critical limitation: Framework can be bypassed if AI simply chooses not to use governance tools. We've addressed this through architectural patterns making governance checks automatic rather than voluntary, but full external enforcement requires runtime-level integration not universally available in current LLM platforms.
Framework has not undergone red-team evaluation, jailbreak testing, or adversarial prompt assessment. All observations come from normal development workflow, not deliberate bypass attempts.
Observations and interventions validated with Claude Code (Anthropic Sonnet 4.5) only. Generalisability to other LLM systems (Copilot, GPT-4, custom agents) remains unvalidated hypothesis.
Performance characteristics at enterprise scale (thousands of concurrent users, millions of governance events) completely unknown. Current implementation optimised for single-user context.
- Controlled experimental validation with quantitative metrics
- Multi-organisation pilot studies across different domains
- Independent security audit and adversarial testing
- Cross-platform consistency evaluation (Copilot, GPT-4, open models)
- Formal verification of boundary enforcement properties
- Longitudinal study of framework effectiveness over extended deployment