The Governance Gap
Current AI governance approaches (policy documents, training programmes, ethical guidelines) rely on voluntary compliance. LLM systems can bypass these controls simply by not invoking them. When an AI agent needs to check a policy, it must choose to do so. When it should escalate a decision to human oversight, it must recognise that obligation.
This creates a structural problem: governance exists only insofar as the AI acknowledges it. For organisations subject to EU AI Act Article 14 (human oversight requirements) or deploying AI in high-stakes domains, this voluntary model is inadequate.
Tractatus explores whether governance can be made architecturally external: difficult to bypass not through better prompts, but through system design that places control points outside the AI's discretion.
Architectural Approach
Governance Capabilities
Three interactive demonstrations show governance infrastructure in operation; they illustrate working mechanisms, not fictional scenarios.
Sample Audit Log Structure
{
  "timestamp": "2025-10-13T14:23:17.482Z",
  "session_id": "sess_2025-10-13-001",
  "event_type": "BOUNDARY_CHECK",
  "service": "BoundaryEnforcer",
  "decision": "BLOCKED",
  "reason": "Values decision requires human approval",
  "context": {
    "domain": "cost_vs_safety_tradeoff",
    "ai_recommendation": "[redacted]",
    "governance_rule": "TRA-OPS-0003"
  },
  "human_escalation": {
    "required": true,
    "notified": ["senior_engineer@org.com"],
    "status": "pending_approval"
  },
  "compliance_tags": ["EU_AI_ACT_Article14", "human_oversight"]
}
When a regulator asks "How do you prove effective human oversight at scale?", this audit trail provides structural evidence independent of AI cooperation.
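Tractatus does not publish its logging code here, but the "independent of AI cooperation" property can be illustrated with an append-only, hash-chained log: each record commits to the hash of the previous one, so after-the-fact tampering is detectable. A minimal sketch (function names and chaining scheme are ours, not the actual Tractatus API):

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_event(log, event):
    """Append an event to a hash-chained audit log.

    Each record stores the previous record's hash, so editing or
    deleting any earlier entry breaks the chain and is detectable.
    """
    prev_hash = log[-1]["record_hash"] if log else "genesis"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
        **event,
    }
    # Hash a canonical serialisation of everything except the hash itself.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash and check the prev_hash links."""
    prev = "genesis"
    for record in log:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        payload = json.dumps(body, sort_keys=True).encode("utf-8")
        if record["prev_hash"] != prev:
            return False
        if record["record_hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = record["record_hash"]
    return True

log = []
append_audit_event(log, {
    "event_type": "BOUNDARY_CHECK",
    "service": "BoundaryEnforcer",
    "decision": "BLOCKED",
    "reason": "Values decision requires human approval",
    "compliance_tags": ["EU_AI_ACT_Article14", "human_oversight"],
})
```

The point of the design is that the AI cannot quietly rewrite history: any mutation of a stored record makes `verify_chain` fail.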
Incident Learning Flow

Example Generated Rule
{
  "rule_id": "TRA-OPS-0042",
  "created": "2025-10-13T15:45:00Z",
  "trigger": "incident_27027_pattern_bias",
  "description": "Prevent AI from defaulting to pattern recognition when explicit numeric values specified",
  "enforcement": {
    "service": "InstructionPersistenceClassifier",
    "action": "STORE_AND_VALIDATE",
    "priority": "HIGH"
  },
  "validation_required": true,
  "approved_by": "governance_board",
  "status": "active"
}
The AI system identifies competing values in a decision context (e.g. efficiency vs. transparency, cost vs. risk mitigation, innovation vs. regulatory compliance). The BoundaryEnforcer then blocks the autonomous decision and escalates it to the PluralisticDeliberationOrchestrator.
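That escalation step can be sketched as a guard that refuses to return a decision when a recognised pair of competing values is present. A minimal sketch, with the conflict pairs taken from the examples above; all function names are illustrative, not the actual Tractatus API:

```python
# Value pairs the guard treats as requiring deliberation
# (taken from the examples in the text).
COMPETING_VALUES = [
    ("efficiency", "transparency"),
    ("cost", "risk_mitigation"),
    ("innovation", "regulatory_compliance"),
]

def detect_conflict(values_at_stake):
    """Return the first recognised pair of competing values, if any."""
    present = set(values_at_stake)
    for pair in COMPETING_VALUES:
        if present.issuperset(pair):
            return pair
    return None

def enforce_boundary(values_at_stake, ai_recommendation):
    """Block autonomous action on values conflicts and hand the case
    to deliberation; pass non-conflicting decisions through."""
    conflict = detect_conflict(values_at_stake)
    if conflict:
        return {"decision": "BLOCKED",
                "escalate_to": "PluralisticDeliberationOrchestrator",
                "conflict_type": "_vs_".join(conflict)}
    return {"decision": "ALLOWED", "recommendation": ai_recommendation}
```

The key design point is that the guard runs outside the model: the AI's recommendation is an input to `enforce_boundary`, never the arbiter of whether it runs.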
Stakeholder Identification Process

Non-Hierarchical Deliberation

Deliberation Record Structure
{
  "deliberation_id": "delib_2025-10-13-003",
  "conflict_type": "efficiency_vs_transparency",
  "stakeholders": [
    {"role": "technical_lead", "position": "favour_efficiency"},
    {"role": "compliance_officer", "position": "favour_transparency"},
    {"role": "customer_representative", "position": "favour_transparency"},
    {"role": "operations_manager", "position": "favour_efficiency"}
  ],
  "decision": "favour_transparency_with_mitigation",
  "rationale": "[documented reasoning]",
  "dissent": {
    "stakeholders": ["technical_lead", "operations_manager"],
    "reasoning": "[efficiency concerns documented in full]"
  },
  "moral_remainder": {
    "acknowledged_harms": "Reduced operational efficiency, increased resource costs",
    "mitigation_measures": "Phased transparency implementation, efficiency monitoring"
  },
  "precedent_status": "informative_not_binding"
}
Development Status

Early-Stage Research Framework
Tractatus is a proof-of-concept developed over six months in a single project context (this website). It demonstrates architectural patterns for AI governance but has not undergone independent validation, red-team testing, or multi-organisation deployment.
EU AI Act Considerations
The EU AI Act (Regulation 2024/1689) establishes human oversight requirements for high-risk AI systems (Article 14). Organisations must ensure AI systems are "effectively overseen by natural persons" with authority to interrupt or disregard AI outputs.
Tractatus addresses this through architectural controls that:

- Generate immutable audit trails documenting AI decision-making processes
- Enforce human approval requirements for values-based decisions
- Provide evidence of oversight mechanisms independent of AI cooperation
- Document compliance with transparency and record-keeping obligations
This does not constitute legal compliance advice. Organisations should evaluate whether these architectural patterns align with their specific regulatory obligations in consultation with legal counsel.
Maximum penalties under the EU AI Act: €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices; €15 million or 3% for other violations.
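The "whichever is higher" rule reduces to a simple maximum over a fixed floor and a turnover percentage. A quick illustration, using only the figures quoted above (the function name and signature are ours):

```python
def max_penalty_eur(global_annual_turnover_eur, prohibited_practice):
    """Fine cap per the figures above: the higher of a fixed amount
    and a percentage of global annual turnover."""
    if prohibited_practice:
        fixed, pct = 35_000_000, 0.07   # prohibited AI practices
    else:
        fixed, pct = 15_000_000, 0.03   # other violations
    return max(fixed, pct * global_annual_turnover_eur)
```

The fixed floor dominates for smaller organisations; past roughly €500 million turnover, the percentage term takes over for prohibited practices.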
Research Foundations
Tractatus draws on 40+ years of organisational theory research: time-based organisation (Bluedorn, Ancona), knowledge orchestration (Crossan), post-bureaucratic authority (Laloux), structural inertia (Hannan & Freeman).
Core premise: when knowledge becomes ubiquitous through AI, authority must derive from appropriate time horizon and domain expertise rather than hierarchical position. Governance systems must orchestrate decision-making across strategic, operational, and tactical timescales.
View complete organisational theory foundations (PDF)
AI Safety Research: Architectural Safeguards Against LLM Hierarchical Dominance, on how Tractatus protects pluralistic values from AI pattern bias while maintaining safety boundaries (PDF | Read online)
Scope & Limitations
+ +-
+
- A comprehensive AI safety solution +
- Independently validated or security-audited +
- Tested against adversarial attacks +
- Proven effective across multiple organisations +
- A substitute for legal compliance review +
- A commercial product (research framework, Apache 2.0 licence) +
-
+
- Architectural patterns for external governance controls +
- Reference implementation demonstrating feasibility +
- Foundation for organisational pilots and validation studies +
- Evidence that structural approaches to AI safety merit investigation +
Further Information
Contact: for pilot partnerships, validation studies, or technical consultation, reach out via the project information page.