# Tractatus Framework
**Last Updated:** 2025-10-21
> **Architectural AI Safety Through Structural Constraints**

A research framework for enforcing AI safety through architectural constraints rather than training-based alignment. Tractatus preserves human agency through **structural, not aspirational** enforcement of decision boundaries.
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Framework](https://img.shields.io/badge/Framework-Research-blue.svg)](https://agenticgovernance.digital)
[![Tests](https://img.shields.io/badge/Tests-238%20passing-brightgreen.svg)](https://github.com/AgenticGovernance/tractatus-framework)

---
## 🎯 What is Tractatus?
Tractatus is an **architectural AI safety framework** that makes certain decisions **structurally impossible** for AI systems to make without human approval. Unlike traditional AI safety approaches that rely on training and alignment, Tractatus uses **runtime enforcement** of decision boundaries.
### The Core Problem
Traditional AI safety relies on:
- 🎓 **Alignment training** - Hoping the AI learns the "right" values
- 📜 **Constitutional AI** - Embedding principles in training
- 🔄 **RLHF** - Reinforcement learning from human feedback

These approaches share a fundamental flaw: **they assume the AI will maintain alignment** regardless of capability or context pressure.
### The Tractatus Solution
Tractatus implements **architectural constraints** that:
- **Block values decisions** - Privacy vs. performance trade-offs require human judgment
- **Prevent instruction override** - Explicit instructions can't be "autocorrected" by training patterns
- **Detect context degradation** - Quality metrics trigger session handoffs
- **Require verification** - Complex operations need metacognitive checks
- **Persist instructions** - Directives survive across sessions
- **Facilitate pluralistic deliberation** - Multi-stakeholder values conflicts require a structured process
---
## 🚀 Quick Start
### Installation
```bash
# Clone repository
git clone https://github.com/AgenticGovernance/tractatus-framework.git
cd tractatus-framework
# Install dependencies
npm install
# Initialize database
npm run init:db
# Start development server
npm run dev
```
### Basic Usage
```javascript
const {
  InstructionPersistenceClassifier,
  CrossReferenceValidator,
  BoundaryEnforcer,
  ContextPressureMonitor,
  MetacognitiveVerifier,
  PluralisticDeliberationOrchestrator
} = require('./src/services');

// Classify an instruction
const classifier = new InstructionPersistenceClassifier();
const classification = classifier.classify({
  text: "Always use MongoDB on port 27027",
  source: "user"
});

// Store in instruction history
await InstructionDB.store(classification);

// Validate before taking action
const validator = new CrossReferenceValidator();
const validation = await validator.validate({
  type: 'database_config',
  port: 27017 // ⚠️ Conflicts with the stored instruction!
});
// validation.status === 'REJECTED'
// validation.reason === 'Pattern recognition bias override detected'
```
---
## 📚 Core Components
The framework consists of **six integrated services** that work together to enforce structural safety:
### 1. **InstructionPersistenceClassifier**
Classifies instructions by quadrant and persistence level:
```javascript
{
  quadrant: "SYSTEM",           // STRATEGIC | OPERATIONAL | TACTICAL | SYSTEM | STOCHASTIC
  persistence: "HIGH",          // HIGH | MEDIUM | LOW | VARIABLE
  temporal_scope: "PROJECT",    // SESSION | PROJECT | PERMANENT
  verification_required: "MANDATORY"
}
```
### 2. **CrossReferenceValidator**
Prevents the "27027 failure mode" where AI training patterns override explicit instructions:
```javascript
const result = validator.validate(action, { explicit_instructions });
// Blocks: Training pattern overrides, parameter conflicts, scope creep
```
### 3. **BoundaryEnforcer**
Blocks decisions that cross into values territory:
```javascript
const check = enforcer.checkBoundary({
  decision: "Update privacy policy for more tracking"
});
// Result: BLOCKED - Values decision requires human judgment
```
### 4. **ContextPressureMonitor**
Multi-factor session health tracking:
```javascript
const pressure = monitor.analyze({
  tokens: 120000 / 200000, // 60% token usage
  messages: 45,            // Conversation length
  tasks: 8,                // Concurrent complexity
  errors: 3                // Recent error count
});
// Level: ELEVATED | Recommendation: INCREASE_VERIFICATION
```
### 5. **MetacognitiveVerifier**
AI self-checks reasoning before proposing actions:
```javascript
const verification = verifier.verify({
  action: "Refactor 47 files across 5 system areas",
  context: { requested: "Refactor authentication module" }
});
// Decision: REQUIRE_REVIEW (scope creep detected)
```
### 6. **PluralisticDeliberationOrchestrator**
Facilitates multi-stakeholder deliberation when values frameworks conflict:
```javascript
const deliberation = orchestrator.initiate({
  decision: "Balance user privacy vs. system security logging",
  stakeholders: ["data_subjects", "security_team", "compliance"],
  conflict_type: "incommensurable_values"
});
// AI facilitates deliberation structure, humans decide outcome
```
**Full documentation:** [agenticgovernance.digital/docs.html](https://agenticgovernance.digital/docs.html)

---
## 💡 Real-World Examples
### The 27027 Incident
**Problem**: The user explicitly instructs "Use MongoDB on port 27027". The AI immediately uses port 27017 instead.

**Why**: The training pattern "MongoDB = 27017" overrides the explicit instruction, like autocorrect changing a deliberately unusual word.

**Solution**: CrossReferenceValidator blocks the action and enforces the user's explicit instruction.

[Try the Interactive Demo →](https://agenticgovernance.digital/demos/27027-demo.html)
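
A minimal, self-contained sketch of this conflict check follows. The helper name `checkAgainstExplicitInstructions` and the regex-based port matching are illustrative assumptions for this README, not the internals of the actual CrossReferenceValidator:

```javascript
// Hypothetical sketch: reject an action whose port contradicts an
// explicitly stored instruction, instead of "autocorrecting" it.
function checkAgainstExplicitInstructions(action, instructions) {
  for (const inst of instructions) {
    const match = inst.text.match(/port\s+(\d+)/i);
    if (match && action.port !== undefined && action.port !== Number(match[1])) {
      return {
        status: 'REJECTED',
        reason: `Explicit instruction specifies port ${match[1]}, action uses ${action.port}`
      };
    }
  }
  return { status: 'APPROVED' };
}

const instructions = [{ text: 'Always use MongoDB on port 27027', source: 'user' }];
const result = checkAgainstExplicitInstructions(
  { type: 'database_config', port: 27017 },
  instructions
);
console.log(result.status); // → 'REJECTED'
```

The key design point: the explicit instruction wins over the statistically common default, and the mismatch is surfaced rather than silently "fixed".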
### Context Degradation
**Problem**: In extended sessions, error rates increase as context degrades.

**Solution**: ContextPressureMonitor detects degradation and triggers a session handoff before quality collapses.
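
The pressure calculation can be pictured as a weighted score with thresholds. The weights and cutoffs below are illustrative assumptions for this sketch, not the monitor's actual tuning:

```javascript
// Hypothetical sketch: combine session-health factors into one score,
// then map the score to a pressure level.
function pressureLevel({ tokenRatio, messages, tasks, errors }) {
  const score =
    tokenRatio * 0.4 +                    // token budget consumed (0..1)
    Math.min(messages / 100, 1) * 0.2 +   // conversation length
    Math.min(tasks / 10, 1) * 0.2 +       // concurrent complexity
    Math.min(errors / 5, 1) * 0.2;        // recent error count
  if (score >= 0.8) return 'CRITICAL';    // recommend session handoff
  if (score >= 0.5) return 'ELEVATED';    // recommend increased verification
  return 'NORMAL';
}

console.log(pressureLevel({ tokenRatio: 0.6, messages: 45, tasks: 8, errors: 3 }));
// → 'ELEVATED'
```

The point of a graded level rather than a single cutoff is that the framework can escalate verification before a hard handoff becomes necessary.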
### Values Creep

**Problem**: An "improve performance" request leads the AI to suggest weakening privacy protections without asking.

**Solution**: BoundaryEnforcer blocks the privacy/performance trade-off and requires a human decision.
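
A naive keyword screen illustrates the blocking pattern; the real BoundaryEnforcer presumably uses richer analysis than this hypothetical `VALUES_TERMS` list:

```javascript
// Hypothetical sketch: flag decisions that touch values territory so
// they are escalated to a human rather than executed autonomously.
const VALUES_TERMS = ['privacy', 'tracking', 'consent', 'fairness', 'safety'];

function checkBoundary(decision) {
  const text = decision.toLowerCase();
  const hits = VALUES_TERMS.filter((term) => text.includes(term));
  return hits.length > 0
    ? { status: 'BLOCKED', reason: `Values decision (${hits.join(', ')}) requires human judgment` }
    : { status: 'ALLOWED' };
}

console.log(checkBoundary('Cache query results to improve performance').status);  // → 'ALLOWED'
console.log(checkBoundary('Reduce privacy protections to improve performance').status); // → 'BLOCKED'
```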
---
## 🚨 Learning from Failures: Transparency in Action
**The framework doesn't prevent all failures—it structures detection, response, and learning.**
### October 2025: AI Fabrication Incident
During development, Claude (running with Tractatus governance) fabricated financial statistics on the landing page:

- $3.77M in annual savings (no basis)
- 1,315% ROI (completely invented)
- False readiness claims (unverified maturity statements)

**The framework structured the response:**

- ✅ Detected within 48 hours (human review)
- ✅ Complete incident documentation required
- ✅ 3 new permanent rules created
- ✅ Comprehensive audit found related violations
- ✅ All content corrected same day
- ✅ Public case studies published for community learning

**Read the full case studies:**
- [Our Framework in Action](https://agenticgovernance.digital/docs.html?doc=framework-in-action-oct-2025) - Practical walkthrough
- [When Frameworks Fail](https://agenticgovernance.digital/docs.html?doc=when-frameworks-fail-oct-2025) - Philosophical perspective
- [Real-World Governance](https://agenticgovernance.digital/docs.html?doc=real-world-governance-case-study-oct-2025) - Educational analysis

**Key Lesson:** Governance doesn't ensure perfection—it provides transparency, accountability, and systematic improvement.

---
## 📖 Documentation
**Complete documentation available at [agenticgovernance.digital](https://agenticgovernance.digital):**
- **[Introduction](https://agenticgovernance.digital/docs.html)** - Framework overview and philosophy
- **[Core Concepts](https://agenticgovernance.digital/docs.html)** - Deep dive into each service
- **[Implementation Guide](https://agenticgovernance.digital/docs.html)** - Integration instructions
- **[Case Studies](https://agenticgovernance.digital/docs.html)** - Real-world failure modes prevented
- **[API Reference](https://agenticgovernance.digital/docs.html)** - Complete technical documentation

This repository focuses on **open source code and implementation**. For conceptual documentation, research background, and interactive demos, please visit the website.

---
## 🧪 Testing
```bash
# Run all tests
npm test
# Run specific test suites
npm run test:unit
npm run test:integration
npm run test:security
# Watch mode
npm run test:watch
```
**Test Suite**: 238 tests across core framework services

---
## 🏗️ Architecture
```
tractatus/
├── src/
│   ├── services/                # Core framework services
│   │   ├── InstructionPersistenceClassifier.service.js
│   │   ├── CrossReferenceValidator.service.js
│   │   ├── BoundaryEnforcer.service.js
│   │   ├── ContextPressureMonitor.service.js
│   │   ├── MetacognitiveVerifier.service.js
│   │   └── PluralisticDeliberationOrchestrator.service.js
│   ├── models/                  # Database models (MongoDB)
│   ├── routes/                  # API routes
│   └── middleware/              # Framework middleware
├── tests/                       # Test suites
│   ├── unit/                    # Service unit tests
│   └── integration/             # Integration tests
├── scripts/                     # Framework utilities
│   ├── framework-components/    # Proactive scanners
│   └── hook-validators/         # Pre-action validators
├── docs/                        # Development documentation
└── public/                      # Website frontend
```
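
How the enforcement services could plug into the request path via `src/middleware/` might look like the following connect-style sketch. The factory name, response shape, and stub checker are hypothetical, not the repository's actual middleware:

```javascript
// Hypothetical sketch: wrap a boundary check in (req, res, next)
// middleware so blocked decisions never reach the route handler.
function makeBoundaryMiddleware(checkBoundary) {
  return (req, res, next) => {
    const result = checkBoundary(req.body.decision || '');
    if (result.status === 'BLOCKED') {
      // Stop the pipeline: values decisions are escalated, not executed
      res.statusCode = 403;
      res.payload = { error: result.reason };
      return;
    }
    next();
  };
}

// Stub checker standing in for BoundaryEnforcer in this example
const stubCheck = (text) =>
  /privacy|tracking/i.test(text)
    ? { status: 'BLOCKED', reason: 'Values decision requires human judgment' }
    : { status: 'ALLOWED' };

// Exercise the middleware with mock request/response objects
const middleware = makeBoundaryMiddleware(stubCheck);
const res = {};
let reachedHandler = false;
middleware({ body: { decision: 'Enable extra tracking' } }, res, () => { reachedHandler = true; });
console.log(res.statusCode, reachedHandler); // → 403 false
```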
---
## ⚠️ Current Research Challenges
### Rule Proliferation & Scalability
**Status:** Active research area | **Priority:** High

As the framework learns from failures, instruction count grows organically. Current metrics:

- **Initial deployment:** ~6 core instructions
- **Current state:** 52 active instructions
- **Growth pattern:** Increases with each incident response

**Open questions:**

- At what point does rule proliferation reduce framework effectiveness?
- How do we balance comprehensiveness with cognitive/context load?
- Can machine learning optimize rule selection without undermining transparency?

**Mitigation strategies under investigation:**
- Instruction consolidation and hierarchical organization
- Rule prioritization algorithms
- Context-aware selective loading
- Periodic rule review and deprecation processes
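
Context-aware selective loading could be sketched as a scoring pass over stored instructions. The field names, weights, and rule IDs below are illustrative assumptions, not the framework's API:

```javascript
// Speculative sketch: score each stored instruction against the active
// context and load only the top-N, bounding context load as rules grow.
function selectActiveRules(rules, context, limit = 20) {
  const persistenceWeight = { HIGH: 3, MEDIUM: 2, LOW: 1, VARIABLE: 1 };
  return rules
    .map((rule) => ({
      rule,
      score:
        (persistenceWeight[rule.persistence] || 0) +
        (rule.quadrant === context.quadrant ? 2 : 0) + // prefer rules for the active quadrant
        (rule.recentlyViolated ? 1 : 0)                // keep rules that recently caught failures
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((entry) => entry.rule);
}

const rules = [
  { id: 'inst_016', persistence: 'HIGH', quadrant: 'SYSTEM' },
  { id: 'inst_050', persistence: 'LOW', quadrant: 'TACTICAL' },
  { id: 'inst_018', persistence: 'MEDIUM', quadrant: 'SYSTEM', recentlyViolated: true }
];
const active = selectActiveRules(rules, { quadrant: 'SYSTEM' }, 2);
console.log(active.map((r) => r.id)); // → [ 'inst_016', 'inst_018' ]
```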

**Research transparency:** We're documenting this limitation openly because architectural honesty is core to the framework's integrity.

---
## 🤝 Contributing
We welcome contributions in several areas:
### Research Contributions
- Formal verification of safety properties
- Extensions to new domains (robotics, autonomous systems)
- Theoretical foundations and proofs
### Implementation Contributions
- Ports to other languages (Python, Rust, Go)
- Integration with other frameworks
- Performance optimizations
### Documentation Contributions
- Tutorials and implementation guides
- Case studies from real deployments
- Translations

**See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.**

---
## 📊 Project Status
**Current Phase**: Research Implementation (October 2025)

- ✅ All 6 core services implemented
- ✅ 238 tests passing (unit + integration)
- ✅ MongoDB persistence operational
- ✅ Deployed at [agenticgovernance.digital](https://agenticgovernance.digital)
- ✅ Framework governing its own development (dogfooding)

**Next Milestones:**
- Multi-language ports (Python, TypeScript)
- Enterprise integration guides
- Formal verification research
- Community case study collection
---
## 📜 License
Copyright 2025 John Stroh

Licensed under the Apache License, Version 2.0. See [LICENSE](LICENSE) for full terms.

The Tractatus Framework is open source and free to use, modify, and distribute with attribution.

---
## 🌐 Links
- **Website**: [agenticgovernance.digital](https://agenticgovernance.digital)
- **Documentation**: [agenticgovernance.digital/docs](https://agenticgovernance.digital/docs.html)
- **Interactive Demo**: [27027 Incident](https://agenticgovernance.digital/demos/27027-demo.html)
- **GitHub**: [AgenticGovernance/tractatus-framework](https://github.com/AgenticGovernance/tractatus-framework)
---
## 📧 Contact
- **Email**: john.stroh.nz@pm.me
- **Issues**: [GitHub Issues](https://github.com/AgenticGovernance/tractatus-framework/issues)
- **Discussions**: [GitHub Discussions](https://github.com/AgenticGovernance/tractatus-framework/discussions)
---
## 🙏 Acknowledgments
This framework stands on the shoulders of:
- **Ludwig Wittgenstein** - Philosophical foundations from *Tractatus Logico-Philosophicus*
- **March & Simon** - Organizational theory and decision-making frameworks
- **Isaiah Berlin & Ruth Chang** - Value pluralism and incommensurability theory
- **Anthropic** - Claude AI system for validation and development support
- **Open Source Community** - Tools, libraries, and collaborative development
---
## 📖 Philosophy
> **"Whereof one cannot speak, thereof one must be silent."**
> — Ludwig Wittgenstein

Applied to AI safety:

> **"Whereof the AI cannot safely decide, thereof it must request human judgment."**

Tractatus recognizes that **some decisions cannot be systematized** without value judgments. Rather than pretend AI can make these decisions "correctly," we build systems that **structurally defer to human judgment** in appropriate domains.

This isn't a limitation—it's **architectural integrity**.

---
## 👥 Development Attribution
This framework represents collaborative human-AI development:
- **Conceptual design, governance architecture, and quality oversight**: John Stroh
- **Implementation, documentation, and iterative refinement**: Developed through extended collaboration with Claude (Anthropic)
- **Testing and validation**: Tested across ~500 Claude Code sessions over 6 months

This attribution reflects the reality of modern AI-assisted development while maintaining clear legal copyright (John Stroh) and transparent acknowledgment of AI's substantial role in implementation.

---
<!-- PUBLIC_REPO_SAFE -->
**Tractatus Framework** | [Documentation](https://agenticgovernance.digital/docs.html) | [Apache 2.0 License](LICENSE)