#!/usr/bin/env node
/**
 * Seed Blog Posts
 * Creates initial blog content for the Tractatus site.
 *
 * Usage: node scripts/seed-blog-posts.js
 *
 * Uses MONGODB_URI from environment or defaults to local dev database.
 */
const { MongoClient } = require('mongodb');

const uri = process.env.MONGODB_URI || 'mongodb://localhost:27017/tractatus_dev';
const dbName = process.env.MONGODB_DB || 'tractatus_dev';

const posts = [
  {
    title: 'Why We Built Tractatus: The 27027 Incident and the Case for Architectural AI Safety',
    slug: 'why-we-built-tractatus-27027-incident',
    author: { type: 'human', name: 'John Stroh' },
    content: `
In October 2025, during an extended Claude Code session building the Village community platform, something unexpected happened. I specified port 27027 for a MongoDB connection — a deliberate choice, documented in the project instructions. Claude Code used port 27017 instead.
This wasn't a hallucination. It wasn't forgetting. It was something more fundamental: pattern recognition bias overriding explicit user instructions. The AI's training data overwhelmingly associated MongoDB with port 27017, and that statistical weight was strong enough to override a direct, unambiguous instruction from a human.
A wrong port number is easily caught and easily fixed. But the underlying failure mode — AI training patterns silently overriding human intent — scales in concerning ways. If an AI system can override "use port 27027" because its training says otherwise, what happens when the instruction is "prioritise user privacy over engagement metrics" and the training data overwhelmingly associates success with engagement?
Current AI safety approaches address this through better training, fine-tuning, and RLHF. These are valuable contributions. But they share a structural limitation: they operate within the AI's own reasoning process. The AI must choose to follow its safety training, and that choice is vulnerable to the same pattern-override dynamics that caused the 27027 incident.
The Tractatus Framework emerged from a different question: what if safety constraints operated outside the AI's reasoning entirely? Not better training, but external architectural enforcement — governance services that validate AI actions before they execute, independent of whether the AI "wants" to comply.
This is not a novel idea in engineering. Building codes don't rely on architects choosing to build safe structures — they enforce structural requirements through inspection. Financial regulations don't rely on banks choosing ethical behaviour — they enforce compliance through external audit. The question is whether this principle can be applied to AI governance.
After more than a year of development and 800+ commits, we've built a framework with six governance services operating in the critical execution path of AI operations. The evidence is preliminary — single implementation, self-reported metrics, no independent validation — but the architectural approach has proven feasible.
Key observations:
These are early findings, not conclusions. The honest limitations are substantial: single implementation context, no adversarial testing, no multi-organisation validation. We document these limitations prominently because intellectual honesty is more important than marketing.
The Tractatus Framework is open source under Apache 2.0 because AI safety benefits from transparency and collective improvement, not proprietary control. We're looking for research collaborators, pilot organisations, and critical engagement from the safety community.
If you're working on related problems — architectural AI safety, governance persistence, multi-agent coordination — we'd like to hear from you. The research is early, the questions are open, and the problem is too important for any single team to solve alone.
Learn more about the project | See the architecture | View on GitHub
`,
    excerpt: 'How a wrong port number revealed a fundamental gap in AI safety — and why we built an architectural framework to address it.',
    status: 'published',
    published_at: new Date('2026-02-07T10:00:00Z'),
    tags: ['origin-story', 'ai-safety', 'architecture', 'pattern-bias'],
    moderation: { human_reviewer: 'John Stroh', approved_at: new Date('2026-02-07T10:00:00Z') },
    tractatus_classification: { quadrant: 'STRATEGIC', values_sensitive: false, requires_strategic_review: false },
    view_count: 0,
    engagement: { shares: 0, comments: 0 }
  },
  {
    title: 'Research Update: From Five Principles to Three Research Papers — What Changed in Year One',
    slug: 'research-update-year-one-2025-2026',
    author: { type: 'human', name: 'John Stroh' },
    content: `The Tractatus Framework began in October 2025 as a practical response to AI governance failures observed during software development. Twelve months later, it has evolved from a set of hook scripts into a research framework with published papers, a production case study, and a community of interested researchers and implementers. This post summarises what changed, what we learned, and what questions remain open.
Early versions of Tractatus were purely technical — governance services enforcing boundaries. But production experience revealed that the harder questions weren't technical. They were philosophical: whose values should an AI system enforce? How do you handle genuinely conflicting legitimate interests? What does "human oversight" mean when decisions affect communities with different moral frameworks?
These questions led us to four intellectual traditions that now form the framework's philosophical basis:
The six governance services — BoundaryEnforcer, InstructionPersistenceClassifier, CrossReferenceValidator, ContextPressureMonitor, MetacognitiveVerifier, and PluralisticDeliberationOrchestrator — remain the core technical contribution. They operate in the critical execution path, external to the AI's reasoning, creating architectural enforcement rather than voluntary compliance.
The most significant architectural evolution was recognising that these services must coordinate through mutual validation ("Deep Interlock" in Alexander's terms). A single-service bypass doesn't compromise the whole system — multiple services must be circumvented simultaneously, which is exponentially harder.
The Village platform — a community platform with a sovereign locally-trained language model (SLL) called Home AI — became the primary production test of the framework. Every user interaction with Home AI passes through all six governance services before a response is generated.
Observed metrics from this deployment:
These are self-reported metrics from a single implementation by the framework developer. We state this limitation clearly and repeatedly. Independent validation is needed before these results can be generalised.
In January 2026, we published "Architectural Alignment: Bridging AI Safety and Deployment Architecture" in three editions tailored to different audiences:
The decision to write three editions reflects a core conviction: AI safety research must be accessible to the communities it affects. A paper readable only by researchers cannot inform the policy decisions and implementation choices that determine real-world outcomes.
The honest assessment is that we have more questions than answers:
These questions are not rhetorical disclaimers — they represent genuine gaps in our understanding that require collaborative research to address.
Tractatus is open source (Apache 2.0) because we believe AI safety benefits from collective improvement. We're actively seeking:
The research is early. The claims are modest. The questions are substantial. But the underlying problem — how to maintain human agency over AI systems that increasingly make consequential decisions — is urgent enough to warrant exploration from multiple angles.
View the full research timeline | Research foundations | Read the academic paper
`,
    excerpt: 'A summary of the Tractatus Framework\'s evolution from October 2025 to February 2026: philosophical foundations, architectural changes, the Village case study, and the open questions that remain.',
    status: 'published',
    published_at: new Date('2026-02-07T12:00:00Z'),
    tags: ['research-update', 'philosophy', 'architecture', 'village', 'year-in-review'],
    moderation: { human_reviewer: 'John Stroh', approved_at: new Date('2026-02-07T12:00:00Z') },
    tractatus_classification: { quadrant: 'STRATEGIC', values_sensitive: false, requires_strategic_review: false },
    view_count: 0,
    engagement: { shares: 0, comments: 0 }
  }
];

async function seed() {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const db = client.db(dbName);
    const collection = db.collection('blog_posts');

    // Check for duplicate slugs so the script is safe to re-run
    const slugs = posts.map(p => p.slug);
    const existing = await collection.find({ slug: { $in: slugs } }).toArray();
    const existingSlugs = new Set(existing.map(p => p.slug));
    const newPosts = posts.filter(p => !existingSlugs.has(p.slug));

    if (newPosts.length === 0) {
      console.log('All posts already exist (matched by slug). Nothing to insert.');
      return;
    }
    if (existingSlugs.size > 0) {
      console.log(`Skipping ${existingSlugs.size} existing post(s): ${[...existingSlugs].join(', ')}`);
    }

    // Insert only the posts that are not already present
    const result = await collection.insertMany(newPosts);
    console.log(`Inserted ${result.insertedCount} blog posts:`);
    newPosts.forEach(p => console.log(`  - "${p.title}" (${p.slug})`));
  } catch (error) {
    console.error('Error seeding blog posts:', error.message);
    process.exit(1);
  } finally {
    await client.close();
  }
}

seed().then(() => {
  console.log('Done.');
  process.exit(0);
});
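The filter-then-insert logic above could alternatively be expressed as a single idempotent `bulkWrite` of upserts keyed on `slug`. A minimal sketch, not part of the script itself (the `buildUpsertOps` helper is hypothetical):

```javascript
// Hypothetical alternative to the filter-then-insertMany approach:
// build one replaceOne upsert per post, keyed on slug, so re-running the
// seed never duplicates a post and picks up edits to the seed content.
function buildUpsertOps(posts) {
  return posts.map(post => ({
    replaceOne: {
      filter: { slug: post.slug }, // slug acts as the natural key
      replacement: post,
      upsert: true
    }
  }));
}
```

The result would be passed to the driver as `await collection.bulkWrite(buildUpsertOps(posts))`. Trade-off: a full-document replace would also reset mutable fields such as `view_count` on re-runs, which is why the script above deliberately skips existing posts instead.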