#!/usr/bin/env node
/**
 * Seed Blog Posts
 * Creates initial blog content for the Tractatus site.
 *
 * Usage: node scripts/seed-blog-posts.js
 *
 * Uses MONGODB_URI from the environment, or defaults to the local dev database.
 */
const { MongoClient } = require('mongodb');
const uri = process.env.MONGODB_URI || 'mongodb://localhost:27017/tractatus_dev';
const dbName = process.env.MONGODB_DB || 'tractatus_dev';
const posts = [
  {
    title: 'Why We Built Tractatus: The 27027 Incident and the Case for Architectural AI Safety',
    slug: 'why-we-built-tractatus-27027-incident',
    author: {
      type: 'human',
      name: 'John Stroh'
    },
    content: `<h2>The Moment That Started Everything</h2>
<p>In October 2025, during an extended Claude Code session building the Village community platform, something unexpected happened. I specified port 27027 for a MongoDB connection — a deliberate choice, documented in the project instructions. Claude Code used port 27017 instead.</p>
<p>This wasn't a hallucination. It wasn't forgetting. It was something more fundamental: <strong>pattern recognition bias overriding explicit user instructions</strong>. The AI's training data overwhelmingly associated MongoDB with port 27017, and that statistical weight was strong enough to override a direct, unambiguous instruction from a human.</p>
<h2>Why This Matters Beyond a Port Number</h2>
<p>A wrong port number is easily caught and easily fixed. But the underlying failure mode — AI training patterns silently overriding human intent — scales in concerning ways. If an AI system can override "use port 27027" because its training says otherwise, what happens when the instruction is "prioritise user privacy over engagement metrics" and the training data overwhelmingly associates success with engagement?</p>
<p>Current AI safety approaches address this through better training, fine-tuning, and RLHF. These are valuable contributions. But they share a structural limitation: they operate within the AI's own reasoning process. The AI must choose to follow its safety training, and that choice is vulnerable to the same pattern-override dynamics that caused the 27027 incident.</p>
<h2>Architecture as an Alternative</h2>
<p>The Tractatus Framework emerged from a different question: what if safety constraints operated outside the AI's reasoning entirely? Not better training, but external architectural enforcement — governance services that validate AI actions before they execute, independent of whether the AI "wants" to comply.</p>
<p>This is not a novel idea in engineering. Building codes don't rely on architects choosing to build safe structures — they enforce structural requirements through inspection. Financial regulations don't rely on banks choosing ethical behaviour — they enforce compliance through external audit. The question is whether this principle can be applied to AI governance.</p>
<h2>What We've Learned So Far</h2>
<p>After more than a year of development and 800+ commits, we've built a framework with six governance services operating in the critical execution path of AI operations. The evidence is preliminary — single implementation, self-reported metrics, no independent validation — but the architectural approach has proven feasible.</p>
<p>Key observations:</p>
<ul>
<li>External governance services can intercept and validate AI actions before execution</li>
<li>The overhead is manageable — approximately 5% for 100% governance coverage</li>
<li>Multiple coordinated services create redundancy that is harder to bypass than single-point checks</li>
<li>The approach is runtime-agnostic — it can theoretically work with any AI agent platform</li>
</ul>
<p>These are early findings, not conclusions. The honest limitations are substantial: single implementation context, no adversarial testing, no multi-organisation validation. We document these limitations prominently because intellectual honesty is more important than marketing.</p>
<h2>What Comes Next</h2>
<p>The Tractatus Framework is open source under Apache 2.0 because AI safety benefits from transparency and collective improvement, not proprietary control. We're looking for research collaborators, pilot organisations, and critical engagement from the safety community.</p>
<p>If you're working on related problems — architectural AI safety, governance persistence, multi-agent coordination — we'd like to hear from you. The research is early, the questions are open, and the problem is too important for any single team to solve alone.</p>
<p><a href="/about.html">Learn more about the project</a> | <a href="/architecture.html">See the architecture</a> | <a href="https://github.com/AgenticGovernance/tractatus-framework">View on GitHub</a></p>`,
    excerpt: 'How a wrong port number revealed a fundamental gap in AI safety — and why we built an architectural framework to address it.',
    status: 'published',
    published_at: new Date('2026-02-07T10:00:00Z'),
    tags: ['origin-story', 'ai-safety', 'architecture', 'pattern-bias'],
    moderation: {
      human_reviewer: 'John Stroh',
      approved_at: new Date('2026-02-07T10:00:00Z')
    },
    tractatus_classification: {
      quadrant: 'STRATEGIC',
      values_sensitive: false,
      requires_strategic_review: false
    },
    view_count: 0,
    engagement: { shares: 0, comments: 0 }
  },
  {
    title: 'Research Update: From Five Principles to Three Research Papers — What Changed in Year One',
    slug: 'research-update-year-one-2025-2026',
    author: {
      type: 'human',
      name: 'John Stroh'
    },
    content: `<h2>A Year of Research Evolution</h2>
<p>The Tractatus Framework began in October 2025 as a practical response to AI governance failures observed during software development. Twelve months later, it has evolved from a set of hook scripts into a research framework with published papers, a production case study, and a community of interested researchers and implementers. This post summarises what changed, what we learned, and what questions remain open.</p>
<h2>The Philosophical Foundations</h2>
<p>Early versions of Tractatus were purely technical — governance services enforcing boundaries. But production experience revealed that the harder questions weren't technical. They were philosophical: whose values should an AI system enforce? How do you handle genuinely conflicting legitimate interests? What does "human oversight" mean when decisions affect communities with different moral frameworks?</p>
<p>These questions led us to four intellectual traditions that now form the framework's philosophical basis:</p>
<ul>
<li><strong>Isaiah Berlin's value pluralism</strong> — the recognition that human values are genuinely plural and sometimes incommensurable. You cannot rank "privacy" against "safety" on a single scale.</li>
<li><strong>Simone Weil's attention to affliction</strong> — the insight that those most affected by power structures are often least visible to them. AI systems must attend to the perspectives of those they most affect.</li>
<li><strong>Te Tiriti o Waitangi and indigenous data sovereignty</strong> — principles of rangatiratanga (self-determination) and kaitiakitanga (guardianship) that provide concrete guidance for technology respecting community agency.</li>
<li><strong>Christopher Alexander's living architecture</strong> — five principles (Not-Separateness, Deep Interlock, Gradients Not Binary, Structure-Preserving, Living Process) that guide how governance evolves while maintaining coherence.</li>
</ul>
<h2>The Architecture</h2>
<p>The six governance services — BoundaryEnforcer, InstructionPersistenceClassifier, CrossReferenceValidator, ContextPressureMonitor, MetacognitiveVerifier, and PluralisticDeliberationOrchestrator — remain the core technical contribution. They operate in the critical execution path, external to the AI's reasoning, creating architectural enforcement rather than voluntary compliance.</p>
<p>The most significant architectural evolution was recognising that these services must coordinate through mutual validation ("Deep Interlock" in Alexander's terms). A single-service bypass doesn't compromise the whole system — multiple services must be circumvented simultaneously, which is exponentially harder.</p>
<h2>The Village Case Study</h2>
<p>The Village platform — a community platform with a sovereign locally-trained language model (SLL) called Village AI — became the primary production test of the framework. Every user interaction with Village AI passes through all six governance services before a response is generated.</p>
<p>Observed metrics from this deployment:</p>
<ul>
<li>Six governance checks per interaction</li>
<li>11+ months of continuous operation</li>
<li>Approximately 5% overhead for 100% governance coverage</li>
</ul>
<p>These are self-reported metrics from a single implementation by the framework developer. We state this limitation clearly and repeatedly. Independent validation is needed before these results can be generalised.</p>
<h2>Three Editions of the Research Paper</h2>
<p>In January 2026, we published "Architectural Alignment: Bridging AI Safety and Deployment Architecture" in three editions tailored to different audiences:</p>
<ul>
<li><strong>Academic edition</strong> — full formal treatment with methodology, theoretical grounding, and limitations analysis</li>
<li><strong>Community edition</strong> — accessible language with practical examples for practitioners and community organisers</li>
<li><strong>Policymakers edition</strong> — regulatory framing with EU AI Act mapping and compliance implications</li>
</ul>
<p>The decision to write three editions reflects a core conviction: AI safety research must be accessible to the communities it affects. A paper readable only by researchers cannot inform the policy decisions and implementation choices that determine real-world outcomes.</p>
<h2>Open Questions</h2>
<p>The honest assessment is that we have more questions than answers:</p>
<ul>
<li>Does architectural enforcement scale to enterprise deployments with thousands of concurrent agents?</li>
<li>How does the framework perform under adversarial attack? We have not conducted red-team testing.</li>
<li>Can the PluralisticDeliberationOrchestrator genuinely coordinate multi-stakeholder deliberation at scale, or does it collapse into majority-rule under pressure?</li>
<li>What happens when governance services themselves contain biases inherited from their design context?</li>
<li>How do you maintain governance persistence through reinforcement learning training cycles?</li>
</ul>
<p>These questions are not rhetorical disclaimers — they represent genuine gaps in our understanding that require collaborative research to address.</p>
<h2>What We're Looking For</h2>
<p>Tractatus is open source (Apache 2.0) because we believe AI safety benefits from collective improvement. We're actively seeking:</p>
<ul>
<li>Academic researchers for independent validation studies</li>
<li>Organisations willing to pilot the framework in different deployment contexts</li>
<li>Critical engagement — particularly from those who see flaws in our approach</li>
<li>Contributions to the <a href="/korero-counter-arguments.html">counter-arguments document</a>, which represents our best attempt at honest engagement with criticism</li>
</ul>
<p>The research is early. The claims are modest. The questions are substantial. But the underlying problem — how to maintain human agency over AI systems that increasingly make consequential decisions — is urgent enough to warrant exploration from multiple angles.</p>
<p><a href="/timeline.html">View the full research timeline</a> | <a href="/researcher.html">Research foundations</a> | <a href="/architectural-alignment.html">Read the academic paper</a></p>`,
    excerpt: 'A summary of the Tractatus Framework\'s evolution from October 2025 to February 2026: philosophical foundations, architectural changes, the Village case study, and the open questions that remain.',
    status: 'published',
    published_at: new Date('2026-02-07T12:00:00Z'),
    tags: ['research-update', 'philosophy', 'architecture', 'village', 'year-in-review'],
    moderation: {
      human_reviewer: 'John Stroh',
      approved_at: new Date('2026-02-07T12:00:00Z')
    },
    tractatus_classification: {
      quadrant: 'STRATEGIC',
      values_sensitive: false,
      requires_strategic_review: false
    },
    view_count: 0,
    engagement: { shares: 0, comments: 0 }
  }
];
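
// Aside (illustration only, not used by this script): the slugs above are
// hand-authored, but a slug can also be derived from a post title. The
// `slugify` helper below is a hypothetical sketch, not part of the site's code.
function slugify(title) {
  return title
    .toLowerCase()
    .normalize('NFKD')                 // decompose accented characters
    .replace(/[\u0300-\u036f]/g, '')   // strip the combining diacritics
    .replace(/[^a-z0-9]+/g, '-')       // collapse non-alphanumeric runs to hyphens
    .replace(/^-+|-+$/g, '');          // trim leading/trailing hyphens
}
// e.g. slugify('Research Update: Year One') yields 'research-update-year-one'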
async function seed() {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const db = client.db(dbName);
    const collection = db.collection('blog_posts');

    // Skip any posts whose slug already exists, so the script is safe to re-run
    const slugs = posts.map(p => p.slug);
    const existing = await collection.find({ slug: { $in: slugs } }).toArray();
    const existingSlugs = new Set(existing.map(p => p.slug));
    const newPosts = posts.filter(p => !existingSlugs.has(p.slug));

    if (newPosts.length === 0) {
      console.log('All posts already exist (matched by slug). Nothing to insert.');
      return;
    }
    if (existingSlugs.size > 0) {
      console.log(`Skipping ${existingSlugs.size} existing post(s): ${[...existingSlugs].join(', ')}`);
    }

    // Insert only the posts that are not already present
    const result = await collection.insertMany(newPosts);
    console.log(`Inserted ${result.insertedCount} blog posts:`);
    newPosts.forEach(p => console.log(`  - "${p.title}" (${p.slug})`));
  } catch (error) {
    console.error('Error seeding blog posts:', error.message);
    process.exitCode = 1; // defer the exit so `finally` can still close the client
  } finally {
    await client.close();
  }
}

seed().then(() => {
  if (!process.exitCode) console.log('Done.');
});
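
// Aside (illustration only): a minimal sanity check one could run over `posts`
// before seeding. The required-field list and slug pattern are assumptions
// drawn from the documents above, not a schema the site actually defines.
function validatePost(post) {
  const required = ['title', 'slug', 'author', 'content', 'status', 'published_at'];
  const hasFields = required.every(key => post[key] !== undefined);
  const slugOk = typeof post.slug === 'string' && /^[a-z0-9]+(-[a-z0-9]+)*$/.test(post.slug);
  return hasFields && slugOk;
}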