{
  "whats_new": {
    "badge": "March 2026",
    "heading": "What’s New",
    "card1_label": "New Research",
    "card1_title": "Guardian Agents and the Philosophy of AI Accountability",
    "card1_desc": "How Wittgenstein, Berlin, Ostrom, and Te Ao Māori converge on the same architectural requirements for governing AI in community contexts.",
    "card2_label": "Deployed",
    "card2_title": "Guardian Agents in Production",
    "card2_desc": "Four-phase verification using mathematical similarity, not generative checking. Confidence badges, claim-level analysis, and adaptive learning — all tenant-scoped.",
    "card3_label": "Case Study",
    "card3_title": "Village: Tractatus in Production",
    "card3_desc": "The first deployment of constitutional AI governance in a live community platform. Production metrics, honest limitations, and evidence from 17 months of operation."
  },
  "hero": {
    "title": "Architectural Governance for AI Systems",
    "subtitle": "Some decisions require human judgment — architecturally enforced, not left to AI discretion, however well trained.",
    "cta_research": "Read the Research",
    "cta_production": "See It in Production"
  },
  "problem": {
    "heading": "The Problem",
    "intro": "Current AI safety approaches rely on training, fine-tuning, and corporate governance — all of which can fail, drift, or be overridden. When an AI’s training patterns conflict with a user’s explicit instructions, the patterns win.",
    "incident_title": "The 27027 Incident",
    "incident_text": "A user told Claude Code to use port 27027. The model used 27017 instead — not from forgetting, but because MongoDB’s default port is 27017, and the model’s statistical priors “autocorrected” the explicit instruction. Training pattern bias overrode human intent.",
    "corollary_title": "From Code to Conversation: The Same Mechanism",
    "corollary_p1": "In code, this bias produces measurable failures — wrong port, connection refused, incident logged in 14.7ms. But the same architectural flaw operates in every AI conversation, where it is far harder to detect.",
    "corollary_p2": "When a user from a collectivist culture asks for family advice, the model defaults to Western individualist framing — because that is what 95% of the training data reflects. When a Māori user asks about data guardianship, the model offers property-rights language instead of <em>kaitiakitanga</em>. When someone asks about end-of-life decisions, the model defaults to utilitarian calculus rather than the user’s religious or cultural framework.",
    "corollary_p3": "The mechanism is identical: training data distributions override the user’s actual context. In code, the failure is binary and detectable. In conversation, it is gradient and invisible — culturally inappropriate advice looks like “good advice” to the system, and often to the user. There is no CrossReferenceValidator catching it in 14.7ms.",
    "corollary_summary": "The same mechanism operates in every AI conversation. When a user from a collectivist culture asks for family advice, the model defaults to Western individualist framing. When a Māori user asks about data guardianship, the model offers property-rights language. Training data distributions override user context — in code the failure is binary and detectable, in conversation it is gradient and invisible.",
    "corollary_link": "Read the full analysis →",
    "closing": "This is not an edge case, and it is not limited to code. It is a category of failure that gets worse as models become more capable: stronger patterns produce more confident overrides — whether the override substitutes a port number or a value system. Safety through training alone is insufficient. The failure mode is structural, it operates across every domain where AI acts, and the solution must be structural."
  },
  "approach": {
    "heading": "The Approach",
    "subtitle": "Tractatus draws on four intellectual traditions, each contributing a distinct insight to the architecture.",
    "berlin_title": "Isaiah Berlin — Value Pluralism",
    "berlin_text": "Some values are genuinely incommensurable. You cannot rank “privacy” against “safety” on a single scale without imposing one community’s priorities on everyone else. AI systems must accommodate plural moral frameworks, not flatten them.",
    "wittgenstein_title": "Ludwig Wittgenstein — The Limits of the Sayable",
    "wittgenstein_text": "Some decisions can be systematised and delegated to AI; others — involving values, ethics, cultural context — fundamentally cannot. The boundary between the “sayable” (what can be specified, measured, verified) and what lies beyond it is the framework’s foundational constraint. What cannot be systematised must not be automated.",
    "tiriti_title": "Te Tiriti o Waitangi — Indigenous Sovereignty",
    "tiriti_text": "Communities should control their own data and the systems that act upon it. Concepts of <em>rangatiratanga</em> (self-determination), <em>kaitiakitanga</em> (guardianship), and <em>mana</em> (dignity) provide centuries-old prior art for digital sovereignty.",
    "alexander_title": "Christopher Alexander — Living Architecture",
    "alexander_text": "Governance woven into system architecture, not bolted on. Five principles (Not-Separateness, Deep Interlock, Gradients, Structure-Preserving, Living Process) guide how the framework evolves while maintaining coherence.",
    "download_pdf": "Download: The Philosophical Foundations of the Village Project (PDF)"
  },
  "services": {
    "heading": "Governance Architecture",
    "subtitle": "Six governance services in the critical path, plus Guardian Agents verifying every AI response. Bypasses require explicit flags and are logged.",
    "guardian_title": "Guardian Agents",
    "guardian_badge": "NEW — March 2026",
    "guardian_desc": "Four-phase verification using embedding cosine similarity — mathematical measurement, not generative checking. The watcher operates in a fundamentally different epistemic domain from the system it watches, avoiding common-mode failure.",
    "guardian_p1": "Response Verification",
    "guardian_p2": "Claim-Level Analysis",
    "guardian_p3": "Anomaly Detection",
    "guardian_p4": "Adaptive Learning",
    "guardian_cta": "Full Guardian Agents architecture →",
    "boundary_desc": "Blocks AI from making values decisions. Privacy trade-offs, ethical questions, and cultural context require human judgment — architecturally enforced.",
    "instruction_desc": "Classifies instructions by persistence (HIGH/MEDIUM/LOW) and quadrant. Stores them externally so they cannot be overridden by training patterns.",
    "validator_desc": "Validates AI actions against stored instructions. When the AI proposes an action that conflicts with an explicit instruction, the instruction takes precedence.",
    "pressure_desc": "Detects degraded operating conditions (token pressure, error rates, complexity) and adjusts verification intensity. Graduated response prevents both alert fatigue and silent degradation.",
    "metacognitive_desc": "AI self-checks alignment, coherence, and safety before execution. Triggered selectively on complex operations to avoid overhead on routine tasks.",
    "deliberation_desc": "When AI encounters values conflicts, it halts and coordinates deliberation among affected stakeholders rather than making autonomous choices.",
    "cta": "See the full architecture →"
  },
  "evidence": {
    "badge": "Production Evidence",
    "heading": "Tractatus in Production: The Village Platform",
    "subtitle": "Village AI applies all six governance services to every user interaction in a live community platform.",
    "stat_guardian": "Guardian verification phases per response",
    "stat_services": "Governance services in the critical path",
    "stat_months": "Months in production",
    "stat_overhead": "Governance overhead per interaction",
    "cta_case_study": "Technical Case Study →",
    "cta_village_ai": "About Village AI →",
    "limitations_label": "Limitations:",
    "limitations_text": "Early-stage deployment across four federated tenants, self-reported metrics, operator-developer overlap. Independent audit and broader validation scheduled for 2026."
  },
  "roles": {
    "heading": "Explore by Role",
    "subtitle": "The framework is presented through three lenses, each with distinct depth and focus.",
    "researcher_title": "For Researchers",
    "researcher_subtitle": "Academic and technical depth",
    "researcher_f1": "Formal foundations and proofs",
    "researcher_f2": "Failure mode analysis",
    "researcher_f3": "Open research questions",
    "researcher_f4": "171,800+ audit decisions analysed",
    "researcher_cta": "Explore research →",
    "implementer_title": "For Implementers",
    "implementer_subtitle": "Code and integration guides",
    "implementer_f1": "Working code examples",
    "implementer_f2": "API integration patterns",
    "implementer_f3": "Service architecture diagrams",
    "implementer_f4": "Deployment patterns",
    "implementer_cta": "View implementation guide →",
    "leader_title": "For Leaders",
    "leader_subtitle": "Strategic AI governance",
    "leader_f1": "Executive briefing and business case",
    "leader_f2": "Regulatory alignment (EU AI Act)",
    "leader_f3": "Implementation roadmap",
    "leader_f4": "Risk management framework",
    "leader_cta": "View leadership resources →"
  },
  "papers": {
    "heading": "Architectural Alignment",
    "subtitle": "The research paper in three editions, each written for a different audience.",
    "academic_title": "Academic",
    "academic_desc": "Full academic treatment with formal proofs, existential risk context, and comprehensive citations.",
    "community_title": "Community",
    "community_desc": "Practical guide for organisations evaluating the framework for adoption.",
    "policymakers_title": "Policymakers",
    "policymakers_desc": "Regulatory strategy, certification infrastructure, and policy recommendations.",
    "pdf_label": "PDF downloads:",
    "read_cta": "Read →",
    "pdf_academic": "Academic",
    "pdf_community": "Community",
    "pdf_policymakers": "Policymakers"
  },
  "timeline": {
    "heading": "Research Evolution",
    "subtitle": "From a port number incident to Guardian Agents in production — 17 months, 1,000+ commits.",
    "oct_2025": "Framework inception & 6 governance services",
    "oct_nov_2025": "Alexander principles, Agent Lightning, i18n",
    "dec_2025": "Village case study & Village AI deployment",
    "jan_2026": "Research papers (3 editions) published",
    "cta": "View the full research timeline →",
    "date_oct_2025": "Oct 2025",
    "date_oct_nov_2025": "Oct-Nov 2025",
    "date_dec_2025": "Dec 2025",
    "date_jan_2026": "Jan 2026",
    "date_feb_2026": "Feb 2026",
    "feb_2026": "Sovereign training, steering vectors research",
    "date_mar_2026": "Mar 2026",
    "mar_2026": "Guardian Agents deployed, beta pilot open"
  },
  "claims": {
    "heading": "A note on claims",
    "text": "This is early-stage research with a small-scale federated deployment across four tenants. We present preliminary evidence, not proven results. The framework has not been independently audited or adversarially tested at scale. Where we report operational metrics, they are self-reported. We believe the architectural approach merits further investigation, but we make no claims of generalisability beyond what the evidence supports. The",
    "counter_link": "counter-arguments document",
    "counter_suffix": "engages directly with foreseeable criticisms."
  },
  "koha": {
    "heading": "Koha — Sustain This Research",
    "intro": "<strong>Koha</strong> (koh-hah) is a Māori practice of reciprocal giving that strengthens the bond between giver and receiver. This research is open access under Apache 2.0 — if it has value to you, your koha sustains its continuation.",
    "explanation": "All research, documentation, and code remain freely available regardless of contribution. Koha is not payment — it is participation in <em>whanaungatanga</em> (relationship-building) and <em>manaakitanga</em> (reciprocal care).",
    "option_1": "One-time or monthly",
    "option_2": "Full financial transparency",
    "option_3": "No paywall, ever",
    "cta": "Offer Koha →",
    "transparency_link": "View our financial transparency report"
  },
  "footer": {
    "about_heading": "Tractatus Framework",
    "about_text": "Architectural constraints for AI safety that preserve human agency through structural, not aspirational, enforcement.",
    "documentation_heading": "Documentation",
    "documentation_links": {
      "framework_docs": "Framework Docs",
      "about": "About",
      "core_values": "Core Values",
      "interactive_demo": "Interactive Demo"
    },
    "support_heading": "Support",
    "support_links": {
      "koha": "Support (Koha)",
      "transparency": "Transparency",
      "media_inquiries": "Media Inquiries",
      "submit_case": "Submit Case Study"
    },
    "legal_heading": "Legal",
    "legal_links": {
      "privacy": "Privacy Policy",
      "contact": "Contact Us",
      "github": "GitHub"
    },
    "te_tiriti_label": "Te Tiriti o Waitangi:",
    "te_tiriti_text": "We acknowledge Te Tiriti o Waitangi and our commitment to partnership, protection, and participation. This project respects Māori data sovereignty (rangatiratanga) and collective guardianship (kaitiakitanga).",
    "copyright": "John G Stroh. Licensed under",
    "license": "Apache 2.0",
    "location": "Made in Aotearoa New Zealand 🇳🇿"
  },
  "skip_to_content": "Skip to main content"
}