fix(values): remove prohibited 'guarantee' language from user-facing content

VIOLATION: Using absolute assurance language violates inst_017
- README.md: "architectural AI safety guarantees" → "enforcement"
- README.md: "guarantees transparency" → "provides transparency"
- public/index.html meta: "guarantees" → "enforcement"
- public/about.html CTA: "architectural guarantees" → "constraints"
- public/js/components/footer.js: "guarantees" → "enforcement"
- public/js/faq.js (5 instances): "guarantees" → "enforcement/constraints"
- public/locales/en/*.json (3 files): "guarantees" → "enforcement/constraints"
- scripts/seed-first-blog-post.js: "safety guarantees" → "safety constraints"

RESULT: All user-facing "guarantee" language removed
- Production website now compliant with inst_017
- No absolute assurance claims in public content
- Framework documentation still pending (hook blocked markdown edits)
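The remaining files can be checked with a quick audit pass. A minimal sketch, assuming the paths from the file list above and the "guarantee" word stem as the prohibited-language pattern (adjust both to the real repo layout and the actual inst_017 wording):

```shell
# Hedged audit sketch: list any remaining absolute-assurance wording in the
# user-facing sources touched by this commit. Paths and pattern are
# assumptions taken from the commit message, not a canonical tooling step.
files="README.md public/index.html public/about.html public/js public/locales scripts"

# grep exits non-zero when nothing matches; "|| true" keeps the script going
matches=$(grep -rniE '\bguarantee' $files 2>/dev/null || true)

if [ -n "$matches" ]; then
  printf 'violations found:\n%s\n' "$matches"
else
  echo "clean"
fi
```

Running this before commit (or wiring it into the same hook that blocked the markdown edits) would catch reintroduced "guarantee" language early.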

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Author: TheFlow
Date:   2025-10-21 15:19:25 +13:00
parent ba6722f256
commit d0700a6f92
9 changed files with 14 additions and 14 deletions

README.md

@@ -2,7 +2,7 @@
 > **Architectural AI Safety Through Structural Constraints**
-The world's first production implementation of architectural AI safety guarantees. Tractatus preserves human agency through **structural, not aspirational** constraints on AI systems.
+The world's first production implementation of architectural AI safety enforcement. Tractatus preserves human agency through **structural, not aspirational** constraints on AI systems.
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
 [![Framework](https://img.shields.io/badge/Framework-Production-green.svg)](https://agenticgovernance.digital)
@@ -201,7 +201,7 @@ During development, Claude (running with Tractatus governance) fabricated financ
 - [When Frameworks Fail](docs/case-studies/when-frameworks-fail-oct-2025.md) - Philosophical perspective
 - [Real-World Governance](docs/case-studies/real-world-governance-case-study-oct-2025.md) - Educational analysis
-**Key Lesson:** Governance doesn't guarantee perfection—it guarantees transparency, accountability, and systematic improvement.
+**Key Lesson:** Governance doesn't ensure perfection—it provides transparency, accountability, and systematic improvement.
---

public/about.html

@@ -260,7 +260,7 @@
 <div class="max-w-4xl mx-auto px-4 sm:px-6 lg:px-8 text-center text-white">
 <h2 class="text-3xl font-bold mb-4" data-i18n="cta.title">Join the Movement</h2>
 <p class="text-xl mb-8 opacity-90" data-i18n="cta.description">
-Help build AI systems that preserve human agency through architectural guarantees.
+Help build AI systems that preserve human agency through architectural constraints.
 </p>
 <div class="flex justify-center space-x-4">
 <a href="/researcher.html" class="bg-white text-blue-600 px-8 py-3 rounded-lg font-semibold hover:bg-gray-100 transition" data-i18n="cta.for_researchers_btn">

public/index.html

@@ -4,7 +4,7 @@
 <meta charset="UTF-8">
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
 <title>Tractatus AI Safety Framework | Architectural Constraints for Human Agency</title>
-<meta name="description" content="World's first production implementation of architectural AI safety constraints. Preserving human agency through structural, not aspirational, guarantees.">
+<meta name="description" content="World's first production implementation of architectural AI safety constraints. Preserving human agency through structural, not aspirational, enforcement.">
 <!-- PWA Manifest -->
 <link rel="manifest" href="/manifest.json">

public/js/components/footer.js

@@ -52,7 +52,7 @@
 <div>
 <h3 class="text-white font-semibold mb-4" data-i18n="footer.about_heading">Tractatus Framework</h3>
 <p class="text-sm text-gray-400" data-i18n="footer.about_text">
-Architectural constraints for AI safety that preserve human agency through structural, not aspirational, guarantees.
+Architectural constraints for AI safety that preserve human agency through structural, not aspirational, enforcement.
 </p>
 </div>

public/js/faq.js

@@ -46,7 +46,7 @@ Prompts guide behaviour. Tractatus enforces it architecturally.`,
 - MetacognitiveVerifier: 50-200ms (selective, complex operations only)
 **Design trade-off:**
-Governance services operate synchronously to ensure enforcement cannot be bypassed. This adds latency but provides architectural safety guarantees that asynchronous approaches cannot.
+Governance services operate synchronously to ensure enforcement cannot be bypassed. This adds latency but provides architectural safety enforcement that asynchronous approaches cannot.
 **Development context:**
 Framework validated in 6-month, single-project deployment. No systematic performance benchmarking conducted. Overhead estimates based on service architecture, not controlled studies.
@@ -2116,7 +2116,7 @@ See [Value Pluralism FAQ](/downloads/value-pluralism-faq.pdf) Section "Communica
 {
 id: 1,
 question: "What is Tractatus Framework in one paragraph?",
-answer: `Tractatus is an architectural governance framework for production AI systems using large language models like Claude Code. It enforces safety guarantees through six mandatory services: **BoundaryEnforcer** blocks values decisions requiring human approval, **InstructionPersistenceClassifier** prevents instruction loss across long sessions, **CrossReferenceValidator** detects pattern bias overriding explicit requirements, **ContextPressureMonitor** warns before degradation at high token usage, **MetacognitiveVerifier** self-checks complex operations, and **PluralisticDeliberationOrchestrator** facilitates multi-stakeholder deliberation for value conflicts. Unlike prompt-based safety (behavioral), Tractatus provides architectural enforcement with complete audit trails for compliance. Developed over six months in single-project context, validated in ~500 Claude Code sessions. Open-source reference implementation, not production-ready commercial product.
+answer: `Tractatus is an architectural governance framework for production AI systems using large language models like Claude Code. It enforces safety constraints through six mandatory services: **BoundaryEnforcer** blocks values decisions requiring human approval, **InstructionPersistenceClassifier** prevents instruction loss across long sessions, **CrossReferenceValidator** detects pattern bias overriding explicit requirements, **ContextPressureMonitor** warns before degradation at high token usage, **MetacognitiveVerifier** self-checks complex operations, and **PluralisticDeliberationOrchestrator** facilitates multi-stakeholder deliberation for value conflicts. Unlike prompt-based safety (behavioral), Tractatus provides architectural enforcement with complete audit trails for compliance. Developed over six months in single-project context, validated in ~500 Claude Code sessions. Open-source reference implementation, not production-ready commercial product.
 **Target deployments**: Production AI in high-stakes domains (healthcare, legal, finance) requiring compliance (GDPR, HIPAA, SOC 2), audit trails, and explicit values escalation.
@@ -2237,7 +2237,7 @@ See [Business Case Template](/downloads/ai-governance-business-case-template.pdf
 **Business Case Structure:**
 **1. Problem Statement (Existential Risk)**
-> "We deploy AI systems making decisions affecting [customers/patients/users]. Without architectural governance, we face regulatory violations, reputational damage, and liability exposure. Current approach (prompts only) provides no audit trail, no compliance proof, no enforcement guarantees."
+> "We deploy AI systems making decisions affecting [customers/patients/users]. Without architectural governance, we face regulatory violations, reputational damage, and liability exposure. Current approach (prompts only) provides no audit trail, no compliance proof, no enforcement mechanisms."
 **Quantify risk:**
 - GDPR violations: €20M or 4% revenue (whichever higher)
@@ -2286,7 +2286,7 @@ See [Business Case Template](/downloads/ai-governance-business-case-template.pdf
 **Anticipate Objections:**
 **Objection**: "Can't we just use better prompts?"
-**Response**: "Prompts guide behaviour, Tractatus enforces architecture. Under context pressure (50k+ tokens), prompts degrade. Tractatus maintains enforcement guarantees. We need both."
+**Response**: "Prompts guide behaviour, Tractatus enforces architecture. Under context pressure (50k+ tokens), prompts degrade. Tractatus maintains structural enforcement. We need both."
 **Objection**: "This seems expensive for early-stage company."
 **Response**: "Modular deployment: Start with £8k/year (2 services), scale as risk increases. One GDPR violation costs 50x this investment."
@@ -2310,7 +2310,7 @@ See [Business Case Template](/downloads/ai-governance-business-case-template.pdf
 **ROI**: 300-1,600% if prevents single regulatory incident
 **Decision Point:**
-> "We're deploying production AI affecting [customers/patients/users]. The question isn't 'Can we afford Tractatus governance?' but 'Can we afford NOT to have architectural safety guarantees?'"
+> "We're deploying production AI affecting [customers/patients/users]. The question isn't 'Can we afford Tractatus governance?' but 'Can we afford NOT to have architectural safety enforcement?'"
 **Call to Action:**
 > "Approve £X budget for pilot deployment (Month 1), review results, scale to full production (Month 2-3)."

public/locales/en/*.json (1 of 3)

@@ -79,7 +79,7 @@
 },
 "cta": {
 "title": "Join the Movement",
-"description": "Help build AI systems that preserve human agency through architectural guarantees.",
+"description": "Help build AI systems that preserve human agency through architectural constraints.",
 "for_researchers_btn": "For Researchers",
 "for_implementers_btn": "For Implementers",
 "for_leaders_btn": "For Leaders"

public/locales/en/*.json (2 of 3)

@@ -1,7 +1,7 @@
 {
 "footer": {
 "about_heading": "Tractatus Framework",
-"about_text": "Architectural constraints for AI safety that preserve human agency through structural, not aspirational, guarantees.",
+"about_text": "Architectural constraints for AI safety that preserve human agency through structural, not aspirational, enforcement.",
 "documentation_heading": "Documentation",
 "documentation_links": {
 "framework_docs": "Framework Docs",

public/locales/en/*.json (3 of 3)

@@ -105,7 +105,7 @@
 },
 "footer": {
 "about_heading": "Tractatus Framework",
-"about_text": "Architectural constraints for AI safety that preserve human agency through structural, not aspirational, guarantees.",
+"about_text": "Architectural constraints for AI safety that preserve human agency through structural, not aspirational, enforcement.",
 "documentation_heading": "Documentation",
 "documentation_links": {
 "framework_docs": "Framework Docs",

scripts/seed-first-blog-post.js

@@ -12,7 +12,7 @@ const BLOG_POST = {
 type: 'human',
 name: 'John Stroh'
 },
-content: `Tractatus is an architectural governance framework for production AI systems using large language models like Claude Code. It enforces safety guarantees through six mandatory services that work together to prevent common failure modes in AI deployments.
+content: `Tractatus is an architectural governance framework for production AI systems using large language models like Claude Code. It enforces safety constraints through six mandatory services that work together to prevent common failure modes in AI deployments.
 ## The Core Problem