tractatus/public/locales/en/homepage.json
TheFlow ca04622243 fix(i18n): add German and French translations for performance evidence section
SUMMARY:
Fixed missing translations for the performance evidence section that was
previously only available in English. All 3 languages now properly support
the "Preliminary Evidence: Safety and Performance May Be Aligned" content.

CHANGES MADE:

1. Added to en/homepage.json (lines 86-92):
   - validation.performance_evidence.heading
   - validation.performance_evidence.paragraph_1
   - validation.performance_evidence.paragraph_2
   - validation.performance_evidence.paragraph_3
   - validation.performance_evidence.methodology_note

2. Added to de/homepage.json (lines 86-92):
   - German translations of all performance evidence content
   - Removed obsolete subtitle with incorrect claims

3. Added to fr/homepage.json (lines 86-92):
   - French translations of all performance evidence content
   - Removed obsolete subtitle with incorrect claims

4. Updated index.html (lines 349, 350, 353, 356, 363):
   - Added data-i18n and data-i18n-html attributes
   - Heading: data-i18n="validation.performance_evidence.heading"
   - Paragraphs: data-i18n-html for proper HTML rendering
   - Methodology note: data-i18n-html
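The commit doesn't show the loader itself, so the following is an illustrative sketch only: the function names (`resolveKey`, `applyI18n`) and the plain-object stand-ins for DOM elements are assumptions, but it shows the two behaviors the attributes imply — dot-separated keys resolved against the nested homepage.json structure, with `data-i18n-html` rendered as `innerHTML` (so `<strong>`/`<em>` markup survives) and plain `data-i18n` set as `textContent`:

```javascript
// Minimal slice of en/homepage.json for illustration.
const homepage = {
  validation: {
    performance_evidence: {
      heading: "Preliminary Evidence: Safety and Performance May Be Aligned",
      paragraph_1: "Production deployment reveals an unexpected pattern: <strong>structural constraints appear to enhance AI reliability</strong>.",
    },
  },
};

// Walk a dot-separated key like "validation.performance_evidence.heading"
// through the nested locale object; returns undefined for missing keys.
function resolveKey(locale, key) {
  return key
    .split(".")
    .reduce((node, part) => (node == null ? undefined : node[part]), locale);
}

// Apply a translation to an element-like object. Note: in a real DOM,
// data-i18n-html appears in dataset as the camelCased `i18nHtml`.
function applyI18n(el, locale) {
  if (el.dataset.i18nHtml) {
    el.innerHTML = resolveKey(locale, el.dataset.i18nHtml); // markup preserved
  } else if (el.dataset.i18n) {
    el.textContent = resolveKey(locale, el.dataset.i18n); // plain text only
  }
  return el;
}

// Plain objects stand in for document.querySelectorAll('[data-i18n]') results.
const heading = { dataset: { i18n: "validation.performance_evidence.heading" } };
applyI18n(heading, homepage);
console.log(heading.textContent);
// → "Preliminary Evidence: Safety and Performance May Be Aligned"
```

This also shows why the paragraphs needed `data-i18n-html` rather than `data-i18n`: setting them via `textContent` would display the `<strong>` tags literally instead of rendering them.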

TRANSLATIONS:

English:
- "Preliminary Evidence: Safety and Performance May Be Aligned"
- 3-5× productivity improvement messaging
- Mechanism explanation
- Statistical validation ongoing

German:
- "Vorläufige Erkenntnisse: Sicherheit und Leistung könnten aufeinander abgestimmt sein"
- Equivalent messaging with proper German grammar
- Technical terminology accurately translated

French:
- "Preuves Préliminaires : Sécurité et Performance Pourraient Être Alignées"
- Equivalent messaging with proper French grammar
- Technical terminology accurately translated

IMPACT:
✓ Performance evidence now displays correctly in all 3 languages
✓ German and French users no longer see English-only content
✓ i18n system properly handles all validation section content
✓ Static HTML serves as proper fallback before JavaScript loads

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-19 21:59:59 +13:00


{
  "hero": {
    "title": "Tractatus AI Safety Framework",
    "subtitle": "Structural constraints that require AI systems to preserve human agency for values decisions—tested on Claude Code",
    "cta_architecture": "System Architecture",
    "cta_docs": "Read Documentation",
    "cta_faq": "FAQ"
  },
  "value_prop": {
    "heading": "A Starting Point",
    "text": "Aligning advanced AI with human values is among the most consequential challenges we face. As capability growth accelerates under big tech momentum, we confront a categorical imperative: preserve human agency over values decisions, or risk ceding control entirely.<br><br>Instead of hoping AI systems \"behave correctly,\" we propose structural constraints where certain decision types require human judgment. These architectural boundaries can adapt to individual, organizational, and societal norms—creating a foundation for bounded AI operation that may scale more safely with capability growth.<br><br>If this approach can work at scale, Tractatus may represent a turning point—a path where AI enhances human capability without compromising human sovereignty. Explore the framework through the lens that resonates with your work."
  },
  "paths": {
    "intro": "",
    "researcher": {
      "title": "Researcher",
      "subtitle": "Academic & technical depth",
      "tooltip": "For AI safety researchers, academics, and scientists investigating LLM failure modes and governance architectures",
      "description": "Explore the theoretical foundations, architectural constraints, and scholarly context of the Tractatus framework.",
      "features": [
        "Technical specifications & proofs",
        "Academic research review",
        "Failure mode analysis",
        "Mathematical foundations"
      ],
      "cta": "Explore Research"
    },
    "implementer": {
      "title": "Implementer",
      "subtitle": "Code & integration guides",
      "tooltip": "For software engineers, ML engineers, and technical teams building production AI systems",
      "description": "Get hands-on with implementation guides, API documentation, and reference code examples.",
      "features": [
        "Working code examples",
        "API integration patterns",
        "Service architecture diagrams",
        "Deployment best practices"
      ],
      "cta": "View Implementation Guide"
    },
    "leader": {
      "title": "Leader",
      "subtitle": "Strategic AI Safety",
      "tooltip": "For AI executives, research directors, startup founders, and strategic decision makers setting AI safety policy",
      "description": "Navigate the business case, compliance requirements, and competitive advantages of structural AI safety.",
      "features": [
        "Executive briefing & business case",
        "Risk management & compliance (EU AI Act)",
        "Implementation roadmap & ROI",
        "Competitive advantage analysis"
      ],
      "cta": "View Leadership Resources"
    }
  },
  "capabilities": {
    "heading": "Framework Capabilities",
    "items": [
      {
        "title": "Instruction Classification",
        "description": "Quadrant-based classification (STR/OPS/TAC/SYS/STO) with time-persistence metadata tagging"
      },
      {
        "title": "Cross-Reference Validation",
        "description": "Validates AI actions against explicit user instructions to prevent pattern-based overrides"
      },
      {
        "title": "Boundary Enforcement",
        "description": "Implements Tractatus 12.1-12.7 boundaries - values decisions architecturally require humans"
      },
      {
        "title": "Pressure Monitoring",
        "description": "Detects degraded operating conditions (token pressure, errors, complexity) and adjusts verification"
      },
      {
        "title": "Metacognitive Verification",
        "description": "AI self-checks alignment, coherence, safety before execution - structural pause-and-verify"
      },
      {
        "title": "Pluralistic Deliberation",
        "description": "Multi-stakeholder values deliberation without hierarchy - facilitates human decision-making for incommensurable values"
      }
    ]
  },
  "validation": {
    "heading": "Real-World Validation",
    "performance_evidence": {
      "heading": "Preliminary Evidence: Safety and Performance May Be Aligned",
      "paragraph_1": "Production deployment reveals an unexpected pattern: <strong>structural constraints appear to enhance AI reliability rather than constrain it</strong>. Users report completing in one governed session what previously required 3-5 attempts with ungoverned Claude Code—achieving significantly lower error rates and higher-quality outputs under architectural governance.",
      "paragraph_2": "The mechanism appears to be <strong>prevention of degraded operating conditions</strong>: architectural boundaries stop context pressure failures, instruction drift, and pattern-based overrides before they compound into session-ending errors. By maintaining operational integrity throughout long interactions, the framework creates conditions for sustained high-quality output.",
      "paragraph_3": "<strong>If this pattern holds at scale</strong>, it challenges a core assumption blocking AI safety adoption—that governance measures trade performance for safety. Instead, these findings suggest structural constraints may be a path to <em>both</em> safer <em>and</em> more capable AI systems. Statistical validation is ongoing.",
      "methodology_note": "<strong>Methodology note:</strong> Findings based on qualitative user reports from production deployment. Controlled experiments and quantitative metrics collection scheduled for validation phase."
    },
    "case_27027": {
      "badge": "Pattern Bias Incident",
      "type": "Interactive Demo",
      "title": "The 27027 Incident",
      "description": "Real production incident where Claude Code defaulted to port 27017 (training pattern) despite explicit user instruction to use port 27027. CrossReferenceValidator detected the conflict and blocked execution—demonstrating how pattern recognition can override instructions under context pressure.",
      "why_matters": "Why this matters: This failure mode gets worse as models improve—stronger pattern recognition means stronger override tendency. Architectural constraints remain necessary regardless of capability level.",
      "cta": "View Interactive Demo"
    },
    "resources": {
      "text": "Additional case studies and research findings documented in technical papers",
      "cta": "Browse Case Studies →"
    }
  },
  "footer": {
    "about_heading": "Tractatus Framework",
    "about_text": "Architectural constraints for AI safety that preserve human agency through structural, not aspirational, guarantees.",
    "documentation_heading": "Documentation",
    "documentation_links": {
      "framework_docs": "Framework Docs",
      "about": "About",
      "core_values": "Core Values",
      "interactive_demo": "Interactive Demo"
    },
    "support_heading": "Support",
    "support_links": {
      "koha": "Support (Koha)",
      "transparency": "Transparency",
      "media_inquiries": "Media Inquiries",
      "submit_case": "Submit Case Study"
    },
    "legal_heading": "Legal",
    "legal_links": {
      "privacy": "Privacy Policy",
      "contact": "Contact Us",
      "github": "GitHub"
    },
    "te_tiriti_label": "Te Tiriti o Waitangi:",
    "te_tiriti_text": "We acknowledge Te Tiriti o Waitangi and our commitment to partnership, protection, and participation. This project respects Māori data sovereignty (rangatiratanga) and collective guardianship (kaitiakitanga).",
    "copyright": "John G Stroh. Licensed under",
    "license": "Apache 2.0",
    "location": "Made in Aotearoa New Zealand 🇳🇿"
  }
}