tractatus/public/locales/en/architecture.json
TheFlow 7115bd9fd8 feat(i18n): complete architecture.html internationalization with P0/P1/P2 fixes
## P0 - Launch Blockers
- Created comprehensive translation files (EN, DE, FR)
  - /locales/en/architecture.json (31 translatable sections)
  - /locales/de/architecture.json (complete German translations)
  - /locales/fr/architecture.json (complete French translations)

- Added data-i18n attributes throughout HTML
  - Breadcrumb navigation
  - Hero section (badge, title, subtitle, challenge, approach, CTAs)
  - Comparison section (headings, titles)
  - Architecture diagram (titles, descriptions for all 3 layers)
  - Six Governance Services (all service names, descriptions, promises)
  - Interactive section (titles, instructions, tooltips)
  - Data visualizations heading
  - Production section (titles, results, disclaimers)
  - Limitations section (headings, limitations list, quote)
  - CTA section (heading, subtitle, buttons)
  - Total: 31 data-i18n attributes added
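A data-i18n attribute typically carries a dotted key (e.g. `hero.title`) that is resolved against the JSON file below at load time. A minimal sketch of that resolution in plain JavaScript — the `resolveKey` helper and the inlined sample object are illustrative, not the project's actual loader:

```javascript
// Resolve a dotted data-i18n key (e.g. "hero.title") against a translations
// object. The sample object is a small excerpt standing in for architecture.json.
const translations = {
  breadcrumb: { home: "Home", current: "Architecture" },
  hero: { title: "Exploring Structural AI Safety" }
};

function resolveKey(obj, key) {
  // Walk one path segment at a time; return undefined if any segment is missing.
  return key.split(".").reduce(
    (node, part) => (node == null ? undefined : node[part]),
    obj
  );
}

console.log(resolveKey(translations, "hero.title"));
```

A real loader would then iterate `document.querySelectorAll("[data-i18n]")` and assign each resolved string to the element, but that browser-side step is omitted here.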

- Fixed card overflow on Six Governance Services cards
  - Added min-w-0 max-w-full overflow-hidden to all 6 service cards
  - Added break-words overflow-wrap-anywhere to card titles
  - Added break-words to service descriptions
  - Prevents cards from overflowing their container boundaries

## P1 - Should Fix Before Launch
- Added touch event handling to interactive diagram
  - Added touchstart listener with passive:false
  - Prevents default behavior for better mobile UX
  - Complements existing click handlers
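The touch handling described above might look roughly like the following sketch — the function names are hypothetical (the real interactive-diagram.js is not shown here), and the handler is factored out so the logic is testable without a DOM:

```javascript
// Sketch of a touchstart handler for the interactive diagram (hypothetical
// names). Factoring the handler out of addEventListener keeps it testable.
function makeTouchHandler(activateNode) {
  return function onTouchStart(event) {
    // Registering with { passive: false } permits preventDefault(), which
    // suppresses the simulated click / scroll that would follow the touch.
    event.preventDefault();
    activateNode(event.target);
  };
}

// Browser-side registration (assumed node and panel callback):
// node.addEventListener("touchstart", makeTouchHandler(showServicePanel),
//                       { passive: false });
```

Note that `{ passive: false }` is required: in a passive listener, `preventDefault()` is ignored and the browser logs a warning.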

## P2 - Nice to Have
- Improved mobile diagram sizing
  - Increased from w-48 sm:w-56 lg:w-64 to w-64 sm:w-72 lg:w-80
  - 25-33% larger, depending on breakpoint, for better mobile visibility

- Added soft hyphens to long service names
  - BoundaryEnforcer → Boundary­Enforcer
  - InstructionPersistenceClassifier → Instruction­Persistence­Classifier
  - CrossReferenceValidator → Cross­Reference­Validator
  - ContextPressureMonitor → Context­Pressure­Monitor
  - MetacognitiveVerifier → Metacognitive­Verifier
  - PluralisticDeliberationOrchestrator → Pluralistic­Deliberation­Orchestrator
  - Enables intelligent line breaking for long CamelCase service names
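The hand-edited substitutions above could also be generated; a small sketch (the function name is illustrative) that inserts a soft hyphen, U+00AD, at each lowercase-to-uppercase CamelCase boundary:

```javascript
// Insert a soft hyphen (U+00AD) at each lowercase-to-uppercase boundary,
// giving browsers a legal break point inside long CamelCase service names.
function softHyphenate(name) {
  return name.replace(/([a-z])([A-Z])/g, "$1\u00AD$2");
}

console.log(softHyphenate("InstructionPersistenceClassifier"));
// The inserted character is invisible unless the browser breaks the line there.
```

Soft hyphens render as nothing until the line actually wraps, at which point the browser shows a hyphen at the break, so the names stay visually unchanged on wide screens.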

## Changes Summary
- 3 new translation files created (1,866 lines total)
- architecture.html: 31 data-i18n attributes, 6 overflow-protected cards, soft hyphens in 6 service names
- interactive-diagram.js: Added touch event support for mobile

## Impact
- architecture.html now fully internationalized (EN, DE, FR)
- Cards respect boundaries on all screen sizes
- Interactive diagram works on touch devices
- Long service names wrap intelligently
- Matches quality level of docs.html

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-26 11:33:04 +13:00


{
  "breadcrumb": {
    "home": "Home",
    "current": "Architecture"
  },
  "hero": {
    "badge": "🔬 EARLY-STAGE RESEARCH • PROMISING APPROACH",
    "title": "Exploring Structural AI Safety",
    "subtitle": "Tractatus explores <strong>external governance</strong>—architectural boundaries operating outside the AI runtime that may be more resistant to adversarial manipulation than behavioral training alone.",
    "challenge_label": "The Challenge:",
    "challenge_text": "Behavioral training (Constitutional AI, RLHF) shows promise but can degrade under adversarial prompting, context pressure, or distribution shift.",
    "approach_label": "Our Approach:",
    "approach_text": "External architectural enforcement that operates independently of the AI's internal reasoning—making it structurally more difficult (though not impossible) to bypass through prompting.",
    "cta_architecture": "View Architecture",
    "cta_docs": "Read Documentation"
  },
  "comparison": {
    "heading": "Why External Enforcement May Help",
    "behavioral_title": "Behavioral Training (Constitutional AI)",
    "behavioral_items": [
      "Lives <strong>inside</strong> the AI model—accessible to adversarial prompts",
      "Degrades under context pressure and long conversations",
      "Can be manipulated by jailbreak techniques (DAN, roleplaying, hypotheticals)",
      "Depends on AI's willingness to follow guidance",
      "No verifiable audit trail independent of AI"
    ],
    "structural_title": "Structural Enforcement (Tractatus)",
    "structural_items": [
      "Lives <strong>outside</strong> the AI model—not directly accessible to prompts",
      "External services aim for consistent enforcement regardless of context",
      "<em>More difficult</em> to bypass—AI actions pass through governance layer first",
      "Architecturally resistant to manipulation via AI's internal state",
      "Immutable audit trail stored independently of AI runtime"
    ],
    "hypothesis_title": "The Core Hypothesis",
    "hypothesis_text": "<strong>Jailbreaks often work by manipulating the AI's internal reasoning.</strong> Tractatus boundaries operate <em>external</em> to that reasoning—the AI doesn't directly evaluate governance rules. While not foolproof, this architectural separation makes manipulation significantly harder."
  },
  "architecture_diagram": {
    "title": "Runtime-Agnostic Architecture",
    "subtitle": "Tractatus works with <strong>any agentic AI system</strong>—Claude Code, LangChain, AutoGPT, CrewAI, or custom agents. The governance layer sits between your agent and its actions.",
    "layer1_title": "1. Agent Runtime Layer",
    "layer1_desc": "Your AI agent (any platform). Handles planning, reasoning, tool use. Tractatus is agnostic to implementation.",
    "layer2_title": "2. Governance Layer",
    "layer2_desc": "Six external services enforce boundaries, validate actions, monitor pressure. <strong>Architecturally more difficult for AI to bypass.</strong>",
    "layer3_title": "3. Persistent Storage",
    "layer3_desc": "Immutable audit logs, governance rules, instruction history. Independent of AI runtime—can't be altered by prompts."
  },
  "services": {
    "heading": "Six Governance Services",
    "boundary": {
      "name": "BoundaryEnforcer",
      "description": "Blocks AI from making values decisions (privacy, ethics, strategic direction). Requires human approval.",
      "promise": "<strong>Early Promise:</strong> Values boundaries enforced externally—harder to manipulate through prompting."
    },
    "instruction": {
      "name": "InstructionPersistenceClassifier",
      "description": "Stores instructions externally with persistence levels (HIGH/MEDIUM/LOW). Aims to reduce directive fade.",
      "promise": "<strong>Early Promise:</strong> Instructions stored outside AI—more resistant to context manipulation."
    },
    "validator": {
      "name": "CrossReferenceValidator",
      "description": "Validates AI actions against instruction history. Aims to prevent pattern bias overriding explicit directives.",
      "promise": "<strong>Early Promise:</strong> Independent verification—AI claims checked against external source."
    },
    "pressure": {
      "name": "ContextPressureMonitor",
      "description": "Monitors AI performance degradation. Escalates when context pressure threatens quality.",
      "promise": "<strong>Early Promise:</strong> Objective metrics may detect manipulation attempts early."
    },
    "metacognitive": {
      "name": "MetacognitiveVerifier",
      "description": "Requires AI to pause and verify complex operations before execution. Structural safety check.",
      "promise": "<strong>Early Promise:</strong> Architectural gates aim to enforce verification steps."
    },
    "deliberation": {
      "name": "PluralisticDeliberationOrchestrator",
      "description": "Facilitates multi-stakeholder deliberation for values conflicts. AI provides facilitation, not authority.",
      "promise": "<strong>Early Promise:</strong> Human judgment required—architecturally enforced escalation for values."
    }
  },
  "interactive": {
    "title": "Explore the Architecture Interactively",
    "subtitle": "Click any service node or the central core to see detailed information about how governance works.",
    "tip_label": "Tip:",
    "tip_text": "Click the central <span class=\"font-semibold text-cyan-600\">\"T\"</span> to see how all services work together",
    "panel_default_title": "Explore the Governance Services",
    "panel_default_text": "Click any service node in the diagram (colored circles) or the central \"T\" to learn more about how Tractatus enforces AI safety."
  },
  "data_viz": {
    "heading": "Framework in Action",
    "subtitle": "Interactive visualizations demonstrating how Tractatus governance services monitor and coordinate AI operations."
  },
  "production": {
    "heading": "Production Reference Implementation",
    "subtitle": "Tractatus is deployed in production using <strong>Claude Code</strong> as the agent runtime. This demonstrates the framework's real-world viability.",
    "implementation_title": "Claude Code + Tractatus",
    "implementation_intro": "Our production deployment uses Claude Code as the agent runtime with Tractatus governance middleware. This combination provides:",
    "implementation_results_intro": "Results from 6-month production deployment:",
    "result1": "<strong>95% instruction persistence</strong> across session boundaries",
    "result2": "<strong>Zero values boundary violations</strong> in 127 test scenarios",
    "result3": "<strong>100% detection rate</strong> for pattern bias failures",
    "result4": "<strong>&lt;10ms performance overhead</strong> for governance layer",
    "disclaimer": "*Single-agent deployment. Independent validation and multi-organization replication needed.",
    "testing_title": "Real-World Testing",
    "testing_text1": "<strong>This isn't just theory.</strong> Tractatus is running in production, handling real workloads and detecting real failure patterns.",
    "testing_text2": "Early results are <strong>promising</strong>—with documented incident prevention—but this needs independent validation and much wider testing.",
    "diagram_link": "View Claude Code Implementation Diagram →"
  },
  "limitations": {
    "heading": "Limitations and Reality Check",
    "intro": "<strong>This is early-stage work.</strong> While we've seen promising results in our production deployment, Tractatus has not been subjected to rigorous adversarial testing or red-team evaluation.",
    "quote": "We have real promise but this is still in early development stage. This sounds like we have the complete issue resolved, we do not. We have a long way to go and it will require a mammoth effort by developers in every part of the industry to tame AI effectively. This is just a start.",
    "quote_attribution": "— Project Lead, Tractatus Framework",
    "known_heading": "Known Limitations:",
    "limitation1": "<strong>No dedicated red-team testing:</strong> We don't know how well these boundaries hold up against determined adversarial attacks.",
    "limitation2": "<strong>Small-scale validation:</strong> Six months of production use on a single project. Needs multi-organization replication.",
    "limitation3": "<strong>Integration challenges:</strong> Retrofitting governance into existing systems requires significant engineering effort.",
    "limitation4": "<strong>Performance at scale unknown:</strong> Testing limited to single-agent deployments. Multi-agent coordination untested.",
    "limitation5": "<strong>Evolving threat landscape:</strong> As AI capabilities grow, new failure modes will emerge that current architecture may not address.",
    "needs_heading": "What We Need:",
    "need1": "Independent researchers to validate (or refute) our findings",
    "need2": "Red-team evaluation to find weaknesses and bypass techniques",
    "need3": "Multi-organization pilot deployments across different domains",
    "need4": "Industry-wide collaboration on governance standards and patterns",
    "need5": "Quantitative studies measuring incident reduction and cost-benefit analysis",
    "conclusion": "This framework is a starting point for exploration, not a finished solution. Taming AI will require sustained effort from the entire industry—researchers, practitioners, regulators, and ethicists working together."
  },
  "cta": {
    "heading": "Explore a Promising Approach to AI Safety",
    "subtitle": "Tractatus demonstrates how structural enforcement may complement behavioral training. We invite researchers and practitioners to evaluate, critique, and build upon this work.",
    "btn_docs": "Read Documentation",
    "btn_research": "View Research",
    "btn_implementation": "Implementation Guide"
  }
}