Translation Infrastructure:
- Created 15 new translation files (en/de/fr) for 5 pages
- Enhanced i18n-simple.js to auto-detect page names
- Added page-detection logic mapping URLs to translation files
- Supports the researcher, leader, implementer, about, and faq pages

Translation Files Created:

English (en/):
- researcher.json (research foundations, empirical observations)
- leader.json (governance gap, architectural approach, EU AI Act)
- implementer.json (integration approaches, quick start, deployment)
- about.json (mission, values, origin story, license)
- faq.json (search modal, browse by audience, tips)

German (de/):
- researcher.json (Forschungsgrundlagen, Empirische Beobachtungen)
- leader.json (Governance-Lücke, Architektonischer Ansatz)
- implementer.json (Integrationsansätze, Schnellstart)
- about.json (Mission, Werte, Ursprungsgeschichte)
- faq.json (Häufig gestellte Fragen)

French (fr/):
- researcher.json (Fondements de Recherche, Observations Empiriques)
- leader.json (Lacune de Gouvernance, Approche Architecturale)
- implementer.json (Approches d'Intégration, Démarrage Rapide)
- about.json (Mission, Valeurs, Histoire d'Origine)
- faq.json (Questions Fréquemment Posées)

Technical Changes:
- i18n-simple.js: Added detectPageName() method
  - Maps URL paths to translation file names
  - Loads page-specific translations automatically
- researcher.html: Added data-i18n attributes to the header section

Language Selector:
- Already deployed on all 6 pages (mobile icon-based design)
- Now backed by the full translation infrastructure
- Switching languages loads the correct page-specific translations

Implementation Status:
✅ Translation files: Complete (15 files, ~350 translation keys)
✅ i18n system: Enhanced with page detection
✅ Proof of concept: Working on researcher.html
⏳ Full implementation: data-i18n attributes needed on remaining pages

Next Steps for Full i18n:
- Add data-i18n attributes to leader.html (~60 elements)
- Add data-i18n attributes to implementer.html (~70 elements)
- Add data-i18n attributes to about.html (~40 elements)
- Add data-i18n attributes to faq.html (~30 elements)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
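The page-detection step summarized above might look like the following sketch. The page list, the `/i18n/<lang>/<page>.json` fetch path, and the `loadPageTranslations` helper are assumptions inferred from this summary, not the actual i18n-simple.js source:

```javascript
// Hypothetical sketch of the detectPageName() logic described above.
// PAGE_NAMES and the fetch path are assumptions, not the real i18n-simple.js code.
const PAGE_NAMES = ['researcher', 'leader', 'implementer', 'about', 'faq'];

function detectPageName(pathname) {
  // "/leader.html" -> "leader"; root or unknown paths fall back to "index"
  const file = pathname.split('/').pop() || 'index.html';
  const base = file.replace(/\.html?$/, '');
  return PAGE_NAMES.includes(base) ? base : 'index';
}

async function loadPageTranslations(lang) {
  // e.g. fetches /i18n/de/leader.json when visiting /leader.html in German
  const page = detectPageName(window.location.pathname);
  const resp = await fetch(`/i18n/${lang}/${page}.json`);
  return resp.json();
}
```

Mapping from the URL rather than hard-coding a page name per file is what lets one script serve all six pages.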
75 lines
6.3 KiB
JSON
{
  "page": {
    "title": "About | Tractatus AI Safety Framework",
    "description": "Learn about the Tractatus Framework: our mission, values, team, and commitment to preserving human agency through structural AI safety."
  },
  "header": {
    "title": "About Tractatus",
    "subtitle": "A framework for AI safety through architectural constraints, preserving human agency where it matters most."
  },
  "mission": {
    "heading": "Our Mission",
    "intro": "The Tractatus Framework exists to address a fundamental problem in AI safety: current approaches rely on training, fine-tuning, and corporate governance—all of which can fail, drift, or be overridden. We propose safety through architecture.",
    "wittgenstein": "Inspired by Ludwig Wittgenstein's Tractatus Logico-Philosophicus, our framework recognizes that some domains—values, ethics, cultural context, human agency—cannot be systematized. What cannot be systematized must not be automated. AI systems should have structural constraints that prevent them from crossing these boundaries.",
    "quote": "Whereof one cannot speak, thereof one must be silent.",
    "quote_source": "— Ludwig Wittgenstein, Tractatus (§7)",
    "applied": "Applied to AI: \"What cannot be systematized must not be automated.\""
  },
  "core_values": {
    "heading": "Core Values",
    "sovereignty_title": "Sovereignty",
    "sovereignty_desc": "Individuals and communities must maintain control over decisions affecting their data, privacy, and values. AI systems must preserve human agency, not erode it.",
    "transparency_title": "Transparency",
    "transparency_desc": "All AI decisions must be explainable, auditable, and reversible. No black boxes. Users deserve to understand how and why systems make choices, and to have the power to override them.",
    "harmlessness_title": "Harmlessness",
    "harmlessness_desc": "AI systems must not cause harm through action or inaction. This includes preventing drift, detecting degradation, and enforcing boundaries against values erosion.",
    "community_title": "Community",
    "community_desc": "AI safety is a collective endeavor. We are committed to open collaboration, knowledge sharing, and empowering communities to shape the AI systems that affect their lives.",
    "read_values_btn": "Read Our Complete Values Statement →"
  },
  "how_it_works": {
    "heading": "How It Works",
    "intro": "The Tractatus Framework consists of five integrated components that work together to enforce structural safety:",
    "classifier_title": "InstructionPersistenceClassifier",
    "classifier_desc": "Classifies instructions by quadrant (Strategic, Operational, Tactical, System, Stochastic) and determines persistence level (HIGH/MEDIUM/LOW/VARIABLE).",
    "validator_title": "CrossReferenceValidator",
    "validator_desc": "Validates AI actions against stored instructions to prevent pattern recognition bias (like the 27027 incident, where the AI's training patterns immediately overrode the user's explicit \"port 27027\" instruction).",
    "boundary_title": "BoundaryEnforcer",
    "boundary_desc": "Ensures the AI never makes values decisions without human approval. Privacy trade-offs, user agency, cultural context—these require human judgment.",
    "pressure_title": "ContextPressureMonitor",
    "pressure_desc": "Detects when session conditions increase error probability (token pressure, message length, task complexity) and adjusts behavior or suggests a handoff.",
    "metacognitive_title": "MetacognitiveVerifier",
    "metacognitive_desc": "The AI self-checks complex reasoning before proposing actions, evaluating alignment, coherence, completeness, safety, and alternatives.",
    "read_technical_btn": "Read Technical Documentation & Implementation Guide →"
  },
  "origin_story": {
    "heading": "Origin Story",
    "paragraph_1": "The Tractatus Framework emerged from real-world AI failures experienced during extended Claude Code sessions. The \"27027 incident\"—where the AI's training patterns immediately overrode an explicit instruction (the user said \"port 27027\"; the AI used \"port 27017\")—revealed that traditional safety approaches were insufficient. This wasn't forgetting; it was pattern recognition bias autocorrecting the user.",
    "paragraph_2": "After documenting multiple failure modes (pattern recognition bias, values drift, silent degradation), we recognized a pattern: AI systems lacked structural constraints. They could theoretically \"learn\" safety, but in practice their training patterns overrode explicit instructions, and the problem worsens as capabilities increase.",
    "paragraph_3": "The solution wasn't better training—it was architecture. Drawing inspiration from Wittgenstein's insight that some things lie beyond the limits of language (and thus of systematization), we built a framework that enforces boundaries through structure, not aspiration."
  },
  "license": {
    "heading": "License & Contribution",
    "intro": "The Tractatus Framework is open source under the Apache License 2.0. We encourage:",
    "encouragement_1": "Academic research and validation studies",
    "encouragement_2": "Implementation in production AI systems",
    "encouragement_3": "Submission of failure case studies",
    "encouragement_4": "Theoretical extensions and improvements",
    "encouragement_5": "Community collaboration and knowledge sharing",
    "rationale": "The framework is intentionally permissive because AI safety benefits from transparency and collective improvement, not proprietary control.",
    "why_apache_title": "Why Apache 2.0?",
    "why_apache_intro": "We chose Apache 2.0 over MIT because it provides:",
    "patent_protection": "Patent Protection: An explicit patent grant protects users from patent litigation by contributors",
    "contributor_clarity": "Contributor Clarity: Clear terms for how contributions are licensed",
    "permissive_use": "Permissive Use: Like MIT, allows commercial use and inclusion in proprietary products",
    "community_standard": "Community Standard: Widely used in AI/ML projects (TensorFlow, PyTorch, Apache Spark)",
    "view_license_link": "View full Apache 2.0 License →"
  },
  "cta": {
    "title": "Join the Movement",
    "description": "Help build AI systems that preserve human agency through architectural guarantees.",
    "for_researchers_btn": "For Researchers",
    "for_implementers_btn": "For Implementers",
    "for_leaders_btn": "For Leaders"
  }
}
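Once a file like the about.json above is loaded, its dotted keys (e.g. `mission.heading`) can be applied to elements carrying `data-i18n` attributes. The `data-i18n` attribute name comes from the commit message; the helper functions below are an illustrative sketch, not the actual i18n-simple.js implementation:

```javascript
// Illustrative sketch: resolving a dotted data-i18n key (e.g. "mission.heading")
// against a loaded translation object such as the about.json shown above.
function resolveKey(translations, dottedKey) {
  return dottedKey.split('.').reduce(
    (node, part) => (node == null ? undefined : node[part]),
    translations
  );
}

function applyTranslations(root, translations) {
  // Replaces the text of every element carrying a data-i18n attribute,
  // e.g. <h2 data-i18n="mission.heading"> receives "Our Mission".
  for (const el of root.querySelectorAll('[data-i18n]')) {
    const text = resolveKey(translations, el.dataset.i18n);
    if (typeof text === 'string') el.textContent = text;
  }
}
```

Skipping keys that don't resolve to a string leaves the page's default (English) text in place, which degrades gracefully while the remaining pages are still being annotated.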