fix(i18n): add German and French translations for performance evidence section

SUMMARY:
Added the missing translations for the performance evidence section, which
was previously available only in English. All three languages now support
the "Preliminary Evidence: Safety and Performance May Be Aligned" content.

CHANGES MADE:

1. Added to en/homepage.json (lines 86-92):
   - validation.performance_evidence.heading
   - validation.performance_evidence.paragraph_1
   - validation.performance_evidence.paragraph_2
   - validation.performance_evidence.paragraph_3
   - validation.performance_evidence.methodology_note

2. Added to de/homepage.json (lines 86-92):
   - German translations of all performance evidence content
   - Removed obsolete subtitle with incorrect claims

3. Added to fr/homepage.json (lines 86-92):
   - French translations of all performance evidence content
   - Removed obsolete subtitle with incorrect claims

4. Updated index.html (lines 349, 350, 353, 356, 363):
   - Added data-i18n and data-i18n-html attributes
   - Heading: data-i18n="validation.performance_evidence.heading"
   - Paragraphs: data-i18n-html for proper HTML rendering
   - Methodology note: data-i18n-html
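
The runtime contract these attributes imply can be sketched as follows (an illustrative sketch only; the project's actual i18n loader is not part of this diff, and `lookup` / `applyTranslations` are hypothetical names): `data-i18n` keys are injected as plain text, while `data-i18n-html` keys are injected as HTML so the `<strong>`/`<em>` markup in the locale strings renders.

```javascript
// Resolve a dotted key such as "validation.performance_evidence.heading"
// against a parsed locale JSON object; returns undefined if any segment is missing.
function lookup(dict, key) {
  return key.split(".").reduce((obj, part) => (obj == null ? undefined : obj[part]), dict);
}

// Hypothetical applier: data-i18n keys become textContent (plain text),
// data-i18n-html keys become innerHTML so <strong>/<em> in locale strings render.
function applyTranslations(dict) {
  for (const el of document.querySelectorAll("[data-i18n]")) {
    const text = lookup(dict, el.dataset.i18n);
    if (text !== undefined) el.textContent = text;
  }
  for (const el of document.querySelectorAll("[data-i18n-html]")) {
    const html = lookup(dict, el.dataset.i18nHtml); // data-i18n-html maps to dataset.i18nHtml
    if (html !== undefined) el.innerHTML = html;
  }
}
```

Because the static English strings stay in index.html, nothing changes visually until a locale file is loaded and applied, which is what keeps the pre-JavaScript fallback intact.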

TRANSLATIONS:

English:
- "Preliminary Evidence: Safety and Performance May Be Aligned"
- 3-5× productivity improvement messaging
- Mechanism explanation
- Statistical validation ongoing

German:
- "Vorläufige Erkenntnisse: Sicherheit und Leistung könnten aufeinander abgestimmt sein"
- Equivalent messaging with proper German grammar
- Technical terminology accurately translated

French:
- "Preuves Préliminaires : Sécurité et Performance Pourraient Être Alignées"
- Equivalent messaging with proper French grammar
- Technical terminology accurately translated

IMPACT:
✓ Performance evidence now displays correctly in all 3 languages
✓ German and French users no longer see English-only content
✓ i18n system properly handles all validation section content
✓ Static HTML serves as proper fallback before JavaScript loads

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
TheFlow 2025-10-19 21:59:59 +13:00
parent 2877a7896c
commit 5a4be62a44
5 changed files with 56 additions and 9 deletions


@@ -4675,6 +4675,34 @@
 "file": "/home/theflow/projects/tractatus/public/js/demos/27027-demo.js",
 "result": "passed",
 "reason": null
 },
+{
+"hook": "validate-file-edit",
+"timestamp": "2025-10-19T08:58:01.773Z",
+"file": "/home/theflow/projects/tractatus/public/locales/en/homepage.json",
+"result": "passed",
+"reason": null
+},
+{
+"hook": "validate-file-edit",
+"timestamp": "2025-10-19T08:58:45.186Z",
+"file": "/home/theflow/projects/tractatus/public/locales/de/homepage.json",
+"result": "passed",
+"reason": null
+},
+{
+"hook": "validate-file-edit",
+"timestamp": "2025-10-19T08:59:07.983Z",
+"file": "/home/theflow/projects/tractatus/public/locales/fr/homepage.json",
+"result": "passed",
+"reason": null
+},
+{
+"hook": "validate-file-edit",
+"timestamp": "2025-10-19T08:59:27.599Z",
+"file": "/home/theflow/projects/tractatus/public/index.html",
+"result": "passed",
+"reason": null
+}
 ],
 "blocks": [
@@ -4938,9 +4966,9 @@
 }
 ],
 "session_stats": {
-"total_edit_hooks": 480,
+"total_edit_hooks": 484,
 "total_edit_blocks": 36,
-"last_updated": "2025-10-19T08:54:56.274Z",
+"last_updated": "2025-10-19T08:59:27.599Z",
 "total_write_hooks": 188,
 "total_write_blocks": 7
 }

public/index.html

@@ -346,21 +346,21 @@ Multi-stakeholder values deliberation without hierarchy - facilitates human deci
 </svg>
 </div>
 <div class="flex-1">
-<h3 class="text-2xl font-bold text-gray-900 mb-3">Preliminary Evidence: Safety and Performance May Be Aligned</h3>
-<p class="text-gray-700 mb-4 leading-relaxed">
+<h3 class="text-2xl font-bold text-gray-900 mb-3" data-i18n="validation.performance_evidence.heading">Preliminary Evidence: Safety and Performance May Be Aligned</h3>
+<p class="text-gray-700 mb-4 leading-relaxed" data-i18n-html="validation.performance_evidence.paragraph_1">
 Production deployment reveals an unexpected pattern: <strong>structural constraints appear to enhance AI reliability rather than constrain it</strong>. Users report completing in one governed session what previously required 3-5 attempts with ungoverned Claude Code—achieving significantly lower error rates and higher-quality outputs under architectural governance.
 </p>
-<p class="text-gray-700 mb-4 leading-relaxed">
+<p class="text-gray-700 mb-4 leading-relaxed" data-i18n-html="validation.performance_evidence.paragraph_2">
 The mechanism appears to be <strong>prevention of degraded operating conditions</strong>: architectural boundaries stop context pressure failures, instruction drift, and pattern-based overrides before they compound into session-ending errors. By maintaining operational integrity throughout long interactions, the framework creates conditions for sustained high-quality output.
 </p>
-<p class="text-gray-700 leading-relaxed">
+<p class="text-gray-700 leading-relaxed" data-i18n-html="validation.performance_evidence.paragraph_3">
 <strong>If this pattern holds at scale</strong>, it challenges a core assumption blocking AI safety adoption—that governance measures trade performance for safety. Instead, these findings suggest structural constraints may be a path to <em>both</em> safer <em>and</em> more capable AI systems. Statistical validation is ongoing.
 </p>
 </div>
 </div>
 <div class="bg-white bg-opacity-60 rounded-lg p-4 border border-green-300">
-<p class="text-sm text-gray-800">
+<p class="text-sm text-gray-800" data-i18n-html="validation.performance_evidence.methodology_note">
 <strong>Methodology note:</strong> Findings based on qualitative user reports from production deployment. Controlled experiments and quantitative metrics collection scheduled for validation phase.
 </p>
 </div>

public/locales/de/homepage.json

@@ -83,7 +83,13 @@
 },
 "validation": {
 "heading": "Reale Validierung",
-"subtitle": "Framework validiert in 6-monatiger Bereitstellung über ~500 Sitzungen mit Claude Code",
+"performance_evidence": {
+"heading": "Vorläufige Erkenntnisse: Sicherheit und Leistung könnten aufeinander abgestimmt sein",
+"paragraph_1": "Die Produktionsbereitstellung zeigt ein unerwartetes Muster: <strong>Strukturelle Beschränkungen scheinen die KI-Zuverlässigkeit zu verbessern, anstatt sie einzuschränken</strong>. Nutzer berichten, dass sie in einer verwalteten Sitzung das erreichen, was zuvor 3-5 Versuche mit unverwaltetem Claude Code erforderte—bei deutlich niedrigeren Fehlerquoten und qualitativ hochwertigeren Ergebnissen unter architektonischer Governance.",
+"paragraph_2": "Der Mechanismus scheint die <strong>Verhinderung verschlechterter Betriebsbedingungen</strong> zu sein: Architektonische Grenzen stoppen Kontextdruckausfälle, Instruktionsdrift und musterbasierte Überschreibungen, bevor sie sich zu sitzungsbeendenden Fehlern aufschaukeln. Durch die Aufrechterhaltung der operativen Integrität während langer Interaktionen schafft das Framework Bedingungen für nachhaltig hochwertige Ergebnisse.",
+"paragraph_3": "<strong>Wenn sich dieses Muster im großen Maßstab bestätigt</strong>, stellt es eine zentrale Annahme in Frage, die die Einführung von KI-Sicherheit blockiert—dass Governance-Maßnahmen Leistung gegen Sicherheit eintauschen. Stattdessen deuten diese Erkenntnisse darauf hin, dass strukturelle Beschränkungen ein Weg zu <em>sowohl</em> sichereren <em>als auch</em> leistungsfähigeren KI-Systemen sein könnten. Die statistische Validierung läuft.",
+"methodology_note": "<strong>Methodenhinweis:</strong> Erkenntnisse basieren auf qualitativen Nutzerberichten aus der Produktionsbereitstellung. Kontrollierte Experimente und quantitative Metrikenerfassung sind für die Validierungsphase geplant."
+},
 "case_27027": {
 "badge": "Muster-Bias-Vorfall",
 "type": "Interaktive Demo",

public/locales/en/homepage.json

@@ -83,6 +83,13 @@
 },
 "validation": {
 "heading": "Real-World Validation",
+"performance_evidence": {
+"heading": "Preliminary Evidence: Safety and Performance May Be Aligned",
+"paragraph_1": "Production deployment reveals an unexpected pattern: <strong>structural constraints appear to enhance AI reliability rather than constrain it</strong>. Users report completing in one governed session what previously required 3-5 attempts with ungoverned Claude Code—achieving significantly lower error rates and higher-quality outputs under architectural governance.",
+"paragraph_2": "The mechanism appears to be <strong>prevention of degraded operating conditions</strong>: architectural boundaries stop context pressure failures, instruction drift, and pattern-based overrides before they compound into session-ending errors. By maintaining operational integrity throughout long interactions, the framework creates conditions for sustained high-quality output.",
+"paragraph_3": "<strong>If this pattern holds at scale</strong>, it challenges a core assumption blocking AI safety adoption—that governance measures trade performance for safety. Instead, these findings suggest structural constraints may be a path to <em>both</em> safer <em>and</em> more capable AI systems. Statistical validation is ongoing.",
+"methodology_note": "<strong>Methodology note:</strong> Findings based on qualitative user reports from production deployment. Controlled experiments and quantitative metrics collection scheduled for validation phase."
+},
 "case_27027": {
 "badge": "Pattern Bias Incident",
 "type": "Interactive Demo",

public/locales/fr/homepage.json

@@ -83,7 +83,13 @@
 },
 "validation": {
 "heading": "Validation en Conditions Réelles",
-"subtitle": "Framework validé lors d'un déploiement de 6 mois sur ~500 sessions avec Claude Code",
+"performance_evidence": {
+"heading": "Preuves Préliminaires : Sécurité et Performance Pourraient Être Alignées",
+"paragraph_1": "Le déploiement en production révèle un schéma inattendu : <strong>les contraintes structurelles semblent améliorer la fiabilité de l'IA plutôt que de la limiter</strong>. Les utilisateurs rapportent avoir accompli en une session gouvernée ce qui nécessitait auparavant 3 à 5 tentatives avec Claude Code non gouverné—obtenant des taux d'erreur nettement inférieurs et des résultats de meilleure qualité sous gouvernance architecturale.",
+"paragraph_2": "Le mécanisme semble être la <strong>prévention de conditions de fonctionnement dégradées</strong> : les limites architecturales arrêtent les échecs de pression contextuelle, la dérive d'instruction et les remplacements basés sur des motifs avant qu'ils ne se transforment en erreurs mettant fin à la session. En maintenant l'intégrité opérationnelle tout au long des longues interactions, le framework crée les conditions pour une production soutenue de haute qualité.",
+"paragraph_3": "<strong>Si ce schéma se confirme à grande échelle</strong>, il remet en question une hypothèse fondamentale bloquant l'adoption de la sécurité de l'IA—que les mesures de gouvernance échangent la performance contre la sécurité. Au lieu de cela, ces résultats suggèrent que les contraintes structurelles pourraient être un chemin vers des systèmes d'IA <em>à la fois</em> plus sûrs <em>et</em> plus capables. La validation statistique est en cours.",
+"methodology_note": "<strong>Note méthodologique :</strong> Résultats basés sur des rapports qualitatifs d'utilisateurs provenant du déploiement en production. Des expériences contrôlées et la collecte de métriques quantitatives sont prévues pour la phase de validation."
+},
 "case_27027": {
 "badge": "Incident de Biais de Motif",
 "type": "Démo Interactive",