feat(i18n): complete leader.html accordion translations for DE/FR

Added translations for 7 remaining accordion sections in leader.html:
- Demo: Audit Logging (8 keys)
- Demo: Incident Learning (8 keys)
- Demo: Pluralistic Deliberation (15 keys)
- Validated vs Not Validated (6 keys)
- EU AI Act Considerations (8 keys)
- Research Foundations (7 keys)
- Scope & Limitations (12 keys)

All JSON code blocks and technical identifiers remain in English.
Only human-readable descriptive content is translated.

Total: ~64 new translation keys added to EN/DE/FR leader.json files.
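The added keys are consumed by resolving each `data-i18n` attribute as a dotted path into the locale JSON. A minimal sketch of that lookup (the helper name and sample object are illustrative, not the site's actual code):

```javascript
// Hypothetical sketch: resolve a data-i18n key such as
// "sections.governance_capabilities.sample_heading" against a
// locale object parsed from leader.json.
const leaderDe = {
  sections: {
    governance_capabilities: {
      sample_heading: "Beispiel für die Struktur eines Audit-Protokolls",
    },
  },
};

function resolveKey(locale, dottedKey) {
  // Walk the nested object one path segment at a time; any
  // missing segment yields undefined instead of throwing.
  return dottedKey
    .split(".")
    .reduce((node, part) => (node == null ? undefined : node[part]), locale);
}

resolveKey(leaderDe, "sections.governance_capabilities.sample_heading");
// → "Beispiel für die Struktur eines Audit-Protokolls"
resolveKey(leaderDe, "sections.governance_capabilities.missing"); // → undefined
```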

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
TheFlow 2025-10-26 18:26:47 +13:00
parent d1bfd3f811
commit ca0ea92790
4 changed files with 267 additions and 72 deletions


@@ -206,7 +206,7 @@
<div id="demo-audit-content" class="accordion-content" role="region" aria-labelledby="demo-audit-button">
<div class="p-5 border-t border-gray-200">
<div class="bg-white rounded border border-gray-200 p-4 mb-4">
<h4 class="text-sm font-semibold text-gray-900 mb-3">Sample Audit Log Structure</h4>
<h4 class="text-sm font-semibold text-gray-900 mb-3" data-i18n="sections.governance_capabilities.sample_heading">Sample Audit Log Structure</h4>
<pre class="text-xs font-mono bg-gray-50 p-3 rounded overflow-x-auto text-gray-700"><code>{
"timestamp": "2025-10-13T14:23:17.482Z",
"session_id": "sess_2025-10-13-001",
@@ -230,18 +230,18 @@
<div class="space-y-3 text-sm text-gray-700">
<div class="bg-blue-50 border-l-4 border-blue-500 p-3">
<strong class="text-blue-900">Immutability:</strong> <span class="text-blue-800">Audit logs stored in append-only database. AI cannot modify or delete entries.</span>
<strong class="text-blue-900" data-i18n="sections.governance_capabilities.immutability_label">Immutability:</strong> <span class="text-blue-800" data-i18n="sections.governance_capabilities.immutability_text">Audit logs stored in append-only database. AI cannot modify or delete entries.</span>
</div>
<div class="bg-green-50 border-l-4 border-green-500 p-3">
<strong class="text-green-900">Compliance Evidence:</strong> <span class="text-green-800">Automatic tagging with regulatory requirements (EU AI Act Article 14, GDPR Article 22, etc.)</span>
<strong class="text-green-900" data-i18n="sections.governance_capabilities.compliance_label">Compliance Evidence:</strong> <span class="text-green-800" data-i18n="sections.governance_capabilities.compliance_text">Automatic tagging with regulatory requirements (EU AI Act Article 14, GDPR Article 22, etc.)</span>
</div>
<div class="bg-amber-50 border-l-4 border-amber-500 p-3">
<strong class="text-amber-900">Export Capabilities:</strong> <span class="text-amber-800">Generate compliance reports for regulators showing human oversight enforcement</span>
<strong class="text-amber-900" data-i18n="sections.governance_capabilities.export_label">Export Capabilities:</strong> <span class="text-amber-800" data-i18n="sections.governance_capabilities.export_text">Generate compliance reports for regulators showing human oversight enforcement</span>
</div>
</div>
<div class="mt-4 pt-4 border-t border-gray-200">
<p class="text-xs text-gray-600">
<p class="text-xs text-gray-600" data-i18n="sections.governance_capabilities.footer_text">
When regulator asks "How do you prove effective human oversight at scale?", this audit trail provides structural evidence independent of AI cooperation.
</p>
</div>
@@ -265,34 +265,34 @@
<div class="space-y-4">
<!-- Flow diagram -->
<div class="bg-gray-50 border border-gray-200 rounded p-4">
<h4 class="text-sm font-semibold text-gray-900 mb-4">Incident Learning Flow</h4>
<h4 class="text-sm font-semibold text-gray-900 mb-4" data-i18n="sections.governance_capabilities.flow_heading">Incident Learning Flow</h4>
<div class="space-y-3">
<div class="flex items-center gap-3">
<div class="bg-red-100 text-red-700 px-3 py-2 rounded text-xs font-medium w-40">1. Incident Detected</div>
<div class="text-xs text-gray-600">CrossReferenceValidator flags policy violation</div>
<div class="text-xs text-gray-600" data-i18n="sections.governance_capabilities.step_1_desc">CrossReferenceValidator flags policy violation</div>
</div>
<div class="flex items-center gap-3">
<div class="bg-amber-100 text-amber-900 px-3 py-2 rounded text-xs font-medium w-40">2. Root Cause Analysis</div>
<div class="text-xs text-gray-600">Automated analysis of instruction history, context state</div>
<div class="text-xs text-gray-600" data-i18n="sections.governance_capabilities.step_2_desc">Automated analysis of instruction history, context state</div>
</div>
<div class="flex items-center gap-3">
<div class="bg-blue-100 text-blue-700 px-3 py-2 rounded text-xs font-medium w-40">3. Rule Generation</div>
<div class="text-xs text-gray-600">Proposed governance rule to prevent recurrence</div>
<div class="text-xs text-gray-600" data-i18n="sections.governance_capabilities.step_3_desc">Proposed governance rule to prevent recurrence</div>
</div>
<div class="flex items-center gap-3">
<div class="bg-purple-100 text-purple-700 px-3 py-2 rounded text-xs font-medium w-40">4. Human Validation</div>
<div class="text-xs text-gray-600">Governance board reviews and approves new rule</div>
<div class="text-xs text-gray-600" data-i18n="sections.governance_capabilities.step_4_desc">Governance board reviews and approves new rule</div>
</div>
<div class="flex items-center gap-3">
<div class="bg-green-100 text-green-700 px-3 py-2 rounded text-xs font-medium w-40">5. Deployment</div>
<div class="text-xs text-gray-600">Rule added to persistent storage, active immediately</div>
<div class="text-xs text-gray-600" data-i18n="sections.governance_capabilities.step_5_desc">Rule added to persistent storage, active immediately</div>
</div>
</div>
</div>
<!-- Example rule -->
<div class="bg-white border border-gray-200 rounded p-4">
<h4 class="text-sm font-semibold text-gray-900 mb-2">Example Generated Rule</h4>
<h4 class="text-sm font-semibold text-gray-900 mb-2" data-i18n="sections.governance_capabilities.example_heading">Example Generated Rule</h4>
<pre class="text-xs font-mono bg-gray-50 p-3 rounded overflow-x-auto text-gray-700"><code>{
"rule_id": "TRA-OPS-0042",
"created": "2025-10-13T15:45:00Z",
@@ -310,7 +310,7 @@
</div>
<div class="text-xs text-gray-600 pt-3 border-t border-gray-200">
<strong>Organisational Learning:</strong> When one team encounters governance failure, entire organisation benefits from automatically generated preventive rules. Scales governance knowledge without manual documentation.
<strong data-i18n="sections.governance_capabilities.learning_label">Organisational Learning:</strong> <span data-i18n="sections.governance_capabilities.learning_text">When one team encounters governance failure, entire organisation benefits from automatically generated preventive rules. Scales governance knowledge without manual documentation.</span>
</div>
</div>
</div>
@@ -333,55 +333,55 @@
<div class="space-y-4">
<!-- Abstract conflict scenario -->
<div class="bg-amber-50 border-l-4 border-amber-500 p-4 text-sm">
<strong class="text-amber-900">Conflict Detection:</strong>
<p class="text-amber-800 mt-1">AI system identifies competing values in decision context (e.g., efficiency vs. transparency, cost vs. risk mitigation, innovation vs. regulatory compliance). BoundaryEnforcer blocks autonomous decision, escalates to PluralisticDeliberationOrchestrator.</p>
<strong class="text-amber-900" data-i18n="sections.governance_capabilities.conflict_label">Conflict Detection:</strong>
<p class="text-amber-800 mt-1" data-i18n="sections.governance_capabilities.conflict_text">AI system identifies competing values in decision context (e.g., efficiency vs. transparency, cost vs. risk mitigation, innovation vs. regulatory compliance). BoundaryEnforcer blocks autonomous decision, escalates to PluralisticDeliberationOrchestrator.</p>
</div>
<!-- Stakeholder identification -->
<div class="bg-white border border-gray-200 rounded p-4">
<h4 class="text-sm font-semibold text-gray-900 mb-3">Stakeholder Identification Process</h4>
<h4 class="text-sm font-semibold text-gray-900 mb-3" data-i18n="sections.governance_capabilities.stakeholder_heading">Stakeholder Identification Process</h4>
<div class="space-y-2 text-xs text-gray-700">
<div class="flex gap-2">
<span class="text-purple-600">1.</span>
<div><strong>Automatic Detection:</strong> System identifies which values frameworks are in tension (utilitarian, deontological, virtue ethics, contractarian, etc.)</div>
<div data-i18n="sections.governance_capabilities.stakeholder_1"><strong>Automatic Detection:</strong> System identifies which values frameworks are in tension (utilitarian, deontological, virtue ethics, contractarian, etc.)</div>
</div>
<div class="flex gap-2">
<span class="text-purple-600">2.</span>
<div><strong>Stakeholder Mapping:</strong> Identifies parties with legitimate interest in decision (affected parties, domain experts, governance authorities, community representatives)</div>
<div data-i18n="sections.governance_capabilities.stakeholder_2"><strong>Stakeholder Mapping:</strong> Identifies parties with legitimate interest in decision (affected parties, domain experts, governance authorities, community representatives)</div>
</div>
<div class="flex gap-2">
<span class="text-purple-600">3.</span>
<div><strong>Human Approval:</strong> Governance board reviews stakeholder list, adds/removes as appropriate (TRA-OPS-0002)</div>
<div data-i18n="sections.governance_capabilities.stakeholder_3"><strong>Human Approval:</strong> Governance board reviews stakeholder list, adds/removes as appropriate (TRA-OPS-0002)</div>
</div>
</div>
</div>
<!-- Deliberation process -->
<div class="bg-white border border-gray-200 rounded p-4">
<h4 class="text-sm font-semibold text-gray-900 mb-3">Non-Hierarchical Deliberation</h4>
<h4 class="text-sm font-semibold text-gray-900 mb-3" data-i18n="sections.governance_capabilities.deliberation_heading">Non-Hierarchical Deliberation</h4>
<div class="grid grid-cols-1 md:grid-cols-2 gap-3 text-xs">
<div class="bg-blue-50 p-3 rounded">
<div class="font-semibold text-blue-900 mb-1">Equal Voice</div>
<div class="text-blue-800">All stakeholders present perspectives without hierarchical weighting. Technical experts don't automatically override community concerns.</div>
<div class="font-semibold text-blue-900 mb-1" data-i18n="sections.governance_capabilities.equal_voice_title">Equal Voice</div>
<div class="text-blue-800" data-i18n="sections.governance_capabilities.equal_voice_text">All stakeholders present perspectives without hierarchical weighting. Technical experts don't automatically override community concerns.</div>
</div>
<div class="bg-green-50 p-3 rounded">
<div class="font-semibold text-green-900 mb-1">Documented Dissent</div>
<div class="text-green-800">Minority positions recorded in full. Dissenting stakeholders can document why consensus fails their values framework.</div>
<div class="font-semibold text-green-900 mb-1" data-i18n="sections.governance_capabilities.dissent_title">Documented Dissent</div>
<div class="text-green-800" data-i18n="sections.governance_capabilities.dissent_text">Minority positions recorded in full. Dissenting stakeholders can document why consensus fails their values framework.</div>
</div>
<div class="bg-purple-50 p-3 rounded">
<div class="font-semibold text-purple-900 mb-1">Moral Remainder</div>
<div class="text-purple-800">System documents unavoidable value trade-offs. Even "correct" decision creates documented harm to other legitimate values.</div>
<div class="font-semibold text-purple-900 mb-1" data-i18n="sections.governance_capabilities.moral_title">Moral Remainder</div>
<div class="text-purple-800" data-i18n="sections.governance_capabilities.moral_text">System documents unavoidable value trade-offs. Even "correct" decision creates documented harm to other legitimate values.</div>
</div>
<div class="bg-amber-50 p-3 rounded">
<div class="font-semibold text-amber-900 mb-1">Precedent (Not Binding)</div>
<div class="text-amber-800">Decision becomes informative precedent for similar conflicts. But context differences mean precedents guide, not dictate.</div>
<div class="font-semibold text-amber-900 mb-1" data-i18n="sections.governance_capabilities.precedent_title">Precedent (Not Binding)</div>
<div class="text-amber-800" data-i18n="sections.governance_capabilities.precedent_text">Decision becomes informative precedent for similar conflicts. But context differences mean precedents guide, not dictate.</div>
</div>
</div>
</div>
<!-- Output structure -->
<div class="bg-gray-50 border border-gray-200 rounded p-4">
<h4 class="text-sm font-semibold text-gray-900 mb-2">Deliberation Record Structure</h4>
<h4 class="text-sm font-semibold text-gray-900 mb-2" data-i18n="sections.governance_capabilities.record_heading">Deliberation Record Structure</h4>
<pre class="text-xs font-mono bg-white p-3 rounded overflow-x-auto text-gray-700"><code>{
"deliberation_id": "delib_2025-10-13-003",
"conflict_type": "efficiency_vs_transparency",
@@ -405,7 +405,7 @@
}</code></pre>
</div>
<div class="text-xs text-gray-600 pt-3 border-t border-gray-200">
<div class="text-xs text-gray-600 pt-3 border-t border-gray-200" data-i18n="sections.governance_capabilities.key_principle">
<strong>Key Principle:</strong> When legitimate values conflict, no algorithm can determine the "correct" answer. Tractatus ensures decisions are made through inclusive deliberation with full documentation of trade-offs, rather than AI imposing single values framework or decision-maker dismissing stakeholder concerns.
</div>
</div>
@@ -435,13 +435,13 @@
<div id="validation-content" class="accordion-content" role="region" aria-labelledby="validation-button">
<div class="p-5 border-t border-gray-200 space-y-4 text-sm text-gray-700">
<div>
<strong class="text-gray-900">Validated:</strong> Framework successfully governs Claude Code in development workflows. User reports order-of-magnitude improvement in productivity for non-technical operators building production systems.
<strong class="text-gray-900" data-i18n="sections.development_status.validated_label">Validated:</strong> <span data-i18n="sections.development_status.validated_text">Framework successfully governs Claude Code in development workflows. User reports order-of-magnitude improvement in productivity for non-technical operators building production systems.</span>
</div>
<div>
<strong class="text-gray-900">Not Validated:</strong> Performance at enterprise scale, integration complexity with existing systems, effectiveness against adversarial prompts, cross-platform consistency.
<strong class="text-gray-900" data-i18n="sections.development_status.not_validated_label">Not Validated:</strong> <span data-i18n="sections.development_status.not_validated_text">Performance at enterprise scale, integration complexity with existing systems, effectiveness against adversarial prompts, cross-platform consistency.</span>
</div>
<div>
<strong class="text-gray-900">Known Limitation:</strong> Framework can be bypassed if AI simply chooses not to use governance tools. Voluntary invocation remains a structural weakness requiring external enforcement mechanisms.
<strong class="text-gray-900" data-i18n="sections.development_status.limitation_label">Known Limitation:</strong> <span data-i18n="sections.development_status.limitation_text">Framework can be bypassed if AI simply chooses not to use governance tools. Voluntary invocation remains a structural weakness requiring external enforcement mechanisms.</span>
</div>
</div>
</div>
@@ -461,22 +461,22 @@
</button>
<div id="euaiact-content" class="accordion-content" role="region" aria-labelledby="euaiact-button">
<div class="p-5 border-t border-gray-200 prose prose-sm max-w-none text-gray-700">
<p class="mb-4">
<p class="mb-4" data-i18n="sections.eu_ai_act.intro">
The EU AI Act (Regulation 2024/1689) establishes human oversight requirements for high-risk AI systems (Article 14). Organisations must ensure AI systems are "effectively overseen by natural persons" with authority to interrupt or disregard AI outputs.
</p>
<p class="mb-4">
<p class="mb-4" data-i18n="sections.eu_ai_act.addresses">
Tractatus addresses this through architectural controls that:
</p>
<ul class="list-disc pl-6 space-y-2 mb-4">
<li>Generate immutable audit trails documenting AI decision-making processes</li>
<li>Enforce human approval requirements for values-based decisions</li>
<li>Provide evidence of oversight mechanisms independent of AI cooperation</li>
<li>Document compliance with transparency and record-keeping obligations</li>
<li data-i18n="sections.eu_ai_act.bullet_1">Generate immutable audit trails documenting AI decision-making processes</li>
<li data-i18n="sections.eu_ai_act.bullet_2">Enforce human approval requirements for values-based decisions</li>
<li data-i18n="sections.eu_ai_act.bullet_3">Provide evidence of oversight mechanisms independent of AI cooperation</li>
<li data-i18n="sections.eu_ai_act.bullet_4">Document compliance with transparency and record-keeping obligations</li>
</ul>
<p class="mb-4">
<p class="mb-4" data-i18n="sections.eu_ai_act.disclaimer">
<strong>This does not constitute legal compliance advice.</strong> Organisations should evaluate whether these architectural patterns align with their specific regulatory obligations in consultation with legal counsel.
</p>
<p class="text-gray-600 text-xs">
<p class="text-gray-600 text-xs" data-i18n="sections.eu_ai_act.penalties">
Maximum penalties under EU AI Act: €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices; €15 million or 3% for other violations.
</p>
</div>
@@ -497,32 +497,32 @@
</button>
<div id="research-content" class="accordion-content" role="region" aria-labelledby="research-button">
<div class="p-5 border-t border-gray-200 text-sm text-gray-700 space-y-3">
<p>
<p data-i18n="sections.research_foundations.intro">
Tractatus draws on 40+ years of organisational theory research: time-based organisation (Bluedorn, Ancona), knowledge orchestration (Crossan), post-bureaucratic authority (Laloux), structural inertia (Hannan & Freeman).
</p>
<p>
<p data-i18n="sections.research_foundations.premise">
Core premise: When knowledge becomes ubiquitous through AI, authority must derive from appropriate time horizon and domain expertise rather than hierarchical position. Governance systems must orchestrate decision-making across strategic, operational, and tactical timescales.
</p>
<p>
<a href="/downloads/organizational-theory-foundations-of-the-tractatus-framework.pdf" target="_blank" class="text-amber-800 hover:text-amber-900 font-medium inline-flex items-center underline">
View complete organisational theory foundations (PDF)
<span data-i18n="sections.research_foundations.view_pdf">View complete organisational theory foundations (PDF)</span>
<svg class="w-4 h-4 ml-1" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M10 6H6a2 2 0 00-2 2v10a2 2 0 002 2h10a2 2 0 002-2v-4M14 4h6m0 0v6m0-6L10 14"/>
</svg>
</a>
</p>
<p class="mt-3">
<strong>AI Safety Research:</strong> Architectural Safeguards Against LLM Hierarchical Dominance – How Tractatus protects pluralistic values from AI pattern bias while maintaining safety boundaries.
<span data-i18n="sections.research_foundations.ai_safety_title"><strong>AI Safety Research:</strong> Architectural Safeguards Against LLM Hierarchical Dominance</span><span data-i18n="sections.research_foundations.ai_safety_desc">How Tractatus protects pluralistic values from AI pattern bias while maintaining safety boundaries.</span>
<span class="inline-flex gap-3 mt-2">
<a href="/docs/research/ARCHITECTURAL-SAFEGUARDS-Against-LLM-Hierarchical-Dominance-Prose.pdf" target="_blank" class="text-amber-800 hover:text-amber-900 font-medium inline-flex items-center underline">
PDF
<span data-i18n="sections.research_foundations.pdf_link">PDF</span>
<svg class="w-4 h-4 ml-1" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M10 6H6a2 2 0 00-2 2v10a2 2 0 002 2h10a2 2 0 002-2v-4M14 4h6m0 0v6m0-6L10 14"/>
</svg>
</a>
<span class="text-gray-400">|</span>
<a href="/docs.html?doc=architectural-safeguards-against-llm-hierarchical-dominance-prose" class="text-amber-800 hover:text-amber-900 font-medium inline-flex items-center underline">
Read online
<span data-i18n="sections.research_foundations.read_online">Read online</span>
<svg class="w-4 h-4 ml-1" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 12h6m-6 4h6m2 5H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/>
</svg>
@@ -550,21 +550,21 @@
<div>
<strong class="text-gray-900" data-i18n="sections.scope_limitations.not_title">Tractatus is not:</strong>
<ul class="list-disc pl-6 mt-2 space-y-1 text-gray-600">
<li>A comprehensive AI safety solution</li>
<li>Independently validated or security-audited</li>
<li>Tested against adversarial attacks</li>
<li>Proven effective across multiple organisations</li>
<li>A substitute for legal compliance review</li>
<li>A commercial product (research framework, Apache 2.0 licence)</li>
<li data-i18n="sections.scope_limitations.not_1">A comprehensive AI safety solution</li>
<li data-i18n="sections.scope_limitations.not_2">Independently validated or security-audited</li>
<li data-i18n="sections.scope_limitations.not_3">Tested against adversarial attacks</li>
<li data-i18n="sections.scope_limitations.not_4">Proven effective across multiple organisations</li>
<li data-i18n="sections.scope_limitations.not_5">A substitute for legal compliance review</li>
<li data-i18n="sections.scope_limitations.not_6">A commercial product (research framework, Apache 2.0 licence)</li>
</ul>
</div>
<div>
<strong class="text-gray-900" data-i18n="sections.scope_limitations.offers_title">What it offers:</strong>
<ul class="list-disc pl-6 mt-2 space-y-1 text-gray-600">
<li>Architectural patterns for external governance controls</li>
<li>Reference implementation demonstrating feasibility</li>
<li>Foundation for organisational pilots and validation studies</li>
<li>Evidence that structural approaches to AI safety merit investigation</li>
<li data-i18n="sections.scope_limitations.offers_1">Architectural patterns for external governance controls</li>
<li data-i18n="sections.scope_limitations.offers_2">Reference implementation demonstrating feasibility</li>
<li data-i18n="sections.scope_limitations.offers_3">Foundation for organisational pilots and validation studies</li>
<li data-i18n="sections.scope_limitations.offers_4">Evidence that structural approaches to AI safety merit investigation</li>
</ul>
</div>
</div>
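Throughout the hunks above, each `<strong>Label:</strong> body` pair is split into separate `*_label` and `*_text` keys so both elements can carry their own `data-i18n` attribute. A sketch of per-key lookup with English fallback (assumed behaviour; data and names are illustrative):

```javascript
// Sketch: translation lookup that falls back to the English source,
// so an untranslated key degrades to English text rather than an
// empty element. Data mirrors the *_label/*_text split in this diff.
const en = {
  immutability_label: "Immutability:",
  immutability_text:
    "Audit logs stored in append-only database. AI cannot modify or delete entries.",
};
const de = {
  immutability_label: "Unveränderlichkeit:",
  // immutability_text deliberately omitted to show the fallback path
};

function makeTranslator(locale, fallback) {
  return (key) => (locale[key] !== undefined ? locale[key] : fallback[key]);
}

const t = makeTranslator(de, en);
t("immutability_label"); // → "Unveränderlichkeit:"
t("immutability_text");  // falls back to the English text
```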


@@ -50,27 +50,92 @@
"continuous_improvement_title": "Kontinuierliche Verbesserung: Vorfall → Regelerstellung",
"continuous_improvement_desc": "Lernen aus Fehlern, automatisierte Regelgenerierung, Validierung",
"pluralistic_deliberation_title": "Pluralistische Deliberation: Wertekonfliktlösung",
"pluralistic_deliberation_desc": "Multi-Stakeholder-Engagement, nicht-hierarchischer Prozess, Dokumentation moralischer Reste"
"pluralistic_deliberation_desc": "Multi-Stakeholder-Engagement, nicht-hierarchischer Prozess, Dokumentation moralischer Reste",
"sample_heading": "Beispiel für die Struktur eines Audit-Protokolls",
"immutability_label": "Unveränderlichkeit:",
"immutability_text": "Audit-Protokolle werden in einer reinen Append-Datenbank gespeichert. AI kann keine Einträge ändern oder löschen.",
"compliance_label": "Beweise für die Einhaltung der Vorschriften:",
"compliance_text": "Automatische Kennzeichnung mit regulatorischen Anforderungen (EU AI Act Artikel 14, GDPR Artikel 22, etc.)",
"export_label": "Exportfähigkeiten:",
"export_text": "Erstellung von Konformitätsberichten für Aufsichtsbehörden, die die Durchsetzung von Human Oversight zeigen",
"footer_text": "Wenn die Aufsichtsbehörde die Frage stellt, wie man eine effektive menschliche Aufsicht in großem Maßstab nachweisen kann, liefert dieser Prüfpfad strukturelle Beweise unabhängig von der KI-Zusammenarbeit.",
"flow_heading": "Lernfluss bei Vorfällen",
"step_1_desc": "CrossReferenceValidator kennzeichnet Richtlinienverletzung",
"step_2_desc": "Automatisierte Analyse der Unterrichtshistorie, des Kontextstatus",
"step_3_desc": "Vorgeschlagene Governance-Regel zur Vermeidung von Wiederholungen",
"step_4_desc": "Governance Board prüft und billigt neue Regelung",
"step_5_desc": "Regel zum dauerhaften Speicher hinzugefügt, sofort aktiv",
"example_heading": "Beispiel für eine generierte Regel",
"learning_label": "Organisatorisches Lernen:",
"learning_text": "Wenn bei einem Team ein Governance-Fehler auftritt, profitiert das gesamte Unternehmen von automatisch generierten Präventivregeln. Skalierung des Governance-Wissens ohne manuelle Dokumentation.",
"conflict_label": "Erkennung von Konflikten:",
"conflict_text": "KI-System identifiziert konkurrierende Werte im Entscheidungskontext (z.B. Effizienz vs. Transparenz, Kosten vs. Risikominderung, Innovation vs. Einhaltung von Vorschriften). BoundaryEnforcer blockiert autonome Entscheidung, eskaliert zu PluralisticDeliberationOrchestrator.",
"stakeholder_heading": "Prozess der Identifizierung von Stakeholdern",
"stakeholder_1": "Automatische Erkennung: Das System erkennt, welche Wertesysteme in einem Spannungsverhältnis stehen (Utilitarismus, Deontologie, Tugendethik, Kontraktualismus usw.)",
"stakeholder_2": "Stakeholder-Mapping: Identifizierung der Parteien, die ein berechtigtes Interesse an der Entscheidung haben (Betroffene, Fachexperten, Verwaltungsbehörden, Vertreter der Gemeinschaft)",
"stakeholder_3": "Menschliche Zustimmung: Der Lenkungsausschuss prüft die Liste der Interessenvertreter und fügt sie gegebenenfalls hinzu oder entfernt sie (TRA-OPS-0002)",
"deliberation_heading": "Nicht-hierarchische Deliberation",
"equal_voice_title": "Gleiche Stimme",
"equal_voice_text": "Alle Beteiligten bringen ihre Sichtweisen ohne hierarchische Gewichtung ein. Technische Experten setzen sich nicht automatisch über die Belange der Gemeinschaft hinweg.",
"dissent_title": "Dokumentierter Dissens",
"dissent_text": "Minderheitenpositionen werden vollständig erfasst. Abweichende Interessengruppen können dokumentieren, warum der Konsens ihren Werterahmen sprengt.",
"moral_title": "Moralischer Überrest",
"moral_text": "Das System dokumentiert unvermeidbare Werteabwägungen. Selbst eine korrekte Entscheidung führt zu einem dokumentierten Schaden für andere legitime Werte.",
"precedent_title": "Präzedenzfall (nicht bindend)",
"precedent_text": "Die Entscheidung wird zu einem informativen Präzedenzfall für ähnliche Konflikte. Aber Unterschiede im Kontext bedeuten, dass Präzedenzfälle leiten, nicht diktieren.",
"record_heading": "Struktur des Deliberationsprotokolls",
"key_principle": "Hauptgrundsatz: Wenn legitime Werte miteinander in Konflikt stehen, kann kein Algorithmus die richtige Antwort bestimmen. Der Tractatus stellt sicher, dass Entscheidungen durch umfassende Überlegungen mit vollständiger Dokumentation der Kompromisse getroffen werden, anstatt dass die KI einen einzigen Werterahmen vorgibt oder der Entscheidungsträger die Bedenken der Interessengruppen abweist."
},
"development_status": {
"heading": "Entwicklungsstatus",
"warning_title": "Frühstadium-Forschungs-Framework",
"warning_text": "Tractatus ist ein Proof-of-Concept, der über sechs Monate in einem Einzelprojekt-Kontext (diese Website) entwickelt wurde. Es demonstriert architektonische Muster für KI-Governance, wurde jedoch keiner unabhängigen Validierung, Red-Team-Tests oder Multi-Organisations-Bereitstellung unterzogen.",
"validation_title": "Validiert vs. Nicht Validiert"
"validation_title": "Validiert vs. Nicht Validiert",
"validated_label": "Bestätigt:",
"validated_text": "Framework regelt erfolgreich Claude Code in Entwicklungsworkflows. Der Anwender berichtet von einer Produktivitätssteigerung in Größenordnungen für nichttechnische Anwender, die Produktionssysteme aufbauen.",
"not_validated_label": "Nicht validiert:",
"not_validated_text": "Leistung im Unternehmensmaßstab, Komplexität der Integration in bestehende Systeme, Wirksamkeit gegenüber gegnerischen Aufforderungen, plattformübergreifende Konsistenz.",
"limitation_label": "Bekannte Einschränkung:",
"limitation_text": "Der Rahmen kann umgangen werden, wenn KI sich einfach dafür entscheidet, die Steuerungsinstrumente nicht zu nutzen. Die freiwillige Inanspruchnahme bleibt eine strukturelle Schwäche, die externe Durchsetzungsmechanismen erfordert."
},
"eu_ai_act": {
"heading": "EU AI Act-Überlegungen",
"article_14_title": "Verordnung 2024/1689, Artikel 14: Menschliche Aufsicht"
"article_14_title": "Verordnung 2024/1689, Artikel 14: Menschliche Aufsicht",
"intro": "Das EU-KI-Gesetz (Verordnung 2024/1689) legt Anforderungen an die menschliche Aufsicht über KI-Systeme mit hohem Risiko fest (Artikel 14). Organisationen müssen sicherstellen, dass KI-Systeme wirksam von natürlichen Personen überwacht werden, die befugt sind, KI-Ausgaben zu unterbrechen oder zu missachten.",
"addresses": "Der Tractatus begegnet diesem Problem durch architektonische Kontrollen, die:",
"bullet_1": "Generierung unveränderlicher Prüfpfade, die KI-Entscheidungsprozesse dokumentieren",
"bullet_2": "Durchsetzung menschlicher Genehmigungsanforderungen für wertebasierte Entscheidungen",
"bullet_3": "Nachweis von Überwachungsmechanismen, die von der AI-Zusammenarbeit unabhängig sind",
"bullet_4": "Dokumentieren Sie die Einhaltung der Transparenz- und Aufbewahrungspflichten",
"disclaimer": "Dies stellt keine Beratung zur Einhaltung von Rechtsvorschriften dar. Organisationen sollten in Absprache mit ihren Rechtsberatern prüfen, ob diese Architekturmuster mit ihren spezifischen rechtlichen Verpflichtungen übereinstimmen.",
"penalties": "Maximale Strafen gemäß EU-KI-Gesetz: 35 Millionen Euro oder 7 Prozent des weltweiten Jahresumsatzes (je nachdem, welcher Wert höher ist) für verbotene AI-Praktiken; 15 Millionen Euro oder 3 Prozent für andere Verstöße."
},
"research_foundations": {
"heading": "Forschungsgrundlagen",
"org_theory_title": "Organisationstheorie & Philosophische Basis"
"org_theory_title": "Organisationstheorie & Philosophische Basis",
"intro": "Der Tractatus stützt sich auf mehr als 40 Jahre Forschung im Bereich der Organisationstheorie: zeitbasierte Organisation (Bluedorn, Ancona), Wissensorchestrierung (Crossan), postbürokratische Autorität (Laloux), strukturelle Trägheit (Hannan Freeman).",
"premise": "Kernaussage: Wenn Wissen durch KI allgegenwärtig wird, muss sich die Autorität aus einem angemessenen Zeithorizont und Fachwissen ableiten und nicht aus einer hierarchischen Position. Governance-Systeme müssen die Entscheidungsfindung über strategische, operative und taktische Zeiträume hinweg orchestrieren.",
"view_pdf": "Vollständige organisationstheoretische Grundlagen anzeigen (PDF)",
"ai_safety_title": "KI-Sicherheitsforschung: Architektonische Schutzmaßnahmen gegen LLM Hierarchische Dominanz",
"ai_safety_desc": "Wie der Tractatus pluralistische Werte vor der Voreingenommenheit von KI-Mustern schützt und gleichzeitig Sicherheitsgrenzen beibehält.",
"pdf_link": "PDF",
"read_online": "Online lesen"
},
"scope_limitations": {
"heading": "Umfang & Einschränkungen",
"title": "Was dies nicht ist • Was es bietet",
"not_title": "Tractatus ist nicht:",
"offers_title": "Was es bietet:"
"offers_title": "Was es bietet:",
"not_1": "Eine umfassende AI-Sicherheitslösung",
"not_2": "Unabhängig validiert oder sicherheitsüberprüft",
"not_3": "Getestet gegen gegnerische Angriffe",
"not_4": "Bewährte Wirksamkeit in verschiedenen Organisationen",
"not_5": "Ein Ersatz für die Überprüfung der Einhaltung von Rechtsvorschriften",
"not_6": "Ein kommerzielles Produkt (Forschungsrahmen, Apache 2.0 Lizenz)",
"offers_1": "Architektonische Muster für externe Governance-Kontrollen",
"offers_2": "Referenzimplementierung zum Nachweis der Machbarkeit",
"offers_3": "Grundlage für Organisationspiloten und Validierungsstudien",
"offers_4": "Beweise dafür, dass strukturelle Ansätze für die KI-Sicherheit eine Untersuchung verdienen"
}
},
"footer": {


@@ -50,27 +50,92 @@
"continuous_improvement_title": "Continuous Improvement: Incident → Rule Creation",
"continuous_improvement_desc": "Learning from failures, automated rule generation, validation",
"pluralistic_deliberation_title": "Pluralistic Deliberation: Values Conflict Resolution",
"pluralistic_deliberation_desc": "Multi-stakeholder engagement, non-hierarchical process, moral remainder documentation"
"pluralistic_deliberation_desc": "Multi-stakeholder engagement, non-hierarchical process, moral remainder documentation",
"sample_heading": "Sample Audit Log Structure",
"immutability_label": "Immutability:",
"immutability_text": "Audit logs stored in append-only database. AI cannot modify or delete entries.",
"compliance_label": "Compliance Evidence:",
"compliance_text": "Automatic tagging with regulatory requirements (EU AI Act Article 14, GDPR Article 22, etc.)",
"export_label": "Export Capabilities:",
"export_text": "Generate compliance reports for regulators showing human oversight enforcement",
"footer_text": "When regulator asks How do you prove effective human oversight at scale, this audit trail provides structural evidence independent of AI cooperation.",
"flow_heading": "Incident Learning Flow",
"step_1_desc": "CrossReferenceValidator flags policy violation",
"step_2_desc": "Automated analysis of instruction history, context state",
"step_3_desc": "Proposed governance rule to prevent recurrence",
"step_4_desc": "Governance board reviews and approves new rule",
"step_5_desc": "Rule added to persistent storage, active immediately",
"example_heading": "Example Generated Rule",
"learning_label": "Organisational Learning:",
"learning_text": "When one team encounters governance failure, entire organisation benefits from automatically generated preventive rules. Scales governance knowledge without manual documentation.",
"conflict_label": "Conflict Detection:",
"conflict_text": "AI system identifies competing values in decision context (e.g., efficiency vs. transparency, cost vs. risk mitigation, innovation vs. regulatory compliance). BoundaryEnforcer blocks autonomous decision, escalates to PluralisticDeliberationOrchestrator.",
"stakeholder_heading": "Stakeholder Identification Process",
"stakeholder_1": "Automatic Detection: System identifies which values frameworks are in tension (utilitarian, deontological, virtue ethics, contractarian, etc.)",
"stakeholder_2": "Stakeholder Mapping: Identifies parties with legitimate interest in decision (affected parties, domain experts, governance authorities, community representatives)",
"stakeholder_3": "Human Approval: Governance board reviews stakeholder list, adds/removes as appropriate (TRA-OPS-0002)",
"deliberation_heading": "Non-Hierarchical Deliberation",
"equal_voice_title": "Equal Voice",
"equal_voice_text": "All stakeholders present perspectives without hierarchical weighting. Technical experts do not automatically override community concerns.",
"dissent_title": "Documented Dissent",
"dissent_text": "Minority positions recorded in full. Dissenting stakeholders can document why consensus fails their values framework.",
"moral_title": "Moral Remainder",
"moral_text": "System documents unavoidable value trade-offs. Even correct decision creates documented harm to other legitimate values.",
"precedent_title": "Precedent (Not Binding)",
"precedent_text": "Decision becomes informative precedent for similar conflicts. But context differences mean precedents guide, not dictate.",
"record_heading": "Deliberation Record Structure",
"key_principle": "Key Principle: When legitimate values conflict, no algorithm can determine the correct answer. Tractatus ensures decisions are made through inclusive deliberation with full documentation of trade-offs, rather than AI imposing single values framework or decision-maker dismissing stakeholder concerns."
},
"development_status": {
"heading": "Development Status",
"warning_title": "Early-Stage Research Framework",
"warning_text": "Tractatus is a proof-of-concept developed over six months in a single project context (this website). It demonstrates architectural patterns for AI governance but has not undergone independent validation, red-team testing, or multi-organisation deployment.",
"validation_title": "Validated vs. Not Validated"
"validation_title": "Validated vs. Not Validated",
"validated_label": "Validated:",
"validated_text": "Framework successfully governs Claude Code in development workflows. User reports order-of-magnitude improvement in productivity for non-technical operators building production systems.",
"not_validated_label": "Not Validated:",
"not_validated_text": "Performance at enterprise scale, integration complexity with existing systems, effectiveness against adversarial prompts, cross-platform consistency.",
"limitation_label": "Known Limitation:",
"limitation_text": "Framework can be bypassed if AI simply chooses not to use governance tools. Voluntary invocation remains a structural weakness requiring external enforcement mechanisms."
},
"eu_ai_act": {
"heading": "EU AI Act Considerations",
"article_14_title": "Regulation 2024/1689, Article 14: Human Oversight"
"article_14_title": "Regulation 2024/1689, Article 14: Human Oversight",
"intro": "The EU AI Act (Regulation 2024/1689) establishes human oversight requirements for high-risk AI systems (Article 14). Organisations must ensure AI systems are effectively overseen by natural persons with authority to interrupt or disregard AI outputs.",
"addresses": "Tractatus addresses this through architectural controls that:",
"bullet_1": "Generate immutable audit trails documenting AI decision-making processes",
"bullet_2": "Enforce human approval requirements for values-based decisions",
"bullet_3": "Provide evidence of oversight mechanisms independent of AI cooperation",
"bullet_4": "Document compliance with transparency and record-keeping obligations",
"disclaimer": "This does not constitute legal compliance advice. Organisations should evaluate whether these architectural patterns align with their specific regulatory obligations in consultation with legal counsel.",
"penalties": "Maximum penalties under EU AI Act: 35 million euros or 7 percent of global annual turnover (whichever is higher) for prohibited AI practices; 15 million euros or 3 percent for other violations."
},
"research_foundations": {
"heading": "Research Foundations",
"org_theory_title": "Organisational Theory & Philosophical Basis"
"org_theory_title": "Organisational Theory & Philosophical Basis",
"intro": "Tractatus draws on 40+ years of organisational theory research: time-based organisation (Bluedorn, Ancona), knowledge orchestration (Crossan), post-bureaucratic authority (Laloux), structural inertia (Hannan Freeman).",
"premise": "Core premise: When knowledge becomes ubiquitous through AI, authority must derive from appropriate time horizon and domain expertise rather than hierarchical position. Governance systems must orchestrate decision-making across strategic, operational, and tactical timescales.",
"view_pdf": "View complete organisational theory foundations (PDF)",
"ai_safety_title": "AI Safety Research: Architectural Safeguards Against LLM Hierarchical Dominance",
"ai_safety_desc": "How Tractatus protects pluralistic values from AI pattern bias while maintaining safety boundaries.",
"pdf_link": "PDF",
"read_online": "Read online"
},
"scope_limitations": {
"heading": "Scope & Limitations",
"title": "What This Is Not • What It Offers",
"not_title": "Tractatus is not:",
"offers_title": "What it offers:"
"offers_title": "What it offers:",
"not_1": "A comprehensive AI safety solution",
"not_2": "Independently validated or security-audited",
"not_3": "Tested against adversarial attacks",
"not_4": "Proven effective across multiple organisations",
"not_5": "A substitute for legal compliance review",
"not_6": "A commercial product (research framework, Apache 2.0 licence)",
"offers_1": "Architectural patterns for external governance controls",
"offers_2": "Reference implementation demonstrating feasibility",
"offers_3": "Foundation for organisational pilots and validation studies",
"offers_4": "Evidence that structural approaches to AI safety merit investigation"
}
}
}


@@ -50,27 +50,92 @@
"continuous_improvement_title": "Amélioration Continue : Incident → Création de Règles",
"continuous_improvement_desc": "Apprentissage à partir des échecs, génération automatisée de règles, validation",
"pluralistic_deliberation_title": "Délibération Pluraliste : Résolution des Conflits de Valeurs",
"pluralistic_deliberation_desc": "Engagement multi-parties prenantes, processus non hiérarchique, documentation des restes moraux"
"pluralistic_deliberation_desc": "Engagement multi-parties prenantes, processus non hiérarchique, documentation des restes moraux",
"sample_heading": "Exemple de structure de journal d'audit",
"immutability_label": "Immutabilité :",
"immutability_text": "Les journaux d'audit sont stockés dans une base de données en annexe seulement. L'IA ne peut pas modifier ou supprimer des entrées.",
"compliance_label": "Preuves de conformité :",
"compliance_text": "Marquage automatique des exigences réglementaires (article 14 de la loi européenne sur l'IA, article 22 du GDPR, etc.)",
"export_label": "Capacités d'exportation :",
"export_text": "Générer des rapports de conformité pour les régulateurs montrant l'application de la surveillance humaine",
"footer_text": "Lorsque l'autorité de régulation demande comment prouver l'efficacité de la surveillance humaine à grande échelle, cette piste d'audit fournit des preuves structurelles indépendantes de la coopération de l'IA.",
"flow_heading": "Flux d'apprentissage en cas d'incident",
"step_1_desc": "CrossReferenceValidator signale une violation de politique",
"step_2_desc": "Analyse automatisée de l'historique des instructions, de l'état du contexte",
"step_3_desc": "Proposition d'une règle de gouvernance pour éviter que cela ne se reproduise",
"step_4_desc": "Le conseil de gouvernance examine et approuve la nouvelle règle",
"step_5_desc": "Règle ajoutée à la mémoire permanente, active immédiatement",
"example_heading": "Exemple de règle générée",
"learning_label": "Apprentissage organisationnel :",
"learning_text": "Lorsqu'une équipe est confrontée à un problème de gouvernance, l'ensemble de l'organisation bénéficie de règles préventives générées automatiquement. L'extension des connaissances en matière de gouvernance ne nécessite pas de documentation manuelle.",
"conflict_label": "Détection des conflits :",
"conflict_text": "Le système d'IA identifie les valeurs concurrentes dans le contexte de la décision (par exemple, l'efficacité par rapport à la transparence, le coût par rapport à l'atténuation des risques, l'innovation par rapport au respect de la réglementation). Le BoundaryEnforcer bloque la décision autonome et fait appel au PluralisticDeliberationOrchestrator.",
"stakeholder_heading": "Processus d'identification des parties prenantes",
"stakeholder_1": "Détection automatique : Le système identifie les cadres de valeurs qui sont en tension (utilitaire, déontologique, éthique de la vertu, contractualiste, etc.)",
"stakeholder_2": "Cartographie des parties prenantes : Identifie les parties ayant un intérêt légitime dans la décision (parties concernées, experts du domaine, autorités de gouvernance, représentants de la communauté).",
"stakeholder_3": "Approbation humaine : Le conseil de gouvernance examine la liste des parties prenantes, ajoute/supprime le cas échéant (TRA-OPS-0002).",
"deliberation_heading": "Délibération non hiérarchique",
"equal_voice_title": "Une voix égale",
"equal_voice_text": "Toutes les parties prenantes présentent leurs points de vue sans hiérarchisation. Les experts techniques ne prennent pas automatiquement le pas sur les préoccupations de la communauté.",
"dissent_title": "Dissidence documentée",
"dissent_text": "Les positions minoritaires sont enregistrées dans leur intégralité. Les parties prenantes dissidentes peuvent expliquer pourquoi le consensus ne respecte pas leur cadre de valeurs.",
"moral_title": "Le reste moral",
"moral_text": "Le système documente les compromis inévitables en matière de valeurs. Même une décision correcte porte atteinte à d'autres valeurs légitimes.",
"precedent_title": "Précédent (non contraignant)",
"precedent_text": "La décision devient un précédent informatif pour des conflits similaires. Mais les différences de contexte font que les précédents guident, et non dictent.",
"record_heading": "Structure du procès-verbal de délibération",
"key_principle": "Principe clé : en cas de conflit de valeurs légitimes, aucun algorithme ne peut déterminer la bonne réponse. Le Tractatus garantit que les décisions sont prises à l'issue de délibérations inclusives avec une documentation complète des compromis, plutôt que l'IA impose un cadre de valeurs unique ou que le décideur rejette les préoccupations des parties prenantes."
},
"development_status": {
"heading": "État du Développement",
"warning_title": "Cadre de Recherche en Phase Initiale",
"warning_text": "Tractatus est une preuve de concept développée sur six mois dans un contexte de projet unique (ce site web). Il démontre des modèles architecturaux pour la gouvernance de l'IA mais n'a pas subi de validation indépendante, de tests d'équipe rouge ou de déploiement multi-organisationnel.",
"validation_title": "Validé vs Non Validé"
"validation_title": "Validé vs Non Validé",
"validated_label": "Validé :",
"validated_text": "Le cadre régit avec succès le code Claude dans les flux de travail de développement. L'utilisateur signale une amélioration de l'ordre de grandeur de la productivité pour les opérateurs non techniques qui construisent des systèmes de production.",
"not_validated_label": "Non validé :",
"not_validated_text": "Performance à l'échelle de l'entreprise, complexité de l'intégration avec les systèmes existants, efficacité contre les messages adverses, cohérence entre les plates-formes.",
"limitation_label": "Limitation connue :",
"limitation_text": "Le cadre peut être contourné si l'IA choisit simplement de ne pas utiliser les outils de gouvernance. L'invocation volontaire reste une faiblesse structurelle nécessitant des mécanismes d'application externes."
},
"eu_ai_act": {
"heading": "Considérations du Règlement Européen sur l'IA",
"article_14_title": "Règlement 2024/1689, Article 14 : Surveillance Humaine"
"article_14_title": "Règlement 2024/1689, Article 14 : Surveillance Humaine",
"intro": "La loi européenne sur l'IA (règlement 2024/1689) établit des exigences de supervision humaine pour les systèmes d'IA à haut risque (article 14). Les organisations doivent s'assurer que les systèmes d'IA sont effectivement supervisés par des personnes physiques ayant le pouvoir d'interrompre ou d'ignorer les résultats de l'IA.",
"addresses": "Le Tractatus aborde cette question par le biais de contrôles architecturaux :",
"bullet_1": "Générer des pistes d'audit immuables documentant les processus décisionnels de l'IA",
"bullet_2": "Renforcer les exigences en matière d'approbation humaine pour les décisions fondées sur des valeurs",
"bullet_3": "Fournir des preuves de l'existence de mécanismes de contrôle indépendants de la coopération avec l'IA",
"bullet_4": "Documenter le respect des obligations de transparence et d'archivage",
"disclaimer": "Il ne s'agit pas d'un avis de conformité juridique. Les organisations doivent évaluer si ces modèles architecturaux sont conformes à leurs obligations réglementaires spécifiques en consultant un conseiller juridique.",
"penalties": "Sanctions maximales prévues par la loi européenne sur l'IA : 35 millions d'euros ou 7 % du chiffre d'affaires annuel mondial (le montant le plus élevé étant retenu) pour les pratiques d'IA interdites ; 15 millions d'euros ou 3 % pour les autres violations."
},
"research_foundations": {
"heading": "Fondements de Recherche",
"org_theory_title": "Théorie Organisationnelle & Base Philosophique"
"org_theory_title": "Théorie Organisationnelle & Base Philosophique",
"intro": "Tractatus s'appuie sur plus de 40 ans de recherche en théorie organisationnelle : organisation basée sur le temps (Bluedorn, Ancona), orchestration des connaissances (Crossan), autorité post-bureaucratique (Laloux), inertie structurelle (Hannan Freeman).",
"premise": "Principe de base : lorsque la connaissance devient omniprésente grâce à l'IA, l'autorité doit découler d'un horizon temporel approprié et d'une expertise dans le domaine plutôt que d'une position hiérarchique. Les systèmes de gouvernance doivent orchestrer la prise de décision sur des échelles de temps stratégiques, opérationnelles et tactiques.",
"view_pdf": "Voir les fondements complets de la théorie des organisations (PDF)",
"ai_safety_title": "Recherche sur la sécurité de l'IA : Sauvegardes architecturales contre la domination hiérarchique du LLM",
"ai_safety_desc": "Comment le Tractatus protège les valeurs pluralistes du biais du modèle d'IA tout en maintenant des limites de sécurité.",
"pdf_link": "PDF (EN ANGLAIS)",
"read_online": "Lire en ligne"
},
"scope_limitations": {
"heading": "Portée & Limitations",
"title": "Ce que ce n'est pas • Ce qu'il offre",
"not_title": "Tractatus n'est pas :",
"offers_title": "Ce qu'il offre :"
"offers_title": "Ce qu'il offre :",
"not_1": "Une solution complète de sécurité de l'IA",
"not_2": "Validation indépendante ou audit de sécurité",
"not_3": "Testé contre les attaques adverses",
"not_4": "Efficacité prouvée dans plusieurs organisations",
"not_5": "Un substitut à l'examen de la conformité juridique",
"not_6": "Un produit commercial (cadre de recherche, licence Apache 2.0)",
"offers_1": "Modèles architecturaux pour les contrôles de gouvernance externe",
"offers_2": "Mise en œuvre de référence démontrant la faisabilité",
"offers_3": "Base pour les pilotes organisationnels et les études de validation",
"offers_4": "Preuve que les approches structurelles de la sécurité de l'IA méritent d'être étudiées"
}
},
"footer": {