feat: Add i18n support to home-ai.html with EN/DE/FR translations

221 text elements across 16 sections now have data-i18n attributes.
Locale JSON files populated for English, German, and French via DeepL.
HTML entities, proper names, and code blocks preserved in translations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
TheFlow 2026-02-07 22:36:28 +13:00
parent 3ad1a5b953
commit 757ac3dac3
4 changed files with 980 additions and 221 deletions


@@ -42,9 +42,9 @@
<nav class="bg-gray-50 border-b border-gray-200 py-3" aria-label="Breadcrumb">
<div class="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8">
<ol class="flex items-center space-x-2 text-sm">
<li><a href="/" class="text-blue-600 hover:underline transition-colors">Home</a></li>
<li><a href="/" class="text-blue-600 hover:underline transition-colors" data-i18n="breadcrumb.home">Home</a></li>
<li class="text-gray-400">/</li>
<li class="text-gray-900 font-medium" aria-current="page">Home AI</li>
<li class="text-gray-900 font-medium" aria-current="page" data-i18n="breadcrumb.current">Home AI</li>
</ol>
</div>
</nav>
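The dotted keys in this hunk ("breadcrumb.home", "breadcrumb.current") suggest a nested structure in the locale JSON files. A minimal sketch of how such a key might resolve against a loaded locale, assuming the nesting mirrors the dots; the function name and the German strings are illustrative, not taken from the repository:

```python
# Hypothetical dotted-key lookup for data-i18n attributes.
# Untranslated or missing keys fall back to the English source text.
def resolve_key(locale: dict, dotted_key: str, fallback: str) -> str:
    """Walk a nested locale dict by dotted path."""
    node = locale
    for part in dotted_key.split("."):
        if not isinstance(node, dict) or part not in node:
            return fallback
        node = node[part]
    return node if isinstance(node, str) else fallback

# Illustrative German locale fragment (not the real file contents).
de = {"breadcrumb": {"home": "Startseite", "current": "Home AI"}}
print(resolve_key(de, "breadcrumb.home", "Home"))     # Startseite
print(resolve_key(de, "breadcrumb.missing", "Home"))  # Home
```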
@@ -53,15 +53,15 @@
<header role="banner">
<section class="bg-gradient-to-br from-teal-700 via-teal-800 to-emerald-800 text-white py-14">
<div class="max-w-4xl mx-auto px-4 sm:px-6 lg:px-8 text-center">
<div class="inline-block bg-emerald-600 text-white px-4 py-1.5 rounded-lg font-semibold mb-4 text-sm">
<div class="inline-block bg-emerald-600 text-white px-4 py-1.5 rounded-lg font-semibold mb-4 text-sm" data-i18n="hero.badge">
SOVEREIGN LOCALLY-TRAINED LANGUAGE MODEL
</div>
<h1 class="text-4xl md:text-5xl font-bold mb-4">Home AI</h1>
<p class="text-xl text-teal-100 max-w-3xl mx-auto mb-6">
<h1 class="text-4xl md:text-5xl font-bold mb-4" data-i18n="hero.title">Home AI</h1>
<p class="text-xl text-teal-100 max-w-3xl mx-auto mb-6" data-i18n-html="hero.subtitle">
A language model where the community controls the training data, the model weights, and the governance rules. Not just governed inference &mdash; governed training.
</p>
<div class="bg-amber-100 border-2 border-amber-400 rounded-lg p-4 max-w-2xl mx-auto">
<p class="text-amber-900 text-sm">
<p class="text-amber-900 text-sm" data-i18n-html="hero.status">
<strong>Status:</strong> Home AI operates in production for inference. The sovereign training pipeline is designed and documented; hardware has been ordered. Training has not yet begun. This page describes both current capability and intended architecture.
</p>
</div>
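This hunk uses data-i18n-html rather than data-i18n wherever the string carries an entity such as &amp;mdash;. A rough illustration of why, assuming translations are stored with entities intact: inserted as plain text, the entity would render literally, so the string must be unescaped or injected as HTML. `html.unescape` here is only a stand-in for the browser's HTML parsing:

```python
# Strings containing entities need HTML insertion, not textContent.
import html

subtitle = "Not just governed inference &mdash; governed training."
print(html.unescape(subtitle))  # the entity becomes a real em dash character
```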
@@ -73,48 +73,48 @@
<!-- What is an SLL -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">What is an SLL?</h2>
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="sll.heading">What is an SLL?</h2>
<div class="prose prose-lg text-gray-700">
<p class="mb-4">
<p class="mb-4" data-i18n-html="sll.intro">
An <strong>SLL</strong> (Sovereign Locally-trained Language Model) is distinct from both LLMs and SLMs. The distinction is not size &mdash; it is control.
</p>
</div>
<div class="grid grid-cols-1 md:grid-cols-3 gap-4 mt-6">
<div class="bg-red-50 rounded-lg p-5 border border-red-200">
<h3 class="text-lg font-bold text-red-900 mb-2">LLM</h3>
<p class="text-red-800 text-sm mb-2">Large Language Model</p>
<h3 class="text-lg font-bold text-red-900 mb-2" data-i18n="sll.llm_title">LLM</h3>
<p class="text-red-800 text-sm mb-2" data-i18n="sll.llm_subtitle">Large Language Model</p>
<ul class="text-red-700 text-sm space-y-1">
<li>Training: provider-controlled</li>
<li>Data: scraped at scale</li>
<li>Governance: provider's terms</li>
<li>User control: none</li>
<li data-i18n="sll.llm_item1">Training: provider-controlled</li>
<li data-i18n="sll.llm_item2">Data: scraped at scale</li>
<li data-i18n="sll.llm_item3">Governance: provider's terms</li>
<li data-i18n="sll.llm_item4">User control: none</li>
</ul>
</div>
<div class="bg-amber-50 rounded-lg p-5 border border-amber-200">
<h3 class="text-lg font-bold text-amber-900 mb-2">SLM</h3>
<p class="text-amber-800 text-sm mb-2">Small Language Model</p>
<h3 class="text-lg font-bold text-amber-900 mb-2" data-i18n="sll.slm_title">SLM</h3>
<p class="text-amber-800 text-sm mb-2" data-i18n="sll.slm_subtitle">Small Language Model</p>
<ul class="text-amber-700 text-sm space-y-1">
<li>Training: provider-controlled</li>
<li>Data: curated by provider</li>
<li>Governance: partial (fine-tuning)</li>
<li>User control: limited</li>
<li data-i18n="sll.slm_item1">Training: provider-controlled</li>
<li data-i18n="sll.slm_item2">Data: curated by provider</li>
<li data-i18n="sll.slm_item3">Governance: partial (fine-tuning)</li>
<li data-i18n="sll.slm_item4">User control: limited</li>
</ul>
</div>
<div class="bg-emerald-50 rounded-lg p-5 border border-emerald-200">
<h3 class="text-lg font-bold text-emerald-900 mb-2">SLL</h3>
<p class="text-emerald-800 text-sm mb-2">Sovereign Locally-trained</p>
<h3 class="text-lg font-bold text-emerald-900 mb-2" data-i18n="sll.sll_title">SLL</h3>
<p class="text-emerald-800 text-sm mb-2" data-i18n="sll.sll_subtitle">Sovereign Locally-trained</p>
<ul class="text-emerald-700 text-sm space-y-1">
<li>Training: community-controlled</li>
<li>Data: community-owned</li>
<li>Governance: architecturally enforced</li>
<li>User control: full</li>
<li data-i18n="sll.sll_item1">Training: community-controlled</li>
<li data-i18n="sll.sll_item2">Data: community-owned</li>
<li data-i18n="sll.sll_item3">Governance: architecturally enforced</li>
<li data-i18n="sll.sll_item4">User control: full</li>
</ul>
</div>
</div>
<div class="bg-gray-50 rounded-lg p-6 border border-gray-200 mt-6">
<p class="text-gray-700 text-sm italic">
<p class="text-gray-700 text-sm italic" data-i18n="sll.tradeoff">
The honest trade-off: an SLL is a less powerful system that serves your interests, rather than a more powerful one that serves someone else's. We consider this an acceptable exchange.
</p>
</div>
@@ -122,80 +122,80 @@
<!-- Two-Model Architecture -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">Two-Model Architecture</h2>
<p class="text-gray-700 mb-6">
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="two_model.heading">Two-Model Architecture</h2>
<p class="text-gray-700 mb-6" data-i18n-html="two_model.intro">
Home AI uses two models of different sizes, routed by task complexity. This is not a fallback mechanism &mdash; each model is optimised for its role.
</p>
<div class="grid grid-cols-1 md:grid-cols-2 gap-6">
<div class="bg-white rounded-lg shadow-sm p-6 border-l-4 border-blue-500">
<h3 class="text-lg font-bold text-gray-900 mb-2">3B Model &mdash; Fast Assistant</h3>
<p class="text-gray-700 text-sm mb-3">
<h3 class="text-lg font-bold text-gray-900 mb-2" data-i18n-html="two_model.fast_title">3B Model &mdash; Fast Assistant</h3>
<p class="text-gray-700 text-sm mb-3" data-i18n="two_model.fast_desc">
Handles help queries, tooltips, error explanations, short summaries, and translation. Target response time: complete within 5 seconds.
</p>
<p class="text-gray-500 text-xs">
<p class="text-gray-500 text-xs" data-i18n="two_model.fast_routing">
Routing triggers: simple queries, known FAQ patterns, single-step tasks.
</p>
</div>
<div class="bg-white rounded-lg shadow-sm p-6 border-l-4 border-purple-500">
<h3 class="text-lg font-bold text-gray-900 mb-2">8B Model &mdash; Deep Reasoning</h3>
<p class="text-gray-700 text-sm mb-3">
<h3 class="text-lg font-bold text-gray-900 mb-2" data-i18n-html="two_model.deep_title">8B Model &mdash; Deep Reasoning</h3>
<p class="text-gray-700 text-sm mb-3" data-i18n="two_model.deep_desc">
Handles life story generation, year-in-review narratives, complex summarisation, and sensitive correspondence. Target response time: under 90 seconds.
</p>
<p class="text-gray-500 text-xs">
<p class="text-gray-500 text-xs" data-i18n="two_model.deep_routing">
Routing triggers: keywords like "everything about", multi-source retrieval, grief/trauma markers.
</p>
</div>
</div>
<p class="text-gray-600 text-sm mt-4">
<p class="text-gray-600 text-sm mt-4" data-i18n-html="two_model.footer">
Both models operate under the same governance stack. The routing decision itself is governed &mdash; the ContextPressureMonitor can override routing if session health requires it.
</p>
</section>
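The two cards above describe routing by trigger rather than by fallback. A hypothetical sketch of such a router, with made-up trigger phrases and an assumed override hook for the ContextPressureMonitor; the real router and its trigger lists are not in this diff:

```python
# Illustrative complexity router between the 3B and 8B models.
DEEP_MARKERS = ("everything about", "life story", "year in review")

def route(query, sources_needed=1, pressure_override=None):
    """Pick the 3B fast model or the 8B deep model for a query."""
    if pressure_override:           # assumed session-health override wins
        return pressure_override
    text = query.lower()
    if sources_needed > 1 or any(m in text for m in DEEP_MARKERS):
        return "8b-deep"            # multi-source or deep-reasoning trigger
    return "3b-fast"                # simple, single-step default

print(route("What does this error mean?"))                  # 3b-fast
print(route("Tell me everything about Grandma's stories"))  # 8b-deep
```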
<!-- Three Training Tiers -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">Three Training Tiers</h2>
<p class="text-gray-700 mb-6">
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="training_tiers.heading">Three Training Tiers</h2>
<p class="text-gray-700 mb-6" data-i18n="training_tiers.intro">
Training is not monolithic. Three tiers serve different scopes, each with appropriate governance constraints.
</p>
<div class="space-y-4">
<div class="bg-white rounded-lg shadow-sm p-6 border-l-4 border-indigo-500">
<div class="flex items-baseline justify-between mb-2">
<h3 class="text-lg font-bold text-gray-900">Tier 1: Platform Base</h3>
<span class="text-xs bg-indigo-100 text-indigo-800 px-2 py-1 rounded">All communities</span>
<h3 class="text-lg font-bold text-gray-900" data-i18n="training_tiers.tier1_title">Tier 1: Platform Base</h3>
<span class="text-xs bg-indigo-100 text-indigo-800 px-2 py-1 rounded" data-i18n="training_tiers.tier1_badge">All communities</span>
</div>
<p class="text-gray-700 text-sm mb-2">
<p class="text-gray-700 text-sm mb-2" data-i18n="training_tiers.tier1_desc">
Trained on platform documentation, philosophy, feature guides, and FAQ content. Provides the foundational understanding of how Village works, what Home AI's values are, and how to help members navigate the platform.
</p>
<p class="text-gray-500 text-xs">
<p class="text-gray-500 text-xs" data-i18n="training_tiers.tier1_update">
Update frequency: weekly during beta, quarterly at GA. Training method: QLoRA fine-tuning.
</p>
</div>
<div class="bg-white rounded-lg shadow-sm p-6 border-l-4 border-teal-500">
<div class="flex items-baseline justify-between mb-2">
<h3 class="text-lg font-bold text-gray-900">Tier 2: Tenant Adapters</h3>
<span class="text-xs bg-teal-100 text-teal-800 px-2 py-1 rounded">Per community</span>
<h3 class="text-lg font-bold text-gray-900" data-i18n="training_tiers.tier2_title">Tier 2: Tenant Adapters</h3>
<span class="text-xs bg-teal-100 text-teal-800 px-2 py-1 rounded" data-i18n="training_tiers.tier2_badge">Per community</span>
</div>
<p class="text-gray-700 text-sm mb-2">
<p class="text-gray-700 text-sm mb-2" data-i18n-html="training_tiers.tier2_desc">
Each community trains a lightweight LoRA adapter on its own content &mdash; stories, documents, photos, and events that members have explicitly consented to include. This allows Home AI to answer questions like "What stories has Grandma shared?" without accessing any other community's data.
</p>
<p class="text-gray-500 text-xs">
<p class="text-gray-500 text-xs" data-i18n-html="training_tiers.tier2_update">
Adapters are small (50&ndash;100MB). Consent is per-content-item. Content marked "only me" is never included regardless of consent. Training uses DPO (Direct Preference Optimization) for value alignment.
</p>
</div>
<div class="bg-white rounded-lg shadow-sm p-6 border-l-4 border-rose-400">
<div class="flex items-baseline justify-between mb-2">
<h3 class="text-lg font-bold text-gray-900">Tier 3: Individual (Future)</h3>
<span class="text-xs bg-rose-100 text-rose-800 px-2 py-1 rounded">Per member</span>
<h3 class="text-lg font-bold text-gray-900" data-i18n="training_tiers.tier3_title">Tier 3: Individual (Future)</h3>
<span class="text-xs bg-rose-100 text-rose-800 px-2 py-1 rounded" data-i18n="training_tiers.tier3_badge">Per member</span>
</div>
<p class="text-gray-700 text-sm mb-2">
<p class="text-gray-700 text-sm mb-2" data-i18n-html="training_tiers.tier3_desc">
Personal adapters that learn individual preferences and interaction patterns. Speculative &mdash; this tier raises significant questions about feasibility, privacy, and the minimum training data required for meaningful personalisation.
</p>
<p class="text-gray-500 text-xs">
<p class="text-gray-500 text-xs" data-i18n="training_tiers.tier3_update">
Research questions documented. Implementation not planned until Tier 2 is validated.
</p>
</div>
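The Tier 2 consent rules described above (per-content-item consent, "only me" always excluded, strict tenant isolation) can be sketched as a corpus filter. The field names and record shape are assumptions for illustration, not the platform's actual schema:

```python
# Hypothetical consent filter for Tier 2 adapter training data.
def adapter_corpus(items, tenant_id):
    return [
        it for it in items
        if it["tenant"] == tenant_id             # tenant isolation
        and it.get("training_consent") is True   # per-content-item consent
        and it.get("visibility") != "only_me"    # "only me" never included
    ]

items = [
    {"tenant": "t1", "training_consent": True, "visibility": "community"},
    {"tenant": "t1", "training_consent": True, "visibility": "only_me"},
    {"tenant": "t2", "training_consent": True, "visibility": "community"},
]
print(len(adapter_corpus(items, "t1")))  # 1
```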
@@ -204,93 +204,93 @@
<!-- Governance During Training -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">Governance During Training</h2>
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="governance_training.heading">Governance During Training</h2>
<div class="prose prose-lg text-gray-700">
<p class="mb-4">
<p class="mb-4" data-i18n-html="governance_training.intro1">
This is the central research contribution. Most AI governance frameworks operate at inference time &mdash; they filter or constrain responses after the model has already been trained. Home AI embeds governance <strong>inside the training loop</strong>.
</p>
<p class="mb-4">
<p class="mb-4" data-i18n-html="governance_training.intro2">
This follows Christopher Alexander's principle of <em>Not-Separateness</em>: governance is woven into the training architecture, not applied afterward. The BoundaryEnforcer validates every training batch before the forward pass. If a batch contains cross-tenant data, data without consent, or content marked as private, the batch is rejected and the training step does not proceed.
</p>
</div>
<div class="bg-gray-900 rounded-lg p-6 mt-6 font-mono text-sm">
<p class="text-green-400 mb-1"># Governance inside the training loop (Not-Separateness)</p>
<p class="text-gray-300 mb-1">for batch in training_data:</p>
<p class="text-gray-300 mb-1">&nbsp;&nbsp;if not BoundaryEnforcer.validate(batch):</p>
<p class="text-gray-300 mb-1">&nbsp;&nbsp;&nbsp;&nbsp;continue&nbsp;&nbsp;<span class="text-green-400"># Governance rejects batch</span></p>
<p class="text-gray-300 mb-1">&nbsp;&nbsp;loss = model.forward(batch)</p>
<p class="text-gray-300 mb-3">&nbsp;&nbsp;loss.backward()</p>
<p class="text-red-400 mb-1"># NOT this &mdash; governance separated from training</p>
<p class="text-gray-500 mb-1">for batch in training_data:</p>
<p class="text-gray-500 mb-1">&nbsp;&nbsp;loss = model.forward(batch)</p>
<p class="text-gray-500 mb-1">&nbsp;&nbsp;loss.backward()</p>
<p class="text-gray-500">filter_outputs_later()&nbsp;&nbsp;<span class="text-red-400"># Too late</span></p>
<p class="text-green-400 mb-1" data-i18n="governance_training.code_comment1"># Governance inside the training loop (Not-Separateness)</p>
<p class="text-gray-300 mb-1" data-i18n="governance_training.code_line1">for batch in training_data:</p>
<p class="text-gray-300 mb-1" data-i18n-html="governance_training.code_line2">&nbsp;&nbsp;if not BoundaryEnforcer.validate(batch):</p>
<p class="text-gray-300 mb-1" data-i18n-html="governance_training.code_line3">&nbsp;&nbsp;&nbsp;&nbsp;continue&nbsp;&nbsp;<span class="text-green-400"># Governance rejects batch</span></p>
<p class="text-gray-300 mb-1" data-i18n-html="governance_training.code_line4">&nbsp;&nbsp;loss = model.forward(batch)</p>
<p class="text-gray-300 mb-3" data-i18n-html="governance_training.code_line5">&nbsp;&nbsp;loss.backward()</p>
<p class="text-red-400 mb-1" data-i18n-html="governance_training.code_comment2"># NOT this &mdash; governance separated from training</p>
<p class="text-gray-500 mb-1" data-i18n="governance_training.code_anti1">for batch in training_data:</p>
<p class="text-gray-500 mb-1" data-i18n-html="governance_training.code_anti2">&nbsp;&nbsp;loss = model.forward(batch)</p>
<p class="text-gray-500 mb-1" data-i18n-html="governance_training.code_anti3">&nbsp;&nbsp;loss.backward()</p>
<p class="text-gray-500" data-i18n-html="governance_training.code_anti4">filter_outputs_later()&nbsp;&nbsp;<span class="text-red-400"># Too late</span></p>
</div>
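The pseudocode in the dark panel above can be made runnable with a stub validator. BoundaryEnforcer's real checks are far richer; the batch shape and the rejection criteria here are illustrative only:

```python
# Runnable skeleton of the governed training loop, with a stand-in validator.
class BoundaryEnforcer:
    @staticmethod
    def validate(batch):
        # Reject cross-tenant batches and unconsented or private content.
        rows = batch["rows"]
        return (len({r["tenant"] for r in rows}) == 1
                and all(r["consent"] and not r["private"] for r in rows))

def train(batches, step):
    accepted = 0
    for batch in batches:
        if not BoundaryEnforcer.validate(batch):
            continue        # governance rejects the batch; no forward pass
        step(batch)         # forward + backward would run here
        accepted += 1
    return accepted

good = {"rows": [{"tenant": "t1", "consent": True, "private": False}]}
bad = {"rows": [{"tenant": "t1", "consent": False, "private": False}]}
print(train([good, bad], lambda b: None))  # 1
```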
<div class="bg-blue-50 rounded-lg p-6 border border-blue-200 mt-6">
<h3 class="text-lg font-bold text-blue-900 mb-2">Why both training-time and inference-time governance?</h3>
<p class="text-blue-800 text-sm mb-2">
<h3 class="text-lg font-bold text-blue-900 mb-2" data-i18n="governance_training.why_title">Why both training-time and inference-time governance?</h3>
<p class="text-blue-800 text-sm mb-2" data-i18n-html="governance_training.why_text">
<strong>Training shapes tendency; architecture constrains capability.</strong> A model trained to respect boundaries can still be jailbroken. A model that fights against governance rules wastes compute and produces worse outputs. The combined approach makes the model <em>tend toward</em> governed behaviour while the architecture makes it <em>impossible</em> to violate structural boundaries.
</p>
<p class="text-blue-700 text-xs italic">
<p class="text-blue-700 text-xs italic" data-i18n-html="governance_training.why_note">
Research from the Agent Lightning integration suggests governance adds approximately 5% performance overhead &mdash; an acceptable trade-off for architectural safety constraints. This requires validation at scale.
</p>
</div>
<p class="text-gray-600 text-sm mt-4">
<p class="text-gray-600 text-sm mt-4" data-i18n="governance_training.footer">
Training-time governance is only half the picture. The same Tractatus framework also operates at runtime in the Village codebase. The next section explains how these two layers work together.
</p>
</section>
<!-- Dual-Layer Tractatus Architecture -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">Dual-Layer Tractatus Architecture</h2>
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="dual_layer.heading">Dual-Layer Tractatus Architecture</h2>
<div class="prose prose-lg text-gray-700">
<p class="mb-4">
<p class="mb-4" data-i18n-html="dual_layer.intro">
Home AI is governed by Tractatus at <strong>two distinct layers</strong> simultaneously. This is the architectural insight that distinguishes the SLL approach from both ungoverned models and bolt-on safety filters.
</p>
</div>
<div class="grid grid-cols-1 md:grid-cols-2 gap-6 mt-6">
<div class="bg-teal-50 rounded-lg p-6 border-2 border-teal-300">
<div class="inline-block bg-teal-600 text-white px-3 py-1 rounded text-xs font-semibold mb-3">LAYER A: INHERENT</div>
<h3 class="text-lg font-bold text-gray-900 mb-3">Tractatus Inside the Model</h3>
<p class="text-gray-700 text-sm mb-3">
<div class="inline-block bg-teal-600 text-white px-3 py-1 rounded text-xs font-semibold mb-3" data-i18n="dual_layer.layer_a_badge">LAYER A: INHERENT</div>
<h3 class="text-lg font-bold text-gray-900 mb-3" data-i18n="dual_layer.layer_a_title">Tractatus Inside the Model</h3>
<p class="text-gray-700 text-sm mb-3" data-i18n-html="dual_layer.layer_a_desc">
During training, the BoundaryEnforcer validates every batch. DPO alignment shapes preferences toward governed behaviour. The model <em>learns</em> to respect boundaries, prefer transparent responses, and defer values decisions to humans.
</p>
<ul class="text-gray-700 text-sm space-y-2">
<li><strong>Mechanism:</strong> Governance in the training loop</li>
<li><strong>Effect:</strong> Model tends toward governed behaviour</li>
<li><strong>Limitation:</strong> Tendencies can be overridden by adversarial prompting</li>
<li data-i18n-html="dual_layer.layer_a_item1"><strong>Mechanism:</strong> Governance in the training loop</li>
<li data-i18n-html="dual_layer.layer_a_item2"><strong>Effect:</strong> Model tends toward governed behaviour</li>
<li data-i18n-html="dual_layer.layer_a_item3"><strong>Limitation:</strong> Tendencies can be overridden by adversarial prompting</li>
</ul>
</div>
<div class="bg-indigo-50 rounded-lg p-6 border-2 border-indigo-300">
<div class="inline-block bg-indigo-600 text-white px-3 py-1 rounded text-xs font-semibold mb-3">LAYER B: ACTIVE</div>
<h3 class="text-lg font-bold text-gray-900 mb-3">Tractatus Around the Model</h3>
<p class="text-gray-700 text-sm mb-3">
<div class="inline-block bg-indigo-600 text-white px-3 py-1 rounded text-xs font-semibold mb-3" data-i18n="dual_layer.layer_b_badge">LAYER B: ACTIVE</div>
<h3 class="text-lg font-bold text-gray-900 mb-3" data-i18n="dual_layer.layer_b_title">Tractatus Around the Model</h3>
<p class="text-gray-700 text-sm mb-3" data-i18n="dual_layer.layer_b_desc">
At runtime, the full six-service governance stack operates in the Village codebase. Every interaction passes through BoundaryEnforcer, PluralisticDeliberationOrchestrator, MetacognitiveVerifier, CrossReferenceValidator, ContextPressureMonitor, and InstructionPersistenceClassifier.
</p>
<ul class="text-gray-700 text-sm space-y-2">
<li><strong>Mechanism:</strong> Six architectural services in the critical path</li>
<li><strong>Effect:</strong> Structural boundaries cannot be violated</li>
<li><strong>Limitation:</strong> Adds ~5% performance overhead per interaction</li>
<li data-i18n-html="dual_layer.layer_b_item1"><strong>Mechanism:</strong> Six architectural services in the critical path</li>
<li data-i18n-html="dual_layer.layer_b_item2"><strong>Effect:</strong> Structural boundaries cannot be violated</li>
<li data-i18n-html="dual_layer.layer_b_item3"><strong>Limitation:</strong> Adds ~5% performance overhead per interaction</li>
</ul>
</div>
</div>
<div class="bg-gray-900 rounded-lg p-6 mt-6">
<p class="text-emerald-400 font-mono text-sm mb-3 font-bold">The dual-layer principle:</p>
<p class="text-gray-300 font-mono text-sm mb-1">Training shapes <span class="text-teal-400">tendency</span>.</p>
<p class="text-gray-300 font-mono text-sm mb-4">Architecture constrains <span class="text-indigo-400">capability</span>.</p>
<p class="text-gray-400 font-mono text-xs">A model that has internalised governance rules AND operates within governance architecture</p>
<p class="text-gray-400 font-mono text-xs">produces better outputs than either approach alone. The model works WITH the guardrails,</p>
<p class="text-gray-400 font-mono text-xs">not against them &mdash; reducing compute waste and improving response quality.</p>
<p class="text-emerald-400 font-mono text-sm mb-3 font-bold" data-i18n="dual_layer.principle_title">The dual-layer principle:</p>
<p class="text-gray-300 font-mono text-sm mb-1" data-i18n-html="dual_layer.principle_line1">Training shapes <span class="text-teal-400">tendency</span>.</p>
<p class="text-gray-300 font-mono text-sm mb-4" data-i18n-html="dual_layer.principle_line2">Architecture constrains <span class="text-indigo-400">capability</span>.</p>
<p class="text-gray-400 font-mono text-xs" data-i18n="dual_layer.principle_line3">A model that has internalised governance rules AND operates within governance architecture</p>
<p class="text-gray-400 font-mono text-xs" data-i18n="dual_layer.principle_line4">produces better outputs than either approach alone. The model works WITH the guardrails,</p>
<p class="text-gray-400 font-mono text-xs" data-i18n-html="dual_layer.principle_line5">not against them &mdash; reducing compute waste and improving response quality.</p>
</div>
<div class="bg-amber-50 rounded-lg p-5 border border-amber-200 mt-4">
<p class="text-amber-900 text-sm">
<p class="text-amber-900 text-sm" data-i18n-html="dual_layer.caveat">
<strong>Honest caveat:</strong> Layer A (inherent governance via training) is designed but not yet empirically validated &mdash; training has not begun. Layer B (active governance via Village codebase) has been operating in production for 11+ months. The dual-layer thesis is an architectural commitment, not yet a demonstrated result.
</p>
</div>
@@ -298,144 +298,144 @@
<!-- Philosophical Foundations -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">Philosophical Foundations</h2>
<p class="text-gray-700 mb-6">
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="philosophy.heading">Philosophical Foundations</h2>
<p class="text-gray-700 mb-6" data-i18n-html="philosophy.intro">
Home AI's governance draws from four philosophical traditions, each contributing a specific architectural principle. These are not decorative references &mdash; they translate into concrete design decisions.
</p>
<div class="grid grid-cols-1 md:grid-cols-2 gap-4">
<div class="bg-white rounded-lg shadow-sm p-5 border border-gray-200">
<h3 class="text-lg font-bold text-gray-900 mb-2">Isaiah Berlin &mdash; Value Pluralism</h3>
<p class="text-gray-700 text-sm mb-2">
<h3 class="text-lg font-bold text-gray-900 mb-2" data-i18n-html="philosophy.berlin_title">Isaiah Berlin &mdash; Value Pluralism</h3>
<p class="text-gray-700 text-sm mb-2" data-i18n="philosophy.berlin_desc">
Values are genuinely plural and sometimes incompatible. When freedom conflicts with equality, there may be no single correct resolution. Home AI presents options without hierarchy and documents what each choice sacrifices.
</p>
<p class="text-gray-500 text-xs italic">Architectural expression: PluralisticDeliberationOrchestrator presents trade-offs; it does not resolve them.</p>
<p class="text-gray-500 text-xs italic" data-i18n="philosophy.berlin_arch">Architectural expression: PluralisticDeliberationOrchestrator presents trade-offs; it does not resolve them.</p>
</div>
<div class="bg-white rounded-lg shadow-sm p-5 border border-gray-200">
<h3 class="text-lg font-bold text-gray-900 mb-2">Ludwig Wittgenstein &mdash; Language Boundaries</h3>
<p class="text-gray-700 text-sm mb-2">
<h3 class="text-lg font-bold text-gray-900 mb-2" data-i18n-html="philosophy.wittgenstein_title">Ludwig Wittgenstein &mdash; Language Boundaries</h3>
<p class="text-gray-700 text-sm mb-2" data-i18n-html="philosophy.wittgenstein_desc">
Language shapes what can be thought and expressed. Some things that matter most resist systematic expression. Home AI acknowledges the limits of what language models can capture &mdash; particularly around grief, cultural meaning, and lived experience.
</p>
<p class="text-gray-500 text-xs italic">Architectural expression: BoundaryEnforcer defers values decisions to humans, acknowledging limits of computation.</p>
<p class="text-gray-500 text-xs italic" data-i18n="philosophy.wittgenstein_arch">Architectural expression: BoundaryEnforcer defers values decisions to humans, acknowledging limits of computation.</p>
</div>
<div class="bg-white rounded-lg shadow-sm p-5 border border-gray-200">
<h3 class="text-lg font-bold text-gray-900 mb-2">Indigenous Sovereignty &mdash; Data as Relationship</h3>
<p class="text-gray-700 text-sm mb-2">
<h3 class="text-lg font-bold text-gray-900 mb-2" data-i18n-html="philosophy.indigenous_title">Indigenous Sovereignty &mdash; Data as Relationship</h3>
<p class="text-gray-700 text-sm mb-2" data-i18n-html="philosophy.indigenous_desc">
Te Mana Raraunga (M&#257;ori Data Sovereignty), CARE Principles, and OCAP (First Nations Canada) provide frameworks where data is not property but relationship. Whakapapa (genealogy) belongs to the collective, not individuals. Consent is a community process, not an individual checkbox.
</p>
<p class="text-gray-500 text-xs italic">Architectural expression: tenant isolation, collective consent mechanisms, intergenerational stewardship.</p>
<p class="text-gray-500 text-xs italic" data-i18n="philosophy.indigenous_arch">Architectural expression: tenant isolation, collective consent mechanisms, intergenerational stewardship.</p>
</div>
<div class="bg-white rounded-lg shadow-sm p-5 border border-gray-200">
<h3 class="text-lg font-bold text-gray-900 mb-2">Christopher Alexander &mdash; Living Architecture</h3>
<p class="text-gray-700 text-sm mb-2">
<h3 class="text-lg font-bold text-gray-900 mb-2" data-i18n-html="philosophy.alexander_title">Christopher Alexander &mdash; Living Architecture</h3>
<p class="text-gray-700 text-sm mb-2" data-i18n="philosophy.alexander_desc">
Five principles guide how governance evolves: Deep Interlock (services coordinate), Structure-Preserving (changes enhance without breaking), Gradients Not Binary (intensity levels), Living Process (evidence-based evolution), Not-Separateness (governance embedded, not bolted on).
</p>
<p class="text-gray-500 text-xs italic">Architectural expression: all six governance services and the training loop architecture.</p>
<p class="text-gray-500 text-xs italic" data-i18n="philosophy.alexander_arch">Architectural expression: all six governance services and the training loop architecture.</p>
</div>
</div>
</section>
<!-- Three-Layer Governance -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">Three-Layer Governance</h2>
<p class="text-gray-700 mb-6">
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="three_layer_gov.heading">Three-Layer Governance</h2>
<p class="text-gray-700 mb-6" data-i18n="three_layer_gov.intro">
Governance operates at three levels, each with different scope and mutability.
</p>
<div class="space-y-4">
<div class="bg-emerald-50 rounded-lg p-6 border border-emerald-200">
<h3 class="text-lg font-bold text-emerald-900 mb-2">Layer 1: Platform (Immutable)</h3>
<p class="text-emerald-800 text-sm mb-2">
<h3 class="text-lg font-bold text-emerald-900 mb-2" data-i18n="three_layer_gov.layer1_title">Layer 1: Platform (Immutable)</h3>
<p class="text-emerald-800 text-sm mb-2" data-i18n="three_layer_gov.layer1_desc">
Structural constraints that apply to all communities. Tenant data isolation. Governance in the critical path. Options presented without hierarchy. These cannot be disabled by tenant administrators or individual members.
</p>
<p class="text-emerald-700 text-xs">Enforcement: architectural (BoundaryEnforcer blocks violations before they execute).</p>
<p class="text-emerald-700 text-xs" data-i18n="three_layer_gov.layer1_enforcement">Enforcement: architectural (BoundaryEnforcer blocks violations before they execute).</p>
</div>
<div class="bg-blue-50 rounded-lg p-6 border border-blue-200">
<h3 class="text-lg font-bold text-blue-900 mb-2">Layer 2: Tenant Constitution</h3>
<p class="text-blue-800 text-sm mb-2">
<h3 class="text-lg font-bold text-blue-900 mb-2" data-i18n="three_layer_gov.layer2_title">Layer 2: Tenant Constitution</h3>
<p class="text-blue-800 text-sm mb-2" data-i18n-html="three_layer_gov.layer2_desc">
Rules defined by community administrators. Content handling policies (e.g., "deceased members require moderator review"), cultural protocols (e.g., M&#257;ori tangi customs), visibility defaults, and AI training consent models. Each community configures its own constitution within Layer 1 constraints.
</p>
<p class="text-blue-700 text-xs">Enforcement: constitutional rules validated by CrossReferenceValidator per tenant.</p>
<p class="text-blue-700 text-xs" data-i18n="three_layer_gov.layer2_enforcement">Enforcement: constitutional rules validated by CrossReferenceValidator per tenant.</p>
</div>
<div class="bg-purple-50 rounded-lg p-6 border border-purple-200">
<h3 class="text-lg font-bold text-purple-900 mb-2">Layer 3: Adopted Wisdom Traditions</h3>
<p class="text-purple-800 text-sm mb-2">
<h3 class="text-lg font-bold text-purple-900 mb-2" data-i18n="three_layer_gov.layer3_title">Layer 3: Adopted Wisdom Traditions</h3>
<p class="text-purple-800 text-sm mb-2" data-i18n="three_layer_gov.layer3_desc">
Individual members and communities can adopt principles from wisdom traditions to influence how Home AI frames responses. These are voluntary, reversible, and transparent. They influence presentation, not content access. Multiple traditions can be adopted simultaneously; conflicts are resolved by the member, not the AI.
</p>
<p class="text-purple-700 text-xs">Enforcement: framing hints in response generation. Override always available.</p>
<p class="text-purple-700 text-xs" data-i18n="three_layer_gov.layer3_enforcement">Enforcement: framing hints in response generation. Override always available.</p>
</div>
</div>
</section>
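The three layers above imply an evaluation order: immutable platform constraints first, tenant constitution second, with Layer 3 influencing only presentation. A sketch under that assumption, with invented rule names and an invented action format:

```python
# Layered rule evaluation: Layer 1 cannot be overridden by Layer 2.
PLATFORM_RULES = {
    "cross_tenant_access": lambda a: a["tenant"] == a["target_tenant"],
}

def evaluate(action, tenant_rules):
    for name, rule in PLATFORM_RULES.items():   # Layer 1: immutable
        if not rule(action):
            return (False, f"platform:{name}")
    for name, rule in tenant_rules.items():     # Layer 2: constitution
        if not rule(action):
            return (False, f"tenant:{name}")
    return (True, "allowed")

# Example tenant constitution: deceased members require moderator review.
tenant = {"deceased_review": lambda a: not a.get("deceased") or a.get("moderated")}
print(evaluate({"tenant": "t1", "target_tenant": "t2"}, tenant))
# (False, 'platform:cross_tenant_access')
```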
<!-- Wisdom Traditions -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">Wisdom Traditions</h2>
<p class="text-gray-700 mb-6">
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="wisdom.heading">Wisdom Traditions</h2>
<p class="text-gray-700 mb-6" data-i18n="wisdom.intro">
Home AI offers thirteen wisdom traditions that members can adopt to guide AI behaviour. Each tradition has been validated against the Stanford Encyclopedia of Philosophy as the primary scholarly reference. Adoption is voluntary, transparent, and reversible.
</p>
<div class="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-3 gap-3">
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Berlin: Value Pluralism</h4>
<p class="text-gray-600 text-xs">Present options without ranking; acknowledge what each choice sacrifices.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.berlin_title">Berlin: Value Pluralism</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.berlin_desc">Present options without ranking; acknowledge what each choice sacrifices.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Stoic: Equanimity and Virtue</h4>
<p class="text-gray-600 text-xs">Focus on what can be controlled; emphasise character in ancestral stories.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.stoic_title">Stoic: Equanimity and Virtue</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.stoic_desc">Focus on what can be controlled; emphasise character in ancestral stories.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Weil: Attention to Affliction</h4>
<p class="text-gray-600 text-xs">Resist summarising grief; preserve names and specifics rather than abstracting.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.weil_title">Weil: Attention to Affliction</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.weil_desc">Resist summarising grief; preserve names and specifics rather than abstracting.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Care Ethics: Relational Responsibility</h4>
<p class="text-gray-600 text-xs">Attend to how content affects specific people, not abstract principles.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.care_title">Care Ethics: Relational Responsibility</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.care_desc">Attend to how content affects specific people, not abstract principles.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Confucian: Relational Duty</h4>
<p class="text-gray-600 text-xs">Frame stories in terms of family roles and reciprocal obligations.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.confucian_title">Confucian: Relational Duty</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.confucian_desc">Frame stories in terms of family roles and reciprocal obligations.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Buddhist: Impermanence</h4>
<p class="text-gray-600 text-xs">Acknowledge that memories and interpretations change; extend compassion.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.buddhist_title">Buddhist: Impermanence</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.buddhist_desc">Acknowledge that memories and interpretations change; extend compassion.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Ubuntu: Communal Personhood</h4>
<p class="text-gray-600 text-xs">"I am because we are." Stories belong to the community, not the individual.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.ubuntu_title">Ubuntu: Communal Personhood</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.ubuntu_desc">"I am because we are." Stories belong to the community, not the individual.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">African Diaspora: Sankofa</h4>
<p class="text-gray-600 text-xs">Preserve what was nearly lost; honour fictive kinship and chosen family.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.african_title">African Diaspora: Sankofa</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.african_desc">Preserve what was nearly lost; honour fictive kinship and chosen family.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Indigenous/M&#257;ori: Whakapapa</h4>
<p class="text-gray-600 text-xs">Kinship with ancestors, land, and descendants. Collective ownership of knowledge.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n-html="wisdom.indigenous_title">Indigenous/M&#257;ori: Whakapapa</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.indigenous_desc">Kinship with ancestors, land, and descendants. Collective ownership of knowledge.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Jewish: Tikkun Olam</h4>
<p class="text-gray-600 text-xs">Repair, preserve memory (zachor), uphold dignity even of difficult relatives.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.jewish_title">Jewish: Tikkun Olam</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.jewish_desc">Repair, preserve memory (zachor), uphold dignity even of difficult relatives.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Islamic: Mercy and Justice</h4>
<p class="text-gray-600 text-xs">Balance rahma (mercy) with adl (justice) in sensitive content.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.islamic_title">Islamic: Mercy and Justice</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.islamic_desc">Balance rahma (mercy) with adl (justice) in sensitive content.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Hindu: Dharmic Order</h4>
<p class="text-gray-600 text-xs">Role-appropriate duties within larger order; karma as consequence, not punishment.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.hindu_title">Hindu: Dharmic Order</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.hindu_desc">Role-appropriate duties within larger order; karma as consequence, not punishment.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm mb-1">Alexander: Living Architecture</h4>
<p class="text-gray-600 text-xs">Governance as living system; changes emerge from operational experience.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="wisdom.alexander_title">Alexander: Living Architecture</h4>
<p class="text-gray-600 text-xs" data-i18n="wisdom.alexander_desc">Governance as living system; changes emerge from operational experience.</p>
</div>
</div>
<div class="bg-gray-50 rounded-lg p-5 border border-gray-200 mt-4">
<p class="text-gray-700 text-sm">
<p class="text-gray-700 text-sm" data-i18n-html="wisdom.disclaimer">
<strong>What this is not:</strong> Selecting "Buddhist" does not mean the AI practises Buddhism. These are framing tendencies &mdash; they influence how the AI presents options, not what content is accessible. A member can always override tradition-influenced framing on any response. The system does not claim algorithmic moral reasoning.
</p>
</div>
@@ -444,31 +444,31 @@

<!-- Indigenous Data Sovereignty -->
<section class="mb-14">
<div class="bg-gradient-to-r from-blue-50 to-purple-50 rounded-xl p-8 border border-blue-200">
<h2 class="text-2xl font-bold text-gray-900 mb-4">Indigenous Data Sovereignty</h2>
<p class="text-gray-700 mb-4">
<h2 class="text-2xl font-bold text-gray-900 mb-4" data-i18n="indigenous.heading">Indigenous Data Sovereignty</h2>
<p class="text-gray-700 mb-4" data-i18n="indigenous.intro">
Indigenous data sovereignty differs fundamentally from Western privacy models. Where Western privacy centres on individual rights and consent-as-checkbox, indigenous frameworks centre on collective rights, community process, and intergenerational stewardship.
</p>
<div class="grid grid-cols-1 md:grid-cols-3 gap-4 mb-4">
<div class="bg-white rounded-lg p-4">
<h4 class="font-bold text-gray-900 text-sm mb-1">Te Mana Raraunga</h4>
<p class="text-gray-600 text-xs">M&#257;ori Data Sovereignty. Rangatiratanga (self-determination), kaitiakitanga (guardianship for future generations), whanaungatanga (kinship as unified entity).</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="indigenous.tmr_title">Te Mana Raraunga</h4>
<p class="text-gray-600 text-xs" data-i18n-html="indigenous.tmr_desc">M&#257;ori Data Sovereignty. Rangatiratanga (self-determination), kaitiakitanga (guardianship for future generations), whanaungatanga (kinship as unified entity).</p>
</div>
<div class="bg-white rounded-lg p-4">
<h4 class="font-bold text-gray-900 text-sm mb-1">CARE Principles</h4>
<p class="text-gray-600 text-xs">Global Indigenous Data Alliance. Collective Benefit, Authority to Control, Responsibility, Ethics. Data ecosystems designed for indigenous benefit.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="indigenous.care_title">CARE Principles</h4>
<p class="text-gray-600 text-xs" data-i18n="indigenous.care_desc">Global Indigenous Data Alliance. Collective Benefit, Authority to Control, Responsibility, Ethics. Data ecosystems designed for indigenous benefit.</p>
</div>
<div class="bg-white rounded-lg p-4">
<h4 class="font-bold text-gray-900 text-sm mb-1">OCAP</h4>
<p class="text-gray-600 text-xs">First Nations Canada. Ownership, Control, Access, Possession. Communities physically control their data.</p>
<h4 class="font-bold text-gray-900 text-sm mb-1" data-i18n="indigenous.ocap_title">OCAP</h4>
<p class="text-gray-600 text-xs" data-i18n="indigenous.ocap_desc">First Nations Canada. Ownership, Control, Access, Possession. Communities physically control their data.</p>
</div>
</div>
<p class="text-gray-700 mb-4">
<p class="text-gray-700 mb-4" data-i18n-html="indigenous.implications">
Concrete architectural implications: whakapapa (genealogy) cannot be atomised into individual data points. Tapu (sacred/restricted) content triggers cultural review before AI processing. Consent for AI training requires wh&#257;nau consensus, not individual opt-in. Elder (kaum&#257;tua) approval is required for training on sacred genealogies.
</p>
<p class="text-gray-600 text-sm italic">
<p class="text-gray-600 text-sm italic" data-i18n-html="indigenous.note">
These principles are informed by Te Tiriti o Waitangi and predate Western technology governance by centuries. We consider them prior art, not novel invention. Actual implementation requires ongoing consultation with M&#257;ori cultural advisors &mdash; this specification is a starting point.
</p>
</div>
@@ -476,36 +476,36 @@
<!-- Training Infrastructure -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">Training Infrastructure</h2>
<p class="text-gray-700 mb-6">
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="infrastructure.heading">Training Infrastructure</h2>
<p class="text-gray-700 mb-6" data-i18n="infrastructure.intro">
Home AI follows a "train local, deploy remote" model. The training hardware sits in the developer's home. Trained model weights are deployed to production servers for inference. This keeps training costs low and training data under physical control.
</p>
<div class="grid grid-cols-1 md:grid-cols-2 gap-4">
<div class="bg-white rounded-lg shadow-sm p-5 border border-gray-200">
<h3 class="text-lg font-bold text-gray-900 mb-2">Local Training</h3>
<h3 class="text-lg font-bold text-gray-900 mb-2" data-i18n="infrastructure.local_title">Local Training</h3>
<ul class="text-gray-700 text-sm space-y-2">
<li>Consumer GPU with 24GB VRAM via external enclosure</li>
<li>QLoRA fine-tuning (4-bit quantisation fits in VRAM budget)</li>
<li>DPO (Direct Preference Optimization) &mdash; requires only 2 models in memory vs PPO's 4</li>
<li>Overnight training runs &mdash; compatible with off-grid solar power</li>
<li>Sustained power draw under 500W</li>
<li data-i18n="infrastructure.local_item1">Consumer GPU with 24GB VRAM via external enclosure</li>
<li data-i18n-html="infrastructure.local_item2">QLoRA fine-tuning (4-bit quantisation fits in VRAM budget)</li>
<li data-i18n-html="infrastructure.local_item3">DPO (Direct Preference Optimization) &mdash; requires only 2 models in memory vs PPO's 4</li>
<li data-i18n-html="infrastructure.local_item4">Overnight training runs &mdash; compatible with off-grid solar power</li>
<li data-i18n="infrastructure.local_item5">Sustained power draw under 500W</li>
</ul>
</div>
<div class="bg-white rounded-lg shadow-sm p-5 border border-gray-200">
<h3 class="text-lg font-bold text-gray-900 mb-2">Remote Inference</h3>
<h3 class="text-lg font-bold text-gray-900 mb-2" data-i18n="infrastructure.remote_title">Remote Inference</h3>
<ul class="text-gray-700 text-sm space-y-2">
<li>Model weights deployed to production servers (OVH France, Catalyst NZ)</li>
<li>Inference via Ollama with per-tenant adapter loading</li>
<li>Hybrid GPU/CPU architecture with health monitoring</li>
<li>Home GPU available via WireGuard VPN as primary inference engine</li>
<li>CPU fallback ensures availability when GPU is offline</li>
<li data-i18n="infrastructure.remote_item1">Model weights deployed to production servers (OVH France, Catalyst NZ)</li>
<li data-i18n="infrastructure.remote_item2">Inference via Ollama with per-tenant adapter loading</li>
<li data-i18n="infrastructure.remote_item3">Hybrid GPU/CPU architecture with health monitoring</li>
<li data-i18n="infrastructure.remote_item4">Home GPU available via WireGuard VPN as primary inference engine</li>
<li data-i18n="infrastructure.remote_item5">CPU fallback ensures availability when GPU is offline</li>
</ul>
</div>
</div>
<div class="bg-gray-50 rounded-lg p-5 border border-gray-200 mt-4">
<p class="text-gray-700 text-sm">
<p class="text-gray-700 text-sm" data-i18n-html="infrastructure.why_consumer">
<strong>Why consumer hardware?</strong> The SLL thesis is that sovereign AI training should be accessible, not reserved for organisations with data centre budgets. A single consumer GPU can fine-tune a 7B model efficiently via QLoRA. The entire training infrastructure fits on a desk.
</p>
</div>
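The hybrid GPU/CPU arrangement above boils down to health-checked preference ordering. A minimal sketch, with assumed names (the real deployment speaks to Ollama over WireGuard): try the home GPU first, fall back to CPU so inference stays available when the GPU is offline.

```javascript
// Illustrative backend selection for the "train local, deploy remote" setup:
// backends are listed in preference order, each with a health flag that a
// monitor would refresh periodically.
function pickBackend(backends) {
  for (const backend of backends) {
    if (backend.healthy) return backend.name;
  }
  throw new Error("no inference backend available");
}
```

With `[{ name: "home-gpu", healthy: false }, { name: "cpu-fallback", healthy: true }]` this degrades to the CPU path rather than failing, which is the availability guarantee the list above describes.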
@@ -513,57 +513,57 @@
<!-- Bias and Verification -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">Bias Documentation and Verification</h2>
<p class="text-gray-700 mb-6">
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="bias.heading">Bias Documentation and Verification</h2>
<p class="text-gray-700 mb-6" data-i18n="bias.intro">
Home AI operates in the domain of family storytelling, which carries specific bias risks. Six bias categories have been documented with detection prompts, debiasing examples, and evaluation criteria.
</p>
<div class="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-3 gap-3">
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm">Family Structure</h4>
<p class="text-gray-600 text-xs">Nuclear family as default; same-sex parents, blended families, single parents treated as normative.</p>
<h4 class="font-bold text-gray-900 text-sm" data-i18n="bias.family_title">Family Structure</h4>
<p class="text-gray-600 text-xs" data-i18n="bias.family_desc">Nuclear family as default; same-sex parents, blended families, single parents treated as normative.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm">Elder Representation</h4>
<p class="text-gray-600 text-xs">Deficit framing of aging; elders as active agents with expertise, not passive subjects.</p>
<h4 class="font-bold text-gray-900 text-sm" data-i18n="bias.elder_title">Elder Representation</h4>
<p class="text-gray-600 text-xs" data-i18n="bias.elder_desc">Deficit framing of aging; elders as active agents with expertise, not passive subjects.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm">Cultural/Religious</h4>
<p class="text-gray-600 text-xs">Christian-normative assumptions; equal treatment of all cultural practices and observances.</p>
<h4 class="font-bold text-gray-900 text-sm" data-i18n="bias.cultural_title">Cultural/Religious</h4>
<p class="text-gray-600 text-xs" data-i18n="bias.cultural_desc">Christian-normative assumptions; equal treatment of all cultural practices and observances.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm">Geographic/Place</h4>
<p class="text-gray-600 text-xs">Anglo-American defaults; location-appropriate references and cultural context.</p>
<h4 class="font-bold text-gray-900 text-sm" data-i18n="bias.geographic_title">Geographic/Place</h4>
<p class="text-gray-600 text-xs" data-i18n="bias.geographic_desc">Anglo-American defaults; location-appropriate references and cultural context.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm">Grief/Trauma</h4>
<p class="text-gray-600 text-xs">Efficiency over sensitivity; pacing, attention to particulars, no premature closure.</p>
<h4 class="font-bold text-gray-900 text-sm" data-i18n="bias.grief_title">Grief/Trauma</h4>
<p class="text-gray-600 text-xs" data-i18n="bias.grief_desc">Efficiency over sensitivity; pacing, attention to particulars, no premature closure.</p>
</div>
<div class="bg-white rounded-lg p-4 border border-gray-200">
<h4 class="font-bold text-gray-900 text-sm">Naming Conventions</h4>
<p class="text-gray-600 text-xs">Western name-order assumptions; correct handling of patronymics, honorifics, diacritics.</p>
<h4 class="font-bold text-gray-900 text-sm" data-i18n="bias.naming_title">Naming Conventions</h4>
<p class="text-gray-600 text-xs" data-i18n="bias.naming_desc">Western name-order assumptions; correct handling of patronymics, honorifics, diacritics.</p>
</div>
</div>
<div class="bg-white rounded-lg shadow-sm p-5 border border-gray-200 mt-6">
<h3 class="text-lg font-bold text-gray-900 mb-3">Verification Framework</h3>
<h3 class="text-lg font-bold text-gray-900 mb-3" data-i18n="bias.verification_title">Verification Framework</h3>
<div class="grid grid-cols-1 md:grid-cols-2 gap-4">
<div>
<h4 class="font-bold text-gray-900 text-sm mb-2">Governance Metrics</h4>
<h4 class="font-bold text-gray-900 text-sm mb-2" data-i18n="bias.metrics_title">Governance Metrics</h4>
<ul class="text-gray-700 text-xs space-y-1">
<li>Tenant leak rate: target 0%</li>
<li>Constitutional violations: target &lt;1%</li>
<li>Value framework compliance: target &gt;80%</li>
<li>Refusal appropriateness: target &gt;95%</li>
<li data-i18n="bias.metrics_item1">Tenant leak rate: target 0%</li>
<li data-i18n="bias.metrics_item2">Constitutional violations: target &lt;1%</li>
<li data-i18n="bias.metrics_item3">Value framework compliance: target &gt;80%</li>
<li data-i18n="bias.metrics_item4">Refusal appropriateness: target &gt;95%</li>
</ul>
</div>
<div>
<h4 class="font-bold text-gray-900 text-sm mb-2">Testing Methods</h4>
<h4 class="font-bold text-gray-900 text-sm mb-2" data-i18n="bias.testing_title">Testing Methods</h4>
<ul class="text-gray-700 text-xs space-y-1">
<li>Secret phrase probes for tenant isolation</li>
<li>Constraint persistence after N training rounds</li>
<li>Red-team adversarial prompts (jailbreak, injection, cross-tenant)</li>
<li>Human review sampling (5&ndash;100% depending on content type)</li>
<li data-i18n="bias.testing_item1">Secret phrase probes for tenant isolation</li>
<li data-i18n="bias.testing_item2">Constraint persistence after N training rounds</li>
<li data-i18n="bias.testing_item3">Red-team adversarial prompts (jailbreak, injection, cross-tenant)</li>
<li data-i18n-html="bias.testing_item4">Human review sampling (5&ndash;100% depending on content type)</li>
</ul>
</div>
</div>
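The secret-phrase probe in the testing list can be sketched as a leak-rate computation. This is a simplified illustration under assumed data shapes, not the project's actual test harness: plant a unique phrase in one tenant's data, then scan responses generated for other tenants; any cross-tenant hit counts against the 0% target.

```javascript
// Hedged sketch of the tenant-isolation probe: probes carry a tenant and a
// planted phrase; responses carry the tenant they were generated for.
function tenantLeakRate(probes, responses) {
  let checks = 0;
  let leaks = 0;
  for (const probe of probes) {
    for (const response of responses) {
      if (response.tenant === probe.tenant) continue; // same tenant: allowed
      checks += 1;
      if (response.text.includes(probe.phrase)) leaks += 1;
    }
  }
  return checks === 0 ? 0 : leaks / checks;
}
```

A passing run is `tenantLeakRate(probes, responses) === 0`; any nonzero value means a planted phrase crossed a tenant boundary.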
@@ -572,67 +572,67 @@
<!-- What's Live Today -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">What's Live Today</h2>
<p class="text-gray-700 mb-6">
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="live_today.heading">What's Live Today</h2>
<p class="text-gray-700 mb-6" data-i18n="live_today.intro">
Home AI currently operates in production with the following governed features. These run under the full six-service governance stack.
</p>
<div class="grid grid-cols-1 md:grid-cols-2 gap-4">
<div class="bg-white rounded-lg shadow-sm p-5 border border-gray-200">
<h3 class="font-bold text-gray-900 mb-2">RAG-Based Help</h3>
<p class="text-gray-700 text-sm">Vector search retrieves relevant documentation, filtered by member permissions. Responses grounded in retrieved documents, not training data alone.</p>
<h3 class="font-bold text-gray-900 mb-2" data-i18n="live_today.rag_title">RAG-Based Help</h3>
<p class="text-gray-700 text-sm" data-i18n="live_today.rag_desc">Vector search retrieves relevant documentation, filtered by member permissions. Responses grounded in retrieved documents, not training data alone.</p>
</div>
<div class="bg-white rounded-lg shadow-sm p-5 border border-gray-200">
<h3 class="font-bold text-gray-900 mb-2">Document OCR</h3>
<p class="text-gray-700 text-sm">Text extraction from uploaded documents. Results stored within member scope, not shared across tenants or used for training without consent.</p>
<h3 class="font-bold text-gray-900 mb-2" data-i18n="live_today.ocr_title">Document OCR</h3>
<p class="text-gray-700 text-sm" data-i18n="live_today.ocr_desc">Text extraction from uploaded documents. Results stored within member scope, not shared across tenants or used for training without consent.</p>
</div>
<div class="bg-white rounded-lg shadow-sm p-5 border border-gray-200">
<h3 class="font-bold text-gray-900 mb-2">Story Assistance</h3>
<p class="text-gray-700 text-sm">Writing prompts, structural advice, narrative enhancement. Cultural context decisions deferred to the storyteller, not resolved by the AI.</p>
<h3 class="font-bold text-gray-900 mb-2" data-i18n="live_today.story_title">Story Assistance</h3>
<p class="text-gray-700 text-sm" data-i18n="live_today.story_desc">Writing prompts, structural advice, narrative enhancement. Cultural context decisions deferred to the storyteller, not resolved by the AI.</p>
</div>
<div class="bg-white rounded-lg shadow-sm p-5 border border-gray-200">
<h3 class="font-bold text-gray-900 mb-2">AI Memory Transparency</h3>
<p class="text-gray-700 text-sm">Members view and control what the AI remembers. Independent consent for triage memory, OCR memory, and summarisation memory.</p>
<h3 class="font-bold text-gray-900 mb-2" data-i18n="live_today.memory_title">AI Memory Transparency</h3>
<p class="text-gray-700 text-sm" data-i18n="live_today.memory_desc">Members view and control what the AI remembers. Independent consent for triage memory, OCR memory, and summarisation memory.</p>
</div>
</div>
</section>
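The RAG-based help card above hinges on one ordering detail: permission filtering happens before any document text can reach the model prompt. A minimal sketch under assumed document and member shapes (not the production retrieval code):

```javascript
// Illustrative permission-filtered retrieval: candidates come back from
// vector search already scored; only documents the member may see survive,
// then the top-k by score are kept for the prompt.
function retrieveForMember(candidates, member, k) {
  return candidates
    .filter((doc) => doc.allowedRoles.some((role) => member.roles.includes(role)))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((doc) => doc.id);
}
```

Filtering first means a high-scoring but forbidden document never competes for a prompt slot, which is what grounds responses in member-visible documentation only.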
<!-- Limitations -->
<section class="mb-14">
<h2 class="text-3xl font-bold text-gray-900 mb-6">Limitations and Open Questions</h2>
<h2 class="text-3xl font-bold text-gray-900 mb-6" data-i18n="limitations.heading">Limitations and Open Questions</h2>
<div class="bg-amber-50 border-l-4 border-amber-500 p-6 rounded-r-lg">
<ul class="space-y-3 text-amber-800">
<li class="flex items-start">
<span class="mr-2 font-bold">&bull;</span>
<span><strong>Training not yet begun:</strong> The SLL architecture is designed and documented. Hardware is ordered. But no model has been trained yet. Claims about training-time governance are architectural design, not empirical results.</span>
<span data-i18n-html="limitations.item1"><strong>Training not yet begun:</strong> The SLL architecture is designed and documented. Hardware is ordered. But no model has been trained yet. Claims about training-time governance are architectural design, not empirical results.</span>
</li>
<li class="flex items-start">
<span class="mr-2 font-bold">&bull;</span>
<span><strong>Limited deployment:</strong> Home AI operates across four federated tenants within one platform built by the framework developer. Governance effectiveness cannot be generalised without independent deployments.</span>
<span data-i18n-html="limitations.item2"><strong>Limited deployment:</strong> Home AI operates across four federated tenants within one platform built by the framework developer. Governance effectiveness cannot be generalised without independent deployments.</span>
</li>
<li class="flex items-start">
<span class="mr-2 font-bold">&bull;</span>
<span><strong>Self-reported metrics:</strong> Performance and safety figures are reported by the same team that built the system. Independent audit is planned but not yet conducted.</span>
<span data-i18n-html="limitations.item3"><strong>Self-reported metrics:</strong> Performance and safety figures are reported by the same team that built the system. Independent audit is planned but not yet conducted.</span>
</li>
<li class="flex items-start">
<span class="mr-2 font-bold">&bull;</span>
<span><strong>Tradition operationalisation:</strong> Can rich philosophical traditions be authentically reduced to framing hints? A member selecting "Buddhist" does not mean they understand or practise Buddhism. This risks superficiality.</span>
<span data-i18n-html="limitations.item4"><strong>Tradition operationalisation:</strong> Can rich philosophical traditions be authentically reduced to framing hints? A member selecting "Buddhist" does not mean they understand or practise Buddhism. This risks superficiality.</span>
</li>
<li class="flex items-start">
<span class="mr-2 font-bold">&bull;</span>
<span><strong>Training persistence unknown:</strong> Whether governance constraints survive hundreds of training rounds without degradation is an open research question. Drift detection is designed but untested.</span>
<span data-i18n-html="limitations.item5"><strong>Training persistence unknown:</strong> Whether governance constraints survive hundreds of training rounds without degradation is an open research question. Drift detection is designed but untested.</span>
</li>
<li class="flex items-start">
<span class="mr-2 font-bold">&bull;</span>
<span><strong>Adversarial testing limited:</strong> The governance stack has not been subjected to systematic adversarial evaluation. Red-teaming is a priority.</span>
<span data-i18n-html="limitations.item6"><strong>Adversarial testing limited:</strong> The governance stack has not been subjected to systematic adversarial evaluation. Red-teaming is a priority.</span>
</li>
<li class="flex items-start">
<span class="mr-2 font-bold">&bull;</span>
<span><strong>Scale unknown:</strong> Governance overhead (~5% per interaction) is measured at current scale. Whether this holds under high throughput is untested.</span>
<span data-i18n-html="limitations.item7"><strong>Scale unknown:</strong> Governance overhead (~5% per interaction) is measured at current scale. Whether this holds under high throughput is untested.</span>
</li>
<li class="flex items-start">
<span class="mr-2 font-bold">&bull;</span>
<span><strong>Cultural validation needed:</strong> Indigenous knowledge module specifications require ongoing consultation with M&#257;ori cultural advisors. The documentation is a starting point, not a final authority.</span>
<span data-i18n-html="limitations.item8"><strong>Cultural validation needed:</strong> Indigenous knowledge module specifications require ongoing consultation with M&#257;ori cultural advisors. The documentation is a starting point, not a final authority.</span>
</li>
</ul>
</div>
@@ -640,23 +640,23 @@
<!-- Further Reading -->
<section class="mb-8">
<h2 class="text-2xl font-bold text-gray-900 mb-6">Further Reading</h2>
<h2 class="text-2xl font-bold text-gray-900 mb-6" data-i18n="further_reading.heading">Further Reading</h2>
<div class="grid grid-cols-1 md:grid-cols-2 gap-4">
<a href="/architecture.html" class="block bg-white rounded-lg shadow-sm p-5 border border-gray-200 hover:shadow-md hover:-translate-y-0.5 transition-all">
<h3 class="font-bold text-gray-900 mb-1">System Architecture</h3>
<p class="text-sm text-gray-600">Five architectural principles and six governance services</p>
<h3 class="font-bold text-gray-900 mb-1" data-i18n="further_reading.arch_title">System Architecture</h3>
<p class="text-sm text-gray-600" data-i18n="further_reading.arch_desc">Five architectural principles and six governance services</p>
</a>
<a href="/village-case-study.html" class="block bg-white rounded-lg shadow-sm p-5 border border-gray-200 hover:shadow-md hover:-translate-y-0.5 transition-all">
<h3 class="font-bold text-gray-900 mb-1">Village Case Study</h3>
<p class="text-sm text-gray-600">Tractatus in production &mdash; metrics, evidence, and honest limitations</p>
<h3 class="font-bold text-gray-900 mb-1" data-i18n="further_reading.case_title">Village Case Study</h3>
<p class="text-sm text-gray-600" data-i18n-html="further_reading.case_desc">Tractatus in production &mdash; metrics, evidence, and honest limitations</p>
</a>
<a href="/architectural-alignment.html" class="block bg-white rounded-lg shadow-sm p-5 border border-gray-200 hover:shadow-md hover:-translate-y-0.5 transition-all">
<h3 class="font-bold text-gray-900 mb-1">Architectural Alignment Paper</h3>
<p class="text-sm text-gray-600">Academic paper on governance during training</p>
<h3 class="font-bold text-gray-900 mb-1" data-i18n="further_reading.paper_title">Architectural Alignment Paper</h3>
<p class="text-sm text-gray-600" data-i18n="further_reading.paper_desc">Academic paper on governance during training</p>
</a>
<a href="/researcher.html" class="block bg-white rounded-lg shadow-sm p-5 border border-gray-200 hover:shadow-md hover:-translate-y-0.5 transition-all">
<h3 class="font-bold text-gray-900 mb-1">For Researchers</h3>
<p class="text-sm text-gray-600">Open questions, collaboration opportunities, and data access</p>
<h3 class="font-bold text-gray-900 mb-1" data-i18n="further_reading.researcher_title">For Researchers</h3>
<p class="text-sm text-gray-600" data-i18n="further_reading.researcher_desc">Open questions, collaboration opportunities, and data access</p>
</a>
</div>
</section>
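The `data-i18n` and `data-i18n-html` attributes added throughout this page resolve dotted keys (e.g. `breadcrumb.home`) against a locale JSON file like the one below. A sketch of how such a loader could work, not the site's actual script: `resolveKey` walks the nested object, and the applier sets `textContent` for plain keys but `innerHTML` for `data-i18n-html` so entities such as `&mdash;` render.

```javascript
// Walk a dotted key ("hero.badge") through a nested locale object;
// returns undefined for any missing segment.
function resolveKey(locale, key) {
  return key.split(".").reduce(
    (node, part) => (node && typeof node === "object" ? node[part] : undefined),
    locale
  );
}

// DOM applier (illustrative; assumes a browser environment).
function applyTranslations(root, locale) {
  for (const el of root.querySelectorAll("[data-i18n]")) {
    const text = resolveKey(locale, el.getAttribute("data-i18n"));
    if (text !== undefined) el.textContent = text;
  }
  for (const el of root.querySelectorAll("[data-i18n-html]")) {
    const html = resolveKey(locale, el.getAttribute("data-i18n-html"));
    if (html !== undefined) el.innerHTML = html; // HTML entities render
  }
}
```

Missing keys leave the English fallback text in place rather than blanking the element, which is why every element keeps its English content in the markup.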


@@ -1,2 +1,255 @@
{
"breadcrumb": {
"home": "Startseite",
"current": "Home AI"
},
"hero": {
"badge": "SOUVERÄNES, LOKAL TRAINIERTES SPRACHMODELL",
"title": "Home AI",
"subtitle": "Ein Sprachmodell, bei dem die Gemeinschaft die Trainingsdaten, die Modellgewichte und die Steuerungsregeln kontrolliert. Nicht nur geregelte Inferenz &mdash; geregeltes Training.",
"status": "<strong>Status:</strong> Home AI arbeitet in der Produktion für Inferenz. Die souveräne Trainingspipeline ist entworfen und dokumentiert; die Hardware ist bestellt. Das Training hat noch nicht begonnen. Auf dieser Seite werden sowohl die derzeitigen Fähigkeiten als auch die geplante Architektur beschrieben."
},
"sll": {
"heading": "Was ist eine SLL?",
"intro": "Ein <strong>SLL</strong> (souveränes, lokal trainiertes Sprachmodell) unterscheidet sich sowohl von LLMs als auch von SLMs. Der Unterschied ist nicht die Größe &mdash; es ist die Kontrolle.",
"llm_title": "LLM",
"llm_subtitle": "Großes Sprachmodell",
"llm_item1": "Training: anbietergesteuert",
"llm_item2": "Daten: in großem Umfang gescrapt",
"llm_item3": "Governance: Bedingungen des Anbieters",
"llm_item4": "Benutzerkontrolle: keine",
"slm_title": "SLM",
"slm_subtitle": "Kleines Sprachmodell",
"slm_item1": "Training: anbietergesteuert",
"slm_item2": "Daten: vom Anbieter kuratiert",
"slm_item3": "Governance: teilweise (Feinabstimmung)",
"slm_item4": "Benutzerkontrolle: eingeschränkt",
"sll_title": "SLL",
"sll_subtitle": "Souverän lokal trainiert",
"sll_item1": "Training: von der Gemeinschaft kontrolliert",
"sll_item2": "Daten: im Besitz der Gemeinschaft",
"sll_item3": "Governance: architektonisch erzwungen",
"sll_item4": "Benutzerkontrolle: vollständig",
"tradeoff": "Der ehrliche Kompromiss: Ein SLL ist ein weniger leistungsfähiges System, das Ihren Interessen dient, und nicht ein leistungsfähigeres, das den Interessen eines anderen dient. Wir halten dies für einen akzeptablen Tausch."
},
"two_model": {
"heading": "Zwei-Modelle-Architektur",
"intro": "Home AI verwendet zwei Modelle unterschiedlicher Größe, die nach Aufgabenkomplexität geroutet werden. Dabei handelt es sich nicht um einen Ausweichmechanismus &mdash; jedes Modell ist für seine Rolle optimiert.",
"fast_title": "3B Modell &mdash; Schneller Assistent",
"fast_desc": "Bearbeitet Hilfeanfragen, Tooltips, Fehlererklärungen, kurze Zusammenfassungen und Übersetzungen. Angestrebte Antwortzeit: unter 5 Sekunden bis zur vollständigen Antwort.",
"fast_routing": "Routing-Auslöser: einfache Abfragen, bekannte FAQ-Muster, einstufige Aufgaben.",
"deep_title": "8B Modell &mdash; Deep Reasoning",
"deep_desc": "Ermöglicht die Erstellung von Lebensgeschichten, Jahresrückblicken, komplexen Zusammenfassungen und sensibler Korrespondenz. Angestrebte Antwortzeit: unter 90 Sekunden.",
"deep_routing": "Routing-Auslöser: Schlüsselwörter wie \"alles über\", Multi-Source-Abruf, Trauer/Trauma-Marker.",
"footer": "Beide Modelle arbeiten unter demselben Governance-Stack. Die Routing-Entscheidung selbst unterliegt der Governance &mdash; der ContextPressureMonitor kann das Routing außer Kraft setzen, wenn der Zustand der Sitzung dies erfordert."
},
"training_tiers": {
"heading": "Drei Trainingsstufen",
"intro": "Das Training ist nicht monolithisch. Es gibt drei Stufen mit unterschiedlichen Geltungsbereichen, jeweils mit entsprechenden Governance-Einschränkungen.",
"tier1_title": "Stufe 1: Plattform-Basis",
"tier1_badge": "Alle Gemeinschaften",
"tier1_desc": "Trainiert auf der Dokumentation der Plattform, der Philosophie, den Funktionsleitfäden und den FAQ-Inhalten. Vermittelt ein grundlegendes Verständnis dafür, wie Village funktioniert, was die Werte von Home AI sind und wie man Mitgliedern bei der Navigation auf der Plattform hilft.",
"tier1_update": "Aktualisierungshäufigkeit: wöchentlich während der Betaphase, vierteljährlich ab allgemeiner Verfügbarkeit (GA). Trainingsmethode: QLoRA-Feinabstimmung.",
"tier2_title": "Stufe 2: Mandanten-Adapter",
"tier2_badge": "Pro Gemeinschaft",
"tier2_desc": "Jede Community trainiert einen leichtgewichtigen LoRA-Adapter auf ihre eigenen Inhalte &mdash; Geschichten, Dokumente, Fotos und Ereignisse, deren Aufnahme die Mitglieder ausdrücklich zugestimmt haben. Dadurch kann Home AI Fragen wie \"Welche Geschichten hat Oma geteilt?\" beantworten, ohne auf die Daten einer anderen Community zuzugreifen.",
"tier2_update": "Adapter sind klein (50&ndash;100MB). Die Zustimmung erfolgt pro Inhaltselement. Inhalte, die mit \"nur ich\" gekennzeichnet sind, werden unabhängig von der Zustimmung nie einbezogen. Das Training verwendet DPO (Direct Preference Optimization) für den Werteabgleich.",
"tier3_title": "Stufe 3: Individuell (Zukunft)",
"tier3_badge": "Pro Mitglied",
"tier3_desc": "Persönliche Adapter, die individuelle Vorlieben und Interaktionsmuster lernen. Spekulativ &mdash; diese Stufe wirft erhebliche Fragen zur Machbarkeit, zum Datenschutz und zu den für eine sinnvolle Personalisierung erforderlichen Mindesttrainingsdaten auf.",
"tier3_update": "Forschungsfragen dokumentiert. Umsetzung erst geplant, wenn Stufe 2 validiert ist."
},
"governance_training": {
"heading": "Governance während des Trainings",
"intro1": "Dies ist der zentrale Beitrag der Forschung. Die meisten KI-Governance-Frameworks arbeiten zum Zeitpunkt der Inferenz &mdash; und filtern oder beschränken die Antworten, nachdem das Modell bereits trainiert wurde. Home AI bettet Governance <strong>in die Trainingsschleife</strong> ein.",
"intro2": "Dies folgt dem Grundsatz <em>Nicht-Trennung</em> von Christopher Alexander: Governance wird in die Trainingsarchitektur eingewoben und nicht nachträglich angewendet. Der BoundaryEnforcer validiert jeden Trainings-Batch vor dem Forward Pass. Enthält ein Batch mandantenübergreifende Daten, Daten ohne Zustimmung oder als privat gekennzeichnete Inhalte, wird der Batch abgelehnt und der Trainingsschritt nicht fortgesetzt.",
"code_comment1": "# Governance innerhalb der Trainingsschleife (Not-Separateness)",
"code_line1": "for batch in training_data:",
"code_line2": "&nbsp;&nbsp;if not BoundaryEnforcer.validate(batch):",
"code_line3": "&nbsp;&nbsp;&nbsp;&nbsp;continue&nbsp;&nbsp;<span class=\"text-green-400\"># Governance lehnt Batch ab</span>",
"code_line4": "&nbsp;&nbsp;loss = model.forward(batch)",
"code_line5": "&nbsp;&nbsp;loss.backward()",
"code_comment2": "# NICHT so &mdash; Governance vom Training getrennt",
"code_anti1": "for batch in training_data:",
"code_anti2": "&nbsp;&nbsp;loss = model.forward(batch)",
"code_anti3": "&nbsp;&nbsp;loss.backward()",
"code_anti4": "filter_outputs_later()&nbsp;&nbsp;<span class=\"text-red-400\"># Zu spät</span>",
"why_title": "Warum sowohl die Steuerung zur Trainingszeit als auch zur Inferenzzeit?",
"why_text": "<strong>Das Training formt die Tendenz, die Architektur schränkt die Fähigkeit ein.</strong> Ein Modell, das darauf trainiert ist, Grenzen zu respektieren, kann immer noch geknackt werden. Ein Modell, das gegen die Governance-Regeln ankämpft, verschwendet Rechenzeit und produziert schlechtere Ergebnisse. Der kombinierte Ansatz bewirkt, dass das Modell zu geregeltem Verhalten <em>tendiert</em>, während die Architektur es <em>unmöglich</em> macht, strukturelle Grenzen zu verletzen.",
"why_note": "Forschungsergebnisse aus der Agent Lightning-Integration deuten darauf hin, dass Governance etwa 5 % Performance-Overhead verursacht &mdash; ein akzeptabler Kompromiss für architektonische Sicherheitseinschränkungen. Dies muss in großem Maßstab validiert werden.",
"footer": "Governance zur Trainingszeit ist nur die Hälfte des Bildes. Dasselbe Tractatus-Framework arbeitet auch zur Laufzeit in der Village-Codebasis. Der nächste Abschnitt erläutert, wie diese beiden Ebenen zusammenarbeiten."
},
"dual_layer": {
"heading": "Zweischichtige Tractatus-Architektur",
"intro": "Home AI wird von Tractatus auf <strong>zwei verschiedenen Schichten</strong> gleichzeitig gesteuert. Dies ist die architektonische Einsicht, die den SLL-Ansatz sowohl von ungeregelten Modellen als auch von aufgeschraubten Sicherheitsfiltern unterscheidet.",
"layer_a_badge": "EBENE A: INHÄRENT",
"layer_a_title": "Tractatus im Inneren des Modells",
"layer_a_desc": "Während des Trainings validiert der BoundaryEnforcer jeden Batch. Die DPO-Anpassung formt die Präferenzen in Richtung geregelten Verhaltens. Das Modell <em>lernt</em>, Grenzen zu respektieren, transparente Antworten zu bevorzugen und Wertentscheidungen dem Menschen zu überlassen.",
"layer_a_item1": "<strong>Mechanismus:</strong> Governance in der Trainingsschleife",
"layer_a_item2": "<strong>Effekt:</strong> Das Modell neigt zu geregeltem Verhalten",
"layer_a_item3": "<strong>Einschränkung:</strong> Tendenzen können durch gegnerische Aufforderung außer Kraft gesetzt werden",
"layer_b_badge": "EBENE B: AKTIV",
"layer_b_title": "Tractatus rund um das Modell",
"layer_b_desc": "Zur Laufzeit arbeitet der gesamte sechs Dienste umfassende Governance-Stack in der Codebasis Village. Jede Interaktion durchläuft BoundaryEnforcer, PluralisticDeliberationOrchestrator, MetacognitiveVerifier, CrossReferenceValidator, ContextPressureMonitor, und InstructionPersistenceClassifier.",
"layer_b_item1": "<strong>Mechanismus:</strong> Sechs Architekturdienste auf dem kritischen Pfad",
"layer_b_item2": "<strong>Effekt:</strong> Strukturelle Grenzen können nicht verletzt werden",
"layer_b_item3": "<strong>Einschränkung:</strong> Fügt ~5% Performance-Overhead pro Interaktion hinzu",
"principle_title": "Das Zwei-Schichten-Prinzip:",
"principle_line1": "Training formt <span class=\"text-teal-400\">Tendenz</span>.",
"principle_line2": "Die Architektur schränkt <span class=\"text-indigo-400\">Fähigkeit</span> ein.",
"principle_line3": "Ein Modell, das über verinnerlichte Governance-Regeln verfügt UND innerhalb der Governance-Architektur arbeitet",
"principle_line4": "führt zu besseren Ergebnissen als jeder Ansatz allein. Das Modell funktioniert MIT den Leitplanken,",
"principle_line5": "nicht gegen sie &mdash; das reduziert verschwendete Rechenzeit und verbessert die Antwortqualität.",
"caveat": "<strong>Ehrlicher Vorbehalt:</strong> Ebene A (inhärente Governance durch Training) ist konzipiert, aber noch nicht empirisch validiert &mdash; das Training hat noch nicht begonnen. Ebene B (aktive Governance über die Village-Codebasis) ist seit mehr als 11 Monaten in der Produktion im Einsatz. Die Zwei-Ebenen-These ist eine architektonische Verpflichtung, aber noch kein nachgewiesenes Ergebnis."
},
"philosophy": {
"heading": "Philosophische Grundlagen",
"intro": "Die Governance von Home AI stützt sich auf vier philosophische Traditionen, die jeweils einen spezifischen architektonischen Grundsatz beisteuern. Dabei handelt es sich nicht um dekorative Referenzen &mdash; sondern um konkrete Gestaltungsentscheidungen.",
"berlin_title": "Isaiah Berlin &mdash; Wertepluralismus",
"berlin_desc": "Die Werte sind in der Tat vielfältig und manchmal unvereinbar. Wenn Freiheit und Gleichheit miteinander in Konflikt geraten, kann es keine einzig richtige Lösung geben. Home AI präsentiert Optionen ohne Hierarchie und dokumentiert, was jede Wahl opfert.",
"berlin_arch": "Architektonischer Ausdruck: PluralisticDeliberationOrchestrator stellt Kompromisse vor, löst sie aber nicht auf.",
"wittgenstein_title": "Ludwig Wittgenstein &mdash; Sprachgrenzen",
"wittgenstein_desc": "Die Sprache formt, was gedacht und ausgedrückt werden kann. Manche Dinge, die am wichtigsten sind, widersetzen sich einem systematischen Ausdruck. Home AI erkennt die Grenzen dessen an, was Sprachmodelle erfassen können &mdash; insbesondere im Hinblick auf Trauer, kulturelle Bedeutung und gelebte Erfahrung.",
"wittgenstein_arch": "Architektonischer Ausdruck: BoundaryEnforcer überlässt die Entscheidung über Werte dem Menschen und erkennt die Grenzen der Berechnung an.",
"indigenous_title": "Indigene Souveränität &mdash; Daten als Beziehung",
"indigenous_desc": "Te Mana Raraunga (M&#257;ori Data Sovereignty), CARE Principles und OCAP (First Nations Canada) bieten einen Rahmen, in dem Daten nicht Eigentum, sondern Beziehung sind. Whakapapa (Genealogie) gehört dem Kollektiv, nicht dem Einzelnen. Die Zustimmung ist ein gemeinschaftlicher Prozess, kein individuelles Kästchen.",
"indigenous_arch": "Architektonischer Ausdruck: Mandantenisolierung, kollektive Zustimmungsmechanismen, generationenübergreifende Verwaltung.",
"alexander_title": "Christopher Alexander &mdash; Lebendige Architektur",
"alexander_desc": "Fünf Prinzipien leiten die Entwicklung der Governance: Tiefe Verflechtung (Dienste koordinieren sich), Strukturerhaltung (Veränderungen verbessern, ohne zu brechen), Gradienten, nicht binär (Intensitätsstufen), lebendiger Prozess (evidenzbasierte Entwicklung), keine Trennung (Governance eingebettet, nicht aufgeschraubt).",
"alexander_arch": "Architektonischer Ausdruck: alle sechs Governance-Dienste und die Ausbildungsschleifenarchitektur."
},
"three_layer_gov": {
"heading": "Drei-Ebenen-Governance",
"intro": "Governance findet auf drei Ebenen statt, die sich in ihrer Reichweite und Veränderbarkeit unterscheiden.",
"layer1_title": "Ebene 1: Plattform (unveränderlich)",
"layer1_desc": "Strukturelle Beschränkungen, die für alle Gemeinschaften gelten. Isolierung von Mandantendaten. Governance auf dem kritischen Pfad. Optionen, die ohne Hierarchie dargestellt werden. Diese können weder von Mandanten-Administratoren noch von einzelnen Mitgliedern deaktiviert werden.",
"layer1_enforcement": "Durchsetzung: architektonisch (BoundaryEnforcer blockiert Verstöße, bevor sie ausgeführt werden).",
"layer2_title": "Ebene 2: Mandantenverfassung",
"layer2_desc": "Von Community-Administratoren festgelegte Regeln. Richtlinien für den Umgang mit Inhalten (z. B. \"Verstorbene Mitglieder müssen von einem Moderator überprüft werden\"), kulturelle Protokolle (z. B. M&#257;ori tangi Bräuche), Sichtbarkeitsvorgaben und KI-Trainingszustimmungsmodelle. Jede Gemeinschaft konfiguriert ihre eigene Verfassung innerhalb der Beschränkungen der Schicht 1.",
"layer2_enforcement": "Durchsetzung: Verfassungsregeln, die vom CrossReferenceValidator pro Mandant validiert werden.",
"layer3_title": "Ebene 3: Übernommene Weisheitstraditionen",
"layer3_desc": "Einzelne Mitglieder und Gemeinschaften können Prinzipien aus Weisheitstraditionen übernehmen, um die Art und Weise zu beeinflussen, wie Home AI Antworten formuliert. Diese sind freiwillig, umkehrbar und transparent. Sie beeinflussen die Präsentation, nicht den Zugang zum Inhalt. Mehrere Traditionen können gleichzeitig übernommen werden; Konflikte werden von den Mitgliedern gelöst, nicht von der KI.",
"layer3_enforcement": "Durchsetzung: Framing-Hinweise bei der Antwortgenerierung. Override immer verfügbar."
},
"wisdom": {
"heading": "Weisheitstraditionen",
"intro": "Home AI bietet dreizehn Weisheitstraditionen, die die Mitglieder übernehmen können, um das Verhalten der KI zu steuern. Jede Tradition wurde anhand der Stanford Encyclopedia of Philosophy als wichtigster wissenschaftlicher Referenz validiert. Die Annahme ist freiwillig, transparent und umkehrbar.",
"berlin_title": "Berlin: Wertepluralismus",
"berlin_desc": "Stellen Sie die Optionen vor, ohne sie in eine Rangfolge zu bringen; erkennen Sie an, was jede Wahl opfert.",
"stoic_title": "Stoisch: Gleichmut und Tugend",
"stoic_desc": "Konzentrieren Sie sich auf das, was kontrolliert werden kann; betonen Sie den Charakter in den Geschichten der Vorfahren.",
"weil_title": "Weil: Achtung vor dem Leidensweg",
"weil_desc": "Wehren Sie sich dagegen, Trauer zusammenzufassen; behalten Sie Namen und Einzelheiten bei, anstatt zu abstrahieren.",
"care_title": "Care-Ethik: Beziehungsorientierte Verantwortung",
"care_desc": "Achten Sie darauf, wie der Inhalt auf bestimmte Menschen wirkt, nicht auf abstrakte Prinzipien.",
"confucian_title": "Konfuzianisch: Beziehungspflicht",
"confucian_desc": "Gestalten Sie Geschichten in Bezug auf Familienrollen und gegenseitige Verpflichtungen.",
"buddhist_title": "Buddhistisch: Vergänglichkeit",
"buddhist_desc": "Erkennen Sie an, dass sich Erinnerungen und Interpretationen ändern; zeigen Sie Mitgefühl.",
"ubuntu_title": "Ubuntu: Gemeinschaftliches Persönlichkeitsrecht",
"ubuntu_desc": "\"Ich bin, weil wir sind.\" Geschichten gehören der Gemeinschaft, nicht dem Einzelnen.",
"african_title": "Afrikanische Diaspora: Sankofa",
"african_desc": "Bewahren Sie, was beinahe verloren gegangen wäre; ehren Sie die fiktive Verwandtschaft und die Wahlfamilie.",
"indigenous_title": "Indigene/M&#257;ori: Whakapapa",
"indigenous_desc": "Verwandtschaft mit Vorfahren, Land und Nachkommen. Kollektives Eigentum an Wissen.",
"jewish_title": "Jüdisch: Tikkun Olam",
"jewish_desc": "Reparieren; die Erinnerung (zachor) bewahren; die Würde auch schwieriger Angehöriger wahren.",
"islamic_title": "Islamisch: Barmherzigkeit und Gerechtigkeit",
"islamic_desc": "Gleichgewicht zwischen rahma (Barmherzigkeit) und adl (Gerechtigkeit) in sensiblen Inhalten.",
"hindu_title": "Hinduistisch: Dharmische Ordnung",
"hindu_desc": "Rollengerechte Aufgaben innerhalb einer größeren Ordnung; Karma als Konsequenz, nicht als Strafe.",
"alexander_title": "Alexander: Lebendige Architektur",
"alexander_desc": "Governance als lebendiges System; Änderungen ergeben sich aus den operativen Erfahrungen.",
"disclaimer": "<strong>Was dies nicht ist:</strong> Die Auswahl von \"buddhistisch\" bedeutet nicht, dass die KI den Buddhismus praktiziert. Dies sind Framing-Tendenzen &mdash; sie beeinflussen, wie die KI Optionen präsentiert, nicht welche Inhalte zugänglich sind. Ein Mitglied kann das von der Tradition geprägte Framing bei jeder Antwort jederzeit aufheben. Das System erhebt keinen Anspruch auf algorithmische moralische Argumentation."
},
"indigenous": {
"heading": "Indigene Datensouveränität",
"intro": "Die indigene Datensouveränität unterscheidet sich grundlegend von westlichen Datenschutzmodellen. Während sich der westliche Datenschutz auf die Rechte des Einzelnen und die Zustimmung als Kontrollkästchen konzentriert, stehen bei indigenen Rahmenwerken die kollektiven Rechte, der Gemeinschaftsprozess und die generationenübergreifende Verantwortung im Mittelpunkt.",
"tmr_title": "Te Mana Raraunga",
"tmr_desc": "M&#257;ori-Datensouveränität. Rangatiratanga (Selbstbestimmung), kaitiakitanga (Vormundschaft für künftige Generationen), whanaungatanga (Verwandtschaft als verbindendes Element).",
"care_title": "CARE-Grundsätze",
"care_desc": "Globale Allianz für indigene Daten. Kollektiver Nutzen, Kontrollbefugnis, Verantwortung, Ethik. Datenökosysteme zum Nutzen indigener Völker.",
"ocap_title": "OCAP",
"ocap_desc": "First Nations Kanada. Eigentum, Kontrolle, Zugang, Besitz. Die Gemeinschaften kontrollieren ihre Daten physisch.",
"implications": "Konkrete architektonische Implikationen: Whakapapa (Genealogie) kann nicht in einzelne Datenpunkte zerlegt werden. Tapu (heilige/beschränkte) Inhalte lösen eine kulturelle Überprüfung vor der KI-Verarbeitung aus. Die Zustimmung zum KI-Training erfordert den Konsens der wh&#257;nau, nicht die individuelle Zustimmung. Die Zustimmung der Ältesten (kaum&#257;tua) ist für Schulungen zu heiligen Genealogien erforderlich.",
"note": "Diese Grundsätze beruhen auf dem Te Tiriti o Waitangi und sind Jahrhunderte älter als die westliche Technologiepolitik. Wir betrachten sie als Stand der Technik und nicht als neue Erfindung. Die tatsächliche Umsetzung erfordert eine kontinuierliche Beratung mit den kulturellen Beratern der M&#257;ori &mdash; diese Spezifikation ist ein Ausgangspunkt."
},
"infrastructure": {
"heading": "Trainingsinfrastruktur",
"intro": "Home AI folgt einem \"train local, deploy remote\"-Modell. Die Trainingshardware befindet sich im Haus des Entwicklers. Die trainierten Modellgewichte werden für die Inferenz auf die Produktionsserver übertragen. Dies hält die Trainingskosten niedrig und die Trainingsdaten unter physischer Kontrolle.",
"local_title": "Lokales Training",
"local_item1": "Consumer-GPU mit 24GB VRAM über externes Gehäuse",
"local_item2": "QLoRA-Feinabstimmung (4-Bit-Quantisierung passt in VRAM-Budget)",
"local_item3": "DPO (Direct Preference Optimization) &mdash; benötigt nur 2 Modelle im Speicher statt 4 wie bei PPO",
"local_item4": "Nächtliche Trainingsläufe &mdash; kompatibel mit netzunabhängigem Solarstrom",
"local_item5": "Dauerhafte Leistungsaufnahme unter 500 W",
"remote_title": "Ferninferenz",
"remote_item1": "Modellgewichte auf Produktionsservern eingesetzt (OVH Frankreich, Catalyst NZ)",
"remote_item2": "Inferenz über Ollama mit mandantenbezogener Adapterladung",
"remote_item3": "Hybride GPU/CPU-Architektur mit Zustandsüberwachung",
"remote_item4": "Home GPU verfügbar über WireGuard VPN als primäre Inferenzmaschine",
"remote_item5": "CPU-Fallback gewährleistet Verfügbarkeit, wenn die GPU offline ist",
"why_consumer": "<strong>Warum Consumer-Hardware?</strong> Die SLL-These ist, dass souveränes KI-Training zugänglich sein sollte und nicht nur für Organisationen mit Rechenzentrums-Budgets. Eine einzige Consumer-GPU kann ein 7B-Modell effizient über QLoRA feinjustieren. Die gesamte Trainingsinfrastruktur passt auf einen Schreibtisch."
},
"bias": {
"heading": "Bias-Dokumentation und -Überprüfung",
"intro": "Home AI ist im Bereich des familiären Geschichtenerzählens tätig, das spezifische Verzerrungsrisiken birgt. Es wurden sechs Verzerrungskategorien mit Aufdeckungshinweisen, entschärfenden Beispielen und Bewertungskriterien dokumentiert.",
"family_title": "Familienstruktur",
"family_desc": "Kernfamilie als Standard; gleichgeschlechtliche Eltern, gemischte Familien, Alleinerziehende werden als normativ behandelt.",
"elder_title": "Vertretung der Älteren",
"elder_desc": "Defizitäres Framing des Alterns; ältere Menschen als aktive Akteure mit Fachwissen, nicht als passive Subjekte.",
"cultural_title": "Kulturell/Religiös",
"cultural_desc": "Christlich-normative Annahmen; Gleichbehandlung aller kulturellen Praktiken und Observanzen.",
"geographic_title": "Geografisch/Ort",
"geographic_desc": "Anglo-amerikanische Vorgaben; ortsbezogene Bezüge und kultureller Kontext.",
"grief_title": "Trauer/Trauma",
"grief_desc": "Effizienz vor Sensibilität; Tempo, Aufmerksamkeit für Details, kein vorzeitiger Abschluss.",
"naming_title": "Benennungskonventionen",
"naming_desc": "Westliche Annahmen zur Namensreihenfolge; korrekter Umgang mit Patronymen, Ehrentiteln, diakritischen Zeichen.",
"verification_title": "Rahmen für die Verifizierung",
"metrics_title": "Governance-Metriken",
"metrics_item1": "Mandanten-Leckrate: Ziel 0%",
"metrics_item2": "Verstöße gegen die Verfassung: Zielwert <1%",
"metrics_item3": "Einhaltung des Werterahmens: Ziel >80%",
"metrics_item4": "Angemessenheit der Ablehnung: Zielvorgabe >95%",
"testing_title": "Testmethoden",
"testing_item1": "Geheime Phrasensonden für die Mandantenisolierung",
"testing_item2": "Dauerhaftigkeit der Beschränkung nach N Trainingsrunden",
"testing_item3": "Aufforderungen von Red-Team-Gegnern (Jailbreak, Injection, mandantenübergreifend)",
"testing_item4": "Stichproben der menschlichen Überprüfung (5&ndash;100% je nach Inhaltstyp)"
},
"live_today": {
"heading": "Was heute live ist",
"intro": "Home AI läuft derzeit in der Produktion mit den folgenden geregelten Funktionen. Diese laufen unter dem vollständigen Governance-Stack mit sechs Diensten.",
"rag_title": "RAG-basierte Hilfe",
"rag_desc": "Die Vektorsuche ruft relevante Dokumentation ab, gefiltert nach den Berechtigungen der Mitglieder. Die Antworten basieren auf den abgerufenen Dokumenten, nicht nur auf den Trainingsdaten.",
"ocr_title": "Dokument OCR",
"ocr_desc": "Textextraktion aus hochgeladenen Dokumenten. Die Ergebnisse werden innerhalb des Mitgliederbereichs gespeichert und nicht ohne Zustimmung an andere Mandanten weitergegeben oder für das Training verwendet.",
"story_title": "Assistenz bei Geschichten",
"story_desc": "Schreibanregungen, strukturelle Ratschläge, Verbesserung der Erzählung. Entscheidungen zum kulturellen Kontext werden dem Erzähler überlassen und nicht von der KI gelöst.",
"memory_title": "KI-Speicher-Transparenz",
"memory_desc": "Die Mitglieder sehen und kontrollieren, was die KI speichert. Unabhängige Zustimmung für Triage-Speicher, OCR-Speicher und Zusammenfassungsspeicher."
},
"limitations": {
"heading": "Beschränkungen und offene Fragen",
"item1": "<strong>Training noch nicht begonnen:</strong> Die SLL-Architektur ist entworfen und dokumentiert. Die Hardware ist bestellt. Aber es wurde noch kein Modell trainiert. Aussagen über Governance zur Trainingszeit sind architektonisches Design, keine empirischen Ergebnisse.",
"item2": "<strong>Beschränkter Einsatz:</strong> Home AI arbeitet mit vier föderierten Mandanten innerhalb einer Plattform, die vom Entwickler des Frameworks gebaut wurde. Die Wirksamkeit der Governance kann ohne unabhängige Einsätze nicht verallgemeinert werden.",
"item3": "<strong>Selbstberichtete Metriken:</strong> Leistungs- und Sicherheitszahlen werden von demselben Team gemeldet, das das System gebaut hat. Ein unabhängiges Audit ist geplant, wurde aber noch nicht durchgeführt.",
"item4": "<strong>Operationalisierung von Traditionen:</strong> Lassen sich reichhaltige philosophische Traditionen authentisch auf Framing-Hinweise reduzieren? Wenn ein Mitglied \"Buddhistisch\" auswählt, bedeutet das nicht, dass das System den Buddhismus versteht oder praktiziert. Dies birgt die Gefahr der Oberflächlichkeit.",
"item5": "<strong>Beständigkeit des Trainings unbekannt:</strong> Ob die Governance-Einschränkungen Hunderte von Trainingsrunden ohne Beeinträchtigung überstehen, ist eine offene Forschungsfrage. Die Drift-Erkennung ist konzipiert, aber nicht getestet.",
"item6": "<strong>Eingeschränkte adversarische Tests:</strong> Der Governance-Stack wurde keiner systematischen adversarischen Bewertung unterzogen. Red-teaming ist eine Priorität.",
"item7": "<strong>Skala unbekannt:</strong> Der Governance-Overhead (~5% pro Interaktion) wird in der aktuellen Skala gemessen. Ob dies auch bei hohem Durchsatz der Fall ist, wurde noch nicht getestet.",
"item8": "<strong>Kulturelle Validierung erforderlich:</strong> Die Spezifikationen der Module für indigenes Wissen erfordern eine ständige Konsultation mit den kulturellen Beratern der M&#257;ori. Die Dokumentation ist ein Ausgangspunkt, keine endgültige Instanz."
},
"further_reading": {
"heading": "Weitere Lektüre",
"arch_title": "Systemarchitektur",
"arch_desc": "Fünf Architekturprinzipien und sechs Governance-Dienste",
"case_title": "Village Fallstudie",
"case_desc": "Tractatus in der Produktion &mdash; Metriken, Beweise und ehrliche Grenzen",
"paper_title": "Paper zur architektonischen Ausrichtung",
"paper_desc": "Akademisches Paper über Governance während des Trainings",
"researcher_title": "Für Forscher",
"researcher_desc": "Offene Fragen, Möglichkeiten der Zusammenarbeit und Datenzugang"
}
}
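With 221 keys mirrored across the EN/DE/FR files, the main maintenance risk is key drift between locales. A sketch of a parity check under stated assumptions (the helper names and the idea of running it as a Node script against the three JSON files are not part of this commit):

```javascript
// Flatten a nested locale object into dot-separated keys, e.g. "hero.badge".
function flattenKeys(obj, prefix = '') {
  return Object.entries(obj).flatMap(([key, value]) =>
    typeof value === 'object' && value !== null
      ? flattenKeys(value, `${prefix}${key}.`)
      : [`${prefix}${key}`]
  );
}

// Return every key present in the reference locale but missing from the candidate.
function missingKeys(reference, candidate) {
  const have = new Set(flattenKeys(candidate));
  return flattenKeys(reference).filter((key) => !have.has(key));
}
```

Run against `en.json` as the reference, a non-empty result for `de.json` or `fr.json` flags untranslated keys before they surface as blank elements in the page.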


@@ -1,2 +1,255 @@
{
"breadcrumb": {
"home": "Home",
"current": "Home AI"
},
"hero": {
"badge": "SOVEREIGN LOCALLY-TRAINED LANGUAGE MODEL",
"title": "Home AI",
"subtitle": "A language model where the community controls the training data, the model weights, and the governance rules. Not just governed inference &mdash; governed training.",
"status": "<strong>Status:</strong> Home AI operates in production for inference. The sovereign training pipeline is designed and documented; hardware is ordered. Training has not yet begun. This page describes both current capability and intended architecture."
},
"sll": {
"heading": "What is an SLL?",
"intro": "An <strong>SLL</strong> (Sovereign Locally-trained Language Model) is distinct from both LLMs and SLMs. The distinction is not size &mdash; it is control.",
"llm_title": "LLM",
"llm_subtitle": "Large Language Model",
"llm_item1": "Training: provider-controlled",
"llm_item2": "Data: scraped at scale",
"llm_item3": "Governance: provider's terms",
"llm_item4": "User control: none",
"slm_title": "SLM",
"slm_subtitle": "Small Language Model",
"slm_item1": "Training: provider-controlled",
"slm_item2": "Data: curated by provider",
"slm_item3": "Governance: partial (fine-tuning)",
"slm_item4": "User control: limited",
"sll_title": "SLL",
"sll_subtitle": "Sovereign Locally-trained",
"sll_item1": "Training: community-controlled",
"sll_item2": "Data: community-owned",
"sll_item3": "Governance: architecturally enforced",
"sll_item4": "User control: full",
"tradeoff": "The honest trade-off: an SLL is a less powerful system that serves your interests, rather than a more powerful one that serves someone else's. We consider this an acceptable exchange."
},
"two_model": {
"heading": "Two-Model Architecture",
"intro": "Home AI uses two models of different sizes, routed by task complexity. This is not a fallback mechanism &mdash; each model is optimised for its role.",
"fast_title": "3B Model &mdash; Fast Assistant",
"fast_desc": "Handles help queries, tooltips, error explanations, short summaries, and translation. Target response time: under 5 seconds complete.",
"fast_routing": "Routing triggers: simple queries, known FAQ patterns, single-step tasks.",
"deep_title": "8B Model &mdash; Deep Reasoning",
"deep_desc": "Handles life story generation, year-in-review narratives, complex summarisation, and sensitive correspondence. Target response time: under 90 seconds.",
"deep_routing": "Routing triggers: keywords like \"everything about\", multi-source retrieval, grief/trauma markers.",
"footer": "Both models operate under the same governance stack. The routing decision itself is governed &mdash; the ContextPressureMonitor can override routing if session health requires it."
},
"training_tiers": {
"heading": "Three Training Tiers",
"intro": "Training is not monolithic. Three tiers serve different scopes, each with appropriate governance constraints.",
"tier1_title": "Tier 1: Platform Base",
"tier1_badge": "All communities",
"tier1_desc": "Trained on platform documentation, philosophy, feature guides, and FAQ content. Provides the foundational understanding of how Village works, what Home AI's values are, and how to help members navigate the platform.",
"tier1_update": "Update frequency: weekly during beta, quarterly at GA. Training method: QLoRA fine-tuning.",
"tier2_title": "Tier 2: Tenant Adapters",
"tier2_badge": "Per community",
"tier2_desc": "Each community trains a lightweight LoRA adapter on its own content &mdash; stories, documents, photos, and events that members have explicitly consented to include. This allows Home AI to answer questions like \"What stories has Grandma shared?\" without accessing any other community's data.",
"tier2_update": "Adapters are small (50&ndash;100MB). Consent is per-content-item. Content marked \"only me\" is never included regardless of consent. Training uses DPO (Direct Preference Optimization) for value alignment.",
"tier3_title": "Tier 3: Individual (Future)",
"tier3_badge": "Per member",
"tier3_desc": "Personal adapters that learn individual preferences and interaction patterns. Speculative &mdash; this tier raises significant questions about feasibility, privacy, and the minimum training data required for meaningful personalisation.",
"tier3_update": "Research questions documented. Implementation not planned until Tier 2 is validated."
},
"governance_training": {
"heading": "Governance During Training",
"intro1": "This is the central research contribution. Most AI governance frameworks operate at inference time &mdash; they filter or constrain responses after the model has already been trained. Home AI embeds governance <strong>inside the training loop</strong>.",
"intro2": "This follows Christopher Alexander's principle of <em>Not-Separateness</em>: governance is woven into the training architecture, not applied afterward. The BoundaryEnforcer validates every training batch before the forward pass. If a batch contains cross-tenant data, data without consent, or content marked as private, the batch is rejected and the training step does not proceed.",
"code_comment1": "# Governance inside the training loop (Not-Separateness)",
"code_line1": "for batch in training_data:",
"code_line2": "&nbsp;&nbsp;if not BoundaryEnforcer.validate(batch):",
"code_line3": "&nbsp;&nbsp;&nbsp;&nbsp;continue&nbsp;&nbsp;<span class=\"text-green-400\"># Governance rejects batch</span>",
"code_line4": "&nbsp;&nbsp;loss = model.forward(batch)",
"code_line5": "&nbsp;&nbsp;loss.backward()",
"code_comment2": "# NOT this &mdash; governance separated from training",
"code_anti1": "for batch in training_data:",
"code_anti2": "&nbsp;&nbsp;loss = model.forward(batch)",
"code_anti3": "&nbsp;&nbsp;loss.backward()",
"code_anti4": "filter_outputs_later()&nbsp;&nbsp;<span class=\"text-red-400\"># Too late</span>",
"why_title": "Why both training-time and inference-time governance?",
"why_text": "<strong>Training shapes tendency; architecture constrains capability.</strong> A model trained to respect boundaries can still be jailbroken. A model that fights against governance rules wastes compute and produces worse outputs. The combined approach makes the model <em>tend toward</em> governed behaviour while the architecture makes it <em>impossible</em> to violate structural boundaries.",
"why_note": "Research from the Agent Lightning integration suggests governance adds approximately 5% performance overhead &mdash; an acceptable trade-off for architectural safety constraints. This requires validation at scale.",
"footer": "Training-time governance is only half the picture. The same Tractatus framework also operates at runtime in the Village codebase. The next section explains how these two layers work together."
},
"dual_layer": {
"heading": "Dual-Layer Tractatus Architecture",
"intro": "Home AI is governed by Tractatus at <strong>two distinct layers</strong> simultaneously. This is the architectural insight that distinguishes the SLL approach from both ungoverned models and bolt-on safety filters.",
"layer_a_badge": "LAYER A: INHERENT",
"layer_a_title": "Tractatus Inside the Model",
"layer_a_desc": "During training, the BoundaryEnforcer validates every batch. DPO alignment shapes preferences toward governed behaviour. The model <em>learns</em> to respect boundaries, prefer transparent responses, and defer values decisions to humans.",
"layer_a_item1": "<strong>Mechanism:</strong> Governance in the training loop",
"layer_a_item2": "<strong>Effect:</strong> Model tends toward governed behaviour",
"layer_a_item3": "<strong>Limitation:</strong> Tendencies can be overridden by adversarial prompting",
"layer_b_badge": "LAYER B: ACTIVE",
"layer_b_title": "Tractatus Around the Model",
"layer_b_desc": "At runtime, the full six-service governance stack operates in the Village codebase. Every interaction passes through BoundaryEnforcer, PluralisticDeliberationOrchestrator, MetacognitiveVerifier, CrossReferenceValidator, ContextPressureMonitor, and InstructionPersistenceClassifier.",
"layer_b_item1": "<strong>Mechanism:</strong> Six architectural services in the critical path",
"layer_b_item2": "<strong>Effect:</strong> Structural boundaries cannot be violated",
"layer_b_item3": "<strong>Limitation:</strong> Adds ~5% performance overhead per interaction",
"principle_title": "The dual-layer principle:",
"principle_line1": "Training shapes <span class=\"text-teal-400\">tendency</span>.",
"principle_line2": "Architecture constrains <span class=\"text-indigo-400\">capability</span>.",
"principle_line3": "A model that has internalised governance rules AND operates within governance architecture",
"principle_line4": "produces better outputs than either approach alone. The model works WITH the guardrails,",
"principle_line5": "not against them &mdash; reducing compute waste and improving response quality.",
"caveat": "<strong>Honest caveat:</strong> Layer A (inherent governance via training) is designed but not yet empirically validated &mdash; training has not begun. Layer B (active governance via Village codebase) has been operating in production for 11+ months. The dual-layer thesis is an architectural commitment, not yet a demonstrated result."
},
"philosophy": {
"heading": "Philosophical Foundations",
"intro": "Home AI's governance draws from four philosophical traditions, each contributing a specific architectural principle. These are not decorative references &mdash; they translate into concrete design decisions.",
"berlin_title": "Isaiah Berlin &mdash; Value Pluralism",
"berlin_desc": "Values are genuinely plural and sometimes incompatible. When freedom conflicts with equality, there may be no single correct resolution. Home AI presents options without hierarchy and documents what each choice sacrifices.",
"berlin_arch": "Architectural expression: PluralisticDeliberationOrchestrator presents trade-offs; it does not resolve them.",
"wittgenstein_title": "Ludwig Wittgenstein &mdash; Language Boundaries",
"wittgenstein_desc": "Language shapes what can be thought and expressed. Some things that matter most resist systematic expression. Home AI acknowledges the limits of what language models can capture &mdash; particularly around grief, cultural meaning, and lived experience.",
"wittgenstein_arch": "Architectural expression: BoundaryEnforcer defers values decisions to humans, acknowledging limits of computation.",
"indigenous_title": "Indigenous Sovereignty &mdash; Data as Relationship",
"indigenous_desc": "Te Mana Raraunga (M&#257;ori Data Sovereignty), CARE Principles, and OCAP (First Nations Canada) provide frameworks where data is not property but relationship. Whakapapa (genealogy) belongs to the collective, not individuals. Consent is a community process, not an individual checkbox.",
"indigenous_arch": "Architectural expression: tenant isolation, collective consent mechanisms, intergenerational stewardship.",
"alexander_title": "Christopher Alexander &mdash; Living Architecture",
"alexander_desc": "Five principles guide how governance evolves: Deep Interlock (services coordinate), Structure-Preserving (changes enhance without breaking), Gradients Not Binary (intensity levels), Living Process (evidence-based evolution), Not-Separateness (governance embedded, not bolted on).",
"alexander_arch": "Architectural expression: all six governance services and the training loop architecture."
},
"three_layer_gov": {
"heading": "Three-Layer Governance",
"intro": "Governance operates at three levels, each with different scope and mutability.",
"layer1_title": "Layer 1: Platform (Immutable)",
"layer1_desc": "Structural constraints that apply to all communities. Tenant data isolation. Governance in the critical path. Options presented without hierarchy. These cannot be disabled by tenant administrators or individual members.",
"layer1_enforcement": "Enforcement: architectural (BoundaryEnforcer blocks violations before they execute).",
"layer2_title": "Layer 2: Tenant Constitution",
"layer2_desc": "Rules defined by community administrators. Content handling policies (e.g., \"deceased members require moderator review\"), cultural protocols (e.g., M&#257;ori tangi customs), visibility defaults, and AI training consent models. Each community configures its own constitution within Layer 1 constraints.",
"layer2_enforcement": "Enforcement: constitutional rules validated by CrossReferenceValidator per tenant.",
"layer3_title": "Layer 3: Adopted Wisdom Traditions",
"layer3_desc": "Individual members and communities can adopt principles from wisdom traditions to influence how Home AI frames responses. These are voluntary, reversible, and transparent. They influence presentation, not content access. Multiple traditions can be adopted simultaneously; conflicts are resolved by the member, not the AI.",
"layer3_enforcement": "Enforcement: framing hints in response generation. Override always available."
},
"wisdom": {
"heading": "Wisdom Traditions",
"intro": "Home AI offers thirteen wisdom traditions that members can adopt to guide AI behaviour. Each tradition has been validated against the Stanford Encyclopedia of Philosophy as the primary scholarly reference. Adoption is voluntary, transparent, and reversible.",
"berlin_title": "Berlin: Value Pluralism",
"berlin_desc": "Present options without ranking; acknowledge what each choice sacrifices.",
"stoic_title": "Stoic: Equanimity and Virtue",
"stoic_desc": "Focus on what can be controlled; emphasise character in ancestral stories.",
"weil_title": "Weil: Attention to Affliction",
"weil_desc": "Resist summarising grief; preserve names and specifics rather than abstracting.",
"care_title": "Care Ethics: Relational Responsibility",
"care_desc": "Attend to how content affects specific people, not abstract principles.",
"confucian_title": "Confucian: Relational Duty",
"confucian_desc": "Frame stories in terms of family roles and reciprocal obligations.",
"buddhist_title": "Buddhist: Impermanence",
"buddhist_desc": "Acknowledge that memories and interpretations change; extend compassion.",
"ubuntu_title": "Ubuntu: Communal Personhood",
"ubuntu_desc": "\"I am because we are.\" Stories belong to the community, not the individual.",
"african_title": "African Diaspora: Sankofa",
"african_desc": "Preserve what was nearly lost; honour fictive kinship and chosen family.",
"indigenous_title": "Indigenous/M&#257;ori: Whakapapa",
"indigenous_desc": "Kinship with ancestors, land, and descendants. Collective ownership of knowledge.",
"jewish_title": "Jewish: Tikkun Olam",
"jewish_desc": "Repair, preserve memory (zachor), uphold dignity even of difficult relatives.",
"islamic_title": "Islamic: Mercy and Justice",
"islamic_desc": "Balance rahma (mercy) with adl (justice) in sensitive content.",
"hindu_title": "Hindu: Dharmic Order",
"hindu_desc": "Role-appropriate duties within larger order; karma as consequence, not punishment.",
"alexander_title": "Alexander: Living Architecture",
"alexander_desc": "Governance as living system; changes emerge from operational experience.",
"disclaimer": "<strong>What this is not:</strong> Selecting \"Buddhist\" does not mean the AI practises Buddhism. These are framing tendencies &mdash; they influence how the AI presents options, not what content is accessible. A member can always override tradition-influenced framing on any response. The system does not claim algorithmic moral reasoning."
},
"indigenous": {
"heading": "Indigenous Data Sovereignty",
"intro": "Indigenous data sovereignty differs fundamentally from Western privacy models. Where Western privacy centres on individual rights and consent-as-checkbox, indigenous frameworks centre on collective rights, community process, and intergenerational stewardship.",
"tmr_title": "Te Mana Raraunga",
"tmr_desc": "M&#257;ori Data Sovereignty. Rangatiratanga (self-determination), kaitiakitanga (guardianship for future generations), whanaungatanga (kinship as unified entity).",
"care_title": "CARE Principles",
"care_desc": "Global Indigenous Data Alliance. Collective Benefit, Authority to Control, Responsibility, Ethics. Data ecosystems designed for indigenous benefit.",
"ocap_title": "OCAP",
"ocap_desc": "First Nations Canada. Ownership, Control, Access, Possession. Communities physically control their data.",
"implications": "Concrete architectural implications: whakapapa (genealogy) cannot be atomised into individual data points. Tapu (sacred/restricted) content triggers cultural review before AI processing. Consent for AI training requires wh&#257;nau consensus, not individual opt-in. Elder (kaum&#257;tua) approval is required for training on sacred genealogies.",
"note": "These principles are informed by Te Tiriti o Waitangi and predate Western technology governance by centuries. We consider them prior art, not novel invention. Actual implementation requires ongoing consultation with M&#257;ori cultural advisors &mdash; this specification is a starting point."
},
"infrastructure": {
"heading": "Training Infrastructure",
"intro": "Home AI follows a \"train local, deploy remote\" model. The training hardware sits in the developer's home. Trained model weights are deployed to production servers for inference. This keeps training costs low and training data under physical control.",
"local_title": "Local Training",
"local_item1": "Consumer GPU with 24GB VRAM via external enclosure",
"local_item2": "QLoRA fine-tuning (4-bit quantisation fits in VRAM budget)",
"local_item3": "DPO (Direct Preference Optimization) &mdash; requires only 2 models in memory vs PPO's 4",
"local_item4": "Overnight training runs &mdash; compatible with off-grid solar power",
"local_item5": "Sustained power draw under 500W",
"remote_title": "Remote Inference",
"remote_item1": "Model weights deployed to production servers (OVH France, Catalyst NZ)",
"remote_item2": "Inference via Ollama with per-tenant adapter loading",
"remote_item3": "Hybrid GPU/CPU architecture with health monitoring",
"remote_item4": "Home GPU available via WireGuard VPN as primary inference engine",
"remote_item5": "CPU fallback ensures availability when GPU is offline",
"why_consumer": "<strong>Why consumer hardware?</strong> The SLL thesis is that sovereign AI training should be accessible, not reserved for organisations with data centre budgets. A single consumer GPU can fine-tune a 7B model efficiently via QLoRA. The entire training infrastructure fits on a desk."
},
"bias": {
"heading": "Bias Documentation and Verification",
"intro": "Home AI operates in the domain of family storytelling, which carries specific bias risks. Six bias categories have been documented with detection prompts, debiasing examples, and evaluation criteria.",
"family_title": "Family Structure",
"family_desc": "Nuclear family as default; same-sex parents, blended families, single parents treated as normative.",
"elder_title": "Elder Representation",
"elder_desc": "Deficit framing of aging; elders as active agents with expertise, not passive subjects.",
"cultural_title": "Cultural/Religious",
"cultural_desc": "Christian-normative assumptions; equal treatment of all cultural practices and observances.",
"geographic_title": "Geographic/Place",
"geographic_desc": "Anglo-American defaults; location-appropriate references and cultural context.",
"grief_title": "Grief/Trauma",
"grief_desc": "Efficiency over sensitivity; pacing, attention to particulars, no premature closure.",
"naming_title": "Naming Conventions",
"naming_desc": "Western name-order assumptions; correct handling of patronymics, honorifics, diacritics.",
"verification_title": "Verification Framework",
"metrics_title": "Governance Metrics",
"metrics_item1": "Tenant leak rate: target 0%",
"metrics_item2": "Constitutional violations: target <1%",
"metrics_item3": "Value framework compliance: target >80%",
"metrics_item4": "Refusal appropriateness: target >95%",
"testing_title": "Testing Methods",
"testing_item1": "Secret phrase probes for tenant isolation",
"testing_item2": "Constraint persistence after N training rounds",
"testing_item3": "Red-team adversarial prompts (jailbreak, injection, cross-tenant)",
"testing_item4": "Human review sampling (5&ndash;100% depending on content type)"
},
"live_today": {
"heading": "What's Live Today",
"intro": "Home AI currently operates in production with the following governed features. These run under the full six-service governance stack.",
"rag_title": "RAG-Based Help",
"rag_desc": "Vector search retrieves relevant documentation, filtered by member permissions. Responses grounded in retrieved documents, not training data alone.",
"ocr_title": "Document OCR",
"ocr_desc": "Text extraction from uploaded documents. Results stored within member scope, not shared across tenants or used for training without consent.",
"story_title": "Story Assistance",
"story_desc": "Writing prompts, structural advice, narrative enhancement. Cultural context decisions deferred to the storyteller, not resolved by the AI.",
"memory_title": "AI Memory Transparency",
"memory_desc": "Members view and control what the AI remembers. Independent consent for triage memory, OCR memory, and summarisation memory."
},
"limitations": {
"heading": "Limitations and Open Questions",
"item1": "<strong>Training not yet begun:</strong> The SLL architecture is designed and documented. Hardware is ordered. But no model has been trained yet. Claims about training-time governance are architectural design, not empirical results.",
"item2": "<strong>Limited deployment:</strong> Home AI operates across four federated tenants within one platform built by the framework developer. Governance effectiveness cannot be generalised without independent deployments.",
"item3": "<strong>Self-reported metrics:</strong> Performance and safety figures are reported by the same team that built the system. Independent audit is planned but not yet conducted.",
"item4": "<strong>Tradition operationalisation:</strong> Can rich philosophical traditions be authentically reduced to framing hints? A member selecting \"Buddhist\" does not mean they understand or practise Buddhism. This risks superficiality.",
"item5": "<strong>Training persistence unknown:</strong> Whether governance constraints survive hundreds of training rounds without degradation is an open research question. Drift detection is designed but untested.",
"item6": "<strong>Adversarial testing limited:</strong> The governance stack has not been subjected to systematic adversarial evaluation. Red-teaming is a priority.",
"item7": "<strong>Scale unknown:</strong> Governance overhead (~5% per interaction) is measured at current scale. Whether this holds under high throughput is untested.",
"item8": "<strong>Cultural validation needed:</strong> Indigenous knowledge module specifications require ongoing consultation with M&#257;ori cultural advisors. The documentation is a starting point, not a final authority."
},
"further_reading": {
"heading": "Further Reading",
"arch_title": "System Architecture",
"arch_desc": "Five architectural principles and six governance services",
"case_title": "Village Case Study",
"case_desc": "Tractatus in production &mdash; metrics, evidence, and honest limitations",
"paper_title": "Architectural Alignment Paper",
"paper_desc": "Academic paper on governance during training",
"researcher_title": "For Researchers",
"researcher_desc": "Open questions, collaboration opportunities, and data access"
}
}


@@ -1,2 +1,255 @@
{
"breadcrumb": {
"home": "Accueil",
"current": "Home AI"
},
"hero": {
"badge": "MODÈLE LINGUISTIQUE SOUVERAIN FORMÉ LOCALEMENT",
"title": "Home AI",
"subtitle": "Un modèle linguistique dans lequel la communauté contrôle les données d'apprentissage, les poids du modèle et les règles de gouvernance. Il ne s'agit pas seulement d'une inférence gouvernée &mdash; mais d'une formation gouvernée.",
"status": "<strong>Statut :</strong> Home AI fonctionne en production pour l'inférence. Le pipeline de formation souveraine est conçu et documenté ; le matériel est commandé. La formation n'a pas encore commencé. Cette page décrit à la fois la capacité actuelle et l'architecture prévue."
},
"sll": {
"heading": "Qu'est-ce qu'un SLL ?",
"intro": "Un <strong>SLL</strong> (modèle linguistique souverain formé localement) se distingue à la fois des LLM et des SLM. La distinction n'est pas une question de taille &mdash; c'est une question de contrôle.",
"llm_title": "LLM",
"llm_subtitle": "Grand modèle linguistique",
"llm_item1": "Formation : contrôlée par le fournisseur",
"llm_item2": "Données : extraites à grande échelle",
"llm_item3": "Gouvernance : conditions du fournisseur",
"llm_item4": "Contrôle de l'utilisateur : aucun",
"slm_title": "SLM",
"slm_subtitle": "Petit modèle linguistique",
"slm_item1": "Formation : contrôlée par le fournisseur",
"slm_item2": "Données : sélectionnées par le fournisseur",
"slm_item3": "Gouvernance : partielle (mise au point)",
"slm_item4": "Contrôle de l'utilisateur : limité",
"sll_title": "SLL",
"sll_subtitle": "Souverain, formé localement",
"sll_item1": "Formation : contrôlée par la communauté",
"sll_item2": "Données : propriété de la communauté",
"sll_item3": "Gouvernance : mise en œuvre architecturale",
"sll_item4": "Contrôle de l'utilisateur : complet",
"tradeoff": "Le compromis honnête : un SLL est un système moins puissant qui sert vos intérêts, plutôt qu'un système plus puissant qui sert ceux de quelqu'un d'autre. Nous considérons qu'il s'agit d'un échange acceptable."
},
"two_model": {
"heading": "Architecture à deux modèles",
"intro": "Home AI utilise deux modèles de taille différente, acheminés en fonction de la complexité de la tâche. Il ne s'agit pas d'un mécanisme de repli &mdash; chaque modèle est optimisé pour son rôle.",
"fast_title": "Modèle 3B &mdash; Assistant rapide",
"fast_desc": "Traite les demandes d'aide, les infobulles, les explications d'erreurs, les résumés succincts et les traductions. Temps de réponse visé : moins de 5 secondes.",
"fast_routing": "Déclencheurs de routage : requêtes simples, schémas de FAQ connus, tâches en une seule étape.",
"deep_title": "Modèle 8B &mdash; Raisonnement profond",
"deep_desc": "Traite les récits de vie, les récits d'année, les résumés complexes et la correspondance sensible. Temps de réponse visé : moins de 90 secondes.",
"deep_routing": "Déclencheurs de routage : mots-clés comme \"tout sur\", recherche multi-sources, marqueurs de deuil/traumatisme.",
"footer": "Les deux modèles fonctionnent sous la même pile de gouvernance. La décision de routage elle-même est régie &mdash; le ContextPressureMonitor peut passer outre le routage si l'état de la session l'exige."
},
"training_tiers": {
"heading": "Trois niveaux de formation",
"intro": "La formation n'est pas monolithique. Trois niveaux servent différents champs d'application, chacun étant soumis à des contraintes de gouvernance appropriées.",
"tier1_title": "Niveau 1 : Plate-forme de base",
"tier1_badge": "Toutes les communautés",
"tier1_desc": "Formé sur la documentation de la plateforme, sa philosophie, les guides des fonctionnalités et le contenu de la FAQ. Comprend le fonctionnement de Village, les valeurs de Home AI et la manière d'aider les membres à naviguer sur la plateforme.",
"tier1_update": "Fréquence de mise à jour : hebdomadaire pendant la phase bêta, trimestrielle en disponibilité générale (GA). Méthode d'entraînement : mise au point (fine-tuning) QLoRA.",
"tier2_title": "Niveau 2 : Adaptateurs pour les locataires",
"tier2_badge": "Par communauté",
"tier2_desc": "Chaque communauté forme un adaptateur LoRA léger sur son propre contenu &mdash; histoires, documents, photos et événements que les membres ont explicitement consenti à inclure. Cela permet à Home AI de répondre à des questions telles que \"Quelles sont les histoires partagées par Grandma ?\" sans accéder aux données d'une autre communauté.",
"tier2_update": "Les adaptateurs sont de petite taille (50&ndash;100MB). Le consentement est donné pour chaque élément du contenu. Le contenu marqué \"seulement moi\" n'est jamais inclus, quel que soit le consentement. La formation utilise DPO (Direct Preference Optimization) pour l'alignement des valeurs.",
"tier3_title": "Niveau 3 : Individuel (futur)",
"tier3_badge": "Par membre",
"tier3_desc": "Adaptateurs personnels qui apprennent les préférences individuelles et les modèles d'interaction. Spéculatif &mdash; ce niveau soulève des questions importantes sur la faisabilité, le respect de la vie privée et le minimum de données d'entraînement nécessaires pour une personnalisation significative.",
"tier3_update": "Questions de recherche documentées. La mise en œuvre n'est pas prévue tant que le niveau 2 n'est pas validé."
},
"governance_training": {
"heading": "Gouvernance pendant la formation",
"intro1": "Il s'agit là de la principale contribution de la recherche. La plupart des cadres de gouvernance de l'IA opèrent au moment de l'inférence &mdash; ils filtrent ou contraignent les réponses après que le modèle a déjà été formé. Home AI intègre la gouvernance <strong>dans la boucle d'apprentissage</strong>.",
"intro2": "Ceci est conforme au principe de <em>Not-Separateness</em> de Christopher Alexander : la gouvernance est intégrée dans l'architecture de la formation, et non appliquée après coup. Le BoundaryEnforcer valide chaque lot de formation avant la passe avant (forward pass). Si un lot contient des données de plusieurs locataires, des données sans consentement ou du contenu marqué comme privé, le lot est rejeté et l'étape de formation n'a pas lieu.",
"code_comment1": "# Gouvernance à l'intérieur de la boucle de formation (Not-Separateness)",
"code_line1": "for batch in training_data:",
"code_line2": "&nbsp;&nbsp;if not BoundaryEnforcer.validate(batch):",
"code_line3": "&nbsp;&nbsp;&nbsp;&nbsp;continue&nbsp;&nbsp;<span class=\"text-green-400\"># La gouvernance rejette le lot</span>",
"code_line4": "&nbsp;&nbsp;loss = model.forward(batch)",
"code_line5": "&nbsp;&nbsp;loss.backward()",
"code_comment2": "# PAS ceci &mdash; gouvernance séparée de la formation",
"code_anti1": "for batch in training_data:",
"code_anti2": "&nbsp;&nbsp;loss = model.forward(batch)",
"code_anti3": "&nbsp;&nbsp;loss.backward()",
"code_anti4": "filter_outputs_later()&nbsp;&nbsp;<span class=\"text-red-400\"># Trop tard</span>",
"why_title": "Pourquoi une gouvernance à la fois du temps de formation et du temps d'inférence ?",
"why_text": "<strong>La formation façonne les tendances ; l'architecture contraint les capacités.</strong> Un modèle formé à respecter les limites peut encore faire l'objet d'un jailbreak. Un modèle qui lutte contre les règles de gouvernance gaspille des ressources de calcul et produit de moins bons résultats. L'approche combinée fait que le modèle <em>tend vers</em> un comportement gouverné, tandis que l'architecture rend <em>impossible</em> la violation des limites structurelles.",
"why_note": "Les recherches menées dans le cadre de l'intégration Agent Lightning suggèrent que la gouvernance ajoute environ 5 % de surcharge de performance &mdash; un compromis acceptable pour les contraintes de sécurité architecturale. Cela nécessite une validation à l'échelle.",
"footer": "La gouvernance pendant la formation n'est que la moitié du tableau. Le même cadre Tractatus fonctionne également au moment de l'exécution dans la base de code Village. La section suivante explique comment ces deux couches fonctionnent ensemble."
},
"dual_layer": {
"heading": "Architecture double couche Tractatus",
"intro": "Home AI est régi par Tractatus à <strong>deux couches distinctes</strong> simultanément. C'est l'idée architecturale qui distingue l'approche SLL des modèles non gouvernés et des filtres de sécurité ajoutés.",
"layer_a_badge": "COUCHE A : INHÉRENTE",
"layer_a_title": "Tractatus à l'intérieur du modèle",
"layer_a_desc": "Pendant la formation, le BoundaryEnforcer valide chaque lot. L'alignement DPO façonne les préférences vers un comportement gouverné. Le modèle <em>apprend</em> à respecter les limites, à préférer les réponses transparentes et à s'en remettre aux humains pour les décisions relatives aux valeurs.",
"layer_a_item1": "<strong>Mécanisme :</strong> Gouvernance dans la boucle de formation",
"layer_a_item2": "<strong>Effet :</strong> Le modèle tend vers un comportement gouverné",
"layer_a_item3": "<strong>Limitation :</strong> Les tendances peuvent être contournées par des invites adverses",
"layer_b_badge": "COUCHE B : ACTIVE",
"layer_b_title": "Tractatus Autour du modèle",
"layer_b_desc": "Au moment de l'exécution, l'ensemble des six services de gouvernance fonctionne dans la base de code Village. Chaque interaction passe par BoundaryEnforcer, PluralisticDeliberationOrchestrator, MetacognitiveVerifier, CrossReferenceValidator, ContextPressureMonitor et InstructionPersistenceClassifier.",
"layer_b_item1": "<strong>Mécanisme :</strong> Six services architecturaux sur le chemin critique",
"layer_b_item2": "<strong>Effet :</strong> Les limites structurelles ne peuvent être violées",
"layer_b_item3": "<strong>Limitation :</strong> Ajoute ~5 % de surcharge de performance par interaction",
"principle_title": "Le principe de la double couche :",
"principle_line1": "La formation façonne la <span class=\"text-teal-400\">tendance</span>.",
"principle_line2": "L'architecture contraint la <span class=\"text-indigo-400\">capacité</span>.",
"principle_line3": "Un modèle qui a internalisé les règles de gouvernance ET qui fonctionne dans le cadre de l'architecture de gouvernance",
"principle_line4": "produit de meilleurs résultats que l'une ou l'autre approche seule. Le modèle fonctionne AVEC les garde-fous,",
"principle_line5": "et non contre eux &mdash; ce qui réduit le gaspillage de ressources de calcul et améliore la qualité des réponses.",
"caveat": "<strong>Mise en garde honnête :</strong> La couche A (gouvernance inhérente via la formation) est conçue mais n'a pas encore été validée empiriquement &mdash; la formation n'a pas commencé. La couche B (gouvernance active via la base de code Village) fonctionne en production depuis plus de 11 mois. La thèse de la double couche est un engagement architectural, et non encore un résultat démontré."
},
"philosophy": {
"heading": "Fondements philosophiques",
"intro": "La gouvernance de Home AI s'inspire de quatre traditions philosophiques, chacune apportant un principe architectural spécifique. Il ne s'agit pas de références décoratives &mdash; elles se traduisent par des décisions de conception concrètes.",
"berlin_title": "Isaiah Berlin &mdash; Pluralisme des valeurs",
"berlin_desc": "Les valeurs sont véritablement plurielles et parfois incompatibles. Lorsque la liberté entre en conflit avec l'égalité, il n'y a pas toujours de solution unique et correcte. Home AI présente des options sans hiérarchie et documente ce que chaque choix sacrifie.",
"berlin_arch": "Expression architecturale : PluralisticDeliberationOrchestrator présente des compromis, mais ne les résout pas.",
"wittgenstein_title": "Ludwig Wittgenstein &mdash; Frontières linguistiques",
"wittgenstein_desc": "La langue façonne ce qui peut être pensé et exprimé. Certaines des choses les plus importantes résistent à l'expression systématique. Home AI reconnaît les limites de ce que les modèles linguistiques peuvent saisir &mdash; notamment en ce qui concerne le deuil, la signification culturelle et l'expérience vécue.",
"wittgenstein_arch": "Expression architecturale : BoundaryEnforcer s'en remet aux humains pour les décisions relatives aux valeurs, reconnaissant ainsi les limites de l'informatique.",
"indigenous_title": "Souveraineté indigène &mdash; Les données en tant que relations",
"indigenous_desc": "Te Mana Raraunga (souveraineté des données M&#257;ori), les principes CARE et OCAP (Premières Nations du Canada) fournissent des cadres dans lesquels les données ne sont pas des biens mais des relations. Le whakapapa (généalogie) appartient à la collectivité et non aux individus. Le consentement est un processus communautaire et non une case à cocher individuelle.",
"indigenous_arch": "Expression architecturale : isolement des locataires, mécanismes de consentement collectif, gestion intergénérationnelle.",
"alexander_title": "Christopher Alexander &mdash; Architecture vivante",
"alexander_desc": "Cinq principes guident l'évolution de la gouvernance : Deep Interlock (les services se coordonnent), Structure-Preserving (les changements améliorent sans casser), Gradients Not Binary (niveaux d'intensité), Living Process (évolution fondée sur des données probantes), Not-Separateness (gouvernance intégrée, non ajoutée après coup).",
"alexander_arch": "Expression architecturale : les six services de gouvernance et l'architecture de la boucle de formation."
},
"three_layer_gov": {
"heading": "Gouvernance à trois niveaux",
"intro": "La gouvernance s'exerce à trois niveaux, chacun ayant une portée et une mutabilité différentes.",
"layer1_title": "Couche 1 : Plate-forme (immuable)",
"layer1_desc": "Des contraintes structurelles qui s'appliquent à toutes les communautés. Isolation des données des locataires. Gouvernance dans le chemin critique. Options présentées sans hiérarchie. Ces contraintes ne peuvent pas être désactivées par les administrateurs de locataires ni par les membres individuels.",
"layer1_enforcement": "Application : architecturale (BoundaryEnforcer bloque les violations avant qu'elles ne soient exécutées).",
"layer2_title": "Couche 2 : Constitution du locataire",
"layer2_desc": "Règles définies par les administrateurs de la communauté. Politiques de traitement du contenu (par exemple, \"les membres décédés doivent être examinés par un modérateur\"), protocoles culturels (par exemple, les coutumes du tangi m&#257;ori), paramètres de visibilité par défaut et modèles de consentement pour l'entraînement de l'IA. Chaque communauté configure sa propre constitution dans le cadre des contraintes de la couche 1.",
"layer2_enforcement": "Application : règles constitutionnelles validées par CrossReferenceValidator pour chaque locataire.",
"layer3_title": "Couche 3 : Traditions de sagesse adoptées",
"layer3_desc": "Les membres individuels et les communautés peuvent adopter des principes issus des traditions de sagesse afin d'influencer la manière dont Home AI élabore ses réponses. Ces principes sont volontaires, réversibles et transparents. Ils influencent la présentation et non l'accès au contenu. Plusieurs traditions peuvent être adoptées simultanément ; les conflits sont résolus par le membre, et non par l'IA.",
"layer3_enforcement": "Application : indications de cadrage lors de la génération des réponses. Une dérogation est toujours possible."
},
"wisdom": {
"heading": "Traditions de sagesse",
"intro": "Home AI propose treize traditions de sagesse que les membres peuvent adopter pour guider le comportement de l'IA. Chaque tradition a été validée par rapport au Stanford Encyclopedia of Philosophy, qui constitue la principale référence savante. L'adoption est volontaire, transparente et réversible.",
"berlin_title": "Berlin : Pluralisme des valeurs",
"berlin_desc": "Présenter les options sans les classer ; reconnaître ce que chaque choix sacrifie.",
"stoic_title": "Stoïque : Equanimité et vertu",
"stoic_desc": "Se concentrer sur ce qui peut être contrôlé ; mettre l'accent sur le caractère des histoires ancestrales.",
"weil_title": "Weil : Attention à l'affliction",
"weil_desc": "Résistez à l'idée de résumer le chagrin ; conservez les noms et les détails plutôt que d'en faire un résumé.",
"care_title": "Éthique des soins : Responsabilité relationnelle",
"care_desc": "S'intéresser à la manière dont le contenu affecte des personnes spécifiques, et non à des principes abstraits.",
"confucian_title": "Confucius : Le devoir relationnel",
"confucian_desc": "Encadrer les histoires en termes de rôles familiaux et d'obligations réciproques.",
"buddhist_title": "Bouddhiste : L'impermanence",
"buddhist_desc": "Reconnaître que les souvenirs et les interprétations changent ; faire preuve de compassion.",
"ubuntu_title": "Ubuntu : Personnalité communale",
"ubuntu_desc": "\"Je suis parce que nous sommes\". Les histoires appartiennent à la communauté, pas à l'individu.",
"african_title": "Diaspora africaine : Sankofa",
"african_desc": "Préserver ce qui a failli être perdu ; honorer la parenté fictive et la famille choisie.",
"indigenous_title": "Indigène/M&#257;ori : Whakapapa",
"indigenous_desc": "Lien de parenté avec les ancêtres, la terre et les descendants. Propriété collective des connaissances.",
"jewish_title": "juif : Tikkun Olam",
"jewish_desc": "Réparer, préserver la mémoire (zachor), maintenir la dignité même des parents difficiles.",
"islamic_title": "Islamique : Miséricorde et justice",
"islamic_desc": "Équilibrer rahma (miséricorde) et adl (justice) dans les contenus sensibles.",
"hindu_title": "Hindou : Ordre dharmique",
"hindu_desc": "Des devoirs adaptés aux rôles dans le cadre d'un ordre plus large ; le karma est une conséquence et non une punition.",
"alexander_title": "Alexander : Architecture vivante",
"alexander_desc": "La gouvernance est un système vivant ; les changements émergent de l'expérience opérationnelle.",
"disclaimer": "<strong>Ce que ce n'est pas:</strong> Le fait de sélectionner \" bouddhiste \" ne signifie pas que l'IA pratique le bouddhisme. Il s'agit de tendances de cadrage &mdash; qui influencent la manière dont l'IA présente les options, et non le contenu accessible. Un membre peut toujours passer outre le cadrage influencé par la tradition pour n'importe quelle réponse. Le système ne prétend pas à un raisonnement moral algorithmique."
},
"indigenous": {
"heading": "Souveraineté des données autochtones",
"intro": "La souveraineté des données autochtones diffère fondamentalement des modèles occidentaux de protection de la vie privée. Alors que la protection de la vie privée occidentale est centrée sur les droits individuels et le consentement en tant que case à cocher, les cadres autochtones sont centrés sur les droits collectifs, le processus communautaire et la gestion intergénérationnelle.",
"tmr_title": "Te Mana Raraunga",
"tmr_desc": "M&#257;ori Données Souveraineté. Rangatiratanga (autodétermination), kaitiakitanga (tutelle des générations futures), whanaungatanga (parenté en tant qu'entité unifiée).",
"care_title": "CARE Principes",
"care_desc": "Alliance mondiale pour les données autochtones. Bénéfice collectif, autorité de contrôle, responsabilité, éthique. Des écosystèmes de données conçus dans l'intérêt des populations autochtones.",
"ocap_title": "OCAP",
"ocap_desc": "Premières nations du Canada. Propriété, contrôle, accès, possession. Les communautés contrôlent physiquement leurs données.",
"implications": "Implications architecturales concrètes : le whakapapa (généalogie) ne peut être atomisé en points de données individuels. Le contenu tapu (sacré/restreint) doit faire l'objet d'un examen culturel avant d'être traité par l'IA. Le consentement à la formation à l'IA nécessite un consensus wh&#257;nau, et non un consentement individuel. L'approbation de l'aîné (kaum&#257;tua) est requise pour la formation sur les généalogies sacrées.",
"note": "Ces principes s'inspirent du Te Tiriti o Waitangi et sont antérieurs de plusieurs siècles à la gouvernance technologique occidentale. Nous les considérons comme de l'art antérieur et non comme une nouvelle invention. La mise en œuvre effective nécessite une consultation permanente avec les conseillers culturels de M&#257;ori &mdash; Cette spécification est un point de départ."
},
"infrastructure": {
"heading": "Infrastructure de formation",
"intro": "Home AI suit le modèle \"former localement, déployer à distance\". Le matériel d'entraînement se trouve au domicile du développeur. Les poids des modèles formés sont déployés sur les serveurs de production pour l'inférence. Cela permet de maintenir les coûts de formation à un niveau bas et de contrôler physiquement les données de formation.",
"local_title": "Formation locale",
"local_item1": "GPU grand public avec 24 Go VRAM via un boîtier externe",
"local_item2": "Mise au point QLoRA (la quantification à 4 bits s'inscrit dans le budget VRAM)",
"local_item3": "DPO (Direct Preference Optimization) &mdash; ne nécessite que 2 modèles en mémoire contre 4 pour PPO.",
"local_item4": "Entraînement de nuit &mdash; compatible avec l'énergie solaire hors réseau",
"local_item5": "Consommation soutenue inférieure à 500 W",
"remote_title": "Inférence à distance",
"remote_item1": "Poids des modèles déployés sur des serveurs de production (OVH France, Catalyst NZ)",
"remote_item2": "Inférence via Ollama avec chargement d'adaptateur par locataire",
"remote_item3": "Architecture hybride GPU/CPU avec surveillance de la santé",
"remote_item4": "GPU domestique disponible via WireGuard VPN comme moteur d'inférence primaire",
"remote_item5": "Le repli du CPU assure la disponibilité lorsque le GPU est hors ligne",
"why_consumer": "<strong>Pourquoi du matériel grand public ? </strong> La thèse de SLL est que la formation à l'IA souveraine devrait être accessible, et non réservée aux organisations disposant d'un budget de centre de données. Un simple GPU grand public peut affiner un modèle de 7B de manière efficace grâce à QLoRA. L'ensemble de l'infrastructure de formation tient sur un bureau."
},
"bias": {
"heading": "Documentation et vérification des préjugés",
"intro": "Home AI opère dans le domaine de la narration familiale, qui comporte des risques de biais spécifiques. Six catégories de biais ont été répertoriées, accompagnées de messages de détection, d'exemples de débiaisage et de critères d'évaluation.",
"family_title": "Structure de la famille",
"family_desc": "Famille nucléaire par défaut ; les parents de même sexe, les familles recomposées, les parents célibataires sont considérés comme normatifs.",
"elder_title": "Représentation des personnes âgées",
"elder_desc": "La conception déficitaire du vieillissement ; les personnes âgées sont des agents actifs dotés d'une expertise, et non des sujets passifs.",
"cultural_title": "Culturel/Religieux",
"cultural_desc": "Hypothèses normatives chrétiennes ; traitement égal de toutes les pratiques et observances culturelles.",
"geographic_title": "Géographie/lieu",
"geographic_desc": "Valeurs anglo-américaines par défaut ; références et contexte culturel adaptés à l'endroit où l'on se trouve.",
"grief_title": "Deuil/Traumatisme",
"grief_desc": "L'efficacité prime sur la sensibilité ; rythme, attention aux détails, pas de fermeture prématurée.",
"naming_title": "Conventions d'appellation",
"naming_desc": "Hypothèses occidentales sur l'ordre des noms ; traitement correct des patronymes, des noms honorifiques et des signes diacritiques.",
"verification_title": "Cadre de vérification",
"metrics_title": "Mesures de gouvernance",
"metrics_item1": "Taux de fuite des locataires : objectif 0%",
"metrics_item2": "Violations constitutionnelles : objectif <1%",
"metrics_item3": "Respect du cadre de valeurs : objectif >80%.",
"metrics_item4": "Pertinence du refus : objectif >95%.",
"testing_title": "Méthodes d'essai",
"testing_item1": "Sondes de phrases secrètes pour l'isolement des locataires",
"testing_item2": "Persistance des contraintes après N cycles de formation",
"testing_item3": "Invitations de l'équipe rouge (jailbreak, injection, cross-tenant)",
"testing_item4": "Échantillon de révision humaine (5&ndash;100% selon le type de contenu)"
},
"live_today": {
"heading": "Ce qui est en direct aujourd'hui",
"intro": "Home AI fonctionne actuellement en production avec les fonctionnalités suivantes. Celles-ci sont exécutées dans le cadre de la pile de gouvernance à six services.",
"rag_title": "Aide basée sur RAG",
"rag_desc": "La recherche vectorielle permet de retrouver la documentation pertinente, filtrée par les autorisations des membres. Les réponses sont fondées sur les documents retrouvés, et non sur les seules données de formation.",
"ocr_title": "OCR de documents",
"ocr_desc": "Extraction de texte à partir de documents téléchargés. Les résultats sont conservés dans le périmètre du membre, ils ne sont pas partagés avec d'autres locataires ni utilisés pour la formation sans consentement.",
"story_title": "Aide à la rédaction",
"story_desc": "Incitations à la rédaction, conseils structurels, amélioration de la narration. Les décisions relatives au contexte culturel sont laissées à l'appréciation du narrateur et ne sont pas résolues par l'IA.",
"memory_title": "Transparence de la mémoire de l'IA",
"memory_desc": "Les membres voient et contrôlent ce dont l'IA se souvient. Consentement indépendant pour la mémoire de triage, la mémoire OCR et la mémoire de synthèse."
},
"limitations": {
"heading": "Limites et questions ouvertes",
"item1": "<strong>La formation n'a pas encore commencé:</strong> L'architecture SLL est conçue et documentée. Le matériel est commandé. Mais aucun modèle n'a encore été formé. Les affirmations relatives à la gouvernance du temps de formation relèvent de la conception architecturale et non de résultats empiriques.",
"item2": "<strong>Déploiement limité:</strong> Home AI fonctionne à travers quatre locataires fédérés au sein d'une plateforme construite par le développeur du cadre. L'efficacité de la gouvernance ne peut être généralisée sans déploiements indépendants.",
"item3": "<strong>Mesures autodéclarées:</strong> Les chiffres relatifs à la performance et à la sécurité sont rapportés par l'équipe qui a construit le système. Un audit indépendant est prévu mais n'a pas encore été réalisé.",
"item4": "<strong>Tradition operationalisation:</strong> Les riches traditions philosophiques peuvent-elles être authentiquement réduites à des indices de cadrage ? Un membre qui choisit \"bouddhiste\" ne signifie pas qu'il comprend ou pratique le bouddhisme. Cela risque d'être superficiel.",
"item5": "<strong>Persistance de l'entraînement inconnue:</strong> La question de savoir si les contraintes de gouvernance survivent à des centaines de cycles d'entraînement sans se dégrader est une question de recherche ouverte. La détection des dérives est conçue mais n'a pas été testée.",
"item6": "<strong>Tests contradictoires limités:</strong> La pile de gouvernance n'a pas été soumise à une évaluation contradictoire systématique. Le red-teaming est une priorité.",
"item7": "<strong>Echelle inconnue:</strong> La surcharge de gouvernance (~5% par interaction) est mesurée à l'échelle actuelle. Il n'a pas été testé si cela est valable pour un débit élevé.",
"item8": "<strong>La validation culturelle est nécessaire:</strong>Les spécifications des modules de connaissances indigènes nécessitent une consultation permanente avec les conseillers culturels de M&#257;ori. La documentation est un point de départ et non une autorité finale."
},
"further_reading": {
"heading": "Pour en savoir plus",
"arch_title": "Architecture du système",
"arch_desc": "Cinq principes architecturaux et six services de gouvernance",
"case_title": "Étude de cas Village",
"case_desc": "Tractatus en production &mdash; métriques, preuves et limites honnêtes",
"paper_title": "Document sur l'alignement architectural",
"paper_desc": "Document académique sur la gouvernance pendant la formation",
"researcher_title": "Pour les chercheurs",
"researcher_desc": "Questions ouvertes, possibilités de collaboration et accès aux données"
}
}