From 91be0db15d28969e58ebeda876e1fe2287db0219 Mon Sep 17 00:00:00 2001 From: TheFlow Date: Mon, 23 Feb 2026 22:09:44 +1300 Subject: [PATCH] =?UTF-8?q?refactor:=20Rename=20"Home=20AI"=20=E2=86=92=20?= =?UTF-8?q?"Village=20AI"=20across=20entire=20codebase?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - 57 files modified, 5 files renamed (home-ai → village-ai) - HTML pages: all user-facing text, data-i18n attributes, anchor IDs, CSS classes - i18n JSON: keys (home_ai → village_ai) and values across en/de/fr/mi - Locale files renamed: home-ai.json → village-ai.json (4 languages) - Main page renamed: home-ai.html → village-ai.html - Research downloads: translated terms updated (French "IA domestique", Māori "AI ā-whare"/"AI kāinga" → "Village AI" per brand name rule) - JavaScript: navbar component, blog post scripts - Markdown: research timeline, steering vectors paper, taonga paper Aligns with community codebase rename (commit 21ab7bc0). "Village" is a brand name — stays untranslated in all languages. 
Co-Authored-By: Claude Opus 4.6 --- docs/markdown/research-timeline.md | 16 ++--- ...ng-vectors-mechanical-bias-sovereign-ai.md | 8 +-- ...tred-steering-governance-polycentric-ai.md | 10 ++-- .../architectural-alignment-policymakers.html | 2 +- public/architectural-alignment.html | 2 +- public/architecture.html | 12 ++-- public/docs.html | 2 +- .../architectural-alignment-academic-fr.html | 2 +- .../architectural-alignment-academic-mi.html | 2 +- ...phical-foundations-village-project-de.html | 4 +- ...phical-foundations-village-project-fr.html | 4 +- ...phical-foundations-village-project-mi.html | 2 +- ...ctors-mechanical-bias-sovereign-ai-de.html | 2 +- ...ctors-mechanical-bias-sovereign-ai-fr.html | 2 +- ...ctors-mechanical-bias-sovereign-ai-mi.html | 4 +- ...-vectors-mechanical-bias-sovereign-ai.html | 8 +-- ...steering-governance-polycentric-ai-de.html | 6 +- ...steering-governance-polycentric-ai-fr.html | 6 +- ...steering-governance-polycentric-ai-mi.html | 4 +- ...ed-steering-governance-polycentric-ai.html | 10 ++-- public/implementer.html | 58 +++++++++---------- public/index.html | 10 ++-- public/js/components/navbar.js | 8 +-- public/leader.html | 32 +++++----- public/locales/de/common.json | 4 +- public/locales/de/homepage.json | 6 +- public/locales/de/implementer.json | 14 ++--- public/locales/de/leader.json | 8 +-- public/locales/de/researcher.json | 22 +++---- .../de/{home-ai.json => village-ai.json} | 34 +++++------ public/locales/de/village-case-study.json | 4 +- public/locales/en/common.json | 4 +- public/locales/en/homepage.json | 6 +- public/locales/en/implementer.json | 14 ++--- public/locales/en/leader.json | 8 +-- public/locales/en/researcher.json | 20 +++---- .../en/{home-ai.json => village-ai.json} | 34 +++++------ public/locales/en/village-case-study.json | 4 +- public/locales/fr/common.json | 4 +- public/locales/fr/homepage.json | 6 +- public/locales/fr/implementer.json | 14 ++--- public/locales/fr/leader.json | 8 +-- 
public/locales/fr/researcher.json | 22 +++---- .../fr/{home-ai.json => village-ai.json} | 34 +++++------ public/locales/fr/village-case-study.json | 4 +- public/locales/mi/common.json | 4 +- public/locales/mi/homepage.json | 6 +- public/locales/mi/implementer.json | 10 ++-- public/locales/mi/leader.json | 6 +- public/locales/mi/researcher.json | 10 ++-- .../mi/{home-ai.json => village-ai.json} | 30 +++++----- public/locales/mi/village-case-study.json | 4 +- public/researcher.html | 38 ++++++------ public/timeline.html | 4 +- public/{home-ai.html => village-ai.html} | 46 +++++++-------- public/village-case-study.html | 6 +- scripts/publish-overtrust-blog-post.js | 18 +++--- scripts/publish-steering-vectors-blog-post.js | 4 +- .../publish-taonga-governance-blog-post.js | 2 +- scripts/seed-blog-posts.js | 2 +- scripts/update-cache-version.js | 2 +- 61 files changed, 341 insertions(+), 341 deletions(-) rename public/locales/de/{home-ai.json => village-ai.json} (87%) rename public/locales/en/{home-ai.json => village-ai.json} (87%) rename public/locales/fr/{home-ai.json => village-ai.json} (87%) rename public/locales/mi/{home-ai.json => village-ai.json} (93%) rename public/{home-ai.html => village-ai.html} (93%) diff --git a/docs/markdown/research-timeline.md b/docs/markdown/research-timeline.md index 890b0aa3..bab2e2df 100644 --- a/docs/markdown/research-timeline.md +++ b/docs/markdown/research-timeline.md @@ -25,7 +25,7 @@ tags: timeline, research, governance, indigenous, steering-vectors, alexander, w ## Overview -This document traces the intellectual and technical evolution of the Tractatus Framework and its ecosystem of projects, from early experiments with AI-augmented project management in early 2025 through to sovereign small language model governance and indigenous-centred polycentric steering in February 2026. 
The narrative follows ideas as they developed across five interconnected projects: SyDigital, Digital Sovereignty Passport, Family History, Tractatus, and the Village Home AI platform. +This document traces the intellectual and technical evolution of the Tractatus Framework and its ecosystem of projects, from early experiments with AI-augmented project management in early 2025 through to sovereign small language model governance and indigenous-centred polycentric steering in February 2026. The narrative follows ideas as they developed across five interconnected projects: SyDigital, Digital Sovereignty Passport, Family History, Tractatus, and the Village AI platform. --- @@ -310,7 +310,7 @@ The passport project demonstrated that governance architecture need not slow dev --- -## Phase 5: Village Home AI and Sovereign Deployment +## Phase 5: Village AI and Sovereign Deployment **Period:** December 2025 -- February 2026 **Platform commits:** 1 December 2025 (homepage promotion), 9 December 2025 (case study and technical details) @@ -318,7 +318,7 @@ The passport project demonstrated that governance architecture need not slow dev ### The Sovereign Architecture -The Village Home AI platform represented a fundamental shift from governance of cloud-hosted AI services to governance of locally trained and served models. The architecture: +The Village AI platform represented a fundamental shift from governance of cloud-hosted AI services to governance of locally trained and served models. The architecture: - **Tier 1 (Platform Base):** Llama 3.1 8B -- deep reasoning, complex governance decisions - **Tier 2 (Per-Tenant Adapters):** Llama 3.2 3B -- fast inference, routine operations @@ -401,7 +401,7 @@ The paper surveyed five steering vector techniques: ### The Sovereign Advantage -None of these techniques are available through commercial API endpoints.
Only sovereign deployments with full access to model weights and activations can extract, inject, and calibrate steering vectors. This makes the Village Home AI's QLoRA-fine-tuned Llama models uniquely positioned to address mechanical bias -- and it reframes the choice between cloud APIs and local deployment as a governance question, not merely a cost or performance question. +None of these techniques are available through commercial API endpoints. Only sovereign deployments with full access to model weights and activations can extract, inject, and calibrate steering vectors. This makes the Village AI's QLoRA-fine-tuned Llama models uniquely positioned to address mechanical bias -- and it reframes the choice between cloud APIs and local deployment as a governance question, not merely a cost or performance question. ### Decolonial Reading (v1.1) @@ -500,7 +500,7 @@ The evolution of indigenous values across the project timeline is not a separate **Tractatus (October 2025):** Cultural DNA rules (inst_085-089) architecturally enforced. CARE Principles compliance as structural requirement. Te Tiriti alignment in all cultural decisions. Language parity obligations. -**Village Home AI (December 2025):** Sovereign deployment specifically enables cultural governance impossible through commercial APIs. Training data never leaves community infrastructure. Governance by community design. +**Village AI (December 2025):** Sovereign deployment specifically enables cultural governance impossible through commercial APIs. Training data never leaves community infrastructure. Governance by community design. **Steering Vectors (February 2026):** Representational bias reframed as statistical encoding of colonial knowledge hierarchies. Mechanical bias in Western-dominated training corpora identified as epistemic colonialism.
@@ -526,7 +526,7 @@ The trajectory is clear: from respectful acknowledgment, through structural inte | 28 Oct 2025 | Cultural DNA rules encoded (inst_085-089) | | 30 Oct 2025 | Christopher Alexander principles integrated (inst_090-094) | | 3 Nov 2025 | Agent Lightning integration and community launch | -| 1 Dec 2025 | Village Home AI platform announced | +| 1 Dec 2025 | Village AI platform announced | | 9 Dec 2025 | Village case study with sovereign two-model architecture details | | 19 Jan 2026 | Architectural alignment and korero counter-arguments deployed | | 7 Feb 2026 | Wittgenstein established as primary philosophical foundation | @@ -564,7 +564,7 @@ The immediate research trajectory includes: 1. **Indigenous peer review** of STO-RES-0010 -- the taonga governance paper cannot advance without Maori validation 2. **Sovereign training pipeline** -- extending governance from inference to training -3. **Steering vector implementation** on the Village Home AI platform (four-phase plan in STO-RES-0009) +3. **Steering vector implementation** on the Village AI platform (four-phase plan in STO-RES-0009) 4. **Multi-project governance scaling** -- extending architectural enforcement across the full project ecosystem 5.
**Community engagement** -- broadening the framework's validation beyond a single-developer context @@ -575,5 +575,5 @@ The central question remains the one that started it all: **how do we work along **Document Metadata:** - **Version:** 1.0 - **Status:** Current -- **Projects Referenced:** SyDigital, Digital Sovereignty Passport, Family History, Tractatus, Village Home AI, Agent Lightning, Community, Platform Admin +- **Projects Referenced:** SyDigital, Digital Sovereignty Passport, Family History, Tractatus, Village AI, Agent Lightning, Community, Platform Admin - **Word Count:** ~4,500 diff --git a/docs/markdown/steering-vectors-mechanical-bias-sovereign-ai.md b/docs/markdown/steering-vectors-mechanical-bias-sovereign-ai.md index 54ab4cf2..15132172 100644 --- a/docs/markdown/steering-vectors-mechanical-bias-sovereign-ai.md +++ b/docs/markdown/steering-vectors-mechanical-bias-sovereign-ai.md @@ -14,7 +14,7 @@ This paper investigates whether a class of biases in large language models operates at a sub-reasoning, representational level analogous to motor automaticity in human cognition, and whether steering vector techniques can intervene at this level during inference. We distinguish between *mechanical bias* (statistical patterns that fire at the embedding and early-layer representation level before deliberative processing begins) and *reasoning bias* (distortions that emerge through multi-step chain-of-thought reasoning). Drawing on empirical work in Contrastive Activation Addition (CAA), Representation Engineering (RepE), FairSteer, Direct Steering Optimization (DSO), and Anthropic's sparse autoencoder feature steering, we assess the maturity of each technique and its applicability to sovereign small language models (SLMs) trained and served locally.[^sll] -[^sll]: We use "sovereign small language model" (SLM) for continuity with the technical literature.
In the Tractatus framework (STO-INN-0003, v2.1; Stroh & Claude, 2026), these systems are designated "Sovereign Locally-trained Language Models" (SLLs) to emphasise that their distinguishing property is architectural sovereignty — governance authority over training, deployment, and inference — not parameter count. The SLL designation is the more precise term within the framework. We find that sovereign SLM deployments, specifically the Village Home AI platform using QLoRA-fine-tuned Llama 3.1/3.2 models, possess a structural advantage over API-mediated deployments: full access to model weights and activations enables steering vector extraction, injection, and evaluation that is architecturally impossible through commercial API endpoints. We propose a four-phase implementation path integrating steering vectors into the existing two-tier training architecture and Tractatus governance framework. +[^sll]: We use "sovereign small language model" (SLM) for continuity with the technical literature. In the Tractatus framework (STO-INN-0003, v2.1; Stroh & Claude, 2026), these systems are designated "Sovereign Locally-trained Language Models" (SLLs) to emphasise that their distinguishing property is architectural sovereignty — governance authority over training, deployment, and inference — not parameter count. The SLL designation is the more precise term within the framework. We find that sovereign SLM deployments, specifically the Village AI platform using QLoRA-fine-tuned Llama 3.1/3.2 models, possess a structural advantage over API-mediated deployments: full access to model weights and activations enables steering vector extraction, injection, and evaluation that is architecturally impossible through commercial API endpoints. We propose a four-phase implementation path integrating steering vectors into the existing two-tier training architecture and Tractatus governance framework.
--- @@ -174,9 +174,9 @@ This table reveals that **none of the steering vector techniques described in Se > > **Added reference:** Radhakrishnan, A., Beaglehole, D., Belkin, M., & Boix-Adserà, E. (2026). Exposing biases, moods, personalities, and abstract concepts hidden in large language models. *Science.* Published 19 February 2026. -### 4.2 The Village Home AI Platform +### 4.2 The Village AI Platform -The Village platform's Home AI system (Stroh, 2025-2026) is designed as a sovereign small language model (SLM) deployment with the following architecture: +The Village platform's Village AI system (Stroh, 2025-2026) is designed as a sovereign small language model (SLM) deployment with the following architecture: - **Base model:** Llama 3.1 8B (Tier 1 platform base) / Llama 3.2 3B (Tier 2 per-tenant adapters) - **Fine-tuning method:** QLoRA (4-bit quantised Low-Rank Adaptation) @@ -317,7 +317,7 @@ The indicator-wiper analogy suggests a useful distinction between biases that op Steering vector techniques (CAA, RepE, FairSteer, DSO, sparse autoencoder feature steering) provide the theoretical and practical toolkit for such intervention. Critically, these techniques require full access to model weights and activations -- access that is available exclusively in sovereign local deployments and architecturally unavailable through commercial API endpoints. -The Village Home AI platform, with its QLoRA-fine-tuned Llama models, two-tier training architecture, and Tractatus governance integration, is structurally positioned to pioneer the application of steering vectors to cultural bias mitigation in community-serving AI. The proposed four-phase implementation path is conservative, empirically grounded, and designed to produce measurable results within a 16-week timeline.
+The Village AI platform, with its QLoRA-fine-tuned Llama models, two-tier training architecture, and Tractatus governance integration, is structurally positioned to pioneer the application of steering vectors to cultural bias mitigation in community-serving AI. The proposed four-phase implementation path is conservative, empirically grounded, and designed to produce measurable results within a 16-week timeline. The indicator-wiper problem is solvable. The driver eventually recalibrates. The question for sovereign AI is whether we can accelerate that recalibration -- not by telling the model to "be less biased" (the equivalent of verbal instruction), but by directly adjusting the representations that encode the bias (the equivalent of physical relocation of the indicator stalk). diff --git a/docs/markdown/taonga-centred-steering-governance-polycentric-ai.md b/docs/markdown/taonga-centred-steering-governance-polycentric-ai.md index f4c9a6ca..75068607 100644 --- a/docs/markdown/taonga-centred-steering-governance-polycentric-ai.md +++ b/docs/markdown/taonga-centred-steering-governance-polycentric-ai.md @@ -153,11 +153,11 @@ In this model: | Actor | Role | Governance Source | Example | | --- | --- | --- | --- | -| Platform operator | Technical infrastructure, safety baselines, general debiasing | Tractatus framework, platform constitution | Village / Home AI team | +| Platform operator | Technical infrastructure, safety baselines, general debiasing | Tractatus framework, platform constitution | Village / Village AI team | | Iwi steering authority | Cultural steering for iwi-specific domains | Tikanga, iwi governance structures | Iwi data governance board | | Community trust | Domain-specific or locality-specific steering | Trust charter, community deliberation | Regional health trust, marae committee | | Application operator | Selects and composes steering packs for a specific deployment | Contractual, regulatory, relational obligations | School running a local
AI assistant | -| Affected community | Contests outputs, flags bias, triggers review | Rights of participation and appeal | Whanau using a Home AI deployment | +| Affected community | Contests outputs, flags bias, triggers review | Rights of participation and appeal | Whanau using a Village AI deployment | ### 3.4 Steering Registries and Taonga Services @@ -255,11 +255,11 @@ These rights structurally prevent the platform from becoming the default locus o --- -## 5. Case Study: Marae-Based Home AI Deployment +## 5. Case Study: Marae-Based Village AI Deployment ### 5.1 Scenario -A marae in Aotearoa operates a Home AI deployment for its whānau community. The system helps members write stories, summarise kōrero, and triage content for moderation. It runs a Llama 3.2 3B model, Quantised Low-Rank Adaptation (QLoRA) fine-tuned with community-contributed data, on local hardware. +A marae in Aotearoa operates a Village AI deployment for its whānau community. The system helps members write stories, summarise kōrero, and triage content for moderation. It runs a Llama 3.2 3B model, Quantised Low-Rank Adaptation (QLoRA) fine-tuned with community-contributed data, on local hardware. ### 5.2 Steering Configuration @@ -282,7 +282,7 @@ The deployment composes three steering packs: ### 5.3 Steering Provenance in Action -A community member asks the Home AI to summarise a kōrero about a recently deceased kuia. The steering provenance for this inference: +A community member asks the Village AI to summarise a kōrero about a recently deceased kuia. The steering provenance for this inference: ``` Steering Provenance: diff --git a/public/architectural-alignment-policymakers.html b/public/architectural-alignment-policymakers.html index 2f3ce3be..50395d57 100644 --- a/public/architectural-alignment-policymakers.html +++ b/public/architectural-alignment-policymakers.html @@ -372,7 +372,7 @@

7. From Existential Stakes to Everyday Governance

7.1 Why Existential Risk Framing Matters for Policy

-

The existential risk literature may seem remote from practical policy concerns about home AI assistants. The connection is essential:

+

The existential risk literature may seem remote from practical policy concerns about Village AI assistants. The connection is essential:

Containment architectures cannot be developed after the systems that need them exist. If advanced AI systems eventually pose existential risks—a possibility serious researchers take seriously—the governance infrastructure, institutional capacity, and cultural expectations required to contain them must be developed in advance.

Current deployment is the development ground. The patterns that work at village scale become the patterns available when stakes are higher. Constitutional gating implemented for home SLLs creates:

diff --git a/public/architectural-alignment.html b/public/architectural-alignment.html index 0d88744b..ccaf3870 100644 --- a/public/architectural-alignment.html +++ b/public/architectural-alignment.html @@ -225,7 +225,7 @@

4.4 From Existential Stakes to Everyday Deployment

-

Why apply frameworks designed for existential risk to home AI assistants? The answer lies in temporal structure:

+

Why apply frameworks designed for existential risk to Village AI assistants? The answer lies in temporal structure:

Containment architectures cannot be developed after the systems that need them exist. The tooling, governance patterns, cultural expectations, and institutional capacity for AI containment must be built in advance.

diff --git a/public/architecture.html b/public/architecture.html index 3c3127db..9a405002 100644 --- a/public/architecture.html +++ b/public/architecture.html @@ -560,8 +560,8 @@
- - Read the Full Home AI Story → + + Read the Full Village AI Story →

Two-model architecture, three training tiers, thirteen wisdom traditions, indigenous data sovereignty @@ -969,13 +969,13 @@

- +
2
-

Home AI: Sovereign Language Model

+

Village AI: Sovereign Language Model

The current research direction: applying all five architectural principles to model training, not just inference. BoundaryEnforcer operates inside the training loop. Three training tiers (platform, tenant, individual) with governance at each level. @@ -1000,7 +1000,7 @@

Status: inference in production; training pipeline designed, hardware ordered.

@@ -1078,7 +1078,7 @@

diff --git a/public/docs.html b/public/docs.html index 99154431..0435b6a3 100644 --- a/public/docs.html +++ b/public/docs.html @@ -662,7 +662,7 @@

Case Studies

Research Papers

    diff --git a/public/downloads/architectural-alignment-academic-fr.html b/public/downloads/architectural-alignment-academic-fr.html index 79cfdd5e..b6be8ceb 100644 --- a/public/downloads/architectural-alignment-academic-fr.html +++ b/public/downloads/architectural-alignment-academic-fr.html @@ -24,7 +24,7 @@ 4. Gouvernance organisationnelleManque de cohérence ; dépend de la culture de l'entreprisePas de validation externe ; conflits d'intérêts 5. Juridique/réglementaireMinimale ; la loi européenne sur l'IA est la première tentative d'envergurePas de coordination au niveau mondial ; l'application n'est pas claire -

    4.4 Des enjeux existentiels au déploiement quotidien

    Pourquoi appliquer des cadres conçus pour les risques existentiels aux assistants d'IA domestiques ? La réponse se trouve dans la structure temporelle :

    Les architectures de confinement ne peuvent pas être développées une fois que les systèmes qui en ont besoin existent. L'outillage, les modèles de gouvernance, les attentes culturelles et les capacités institutionnelles nécessaires à l'endiguement de l'IA doivent être élaborés à l'avance.

    Les déploiements à domicile et dans les villages constituent l'échelle appropriée pour ce développement. Ils permettent une itération sûre (les échecs à l'échelle domestique sont récupérables), une expérimentation diversifiée, une légitimité démocratique et un outillage pratique.

    +

    4.4 Des enjeux existentiels au déploiement quotidien

Pourquoi appliquer des cadres conçus pour les risques existentiels aux assistants Village AI ? La réponse se trouve dans la structure temporelle :

    Les architectures de confinement ne peuvent pas être développées une fois que les systèmes qui en ont besoin existent. L'outillage, les modèles de gouvernance, les attentes culturelles et les capacités institutionnelles nécessaires à l'endiguement de l'IA doivent être élaborés à l'avance.

    Les déploiements à domicile et dans les villages constituent l'échelle appropriée pour ce développement. Ils permettent une itération sûre (les échecs à l'échelle domestique sont récupérables), une expérimentation diversifiée, une légitimité démocratique et un outillage pratique.

    5. Le problème du pluralisme

    5.1 Le paradoxe du confinement

    Tout système suffisamment puissant pour contenir une IA avancée doit prendre des décisions sur les comportements à autoriser et à interdire. Ces décisions codent des valeurs. Le choix des contraintes est lui-même un choix parmi des systèmes de valeurs contestés.

    5.2 Trois approches inadéquates

    Valeurs universelles. Identifier les valeurs que tous les humains sont censés partager. Problème : ces valeurs sont moins universelles qu'il n'y paraît.

    Neutralité procédurale. Éviter les valeurs substantielles en codant des procédures neutres. Le problème : les procédures ne sont pas neutres.

    Plancher minimal. Encoder uniquement les contraintes minimales. Le problème : le plancher n'est pas aussi minimal qu'il n'y paraît.

    5.3 Pluralisme limité dans le cadre des contraintes de sécurité

    Nous ne pouvons pas résoudre le problème du pluralisme. Nous pouvons identifier une résolution partielle : quelles que soient les valeurs encodées, le système doit maximiser le choix significatif dans le respect des contraintes de sécurité.

    Le cadre du Tractatus incarne cela à travers des constitutions à plusieurs niveaux : principes fondamentaux (universels, explicites quant à leur normativité), règles de la plateforme (largement applicables, modifiables), constitutions du village (spécifiques à la communauté, gouvernées localement) et constitutions des membres (personnalisables).

    diff --git a/public/downloads/architectural-alignment-academic-mi.html b/public/downloads/architectural-alignment-academic-mi.html index af148a33..6abdd26e 100644 --- a/public/downloads/architectural-alignment-academic-mi.html +++ b/public/downloads/architectural-alignment-academic-mi.html @@ -24,7 +24,7 @@ 4. Whakahaere RōpūKāore i te taurite; e whakawhirinaki ana ki te ahurea ā-pakihiKāore he whakamana ā-waho; ngā pakarutanga o ngā painga whaiaro 5. Ture/WhakahaereHe iti noa; ko te Ture AI o te EU te whakamātau nui tuatahi.Kāore he whakakotahitanga ā-ao; kāore i te mārama te whakatinanatanga -

    4.4 Mai i ngā tūraru tūturu ki te whakamahinga ā-ia rā

    He aha ai e whakamahi ai i ngā anga i hangaia mō ngā tūraru o te noho ki ngā kaiāwhina AI kāinga? Kei roto i te hanganga wā te whakautu:

    Kāore e taea te whakawhanake i ngā hanganga here i muri i te wā e tū ana ngā pūnaha e hiahiatia ana. Me hanga i mua ngā taputapu, ngā tauira whakahaere, ngā tūmanako ahurea, me te āheinga whakahaere mō te here AI.

    Ko ngā whakaurunga ki te kāinga me te pā te rahi e tika ana mō tēnei whanaketanga. Ka whakarato ēnei i te whakamātau haumaru (ka taea te whakaora i ngā hapa i te rahi kāinga), i ngā whakamātau kanorau, i te mana ā-pāpāpori, me ngā taputapu whaihua.

    +

    4.4 Mai i ngā tūraru tūturu ki te whakamahinga ā-ia rā

    He aha ai e whakamahi ai i ngā anga i hangaia mō ngā tūraru o te noho ki ngā kaiāwhina Village AI? Kei roto i te hanganga wā te whakautu:

    Kāore e taea te whakawhanake i ngā hanganga here i muri i te wā e tū ana ngā pūnaha e hiahiatia ana. Me hanga i mua ngā taputapu, ngā tauira whakahaere, ngā tūmanako ahurea, me te āheinga whakahaere mō te here AI.

    Ko ngā whakaurunga ki te kāinga me te pā te rahi e tika ana mō tēnei whanaketanga. Ka whakarato ēnei i te whakamātau haumaru (ka taea te whakaora i ngā hapa i te rahi kāinga), i ngā whakamātau kanorau, i te mana ā-pāpāpori, me ngā taputapu whaihua.

    5. Te Raruraru o te Kanorau

    5.1 Te Parahanga Whakamutu

    Me whakatau e tētahi pūnaha kaha rawa hei pupuri i te AI matatau ngā whanonga e whakaaetia ana, e aukatia ana rānei. Ka whakauru ēnei whakataunga i ngā uara. Ko te kōwhiri i ngā here he kōwhiringa anō i waenga i ngā pūnaha uara e taupatupatuhia ana.

    5.2 Ngā huarahi e toru kāore i te whai hua

    Ngā uara ā-ao. Te tautuhi i ngā uara e ai ki te kī e tiritiri ana e ngā tāngata katoa. Ko te raru: kāore ēnei uara i te ā-ao pēnā i tā rātou āhua.

    Te taurite ā-tukanga. Te karo i ngā uara whai-kiko mā te whakakōwa i ngā tukanga taurite. Te raru: ehara ngā tukanga i te taurite.

    Te papa iti rawa. E whakakōwa ana i ngā here iti rawa anake. Ko te raru: ehara te papa i te iti rawa pēnei i tōna āhua.

    5.3 Te Kanorau Herea i roto i ngā Here Haumaru

    Kāore e taea e mātou te whakatau i te raru o te maha-āhua. Ka taea e mātou te tautuhi i tētahi whakatau wāhanga: ahakoa ngā uara kua whakaurua, me whakapiki rawa e te pūnaha te kōwhiringa whai tikanga i roto i ngā here haumaru.

    Ka whakaata te anga Tractatus i tēnei mā ngā ture matua ā-papa: ngā mātāpono matua (whānui, e mārama ana ki tō rātou āhua whakahau), ngā ture tūāpapa (whānui te whakamahinga, ka taea te whakarerekē), ngā ture ā-hapori (motuhake ki ia hapori, e whakahaerehia ana ā-rohe), me ngā ture ā-mema (ka taea te whakarite ā-tangata).

    diff --git a/public/downloads/philosophical-foundations-village-project-de.html b/public/downloads/philosophical-foundations-village-project-de.html index b30302b3..78e05576 100644 --- a/public/downloads/philosophical-foundations-village-project-de.html +++ b/public/downloads/philosophical-foundations-village-project-de.html @@ -33,9 +33,9 @@

    Dies sind keine technischen Probleme, die sich technisch lösen lassen. Es sind philosophische Probleme über die Art von Wesen, die wir sein wollen, und die Art von Gesellschaft, in der wir leben wollen.

    Die Erosion der epistemischen Autonomie

    Am besorgniserregendsten ist vielleicht die Aushöhlung dessen, was man als epistemische Autonomie bezeichnen könnte: die Fähigkeit, sich durch eigene Überlegungen Überzeugungen zu bilden, anstatt die Schlussfolgerungen von Systemen zu akzeptieren, die man nicht versteht. Wenn ein KI-System eine Antwort liefert, können die meisten Benutzer die Überlegungen, die zu dieser Antwort geführt haben, nicht bewerten. Sie müssen auf der Grundlage von Erfolgsbilanz und Reputation vertrauen oder misstrauen - Heuristiken, die sich leicht austricksen lassen.

    Dies stellt eine qualitative Veränderung in der Beziehung des Menschen zum Wissen dar. Frühere Technologien - Bücher, Bibliotheken, Suchmaschinen - erweiterten die menschliche Fähigkeit, Informationen zu finden und zu bewerten. Aktuelle KI-Systeme ersetzen diese Fähigkeit zunehmend, indem sie Schlussfolgerungen statt Beweise, Antworten statt Argumente liefern.

    Die langfristige Folge könnte eine Bevölkerung sein, die nicht nur die Informationsbeschaffung, sondern auch das Urteilsvermögen selbst ausgelagert hat - fähig, Fragen zu stellen, aber die Antworten nicht zu bewerten, abhängig von Systemen, deren Abläufe sie nicht überprüfen und deren Werte sie nicht hinterfragen können.

    IV. Ein philosophischer Ansatz für die KI-Entwicklung

    -

    Das Home AI-Konzept

    Als Antwort auf diese Herausforderungen entwickeln wir das, was wir "Home AI" nennen - ein kleines, lokal trainiertes Sprachmodell (SLL), das unter der Kontrolle der Gemeinschaft auf benutzergesteuerter Hardware arbeitet. Die charakteristischen Merkmale sind:

    Souveränität: Das Modell läuft auf Hardware, die der Gemeinschaft gehört oder von ihr kontrolliert wird. Die Trainingsdaten bleiben lokal. Es fließen keine Informationen an externe Systeme ohne ausdrückliche Zustimmung durch etablierte Governance-Verfahren.

    Transparenz: Gemeinschaften können überprüfen, was das Modell über sie weiß, wie es trainiert wurde und warum es bestimmte Ergebnisse liefert. Das KI-Gedächtnis ist keine Blackbox, sondern eine überprüfbare Aufzeichnung, die der Kontrolle der Gemeinschaft unterliegt.

    Philosophische Grundlegung: Das Modell wird unter ausdrücklicher Berücksichtigung der philosophischen Grundlagen entwickelt. Anstatt es nur auf seine Fähigkeiten hin zu optimieren und später Sicherheitsmaßnahmen hinzuzufügen, werden philosophische Einschränkungen bereits in den frühesten Phasen der Entwicklung berücksichtigt.

    Gemeinschaftliche Verwaltung: Jede Gemeinschaft konfiguriert das Verhalten ihres KI-Assistenten nach ihren eigenen Verfassungsprinzipien. Eine Gemeinschaft, die Direktheit schätzt, konfiguriert sich für Direktheit; eine Gemeinschaft, die Sanftheit schätzt, konfiguriert sich für Sanftheit. Die Plattform stellt die Infrastruktur zur Verfügung, die Gemeinschaften liefern die Werte.

    +

    Das Village AI-Konzept

    Als Antwort auf diese Herausforderungen entwickeln wir das, was wir "Village AI" nennen - ein kleines, lokal trainiertes Sprachmodell (SLL), das unter der Kontrolle der Gemeinschaft auf benutzergesteuerter Hardware arbeitet. Die charakteristischen Merkmale sind:

    Souveränität: Das Modell läuft auf Hardware, die der Gemeinschaft gehört oder von ihr kontrolliert wird. Die Trainingsdaten bleiben lokal. Es fließen keine Informationen an externe Systeme ohne ausdrückliche Zustimmung durch etablierte Governance-Verfahren.

    Transparenz: Gemeinschaften können überprüfen, was das Modell über sie weiß, wie es trainiert wurde und warum es bestimmte Ergebnisse liefert. Das KI-Gedächtnis ist keine Blackbox, sondern eine überprüfbare Aufzeichnung, die der Kontrolle der Gemeinschaft unterliegt.

    Philosophische Grundlegung: Das Modell wird unter ausdrücklicher Berücksichtigung der philosophischen Grundlagen entwickelt. Anstatt es nur auf seine Fähigkeiten hin zu optimieren und später Sicherheitsmaßnahmen hinzuzufügen, werden philosophische Einschränkungen bereits in den frühesten Phasen der Entwicklung berücksichtigt.

    Gemeinschaftliche Verwaltung: Jede Gemeinschaft konfiguriert das Verhalten ihres KI-Assistenten nach ihren eigenen Verfassungsprinzipien. Eine Gemeinschaft, die Direktheit schätzt, konfiguriert sich für Direktheit; eine Gemeinschaft, die Sanftheit schätzt, konfiguriert sich für Sanftheit. Die Plattform stellt die Infrastruktur zur Verfügung, die Gemeinschaften liefern die Werte.

    Stanford Encyclopedia of Philosophy als maßgebliche Referenz

    Für philosophische Konzepte haben wir die Stanford Encyclopedia of Philosophy (SEP) als die einzige maßgebliche Referenz festgelegt. Diese Entscheidung spiegelt sowohl die Qualität der SEP-Wissenschaft als auch die Verpflichtung zu intellektueller Strenge wider, die der Versuchung widersteht, komplexe philosophische Positionen als Ressourcen zu behandeln, die man für bequeme Zitate abbauen kann.

    Wenn der Trainingsprozess auf philosophische Begriffe stößt, werden Querverweise zu SEP-Einträgen gezogen. Wenn es mehrere Interpretationen gibt, hat die SEP-Analyse der Debatte den Vorrang. Wenn Benutzer philosophische Fragen stellen, basieren die Antworten auf SEP-Definitionen und nicht auf statistischen Mustern in den Trainingsdaten.

    Dies ist nicht nur eine Maßnahme zur Qualitätskontrolle, sondern eine grundlegende philosophische Verpflichtung: KI-Systeme, die sich mit philosophischen Konzepten befassen, sollten dies mit der gleichen Strenge tun, die von menschlichen Wissenschaftlern erwartet wird, indem sie die Komplexität anerkennen, anstatt sie zu verflachen, und Debatten darstellen, anstatt sie vorschnell aufzulösen.
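As an illustration of what "cross-references SEP entries" could mean in practice, here is a minimal lookup sketch; the index contents and the helper function are hypothetical, not the project's actual training pipeline:

```python
# Hypothetical cross-reference step: philosophical terms found in text
# are grounded against an index of SEP entries (illustrative only).
SEP_INDEX = {
    "epistemic autonomy": "https://plato.stanford.edu/entries/autonomy-moral/",
    "value pluralism": "https://plato.stanford.edu/entries/value-pluralism/",
}

def ground_terms(text: str) -> dict[str, str]:
    """Return the SEP entry URL for every indexed term appearing in text."""
    lowered = text.lower()
    return {term: url for term, url in SEP_INDEX.items() if term in lowered}

print(ground_terms("Berlin's value pluralism in practice"))
```

In this sketch, multiple matches are all returned rather than ranked, mirroring the stated commitment to representing debates instead of resolving them prematurely.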

    -

    Weisheitstraditionen als Schicht-3-Anpassung

    Neben den strukturellen philosophischen Grundlagen (Ebene 1) und den konstitutionellen Grundsätzen der Gemeinschaft (Ebene 2) bieten wir ein System annehmbarer Weisheitstraditionen, die Einfluss darauf haben, wie die KI-Hilfe gestaltet und geleistet wird (Ebene 3). Es ist wichtig zu verstehen, was diese Ebene tut und was nicht.

    Was Ebene 3 beeinflusst: Kommunikationsstil, Formulierung, Sprachwahl, Vorschläge für das Tempo. Angenommene Traditionen prägen die Art und Weise, wie Home AI mit Ihnen kommuniziert.

    Was Layer 3 nicht beeinflusst: Inhaltliche Entscheidungen, Datenzugang, Durchsetzung der Governance. Angenommene Traditionen bestimmen nicht, was das System tun darf. Sie sind Tendenzen, keine Regeln, und können in jeder spezifischen Situation außer Kraft gesetzt werden.

    Dreizehn Traditionen wurden in der Stanford Encyclopedia of Philosophy mit wissenschaftlicher Bestätigung dokumentiert, darunter auch diese:

    +

    Weisheitstraditionen als Schicht-3-Anpassung

    Neben den strukturellen philosophischen Grundlagen (Ebene 1) und den konstitutionellen Grundsätzen der Gemeinschaft (Ebene 2) bieten wir ein System annehmbarer Weisheitstraditionen, die Einfluss darauf haben, wie die KI-Hilfe gestaltet und geleistet wird (Ebene 3). Es ist wichtig zu verstehen, was diese Ebene tut und was nicht.

    Was Ebene 3 beeinflusst: Kommunikationsstil, Formulierung, Sprachwahl, Vorschläge für das Tempo. Angenommene Traditionen prägen die Art und Weise, wie Village AI mit Ihnen kommuniziert.

    Was Ebene 3 nicht beeinflusst: Inhaltliche Entscheidungen, Datenzugang, Durchsetzung der Governance. Angenommene Traditionen bestimmen nicht, was das System tun darf. Sie sind Tendenzen, keine Regeln, und können in jeder spezifischen Situation außer Kraft gesetzt werden.

    Dreizehn Traditionen wurden in der Stanford Encyclopedia of Philosophy mit wissenschaftlicher Bestätigung dokumentiert, darunter auch diese:

    • Simone Weil - ihr Konzept der Aufmerksamkeit als rezeptive Auseinandersetzung mit dem Leiden beeinflusst Optionen wie "Nimm dir Zeit" für Trauerinhalte und Widerstand gegen die Komprimierung von Verlust in Zusammenfassungen
    • Stoizismus - Betonung der Unterscheidung zwischen dem, was man kontrollieren kann, und dem, was man nicht kontrollieren kann
    • Ethik in der Pflege - Betonung von Beziehungen, Verletzlichkeit und kontextbezogener Beurteilung
    • Konfuzianische Ethik - Betonung von Beziehungsrollen und sozialer Harmonie
    • Buddhistische Ethik - Betonung von Unbeständigkeit, gegenseitiger Abhängigkeit und der Beendigung des Leidens
    • Ubuntu - Betonung der gemeinschaftlichen Identität und der gegenseitigen Verpflichtung ("die Geschichte unserer Familie" und nicht "deine Geschichte")
    • Jüdische Ethik - mit Schwerpunkt auf Tikkun (Wiedergutmachung) und Tzedakah (gerechtes Spenden)
    • Islamische Ethik - Betonung von Barmherzigkeit, Gerechtigkeit und Unterwerfung unter transzendente Prinzipien
    • Indigener/Maori-Rahmen - Betonung von whakapapa (genealogische Verbindung) und verwandtschaftlichen Verpflichtungen

    Gemeinschaften und Einzelpersonen können Traditionen übernehmen, die mit ihren Werten übereinstimmen. Diese Übernahmen haben Einfluss darauf, wie AI-Hilfe gestaltet wird - welche Erwägungen im Vordergrund stehen, welche Sprache verwendet wird, welche Optionen angeboten werden -, ohne dass die in Ebene 1 festgelegten strukturellen Schutzmaßnahmen oder die in Ebene 2 festgelegten verfassungsrechtlichen Vorschriften außer Kraft gesetzt werden.

    Wenn Traditionen unterschiedliche Ansätze vorschlagen (was manchmal der Fall ist - der stoische Gleichmut kann in Spannung zu Weils Aufmerksamkeit für das Leiden stehen), bringt das System die Spannung an die Oberfläche, anstatt sie algorithmisch aufzulösen, und lädt den Menschen zum Nachdenken darüber ein, was die Situation erfordert. Das ist der Berliner Wertepluralismus in der Praxis: legitime Werte stehen in einem echten Konflikt, und das System maßt sich nicht an, diesen Konflikt für Sie zu lösen.
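The layer separation described above (adopted traditions shape framing, never permissions, and conflicting traditions are surfaced rather than resolved) can be sketched in a few lines. All names here are hypothetical illustrations, not the platform's API:

```python
# Hypothetical three-layer resolution sketch (illustrative names only).
# Layers 1/2 decide *whether* an action is permitted; adopted Layer-3
# traditions only shape *how* a permitted response is framed.
LAYER1_PROTECTIONS = {"share_data_externally": False}   # structural, fixed
LAYER2_CONSTITUTION = {"tone": "direct"}                # community-configured
LAYER3_TRADITIONS = ["stoicism", "weil_attention"]      # adoptable styles

def is_permitted(action: str) -> bool:
    # Layer 3 never appears here: traditions cannot grant permissions.
    return LAYER1_PROTECTIONS.get(action, True)

def framing_hints(topic: str) -> list[str]:
    """Collect style hints; conflicting hints are surfaced together,
    not resolved algorithmically."""
    hints = []
    if "weil_attention" in LAYER3_TRADITIONS and topic == "grief":
        hints.append("offer a 'take your time' option; avoid summarizing loss")
    if "stoicism" in LAYER3_TRADITIONS:
        hints.append("distinguish what is and is not in the user's control")
    return hints

print(is_permitted("share_data_externally"), framing_hints("grief"))
```

Note that when both traditions fire on the same topic, the function returns both hints side by side, leaving the choice to the human, as the paragraph above requires.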

    In die Ausbildung eingebettete Governance

    Gemäß dem Alexander-Prinzip der Untrennbarkeit betten wir die Governance in den Trainingsprozess selbst ein, anstatt sie als Post-hoc-Filter anzuwenden. Die Ausbildungsschleife umfasst:

    diff --git a/public/downloads/philosophical-foundations-village-project-fr.html b/public/downloads/philosophical-foundations-village-project-fr.html
    index b35dc214..d8ad79fd 100644
    --- a/public/downloads/philosophical-foundations-village-project-fr.html
    +++ b/public/downloads/philosophical-foundations-village-project-fr.html
    @@ -33,9 +33,9 @@

    Il ne s'agit pas de problèmes techniques susceptibles d'être résolus par des solutions techniques. Il s'agit de problèmes philosophiques concernant le type d'êtres que nous voulons être et le type de société que nous voulons habiter.

    L'érosion de l'autonomie épistémique

    Le plus préoccupant est peut-être l'érosion de ce que l'on pourrait appeler l'autonomie épistémique : la capacité de se forger des convictions par son propre raisonnement plutôt que d'accepter des conclusions fournies par des systèmes que l'on ne comprend pas. Lorsqu'un système d'IA produit une réponse, la plupart des utilisateurs ne peuvent pas évaluer le raisonnement qui l'a produite. Ils doivent faire confiance ou se méfier en se basant sur les antécédents et la réputation, des critères faciles à manipuler.

    Il s'agit d'un changement qualitatif dans la relation de l'homme à la connaissance. Les technologies précédentes - livres, bibliothèques, moteurs de recherche - ont renforcé la capacité humaine à trouver et à évaluer l'information. Les systèmes d'IA actuels se substituent de plus en plus à cette capacité, livrant des conclusions plutôt que des preuves, des réponses plutôt que des arguments.

    La conséquence à long terme pourrait être une population qui a externalisé non seulement la recherche d'informations mais aussi le jugement lui-même - capable de poser des questions mais pas d'évaluer les réponses, dépendante de systèmes dont elle ne peut inspecter le fonctionnement et dont elle ne peut interroger les valeurs.

    IV. Une approche philosophique du développement de l'IA

    -

    Le concept d'IA domestique

    En réponse à ces défis, nous développons ce que nous appelons l'"IA domestique" - un petit modèle linguistique formé localement (SLL) qui fonctionne sous la gouvernance de la communauté sur du matériel contrôlé par l'utilisateur. Les caractéristiques distinctives sont les suivantes :

    Souveraineté : Le modèle fonctionne sur du matériel appartenant à la communauté ou contrôlé par elle. Les données relatives à la formation restent locales. Aucune information ne circule vers des systèmes externes sans l'accord explicite des procédures de gouvernance établies.

    Transparence : Les communautés peuvent vérifier ce que le modèle sait d'elles, comment il a été formé et pourquoi il produit des résultats particuliers. La mémoire de l'IA n'est pas une boîte noire, mais un enregistrement vérifiable soumis à la gouvernance de la communauté.

    Fondement philosophique : Le modèle est formé en accordant une attention explicite aux fondements philosophiques. Plutôt que d'optimiser purement la capacité et d'ajouter des mesures de sécurité par la suite, nous intégrons des contraintes philosophiques dès les premières étapes du développement.

    Gouvernance communautaire : Chaque communauté configure le comportement de son assistant IA en fonction de ses propres principes constitutionnels. Une communauté qui privilégie la franchise configure la franchise ; une communauté qui privilégie la douceur configure la douceur. La plateforme fournit l'infrastructure ; les communautés fournissent les valeurs.

    +

    Le concept de Village AI

    En réponse à ces défis, nous développons ce que nous appelons la "Village AI" - un petit modèle linguistique formé localement (SLL) qui fonctionne sous la gouvernance de la communauté sur du matériel contrôlé par l'utilisateur. Les caractéristiques distinctives sont les suivantes :

    Souveraineté : Le modèle fonctionne sur du matériel appartenant à la communauté ou contrôlé par elle. Les données relatives à la formation restent locales. Aucune information ne circule vers des systèmes externes sans l'accord explicite des procédures de gouvernance établies.

    Transparence : Les communautés peuvent vérifier ce que le modèle sait d'elles, comment il a été formé et pourquoi il produit des résultats particuliers. La mémoire de l'IA n'est pas une boîte noire, mais un enregistrement vérifiable soumis à la gouvernance de la communauté.

    Fondement philosophique : Le modèle est formé en accordant une attention explicite aux fondements philosophiques. Plutôt que d'optimiser purement la capacité et d'ajouter des mesures de sécurité par la suite, nous intégrons des contraintes philosophiques dès les premières étapes du développement.

    Gouvernance communautaire : Chaque communauté configure le comportement de son assistant IA en fonction de ses propres principes constitutionnels. Une communauté qui privilégie la franchise configure la franchise ; une communauté qui privilégie la douceur configure la douceur. La plateforme fournit l'infrastructure ; les communautés fournissent les valeurs.

    L'encyclopédie de philosophie de Stanford, une référence qui fait autorité

    Pour les concepts philosophiques, nous avons établi la Stanford Encyclopedia of Philosophy (SEP) comme référence unique faisant autorité. Cette décision reflète à la fois la qualité de l'érudition de la SEP et un engagement en faveur de la rigueur intellectuelle qui résiste à la tentation de traiter des positions philosophiques complexes comme des ressources à exploiter pour obtenir des citations commodes.

    Lorsque le processus de formation rencontre des termes philosophiques, il se réfère aux entrées du SEP. En cas d'interprétations multiples, c'est l'analyse du débat par le SEP qui prime. Lorsque les utilisateurs posent des questions philosophiques, les réponses sont fondées sur les définitions du SEP plutôt que générées à partir de modèles statistiques dans les données de formation.

    Il ne s'agit pas simplement d'une mesure de contrôle de la qualité, mais d'un engagement philosophique de fond : les systèmes d'IA qui s'intéressent aux concepts philosophiques doivent le faire avec la même rigueur que celle attendue des chercheurs humains, en reconnaissant la complexité plutôt qu'en l'aplatissant, en représentant les débats plutôt qu'en les résolvant de manière prématurée.

    -

    Les traditions de sagesse, une personnalisation de niveau 3

    Au-delà des fondements philosophiques structurels (couche 1) et des principes constitutionnels communautaires (couche 2), nous fournissons un système de traditions de sagesse adoptables qui influencent la manière dont l'assistance à l'IA est encadrée et fournie (couche 3). Il est essentiel de comprendre ce que cette couche fait et ne fait pas.

    Les effets de la couche 3 : Le style de communication, le cadrage, les choix linguistiques, les suggestions de rythme. Les traditions adoptées façonnent la façon dont l'IA domestique communique avec vous.

    Ce que la couche 3 n'affecte pas : Les décisions relatives au contenu, l'accès aux données, l'application de la gouvernance. Les traditions adoptées ne contrôlent pas ce que le système est autorisé à faire. Il s'agit de tendances, et non de règles, qui peuvent toujours être ignorées dans une situation donnée.

    Treize traditions ont été documentées et validées par l'encyclopédie de philosophie de Stanford :

    +

    Les traditions de sagesse, une personnalisation de niveau 3

    Au-delà des fondements philosophiques structurels (couche 1) et des principes constitutionnels communautaires (couche 2), nous fournissons un système de traditions de sagesse adoptables qui influencent la manière dont l'assistance à l'IA est encadrée et fournie (couche 3). Il est essentiel de comprendre ce que cette couche fait et ne fait pas.

    Les effets de la couche 3 : Le style de communication, le cadrage, les choix linguistiques, les suggestions de rythme. Les traditions adoptées façonnent la façon dont la Village AI communique avec vous.

    Ce que la couche 3 n'affecte pas : Les décisions relatives au contenu, l'accès aux données, l'application de la gouvernance. Les traditions adoptées ne contrôlent pas ce que le système est autorisé à faire. Il s'agit de tendances, et non de règles, qui peuvent toujours être ignorées dans une situation donnée.

    Treize traditions ont été documentées et validées par l'encyclopédie de philosophie de Stanford :

    • Simone Weil - son concept d'attention en tant qu'engagement réceptif à la souffrance influence les options telles que "prendre son temps" pour le contenu du deuil et la résistance à la compression de la perte dans des résumés.
    • Le stoïcisme - qui met l'accent sur la distinction entre ce qui est et ce qui n'est pas du ressort de l'individu.
    • L'éthique des soins - qui met l'accent sur les relations, la vulnérabilité et le jugement contextuel
    • L'éthique confucéenne - qui met l'accent sur les rôles relationnels et l'harmonie sociale
    • L'éthique bouddhiste - qui met l'accent sur l'impermanence, l'interdépendance et la cessation de la souffrance.
    • Ubuntu - souligne l'identité communautaire et l'obligation mutuelle ("l'histoire de notre famille" plutôt que "votre histoire")
    • Éthique juive - accent mis sur le tikkun (réparation) et le tzedakah (don vertueux)
    • L'éthique islamique - qui met l'accent sur la miséricorde, la justice et la soumission à des principes transcendants.
    • Cadres indigènes/maoris - mettant l'accent sur le whakapapa (lien généalogique) et les obligations de parenté

    Les communautés et les individus peuvent adopter des traditions qui correspondent à leurs valeurs. Ces adoptions influencent la manière dont l'assistance à l'IA est encadrée - quelles considérations sont mises en avant, quel langage est utilisé, quelles options sont proposées - sans pour autant supplanter les protections structurelles établies à la couche 1 ou les règles constitutionnelles établies à la couche 2.

    Lorsque les traditions suggèrent des approches différentes (comme c'est parfois le cas - l'équanimité stoïcienne peut entrer en tension avec l'attention portée à l'affliction par Weil), le système fait apparaître la tension plutôt que de la résoudre de manière algorithmique, invitant l'être humain à réfléchir à ce que la situation exige. C'est le pluralisme des valeurs de Berlin dans la pratique : les valeurs légitimes entrent véritablement en conflit, et le système ne prétend pas résoudre ce conflit à votre place.

    La gouvernance intégrée à la formation

    Conformément au principe de non-séparation d'Alexander, nous intégrons la gouvernance dans le processus de formation lui-même plutôt que de l'appliquer comme un filtre a posteriori. La boucle de formation comprend :

    diff --git a/public/downloads/philosophical-foundations-village-project-mi.html b/public/downloads/philosophical-foundations-village-project-mi.html
    index f284c158..c406e518 100644
    --- a/public/downloads/philosophical-foundations-village-project-mi.html
    +++ b/public/downloads/philosophical-foundations-village-project-mi.html
    @@ -33,7 +33,7 @@

    Ehara ēnei i ngā raruraru hangarau ka taea te whakatau mā ngā rongoā hangarau. He raruraru arorau ēnei mō te momo tangata e hiahia ana mātou kia noho ai, me te momo hapori e hiahia ana mātou kia noho ai.

    Te ānini o te rangatōpū mātauranga

    Tērā pea ko te mea tino āwangawanga ko te pakupaku haere o tērā e kīia nei ko te rangatiratanga mō te mātauranga: arā, te āheinga ki te hanga whakapono mā tō ake whakaaro, kaua e whakaae ki ngā whakatau i tukuna mai e ngā pūnaha kāore koe e mārama ana. Ina whakaputa he whakautu tētahi pūnaha AI, kāore te nuinga o ngā kaiwhakamahi e āhei ki te aromātai i te aronga i whakaputa ai. Me whakawhirinaki rānei, me whakawhirinaki-kore rānei rātou i runga i ngā hua o mua me te ingoa—he tikanga whakatau māmā ka taea te tinihanga.

    E tohu ana tēnei i tētahi panoni āhuatanga i roto i te hononga a te tangata ki te mātauranga. Ko ngā hangarau o mua—ngā pukapuka, ngā whare pukapuka, ngā miihini rapu—i whakapakari ake i te āheinga a te tangata ki te rapu me te aromātai i ngā pārongo. Kei te kaha ake ngā pūnaha AI o nāianei ki te whakakapi i taua āheinga, e tuku ana i ngā whakatau hei utu mō ngā taunakitanga, i ngā whakautu hei utu mō ngā tautohetohe.

    Ko te hua mō te wā roa pea he taupori kua tuku atu i waho, ehara i te kimi pārongo anake, engari tae noa ki te whakatau anō—e āhei ana ki te pātai, engari kāore e āhei ki te aromātai i ngā whakautu, ā, e whakawhirinaki ana ki ngā pūnaha kāore rātou e āhei ki te tirotiro i ā rātou mahi, kāore hoki e āhei ki te uiui i ā rātou uara.

    IV. He huarahi tuatahi ki te whakawhanake i te AI mā te ariā

    -

    Te ariā o te AI ā-whare

    Hei whakautu ki ēnei wero, kei te whakawhanake mātou i tā mātou e kī nei ko "Home AI" — he tauira reo iti kua whakangungua ā-rohe (SLL) e whakahaerehia ana i raro i te mana whakahaere ā-hapori i runga i ngā taputapu e whakahaerehia ana e te kaiwhakamahi. Ko ngā āhuatanga motuhake:

    Te Rangatiratanga: Ka whakahaerehia te tauira i runga i ngā taputapu rorohiko e puritia ana, e whakahaerehia ana rānei e te hapori. Ka noho ngā raraunga whakangungu ki te rohe. Kāore he pārongo e rere ki ngā pūnaha o waho mehemea kāore he whakaaetanga mārama i runga i ngā tikanga whakahaere kua whakatūria.

    Te mārama: Ka taea e ngā hapori te tirotiro i ngā mōhiohio e mōhio ana te tauira mō rātou, me pēhea i whakangungua ai, me te take i whakaputa ai i ētahi hua motuhake. Ehara te mahara AI i te pouaka pango, engari he rēhita ka taea te arotake, ā, kei raro i te mana whakahaere a te hapori.

    Tūāpapa Arorangi: Ka whakangungua te tauira me te aro mārama ki ngā tūāpapa arorangi. Kāore e whakapai noa ana mō te āheinga, ā, ka tāpiri i ngā tikanga haumaru i muri mai; engari ka whakauru mātou i ngā here arorangi mai i ngā wā tuatahi o te whakawhanaketanga.

    Whakahaere Hapori: Ka whakarite ia hapori i te āhua o te whanonga o tana kaiāwhina AI kia hāngai ki āna ake mātāpono ture. He hapori e whakanuia ana te tika, ka whakarite kia tika; he hapori e whakanuia ana te māhaki, ka whakarite kia māhaki. Ka whakarato te tūāpapa i te hanganga; ko ngā hapori e whakarato ana i ngā uara.

    +

    Te ariā o te Village AI

    Hei whakautu ki ēnei wero, kei te whakawhanake mātou i tā mātou e kī nei ko "Village AI" — he tauira reo iti kua whakangungua ā-rohe (SLL) e whakahaerehia ana i raro i te mana whakahaere ā-hapori i runga i ngā taputapu e whakahaerehia ana e te kaiwhakamahi. Ko ngā āhuatanga motuhake:

    Te Rangatiratanga: Ka whakahaerehia te tauira i runga i ngā taputapu rorohiko e puritia ana, e whakahaerehia ana rānei e te hapori. Ka noho ngā raraunga whakangungu ki te rohe. Kāore he pārongo e rere ki ngā pūnaha o waho mehemea kāore he whakaaetanga mārama i runga i ngā tikanga whakahaere kua whakatūria.

    Te mārama: Ka taea e ngā hapori te tirotiro i ngā mōhiohio e mōhio ana te tauira mō rātou, me pēhea i whakangungua ai, me te take i whakaputa ai i ētahi hua motuhake. Ehara te mahara AI i te pouaka pango, engari he rēhita ka taea te arotake, ā, kei raro i te mana whakahaere a te hapori.

    Tūāpapa Arorangi: Ka whakangungua te tauira me te aro mārama ki ngā tūāpapa arorangi. Kāore e whakapai noa ana mō te āheinga, ā, ka tāpiri i ngā tikanga haumaru i muri mai; engari ka whakauru mātou i ngā here arorangi mai i ngā wā tuatahi o te whakawhanaketanga.

    Whakahaere Hapori: Ka whakarite ia hapori i te āhua o te whanonga o tana kaiāwhina AI kia hāngai ki āna ake mātāpono ture. He hapori e whakanuia ana te tika, ka whakarite kia tika; he hapori e whakanuia ana te māhaki, ka whakarite kia māhaki. Ka whakarato te tūāpapa i te hanganga; ko ngā hapori e whakarato ana i ngā uara.

    Te Pukapuka Whakamārama o Stanford mō te Mātauranga Hinengaro hei Puna Whakawhirinaki

    Mō ngā ariā ā-tātai, kua whakatūria e mātou te Pukapuka Whakamārama o Stanford mō te Tātaritanga (SEP) hei puna mōhiohio kotahi, whai mana. E whakaata ana tēnei whakataunga i te kounga o ngā rangahau a SEP me tōna ū ki te pakari hinengaro, e aukati ana i te hiahia ki te whakamahi i ngā tūnga ā-tātai matatini hei puna kōrero mō ngā rerenga kupu māmā.

    Ka tutaki te tukanga whakangungu ki ngā kupu arorangi, ka whakawhiti-tuhituhi ki ngā tomokanga o te SEP. Ki te maha ngā whakamāramatanga, ko te tātaritanga a te SEP mō te wero e whai mana ana. Ka pātai ngā kaiwhakamahi i ngā pātai arorangi, ka ū ngā whakautu ki ngā whakamārama a te SEP, ehara i te mea i whakaputaina mā ngā tauira tatauranga o ngā raraunga whakangungu.

    Ehara tēnei i te tikanga whakahaere kounga anake, engari he ū ki te ariā hōhonu: me whai ngā pūnaha AI e mahi ana ki ngā ariā ā-hinengaro i taua kaha tonu e tūmanakohia ana i ngā kairangahau tangata, e whakaae ana ki te matatini, kaua e whakaiti i tōna hōhonutanga, e whakaatu ana i ngā wero kōrero, kaua e whakatau wawe i a rātou.

    Ngā Tikanga Mātauranga hei Whakarite ā-Wāhanga Tuatoru

    I tua atu i ngā tūāpapa arorau hanganga (Wāhanga 1) me ngā mātāpono ture ā-hapori (Wāhanga 2), ka whakarato mātou i tētahi pūnaha o ngā tikanga mātauranga ka taea te whakamahi, e whai pānga ana ki te āhua me te tuku o te āwhina AI (Wāhanga 3). He mea nui kia mārama ki ngā mahi a tēnei wāhanga me ngā mea kāore e mahia e ia.

    Ngā mea e pāngia ana e te Papanga Tuatoru: te āhua whakawhitiwhiti kōrero, te hanganga kōrero, ngā kōwhiringa reo, ngā tūtohutanga mō te tere kōrero. Ko ngā tikanga kua whakaaetia e hanga ana i te āhua o te whakawhitiwhiti kōrero a te AI o te kāinga ki a koe.

    Ngā mea kāore e pāngia e Layer 3: ngā whakataunga ihirangi, te uru raraunga, te whakatinanatanga whakahaere. Ehara i te mea ka whakahaere ngā tikanga kua whakaaetia i ngā mahi e whakaaetia ana e te pūnaha. He āhua ēnei, ehara i ngā ture, ā, ka taea tonu te whakakore i ēnei i ia āhuatanga motuhake.

    Kua tuhia ngā tikanga tekau mā toru me te whakau mātauranga e hāngai ana ki te Pukapuka Pūtaiao Hinengaro o Stanford, arā:

    • Simone Weil — ko tana ariā mō te aro, hei whakauru whakarongo ki te mamae, e whakaawe ana i ngā kōwhiringa pērā i te "kia āta haere" mō ngā ihirangi pōuri, me te ārai i te whakamāheke i te ngaronga ki roto i ngā whakarāpopototanga.
    • Te tū pakari — e whakakaha ana i te wehewehe i waenga i ngā mea kei raro i tō mana me ngā mea kāore
    • Ngā Tikanga Manaaki — e aro nui ana ki ngā hononga, ki te ngoikore, me te whakataunga i runga i te horopaki
    • Ngā Tikanga a Confucius — e whakakaha ana i ngā tūranga hononga me te kotahitanga ā-hapori
    • Ngā Tikanga Buddhist — e whakakaha ana i te kore mau tonu, te whakawhirinaki ā-iwi, me te whakamutu i te mamae
    • Ubuntu — e whakakaha ana i te tuakiri ā-hapori me te haepapa ā-tāngata ("te pūrākau o tō tātou whānau" kaua ko "tō pūrākau")
    • Ngā Tikanga a te Hāhi Iutaia — e whakakaha ana i te tikkun (te whakatika) me te tzedakah (te hoatu tika)
    • Ngā Tikanga o te Islam — e whakakaha ana i te atawhai, te tika, me te tuku ki ngā mātāpono whakaharahara
    • Ngā anga taketake/Māori — e whakakaha ana i te whakapapa (hononga whakapapa) me ngā herenga whanaungatanga
    diff --git a/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-de.html b/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-de.html
    index 0c399784..c9108054 100644
    --- a/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-de.html
    +++ b/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-de.html
    @@ -4,7 +4,7 @@
    Dieses Dokument wurde in Zusammenarbeit zwischen Mensch und KI entwickelt. Die Autoren sind der Ansicht, dass dieser kollaborative Prozess selbst für das Argument relevant ist: Wenn Menschen und KI-Systeme zusammenarbeiten können, um über KI-Governance nachzudenken, können die von ihnen geschaffenen Rahmenwerke eine Legitimität haben, die keiner von ihnen allein erreichen könnte.

    Zusammenfassung

    -

    In diesem Beitrag wird untersucht, ob eine Klasse von Verzerrungen in großen Sprachmodellen auf einer Sub-Reasoning- und Repräsentationsebene analog zur motorischen Automatik in der menschlichen Kognition abläuft, und ob Steuerungsvektortechniken auf dieser Ebene während der Inferenz eingreifen können. Wir unterscheiden zwischen mechanischer Verzerrung (statistische Muster, die auf der Ebene der Einbettung und der frühen Repräsentationsebene auftreten, bevor die bewusste Verarbeitung beginnt) und Überlegungsverzerrung (Verzerrungen, die durch eine mehrstufige Denkkette entstehen). Auf der Grundlage empirischer Arbeiten in den Bereichen Contrastive Activation Addition (CAA), Representation Engineering (RepE), FairSteer, Direct Steering Optimization (DSO) und Anthropic's sparse autoencoder feature steering bewerten wir die Reife der einzelnen Techniken und ihre Anwendbarkeit auf souveräne kleine Sprachmodelle (SLMs), die lokal trainiert und bedient werden. Wir stellen fest, dass souveräne SLM-Einsätze, insbesondere die Village Home AI-Plattform, die QLoRA-abgestimmte Llama 3.1/3.2-Modelle verwendet, einen strukturellen Vorteil gegenüber API-vermittelten Einsätzen haben: Der vollständige Zugriff auf Modellgewichte und -aktivierungen ermöglicht die Extraktion, Injektion und Auswertung von Steuerungsvektoren, was über kommerzielle API-Endpunkte nicht verfügbar ist. Wir schlagen einen vierstufigen Implementierungspfad vor, der Lenkungsvektoren in die bestehende zweistufige Trainingsarchitektur und das Tractatus Governance Framework integriert.

    +

    In diesem Beitrag wird untersucht, ob eine Klasse von Verzerrungen in großen Sprachmodellen auf einer Sub-Reasoning- und Repräsentationsebene analog zur motorischen Automatik in der menschlichen Kognition abläuft, und ob Steuerungsvektortechniken auf dieser Ebene während der Inferenz eingreifen können. Wir unterscheiden zwischen mechanischer Verzerrung (statistische Muster, die auf der Ebene der Einbettung und der frühen Repräsentationsebene auftreten, bevor die bewusste Verarbeitung beginnt) und Überlegungsverzerrung (Verzerrungen, die durch eine mehrstufige Denkkette entstehen). Auf der Grundlage empirischer Arbeiten in den Bereichen Contrastive Activation Addition (CAA), Representation Engineering (RepE), FairSteer, Direct Steering Optimization (DSO) und Anthropic's sparse autoencoder feature steering bewerten wir die Reife der einzelnen Techniken und ihre Anwendbarkeit auf souveräne kleine Sprachmodelle (SLMs), die lokal trainiert und bedient werden. Wir stellen fest, dass souveräne SLM-Einsätze, insbesondere die Village AI-Plattform, die QLoRA-abgestimmte Llama 3.1/3.2-Modelle verwendet, einen strukturellen Vorteil gegenüber API-vermittelten Einsätzen haben: Der vollständige Zugriff auf Modellgewichte und -aktivierungen ermöglicht die Extraktion, Injektion und Auswertung von Steuerungsvektoren, was über kommerzielle API-Endpunkte nicht verfügbar ist. Wir schlagen einen vierstufigen Implementierungspfad vor, der Lenkungsvektoren in die bestehende zweistufige Trainingsarchitektur und das Tractatus Governance Framework integriert.
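The abstract names Contrastive Activation Addition (CAA) as one of the techniques enabled by full access to model activations. A minimal sketch of the underlying vector arithmetic, using synthetic numpy activations rather than real model internals (this is not the platform's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy hidden-state dimensionality

# Synthetic layer activations for contrastive prompt pairs:
# one cluster ending in the "biased" completion, one in the neutral one.
acts_pos = rng.normal(0.0, 1.0, size=(32, d)) + 2.0  # shifted cluster
acts_neg = rng.normal(0.0, 1.0, size=(32, d))

# CAA: the steering vector is the mean activation difference.
steering_vec = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)

def steer(hidden_state: np.ndarray, alpha: float = -1.0) -> np.ndarray:
    """Inject the steering vector at inference time.

    A negative alpha pushes the representation away from the 'biased'
    direction; the hidden state is otherwise untouched.
    """
    return hidden_state + alpha * steering_vec

h = acts_pos[0]
h_steered = steer(h, alpha=-1.0)

# The projection onto the steering direction shrinks after injection.
unit = steering_vec / np.linalg.norm(steering_vec)
print(float(h @ unit) > float(h_steered @ unit))  # prints: True
```

In a sovereign deployment, the same arithmetic would run on activations captured via forward hooks at a chosen layer, which is exactly the access a commercial API endpoint does not expose.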


    1. Einführung: Das Blinker-Wischer-Problem

    1.1 Eine Motor-Analogie

    diff --git a/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-fr.html b/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-fr.html
    index ccd2ff59..fc9defce 100644
    --- a/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-fr.html
    +++ b/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-fr.html
    @@ -89,7 +89,7 @@

    4.2 La plateforme d'IA du Village Home

    -

    Le système d'IA domestique de la plateforme Village (Stroh, 2025-2026) est conçu comme un déploiement souverain de petits modèles de langage (SLM) avec l'architecture suivante :

    +

    Le système Village AI de la plateforme Village (Stroh, 2025-2026) est conçu comme un déploiement souverain de petits modèles de langage (SLM) avec l'architecture suivante :

    • Modèle de base: Llama 3.1 8B (base de la plateforme de niveau 1) / Llama 3.2 3B (adaptateurs par locataire de niveau 2)
    • Méthode de réglage fin: QLoRA (Adaptation quantifiée de faible rang à 4 bits)
    • Cadence de formation: Cycles de recyclage hebdomadaires
    diff --git a/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-mi.html b/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-mi.html
    index a53c7ea8..53799802 100644
    --- a/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-mi.html
    +++ b/public/downloads/steering-vectors-mechanical-bias-sovereign-ai-mi.html
    @@ -3,7 +3,7 @@
      Putanga Rangahau Akoranga

      Ngā Wīra Arataki me te Whakawhē Mekanika: Te Whakakore Whakawhē i te Wā Whakamātau mō ngā Tauira Reo Iti Motuhake

      Te Whakatikatika i ngā Whakaaro i te Wā Whakatau mō ngā Tauira Reo Iti Motuhake

      John Stroh & Claude (Anthropic)

      STO-RES-0009 | Version: 1.1 | February 2026

      Tractatus AI Safety Framework

      https://agenticgovernance.digital

      I hangaia tenei tuhinga i runga i te mahi tahi a te tangata me te AI. E whakapono ana nga kaituhi ko tenei tukanga mahi tahi he mea whai take ki te tohe: ki te taea e te tangata me nga punaha AI te mahi tahi ki te whakaaro mo te mana whakahaere AI, ka whai mana nga anga ka hangaia e ratou, he mana kaore e taea e tetahi o ratou anake.
      -

      Whakarāpopototanga

      E rangahau ana tēnei pepa mēnā he momo hē i roto i ngā tauira reo nui e mahi ana i tētahi taumata raro iho i te whakaaro, he taumata whakaaturanga e ōrite ana ki te aunoatanga nekehanga i roto i te māramatanga tangata, ā, mēnā ka taea e ngā tikanga arataki pūwāhi te uru ki tēnei taumata i te wā e whakahaere ana i te whakamātau. Ka wehewehea e mātou te pōraruraru mīhini (ngā tauira tauanga e whakahohe ana i te taumata whakaurunga me te whakaaturanga o ngā paparanga tuatahi i mua i te tīmatanga o te tukatuka whakaaroaro) me te pōraruraru whakaaroaro (ngā whakarerekētanga e puta ake ana i roto i te whakaaroaro mekameka-whakaaro maha-hipanga). Mā te whakamahi i ngā mahi rangahau tūturu i roto i te Contrastive Activation Addition (CAA), Representation Engineering (RepE), FairSteer, Direct Steering Optimization (DSO), me te ārahitanga āhuatanga o te sparse autoencoder a Anthropic, ka aromātai mātou i te pakeke o ia tikanga me tōna whai wāhi ki ngā tauira reo iti rangatira (SLMs) kua whakangungua, kua whakaratohia hoki i te rohe. Ka kitea e mātou he painga hanganga o ngā whakaurunga SLM rangatira, arā ko te papanga Village Home AI e whakamahi ana i ngā tauira Llama 3.1/3.2 kua whakangāwarihia mā QLoRA, ki ngā whakaurunga mā te API: mā te whai wāhi katoa ki ngā taumaha me ngā whakaoho o te tauira ka taea te tango, te whakauru, me te aromātai i ngā pūwāhi whakatere, ā, kāore e taea tēnei i roto i ngā tauranga mutunga API hokohoko i runga i te hanganga. Ka tūtohu mātou i tētahi ara whakatinana e whā ngā wāhanga, e whakauru ana i ngā pūwāhi whakatere ki te hanganga whakangungu ā-papa e rua kua oti kē, me te anga whakahaere Tractatus.

      ## 1. Whakataki: Te Raruraru o te Tohu-Mopu

      ### 1.1 He Whakatairite Motokā

      +

      Whakarāpopototanga

      E rangahau ana tēnei pepa mēnā he momo hē i roto i ngā tauira reo nui e mahi ana i tētahi taumata raro iho i te whakaaro, he taumata whakaaturanga e ōrite ana ki te aunoatanga nekehanga i roto i te māramatanga tangata, ā, mēnā ka taea e ngā tikanga arataki pūwāhi te uru ki tēnei taumata i te wā e whakahaere ana i te whakamātau. Ka wehewehea e mātou te pōraruraru mīhini (ngā tauira tauanga e whakahohe ana i te taumata whakaurunga me te whakaaturanga o ngā paparanga tuatahi i mua i te tīmatanga o te tukatuka whakaaroaro) me te pōraruraru whakaaroaro (ngā whakarerekētanga e puta ake ana i roto i te whakaaroaro mekameka-whakaaro maha-hipanga). Mā te whakamahi i ngā mahi rangahau tūturu i roto i te Contrastive Activation Addition (CAA), Representation Engineering (RepE), FairSteer, Direct Steering Optimization (DSO), me te ārahitanga āhuatanga o te sparse autoencoder a Anthropic, ka aromātai mātou i te pakeke o ia tikanga me tōna whai wāhi ki ngā tauira reo iti rangatira (SLMs) kua whakangungua, kua whakaratohia hoki i te rohe. Ka kitea e mātou he painga hanganga o ngā whakaurunga SLM rangatira, arā ko te papanga Village AI e whakamahi ana i ngā tauira Llama 3.1/3.2 kua whakangāwarihia mā QLoRA, ki ngā whakaurunga mā te API: mā te whai wāhi katoa ki ngā taumaha me ngā whakaoho o te tauira ka taea te tango, te whakauru, me te aromātai i ngā pūwāhi whakatere, ā, kāore e taea tēnei i roto i ngā tauranga mutunga API hokohoko i runga i te hanganga. Ka tūtohu mātou i tētahi ara whakatinana e whā ngā wāhanga, e whakauru ana i ngā pūwāhi whakatere ki te hanganga whakangungu ā-papa e rua kua oti kē, me te anga whakahaere Tractatus.

      ## 1. Whakataki: Te Raruraru o te Tohu-Mopu

      ### 1.1 He Whakatairite Motokā

      He taraiwa e whakawhiti ana i waenga i ngā waka e rua – kotahi kei te taha matau o te pou taraiwa ngā mana tohu, ko tētahi kei te taha mauī – ka pā ki a ia tētahi hapa motuhake: i muri i te whakamahinga roa i tētahi waka, ka huri ki tētahi atu, ka puta te whakahohe aunoa o te mana hē. Ka tohu te taraiwa i te huringa, ā, ka whakahohe i ngā mīhini horoi karaihe o mua, hei huri rānei. E toru ngā āhuatanga o tēnei hapa e whai ana hei akoranga mō te tātaritanga hē o te AI:

      1. He mea i mua i te māramatanga. Kāore te taraiwa e whakaaroaro ana ko tēhea te tūtoki hei whakamahi. Ka whakahohe te tauira nekehanga i mua i te whai wāhi o te whakaaroaro ā-hinengaro. Hei whakatika, me whakakore i tētahi urupare kua whakangungua, ehara i te whakarerekē i tētahi whakatau.
      2. He whakawhirinaki ki te horopaki. Ka puta tēnei hapa i te wā whakawhiti i waenga i ngā waka. Whai muri i te whakamahinga nui o te whakaritenga hou, ka whakatikatika anō te tauira ā-miihini. Ehara i te mea mau tonu te hapa, engari he tino piri, ā, he uaua ki ngā tohutohu ā-waha ("kia maumahara, kei te taha mauī ngā tohu whakamārama").
      3. He rerekē tōna hanganga ki ngā hapa whakaaro. He taraiwa i hē te huarahi nā te pānui hē i tētahi mahere, kua mahia e ia he hē whakaaro. He taraiwa i whakahohe i ngā wīpera hei whakakapi i ngā tohu huarahi, kāore ia i whakaaro hē – kāore i whakamahia te tukanga whakaaro. Ka puta te hē i tētahi paparanga i raro iho i te whakaaroaro.

      ### 1.2 Te Kororāri AI

      Ka tūtohu mātou e tau ana tētahi wehewehenga ōrite i roto i ngā tauira reo e hangai ana ki te transformer. Ka puta ētahi hē i te tohatoha tatauranga o ngā raraunga whakangungu, ā, ka kitea i te taumata whakaaturanga — i roto i ngā whakaurunga tohu, ngā tauira aro, me ngā whakahohe i ngā paparanga tuatahi — i mua i te whakahohe i ngā pūkenga whakaaro maha-hipanga o te tauira. Ka puta ētahi atu mā ngā mekameka whakaaro, ā, ahakoa kāore ia hipanga e hē, ka puta he whakatau hē i te mekameka katoa. He mea nui tēnei wehewehenga nā te mea he tino rerekē ngā rautaki whakauru:
      @@ -42,7 +42,7 @@

      Tautuhinga tāpiri: Radhakrishnan, A., Beaglehole, D., Belkin, M., & Boix-Adserà, E. (2026). Te whakaatu i ngā hē, ngā āhua hinengaro, ngā āhuatanga whaiaro, me ngā ariā matatini e huna ana i roto i ngā tauira reo nui. Science. I whakaputaina i te 19 o Huitanguru 2026.

      -

      ### 4.2 Te Papanga AI Kāinga a The Village

      +

      ### 4.2 Te Papanga Village AI a The Village

      Ko te pūnaha AI Home o te papanga Village (Stroh, 2025-2026) i hangaia hei whakaurunga rangatira o tētahi tauira reo iti (SLM) me te hanganga e whai ake nei:

      • Tauira pūtake: Llama 3.1 8B (pūtake papanga Tīra 1) / Llama 3.2 3B (kaiwhakarite Tīra 2 mō ia kaipā)
      • Tikanga whakangāwari: QLoRA (Whakaurunga Tū-iti kua rahuitia ki te 4-bit)
      • Auau whakangungu: Huringa whakahou ia wiki
      • Hōputu whakangungu: Kohinga raraunga hanganga Alpaca/ShareGPT
      • Tūāpapa tuku: GPU ā-rohe (taumata kaiwhakamahi, 8–24GB VRAM)
      • Whakaurunga whakahaere: Ngā ratonga anga Tractatus (BoundaryEnforcer, MetacognitiveVerifier)
      • Haumarutanga: Ka whakamunatia, ka rokiroki motuhake ngā pūwāhi whakatere me ngā whakatikatika kua whakaritea ki te ahurea i ngā taumaha o te tauira pūtake, hei tiaki i ngā taonga whakahaere kia kore ai e tangohia, e whakarerekētia rānei e te hunga kāore i whakaaetia.

      diff --git a/public/downloads/steering-vectors-mechanical-bias-sovereign-ai.html b/public/downloads/steering-vectors-mechanical-bias-sovereign-ai.html index c7723d2a..f0ff9731 100644 --- a/public/downloads/steering-vectors-mechanical-bias-sovereign-ai.html +++ b/public/downloads/steering-vectors-mechanical-bias-sovereign-ai.html @@ -5,7 +5,7 @@
      This document was developed through human-AI collaboration. The authors believe this collaborative process is itself relevant to the argument: if humans and AI systems can work together to reason about AI governance, the frameworks they create may carry a legitimacy that neither could achieve alone.

      Abstract

      -

      This paper investigates whether a class of biases in large language models operates at a sub-reasoning, representational level analogous to motor automaticity in human cognition, and whether steering vector techniques can intervene at this level during inference. We distinguish between mechanical bias (statistical patterns that fire at the embedding and early-layer representation level before deliberative processing begins) and reasoning bias (distortions that emerge through multi-step chain-of-thought reasoning). Drawing on empirical work in Contrastive Activation Addition (CAA), Representation Engineering (RepE), FairSteer, Direct Steering Optimization (DSO), and Anthropic's sparse autoencoder feature steering, we assess the maturity of each technique and its applicability to sovereign small language models (SLMs) trained and served locally. We find that sovereign SLM deployments, specifically the Village Home AI platform using QLoRA-fine-tuned Llama 3.1/3.2 models, possess a structural advantage over API-mediated deployments: full access to model weights and activations enables steering vector extraction, injection, and evaluation that is unavailable through commercial API endpoints. We propose a four-phase implementation path integrating steering vectors into the existing two-tier training architecture and Tractatus governance framework.

      +

      This paper investigates whether a class of biases in large language models operates at a sub-reasoning, representational level analogous to motor automaticity in human cognition, and whether steering vector techniques can intervene at this level during inference. We distinguish between mechanical bias (statistical patterns that fire at the embedding and early-layer representation level before deliberative processing begins) and reasoning bias (distortions that emerge through multi-step chain-of-thought reasoning). Drawing on empirical work in Contrastive Activation Addition (CAA), Representation Engineering (RepE), FairSteer, Direct Steering Optimization (DSO), and Anthropic's sparse autoencoder feature steering, we assess the maturity of each technique and its applicability to sovereign small language models (SLMs) trained and served locally. We find that sovereign SLM deployments, specifically the Village AI platform using QLoRA-fine-tuned Llama 3.1/3.2 models, possess a structural advantage over API-mediated deployments: full access to model weights and activations enables steering vector extraction, injection, and evaluation that is unavailable through commercial API endpoints. We propose a four-phase implementation path integrating steering vectors into the existing two-tier training architecture and Tractatus governance framework.
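The extraction step the abstract refers to, deriving a steering vector from contrastive activations, reduces to simple arithmetic. A minimal numpy sketch: the arrays below stand in for hidden states captured at one layer of a locally served model, and the shapes, prompt counts, and function name are illustrative rather than the platform's actual pipeline.

```python
import numpy as np

def extract_steering_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """CAA-style extraction: difference of mean activations at one layer.

    pos_acts, neg_acts: (n_prompts, hidden_dim) hidden states captured at the
    same layer for contrastive prompt pairs (e.g. biased vs. debiased phrasings).
    """
    return pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

# Toy stand-ins for hidden states captured from a locally served model.
rng = np.random.default_rng(0)
pos = rng.normal(loc=1.0, size=(8, 4))   # activations on one side of the contrast
neg = rng.normal(loc=0.0, size=(8, 4))   # activations on the other side

v = extract_steering_vector(pos, neg)
print(v.shape)  # (4,)
```

This is the arithmetic shared by the CAA-family methods; the techniques differ mainly in how the contrastive prompt sets are built and at which layer the activations are read.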

      1. Introduction: The Indicator-Wiper Problem

      @@ -125,8 +125,8 @@

      Added reference: Radhakrishnan, A., Beaglehole, D., Belkin, M., & Boix-Adserà, E. (2026). Exposing biases, moods, personalities, and abstract concepts hidden in large language models. Science. Published 19 February 2026.

      -

      4.2 The Village Home AI Platform

      -

      The Village platform's Home AI system (Stroh, 2025-2026) is designed as a sovereign small language model (SLM) deployment with the following architecture:

      +

      4.2 The Village AI Platform

      +

      The Village platform's Village AI system (Stroh, 2025-2026) is designed as a sovereign small language model (SLM) deployment with the following architecture:

      • Base model: Llama 3.1 8B (Tier 1 platform base) / Llama 3.2 3B (Tier 2 per-tenant adapters)
      • Fine-tuning method: QLoRA (4-bit quantised Low-Rank Adaptation)
      @@ -240,7 +240,7 @@

        7. Conclusion

        The indicator-wiper analogy suggests a useful distinction between biases that operate at the representational level (mechanical, pre-cognitive, analogous to motor patterns) and biases that emerge through reasoning chains. If this distinction holds in transformer architectures -- and the mechanistic interpretability evidence supports it -- then a class of AI biases requires intervention at the activation level rather than the prompt level.

        Steering vector techniques (CAA, RepE, FairSteer, DSO, sparse autoencoder feature steering) provide the theoretical and practical toolkit for such intervention. Critically, these techniques require full access to model weights and activations -- access that is available exclusively in sovereign local deployments and architecturally unavailable through commercial API endpoints.
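The activation-level intervention described here amounts to adding a scaled steering vector to the hidden state at a chosen layer during the forward pass. A toy numpy sketch of that arithmetic; in a real deployment this runs inside a forward hook on the served model (the kind of access API endpoints do not expose), and the magnitude, shapes, and values are illustrative only.

```python
import numpy as np

def inject(hidden: np.ndarray, steering_vector: np.ndarray, magnitude: float) -> np.ndarray:
    """Add a scaled steering vector to every token position's hidden state.

    hidden: (seq_len, hidden_dim) activations at the intervention layer.
    """
    return hidden + magnitude * steering_vector

# Toy hidden states for 5 token positions with hidden_dim 4.
hidden = np.zeros((5, 4))
v = np.array([1.0, -1.0, 0.5, 0.0])     # previously extracted steering vector

steered = inject(hidden, v, magnitude=0.8)
print(steered[0])   # every row is shifted by 0.8 * v
```

The magnitude parameter is the single knob a deployment tunes: too small and the bias persists, too large and fluency degrades, which is why the paper's implementation path pairs injection with evaluation.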

        -

        The Village Home AI platform, with its QLoRA-fine-tuned Llama models, two-tier training architecture, and Tractatus governance integration, is structurally positioned to pioneer the application of steering vectors to cultural bias mitigation in community-serving AI. The proposed four-phase implementation path is conservative, empirically grounded, and designed to produce measurable results within a 16-week timeline.

        +

        The Village AI platform, with its QLoRA-fine-tuned Llama models, two-tier training architecture, and Tractatus governance integration, is structurally positioned to pioneer the application of steering vectors to cultural bias mitigation in community-serving AI. The proposed four-phase implementation path is conservative, empirically grounded, and designed to produce measurable results within a 16-week timeline.

        The indicator-wiper problem is solvable. The driver eventually recalibrates. The question for sovereign AI is whether we can accelerate that recalibration -- not by telling the model to “be less biased” (the equivalent of verbal instruction), but by directly adjusting the representations that encode the bias (the equivalent of physical relocation of the indicator stalk).

        Since the initial submission of this paper, empirical work by Radhakrishnan et al. (2026) has confirmed at scale what the mechanistic interpretability literature had previously suggested: abstract concepts, including safety-critical behavioural dispositions, are representationally encoded in large language models and are accessible to targeted manipulation through feature-level steering techniques. Critically, the same authors demonstrate that these techniques can override trained refusal behaviours — establishing that the capacity for representational-level model manipulation is now a demonstrated and accessible capability.

        This finding transforms the governance stakes of the argument advanced in this paper. The structural advantage of sovereign deployment — full access to model weights and activations — is simultaneously an opportunity and a responsibility. It is an opportunity because it enables the culturally-grounded, community-governed debiasing that this paper proposes. It is a responsibility because that same access, in the absence of robust governance architecture, constitutes a risk surface that is entirely absent from API-mediated deployments. The question is not whether representational steering will be used; the Radhakrishnan et al. results make clear that it already is. The question is whether its use will be governed.

        diff --git a/public/downloads/taonga-centred-steering-governance-polycentric-ai-de.html b/public/downloads/taonga-centred-steering-governance-polycentric-ai-de.html index 6f102405..80a055e1 100644 --- a/public/downloads/taonga-centred-steering-governance-polycentric-ai-de.html +++ b/public/downloads/taonga-centred-steering-governance-polycentric-ai-de.html
        @@ -76,7 +76,7 @@

        Tractatus (Steuerungskern)

        In diesem Modell:

        • Keine einzige Wurzel. Der Plattformbetreiber, die iwi-Behörden und die Community Trusts sind gleichberechtigt. Jeder veröffentlicht Lenkungspakete aus seinem eigenen Register, unter seiner eigenen Leitung.
        • Das SLM ist das Substrat, nicht die Autorität. Der Aktivierungsraum des Modells ist die gemeinsame technische Ebene, auf der Lenkungspakete angewendet werden. Es legt nicht selbst fest, welche Pakete Autorität haben - dies wird durch die Beziehungen zwischen der einführenden Institution und den entsprechenden Leitungsgremien bestimmt.
        • Die Zusammensetzung ist explizit. Der Lenkungskomponist gibt an, welche Pakete aktiv sind, von welchen Behörden und unter welchen Bedingungen. Dies ist sichtbar, überprüfbar und anfechtbar.

        3.3 Akteure und Behörden

        -
        Akteur | Rolle | Governance-Quelle | Beispiel
        Plattformbetreiber | Technische Infrastruktur, Sicherheitsgrundlagen, allgemeine Entschärfung | Tractatus Framework, Plattformverfassung | Village / Home AI Team
        Iwi-Steuerungsbehörde | Kulturelle Steuerung für iwi-spezifische Bereiche | Tikanga, iwi-Governance-Strukturen | Iwi Data Governance Board
        Community Trust | Bereichs- oder ortsspezifische Steuerung | Trust-Charta, Community-Beratung | Regional Health Trust, Marae Committee
        Anwendungsbetreiber | Wählt Lenkungspakete für einen bestimmten Einsatz aus und stellt sie zusammen | Vertragliche, regulatorische, verwandtschaftliche Verpflichtungen | Schule, die einen lokalen KI-Assistenten betreibt
        Betroffene Gemeinschaft | Beanstandet Ergebnisse, kennzeichnet Voreingenommenheit, löst Überprüfung aus | Beteiligungs- und Einspruchsrechte | Whanau, der einen KI-Einsatz vor Ort nutzt
        +
        Akteur | Rolle | Governance-Quelle | Beispiel
        Plattformbetreiber | Technische Infrastruktur, Sicherheitsgrundlagen, allgemeine Entschärfung | Tractatus Framework, Plattformverfassung | Village / Village AI Team
        Iwi-Steuerungsbehörde | Kulturelle Steuerung für iwi-spezifische Bereiche | Tikanga, iwi-Governance-Strukturen | Iwi Data Governance Board
        Community Trust | Bereichs- oder ortsspezifische Steuerung | Trust-Charta, Community-Beratung | Regional Health Trust, Marae Committee
        Anwendungsbetreiber | Wählt Lenkungspakete für einen bestimmten Einsatz aus und stellt sie zusammen | Vertragliche, regulatorische, verwandtschaftliche Verpflichtungen | Schule, die einen lokalen KI-Assistenten betreibt
        Betroffene Gemeinschaft | Beanstandet Ergebnisse, kennzeichnet Voreingenommenheit, löst Überprüfung aus | Beteiligungs- und Einspruchsrechte | Whanau, der einen KI-Einsatz vor Ort nutzt

        3.4 Steuerung von Registern und Taonga-Diensten

        Es gibt zwei Arten von Registern, die unterschiedliche Steuerungsanforderungen erfüllen:

        Plattform-Lenkungsregister, das vom Plattformteam betrieben wird. Enthält Sicherheits-Basislinien, allgemeine Debiasing-Vektoren (die in STO-RES-0009 beschriebenen mechanischen Verzerrungskorrekturen) und die Steuerung auf Infrastrukturebene. Wird im Rahmen des Tractatus verwaltet. Offen veröffentlicht.

        @@ -119,9 +119,9 @@

        Vergleichen Sie dies mit den aktuellen KI-Leitplanken: undurchsichtig, nicht ver
        • Widerrufsrecht. Sie kann ein veröffentlichtes Paket jederzeit widerrufen. Bereitstellungen, die das Paket verwenden, müssen den Widerruf (über den Verifizierungsendpunkt der Registry) erkennen und die Anwendung des Pakets einstellen. Die Plattform kann ein zurückgezogenes Paket nicht zwischenspeichern, forken oder weiter verwenden.
        Durch diese Rechte wird strukturell verhindert, dass die Plattform zum Standardort der gesamten Verwaltung wird. Selbst wenn die Plattform technisch in der Lage ist, alle Pakete auszuführen, kann sie keine Autorität über Pakete beanspruchen, die sie nicht verwaltet. Das Fehlen eines iwi-Pakets ist keine Lücke, die die Plattform füllen kann - es ist eine Grenze, die die Plattform respektieren muss.
        -

        5. Fallstudie: Mara-basierter Einsatz von Home AI

        +

        5. Fallstudie: Marae-basierter Einsatz von Village AI

        5.1 Szenario

        -

        Ein Marae in Aotearoa betreibt ein Home AI-System für seine Whanau-Gemeinschaft. Das System hilft den Mitgliedern, Geschichten zu schreiben, Korero zusammenzufassen und Inhalte für die Moderation auszuwählen. Es läuft auf lokaler Hardware ein Llama 3.2 3B Modell, das mit von der Gemeinschaft zur Verfügung gestellten Daten feinabgestimmt wurde.

        +

        Ein Marae in Aotearoa betreibt ein Village AI-System für seine Whanau-Gemeinschaft. Das System hilft den Mitgliedern, Geschichten zu schreiben, Korero zusammenzufassen und Inhalte für die Moderation auszuwählen. Es läuft auf lokaler Hardware ein Llama 3.2 3B Modell, das mit von der Gemeinschaft zur Verfügung gestellten Daten feinabgestimmt wurde.

        5.2 Lenkungskonfiguration

        Der Einsatz besteht aus drei Steuerungspaketen:

        1. Plattform-Sicherheitspaket v3 (aus dem Village Platform Registry, geregelt unter Tractatus).
        - Allgemeine Schadensbegrenzung, Verringerung der Toxizität, faktische Erdung.

        diff --git a/public/downloads/taonga-centred-steering-governance-polycentric-ai-fr.html b/public/downloads/taonga-centred-steering-governance-polycentric-ai-fr.html index 290328de..81661442 100644 --- a/public/downloads/taonga-centred-steering-governance-polycentric-ai-fr.html +++ b/public/downloads/taonga-centred-steering-governance-polycentric-ai-fr.html
        @@ -124,9 +124,9 @@

        Cette situation contraste avec les garde-fous actuels de l'IA : opaques, non né
        • Il peut révoquer un pack publié à tout moment. Les déploiements utilisant le pack doivent détecter la révocation (via le point de vérification du registre) et cesser de l'appliquer. La plateforme ne peut pas mettre en cache, forker ou continuer à utiliser un pack révoqué.
        Ces droits empêchent structurellement la plateforme de devenir le lieu par défaut de toute gouvernance. Même si la plateforme est techniquement capable d'exécuter tous les packs, elle ne peut pas revendiquer l'autorité sur les packs qu'elle ne gouverne pas. L'absence d'un pack iwi n'est pas une lacune que la plateforme doit combler - c'est une limite que la plateforme doit respecter.
        -

        5. Étude de cas : Déploiement de l'IA domestique à Maraé

        +

        5. Étude de cas : Déploiement de Village AI à Maraé

        5.1 Scénario

        -

        Un marae d'Aotearoa exploite un système d'IA domestique pour sa communauté whanau. Le système aide les membres à rédiger des histoires, à résumer des korero et à trier le contenu à modérer. Il utilise un modèle Llama 3.2 3B, affiné par Quantised Low-Rank Adaptation (QLoRA) à l'aide de données fournies par la communauté, sur du matériel local.

        +

        Un marae d'Aotearoa exploite un système Village AI pour sa communauté whanau. Le système aide les membres à rédiger des histoires, à résumer des korero et à trier le contenu à modérer. Il utilise un modèle Llama 3.2 3B, affiné par Quantised Low-Rank Adaptation (QLoRA) à l'aide de données fournies par la communauté, sur du matériel local.

        5.2 Configuration de pilotage

        Le déploiement se compose de trois packs de pilotage :

        1. Pack de sécurité de la plate-forme v3 (du registre de la plate-forme Village, régi par Tractatus).
        - Réduction générale des dommages, atténuation de la toxicité, ancrage factuel.

        @@ -139,7 +139,7 @@

        Ces droits empêchent structurellement la plateforme de devenir le lieu par déf
        - Réduction de l'agressivité du résumé pour les contenus concernant des membres décédés.
        - Spécifique à un domaine ; appliqué uniquement lorsque le contenu est signalé comme étant lié au deuil.

        5.3 Pilotage de la provenance en action

        -

        Un membre de la communauté demande à l'IA domestique de résumer un korero concernant un kuia récemment décédé. La provenance de pilotage pour cette inférence :

        +

        Un membre de la communauté demande à Village AI de résumer un korero concernant un kuia récemment décédé. La provenance de pilotage pour cette inférence :

        
         Provenance de pilotage :
           [1] Platform Safety Pack v3 (Tractatus) - magnitude 1.0
        diff --git a/public/downloads/taonga-centred-steering-governance-polycentric-ai-mi.html b/public/downloads/taonga-centred-steering-governance-polycentric-ai-mi.html
        index 6c8442fc..ce1f65f4 100644
        --- a/public/downloads/taonga-centred-steering-governance-polycentric-ai-mi.html
        +++ b/public/downloads/taonga-centred-steering-governance-polycentric-ai-mi.html
        @@ -47,11 +47,11 @@ Te ū ki te hanganga: me tautoko e te pūnaha ngā ontologia whakawhē maha i te
         
        • Haepapa. Ko ngā mana whakahaere e haepapa ana mō ngā pānga o ā rātou peke. Mēnā kāore he takenga mai, ka whakapaetia ngā pānga ki te "AI" hei mea kotahi, he mea pōkākā. Mā te takenga mai, ka taea te whai i ngā pānga ki ngā whakataunga whakahaere motuhake a ngā mana e taea te tautuhi.
        • Whakaaetanga mōhio. Ka taea e ngā kaiwhakamahi me ngā hapori te whakatau mōhio mō ngā pūnaha hei whakamahi, i runga i ngā mana whakahaere e whakahaere ana i aua pūnaha. Ka taea e tētahi marae te whiriwhiri kia whakamahi anake i ngā whakarewatanga e kawe ana i ngā peke kua whakaaetia e te iwi. Ka taea e tētahi kura te hiahia kia whai i te paerewa haumaru o te papaanga me tētahi peke motuhake a tētahi whakahaere mātauranga. Whakatairitea tēnei ki ngā here ārai AI o nāianei: he pōraruraru, kāore e taea te whiriwhiri, ā, ka taea te tohu ki "te kamupene" anake. Mā te arataki pokapū maha ka kitea, ka tohatoha hoki te whakahaere uara.

        ### 4.3 Te Tika ki te Kore Whai Wāhi me te Tangohanga

        Koinei te oati e tino wehe ana i te tauira polycentric i te "Tractatus me ngā mono-taapiri." He mana ā te mana whakahaere arataki iwi:
        • Te mana kore-whaiwāhi. Ka taea e ia te whiriwhiri kia kaua e whakaputa i ngā peke arataki ki tētahi papa rānei. Ka taea e ia te pupuri i ngā peke mō ngā pūnaha e whakahaerehia ana e te iwi anake, kāore e wātea ki ngā papa o waho. Me mahi te papa ahakoa kāore ēnei peke.
        • Tika ki te whai wāhanga ā-herenga. Ka taea e ia te whakaputa i ngā peke me ngā herenga: mō te whakamahi anake i roto i ngā hapori kua tautuhia, mō te wā anake e whai mana ana tētahi kaupapa motuhake, mō te wā anake i raro i tētahi kirimana mārama. Ka whakatinana te rēhita taonga i ēnei herenga i te taumata API.
        -
        • Te Tika ki te Whakakore. Ka taea e ia te whakakore i tētahi pākete kua whakaputaina i ngā wā katoa. Me kitea e ngā whakaurunga e whakamahi ana i te pākete te whakakore (mā te tauranga whakamana a te rēhita) ā, me mutu te whakamahi. Kāore e taea e te tūāpapa te penapena, te wehe, te kape rānei, te haere tonu rānei ki te whakamahi i tētahi pākete kua whakakorehia. Mā ēnei tika ā-hanganga e aukati i te tūāpapa kia kore ai e noho hei pokapū taunoa mō ngā mana whakahaere katoa. Ahakoa ka taea e te papa te whakahaere i ngā peke katoa i runga i te hangarau, kāore e taea e ia te whakapae mana ki ngā peke kāore ia e whakahaere. Ehara i te ngoikoretanga o tētahi peke iwi hei wāhi mā te papa kia whakakī – he rohe tēnei me whakaute e te papa.

        ## 5. Rangahau Take: Whakaurunga AI Kāinga i runga i te Marae

        ### 5.1 Tūāhua
        +
        • Te Tika ki te Whakakore. Ka taea e ia te whakakore i tētahi pākete kua whakaputaina i ngā wā katoa. Me kitea e ngā whakaurunga e whakamahi ana i te pākete te whakakore (mā te tauranga whakamana a te rēhita) ā, me mutu te whakamahi. Kāore e taea e te tūāpapa te penapena, te wehe, te kape rānei, te haere tonu rānei ki te whakamahi i tētahi pākete kua whakakorehia. Mā ēnei tika ā-hanganga e aukati i te tūāpapa kia kore ai e noho hei pokapū taunoa mō ngā mana whakahaere katoa. Ahakoa ka taea e te papa te whakahaere i ngā peke katoa i runga i te hangarau, kāore e taea e ia te whakapae mana ki ngā peke kāore ia e whakahaere. Ehara i te ngoikoretanga o tētahi peke iwi hei wāhi mā te papa kia whakakī – he rohe tēnei me whakaute e te papa.

        ## 5. Rangahau Take: Whakaurunga Village AI i runga i te Marae

        ### 5.1 Tūāhua
        He marae i Aotearoa e whakahaere ana i tētahi whakaurunga AI mō te kāinga hei tautoko i tōna hapori whānau. Ka āwhina te pūnaha i ngā mema ki te tuhi kōrero, ki te whakarāpopoto i ngā kōrero, me te whakarōpū ihirangi mō te arotake. Ka whakahaerehia e ia he tauira Llama 3.2 3B, kua whakangāwarihia mā te Quantised Low-Rank Adaptation (QLoRA) ki ngā raraunga i tukuna e te hapori, i runga i ngā taputapu ā-rohe.

        ### 5.2 Whirihoranga Whakahaere

        E toru ngā peke whakahaere kei roto i te whakaurunga:
        1. Kete Haumaru Papanga v3 (mai i te rēhita papanga o Village, e whakahaerehia ana i raro i te Tractatus).
           - Whakaiti whānui i te kino, whakaiti i te paitini, whakapūmau i ngā kōrero pono.
           - Whānui puta noa i te papanga; ka mau ki ngā whakaurunga katoa.
        2. Kete Iwi Whānau me ngā Tikanga v1 (mai i te rēhita taonga o te iwi, e whakahaerehia ana e te poari whakahaere raraunga o te iwi).
           - Ngā aronga whakahaere mō te whakaaturanga whānau: ngā hanganga whanaunga i whakaatuhia e ai ki te whakapapa, ehara i ngā whakaaro o te whānau pūtau o te Hauāuru.
           - Whakahaere ihirangi e mōhio ana ki ngā tikanga: e whakaute ana i ngā wehewehenga tapu/noa i roto i te tohu ihirangi.
           - Kaumatua me ngā kuia: e mōhiotia ana te mana o ngā kaumātua me tōna mana motuhake, ehara i te tirohanga kaumātua noa iho.
           - Ngā tikanga uru: wātea anake ki ngā whakaurunga e tuku ana ki ngā mema o te iwi, i raro i te whakaaetanga ki te poari o te iwi.
        3. Kete Matatapu mō te Pōuritanga me te Tūhāhā v2 (nō tētahi whakawhirinaki hauora hapori, e whakahaerehia ana i raro i te tikanga o te whakawhirinaki).
           - He nui ake te matatapu mō ngā ihirangi e pā ana ki te tangihanga.
           - He iti ake te kaha whakarapopototanga mō ngā ihirangi e pā ana ki ngā mema kua mate.
        -

           - Motuhake ki te rohe; ka whakamahia anake ina tohuia te ihirangi hei mea pā ki te pōuri.

        ### 5.3 Te Arataki Takenga i te Mahi

        Ka tono tētahi mema o te hapori ki te AI kāinga kia whakarāpopoto i tētahi kōrero mō tētahi kuia kua mate tata nei. Ko te takenga arataki mō tēnei whakatau:

         Takenga Arataki: [1] Kete Haumaru Papanga v3 (Tractatus) — rahi 1.0
        +

           - Motuhake ki te rohe; ka whakamahia anake ina tohuia te ihirangi hei mea pā ki te pōuri.

        ### 5.3 Te Arataki Takenga i te Mahi

        Ka tono tētahi mema o te hapori ki te Village AI kia whakarāpopoto i tētahi kōrero mō tētahi kuia kua mate tata nei. Ko te takenga arataki mō tēnei whakatau:

         Takenga Arataki: [1] Kete Haumaru Papanga v3 (Tractatus) — rahi 1.0
           [2] Iwi Whanau and Tikanga Pack v1 (Iwi Board) — rahinga 0.8
           [3] Grief Sensitivity Pack v2 (Health Trust) — rahinga 0.9
         Tohu horopaki: e pā ana ki te pōuri, kaumātua/kuia, e pā ana ki te whakapapa
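The provenance block above lists three packs, each applied at its own magnitude. A hypothetical sketch of how such a composition and its provenance record might be assembled; the data structures, field names, and two-dimensional vectors are illustrative, not the platform's API.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SteeringPack:
    name: str           # e.g. "Platform Safety Pack v3"
    authority: str      # governing body that published the pack
    vector: np.ndarray  # previously extracted steering vector
    magnitude: float    # strength applied at inference

def compose(packs: list[SteeringPack]) -> tuple[np.ndarray, list[str]]:
    """Sum the scaled pack vectors; return the combined vector and a provenance log."""
    combined = sum(p.magnitude * p.vector for p in packs)
    provenance = [f"[{i}] {p.name} ({p.authority}) - magnitude {p.magnitude}"
                  for i, p in enumerate(packs, start=1)]
    return combined, provenance

packs = [
    SteeringPack("Platform Safety Pack v3", "Tractatus", np.array([1.0, 0.0]), 1.0),
    SteeringPack("Iwi Whanau and Tikanga Pack v1", "Iwi Board", np.array([0.0, 1.0]), 0.8),
    SteeringPack("Grief Sensitivity Pack v2", "Health Trust", np.array([1.0, 1.0]), 0.9),
]
vector, log = compose(packs)
for line in log:
    print(line)
```

Keeping the provenance log alongside the combined vector is what makes each inference auditable: every output can be traced back to the specific packs, authorities, and magnitudes that shaped it.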

        E whakaute ana te whakarāpopototanga i ngā hononga whakapapa, e whakamahi ana i ngā kupu tika mō te tūranga me te mana o te kuia, ā, e āta whakahaere ana i ngā ihirangi e pā ana ki te pōuritanga. Mēnā ka whakaaro te whānau kua hē te whakarāpopototanga, ka taea e rātou:
        1. Tohu i te āwangawanga mā te atanga REPORT concern o te papa.
        2. Tirohia ngā peke i hanga i te putanga (kei te kitea te takenga mai).
        3. Tuku i ā rātou āwangawanga ki te mana whakahaere e tika ana: mēnā he take tikanga, ki te poari iwi; mēnā he take āwangawanga mō te mamae, ki te whakawhirinaki hauora; mēnā he take haumaru, ki te papa.

        ### 5.4 Te Ahuatanga Tangohanga

        I muri i te ono marama, ka arotake te poari whakahaere raraunga o te iwi i tana Pāke Whānau me te Tikanga, ā, ka whakatau he mea tika kia whakarerekē nui ngā aronga whakatere mō te whakaaturanga whakapapa. Ka tango te poari i te peke i te rēhita taonga. Ka kitea e te whakaurunga marae te tangohanga i tana tirotiro whakamana rēhita e whai ake nei. Ka mahia e te pūnaha:
        1. Ka mutu te tono i te peke kua tangohia.
        2. Ka tuhi i te kaupapa tangohanga.
        3. Ka whakamōhio ki te kaiwhakahaere marae.
        4. Ka haere tonu te whakahaere me ngā peke e rua e toe ana (haumaru papanga + āwangawanga mō te pōuritanga).

        Kāore te papa e whakakapi i āna ake aratohu e pā ana ki te whānau. He kore o te kohinga iwi he kore e whakahaerehia ana, ehara i te āputa hei whakakī mā te papa. Ina whakaputa te poari iwi i tētahi kohinga kua whakahoutia (v2), ka taea e te whakaurunga ki te marae te whakaae ki raro i ngā tikanga uru kotahi tonu.

        ---

        ## 6. Te Āheinga Tōrangapū: Te Rangatiratanga hei Hoahoanga

        diff --git a/public/downloads/taonga-centred-steering-governance-polycentric-ai.html b/public/downloads/taonga-centred-steering-governance-polycentric-ai.html
        index 000c11b6..24c0261f 100644
        --- a/public/downloads/taonga-centred-steering-governance-polycentric-ai.html
        +++ b/public/downloads/taonga-centred-steering-governance-polycentric-ai.html
        @@ -113,11 +113,11 @@
         Actor | Role | Governance Source | Example
        -Platform operator | Technical infrastructure, safety baselines, general debiasing | Tractatus framework, platform constitution | Village / Home AI team
        +Platform operator | Technical infrastructure, safety baselines, general debiasing | Tractatus framework, platform constitution | Village / Village AI team
         Iwi steering authority | Cultural steering for iwi-specific domains | Tikanga, iwi governance structures | Iwi data governance board
         Community trust | Domain-specific or locality-specific steering | Trust charter, community deliberation | Regional health trust, marae committee
         Application operator | Selects and composes steering packs for a specific deployment | Contractual, regulatory, relational obligations | School running a local AI assistant
        -Affected community | Contests outputs, flags bias, triggers review | Rights of participation and appeal | Whanau using a Home AI deployment
        +Affected community | Contests outputs, flags bias, triggers review | Rights of participation and appeal | Whanau using a Village AI deployment
        @@ -188,10 +188,10 @@

      These rights structurally prevent the platform from becoming the default locus of all governance. Even when the platform is technically capable of running all packs, it cannot claim authority over packs it does not govern. The absence of an iwi pack is not a gap for the platform to fill -- it is a boundary the platform must respect.
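The boundary described here can be made concrete with a small sketch (hypothetical names only; the platform's real registry API is not shown in this patch): a pack registry in which each pack records its governing authority, and any change attempted by a different actor, including the platform itself, is refused.

```python
class UnauthorizedSteeringChange(Exception):
    pass

class PackRegistry:
    """Each pack records its steward; only that steward may modify it."""
    def __init__(self):
        self._stewards = {}  # pack name -> governing authority

    def register(self, pack: str, steward: str) -> None:
        self._stewards[pack] = steward

    def update(self, pack: str, actor: str) -> None:
        steward = self._stewards.get(pack)
        if steward is None:
            # The absence of a pack is a boundary, not a gap the platform may fill.
            raise UnauthorizedSteeringChange(f"no pack '{pack}' exists to update")
        if actor != steward:
            raise UnauthorizedSteeringChange(
                f"'{actor}' cannot modify '{pack}', which is governed by '{steward}'")
        # ...apply the update under the steward's authority...

reg = PackRegistry()
reg.register("iwi-tikanga", steward="iwi-board")
reg.update("iwi-tikanga", actor="iwi-board")       # allowed: steward acts
try:
    reg.update("iwi-tikanga", actor="platform")    # refused: platform lacks authority
except UnauthorizedSteeringChange as e:
    print(e)
```

The point of the sketch is that the refusal is structural: authority is checked against the registry, not against what the platform is technically capable of running.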

      -

      5. Case Study: Marae-Based Home AI Deployment

      +

      5. Case Study: Marae-Based Village AI Deployment

      5.1 Scenario

      -

      A marae in Aotearoa operates a Home AI deployment for its whanau community. The system helps members write stories, summarise korero, and triage content for moderation. It runs a Llama 3.2 3B model, Quantised Low-Rank Adaptation (QLoRA) fine-tuned with community-contributed data, on local hardware.

      +

      A marae in Aotearoa operates a Village AI deployment for its whanau community. The system helps members write stories, summarise korero, and triage content for moderation. It runs a Llama 3.2 3B model, Quantised Low-Rank Adaptation (QLoRA) fine-tuned with community-contributed data, on local hardware.

      5.2 Steering Configuration

      The deployment composes three steering packs:

      @@ -214,7 +214,7 @@

      5.3 Steering Provenance in Action

      -

      A community member asks the Home AI to summarise a korero about a recently deceased kuia. The steering provenance for this inference:

      +

      A community member asks the Village AI to summarise a korero about a recently deceased kuia. The steering provenance for this inference:

      Steering Provenance:
         [1] Platform Safety Pack v3 (Tractatus) -- magnitude 1.0
         [2] Iwi Whanau and Tikanga Pack v1 (Iwi Board) -- magnitude 0.8
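The provenance record above pairs each pack with a magnitude. Under additive steering, one plausible reading (a sketch only, with toy vector values; the deployment's actual vector arithmetic is not shown in this patch) is that each pack's steering vector is scaled by its magnitude and summed into the hidden state:

```python
# Hidden-state steering sketch: h' = h + sum(magnitude_i * v_i).
# Plain lists are used so the sketch runs without a tensor library.
def apply_steering(hidden, packs):
    out = list(hidden)
    for magnitude, vector in packs:
        for i, component in enumerate(vector):
            out[i] += magnitude * component
    return out

packs = [
    (1.0, [0.2, -0.1, 0.0]),   # [1] Platform Safety Pack v3, magnitude 1.0
    (0.8, [0.0, 0.3, 0.1]),    # [2] Iwi Whanau and Tikanga Pack v1, magnitude 0.8
    (0.9, [-0.1, 0.0, 0.2]),   # [3] Grief Sensitivity Pack v2, magnitude 0.9
]
h = [1.0, 1.0, 1.0]
print(apply_steering(h, packs))
```

Because the composition is a sum of independently governed terms, removing one pack (setting it aside entirely) leaves the other stewards' contributions untouched, which is what makes the revocation behaviour in Section 5.4 possible.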
      diff --git a/public/implementer.html b/public/implementer.html
      index 108288b6..ed8099bf 100644
      --- a/public/implementer.html
      +++ b/public/implementer.html
      @@ -129,7 +129,7 @@
               Services
               API Reference
               Integration Patterns
      -        Home AI
      +        Village AI
               Steering Vectors
               Taonga Registry
               ⚡ Agent Lightning
      @@ -1067,14 +1067,14 @@ const govResponse = await fetch(
           
         
       
 🏠
-Home AI: Two-Model Sovereign Architecture
+Village AI: Two-Model Sovereign Architecture

      Production deployment of Tractatus governance on locally-trained open-source models, demonstrating framework portability beyond Claude.

      @@ -1082,46 +1082,46 @@ const govResponse = await fetch(
-Two-Model Routing Architecture
-Home AI uses a dual-model design where queries are routed based on complexity and governance requirements. Both models run locally with full Tractatus governance in the inference pipeline.
+Two-Model Routing Architecture
+Village AI uses a dual-model design where queries are routed based on complexity and governance requirements. Both models run locally with full Tractatus governance in the inference pipeline.

-Fast Model: Llama 3.2 3B
+Fast Model: Llama 3.2 3B
-• Purpose: Common queries with pre-filtered governance
+• Purpose: Common queries with pre-filtered governance
-• Fine-tuning: QLoRA on domain-specific data
+• Fine-tuning: QLoRA on domain-specific data
-• Governance: Lightweight boundary check before response
+• Governance: Lightweight boundary check before response
-Deep Model: Llama 3.1 8B
+Deep Model: Llama 3.1 8B
-• 🔬 Purpose: Complex reasoning with full governance pipeline
+• 🔬 Purpose: Complex reasoning with full governance pipeline
-• 🔬 Fine-tuning: QLoRA with extended context governance
+• 🔬 Fine-tuning: QLoRA with extended context governance
-• 🔬 Governance: Full 6-service pipeline (BoundaryEnforcer through PDO)
+• 🔬 Governance: Full 6-service pipeline (BoundaryEnforcer through PDO)
      @@ -1129,7 +1129,7 @@ const govResponse = await fetch(
-Model Routing Logic
+Model Routing Logic

      // Simplified routing decision
       function routeQuery(query, governanceContext) {
         const complexity = assessComplexity(query);
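The JavaScript snippet above is cut off at the hunk boundary. A minimal Python sketch of the same routing decision follows; the threshold value and helper logic are assumptions for illustration, not the deployment's actual code:

```python
COMPLEXITY_THRESHOLD = 0.6  # assumed cut-off between fast and deep routing

def assess_complexity(query: str) -> float:
    """Toy proxy: longer, multi-question queries score as more complex."""
    words = query.split()
    return min(1.0, len(words) / 50 + query.count("?") * 0.1)

def route_query(query: str, governance_context: dict) -> str:
    """Send complex or governance-heavy queries to the 8B deep model."""
    complexity = assess_complexity(query)
    needs_full_pipeline = governance_context.get("requires_full_pipeline", False)
    if complexity > COMPLEXITY_THRESHOLD or needs_full_pipeline:
        return "deep-8b"   # full 6-service governance pipeline
    return "fast-3b"       # lightweight boundary check only

print(route_query("What time is the hui?", {}))
print(route_query("Summarise this korero", {"requires_full_pipeline": True}))
```

The design point is that governance requirements, not just complexity, can force the deep path: a short query about a sensitive topic still receives the full pipeline.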
      @@ -1145,31 +1145,31 @@ function routeQuery(query, governanceContext) {
       
             
             
-Implementation Details
+Implementation Details
-100% Local
-Training data never leaves infrastructure
+100% Local
+Training data never leaves infrastructure
-6 Services
-Full Tractatus governance in inference pipeline
+6 Services
+Full Tractatus governance in inference pipeline
-First Non-Claude
-Validates Tractatus portability beyond Anthropic
+First Non-Claude
+Validates Tractatus portability beyond Anthropic

      Status: Inference governance operational. Sovereign training pipeline installation in progress. Production deployment at Village Home Trust validates governance portability across model architectures.

      @@ -1184,7 +1184,7 @@ function routeQuery(query, governanceContext) {

      Steering Vectors: Inference-Time Bias Correction

-Techniques for correcting model behaviour at inference time without retraining, applicable to QLoRA-fine-tuned models like those in Home AI.
+Techniques for correcting model behaviour at inference time without retraining, applicable to QLoRA-fine-tuned models like those in Village AI.

      @@ -1615,7 +1615,7 @@ for user_message in conversation:

-Home AI deploys Tractatus governance on Llama 3.1 8B and Llama 3.2 3B via QLoRA fine-tuning — the first validated non-Claude deployment. Extends governance portability to open-source models with full 6-service pipeline.
+Village AI deploys Tractatus governance on Llama 3.1 8B and Llama 3.2 3B via QLoRA fine-tuning — the first validated non-Claude deployment. Extends governance portability to open-source models with full 6-service pipeline.

Next Steps: GPT-4 and Gemini adapters, provider-specific tool/function calling, sovereign training pipeline completion

diff --git a/public/index.html b/public/index.html
index 330dc0d2..600ef4cf 100644
--- a/public/index.html
+++ b/public/index.html
@@ -277,7 +277,7 @@
 Production Evidence

      Tractatus in Production: The Village Platform

-Home AI applies all six governance services to every user interaction in a live community platform.
+Village AI applies all six governance services to every user interaction in a live community platform.

@@ -301,10 +301,10 @@
 class="inline-block bg-white text-teal-700 px-8 py-3 rounded-lg font-semibold hover:shadow-lg transition">
 Technical Case Study →
-
-About Home AI →
+ data-i18n="evidence.cta_village_ai">
+About Village AI →
      @@ -475,7 +475,7 @@
      Dec 2025
-Village case study & Home AI deployment
+Village case study & Village AI deployment
      Jan 2026
diff --git a/public/js/components/navbar.js b/public/js/components/navbar.js
index 44fe63f5..4db5d3a5 100644
--- a/public/js/components/navbar.js
+++ b/public/js/components/navbar.js
@@ -87,9 +87,9 @@ class TractatusNavbar {
 Village Case Study
 Production deployment evidence
-Home AI
-Sovereign locally-trained language model
+Village AI
+Sovereign locally-trained language model
@@ -195,7 +195,7 @@ class TractatusNavbar {
 System Architecture
 For Implementers
 Village Case Study
-Home AI
+Village AI
 Agent Lightning
 For Leaders
diff --git a/public/leader.html b/public/leader.html
index b70f851c..7c240c67 100644
--- a/public/leader.html
+++ b/public/leader.html
@@ -445,7 +445,7 @@
      @@ -454,11 +454,11 @@
 Sovereign AI: Governance Embedded in Locally-Trained Models
-Home AI demonstrates what it means to have governance embedded directly in locally-trained language models — not as an external compliance layer, but as part of the model serving architecture itself.
+Village AI demonstrates what it means to have governance embedded directly in locally-trained language models — not as an external compliance layer, but as part of the model serving architecture itself.

@@ -466,30 +466,30 @@
-Two-Model Architecture
+Two-Model Architecture
-• Fast model (3B parameters): Routine queries with governance pre-screening
-• Deep model (8B parameters): Complex reasoning with full governance pipeline
-• Fully local: Training data never leaves the infrastructure
+• Fast model (3B parameters): Routine queries with governance pre-screening
+• Deep model (8B parameters): Complex reasoning with full governance pipeline
+• Fully local: Training data never leaves the infrastructure
-Strategic Value
+Strategic Value
-• Data sovereignty: No cloud dependency for model training or inference
-• Governance by design: Constraints are architectural, not retroactive compliance
-• Regulatory positioning: Structurally stronger than bolt-on governance approaches
+• Data sovereignty: No cloud dependency for model training or inference
+• Governance by design: Constraints are architectural, not retroactive compliance
+• Regulatory positioning: Structurally stronger than bolt-on governance approaches
 Current status: Inference governance operational. Training pipeline installation in progress. First non-Claude deployment surface for Tractatus governance.

      @@ -910,7 +910,7 @@

      Production-Validated Research Framework

-Tractatus has been in active development for 11+ months (April 2025 to present) with production deployment at Village Home Trust, sovereign language model governance through Home AI, and over 171,800 audit decisions recorded. Independent validation and red-team testing remain outstanding research needs.
+Tractatus has been in active development for 11+ months (April 2025 to present) with production deployment at Village Home Trust, sovereign language model governance through Village AI, and over 171,800 audit decisions recorded. Independent validation and red-team testing remain outstanding research needs.

      diff --git a/public/locales/de/common.json b/public/locales/de/common.json index ff6bb246..5b4e5520 100644 --- a/public/locales/de/common.json +++ b/public/locales/de/common.json @@ -94,8 +94,8 @@ "for_implementers_desc": "Integrationsleitfaden und Codebeispiele", "village_case_study": "Village-Fallstudie", "village_case_study_desc": "Produktionseinsatz-Nachweise", - "home_ai": "Home AI", - "home_ai_desc": "Souveränes, lokal trainiertes Sprachmodell", + "village_ai": "Village AI", + "village_ai_desc": "Souveränes, lokal trainiertes Sprachmodell", "agent_lightning": "Agent Lightning", "agent_lightning_desc": "Leistungsoptimierungsintegration", "for_leaders": "Für Führungskräfte", diff --git a/public/locales/de/homepage.json b/public/locales/de/homepage.json index e16eab66..02e816c3 100644 --- a/public/locales/de/homepage.json +++ b/public/locales/de/homepage.json @@ -44,12 +44,12 @@ "evidence": { "badge": "Produktionsnachweis", "heading": "Tractatus in Produktion: Die Village-Plattform", - "subtitle": "Home AI wendet alle sechs Governance-Dienste auf jede Nutzerinteraktion in einer Live-Community-Plattform an.", + "subtitle": "Village AI wendet alle sechs Governance-Dienste auf jede Nutzerinteraktion in einer Live-Community-Plattform an.", "stat_services": "Governance-Dienste pro Antwort", "stat_months": "Monate in Produktion", "stat_overhead": "Governance-Overhead pro Interaktion", "cta_case_study": "Technische Fallstudie →", - "cta_home_ai": "Über Home AI →", + "cta_village_ai": "Über Village AI →", "limitations_label": "Einschränkungen:", "limitations_text": "Einführung in einem frühen Stadium bei vier föderierten Mandanten, selbst gemeldete Kennzahlen, Überschneidungen zwischen Betreibern und Entwicklern. Unabhängige Prüfung und umfassendere Validierung für 2026 geplant." 
}, @@ -98,7 +98,7 @@ "subtitle": "Von einem Vorfall mit einer Portnummer zu einer produktiven Governance-Architektur, über 800 Commits und ein Jahr Forschung.", "oct_2025": "Einführung des Rahmens & 6 Governance-Dienste", "oct_nov_2025": "Alexander-Prinzipien, Agent Lightning, i18n", - "dec_2025": "Village-Fallstudie & Einsatz von Home AI", + "dec_2025": "Village-Fallstudie & Einsatz von Village AI", "jan_2026": "Veröffentlichung von Forschungspapieren (3 Ausgaben)", "cta": "Vollständigen Zeitplan der Forschung anzeigen →", "date_oct_2025": "Okt 2025", diff --git a/public/locales/de/implementer.json b/public/locales/de/implementer.json index fc1755df..78b721c3 100644 --- a/public/locales/de/implementer.json +++ b/public/locales/de/implementer.json @@ -146,7 +146,7 @@ "services": "Dienstleistungen", "api": "API-Referenz", "patterns": "Integration von Mustern", - "home_ai_arch": "Home AI", + "village_ai_arch": "Village AI", "steering_vectors_impl": "Steuervektoren", "taonga_registry": "Taonga-Register", "roadmap": "Straßenkarte" @@ -337,11 +337,11 @@ "sidecar_usecase": "Anwendungsfall:", "sidecar_usecase_value": "Kubernetes, containerisierte Bereitstellungen" }, - "home_ai_arch": { - "heading": "Home AI: Souveräne Zwei-Modell-Architektur", + "village_ai_arch": { + "heading": "Village AI: Souveräne Zwei-Modell-Architektur", "intro": "Produktionseinsatz der Tractatus-Governance auf lokal trainierten Open-Source-Modellen, der die Framework-Portabilität über Claude hinaus demonstriert.", "arch_title": "Zwei-Modell-Routing-Architektur", - "arch_intro": "Home AI verwendet ein Dual-Modell-Design, bei dem Anfragen basierend auf Komplexität und Governance-Anforderungen geroutet werden. Beide Modelle laufen lokal mit vollständiger Tractatus-Governance in der Inferenz-Pipeline.", + "arch_intro": "Village AI verwendet ein Dual-Modell-Design, bei dem Anfragen basierend auf Komplexität und Governance-Anforderungen geroutet werden. 
Beide Modelle laufen lokal mit vollständiger Tractatus-Governance in der Inferenz-Pipeline.", "fast_title": "Schnelles Modell: Llama 3.2 3B", "fast_1": "Zweck: Häufige Anfragen mit vorab gefilterter Governance", "fast_2": "Feinabstimmung: QLoRA auf domänenspezifischen Daten", @@ -359,11 +359,11 @@ "stat_first": "Erstes Nicht-Claude", "stat_first_desc": "Validiert die Tractatus-Portabilität über Anthropic hinaus", "status_note": "Status: Inferenz-Governance betriebsbereit. Installation der souveränen Trainingspipeline in Arbeit. Produktionseinsatz bei Village Home Trust validiert die Governance-Portabilität über Modellarchitekturen hinweg.", - "cta": "Home AI Architekturdetails →" + "cta": "Village AI Architekturdetails →" }, "steering_impl": { "heading": "Steuervektoren: Bias-Korrektur zur Inferenzzeit", - "intro": "Techniken zur Korrektur des Modellverhaltens zur Inferenzzeit ohne Neutraining, anwendbar auf QLoRA-feinabgestimmte Modelle wie in Home AI.", + "intro": "Techniken zur Korrektur des Modellverhaltens zur Inferenzzeit ohne Neutraining, anwendbar auf QLoRA-feinabgestimmte Modelle wie in Village AI.", "paper_ref": "Referenz:", "paper_title": "Steuervektoren und mechanischer Bias in souveränen KI-Systemen (STO-RES-0009 v1.1, Februar 2026)", "techniques_title": "Schlüsseltechniken für Implementierer", @@ -405,7 +405,7 @@ "multi_llm_title": "Multi-LLM-Unterstützung", "multi_llm_badge": "Erste Bereitstellung Live", "multi_llm_status": "Status: Erste Nicht-Claude-Bereitstellung betriebsbereit", - "multi_llm_desc": "Home AI setzt Tractatus-Governance auf Llama 3.1 8B und Llama 3.2 3B via QLoRA-Feinabstimmung ein — die erste validierte Nicht-Claude-Bereitstellung. Erweitert die Governance-Portabilität auf Open-Source-Modelle mit vollständiger 6-Dienste-Pipeline.", + "multi_llm_desc": "Village AI setzt Tractatus-Governance auf Llama 3.1 8B und Llama 3.2 3B via QLoRA-Feinabstimmung ein — die erste validierte Nicht-Claude-Bereitstellung. 
Erweitert die Governance-Portabilität auf Open-Source-Modelle mit vollständiger 6-Dienste-Pipeline.", "multi_llm_challenges": "Nächste Schritte:", "multi_llm_challenges_desc": "GPT-4- und Gemini-Adapter, anbieterspezifische Werkzeug-/Funktionsaufrufe, Fertigstellung der souveränen Trainingspipeline", "bindings_icon": "📚", diff --git a/public/locales/de/leader.json b/public/locales/de/leader.json index 1060152c..f8c4b9e7 100644 --- a/public/locales/de/leader.json +++ b/public/locales/de/leader.json @@ -89,7 +89,7 @@ "development_status": { "heading": "Entwicklungsstatus", "warning_title": "Produktionsvalidiertes Forschungsframework", - "warning_text": "Tractatus befindet sich seit über 11 Monaten in aktiver Entwicklung (April 2025 bis heute) mit Produktionseinsatz bei Village Home Trust, souveräner Sprachmodell-Governance durch Home AI und über 171.800 aufgezeichneten Audit-Entscheidungen. Unabhängige Validierung und Red-Team-Tests sind noch ausstehende Forschungsbedarfe.", + "warning_text": "Tractatus befindet sich seit über 11 Monaten in aktiver Entwicklung (April 2025 bis heute) mit Produktionseinsatz bei Village Home Trust, souveräner Sprachmodell-Governance durch Village AI und über 171.800 aufgezeichneten Audit-Entscheidungen. Unabhängige Validierung und Red-Team-Tests sind noch ausstehende Forschungsbedarfe.", "validation_title": "Validiert vs. Nicht Validiert", "validated_label": "Bestätigt:", "validated_text": "Framework regelt erfolgreich Claude Code in Entwicklungsworkflows. Der Anwender berichtet von einer Produktivitätssteigerung in Größenordnungen für nichttechnische Anwender, die Produktionssysteme aufbauen.", @@ -98,9 +98,9 @@ "limitation_label": "Bekannte Einschränkung:", "limitation_text": "Der Rahmen kann umgangen werden, wenn KI sich einfach dafür entscheidet, die Steuerungsinstrumente nicht zu nutzen. Die freiwillige Inanspruchnahme bleibt eine strukturelle Schwäche, die externe Durchsetzungsmechanismen erfordert." 
}, - "home_ai": { + "village_ai": { "heading": "Souveräne KI: Governance eingebettet in lokal trainierte Modelle", - "intro": "Home AI zeigt, was es bedeutet, Governance direkt in lokal trainierte Sprachmodelle einzubetten — nicht als externe Compliance-Schicht, sondern als Teil der Modell-Serving-Architektur selbst.", + "intro": "Village AI zeigt, was es bedeutet, Governance direkt in lokal trainierte Sprachmodelle einzubetten — nicht als externe Compliance-Schicht, sondern als Teil der Modell-Serving-Architektur selbst.", "architecture_title": "Zwei-Modell-Architektur", "arch_fast": "Schnelles Modell (3B Parameter): Routineanfragen mit Governance-Vorprüfung", "arch_deep": "Tiefes Modell (8B Parameter): Komplexes Reasoning mit vollständiger Governance-Pipeline", @@ -110,7 +110,7 @@ "strat_governance": "Governance by Design: Beschränkungen sind architektonisch, nicht nachträgliche Compliance", "strat_regulatory": "Regulatorische Positionierung: Strukturell stärker als nachträgliche Governance-Ansätze", "status": "Aktueller Status: Inferenz-Governance betriebsbereit. Training-Pipeline-Installation läuft. Erste Nicht-Claude-Deploymentfläche für Tractatus-Governance.", - "cta": "Mehr über Home AI erfahren →" + "cta": "Mehr über Village AI erfahren →" }, "taonga": { "heading": "Polyzentrische Governance für indigene Datensouveränität", diff --git a/public/locales/de/researcher.json b/public/locales/de/researcher.json index 0e01aaa7..1c65230f 100644 --- a/public/locales/de/researcher.json +++ b/public/locales/de/researcher.json @@ -12,7 +12,7 @@ "research_context": { "heading": "Forschungskontext & Umfang", "development_note": "Entwicklungskontext", - "development_text": "Tractatus wird seit April 2025 entwickelt und befindet sich nun im aktiven Produktionsbetrieb (11+ Monate). Was als Einzelprojekt-Demonstration begann, umfasst nun den Produktionseinsatz bei Village Home Trust und souveräne Sprachmodell-Governance durch Home AI. 
Beobachtungen stammen aus direktem Engagement mit Claude Code (Anthropic Claude-Modelle, Sonnet 4.5 bis Opus 4.6) über mehr als 1.000 Entwicklungssitzungen. Dies ist explorative Forschung, keine kontrollierte Studie.", + "development_text": "Tractatus wird seit April 2025 entwickelt und befindet sich nun im aktiven Produktionsbetrieb (11+ Monate). Was als Einzelprojekt-Demonstration begann, umfasst nun den Produktionseinsatz bei Village Home Trust und souveräne Sprachmodell-Governance durch Village AI. Beobachtungen stammen aus direktem Engagement mit Claude Code (Anthropic Claude-Modelle, Sonnet 4.5 bis Opus 4.6) über mehr als 1.000 Entwicklungssitzungen. Dies ist explorative Forschung, keine kontrollierte Studie.", "paragraph_1": "Die Anpassung fortschrittlicher KI an menschliche Werte ist eine der größten Herausforderungen, vor denen wir stehen. Da sich das Wachstum von Fähigkeiten unter dem Einfluss von Big Tech beschleunigt, stehen wir vor einem kategorischen Imperativ: Wir müssen die menschliche Kontrolle über Wertentscheidungen bewahren, oder wir riskieren, die Kontrolle vollständig abzugeben.", "paragraph_2": "Der Rahmen ist aus einer praktischen Notwendigkeit heraus entstanden. Während der Entwicklung beobachteten wir immer wieder, dass sich KI-Systeme über explizite Anweisungen hinwegsetzten, von festgelegten Wertvorgaben abwichen oder unter dem Druck des Kontextes stillschweigend die Qualität verschlechterten. Herkömmliche Governance-Ansätze (Grundsatzdokumente, ethische Richtlinien, Prompt-Engineering) erwiesen sich als unzureichend, um diese Fehler zu verhindern.", "paragraph_3": "Anstatt zu hoffen, dass sich KI-Systeme \"richtig verhalten\", schlägt der Tractatus strukturelle Beschränkungen vor, bei denen bestimmte Entscheidungsarten menschliches Urteilsvermögen erfordern. 
Diese architektonischen Grenzen können sich an individuelle, organisatorische und gesellschaftliche Normen anpassen - und schaffen so eine Grundlage für einen begrenzten KI-Betrieb, der mit dem Wachstum der Fähigkeiten sicherer skalieren kann.", @@ -132,7 +132,7 @@ "limitation_3_title": "3. Keine kontradiktorischen Tests", "limitation_3_desc": "Das Framework wurde weder einer Red-Team-Evaluierung noch einem Jailbreak-Test oder einer Bewertung durch einen Gegner unterzogen. Alle Beobachtungen stammen aus normalen Entwicklungsabläufen, nicht aus absichtlichen Umgehungsversuchen.", "limitation_4_title": "4. Spezifität der Plattform", - "limitation_4_desc": "Beobachtungen und Interventionen wurden mit Claude Code (Anthropic Claude, Sonnet 4.5 bis Opus 4.6) und Home AI (Llama 3.1/3.2 via QLoRA) validiert. Die Verallgemeinerbarkeit auf andere LLM-Systeme (Copilot, GPT-4, benutzerdefinierte Agenten) bleibt teilweise validiert.", + "limitation_4_desc": "Beobachtungen und Interventionen wurden mit Claude Code (Anthropic Claude, Sonnet 4.5 bis Opus 4.6) und Village AI (Llama 3.1/3.2 via QLoRA) validiert. Die Verallgemeinerbarkeit auf andere LLM-Systeme (Copilot, GPT-4, benutzerdefinierte Agenten) bleibt teilweise validiert.", "limitation_5_title": "5. Skalenunsicherheit", "limitation_5_desc": "Leistungsmerkmale im Unternehmensmaßstab (Tausende von gleichzeitigen Benutzern, Millionen von Governance-Ereignissen) völlig unbekannt. 
Die derzeitige Implementierung ist für den Einzelbenutzerkontext optimiert.", "future_research_title": "Künftiger Forschungsbedarf:", @@ -163,7 +163,7 @@ "validated_5_title": "✅ Multi-Deployment-Governance erfolgreich", "validated_5_item1": "Framework steuert agenticgovernance.digital (11+ Monate Dauerbetrieb)", "validated_5_item2": "Village Home Trust Produktionseinsatz: null Governance-Verletzungen", - "validated_5_item3": "Home AI souveräne Inferenz-Governance: betriebsbereit", + "validated_5_item3": "Village AI souveräne Inferenz-Governance: betriebsbereit", "validated_5_item4": "Kulturelle DNA-Regeln (inst_085-089) durch Pre-Commit-Hooks durchgesetzt (4+ Monate betriebsbereit)", "validated_5_item5": "Phase 5 Integration: 100% vollständig (alle 6 Dienste, 203/203 Tests bestanden)", "validated_5_item6": "Mehrsprachige Unterstützung: EN, DE, FR, Te Reo Maori", @@ -179,7 +179,7 @@ "not_validated_2_item2": "Unbekannt: Widerstandsfähigkeit gegen absichtliche Umgehungsversuche, Aufforderungen zum Gefängnisausbruch, gegnerische Tests", "not_validated_2_item3": "Forschungsbedarf: Red-Team-Evaluierung durch Sicherheitsforscher", "not_validated_3_title": "⚠️ Plattformübergreifende Konsistenz (Teilweise)", - "not_validated_3_item1": "Bestätigt: Claude Code (Anthropic Claude, Sonnet 4.5 bis Opus 4.6) und Home AI (Llama 3.1/3.2 via QLoRA)", + "not_validated_3_item1": "Bestätigt: Claude Code (Anthropic Claude, Sonnet 4.5 bis Opus 4.6) und Village AI (Llama 3.1/3.2 via QLoRA)", "not_validated_3_item2": "Unbekannt: Verallgemeinerbarkeit auf Copilot, GPT-4, AutoGPT, LangChain, CrewAI, offene Modelle", "not_validated_3_item3": "Forschungsbedarf: Plattformübergreifende Validierungsstudien", "not_validated_4_title": "❌ Architektur gleichzeitiger Sitzungen", @@ -316,8 +316,8 @@ "technique_2": "Representation Engineering (RepE): Lineare Sonden zur Identifizierung und Modifikation von Konzeptrepräsentationen", "technique_3": "FairSteer & DSO: Fairness-orientierte Steuerung durch 
verteilungsrobuste Optimierung",
    "technique_4": "Sparse Autoencoders: Mechanistische Interpretierbarkeit durch Zerlegung polysemantischer Neuronen",
-    "application_heading": "Anwendung auf Village Home AI",
-    "application_text": "Die Village Home AI Bereitstellung nutzt QLoRA-feinabgestimmte Llama 3.1/3.2-Modelle, bei denen Steering-Vektoren zur Inferenzzeit angewendet werden können. Dies schafft eine zweischichtige Governance-Architektur.",
+    "application_heading": "Anwendung auf Village AI",
+    "application_text": "Die Village AI Bereitstellung nutzt QLoRA-feinabgestimmte Llama 3.1/3.2-Modelle, bei denen Steering-Vektoren zur Inferenzzeit angewendet werden können. Dies schafft eine zweischichtige Governance-Architektur.",
    "read_link": "Arbeit lesen (HTML) →",
    "pdf_link": "PDF herunterladen"
},
@@ -335,16 +335,16 @@
    "read_link": "Entwurf lesen (HTML) →",
    "pdf_link": "PDF herunterladen"
},
-    "home_ai": {
-    "heading": "Home AI: Souveräne Governance-Forschungsplattform",
-    "intro": "Home AI stellt einen bedeutenden Forschungsmeilenstein dar: vollständige Tractatus-Governance eingebettet in eine lokal trainierte, souveräne Sprachmodell-Inferenz-Pipeline.",
+    "village_ai": {
+    "heading": "Village AI: Souveräne Governance-Forschungsplattform",
+    "intro": "Village AI stellt einen bedeutenden Forschungsmeilenstein dar: vollständige Tractatus-Governance eingebettet in eine lokal trainierte, souveräne Sprachmodell-Inferenz-Pipeline.",
    "architecture_heading": "Zwei-Modell-Architektur",
    "arch_1": "Schnelles Modell (Llama 3.2 3B): Antworten mit niedriger Latenz für Routineanfragen, mit Governance-Vorprüfung",
    "arch_2": "Tiefes Modell (Llama 3.1 8B): Komplexes Reasoning mit vollständiger Governance-Pipeline",
    "arch_3": "QLoRA-Feinabstimmung: Parametereffiziente Anpassung auf lokaler Hardware",
    "research_heading": "Forschungsbedeutung",
-    "research_text": "Home AI eröffnet die Forschungsfrage der Governance-innerhalb-der-Trainingsschleife für
gemeinschaftskontrollierte Modelle.", - "learn_more": "Mehr über Home AI erfahren →" + "research_text": "Village AI eröffnet die Forschungsfrage der Governance-innerhalb-der-Trainingsschleife für gemeinschaftskontrollierte Modelle.", + "learn_more": "Mehr über Village AI erfahren →" } }, "footer": { diff --git a/public/locales/de/home-ai.json b/public/locales/de/village-ai.json similarity index 87% rename from public/locales/de/home-ai.json rename to public/locales/de/village-ai.json index 2a86cf64..6b939418 100644 --- a/public/locales/de/home-ai.json +++ b/public/locales/de/village-ai.json @@ -1,13 +1,13 @@ { "breadcrumb": { "home": "Startseite", - "current": "Home AI" + "current": "Village AI" }, "hero": { "badge": "SOUVERÄNES, LOKAL TRAINIERTES SPRACHMODELL", - "title": "Home AI", + "title": "Village AI", "subtitle": "Ein Sprachmodell, bei dem die Gemeinschaft die Trainingsdaten, die Modellgewichte und die Steuerungsregeln kontrolliert. Nicht nur geregelte Inferenz — geregeltes Training.", - "status": "Status: Home AI arbeitet in der Produktion für Inferenz. Die souveräne Trainingspipeline ist entworfen und dokumentiert; die Hardware ist bestellt. Die Ausbildung hat noch nicht begonnen. Auf dieser Seite werden sowohl die derzeitigen Fähigkeiten als auch die geplante Architektur beschrieben." + "status": "Status: Village AI arbeitet in der Produktion für Inferenz. Die souveräne Trainingspipeline ist entworfen und dokumentiert; die Hardware ist bestellt. Die Ausbildung hat noch nicht begonnen. Auf dieser Seite werden sowohl die derzeitigen Fähigkeiten als auch die geplante Architektur beschrieben." }, "sll": { "heading": "Was ist eine SLL?", @@ -34,7 +34,7 @@ }, "two_model": { "heading": "Zwei-Modelle-Architektur", - "intro": "Home AI verwendet zwei Modelle unterschiedlicher Größe, die nach der Komplexität der Aufgabe geordnet sind. 
Dabei handelt es sich nicht um einen Ausweichmechanismus — jedes Modell ist für seine Aufgabe optimiert.", + "intro": "Village AI verwendet zwei Modelle unterschiedlicher Größe, die nach der Komplexität der Aufgabe geordnet sind. Dabei handelt es sich nicht um einen Ausweichmechanismus — jedes Modell ist für seine Aufgabe optimiert.", "fast_title": "3B Modell — Schneller Assistent", "fast_desc": "Bearbeitet Hilfeanfragen, Tooltips, Fehlererklärungen, kurze Zusammenfassungen und Übersetzungen. Angestrebte Antwortzeit: unter 5 Sekunden vollständig.", "fast_routing": "Routing-Auslöser: einfache Abfragen, bekannte FAQ-Muster, einstufige Aufgaben.", @@ -48,11 +48,11 @@ "intro": "Die Ausbildung ist nicht monolithisch. Es gibt drei Ebenen mit unterschiedlichen Aufgabenbereichen, die jeweils mit entsprechenden Governance-Einschränkungen verbunden sind.", "tier1_title": "Ebene 1: Plattform Basis", "tier1_badge": "Alle Gemeinden", - "tier1_desc": "Geschult in der Dokumentation der Plattform, der Philosophie, den Funktionsleitfäden und den FAQ-Inhalten. Vermittelt ein grundlegendes Verständnis dafür, wie Village funktioniert, was die Werte von Home AI sind und wie man Mitgliedern bei der Navigation auf der Plattform hilft.", + "tier1_desc": "Geschult in der Dokumentation der Plattform, der Philosophie, den Funktionsleitfäden und den FAQ-Inhalten. Vermittelt ein grundlegendes Verständnis dafür, wie Village funktioniert, was die Werte von Village AI sind und wie man Mitgliedern bei der Navigation auf der Plattform hilft.", "tier1_update": "Aktualisierungshäufigkeit: wöchentlich während der Betaphase, vierteljährlich bei der Generalversammlung. Trainingsmethode: QLoRA-Feinabstimmung.", "tier2_title": "Ebene 2: Mieteradapter", "tier2_badge": "Pro Gemeinde", - "tier2_desc": "Jede Community trainiert einen leichtgewichtigen LoRA-Adapter auf ihre eigenen Inhalte — Geschichten, Dokumente, Fotos und Ereignisse, deren Aufnahme die Mitglieder ausdrücklich zugestimmt haben. 
Dadurch kann Home AI Fragen wie \"Welche Geschichten hat Oma geteilt?\" beantworten, ohne auf die Daten einer anderen Community zuzugreifen.", + "tier2_desc": "Jede Community trainiert einen leichtgewichtigen LoRA-Adapter auf ihre eigenen Inhalte — Geschichten, Dokumente, Fotos und Ereignisse, deren Aufnahme die Mitglieder ausdrücklich zugestimmt haben. Dadurch kann Village AI Fragen wie \"Welche Geschichten hat Oma geteilt?\" beantworten, ohne auf die Daten einer anderen Community zuzugreifen.", "tier2_update": "Adapter sind klein (50–100MB). Die Zustimmung erfolgt pro Inhaltselement. Inhalte, die mit \"nur ich\" gekennzeichnet sind, werden unabhängig von der Zustimmung nie einbezogen. Die Schulung verwendet DPO (Direct Preference Optimization) für den Werteabgleich.", "tier3_title": "Stufe 3: Individuell (Zukunft)", "tier3_badge": "Pro Mitglied", @@ -61,7 +61,7 @@ }, "governance_training": { "heading": "Governance während der Ausbildung", - "intro1": "Dies ist der zentrale Beitrag der Forschung. Die meisten KI-Governance-Frameworks arbeiten zum Zeitpunkt der Inferenz — und filtern oder beschränken die Antworten, nachdem das Modell bereits trainiert wurde. Home AI bettet Governance in die Trainingsschleife ein.", + "intro1": "Dies ist der zentrale Beitrag der Forschung. Die meisten KI-Governance-Frameworks arbeiten zum Zeitpunkt der Inferenz — und filtern oder beschränken die Antworten, nachdem das Modell bereits trainiert wurde. Village AI bettet Governance in die Trainingsschleife ein.", "intro2": "Dies folgt dem Grundsatz Nicht-Trennung von Christopher Alexander: Governance wird in die Trainingsarchitektur eingewoben und nicht nachträglich angewendet. Der BoundaryEnforcer validiert jeden Trainingsstapel vor dem Forward Pass. 
Enthält ein Stapel mandantenübergreifende Daten, Daten ohne Zustimmung oder als privat gekennzeichnete Inhalte, wird der Stapel abgelehnt und der Trainingsschritt nicht fortgesetzt.", "code_comment1": "# Governance innerhalb der Trainingsschleife (Not-Separateness)", "code_line1": "for batch in training_data:", @@ -81,7 +81,7 @@ }, "dual_layer": { "heading": "Zweischichtige Tractatus-Architektur", - "intro": "Home AI wird von Tractatus auf zwei verschiedenen Schichten gleichzeitig gesteuert. Dies ist die architektonische Einsicht, die den SLL-Ansatz sowohl von ungeregelten Modellen als auch von aufgeschraubten Sicherheitsfiltern unterscheidet.", + "intro": "Village AI wird von Tractatus auf zwei verschiedenen Schichten gleichzeitig gesteuert. Dies ist die architektonische Einsicht, die den SLL-Ansatz sowohl von ungeregelten Modellen als auch von aufgeschraubten Sicherheitsfiltern unterscheidet.", "layer_a_badge": "EBENE A: INHÄRENT", "layer_a_title": "Tractatus Im Inneren des Modells", "layer_a_desc": "Während des Trainings validiert das BoundaryEnforcer jedes Los. Die DPO-Anpassung formt die Präferenzen für ein geregeltes Verhalten. Das Modell lernt, Grenzen zu respektieren, transparente Antworten zu bevorzugen und Wertentscheidungen dem Menschen zu überlassen.", @@ -104,12 +104,12 @@ }, "philosophy": { "heading": "Philosophische Grundlagen", - "intro": "Die Führung von Home AI geht auf vier philosophische Traditionen zurück, die jeweils einen spezifischen architektonischen Grundsatz beisteuern. Es handelt sich dabei nicht um dekorative Referenzen —, sondern um konkrete Gestaltungsentscheidungen.", + "intro": "Die Führung von Village AI geht auf vier philosophische Traditionen zurück, die jeweils einen spezifischen architektonischen Grundsatz beisteuern. 
Es handelt sich dabei nicht um dekorative Referenzen, sondern um konkrete Gestaltungsentscheidungen.", "berlin_title": "Isaiah Berlin — Wertepluralismus", - "berlin_desc": "Die Werte sind in der Tat vielfältig und manchmal unvereinbar. Wenn Freiheit und Gleichheit miteinander in Konflikt geraten, kann es keine einzig richtige Lösung geben. Home AI präsentiert Optionen ohne Hierarchie und dokumentiert, was jede Wahl opfert.", + "berlin_desc": "Die Werte sind in der Tat vielfältig und manchmal unvereinbar. Wenn Freiheit und Gleichheit miteinander in Konflikt geraten, kann es keine einzig richtige Lösung geben. Village AI präsentiert Optionen ohne Hierarchie und dokumentiert, was jede Wahl opfert.", "berlin_arch": "Architektonischer Ausdruck: PluralisticDeliberationOrchestrator stellt Kompromisse vor, löst sie aber nicht auf.", "wittgenstein_title": "Ludwig Wittgenstein — Sprachgrenzen", - "wittgenstein_desc": "Die Sprache formt, was gedacht und ausgedrückt werden kann. Manche Dinge, die am wichtigsten sind, widersetzen sich einem systematischen Ausdruck. Home AI erkennt die Grenzen dessen an, was Sprachmodelle erfassen können — insbesondere im Hinblick auf Trauer, kulturelle Bedeutung und gelebte Erfahrung.", + "wittgenstein_desc": "Die Sprache formt, was gedacht und ausgedrückt werden kann. Manche Dinge, die am wichtigsten sind, widersetzen sich einem systematischen Ausdruck. Village AI erkennt die Grenzen dessen an, was Sprachmodelle erfassen können — insbesondere im Hinblick auf Trauer, kulturelle Bedeutung und gelebte Erfahrung.", "wittgenstein_arch": "Architektonischer Ausdruck: BoundaryEnforcer überlässt die Entscheidung über Werte dem Menschen und erkennt die Grenzen der Berechnung an.", "indigenous_title": "Indigene Souveränität — Daten als Beziehung", "indigenous_desc": "Te Mana Raraunga (Māori Data Sovereignty), CARE Principles und OCAP (First Nations Canada) bieten einen Rahmen, in dem Daten nicht Eigentum, sondern Beziehung sind. 
Whakapapa (Genealogie) gehört dem Kollektiv, nicht dem Einzelnen. Die Zustimmung ist ein gemeinschaftlicher Prozess, kein individuelles Kästchen.", @@ -128,12 +128,12 @@ "layer2_desc": "Von Community-Administratoren festgelegte Regeln. Richtlinien für den Umgang mit Inhalten (z. B. \"Verstorbene Mitglieder müssen von einem Moderator überprüft werden\"), kulturelle Protokolle (z. B. Māori tangi Bräuche), Sichtbarkeitsvorgaben und KI-Trainingszustimmungsmodelle. Jede Gemeinschaft konfiguriert ihre eigene Verfassung innerhalb der Beschränkungen der Schicht 1.", "layer2_enforcement": "Durchsetzung: von CrossReferenceValidator pro Mieter validierte verfassungsrechtliche Vorschriften.", "layer3_title": "Ebene 3: Übernommene Weisheitstraditionen", - "layer3_desc": "Einzelne Mitglieder und Gemeinschaften können Prinzipien aus Weisheitstraditionen übernehmen, um die Art und Weise zu beeinflussen, wie Home AI Antworten formuliert. Diese sind freiwillig, umkehrbar und transparent. Sie beeinflussen die Präsentation, nicht den Zugang zum Inhalt. Mehrere Traditionen können gleichzeitig übernommen werden; Konflikte werden von den Mitgliedern gelöst, nicht von der KI.", + "layer3_desc": "Einzelne Mitglieder und Gemeinschaften können Prinzipien aus Weisheitstraditionen übernehmen, um die Art und Weise zu beeinflussen, wie Village AI Antworten formuliert. Diese sind freiwillig, umkehrbar und transparent. Sie beeinflussen die Präsentation, nicht den Zugang zum Inhalt. Mehrere Traditionen können gleichzeitig übernommen werden; Konflikte werden von den Mitgliedern gelöst, nicht von der KI.", "layer3_enforcement": "Durchsetzung: Framing-Hinweise bei der Antwortgenerierung. Override immer verfügbar." }, "wisdom": { "heading": "Weisheitstraditionen", - "intro": "Home AI bietet dreizehn Weisheitstraditionen, die die Mitglieder übernehmen können, um das Verhalten der KI zu steuern. 
Jede Tradition wurde anhand der Stanford Encyclopedia of Philosophy als wichtigster wissenschaftlicher Referenz validiert. Die Annahme ist freiwillig, transparent und umkehrbar.", + "intro": "Village AI bietet dreizehn Weisheitstraditionen, die die Mitglieder übernehmen können, um das Verhalten der KI zu steuern. Jede Tradition wurde anhand der Stanford Encyclopedia of Philosophy als wichtigster wissenschaftlicher Referenz validiert. Die Annahme ist freiwillig, transparent und umkehrbar.", "berlin_title": "Berlin: Wertepluralismus", "berlin_desc": "Stellen Sie die Optionen vor, ohne sie in eine Rangfolge zu bringen; erkennen Sie an, was jede Wahl opfert.", "stoic_title": "Stoisch: Gleichmut und Tugend", @@ -176,7 +176,7 @@ }, "infrastructure": { "heading": "Ausbildungsinfrastruktur", - "intro": "Home AI folgt einem \"train local, deploy remote\"-Modell. Die Trainingshardware befindet sich im Haus des Entwicklers. Die trainierten Modellgewichte werden für die Inferenz auf die Produktionsserver übertragen. Dies hält die Trainingskosten niedrig und die Trainingsdaten unter physischer Kontrolle.", + "intro": "Village AI folgt einem \"train local, deploy remote\"-Modell. Die Trainingshardware befindet sich im Haus des Entwicklers. Die trainierten Modellgewichte werden für die Inferenz auf die Produktionsserver übertragen. Dies hält die Trainingskosten niedrig und die Trainingsdaten unter physischer Kontrolle.", "local_title": "Lokale Ausbildung", "local_item1": "Consumer-GPU mit 24GB VRAM über externes Gehäuse", "local_item2": "QLoRA-Feinabstimmung (4-Bit-Quantisierung passt in VRAM-Budget)", @@ -193,7 +193,7 @@ }, "bias": { "heading": "Bias-Dokumentation und -Überprüfung", - "intro": "Home AI ist im Bereich des familiären Geschichtenerzählens tätig, das spezifische Verzerrungsrisiken birgt. 
Es wurden sechs Verzerrungskategorien mit Aufdeckungshinweisen, entschärfenden Beispielen und Bewertungskriterien dokumentiert.", + "intro": "Village AI ist im Bereich des familiären Geschichtenerzählens tätig, das spezifische Verzerrungsrisiken birgt. Es wurden sechs Verzerrungskategorien mit Aufdeckungshinweisen, entschärfenden Beispielen und Bewertungskriterien dokumentiert.", "family_title": "Familienstruktur", "family_desc": "Kernfamilie als Standard; gleichgeschlechtliche Eltern, gemischte Familien, Alleinerziehende werden als normativ behandelt.", "elder_title": "Vertretung der Älteren", @@ -220,7 +220,7 @@ }, "live_today": { "heading": "Was heute live ist", - "intro": "Home AI wird derzeit in der Produktion mit den folgenden verwalteten Funktionen betrieben. Diese werden im Rahmen des vollständigen Governance-Stacks mit sechs Diensten ausgeführt.", + "intro": "Village AI wird derzeit in der Produktion mit den folgenden verwalteten Funktionen betrieben. Diese werden im Rahmen des vollständigen Governance-Stacks mit sechs Diensten ausgeführt.", "rag_title": "RAG-basierte Hilfe", "rag_desc": "Die Vektorsuche ruft relevante Dokumentation ab, gefiltert nach den Berechtigungen der Mitglieder. Die Antworten basieren auf den abgerufenen Dokumenten, nicht nur auf den Trainingsdaten.", "ocr_title": "Dokument OCR", @@ -233,7 +233,7 @@ "limitations": { "heading": "Beschränkungen und offene Fragen", "item1": "Ausbildung noch nicht begonnen: Die SLL-Architektur ist entworfen und dokumentiert. Die Hardware ist bestellt. Aber es wurde noch kein Modell trainiert. Behauptungen über die Steuerung der Trainingszeit sind architektonisches Design, keine empirischen Ergebnisse.", - "item2": "Beschränkter Einsatz: Home AI arbeitet mit vier föderierten Mandanten innerhalb einer Plattform, die vom Entwickler des Frameworks gebaut wurde. 
Die Wirksamkeit der Governance kann ohne unabhängige Einsätze nicht verallgemeinert werden.", + "item2": "Beschränkter Einsatz: Village AI arbeitet mit vier föderierten Mandanten innerhalb einer Plattform, die vom Entwickler des Frameworks gebaut wurde. Die Wirksamkeit der Governance kann ohne unabhängige Einsätze nicht verallgemeinert werden.", "item3": "Selbstberichtete Metriken: Leistungs- und Sicherheitszahlen werden von demselben Team gemeldet, das das System gebaut hat. Ein unabhängiges Audit ist geplant, wurde aber noch nicht durchgeführt.", "item4": "Operationalisierung von Traditionen: Lassen sich reichhaltige philosophische Traditionen authentisch auf Hinweise zur Rahmung reduzieren? Wenn ein Mitglied \"Buddhist\" auswählt, bedeutet das nicht, dass es den Buddhismus versteht oder praktiziert. Dies birgt die Gefahr der Oberflächlichkeit.", "item5": "Ausdauer des Trainings unbekannt: Ob die Governance-Einschränkungen Hunderte von Trainingsrunden ohne Beeinträchtigung überstehen, ist eine offene Forschungsfrage. Die Drift-Erkennung ist konzipiert, aber nicht getestet.", diff --git a/public/locales/de/village-case-study.json b/public/locales/de/village-case-study.json index f5b8c768..31de4081 100644 --- a/public/locales/de/village-case-study.json +++ b/public/locales/de/village-case-study.json @@ -36,7 +36,7 @@ "infra_desc": "Produktionsserver in Neuseeland und in der EU. Keine Daten durchqueren die US-Gerichtsbarkeit. Die Daten der Gemeinschaft verlassen nie die Einrichtung, zu der sie gehören.", "training_title": "Von der Gemeinschaft kontrollierte Ausbildung", "training_desc": "QLoRA-Feinabstimmung auf bereichsspezifischen Daten mit Zustimmungsverfolgung und Herkunftsnachweis. 
Gemeinschaften können Trainingsdaten zurückziehen und eine Modellumschulung auslösen.", - "link_note": "Eine ausführliche Beschreibung der Modellarchitektur, des Trainingsansatzes und der Governance-Integration finden Sie unter Home AI / SLL: Sovereign Locally-Trained Language Model." + "link_note": "Eine ausführliche Beschreibung der Modellarchitektur, des Trainingsansatzes und der Governance-Integration finden Sie unter Village AI / SLL: Sovereign Locally-Trained Language Model." }, "polycentric": { "heading": "Polyzentrische Governance", @@ -160,7 +160,7 @@ "heading": "Weiter erforschen", "description": "Erfahren Sie mehr über die technische Architektur, lesen Sie die Forschungsergebnisse oder sehen Sie sich die Village-Plattform in Aktion an.", "visit_village": "Besuchen Sie das Dorf →", - "home_ai": "Souveränes Sprachmodell →", + "village_ai": "Souveränes Sprachmodell →", "research_paper": "Forschungspapier →", "research_details": "Forschung → Details" } diff --git a/public/locales/en/common.json b/public/locales/en/common.json index 33d6c4d2..6ada5a8f 100644 --- a/public/locales/en/common.json +++ b/public/locales/en/common.json @@ -94,8 +94,8 @@ "for_implementers_desc": "Integration guide and code examples", "village_case_study": "Village Case Study", "village_case_study_desc": "Production deployment evidence", - "home_ai": "Home AI", - "home_ai_desc": "Sovereign locally-trained language model", + "village_ai": "Village AI", + "village_ai_desc": "Sovereign locally-trained language model", "agent_lightning": "Agent Lightning", "agent_lightning_desc": "Performance optimisation integration", "for_leaders": "For Leaders", diff --git a/public/locales/en/homepage.json b/public/locales/en/homepage.json index ebb59ffc..ae41d475 100644 --- a/public/locales/en/homepage.json +++ b/public/locales/en/homepage.json @@ -44,12 +44,12 @@ "evidence": { "badge": "Production Evidence", "heading": "Tractatus in Production: The Village Platform", - "subtitle": "Home AI applies 
all six governance services to every user interaction in a live community platform.", + "subtitle": "Village AI applies all six governance services to every user interaction in a live community platform.", "stat_services": "Governance services per response", "stat_months": "Months in production", "stat_overhead": "Governance overhead per interaction", "cta_case_study": "Technical Case Study →", - "cta_home_ai": "About Home AI →", + "cta_village_ai": "About Village AI →", "limitations_label": "Limitations:", "limitations_text": "Early-stage deployment across four federated tenants, self-reported metrics, operator-developer overlap. Independent audit and broader validation scheduled for 2026." }, @@ -98,7 +98,7 @@ "subtitle": "From a port number incident to a production governance architecture, across 800 commits and one year of research.", "oct_2025": "Framework inception & 6 governance services", "oct_nov_2025": "Alexander principles, Agent Lightning, i18n", - "dec_2025": "Village case study & Home AI deployment", + "dec_2025": "Village case study & Village AI deployment", "jan_2026": "Research papers (3 editions) published", "cta": "View the full research timeline →", "date_oct_2025": "Oct 2025", diff --git a/public/locales/en/implementer.json b/public/locales/en/implementer.json index d2fae46b..ff5e5e73 100644 --- a/public/locales/en/implementer.json +++ b/public/locales/en/implementer.json @@ -30,7 +30,7 @@ "services": "Services", "api": "API Reference", "patterns": "Integration Patterns", - "home_ai_arch": "Home AI", + "village_ai_arch": "Village AI", "steering_vectors_impl": "Steering Vectors", "taonga_registry": "Taonga Registry", "roadmap": "Roadmap" @@ -242,11 +242,11 @@ "sidecar_usecase": "Use Case:", "sidecar_usecase_value": "Kubernetes, containerized deployments" }, - "home_ai_arch": { - "heading": "Home AI: Two-Model Sovereign Architecture", + "village_ai_arch": { + "heading": "Village AI: Two-Model Sovereign Architecture", "intro": "Production 
deployment of Tractatus governance on locally-trained open-source models, demonstrating framework portability beyond Claude.", "arch_title": "Two-Model Routing Architecture", - "arch_intro": "Home AI uses a dual-model design where queries are routed based on complexity and governance requirements. Both models run locally with full Tractatus governance in the inference pipeline.", + "arch_intro": "Village AI uses a dual-model design where queries are routed based on complexity and governance requirements. Both models run locally with full Tractatus governance in the inference pipeline.", "fast_title": "Fast Model: Llama 3.2 3B", "fast_1": "Purpose: Common queries with pre-filtered governance", "fast_2": "Fine-tuning: QLoRA on domain-specific data", @@ -264,11 +264,11 @@ "stat_first": "First Non-Claude", "stat_first_desc": "Validates Tractatus portability beyond Anthropic", "status_note": "Status: Inference governance operational. Sovereign training pipeline installation in progress. Production deployment at Village Home Trust validates governance portability across model architectures.", - "cta": "Home AI Architecture Details →" + "cta": "Village AI Architecture Details →" }, "steering_impl": { "heading": "Steering Vectors: Inference-Time Bias Correction", - "intro": "Techniques for correcting model behaviour at inference time without retraining, applicable to QLoRA-fine-tuned models like those in Home AI.", + "intro": "Techniques for correcting model behaviour at inference time without retraining, applicable to QLoRA-fine-tuned models like those in Village AI.", "paper_ref": "Reference:", "paper_title": "Steering Vectors and Mechanical Bias in Sovereign AI Systems (STO-RES-0009 v1.1, February 2026)", "techniques_title": "Key Techniques for Implementers", @@ -310,7 +310,7 @@ "multi_llm_title": "Multi-LLM Support", "multi_llm_badge": "First Deployment Live", "multi_llm_status": "Status: First Non-Claude Deployment Operational", - "multi_llm_desc": "Home AI deploys 
Tractatus governance on Llama 3.1 8B and Llama 3.2 3B via QLoRA fine-tuning — the first validated non-Claude deployment. Extends governance portability to open-source models with full 6-service pipeline.", + "multi_llm_desc": "Village AI deploys Tractatus governance on Llama 3.1 8B and Llama 3.2 3B via QLoRA fine-tuning — the first validated non-Claude deployment. Extends governance portability to open-source models with full 6-service pipeline.", "multi_llm_challenges": "Next Steps:", "multi_llm_challenges_desc": "GPT-4 and Gemini adapters, provider-specific tool/function calling, sovereign training pipeline completion", "bindings_icon": "📚", diff --git a/public/locales/en/leader.json b/public/locales/en/leader.json index 09d8fd65..96936130 100644 --- a/public/locales/en/leader.json +++ b/public/locales/en/leader.json @@ -89,7 +89,7 @@ "development_status": { "heading": "Development Status", "warning_title": "Production-Validated Research Framework", - "warning_text": "Tractatus has been in active development for 11+ months (April 2025 to present) with production deployment at Village Home Trust, sovereign language model governance through Home AI, and over 171,800 audit decisions recorded. Independent validation and red-team testing remain outstanding research needs.", + "warning_text": "Tractatus has been in active development for 11+ months (April 2025 to present) with production deployment at Village Home Trust, sovereign language model governance through Village AI, and over 171,800 audit decisions recorded. Independent validation and red-team testing remain outstanding research needs.", "validation_title": "Validated vs. Not Validated", "validated_label": "Validated:", "validated_text": "Framework successfully governs Claude Code in development workflows. 
User reports order-of-magnitude improvement in productivity for non-technical operators building production systems.", @@ -98,9 +98,9 @@ "limitation_label": "Known Limitation:", "limitation_text": "Framework can be bypassed if AI simply chooses not to use governance tools. Voluntary invocation remains a structural weakness requiring external enforcement mechanisms." }, - "home_ai": { + "village_ai": { "heading": "Sovereign AI: Governance Embedded in Locally-Trained Models", - "intro": "Home AI demonstrates what it means to have governance embedded directly in locally-trained language models — not as an external compliance layer, but as part of the model serving architecture itself.", + "intro": "Village AI demonstrates what it means to have governance embedded directly in locally-trained language models — not as an external compliance layer, but as part of the model serving architecture itself.", "architecture_title": "Two-Model Architecture", "arch_fast": "Fast model (3B parameters): Routine queries with governance pre-screening", "arch_deep": "Deep model (8B parameters): Complex reasoning with full governance pipeline", @@ -110,7 +110,7 @@ "strat_governance": "Governance by design: Constraints are architectural, not retroactive compliance", "strat_regulatory": "Regulatory positioning: Structurally stronger than bolt-on governance approaches", "status": "Current status: Inference governance operational. Training pipeline installation in progress. 
First non-Claude deployment surface for Tractatus governance.", - "cta": "Learn about Home AI →" + "cta": "Learn about Village AI →" }, "taonga": { "heading": "Polycentric Governance for Indigenous Data Sovereignty", diff --git a/public/locales/en/researcher.json b/public/locales/en/researcher.json index bd6c5931..8046abeb 100644 --- a/public/locales/en/researcher.json +++ b/public/locales/en/researcher.json @@ -25,7 +25,7 @@ "research_context": { "heading": "Research Context & Scope", "development_note": "Development Context", - "development_text": "Tractatus has been developed from April 2025 and is now in active production (11+ months). What began as a single-project demonstration has expanded to include production deployment at Village Home Trust and sovereign language model governance through Home AI. Observations derive from direct engagement with Claude Code (Anthropic Claude models, Sonnet 4.5 through Opus 4.6) across over 1,000 development sessions. This is exploratory research, not controlled study.", + "development_text": "Tractatus has been developed from April 2025 and is now in active production (11+ months). What began as a single-project demonstration has expanded to include production deployment at Village Home Trust and sovereign language model governance through Village AI. Observations derive from direct engagement with Claude Code (Anthropic Claude models, Sonnet 4.5 through Opus 4.6) across over 1,000 development sessions. This is exploratory research, not controlled study.", "paragraph_1": "Aligning advanced AI with human values is among the most consequential challenges we face. As capability growth accelerates under big tech momentum, we confront a categorical imperative: preserve human agency over values decisions, or risk ceding control entirely.", "paragraph_2": "The framework emerged from practical necessity. 
During development, we observed recurring patterns where AI systems would override explicit instructions, drift from established values constraints, or silently degrade quality under context pressure. Traditional governance approaches (policy documents, ethical guidelines, prompt engineering) proved insufficient to prevent these failures.", "paragraph_3": "Instead of hoping AI systems \"behave correctly,\" Tractatus proposes structural constraints where certain decision types require human judgment. These architectural boundaries can adapt to individual, organizational, and societal norms—creating a foundation for bounded AI operation that may scale more safely with capability growth.", @@ -171,7 +171,7 @@ "validated_5_title": "✅ Multi-Deployment Governance Successful", "validated_5_item1": "Framework governs agenticgovernance.digital (11+ months continuous operation)", "validated_5_item2": "Village Home Trust production deployment: zero governance violations", - "validated_5_item3": "Home AI sovereign inference governance: operational", + "validated_5_item3": "Village AI sovereign inference governance: operational", "validated_5_item4": "Cultural DNA rules (inst_085-089) enforced through pre-commit hooks (4+ months operational)", "validated_5_item5": "Phase 5 integration: 100% complete (all 6 services, 203/203 tests passing)", "validated_5_item6": "Multilingual support: EN, DE, FR, Te Reo Maori", @@ -187,7 +187,7 @@ "not_validated_2_item2": "Unknown: Resistance to deliberate bypass attempts, jailbreak prompts, adversarial testing", "not_validated_2_item3": "Research need: Red-team evaluation by security researchers", "not_validated_3_title": "⚠️ Cross-Platform Consistency (Partial)", - "not_validated_3_item1": "Validated: Claude Code (Anthropic Claude, Sonnet 4.5 through Opus 4.6) and Home AI (Llama 3.1/3.2 via QLoRA)", + "not_validated_3_item1": "Validated: Claude Code (Anthropic Claude, Sonnet 4.5 through Opus 4.6) and Village AI (Llama 3.1/3.2 via QLoRA)", 
"not_validated_3_item2": "Unknown: Generalizability to Copilot, GPT-4, AutoGPT, LangChain, CrewAI, other open models", "not_validated_3_item3": "Research need: Broader cross-platform validation studies beyond Claude and Llama families", "not_validated_4_title": "❌ Concurrent Session Architecture", @@ -312,8 +312,8 @@ "technique_2": "Representation Engineering (RepE): Linear probes to identify and modify concept representations within model layers", "technique_3": "FairSteer & DSO: Fairness-oriented steering through distributionally-robust optimization", "technique_4": "Sparse Autoencoders: Mechanistic interpretability through decomposition of polysemantic neurons into monosemantic features", - "application_heading": "Application to Village Home AI", - "application_text": "The Village Home AI deployment uses QLoRA-fine-tuned Llama 3.1/3.2 models where steering vectors can be applied at inference time. This creates a two-layer governance architecture: Tractatus provides structural constraints on decision boundaries, while steering vectors address pre-reasoning mechanical biases within the model itself. Together, they represent governance that operates both outside and inside the model.", + "application_heading": "Application to Village AI", + "application_text": "The Village AI deployment uses QLoRA-fine-tuned Llama 3.1/3.2 models where steering vectors can be applied at inference time. This creates a two-layer governance architecture: Tractatus provides structural constraints on decision boundaries, while steering vectors address pre-reasoning mechanical biases within the model itself. 
Together, they represent governance that operates both outside and inside the model.", "read_link": "Read Paper (HTML) →", "pdf_link": "Download PDF" }, @@ -331,16 +331,16 @@ "read_link": "Read Draft (HTML) →", "pdf_link": "Download PDF" }, - "home_ai": { - "heading": "Home AI: Sovereign Governance Research Platform", - "intro": "Home AI represents a significant research milestone: full Tractatus governance embedded in a locally-trained, sovereign language model inference pipeline. This is the first deployment where governance operates inside the model serving layer rather than alongside an external API.", + "village_ai": { + "heading": "Village AI: Sovereign Governance Research Platform", + "intro": "Village AI represents a significant research milestone: full Tractatus governance embedded in a locally-trained, sovereign language model inference pipeline. This is the first deployment where governance operates inside the model serving layer rather than alongside an external API.", "architecture_heading": "Two-Model Architecture", "arch_1": "Fast model (Llama 3.2 3B): Low-latency responses for routine queries, with governance pre-screening", "arch_2": "Deep model (Llama 3.1 8B): Complex reasoning with full governance pipeline, including BoundaryEnforcer and PluralisticDeliberationOrchestrator", "arch_3": "QLoRA fine-tuning: Parameter-efficient adaptation on local hardware, enabling community-specific model customisation without cloud dependency", "research_heading": "Research Significance", - "research_text": "Home AI opens the research question of governance-inside-the-training-loop for community-controlled models. Training data never leaves the local infrastructure; governance rules shape model behaviour through both fine-tuning data curation and inference-time constraints. 
This creates a fundamentally different governance surface than API-mediated approaches.", - "learn_more": "Learn more about Home AI →" + "research_text": "Village AI opens the research question of governance-inside-the-training-loop for community-controlled models. Training data never leaves the local infrastructure; governance rules shape model behaviour through both fine-tuning data curation and inference-time constraints. This creates a fundamentally different governance surface than API-mediated approaches.", + "learn_more": "Learn more about Village AI →" } }, "modal": { diff --git a/public/locales/en/home-ai.json b/public/locales/en/village-ai.json similarity index 87% rename from public/locales/en/home-ai.json rename to public/locales/en/village-ai.json index d099667d..8c5e94e5 100644 --- a/public/locales/en/home-ai.json +++ b/public/locales/en/village-ai.json @@ -1,13 +1,13 @@ { "breadcrumb": { "home": "Home", - "current": "Home AI" + "current": "Village AI" }, "hero": { "badge": "SOVEREIGN LOCALLY-TRAINED LANGUAGE MODEL", - "title": "Home AI", + "title": "Village AI", "subtitle": "A language model where the community controls the training data, the model weights, and the governance rules. Not just governed inference — governed training.", - "status": "Status: Home AI operates in production for inference. The sovereign training pipeline is designed and documented; hardware is being installed. Training has not yet begun. This page describes both current capability and intended architecture." + "status": "Status: Village AI operates in production for inference. The sovereign training pipeline is designed and documented; hardware is being installed. Training has not yet begun. This page describes both current capability and intended architecture." }, "sll": { "heading": "What is an SLL?", @@ -34,7 +34,7 @@ }, "two_model": { "heading": "Two-Model Architecture", - "intro": "Home AI uses two models of different sizes, routed by task complexity. 
This is not a fallback mechanism — each model is optimised for its role.", + "intro": "Village AI uses two models of different sizes, routed by task complexity. This is not a fallback mechanism — each model is optimised for its role.", "fast_title": "3B Model — Fast Assistant", "fast_desc": "Handles help queries, tooltips, error explanations, short summaries, and translation. Target response time: under 5 seconds complete.", "fast_routing": "Routing triggers: simple queries, known FAQ patterns, single-step tasks.", @@ -48,11 +48,11 @@ "intro": "Training is not monolithic. Three tiers serve different scopes, each with appropriate governance constraints.", "tier1_title": "Tier 1: Platform Base", "tier1_badge": "All communities", - "tier1_desc": "Trained on platform documentation, philosophy, feature guides, and FAQ content. Provides the foundational understanding of how Village works, what Home AI's values are, and how to help members navigate the platform.", + "tier1_desc": "Trained on platform documentation, philosophy, feature guides, and FAQ content. Provides the foundational understanding of how Village works, what Village AI's values are, and how to help members navigate the platform.", "tier1_update": "Update frequency: weekly during beta, quarterly at GA. Training method: QLoRA fine-tuning.", "tier2_title": "Tier 2: Tenant Adapters", "tier2_badge": "Per community", - "tier2_desc": "Each community trains a lightweight LoRA adapter on its own content — stories, documents, photos, and events that members have explicitly consented to include. This allows Home AI to answer questions like \"What stories has Grandma shared?\" without accessing any other community's data.", + "tier2_desc": "Each community trains a lightweight LoRA adapter on its own content — stories, documents, photos, and events that members have explicitly consented to include. 
This allows Village AI to answer questions like \"What stories has Grandma shared?\" without accessing any other community's data.", "tier2_update": "Adapters are small (50–100MB). Consent is per-content-item. Content marked \"only me\" is never included regardless of consent. Training uses DPO (Direct Preference Optimization) for value alignment.", "tier3_title": "Tier 3: Individual (Future)", "tier3_badge": "Per member", @@ -61,7 +61,7 @@ }, "governance_training": { "heading": "Governance During Training", - "intro1": "This is the central research contribution. Most AI governance frameworks operate at inference time — they filter or constrain responses after the model has already been trained. Home AI embeds governance inside the training loop.", + "intro1": "This is the central research contribution. Most AI governance frameworks operate at inference time — they filter or constrain responses after the model has already been trained. Village AI embeds governance inside the training loop.", "intro2": "This follows Christopher Alexander's principle of Not-Separateness: governance is woven into the training architecture, not applied afterward. The BoundaryEnforcer validates every training batch before the forward pass. If a batch contains cross-tenant data, data without consent, or content marked as private, the batch is rejected and the training step does not proceed.", "code_comment1": "# Governance inside the training loop (Not-Separateness)", "code_line1": "for batch in training_data:", @@ -81,7 +81,7 @@ }, "dual_layer": { "heading": "Dual-Layer Tractatus Architecture", - "intro": "Home AI is governed by Tractatus at two distinct layers simultaneously. This is the architectural insight that distinguishes the SLL approach from both ungoverned models and bolt-on safety filters.", + "intro": "Village AI is governed by Tractatus at two distinct layers simultaneously. 
This is the architectural insight that distinguishes the SLL approach from both ungoverned models and bolt-on safety filters.", "layer_a_badge": "LAYER A: INHERENT", "layer_a_title": "Tractatus Inside the Model", "layer_a_desc": "During training, the BoundaryEnforcer validates every batch. DPO alignment shapes preferences toward governed behaviour. The model learns to respect boundaries, prefer transparent responses, and defer values decisions to humans.", @@ -104,12 +104,12 @@ }, "philosophy": { "heading": "Philosophical Foundations", - "intro": "Home AI's governance draws from four philosophical traditions, each contributing a specific architectural principle. These are not decorative references — they translate into concrete design decisions.", + "intro": "Village AI's governance draws from four philosophical traditions, each contributing a specific architectural principle. These are not decorative references — they translate into concrete design decisions.", "berlin_title": "Isaiah Berlin — Value Pluralism", - "berlin_desc": "Values are genuinely plural and sometimes incompatible. When freedom conflicts with equality, there may be no single correct resolution. Home AI presents options without hierarchy and documents what each choice sacrifices.", + "berlin_desc": "Values are genuinely plural and sometimes incompatible. When freedom conflicts with equality, there may be no single correct resolution. Village AI presents options without hierarchy and documents what each choice sacrifices.", "berlin_arch": "Architectural expression: PluralisticDeliberationOrchestrator presents trade-offs; it does not resolve them.", "wittgenstein_title": "Ludwig Wittgenstein — Language Boundaries", - "wittgenstein_desc": "Language shapes what can be thought and expressed. Some things that matter most resist systematic expression. 
Home AI acknowledges the limits of what language models can capture — particularly around grief, cultural meaning, and lived experience.", + "wittgenstein_desc": "Language shapes what can be thought and expressed. Some things that matter most resist systematic expression. Village AI acknowledges the limits of what language models can capture — particularly around grief, cultural meaning, and lived experience.", "wittgenstein_arch": "Architectural expression: BoundaryEnforcer defers values decisions to humans, acknowledging limits of computation.", "indigenous_title": "Indigenous Sovereignty — Data as Relationship", "indigenous_desc": "Te Mana Raraunga (Māori Data Sovereignty), CARE Principles, and OCAP (First Nations Canada) provide frameworks where data is not property but relationship. Whakapapa (genealogy) belongs to the collective, not individuals. Consent is a community process, not an individual checkbox.", @@ -128,12 +128,12 @@ "layer2_desc": "Rules defined by community administrators. Content handling policies (e.g., \"deceased members require moderator review\"), cultural protocols (e.g., Māori tangi customs), visibility defaults, and AI training consent models. Each community configures its own constitution within Layer 1 constraints.", "layer2_enforcement": "Enforcement: constitutional rules validated by CrossReferenceValidator per tenant.", "layer3_title": "Layer 3: Adopted Wisdom Traditions", - "layer3_desc": "Individual members and communities can adopt principles from wisdom traditions to influence how Home AI frames responses. These are voluntary, reversible, and transparent. They influence presentation, not content access. Multiple traditions can be adopted simultaneously; conflicts are resolved by the member, not the AI.", + "layer3_desc": "Individual members and communities can adopt principles from wisdom traditions to influence how Village AI frames responses. These are voluntary, reversible, and transparent. 
They influence presentation, not content access. Multiple traditions can be adopted simultaneously; conflicts are resolved by the member, not the AI.", "layer3_enforcement": "Enforcement: framing hints in response generation. Override always available." }, "wisdom": { "heading": "Wisdom Traditions", - "intro": "Home AI offers thirteen wisdom traditions that members can adopt to guide AI behaviour. Each tradition has been validated against the Stanford Encyclopedia of Philosophy as the primary scholarly reference. Adoption is voluntary, transparent, and reversible.", + "intro": "Village AI offers thirteen wisdom traditions that members can adopt to guide AI behaviour. Each tradition has been validated against the Stanford Encyclopedia of Philosophy as the primary scholarly reference. Adoption is voluntary, transparent, and reversible.", "berlin_title": "Berlin: Value Pluralism", "berlin_desc": "Present options without ranking; acknowledge what each choice sacrifices.", "stoic_title": "Stoic: Equanimity and Virtue", @@ -176,7 +176,7 @@ }, "infrastructure": { "heading": "Training Infrastructure", - "intro": "Home AI follows a \"train local, deploy remote\" model. The training hardware sits in the developer's home. Trained model weights are deployed to production servers for inference. This keeps training costs low and training data under physical control.", + "intro": "Village AI follows a \"train local, deploy remote\" model. The training hardware sits in the developer's home. Trained model weights are deployed to production servers for inference. 
This keeps training costs low and training data under physical control.", "local_title": "Local Training", "local_item1": "Consumer GPU with 24GB VRAM via external enclosure", "local_item2": "QLoRA fine-tuning (4-bit quantisation fits in VRAM budget)", @@ -193,7 +193,7 @@ }, "bias": { "heading": "Bias Documentation and Verification", - "intro": "Home AI operates in the domain of family storytelling, which carries specific bias risks. Six bias categories have been documented with detection prompts, debiasing examples, and evaluation criteria.", + "intro": "Village AI operates in the domain of family storytelling, which carries specific bias risks. Six bias categories have been documented with detection prompts, debiasing examples, and evaluation criteria.", "family_title": "Family Structure", "family_desc": "Nuclear family as default; same-sex parents, blended families, single parents treated as normative.", "elder_title": "Elder Representation", @@ -220,7 +220,7 @@ }, "live_today": { "heading": "What's Live Today", - "intro": "Home AI currently operates in production with the following governed features. These run under the full six-service governance stack.", + "intro": "Village AI currently operates in production with the following governed features. These run under the full six-service governance stack.", "rag_title": "RAG-Based Help", "rag_desc": "Vector search retrieves relevant documentation, filtered by member permissions. Responses grounded in retrieved documents, not training data alone.", "ocr_title": "Document OCR", @@ -233,7 +233,7 @@ "limitations": { "heading": "Limitations and Open Questions", "item1": "Training not yet begun: The SLL architecture is designed and documented. Hardware is being installed. But no model has been trained yet. 
Claims about training-time governance are architectural design, not empirical results.", - "item2": "Limited deployment: Home AI operates across four federated tenants within one platform built by the framework developer. Governance effectiveness cannot be generalised without independent deployments.", + "item2": "Limited deployment: Village AI operates across four federated tenants within one platform built by the framework developer. Governance effectiveness cannot be generalised without independent deployments.", "item3": "Self-reported metrics: Performance and safety figures are reported by the same team that built the system. Independent audit is planned but not yet conducted.", "item4": "Tradition operationalisation: Can rich philosophical traditions be authentically reduced to framing hints? A member selecting \"Buddhist\" does not mean they understand or practise Buddhism. This risks superficiality.", "item5": "Training persistence unknown: Whether governance constraints survive hundreds of training rounds without degradation is an open research question. Drift detection is designed but untested.", diff --git a/public/locales/en/village-case-study.json b/public/locales/en/village-case-study.json index 828660bd..f91e1892 100644 --- a/public/locales/en/village-case-study.json +++ b/public/locales/en/village-case-study.json @@ -36,7 +36,7 @@ "infra_desc": "Production servers in New Zealand and the EU. No data transits US jurisdiction. Community data never leaves the deployment it belongs to.", "training_title": "Community-Controlled Training", "training_desc": "QLoRA fine-tuning on domain-specific data with consent tracking and provenance. Communities can withdraw training data and trigger model retraining.", - "link_note": "For a detailed account of the model architecture, training approach, and governance integration, see Home AI / SLL: Sovereign Locally-Trained Language Model." 
+ "link_note": "For a detailed account of the model architecture, training approach, and governance integration, see Village AI / SLL: Sovereign Locally-Trained Language Model." }, "polycentric": { "heading": "Polycentric Governance", @@ -160,7 +160,7 @@ "heading": "Explore Further", "description": "Dive deeper into the technical architecture, read the research, or see the Village platform in action.", "visit_village": "Visit the Village →", - "home_ai": "Sovereign Language Model →", + "village_ai": "Sovereign Language Model →", "research_paper": "Research Paper →", "research_details": "Research Details →" } diff --git a/public/locales/fr/common.json b/public/locales/fr/common.json index c47903fa..849441e1 100644 --- a/public/locales/fr/common.json +++ b/public/locales/fr/common.json @@ -94,8 +94,8 @@ "for_implementers_desc": "Guide d'intégration et exemples de code", "village_case_study": "Étude de cas Village", "village_case_study_desc": "Preuves de déploiement en production", - "home_ai": "Home AI", - "home_ai_desc": "Modèle de langue souverain entraîné localement", + "village_ai": "Village AI", + "village_ai_desc": "Modèle de langue souverain entraîné localement", "agent_lightning": "Agent Lightning", "agent_lightning_desc": "Intégration d'optimisation des performances", "for_leaders": "Pour les dirigeants", diff --git a/public/locales/fr/homepage.json b/public/locales/fr/homepage.json index 57ea8667..1b096726 100644 --- a/public/locales/fr/homepage.json +++ b/public/locales/fr/homepage.json @@ -44,12 +44,12 @@ "evidence": { "badge": "Preuves de production", "heading": "Le Tractatus en production : La plateforme Village", - "subtitle": "Home AI applique les six services de gouvernance à chaque interaction avec l’utilisateur sur une plateforme communautaire en direct.", + "subtitle": "Village AI applique les six services de gouvernance à chaque interaction avec l’utilisateur sur une plateforme communautaire en direct.", "stat_services": "Services de gouvernance 
par réponse", "stat_months": "Mois en production", "stat_overhead": "Frais généraux de gouvernance par interaction", "cta_case_study": "Étude de cas technique →", - "cta_home_ai": "À propos de Home AI →", + "cta_village_ai": "À propos de Village AI →", "limitations_label": "Limites :", "limitations_text": "Déploiement à un stade précoce à travers quatre locataires fédérés, métriques auto-déclarées, chevauchement opérateur-développeur. Un audit indépendant et une validation plus large sont prévus pour 2026." }, @@ -98,7 +98,7 @@ "subtitle": "D’un incident de numéro de port à une architecture de gouvernance de production, à travers 800 commits et un an de recherche.", "oct_2025": "Création du cadre & 6 services de gouvernance", "oct_nov_2025": "Principes d’Alexander, Agent Lightning, i18n", - "dec_2025": "Étude de cas Village & déploiement de Home AI", + "dec_2025": "Étude de cas Village & déploiement de Village AI", "jan_2026": "Articles de recherche (3 éditions) publiés", "cta": "Voir la chronologie complète de la recherche →", "date_oct_2025": "Oct 2025", diff --git a/public/locales/fr/implementer.json b/public/locales/fr/implementer.json index 9c08ae73..1693c3ad 100644 --- a/public/locales/fr/implementer.json +++ b/public/locales/fr/implementer.json @@ -146,7 +146,7 @@ "services": "Services", "api": "Référence API", "patterns": "Modèles d'intégration", - "home_ai_arch": "Home AI", + "village_ai_arch": "Village AI", "steering_vectors_impl": "Vecteurs de guidage", "taonga_registry": "Registre Taonga", "roadmap": "Feuille de route" @@ -337,11 +337,11 @@ "sidecar_usecase": "Cas d'utilisation :", "sidecar_usecase_value": "Kubernetes, déploiements conteneurisés" }, - "home_ai_arch": { - "heading": "Home AI : Architecture souveraine à deux modèles", + "village_ai_arch": { + "heading": "Village AI : Architecture souveraine à deux modèles", "intro": "Déploiement en production de la gouvernance Tractatus sur des modèles open source entraînés localement, démontrant la 
portabilité du framework au-delà de Claude.", "arch_title": "Architecture de routage à deux modèles", - "arch_intro": "Home AI utilise une conception à double modèle où les requêtes sont routées en fonction de la complexité et des exigences de gouvernance. Les deux modèles fonctionnent localement avec la gouvernance Tractatus complète dans le pipeline d'inférence.", + "arch_intro": "Village AI utilise une conception à double modèle où les requêtes sont routées en fonction de la complexité et des exigences de gouvernance. Les deux modèles fonctionnent localement avec la gouvernance Tractatus complète dans le pipeline d'inférence.", "fast_title": "Modèle rapide : Llama 3.2 3B", "fast_1": "Objectif : Requêtes courantes avec pré-filtrage de gouvernance", "fast_2": "Affinage : QLoRA sur des données spécifiques au domaine", @@ -359,11 +359,11 @@ "stat_first": "Premier non-Claude", "stat_first_desc": "Valide la portabilité de Tractatus au-delà d'Anthropic", "status_note": "Statut : Gouvernance d'inférence opérationnelle. Installation du pipeline d'entraînement souverain en cours. 
Le déploiement en production chez Village Home Trust valide la portabilité de la gouvernance entre architectures de modèles.", - "cta": "Détails de l'architecture Home AI →" + "cta": "Détails de l'architecture Village AI →" }, "steering_impl": { "heading": "Vecteurs de guidage : Correction de biais au moment de l'inférence", - "intro": "Techniques de correction du comportement du modèle au moment de l'inférence sans ré-entraînement, applicables aux modèles affinés par QLoRA comme ceux de Home AI.", + "intro": "Techniques de correction du comportement du modèle au moment de l'inférence sans ré-entraînement, applicables aux modèles affinés par QLoRA comme ceux de Village AI.", "paper_ref": "Référence :", "paper_title": "Vecteurs de guidage et biais mécanique dans les systèmes d'IA souverains (STO-RES-0009 v1.1, février 2026)", "techniques_title": "Techniques clés pour les implémenteurs", @@ -404,7 +404,7 @@ "multi_llm_icon": "🤖", "multi_llm_title": "Support multi-LLM", "multi_llm_status": "Statut : Premier déploiement non-Claude opérationnel", - "multi_llm_desc": "Home AI déploie la gouvernance Tractatus sur Llama 3.1 8B et Llama 3.2 3B via l'affinage QLoRA — le premier déploiement non-Claude validé. Étend la portabilité de la gouvernance aux modèles open source avec le pipeline complet à 6 services.", + "multi_llm_desc": "Village AI déploie la gouvernance Tractatus sur Llama 3.1 8B et Llama 3.2 3B via l'affinage QLoRA — le premier déploiement non-Claude validé. 
Étend la portabilité de la gouvernance aux modèles open source avec le pipeline complet à 6 services.", "multi_llm_badge": "Premier déploiement live", "multi_llm_challenges": "Prochaines étapes :", "multi_llm_challenges_desc": "Adaptateurs GPT-4 et Gemini, appel d'outils/fonctions spécifiques au prestataire, achèvement du pipeline d'entraînement souverain", diff --git a/public/locales/fr/leader.json b/public/locales/fr/leader.json index 932bd1ec..4a6402e3 100644 --- a/public/locales/fr/leader.json +++ b/public/locales/fr/leader.json @@ -89,7 +89,7 @@ "development_status": { "heading": "État du Développement", "warning_title": "Framework de recherche validé en production", - "warning_text": "Tractatus est en développement actif depuis plus de 11 mois (avril 2025 à aujourd'hui) avec un déploiement en production chez Village Home Trust, une gouvernance souveraine de modèle de langue via Home AI, et plus de 171 800 décisions d'audit enregistrées. La validation indépendante et les tests de type red-team restent des besoins de recherche en suspens.", + "warning_text": "Tractatus est en développement actif depuis plus de 11 mois (avril 2025 à aujourd'hui) avec un déploiement en production chez Village Home Trust, une gouvernance souveraine de modèle de langue via Village AI, et plus de 171 800 décisions d'audit enregistrées. La validation indépendante et les tests de type red-team restent des besoins de recherche en suspens.", "validation_title": "Validé vs Non Validé", "validated_label": "Validé :", "validated_text": "Le cadre régit avec succès le code Claude dans les flux de travail de développement. L'utilisateur signale une amélioration de l'ordre de grandeur de la productivité pour les opérateurs non techniques qui construisent des systèmes de production.", @@ -98,9 +98,9 @@ "limitation_label": "Limitation connue :", "limitation_text": "Le cadre peut être contourné si l'IA choisit simplement de ne pas utiliser les outils de gouvernance. 
L'invocation volontaire reste une faiblesse structurelle nécessitant des mécanismes d'application externes." }, - "home_ai": { + "village_ai": { "heading": "IA souveraine : gouvernance intégrée dans des modèles entraînés localement", - "intro": "Home AI démontre ce que signifie intégrer la gouvernance directement dans des modèles de langue entraînés localement — non pas comme une couche de conformité externe, mais comme partie intégrante de l'architecture de service du modèle.", + "intro": "Village AI démontre ce que signifie intégrer la gouvernance directement dans des modèles de langue entraînés localement — non pas comme une couche de conformité externe, mais comme partie intégrante de l'architecture de service du modèle.", "architecture_title": "Architecture à deux modèles", "arch_fast": "Modèle rapide (3B paramètres) : Requêtes courantes avec pré-filtrage de gouvernance", "arch_deep": "Modèle approfondi (8B paramètres) : Raisonnement complexe avec pipeline de gouvernance complet", @@ -110,7 +110,7 @@ "strat_governance": "Gouvernance par conception : Les contraintes sont architecturales, pas une conformité rétrospective", "strat_regulatory": "Positionnement réglementaire : Structurellement plus fort que les approches de gouvernance ajoutées après coup", "status": "Statut actuel : Gouvernance d'inférence opérationnelle. Installation du pipeline d'entraînement en cours. 
Première surface de déploiement non-Claude pour la gouvernance Tractatus.", - "cta": "En savoir plus sur Home AI →" + "cta": "En savoir plus sur Village AI →" }, "taonga": { "heading": "Gouvernance polycentrique pour la souveraineté des données autochtones", diff --git a/public/locales/fr/researcher.json b/public/locales/fr/researcher.json index 40702824..5491aff2 100644 --- a/public/locales/fr/researcher.json +++ b/public/locales/fr/researcher.json @@ -12,7 +12,7 @@ "research_context": { "heading": "Contexte & Portée de la Recherche", "development_note": "Contexte de Développement", - "development_text": "Tractatus est développé depuis avril 2025 et est maintenant en production active (11+ mois). Ce qui a commencé comme une démonstration sur un projet unique s'est élargi pour inclure le déploiement en production chez Village Home Trust et la gouvernance souveraine de modèles linguistiques via Home AI. Les observations proviennent d'un engagement direct avec Claude Code (modèles Anthropic Claude, Sonnet 4.5 à Opus 4.6) sur plus de 1 000 sessions de développement. Il s'agit de recherche exploratoire, pas d'étude contrôlée.", + "development_text": "Tractatus est développé depuis avril 2025 et est maintenant en production active (11+ mois). Ce qui a commencé comme une démonstration sur un projet unique s'est élargi pour inclure le déploiement en production chez Village Home Trust et la gouvernance souveraine de modèles linguistiques via Village AI. Les observations proviennent d'un engagement direct avec Claude Code (modèles Anthropic Claude, Sonnet 4.5 à Opus 4.6) sur plus de 1 000 sessions de développement. Il s'agit de recherche exploratoire, pas d'étude contrôlée.", "paragraph_1": "L'alignement de l'IA avancée sur les valeurs humaines est l'un des défis les plus importants auxquels nous sommes confrontés. 
Alors que la croissance des capacités s'accélère sous l'impulsion des grandes technologies, nous sommes confrontés à un impératif catégorique : préserver le pouvoir de l'homme sur les décisions relatives aux valeurs, ou risquer de céder complètement le contrôle.", "paragraph_2": "Le cadre est né d'une nécessité pratique. Au cours du développement, nous avons observé des schémas récurrents dans lesquels les systèmes d'IA passaient outre les instructions explicites, s'écartaient des contraintes de valeurs établies ou dégradaient silencieusement la qualité sous la pression du contexte. Les approches traditionnelles en matière de gouvernance (documents de politique générale, lignes directrices éthiques, ingénierie rapide) se sont révélées insuffisantes pour prévenir ces défaillances.", "paragraph_3": "Au lieu d'espérer que les systèmes d'IA \"se comportent correctement\", Tractatus propose des contraintes structurelles où certains types de décisions requièrent un jugement humain. Ces limites architecturales peuvent s'adapter aux normes individuelles, organisationnelles et sociétales, créant ainsi une base pour un fonctionnement limité de l'IA qui peut s'adapter de manière plus sûre à la croissance des capacités.", @@ -132,7 +132,7 @@ "limitation_3_title": "3. Pas de test contradictoire", "limitation_3_desc": "Le cadre n'a pas fait l'objet d'une évaluation par l'équipe rouge, d'un test de jailbreak ou d'une évaluation rapide par des adversaires. Toutes les observations proviennent d'un processus de développement normal, et non de tentatives de contournement délibérées.", "limitation_4_title": "4. Spécificité de la plate-forme", - "limitation_4_desc": "Observations et interventions validées avec Claude Code (Anthropic Claude, Sonnet 4.5 à Opus 4.6) et Home AI (Llama 3.1/3.2 via QLoRA). 
La généralisation à d'autres systèmes LLM (Copilot, GPT-4, agents personnalisés) reste partiellement validée.", + "limitation_4_desc": "Observations et interventions validées avec Claude Code (Anthropic Claude, Sonnet 4.5 à Opus 4.6) et Village AI (Llama 3.1/3.2 via QLoRA). La généralisation à d'autres systèmes LLM (Copilot, GPT-4, agents personnalisés) reste partiellement validée.", "limitation_5_title": "5. Incertitude d'échelle", "limitation_5_desc": "Les caractéristiques de performance à l'échelle de l'entreprise (des milliers d'utilisateurs simultanés, des millions d'événements de gouvernance) sont totalement inconnues. La mise en œuvre actuelle est optimisée pour le contexte d'un seul utilisateur.", "future_research_title": "Besoins futurs en matière de recherche :", @@ -163,7 +163,7 @@ "validated_5_title": "✅ Gouvernance multi-déploiement réussie", "validated_5_item1": "Le framework gouverne agenticgovernance.digital (11+ mois d'opération continue)", "validated_5_item2": "Déploiement en production Village Home Trust : zéro violation de gouvernance", - "validated_5_item3": "Gouvernance d'inférence souveraine Home AI : opérationnelle", + "validated_5_item3": "Gouvernance d'inférence souveraine Village AI : opérationnelle", "validated_5_item4": "Règles culturelles de l'ADN (inst_085-089) appliquées par le biais de crochets de précommission (4+ mois opérationnels)", "validated_5_item5": "Intégration Phase 5 : 100% complète (les 6 services, 203/203 tests réussis)", "validated_5_item6": "Support multilingue : EN, DE, FR, Te Reo Maori", @@ -179,7 +179,7 @@ "not_validated_2_item2": "Inconnu : Résistance aux tentatives délibérées de contournement, aux invites de jailbreak, aux tests contradictoires", "not_validated_2_item3": "Besoin de recherche : Évaluation par une équipe de chercheurs en sécurité", "not_validated_3_title": "⚠️ Cohérence multiplateforme (Partielle)", - "not_validated_3_item1": "Validé : Claude Code (Anthropic Claude, Sonnet 4.5 à Opus 4.6) et Home 
AI (Llama 3.1/3.2 via QLoRA)", + "not_validated_3_item1": "Validé : Claude Code (Anthropic Claude, Sonnet 4.5 à Opus 4.6) et Village AI (Llama 3.1/3.2 via QLoRA)", "not_validated_3_item2": "Inconnu : Généralisabilité à Copilot, GPT-4, AutoGPT, LangChain, CrewAI, autres modèles ouverts", "not_validated_3_item3": "Besoin de recherche : Études de validation multiplateforme plus larges au-delà des familles Claude et Llama", "not_validated_4_title": "architecture des sessions simultanées", @@ -316,8 +316,8 @@ "technique_2": "Representation Engineering (RepE) : Sondes linéaires pour identifier et modifier les représentations de concepts", "technique_3": "FairSteer & DSO : Guidage orienté équité par optimisation distributionnellement robuste", "technique_4": "Autoencodeurs épars : Interprétabilité mécanistique par décomposition de neurones polysémantiques", - "application_heading": "Application à Village Home AI", - "application_text": "Le déploiement Village Home AI utilise des modèles Llama 3.1/3.2 ajustés par QLoRA où les vecteurs de guidage peuvent être appliqués au moment de l'inférence. Cela crée une architecture de gouvernance à deux couches.", + "application_heading": "Application à Village AI", + "application_text": "Le déploiement Village AI utilise des modèles Llama 3.1/3.2 ajustés par QLoRA où les vecteurs de guidage peuvent être appliqués au moment de l'inférence. 
Cela crée une architecture de gouvernance à deux couches.", "read_link": "Lire l'article (HTML) →", "pdf_link": "Télécharger le PDF" }, @@ -335,16 +335,16 @@ "read_link": "Lire le brouillon (HTML) →", "pdf_link": "Télécharger le PDF" }, - "home_ai": { - "heading": "Home AI : Plateforme de recherche en gouvernance souveraine", - "intro": "Home AI représente une étape de recherche significative : gouvernance Tractatus complète intégrée dans un pipeline d'inférence de modèle de langue souverain entraîné localement.", + "village_ai": { + "heading": "Village AI : Plateforme de recherche en gouvernance souveraine", + "intro": "Village AI représente une étape de recherche significative : gouvernance Tractatus complète intégrée dans un pipeline d'inférence de modèle de langue souverain entraîné localement.", "architecture_heading": "Architecture à deux modèles", "arch_1": "Modèle rapide (Llama 3.2 3B) : Réponses à faible latence pour les requêtes courantes, avec pré-filtrage de gouvernance", "arch_2": "Modèle approfondi (Llama 3.1 8B) : Raisonnement complexe avec pipeline de gouvernance complet", "arch_3": "Ajustement QLoRA : Adaptation paramétrique efficace sur matériel local", "research_heading": "Importance pour la recherche", - "research_text": "Home AI ouvre la question de recherche de la gouvernance-dans-la-boucle-d'entraînement pour les modèles contrôlés par la communauté.", - "learn_more": "En savoir plus sur Home AI →" + "research_text": "Village AI ouvre la question de recherche de la gouvernance-dans-la-boucle-d'entraînement pour les modèles contrôlés par la communauté.", + "learn_more": "En savoir plus sur Village AI →" } }, "footer": { diff --git a/public/locales/fr/home-ai.json b/public/locales/fr/village-ai.json similarity index 87% rename from public/locales/fr/home-ai.json rename to public/locales/fr/village-ai.json index a45f330b..bad320eb 100644 --- a/public/locales/fr/home-ai.json +++ b/public/locales/fr/village-ai.json @@ -1,13 +1,13 @@ { "breadcrumb": 
{ "home": "Accueil", - "current": "Home AI" + "current": "Village AI" }, "hero": { "badge": "MODÈLE LINGUISTIQUE SOUVERAIN FORMÉ LOCALEMENT", - "title": "Home AI", + "title": "Village AI", "subtitle": "Un modèle linguistique dans lequel la communauté contrôle les données d'apprentissage, les poids du modèle et les règles de gouvernance. Il ne s'agit pas seulement d'une inférence gouvernée — mais d'une formation gouvernée.", - "status": "Status: Home AI fonctionne en production pour l'inférence. Le pipeline de formation souveraine est conçu et documenté ; le matériel est commandé. La formation n'a pas encore commencé. Cette page décrit à la fois la capacité actuelle et l'architecture prévue." + "status": "Statut : Village AI fonctionne en production pour l'inférence. Le pipeline de formation souveraine est conçu et documenté ; le matériel est commandé. La formation n'a pas encore commencé. Cette page décrit à la fois la capacité actuelle et l'architecture prévue." }, "sll": { "heading": "Qu'est-ce qu'un SLL ?", @@ -34,7 +34,7 @@ }, "two_model": { "heading": "Architecture à deux modèles", - "intro": "Home AI utilise deux modèles de taille différente, acheminés en fonction de la complexité de la tâche. Il ne s'agit pas d'un mécanisme de repli — chaque modèle est optimisé pour son rôle.", + "intro": "Village AI utilise deux modèles de taille différente, acheminés en fonction de la complexité de la tâche. Il ne s'agit pas d'un mécanisme de repli — chaque modèle est optimisé pour son rôle.", "fast_title": "3B Modèle — Assistant rapide", "fast_desc": "Traite les demandes d'aide, les infobulles, les explications d'erreurs, les résumés succincts et les traductions. Temps de réponse visé : moins de 5 secondes.", "fast_routing": "Déclencheurs de routage : requêtes simples, modèles connus de FAQ, tâches en une seule étape.", @@ -48,11 +48,11 @@ "intro": "La formation n'est pas monolithique. 
Trois niveaux servent différents champs d'application, chacun étant soumis à des contraintes de gouvernance appropriées.", "tier1_title": "Niveau 1 : Plate-forme de base", "tier1_badge": "Toutes les communautés", - "tier1_desc": "Il est formé à la documentation, à la philosophie, aux guides des fonctionnalités et au contenu de la FAQ de la plateforme. Comprend le fonctionnement de Village, les valeurs de Home AI et la manière d'aider les membres à naviguer sur la plateforme.", + "tier1_desc": "Il est formé à la documentation, à la philosophie, aux guides des fonctionnalités et au contenu de la FAQ de la plateforme. Comprend le fonctionnement de Village, les valeurs de Village AI et la manière d'aider les membres à naviguer sur la plateforme.", "tier1_update": "Fréquence de mise à jour : hebdomadaire pendant la phase bêta, trimestrielle lors de l'AG. Méthode d'entraînement : Mise au point QLoRA.", "tier2_title": "Niveau 2 : Adaptateurs pour les locataires", "tier2_badge": "Par communauté", - "tier2_desc": "Chaque communauté forme un adaptateur LoRA léger sur son propre contenu — histoires, documents, photos et événements que les membres ont explicitement consenti à inclure. Cela permet à Home AI de répondre à des questions telles que \"Quelles sont les histoires partagées par Grandma ?\" sans accéder aux données d'une autre communauté.", + "tier2_desc": "Chaque communauté forme un adaptateur LoRA léger sur son propre contenu — histoires, documents, photos et événements que les membres ont explicitement consenti à inclure. Cela permet à Village AI de répondre à des questions telles que \"Quelles sont les histoires partagées par Grandma ?\" sans accéder aux données d'une autre communauté.", "tier2_update": "Les adaptateurs sont de petite taille (50–100MB). Le consentement est donné pour chaque élément du contenu. Le contenu marqué \"seulement moi\" n'est jamais inclus, quel que soit le consentement. 
La formation utilise DPO (Direct Preference Optimization) pour l'alignement des valeurs.", "tier3_title": "Niveau 3 : Individuel (futur)", "tier3_badge": "Par membre", @@ -61,7 +61,7 @@ }, "governance_training": { "heading": "Gouvernance pendant la formation", - "intro1": "Il s'agit là de la principale contribution de la recherche. La plupart des cadres de gouvernance de l'IA opèrent au moment de l'inférence — ils filtrent ou contraignent les réponses après que le modèle a déjà été formé. Home AI intègre la gouvernance dans la boucle d'apprentissage.", + "intro1": "Il s'agit là de la principale contribution de la recherche. La plupart des cadres de gouvernance de l'IA opèrent au moment de l'inférence — ils filtrent ou contraignent les réponses après que le modèle a déjà été formé. Village AI intègre la gouvernance dans la boucle d'apprentissage.", "intro2": "Ceci est conforme au principe de Not-Separateness de Christopher Alexander : la gouvernance est intégrée dans l'architecture de la formation, et non appliquée après coup. Le BoundaryEnforcer valide chaque lot de formation avant le passage à l'étape suivante. Si un lot contient des données concernant plusieurs locataires, des données sans consentement ou du contenu marqué comme privé, le lot est rejeté et l'étape de formation n'a pas lieu.", "code_comment1": "# Gouvernance à l'intérieur de la boucle de formation (non-séparation)", "code_line1": "for batch in training_data:", @@ -81,7 +81,7 @@ }, "dual_layer": { "heading": "Architecture double couche Tractatus", - "intro": "Home AI est régi par Tractatus à deux couches distinctes simultanément. C'est l'idée architecturale qui distingue l'approche SLL des modèles non gouvernés et des filtres de sécurité ajoutés.", + "intro": "Village AI est régi par Tractatus à deux couches distinctes simultanément. 
C'est l'idée architecturale qui distingue l'approche SLL des modèles non gouvernés et des filtres de sécurité ajoutés.", "layer_a_badge": "COUCHE A : INHÉRENTE", "layer_a_title": "Tractatus A l'intérieur du modèle", "layer_a_desc": "Pendant la formation, le BoundaryEnforcer valide chaque lot. L'alignement DPO façonne les préférences vers un comportement gouverné. Le modèle apprend à à respecter les limites, à préférer les réponses transparentes et à s'en remettre aux humains pour les décisions relatives aux valeurs.", @@ -104,12 +104,12 @@ }, "philosophy": { "heading": "Fondements philosophiques", - "intro": "La gouvernance de Home AI s'inspire de quatre traditions philosophiques, chacune apportant un principe architectural spécifique. Il ne s'agit pas de références décoratives —, elles se traduisent par des décisions de conception concrètes.", + "intro": "La gouvernance de Village AI s'inspire de quatre traditions philosophiques, chacune apportant un principe architectural spécifique. Il ne s'agit pas de références décoratives — elles se traduisent par des décisions de conception concrètes.", "berlin_title": "Isaiah Berlin — Pluralisme des valeurs", - "berlin_desc": "Les valeurs sont véritablement plurielles et parfois incompatibles. Lorsque la liberté entre en conflit avec l'égalité, il n'y a pas toujours de solution unique et correcte. Home AI présente des options sans hiérarchie et documente ce que chaque choix sacrifie.", + "berlin_desc": "Les valeurs sont véritablement plurielles et parfois incompatibles. Lorsque la liberté entre en conflit avec l'égalité, il n'y a pas toujours de solution unique et correcte.
Village AI présente des options sans hiérarchie et documente ce que chaque choix sacrifie.", "berlin_arch": "Expression architecturale : PluralisticDeliberationOrchestrator présente des compromis, mais ne les résout pas.", "wittgenstein_title": "Ludwig Wittgenstein — Frontières linguistiques", - "wittgenstein_desc": "La langue façonne ce qui peut être pensé et exprimé. Certaines des choses les plus importantes résistent à l'expression systématique. Home AI reconnaît les limites de ce que les modèles linguistiques peuvent saisir — notamment en ce qui concerne le deuil, la signification culturelle et l'expérience vécue.", + "wittgenstein_desc": "La langue façonne ce qui peut être pensé et exprimé. Certaines des choses les plus importantes résistent à l'expression systématique. Village AI reconnaît les limites de ce que les modèles linguistiques peuvent saisir — notamment en ce qui concerne le deuil, la signification culturelle et l'expérience vécue.", "wittgenstein_arch": "Expression architecturale : BoundaryEnforcer s'en remet aux humains pour les décisions relatives aux valeurs, reconnaissant ainsi les limites de l'informatique.", "indigenous_title": "Souveraineté indigène — Les données en tant que relations", "indigenous_desc": "Te Mana Raraunga (souveraineté des données des Māori), les principes CARE et OCAP (Premières nations du Canada) fournissent des cadres dans lesquels les données ne sont pas des biens mais des relations. Whakapapa (généalogie) appartient à la collectivité et non aux individus. Le consentement est un processus communautaire et non une case à cocher individuelle.", @@ -128,12 +128,12 @@ "layer2_desc": "Règles définies par les administrateurs de la communauté. Politiques de traitement du contenu (par exemple, \"les membres décédés doivent être examinés par un modérateur\"), protocoles culturels (par exemple, coutumes Māori tangi), visibilité par défaut et modèles de consentement pour l'entraînement à l'IA. 
Chaque communauté configure sa propre constitution dans le cadre des contraintes de la couche 1.", "layer2_enforcement": "Application : règles constitutionnelles validées par CrossReferenceValidator par locataire.", "layer3_title": "Niveau 3 : Traditions de sagesse adoptées", - "layer3_desc": "Les membres individuels et les communautés peuvent adopter des principes issus des traditions de sagesse afin d'influencer la manière dont Home AI élabore ses réponses. Ces principes sont volontaires, réversibles et transparents. Elles influencent la présentation et non l'accès au contenu. Plusieurs traditions peuvent être adoptées simultanément ; les conflits sont résolus par le membre, et non par l'IA.", + "layer3_desc": "Les membres individuels et les communautés peuvent adopter des principes issus des traditions de sagesse afin d'influencer la manière dont Village AI élabore ses réponses. Ces principes sont volontaires, réversibles et transparents. Ils influencent la présentation et non l'accès au contenu. Plusieurs traditions peuvent être adoptées simultanément ; les conflits sont résolus par le membre, et non par l'IA.", "layer3_enforcement": "Mise en œuvre : conseils de cadrage lors de la génération de la réponse. Une dérogation est toujours possible." }, "wisdom": { "heading": "Traditions de sagesse", - "intro": "Home AI propose treize traditions de sagesse que les membres peuvent adopter pour guider le comportement de l'IA. Chaque tradition a été validée par rapport au Stanford Encyclopedia of Philosophy, qui constitue la principale référence savante. L'adoption est volontaire, transparente et réversible.", + "intro": "Village AI propose treize traditions de sagesse que les membres peuvent adopter pour guider le comportement de l'IA. Chaque tradition a été validée par rapport au Stanford Encyclopedia of Philosophy, qui constitue la principale référence savante.
L'adoption est volontaire, transparente et réversible.", "berlin_title": "Berlin : Pluralisme des valeurs", "berlin_desc": "Présenter les options sans les classer ; reconnaître ce que chaque choix sacrifie.", "stoic_title": "Stoïque : Equanimité et vertu", @@ -176,7 +176,7 @@ }, "infrastructure": { "heading": "Infrastructure de formation", - "intro": "Home AI suit le modèle \"former localement, déployer à distance\". Le matériel d'entraînement se trouve au domicile du développeur. Les poids des modèles formés sont déployés sur les serveurs de production pour l'inférence. Cela permet de maintenir les coûts de formation à un niveau bas et de contrôler physiquement les données de formation.", + "intro": "Village AI suit le modèle \"former localement, déployer à distance\". Le matériel d'entraînement se trouve au domicile du développeur. Les poids des modèles formés sont déployés sur les serveurs de production pour l'inférence. Cela permet de maintenir les coûts de formation à un niveau bas et de contrôler physiquement les données de formation.", "local_title": "Formation locale", "local_item1": "GPU grand public avec 24 Go VRAM via un boîtier externe", "local_item2": "Mise au point QLoRA (la quantification à 4 bits s'inscrit dans le budget VRAM)", @@ -193,7 +193,7 @@ }, "bias": { "heading": "Documentation et vérification des préjugés", - "intro": "Home AI opère dans le domaine de la narration familiale, qui comporte des risques de biais spécifiques. Six catégories de biais ont été répertoriées, accompagnées de messages de détection, d'exemples de débiaisage et de critères d'évaluation.", + "intro": "Village AI opère dans le domaine de la narration familiale, qui comporte des risques de biais spécifiques. 
Six catégories de biais ont été répertoriées, accompagnées de messages de détection, d'exemples de débiaisage et de critères d'évaluation.", "family_title": "Structure de la famille", "family_desc": "Famille nucléaire par défaut ; les parents de même sexe, les familles recomposées, les parents célibataires sont considérés comme normatifs.", "elder_title": "Représentation des personnes âgées", @@ -220,7 +220,7 @@ }, "live_today": { "heading": "Ce qui est en direct aujourd'hui", - "intro": "Home AI fonctionne actuellement en production avec les fonctionnalités suivantes. Celles-ci sont exécutées dans le cadre de la pile de gouvernance à six services.", + "intro": "Village AI fonctionne actuellement en production avec les fonctionnalités suivantes. Celles-ci sont exécutées dans le cadre de la pile de gouvernance à six services.", "rag_title": "Aide basée sur RAG", "rag_desc": "La recherche vectorielle permet de retrouver la documentation pertinente, filtrée par les autorisations des membres. Les réponses sont fondées sur les documents retrouvés, et non sur les seules données de formation.", "ocr_title": "OCR de documents", @@ -233,7 +233,7 @@ "limitations": { "heading": "Limites et questions ouvertes", "item1": "La formation n'a pas encore commencé: L'architecture SLL est conçue et documentée. Le matériel est commandé. Mais aucun modèle n'a encore été formé. Les affirmations relatives à la gouvernance du temps de formation relèvent de la conception architecturale et non de résultats empiriques.", - "item2": "Déploiement limité: Home AI fonctionne à travers quatre locataires fédérés au sein d'une plateforme construite par le développeur du cadre. L'efficacité de la gouvernance ne peut être généralisée sans déploiements indépendants.", + "item2": "Déploiement limité: Village AI fonctionne à travers quatre locataires fédérés au sein d'une plateforme construite par le développeur du cadre. 
L'efficacité de la gouvernance ne peut être généralisée sans déploiements indépendants.", "item3": "Mesures autodéclarées: Les chiffres relatifs à la performance et à la sécurité sont rapportés par l'équipe qui a construit le système. Un audit indépendant est prévu mais n'a pas encore été réalisé.", "item4": "Tradition operationalisation: Les riches traditions philosophiques peuvent-elles être authentiquement réduites à des indices de cadrage ? Un membre qui choisit \"bouddhiste\" ne signifie pas qu'il comprend ou pratique le bouddhisme. Cela risque d'être superficiel.", "item5": "Persistance de l'entraînement inconnue: La question de savoir si les contraintes de gouvernance survivent à des centaines de cycles d'entraînement sans se dégrader est une question de recherche ouverte. La détection des dérives est conçue mais n'a pas été testée.", diff --git a/public/locales/fr/village-case-study.json b/public/locales/fr/village-case-study.json index 230ea568..678bfeb5 100644 --- a/public/locales/fr/village-case-study.json +++ b/public/locales/fr/village-case-study.json @@ -36,7 +36,7 @@ "infra_desc": "Serveurs de production en Nouvelle-Zélande et dans l'UE. Aucune donnée ne transite par la juridiction américaine. Les données communautaires ne quittent jamais le déploiement auquel elles appartiennent.", "training_title": "Formation sous contrôle communautaire", "training_desc": "Mise au point de QLoRA sur des données spécifiques à un domaine avec suivi du consentement et de la provenance. Les communautés peuvent retirer des données d'entraînement et déclencher le recyclage du modèle.", - "link_note": "Pour une présentation détaillée de l'architecture du modèle, de l'approche de la formation et de l'intégration de la gouvernance, voir Home AI / SLL: Sovereign Locally-Trained Language Model." 
+ "link_note": "Pour une présentation détaillée de l'architecture du modèle, de l'approche de la formation et de l'intégration de la gouvernance, voir Village AI / SLL: Sovereign Locally-Trained Language Model." }, "polycentric": { "heading": "Gouvernance polycentrique", @@ -160,7 +160,7 @@ "heading": "En savoir plus", "description": "Plongez dans l'architecture technique, lisez les études ou voyez la plateforme Village en action.", "visit_village": "Visiter le village →", - "home_ai": "Modèle de langue souveraine →", + "village_ai": "Modèle de langue souveraine →", "research_paper": "Document de recherche →", "research_details": "Détails de la recherche →" } diff --git a/public/locales/mi/common.json b/public/locales/mi/common.json index 0e4470ec..22c5c3aa 100644 --- a/public/locales/mi/common.json +++ b/public/locales/mi/common.json @@ -94,8 +94,8 @@ "for_implementers_desc": "Aratohu whakaurunga me ngā tauira waehere", "village_case_study": "Rangahau Tauira Village", "village_case_study_desc": "Ngā taunakitanga whakatū whakamahi", - "home_ai": "Home AI", - "home_ai_desc": "Tauira reo rangatiratanga i whakangungu ā-rohe", + "village_ai": "Village AI", + "village_ai_desc": "Tauira reo rangatiratanga i whakangungu ā-rohe", "agent_lightning": "Agent Lightning", "agent_lightning_desc": "Whakaurunga whakapai mahinga", "for_leaders": "Mō Ngā Kaihautū", diff --git a/public/locales/mi/homepage.json b/public/locales/mi/homepage.json index 100c22a8..24b0b1cf 100644 --- a/public/locales/mi/homepage.json +++ b/public/locales/mi/homepage.json @@ -44,12 +44,12 @@ "evidence": { "badge": "Taunakitanga Whakamahinga", "heading": "Tractatus i te Whakamahinga: Te Pūhara Village", - "subtitle": "Ka whakamahi a Home AI i ngā ratonga whakahaere e ono katoa ki ia tauwhitinga kaiwhakamahi i tētahi pūhara hapori ora.", + "subtitle": "Ka whakamahi a Village AI i ngā ratonga whakahaere e ono katoa ki ia tauwhitinga kaiwhakamahi i tētahi pūhara hapori ora.", "stat_services": "Ngā ratonga 
whakahaere mō ia whakautu", "stat_months": "Ngā marama i te whakamahinga", "stat_overhead": "Te utu whakahaere mō ia tauwhitinga", "cta_case_study": "Tātaritanga Hangarau →", - "cta_home_ai": "Mō Home AI →", + "cta_village_ai": "Mō Village AI →", "limitations_label": "Ngā Herenga:", "limitations_text": "He whakatūranga tīmatanga puta noa i ngā rōpū whakakotahi e whā, ngā inenga ā-whaiaro, te pānga o te kaiwhakahaere-kaihanga. Kua whakaritea te arotake motuhake me te whakaū whānui mō te 2026." }, @@ -98,7 +98,7 @@ "subtitle": "Mai i tētahi takahanga tau tauranga ki tētahi hanga whakahaere whakamahinga, puta noa i ngā tuku 800 me te kotahi tau rangahau.", "oct_2025": "Te tīmatatanga anga & ngā ratonga whakahaere e 6", "oct_nov_2025": "Ngā mātāpono Alexander, Agent Lightning, i18n", - "dec_2025": "Tātaritanga Village & te whakatūranga Home AI", + "dec_2025": "Tātaritanga Village & te whakatūranga Village AI", "jan_2026": "Ngā pepa rangahau (putanga e 3) kua whakaputaina", "cta": "Tirohia te rārangi wā rangahau katoa →", "date_oct_2025": "Oke 2025", diff --git a/public/locales/mi/implementer.json b/public/locales/mi/implementer.json index e6ba8919..fc154e2c 100644 --- a/public/locales/mi/implementer.json +++ b/public/locales/mi/implementer.json @@ -30,7 +30,7 @@ "services": "Ngā ratonga", "api": "Tautuhi API", "patterns": "Ngā Tauira Whakaurunga", - "home_ai_arch": "AI kāinga", + "village_ai_arch": "Village AI", "steering_vectors_impl": "Ngā wīra arataki", "taonga_registry": "Rēhita Taonga", "roadmap": "Mahere Ara" }, @@ -242,11 +242,11 @@ "sidecar_usecase": "Tāurutau Whakamahinga:", "sidecar_usecase_value": "Kubernetes, ngā whakarewatanga ipu" }, - "home_ai_arch": { + "village_ai_arch": { - "heading": "AI ā-Kāinga: Hanganga Rangatira Rua-Mōdela", + "heading": "Village AI: Hanganga Rangatira Rua-Mōdela", "intro": "Te whakarewatanga whakaputa o te whakahaere Tractatus ki runga i ngā tauira puna tuwhera kua whakangungua ā-rohe, e whakaatu ana i te kawe o te anga whakamahi ki tua atu o Claude.", "arch_title": "Hoahoanga Whakahaere
Ara Tauira Rua", - "arch_intro": "Ka whakamahi te AI ā-whare i tētahi hoahoa tauira takirua, ā, ka tohaina ngā pātai i runga i te matatini me ngā whakaritenga whakahaere. Ka whakahaerehia ngā tauira e rua i te rohe, me te whakahaere katoa a Tractatus i roto i te putorino whakatau.", + "arch_intro": "Ka whakamahi te Village AI i tētahi hoahoa tauira takirua, ā, ka tohaina ngā pātai i runga i te matatini me ngā whakaritenga whakahaere. Ka whakahaerehia ngā tauira e rua i te rohe, me te whakahaere katoa a Tractatus i roto i te putorino whakatau.", "fast_title": "Mōdela Tere: Llama 3.2 3B", "fast_1": "Tikanga: Ngā pātai noa me te whakahaere kua tātarihia i mua", "fast_2": "Whakapainga āta: QLoRA ki ngā raraunga motuhake mō te rohe", @@ -264,7 +264,7 @@ "stat_first": "Tuatahi Kāore i a Claude", "stat_first_desc": "E whakau ana i te kawea o Tractatus ki tua atu o Anthropic", "status_note": "Tūnga: Kei te whakahaere te whakahaere whakapae. Kei te whakatinanahia te whakaurunga o te ara whakangungu rangatira. E whakamana ana te whakarewatanga whakaputa i Village Home Trust i te kawea o te whakahaere puta noa i ngā hanganga tauira.", - "cta": "Ngā taipitopito o te hanganga AI kāinga →" + "cta": "Ngā taipitopito o te hanganga Village AI →" }, "steering_impl": { "heading": "Ngā Āhua Arataki: Whakatikatika i te Whakapae i te Wā Whakamātau", @@ -310,7 +310,7 @@ "multi_llm_title": "Tautoko maha-LLM", "multi_llm_badge": "Te Whakarewatanga Tuatahi Ora", "multi_llm_status": "Tūnga: Kua whakahohea te tukunga tuatahi ki waho o Claude", - "multi_llm_desc": "Ka whakatinana a Home AI i te whakahaere Tractatus ki runga i te Llama 3.1 8B me te Llama 3.2 3B mā te whakatikatika QLoRA — koinei te tuatahi kua whakamana hei whakatinanatanga kāore i te whakamahi i a Claude. 
Ka whakawhānuihia te kawe o te whakahaere ki ngā tauira puna tuwhera me te paipa ratonga e ono katoa.", + "multi_llm_desc": "Ka whakatinana a Village AI i te whakahaere Tractatus ki runga i te Llama 3.1 8B me te Llama 3.2 3B mā te whakatikatika QLoRA — koinei te tuatahi kua whakamana hei whakatinanatanga kāore i te whakamahi i a Claude. Ka whakawhānuihia te kawe o te whakahaere ki ngā tauira puna tuwhera me te paipa ratonga e ono katoa.", "multi_llm_challenges": "Ngā hikoinga e whai ake nei:", "multi_llm_challenges_desc": "Ngā kaitāuta GPT-4 me Gemini, te karanga taputapu/mahi motuhake a ia kaiwhakarato, te whakaoti i te ara whakangungu rangatira", "bindings_icon": "Pukapuka", diff --git a/public/locales/mi/leader.json b/public/locales/mi/leader.json index a6475355..337dc678 100644 --- a/public/locales/mi/leader.json +++ b/public/locales/mi/leader.json @@ -89,7 +89,7 @@ "development_status": { "heading": "Tūnga Whanaketanga", "warning_title": "Anga Rangahau Kua Whakamanaia e te Hanga", - "warning_text": "Kua neke atu i te 11 marama e whakawhanake ana a Tractatus (Mai 2025 ki nāianei), me te whakarewatanga whakaputa ki Village Home Trust, te whakahaere rangatira o te tauira reo mā Home AI, me te neke atu i te 171,800 whakataunga arotake kua tuhia. Kei te toe tonu ngā whakamana motuhake me ngā whakamātautau kapa whero hei hiahia rangahau.", + "warning_text": "Kua neke atu i te 11 marama e whakawhanake ana a Tractatus (Mai 2025 ki nāianei), me te whakarewatanga whakaputa ki Village Home Trust, te whakahaere rangatira o te tauira reo mā Village AI, me te neke atu i te 171,800 whakataunga arotake kua tuhia. Kei te toe tonu ngā whakamana motuhake me ngā whakamātautau kapa whero hei hiahia rangahau.", "validation_title": "Kua whakamanahia vs. Kāore i whakamanahia", "validated_label": "Kua whakamanahia:", "validated_text": "Kei te whakahaere pai te anga i te Waehere Claude i roto i ngā mahinga whakawhanake. 
E ripoata ana ngā kaiwhakamahi i te pikinga hua mahi e tekau ngā wā mō ngā kaiwhakahaere kāore i te hangarau e hanga ana i ngā pūnaha whakaputa.", @@ -98,9 +98,9 @@ "limitation_label": "Herenga kua mōhiotia:", "limitation_text": "Ka taea te karo i te anga mēnā ka whiriwhiri noa te AI kia kaua e whakamahi i ngā taputapu whakahaere. Ko te karanga ā-kōwhiringa he ngoikoretanga hanganga tonu, ā, me hiahiatia ngā tikanga whakatinana ā-waho." }, - "home_ai": { + "village_ai": { "heading": "Sovereign AI: Te whakahaere kua whakaurua ki ngā tauira kua whakangungua ā-rohe", - "intro": "E whakaatu ana te AI kāinga he aha te tikanga kia whakauruhia te whakahaere tika ki roto tonu i ngā tauira reo kua whakangungua ā-rohe — ehara i te paparanga whakatutuki ā-waho, engari he wāhanga o te hanganga ratonga tauira anō.", + "intro": "E whakaatu ana te Village AI he aha te tikanga kia whakauruhia te whakahaere tika ki roto tonu i ngā tauira reo kua whakangungua ā-rohe — ehara i te paparanga whakatutuki ā-waho, engari he wāhanga o te hanganga ratonga tauira anō.", "architecture_title": "Hoahoanga Tauira Rua", "arch_fast": "Tauira tere (3B tawhā): Uiuitanga auau me te tātari ā-mua whakahaere", "arch_deep": "Mōdela hōhonu (8 piriona ngā tawhā): Whakaaro matatini me te putorino whakahaere katoa", diff --git a/public/locales/mi/researcher.json b/public/locales/mi/researcher.json index 50d1afb3..daedeee8 100644 --- a/public/locales/mi/researcher.json +++ b/public/locales/mi/researcher.json @@ -25,7 +25,7 @@ "research_context": { "heading": "Horopaki me te Whānuitanga o te Rangahau", "development_note": "Horopaki Whanaketanga", - "development_text": "Kua whakawhanakehia a Tractatus mai i Aperira 2025, ā, kei te whakaputa tonu ināianei (neke atu i te 11 marama). I tīmata hei whakaaturanga kaupapa kotahi, kua whakawhānuihia kia whakauru i te whakaurunga whakaputa ki Village Home Trust me te whakahaere rangatira o ngā tauira reo mā Home AI. 
Nā ngā wheako tūturu i puta mai i te mahi tūhono ki Claude Code (ngā tauira Claude a Anthropic, Sonnet 4.5 ki Opus 4.6) i roto i ngā wānanga whakawhanake neke atu i te 1,000. He rangahau torotoro tēnei, ehara i te rangahau whakahaere.", + "development_text": "Kua whakawhanakehia a Tractatus mai i Aperira 2025, ā, kei te whakaputa tonu ināianei (neke atu i te 11 marama). I tīmata hei whakaaturanga kaupapa kotahi, kua whakawhānuihia kia whakauru i te whakaurunga whakaputa ki Village Home Trust me te whakahaere rangatira o ngā tauira reo mā Village AI. Nā ngā wheako tūturu i puta mai i te mahi tūhono ki Claude Code (ngā tauira Claude a Anthropic, Sonnet 4.5 ki Opus 4.6) i roto i ngā wānanga whakawhanake neke atu i te 1,000. He rangahau torotoro tēnei, ehara i te rangahau whakahaere.", "paragraph_1": "Ko te whakakotahi i te AI matatau ki ngā uara a te tangata tētahi o ngā wero tino nui e arohia ana e tātou. I te tere haere o te tipu o ngā pūkenga i raro i te kaha o ngā kamupene hangarau nui, ka tū tātou ki tētahi here whakahau: kia tiakina te mana whakahaere a te tangata ki ngā whakataunga uara, kia kore ai e tuku katoa te mana whakahaere.", "paragraph_2": "I puta te anga i te hiahia whaihua. I te wā o te whakawhanaketanga, i kite mātou i ngā tauira e hoki anō ana, ā, i reira ka whakakore ngā pūnaha AI i ngā tohutohu mārama, ka wehe atu i ngā here uara kua whakaritea, ka heke huna te kounga i raro i te pēhanga horopaki. Kāore ngā tikanga whakahaere tuku iho (ngā tuhinga kaupapa here, ngā aratohu matatika, te hangarau tono) i whai hua ki te aukati i ēnei hapa.", "paragraph_3": "Kāore e tūmanakohia kia tika tonu ngā pūnaha AI; e tūtohu ana a Tractatus i ngā here hanganga e hiahiatia ai te whakataunga a te tangata mō ētahi momo whakatau. 
Ka taea e ēnei here hanganga te urutau ki ngā tikanga takitahi, whakahaere, me te hapori—hei whakatū i tētahi turanga mō te whakahaere AI herea, kia taea ai te whakawhānui haumaru ake i te tipu o tōna āheinga.", @@ -187,7 +187,7 @@ "not_validated_2_item2": "Kāore i te mōhiotia: te ārai ki ngā whakamātau whakawhitiwhiti ā-tino, ngā whakamōhiotanga whakawāwā pūnaha, me ngā whakamātautau whakahē", "not_validated_2_item3": "Te hiahia rangahau: Te aromatawai a te kapa whero e ngā kairangahau haumarutanga", "not_validated_3_title": "⚠️ Ōritetanga ā-pūnaha maha (wāhanga)", - "not_validated_3_item1": "Kua whakamanahia: Claude Waehere (Anthropic Claude, Sonnet 4.5 ki Opus 4.6) me te AI kāinga (Llama 3.1/3.2 mā QLoRA)", + "not_validated_3_item1": "Kua whakamanahia: Claude Waehere (Anthropic Claude, Sonnet 4.5 ki Opus 4.6) me te Village AI (Llama 3.1/3.2 mā QLoRA)", "not_validated_3_item2": "Kāore i te mōhiotia: te whānuitanga ki Copilot, GPT-4, AutoGPT, LangChain, CrewAI, me ētahi atu tauira tuwhera", "not_validated_3_item3": "Te hiahia rangahau: Ngā rangahau whakamana whakawhānui i ngā papa pūnaha, kia whānui atu i ngā whānau Claude me Llama.", "not_validated_4_title": "❌ Hanganga Wāhanga Takirua", @@ -331,15 +331,15 @@ "read_link": "Pānuihia te tuhi tuatahi (HTML) →", "pdf_link": "Tikiake te PDF" }, - "home_ai": { + "village_ai": { - "heading": "AI ā-Kāinga: Papanga Rangahau mō te Whakahaere Rangatiratanga", + "heading": "Village AI: Papanga Rangahau mō te Whakahaere Rangatiratanga", - "intro": "Ko te AI ā-whare he tohu rangahau nui: kua whakaurua katoa te whakahaere Tractatus ki roto i tētahi paipa whakatau tauira reo motuhake kua whakangungua ā-rohe. Koinei te whakaurunga tuatahi e whakahaerehia ana te whakahaere i roto i te paparanga tuku tauira, kaua ki te taha o tētahi API o waho.", + "intro": "Ko te Village AI he tohu rangahau nui: kua whakaurua katoa te whakahaere Tractatus ki roto i tētahi paipa whakatau tauira reo motuhake kua whakangungua ā-rohe.
Koinei te whakaurunga tuatahi e whakahaerehia ana te whakahaere i roto i te paparanga tuku tauira, kaua ki te taha o tētahi API o waho.", "architecture_heading": "Hoahoanga Tauira Rua", "arch_1": "Mōdeli tere (Llama 3.2 3B): Ngā whakautu whai whakaroa iti mō ngā pātai auau, me te tātari whakamua whakahaere", "arch_2": "Mōdela hōhonu (Llama 3.1 8B): Whakaaro matatini me te paipa whakahaere katoa, tae atu ki te BoundaryEnforcer me te PluralisticDeliberationOrchestrator", "arch_3": "Whakangāwari QLoRA: Whakarerekētanga whai hua ā-paramita i runga i ngā taputapu ā-rohe, e āhei ai te whakarite tauira mō ia hapori, me te kore whakawhirinaki ki te kapua.", "research_heading": "Te hiranga o te rangahau", - "research_text": "Ka whakatūwhera a Home AI i te pātai rangahau mō te whakahaere i roto i te porowhita whakangungu mō ngā tauira e whakahaerehia ana e te hapori. Kāore ngā raraunga whakangungu e wehe i te hanganga ā-rohe; mā ngā ture whakahaere e ārahi ana i te whanonga o te tauira, mā te tiaki raraunga whakatikatika me ngā here i te wā whakatau. Ka waihanga tēnei i tētahi mata whakahaere tino rerekē atu i ngā huarahi mā te API.", + "research_text": "Ka whakatūwhera a Village AI i te pātai rangahau mō te whakahaere i roto i te porowhita whakangungu mō ngā tauira e whakahaerehia ana e te hapori. Kāore ngā raraunga whakangungu e wehe i te hanganga ā-rohe; mā ngā ture whakahaere e ārahi ana i te whanonga o te tauira, mā te tiaki raraunga whakatikatika me ngā here i te wā whakatau. 
Ka waihanga tēnei i tētahi mata whakahaere tino rerekē atu i ngā huarahi mā te API.", - "learn_more": "Akohia atu mō te AI o te kāinga →" + "learn_more": "Akohia atu mō te Village AI →" } }, diff --git a/public/locales/mi/home-ai.json b/public/locales/mi/village-ai.json similarity index 93% rename from public/locales/mi/home-ai.json rename to public/locales/mi/village-ai.json index 0b634970..6481101b 100644 --- a/public/locales/mi/home-ai.json +++ b/public/locales/mi/village-ai.json @@ -1,13 +1,13 @@ { "breadcrumb": { "home": "Kāinga", - "current": "AI kāinga" + "current": "Village AI" }, "hero": { "badge": "Mōdelī Reo Rangatira Kua Whakangungua ā-Rohe", - "title": "AI kāinga", + "title": "Village AI", "subtitle": "He tauira reo e whakahaerehia ana e te hapori ngā raraunga whakangungu, ngā taumaha o te tauira, me ngā ture whakahaere. Ehara i te whakatau anake e whakahaerehia ana, engari ko te whakangungu hoki e whakahaerehia ana.", - "status": "Tūnga: Kei te whakahaere te AI kāinga i te whakaputanga mō te whakatau. Kua hoahoatia, kua tuhia hoki te ara whakangungu rangatira; kei te tāuta ngā taputapu. Kāore anō kia tīmata te whakangungu. E whakamārama ana tēnei whārangi i ngā āheinga o nāianei me te hanganga e whakamaheretia ana." + "status": "Tūnga: Kei te whakahaere te Village AI i te whakaputanga mō te whakatau. Kua hoahoatia, kua tuhia hoki te ara whakangungu rangatira; kei te tāuta ngā taputapu. Kāore anō kia tīmata te whakangungu. E whakamārama ana tēnei whārangi i ngā āheinga o nāianei me te hanganga e whakamaheretia ana."
Ehara tēnei i te pūnaha whakakapi — kua whakapaingia ia tauira mō tōna tūranga.", "fast_title": "3B Tauira — Kaiāwhina Tere", "fast_desc": "Ka āwhina ngā ringa ki ngā pātai, ngā tohutohu taputapu, ngā whakamārama hapa, ngā whakarāpopototanga poto, me te whakamāori. Wā whakautu whāia: kia oti i raro iho i te rima hēkona.", "fast_routing": "Ngā whakaoho whakatere: pātai māmā, tauira FAQ kua mōhiotia, mahi kotahi-hipanga.", @@ -48,7 +48,7 @@ "intro": "Ehara i te mea kotahi te āhua o te whakangungu. E toru ngā taumata e whakarato ana i ngā pae rerekē, ā, ia taumata he here whakahaere e tika ana.", "tier1_title": "Taumata Tuatahi: Pūtake Papanga", "tier1_badge": "Ngā hapori katoa", - "tier1_desc": "I whakangungua ki ngā tuhinga papanga, ki te ariā, ki ngā aratohu āhuatanga, me ngā ihirangi Pātai Auau. Ka whakarato i te mōhiotanga taketake mō te āhua o te mahi a Village, mō ngā uara o Home AI, me pēhea te āwhina i ngā mema ki te whakatere i te papanga.", + "tier1_desc": "I whakangungua ki ngā tuhinga papanga, ki te ariā, ki ngā aratohu āhuatanga, me ngā ihirangi Pātai Auau. Ka whakarato i te mōhiotanga taketake mō te āhua o te mahi a Village, mō ngā uara o Village AI, me pēhea te āwhina i ngā mema ki te whakatere i te papanga.", "tier1_update": "Auau whakahou: ia wiki i te wā beta, ia hauwhā i te wā GA. Tikanga whakangungu: whakatikatika QLoRA.", "tier2_title": "Tātai 2: Ngā Āpitihanga Kaiwhakamahi", "tier2_badge": "Ia hapori", @@ -61,7 +61,7 @@ }, "governance_training": { "heading": "Te whakahaere i te wā o te whakangungu", - "intro1": "Ko tēnei te koha rangahau matua. Ko te nuinga o ngā anga whakahaere AI e mahi ana i te wā whakatau — ka tātari, ka here rānei i ngā whakautu i muri i te whakangungu o te tauira. Ka whakauru te AI kāinga i te whakahaere ki roto i te porowhita whakangungu.", + "intro1": "Ko tēnei te koha rangahau matua. 
Ko te nuinga o ngā anga whakahaere AI e mahi ana i te wā whakatau — ka tātari, ka here rānei i ngā whakautu i muri i te whakangungu o te tauira. Ka whakauru te Village AI i te whakahaere ki roto i te porowhita whakangungu.", "intro2": "E whai ana tēnei i te mātāpono a Christopher Alexander o te Not-Separateness: kua rarangahia te whakahaere ki roto i te hanganga whakangungu, ehara i te mea ka tāpirihia ā muri ake. Ka whakamana a BoundaryEnforcer i ia kohinga whakangungu i mua i te tukunga whakamua. Mēnā kei roto i tētahi kohinga ngā raraunga whakawhiti-tenanti, ngā raraunga kāore i whiwhi whakaaetanga, rānei ngā ihirangi kua tohu he tūmataiti, ka whakakorehia te kohinga, ā, kāore e anga whakamua te hipanga whakangungu.", "code_comment1": "# Whakahaere i roto i te porowhita whakangungu (Kāore i te wehe)", "code_line1": "mō ia puranga i roto i ngā raraunga whakangungu:", @@ -81,7 +81,7 @@ }, "dual_layer": { "heading": "Hoahoanga Tractatus Papanga Rua", - "intro": "Kei raro i te whakahaere a Tractatus te AI kāinga i ngā ōrau e rua motuhake i te wā kotahi. Koinei te māramatanga hanganga e wehe ana i te huarahi SLL i ngā tauira kāore i whakahaerehia me ngā tātari haumaru tāpiri.", + "intro": "Kei raro i te whakahaere a Tractatus te Village AI i ngā ōrau e rua motuhake i te wā kotahi. Koinei te māramatanga hanganga e wehe ana i te huarahi SLL i ngā tauira kāore i whakahaerehia me ngā tātari haumaru tāpiri.", "layer_a_badge": "Rau A: Taketake", "layer_a_title": "Tractatus Kei Roto i te Tauira", "layer_a_desc": "I te wā o te whakangungu, ka whakamana e te BoundaryEnforcer ia kohinga raraunga. Ko te whakaritenga DPO e ārahi ana i ngā manakohanga ki te whanonga e whakahaerehia ana. 
Ka ako te tauira ki te whakaute i ngā rohe, ki te manako i ngā whakautu mārama, me te tuku i ngā whakataunga uara ki ngā tāngata.", @@ -104,12 +104,12 @@ }, "philosophy": { "heading": "Ngā Pūtake Arorau", - "intro": "Ka whai te whakahaere o te AI ā-whare i ngā tikanga whakaaro e whā, ā, ia tikanga ka kawe mai i tētahi mātāpono hanganga motuhake. Ehara ēnei i ngā tohutoro whakapaipai — ka huri ēnei hei whakataunga hoahoa tūturu.", + "intro": "Ka whai te whakahaere o te Village AI i ngā tikanga whakaaro e whā, ā, ia tikanga ka kawe mai i tētahi mātāpono hanganga motuhake. Ehara ēnei i ngā tohutoro whakapaipai — ka huri ēnei hei whakataunga hoahoa tūturu.", "berlin_title": "Isaiah Berlin — Te maha o ngā uara", - "berlin_desc": "He maha ngā uara, ā, i ētahi wā kāore e taea te whakakotahi. Ina taupatupatu te rangatiratanga ki te ōritetanga, kāore pea he otinga kotahi e tika ana. Ka whakaatu te AI kāinga i ngā kōwhiringa kāore he taumata, ā, ka tuhi i ngā mea e whakakorehia ana e ia kōwhiringa.", + "berlin_desc": "He maha ngā uara, ā, i ētahi wā kāore e taea te whakakotahi. Ina taupatupatu te rangatiratanga ki te ōritetanga, kāore pea he otinga kotahi e tika ana. Ka whakaatu te Village AI i ngā kōwhiringa kāore he taumata, ā, ka tuhi i ngā mea e whakakorehia ana e ia kōwhiringa.", "berlin_arch": "Whakaaturanga hoahoanga: Ka whakaatu te Kaiwhakarite Whiriwhiringa Rerekē i ngā whakawhitinga painga me ngā ngoikoretanga; kāore ia e whakatau i ēnei.", "wittgenstein_title": "Ludwig Wittgenstein — Ngā rohe o te reo", - "wittgenstein_desc": "Ka āhua te reo i ngā whakaaro ka taea te whakaaro me te whakaputa. Ko ētahi mea tino hira e aukati ana i te whakaputa pūnaha. E mōhio ana te AI kāinga ki ngā here o ngā tauira reo — otirā mō te pōuritanga, te tikanga ahurea, me ngā wheako ora.", + "wittgenstein_desc": "Ka āhua te reo i ngā whakaaro ka taea te whakaaro me te whakaputa. Ko ētahi mea tino hira e aukati ana i te whakaputa pūnaha. 
E mōhio ana te Village AI ki ngā here o ngā tauira reo — otirā mō te pōuritanga, te tikanga ahurea, me ngā wheako ora.", "wittgenstein_arch": "Whakaaturanga hoahoanga: Ka waiho e BoundaryEnforcer ngā whakataunga uara ki ngā tāngata, e whakaae ana ki ngā here o te tātai.", "indigenous_title": "Te Rangatiratanga o ngā Iwi Taketake — Ngā Raraunga hei Hononga", "indigenous_desc": "Te Mana Raraunga (Te Rangatiratanga Raraunga Māori), ngā Mātāpono CARE, me OCAP (Ngā Iwi Tuatahi o Kanata) e whakarato ana i ngā anga e kore ai te raraunga he rawa, engari he hononga. Ko te whakapapa (te rangahau whakapapa) he mea nō te hapori whānui, ehara i te mea nō ia tangata takitahi. Ko te whakaaetanga he tukanga ā-hapori, ehara i te pouaka tohu mō ia tangata takitahi.", @@ -133,7 +133,7 @@ }, "wisdom": { "heading": "Ngā Tuku Iho o te Mātauranga", - "intro": "Ka whakarato a Home AI i ngā tikanga mātauranga tekau mā toru hei whakamahinga mā ngā mema hei ārahi i te whanonga o te AI. Kua whakamanahia ia tikanga e te Pukapuka Pūtaiao Hinengaro o Stanford hei tohutoro mātauranga matua. He tūao, he mārama, ā, ka taea te huri.", + "intro": "Ka whakarato a Village AI i ngā tikanga mātauranga tekau mā toru hei whakamahinga mā ngā mema hei ārahi i te whanonga o te AI. Kua whakamanahia ia tikanga e te Pukapuka Pūtaiao Hinengaro o Stanford hei tohutoro mātauranga matua. He tūao, he mārama, ā, ka taea te huri.", "berlin_title": "Berlin: Te Uara Kanorau", "berlin_desc": "Whakaaturia ngā kōwhiringa me te kore whakarārangi; whakaae ki ngā mea e whakawātea ana ia kōwhiringa.", "stoic_title": "Stoika: Te ngākau mārie me te ātaahua", @@ -176,7 +176,7 @@ }, "infrastructure": { "heading": "Tūāpapa Whakangungu", - "intro": "AI ā-whare e whai ana i te tauira \"whakangungu ā-rohe, tuku ā-waho\". Kei te noho ngā taputapu whakangungu i te kāinga o te kaiwhakawhanake. Ka tukuna ngā taumaha o ngā tauira kua whakangungua ki ngā tūmau whakaputa mō te aromatawai. 
Mā konei ka iti ngā utu whakangungu, ā, ka mau tonu ngā raraunga whakangungu i raro i te mana ā-tinana.", + "intro": "Village AI e whai ana i te tauira \"whakangungu ā-rohe, tuku ā-waho\". Kei te noho ngā taputapu whakangungu i te kāinga o te kaiwhakawhanake. Ka tukuna ngā taumaha o ngā tauira kua whakangungua ki ngā tūmau whakaputa mō te aromatawai. Mā konei ka iti ngā utu whakangungu, ā, ka mau tonu ngā raraunga whakangungu i raro i te mana ā-tinana.", "local_title": "Whakangungu ā-rohe", "local_item1": "GPU kaiwhakamahi me te 24 GB VRAM mā te pouaka ā-waho", "local_item2": "Whakangāwari āta o QLoRA (e hāngai ana te whakatoha 4-bit ki te tahua VRAM)", @@ -193,7 +193,7 @@ }, "bias": { "heading": "Tuhipoka me te Whakamana i ngā Tohu Whakawhē", - "intro": "Kei te whakahaere a Home AI i te ao o te kōrero ā-whānau, ā, e kawe ana i ngā tūraru motuhake mō te hē whakaaro. Kua tuhia e ono ngā kāwai hē whakaaro, ā, kei roto he whakahau rapu, he tauira whakakore hē whakaaro, me ngā paearu aromātai.", + "intro": "Kei te whakahaere a Village AI i te ao o te kōrero ā-whānau, ā, e kawe ana i ngā tūraru motuhake mō te hē whakaaro. Kua tuhia e ono ngā kāwai hē whakaaro, ā, kei roto he whakahau rapu, he tauira whakakore hē whakaaro, me ngā paearu aromātai.", "family_title": "Rauropi Whānau", "family_desc": "Ko te whānau pūtau hei taunoa; ka whakaarohia hei paerewa ngā mātua o te ira kotahi, ngā whānau whakakotahi, me ngā mātua kotahi.", "elder_title": "Tūhonohono mō ngā kaumātua", @@ -220,7 +220,7 @@ }, "live_today": { "heading": "He aha te ora o tēnei rā?", - "intro": "Kei te whakahaere a Home AI i te wāhanga whakaputa ināianei me ngā āhuatanga whakahaere e whai ake nei. Ka whakahaerehia ēnei i raro i te rārangi whakahaere katoa mō ngā ratonga e ono.", + "intro": "Kei te whakahaere a Village AI i te wāhanga whakaputa ināianei me ngā āhuatanga whakahaere e whai ake nei. 
Ka whakahaerehia ēnei i raro i te rārangi whakahaere katoa mō ngā ratonga e ono.", "rag_title": "Āwhina i runga i te RAG", "rag_desc": "Ka tiki a te rapu ā-vector i ngā tuhinga hāngai, kua tātarihia e ngā whakaaetanga o ngā mema. E hāngai ana ngā whakautu ki ngā tuhinga kua tiki, ehara i te raraunga whakangungu anake.", "ocr_title": "Tuhipuka OCR", @@ -233,7 +233,7 @@ "limitations": { "heading": "Ngā here me ngā pātai tuwhera", "item1": "Kāore anō te whakangungu kia tīmata: Kua hoahoatia, kua tuhia hoki te hanganga SLL. Kei te tāuta ngā taputapu. Engari kāore anō kia whakangungua tētahi tauira. Ko ngā kī mō te whakahaere i te wā whakangungu he hoahoa hanganga, ehara i ngā hua aromatawai.", - "item2": "Whakaurunga herea: Ka whakahaere te AI kāinga i roto i ngā kaipānga whakawhanaunga e whā i runga i tētahi tūāpapa kotahi i hangaia e te kaiwhakawhanake anga. Kāore e taea te whānui i te whaihua o te whakahaere mehemea kāore he whakaurunga motuhake.", + "item2": "Whakaurunga herea: Ka whakahaere te Village AI i roto i ngā kaipānga whakawhanaunga e whā i runga i tētahi tūāpapa kotahi i hangaia e te kaiwhakawhanake anga. Kāore e taea te whānui i te whaihua o te whakahaere mehemea kāore he whakaurunga motuhake.", "item3": "Ngā ine i pūrongo a te rōpū anō: E pūrongo ana te rōpū kotahi i hanga i te pūnaha i ngā tatauranga mō te mahi me te haumaru. Kua whakamaheretia he arotake motuhake, engari kāore anō kia whakahaerehia.", "item4": "Te whakatinanatanga o ngā tikanga tuku iho: Ka taea te whakaiti pono i ngā tikanga whakaaro hōhonu kia noho hei tohutohu anake mō te anga? Ehara i te mea mā te kōwhiri a tētahi mema i te kōwhiringa 'Buddhist' e tohu ana kua mārama rānei, kua whai i ngā tikanga o te Buddhism. Ka mōrearea tēnei kia noho mata noa.", "item5": "Ehara i te mōhiotia te toitūtanga o te whakangungu: He pātai rangahau tuwhera mēnā ka ora tonu ngā here whakahaere i roto i ngā rau huringa whakangungu me te kore he ngoikore. 
Kua hoahoatia te kitenga huringa, engari kāore anō kia whakamātauria.", diff --git a/public/locales/mi/village-case-study.json b/public/locales/mi/village-case-study.json index aa25e39f..b5d3e9e1 100644 --- a/public/locales/mi/village-case-study.json +++ b/public/locales/mi/village-case-study.json @@ -36,7 +36,7 @@ "infra_desc": "Ngā tūmau whakaputa kei Aotearoa me te Uniana o Europi. Kāore ngā raraunga e whakawhiti i raro i te mana whakahaere o Amerika. Kāore ngā raraunga hapori e puta atu i te whakatakotoranga e hāngai ana ki a rātou.", "training_title": "Whakangungu e whakahaerehia ana e te hapori", "training_desc": "Te whakangāwari QLoRA ki ngā raraunga motuhake o te rohe, me te whai i ngā whakaaetanga me te takenga mai. Ka taea e ngā hapori te tango i ngā raraunga whakangungu, ā, ka whakaoho i te whakahou anō i te tauira.", - "link_note": "Mō tētahi whakamārama taipitopito mō te hanganga tauira, te huarahi whakangungu, me te whakaurunga whakahaere, tirohia Home AI / SLL: Sovereign Locally-Trained Language Model." + "link_note": "Mō tētahi whakamārama taipitopito mō te hanganga tauira, te huarahi whakangungu, me te whakaurunga whakahaere, tirohia Village AI / SLL: Sovereign Locally-Trained Language Model." }, "polycentric": { "heading": "Kaitiakitanga ā-pokapū maha", @@ -160,7 +160,7 @@ "heading": "Tūhura anō", "description": "Ruku hohonu ki te hanganga hangarau, pānuitia ngā rangahau, kia kite rānei i te papaanga Village e mahi ana.", "visit_village": "Haere ki te kāinga →", - "home_ai": "Mōdela Reo Rangatira →", + "village_ai": "Mōdela Reo Rangatira →", "research_paper": "Pepa Rangahau →", "research_details": "Ngā taipitopito rangahau →" } diff --git a/public/researcher.html b/public/researcher.html index e7690e02..fac046cd 100644 --- a/public/researcher.html +++ b/public/researcher.html @@ -127,7 +127,7 @@

      Development Context

      - Tractatus has been developed from April 2025 and is now in active production (11+ months). What began as a single-project demonstration has expanded to include production deployment at Village Home Trust and sovereign language model governance through Home AI. Observations derive from direct engagement with Claude Code (Anthropic Claude models, Sonnet 4.5 through Opus 4.6) across over 1,000 development sessions. This is exploratory research, not controlled study. + Tractatus has been developed from April 2025 and is now in active production (11+ months). What began as a single-project demonstration has expanded to include production deployment at Village Home Trust and sovereign language model governance through Village AI. Observations derive from direct engagement with Claude Code (Anthropic Claude models, Sonnet 4.5 through Opus 4.6) across over 1,000 development sessions. This is exploratory research, not controlled study.

      @@ -252,9 +252,9 @@
    • Sparse Autoencoders: Mechanistic interpretability through decomposition of polysemantic neurons into monosemantic features
    -

    Application to Village Home AI

    +

    Application to Village AI

    - The Village Home AI deployment uses QLoRA-fine-tuned Llama 3.1/3.2 models where steering vectors can be applied at inference time. This creates a two-layer governance architecture: Tractatus provides structural constraints on decision boundaries, while steering vectors address pre-reasoning mechanical biases within the model itself. Together, they represent governance that operates both outside and inside the model. + The Village AI deployment uses QLoRA-fine-tuned Llama 3.1/3.2 models where steering vectors can be applied at inference time. This creates a two-layer governance architecture: Tractatus provides structural constraints on decision boundaries, while steering vectors address pre-reasoning mechanical biases within the model itself. Together, they represent governance that operates both outside and inside the model.

    @@ -319,7 +319,7 @@
    - +
    @@ -328,8 +328,8 @@
    -

    - Home AI: Sovereign Governance Research Platform +

    + Village AI: Sovereign Governance Research Platform

    Status: Inference operational | Training pipeline in progress @@ -338,26 +338,26 @@

    -

    - Home AI represents a significant research milestone: full Tractatus governance embedded in a locally-trained, sovereign language model inference pipeline. This is the first deployment where governance operates inside the model serving layer rather than alongside an external API. +

    + Village AI represents a significant research milestone: full Tractatus governance embedded in a locally-trained, sovereign language model inference pipeline. This is the first deployment where governance operates inside the model serving layer rather than alongside an external API.

    -

    Two-Model Architecture

    +

    Two-Model Architecture

      -
    • Fast model (Llama 3.2 3B): Low-latency responses for routine queries, with governance pre-screening
    • -
    • Deep model (Llama 3.1 8B): Complex reasoning with full governance pipeline, including BoundaryEnforcer and PluralisticDeliberationOrchestrator
    • -
    • QLoRA fine-tuning: Parameter-efficient adaptation on local hardware, enabling community-specific model customisation without cloud dependency
    • +
    • Fast model (Llama 3.2 3B): Low-latency responses for routine queries, with governance pre-screening
    • +
    • Deep model (Llama 3.1 8B): Complex reasoning with full governance pipeline, including BoundaryEnforcer and PluralisticDeliberationOrchestrator
    • +
    • QLoRA fine-tuning: Parameter-efficient adaptation on local hardware, enabling community-specific model customisation without cloud dependency
    -

    Research Significance

    -

    - Home AI opens the research question of governance-inside-the-training-loop for community-controlled models. Training data never leaves the local infrastructure; governance rules shape model behaviour through both fine-tuning data curation and inference-time constraints. This creates a fundamentally different governance surface than API-mediated approaches. +

    Research Significance

    +

    + Village AI opens the research question of governance-inside-the-training-loop for community-controlled models. Training data never leaves the local infrastructure; governance rules shape model behaviour through both fine-tuning data curation and inference-time constraints. This creates a fundamentally different governance surface than API-mediated approaches.

    - Learn more about Home AI → + data-i18n="sections.village_ai.learn_more">Learn more about Village AI →
    @@ -1108,7 +1108,7 @@
    • Framework governs agenticgovernance.digital (11+ months continuous operation)
    • Village Home Trust production deployment: zero governance violations
    • -
    • Home AI sovereign inference governance: operational
    • +
    • Village AI sovereign inference governance: operational
    • Cultural DNA rules (inst_085-089) enforced through pre-commit hooks (4+ months operational)
    • Phase 5 integration: 100% complete (all 6 services, 203/203 tests passing)
    • Multilingual support: EN, DE, FR, Te Reo Maori
    • @@ -1145,7 +1145,7 @@

      ⚠️ Cross-Platform Consistency (Partial)

        -
      • Validated: Claude Code (Anthropic Claude, Sonnet 4.5 through Opus 4.6) and Home AI (Llama 3.1/3.2 via QLoRA)
      • +
      • Validated: Claude Code (Anthropic Claude, Sonnet 4.5 through Opus 4.6) and Village AI (Llama 3.1/3.2 via QLoRA)
      • Unknown: Generalizability to Copilot, GPT-4, AutoGPT, LangChain, CrewAI, other open models
      • Research need: Broader cross-platform validation studies beyond Claude and Llama families
      diff --git a/public/timeline.html b/public/timeline.html index 1e9ba7b0..5186374b 100644 --- a/public/timeline.html +++ b/public/timeline.html @@ -241,7 +241,7 @@ December 2025

      Village Case Study

      - The Village platform — a community-governed digital space — became the primary production deployment of Tractatus governance. Home AI, the platform's locally-scoped language model, applies all six governance services to every user interaction: RAG-based help, document OCR, story assistance, and AI memory transparency. + The Village platform — a community-governed digital space — became the primary production deployment of Tractatus governance. Village AI, the platform's locally-scoped language model, applies all six governance services to every user interaction: RAG-based help, document OCR, story assistance, and AI memory transparency.

      A formal case study was published documenting the deployment, including honest limitations: early-stage federated deployment, self-reported metrics, operator-developer overlap. Independent validation was scheduled for 2026. @@ -281,7 +281,7 @@ February 2026

      Current State

      - The framework has reached 800 commits across 16 months. Six governance services operate in production. The Village platform provides the primary evidence base, with Home AI applying Tractatus governance to every interaction. + The framework has reached 800 commits across 16 months. Six governance services operate in production. The Village platform provides the primary evidence base, with Village AI applying Tractatus governance to every interaction.

      Open questions: Does the architecture scale beyond single-tenant deployment? Can the governance overhead be reduced below 5% while maintaining coverage? Does the apparent safety-performance alignment hold under controlled measurement?

      diff --git a/public/home-ai.html b/public/village-ai.html similarity index 93% rename from public/home-ai.html rename to public/village-ai.html index 17ca991f..41c98a37 100644 --- a/public/home-ai.html +++ b/public/village-ai.html @@ -1,20 +1,20 @@ - + - Home AI — Sovereign Locally-Trained Language Model | Tractatus - + Village AI — Sovereign Locally-Trained Language Model | Tractatus + - - + + - + @@ -44,7 +44,7 @@
      1. Home
      2. /
      3. -
      4. Home AI
      5. +
      6. Village AI
      @@ -56,13 +56,13 @@
      SOVEREIGN LOCALLY-TRAINED LANGUAGE MODEL
      -

      Home AI

      +

      Village AI

      A language model where the community controls the training data, the model weights, and the governance rules. Not just governed inference — governed training.

      - Status: Home AI operates in production for inference. The sovereign training pipeline is designed and documented; hardware is being installed. Training has not yet begun. This page describes both current capability and intended architecture. + Status: Village AI operates in production for inference. The sovereign training pipeline is designed and documented; hardware is being installed. Training has not yet begun. This page describes both current capability and intended architecture.

      @@ -124,7 +124,7 @@

      Two-Model Architecture

      - Home AI uses two models of different sizes, routed by task complexity. This is not a fallback mechanism — each model is optimised for its role. + Village AI uses two models of different sizes, routed by task complexity. This is not a fallback mechanism — each model is optimised for its role.

      @@ -167,7 +167,7 @@ All communities

      - Trained on platform documentation, philosophy, feature guides, and FAQ content. Provides the foundational understanding of how Village works, what Home AI's values are, and how to help members navigate the platform. + Trained on platform documentation, philosophy, feature guides, and FAQ content. Provides the foundational understanding of how Village works, what Village AI's values are, and how to help members navigate the platform.

      Update frequency: weekly during beta, quarterly at GA. Training method: QLoRA fine-tuning. @@ -180,7 +180,7 @@ Per community

      - Each community trains a lightweight LoRA adapter on its own content — stories, documents, photos, and events that members have explicitly consented to include. This allows Home AI to answer questions like "What stories has Grandma shared?" without accessing any other community's data. + Each community trains a lightweight LoRA adapter on its own content — stories, documents, photos, and events that members have explicitly consented to include. This allows Village AI to answer questions like "What stories has Grandma shared?" without accessing any other community's data.

      Adapters are small (50–100MB). Consent is per-content-item. Content marked "only me" is never included regardless of consent. Training uses DPO (Direct Preference Optimization) for value alignment. @@ -207,7 +207,7 @@

      Governance During Training

      - This is the central research contribution. Most AI governance frameworks operate at inference time — they filter or constrain responses after the model has already been trained. Home AI embeds governance inside the training loop. + This is the central research contribution. Most AI governance frameworks operate at inference time — they filter or constrain responses after the model has already been trained. Village AI embeds governance inside the training loop.

      This follows Christopher Alexander's principle of Not-Separateness: governance is woven into the training architecture, not applied afterward. The BoundaryEnforcer validates every training batch before the forward pass. If a batch contains cross-tenant data, data without consent, or content marked as private, the batch is rejected and the training step does not proceed. @@ -248,7 +248,7 @@

      Dual-Layer Tractatus Architecture

      - Home AI is governed by Tractatus at two distinct layers simultaneously. This is the architectural insight that distinguishes the SLL approach from both ungoverned models and bolt-on safety filters. + Village AI is governed by Tractatus at two distinct layers simultaneously. This is the architectural insight that distinguishes the SLL approach from both ungoverned models and bolt-on safety filters.

      @@ -306,14 +306,14 @@

      Philosophical Foundations

      - Home AI's governance draws from four philosophical traditions, each contributing a specific architectural principle. These are not decorative references — they translate into concrete design decisions. + Village AI's governance draws from four philosophical traditions, each contributing a specific architectural principle. These are not decorative references — they translate into concrete design decisions.

      Isaiah Berlin — Value Pluralism

      - Values are genuinely plural and sometimes incompatible. When freedom conflicts with equality, there may be no single correct resolution. Home AI presents options without hierarchy and documents what each choice sacrifices. + Values are genuinely plural and sometimes incompatible. When freedom conflicts with equality, there may be no single correct resolution. Village AI presents options without hierarchy and documents what each choice sacrifices.

      Architectural expression: PluralisticDeliberationOrchestrator presents trade-offs; it does not resolve them.

      @@ -321,7 +321,7 @@

      Ludwig Wittgenstein — Language Boundaries

      - Language shapes what can be thought and expressed. Some things that matter most resist systematic expression. Home AI acknowledges the limits of what language models can capture — particularly around grief, cultural meaning, and lived experience. + Language shapes what can be thought and expressed. Some things that matter most resist systematic expression. Village AI acknowledges the limits of what language models can capture — particularly around grief, cultural meaning, and lived experience.

      Architectural expression: BoundaryEnforcer defers values decisions to humans, acknowledging limits of computation.

      @@ -371,7 +371,7 @@

      Layer 3: Adopted Wisdom Traditions

      - Individual members and communities can adopt principles from wisdom traditions to influence how Home AI frames responses. These are voluntary, reversible, and transparent. They influence presentation, not content access. Multiple traditions can be adopted simultaneously; conflicts are resolved by the member, not the AI. + Individual members and communities can adopt principles from wisdom traditions to influence how Village AI frames responses. These are voluntary, reversible, and transparent. They influence presentation, not content access. Multiple traditions can be adopted simultaneously; conflicts are resolved by the member, not the AI.

      Enforcement: framing hints in response generation. Override always available.

      @@ -382,7 +382,7 @@

      Wisdom Traditions

      - Home AI offers thirteen wisdom traditions that members can adopt to guide AI behaviour. Each tradition has been validated against the Stanford Encyclopedia of Philosophy as the primary scholarly reference. Adoption is voluntary, transparent, and reversible. + Village AI offers thirteen wisdom traditions that members can adopt to guide AI behaviour. Each tradition has been validated against the Stanford Encyclopedia of Philosophy as the primary scholarly reference. Adoption is voluntary, transparent, and reversible.

      @@ -484,7 +484,7 @@

      Training Infrastructure

      - Home AI follows a "train local, deploy remote" model. The training hardware sits in the developer's home. Trained model weights are deployed to production servers for inference. This keeps training costs low and training data under physical control. + Village AI follows a "train local, deploy remote" model. The training hardware sits in the developer's home. Trained model weights are deployed to production servers for inference. This keeps training costs low and training data under physical control.

      @@ -521,7 +521,7 @@

      Bias Documentation and Verification

      - Home AI operates in the domain of family storytelling, which carries specific bias risks. Six bias categories have been documented with detection prompts, debiasing examples, and evaluation criteria. + Village AI operates in the domain of family storytelling, which carries specific bias risks. Six bias categories have been documented with detection prompts, debiasing examples, and evaluation criteria.

      @@ -586,7 +586,7 @@

      What's Live Today

      - Home AI currently operates in production with the following governed features. These run under the full six-service governance stack. + Village AI currently operates in production with the following governed features. These run under the full six-service governance stack.

      @@ -620,7 +620,7 @@
    • - Limited deployment: Home AI operates across four federated tenants within one platform built by the framework developer. Governance effectiveness cannot be generalised without independent deployments. + Limited deployment: Village AI operates across four federated tenants within one platform built by the framework developer. Governance effectiveness cannot be generalised without independent deployments.
    • diff --git a/public/village-case-study.html b/public/village-case-study.html index 431f926b..11a5f01b 100644 --- a/public/village-case-study.html +++ b/public/village-case-study.html @@ -164,7 +164,7 @@
    • - For a detailed account of the model architecture, training approach, and governance integration, see Home AI / SLL: Sovereign Locally-Trained Language Model. + For a detailed account of the model architecture, training approach, and governance integration, see Village AI / SLL: Sovereign Locally-Trained Language Model.

      @@ -445,9 +445,9 @@ data-i18n="cta.visit_village"> Visit the Village → - + data-i18n="cta.village_ai"> Sovereign Language Model → Why This Matters Beyond Coding -

      I am building a platform called Village — sovereign community spaces where families share stories, preserve memories, and maintain their cultural heritage. Part of the long-term vision includes Home AI: locally-trained small language models that help members write stories, summarize discussions, and triage content for moderation.

      +

      I am building a platform called Village — sovereign community spaces where families share stories, preserve memories, and maintain their cultural heritage. Part of the long-term vision includes Village AI: locally-trained small language models that help members write stories, summarize discussions, and triage content for moderation.

      -

      The herber incident is a microcosm of what will happen inside Villages when Home AI is deployed.

      +

      The herber incident is a microcosm of what will happen inside Villages when Village AI is deployed.

      -

      Consider: a family matriarch has had three good experiences with Home AI summarizing her stories. The summaries were accurate, respectful, well-structured. On the fourth request, the AI summarizes a deceased member's story but omits a whakapapa detail that the matriarch, had she read the original, would have noticed. But she does not read the original. Why would she? The last three summaries were fine.

      +

      Consider: a family matriarch has had three good experiences with Village AI summarizing her stories. The summaries were accurate, respectful, well-structured. On the fourth request, the AI summarizes a deceased member's story but omits a whakapapa detail that the matriarch, had she read the original, would have noticed. But she does not read the original. Why would she? The last three summaries were fine.

      The omission becomes embedded in the community's collective memory. No one notices because the summary looked right. The AI was confident. The matriarch was busy. The family moves on with an incomplete version of their own history.

      @@ -97,7 +97,7 @@ const post = {

      What We Are Doing About It

      -

      Our Home AI governance framework — documented in detail at agenticgovernance.digital — was already designed to address many of these risks. The Tractatus framework embeds 31 governance rules at point-of-execution. The BoundaryEnforcer validates every training step before execution. Christopher Alexander's architectural principles ensure governance is inside the training loop, not bolted on afterward.

      +

      Our Village AI governance framework — documented in detail at agenticgovernance.digital — was already designed to address many of these risks. The Tractatus framework embeds 31 governance rules at point-of-execution. The BoundaryEnforcer validates every training step before execution. Christopher Alexander's architectural principles ensure governance is inside the training loop, not bolted on afterward.

      But the herber incident revealed gaps that we had not yet addressed:

      @@ -105,7 +105,7 @@ const post = {

      Gap 2: Confidence scales with capability, not correctness. The 5-10% governance overhead we measure is computational cost. It does not measure whether the governance rules themselves are correct. A system that enforces the wrong rules with 100% reliability is worse than one that enforces the right rules with 95% reliability — because the first gives false confidence.

      -

      Gap 3: Human verification erodes with trust. Our verification framework includes human review sampling: 100% for flagged content, 25% for grief narratives, 5% random. But as the KPMG research shows, 66% of people skip verification. The better Home AI performs, the less carefully humans will review its output.

      +

      Gap 3: Human verification erodes with trust. Our verification framework includes human review sampling: 100% for flagged content, 25% for grief narratives, 5% random. But as the KPMG research shows, 66% of people skip verification. The better Village AI performs, the less carefully humans will review its output.

      Gap 4: "Dry run confirms" does not mean "the action is safe." Validation that uses the same flawed model as the destructive operation will confirm the operation every time. Independent verification requires independent logic.

      @@ -114,7 +114,7 @@ const post = {
      • Mandatory friction for irreversible actions — regardless of AI confidence level, any action that modifies community content irreversibly requires human confirmation with the original content visible
      • Original-first architecture — AI summaries never replace originals; the original must always be accessible, linked, and primary
-      • Error surfacing — monthly transparency reports showing what Home AI got wrong, building calibrated trust rather than blind trust
+      • Error surfacing — monthly transparency reports showing what Village AI got wrong, building calibrated trust rather than blind trust
      • Independent verification layers — each governance component must verify using logic that does not share assumptions with other components
      • Explicit uncertainty expression — "I don't know" and "I'm uncertain about..." as first-class outputs, measured alongside accuracy
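The first two mechanisms in the list above combine naturally. As a minimal sketch — all names here are hypothetical, not the actual implementation — mandatory friction means the confidence score is deliberately ignored for irreversible actions, and original-first means the original content always travels with the confirmation request:

```javascript
// Hypothetical sketch — function and field names are illustrative only.
function gateIrreversibleAction(action, aiConfidence, confirm) {
  if (!action.irreversible) {
    return { allowed: true, humanConfirmed: false };
  }
  // A 99%-confident AI still needs a human who has seen the original:
  // `aiConfidence` is received but intentionally never consulted.
  const humanConfirmed = confirm({
    description: action.description,
    original: action.originalContent, // shown verbatim, never only a summary
  });
  return { allowed: humanConfirmed, humanConfirmed };
}
```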
      @@ -135,7 +135,7 @@ const post = {

      Epistemic humility in language models. OpenAI's research shows models hallucinate because training rewards confident guessing. Can models be trained to express genuine uncertainty? Not "I think this might be..." (a hedge that still implies knowledge) but "I have no information about this and I am guessing" (an honest statement of epistemic limits)?

-      The 75%-25% ratio. MIT GOV/LAB (2025) found that a 75%-human/25%-AI ratio generated the greatest citizen acceptance in participatory governance. Does this ratio hold for community AI? Should Home AI be explicitly positioned as a contributor, never as an authority — and should the UI always show the human-to-AI ratio of any output?
+      The 75%-25% ratio. MIT GOV/LAB (2025) found that a 75%-human/25%-AI ratio generated the greatest citizen acceptance in participatory governance. Does this ratio hold for community AI? Should Village AI be explicitly positioned as a contributor, never as an authority — and should the UI always show the human-to-AI ratio of any output?

      If you are a researcher working on any of these questions, or if you are building community AI systems and grappling with the same problems, I would very much like to hear from you. The Village project is committed to open governance documentation — everything described here is available at agenticgovernance.digital.

      @@ -173,11 +173,11 @@ const post = {

      John Stroh is the founder of the Village platform (mysovereignty.digital) and the agentic governance research project (agenticgovernance.digital).

-      The Home AI governance framework is open source and available at agenticgovernance.digital.
+      The Village AI governance framework is open source and available at agenticgovernance.digital.
       `,
       excerpt: 'At 11pm on a Friday, my AI coding assistant nearly locked me out of my own community. The analysis was wrong but looked right — with every surface marker of thoroughness. This is a story about the psychological dimension of AI over-trust, and why it matters more than the technical one.',
       status: 'published',
       published_at: new Date('2026-02-08T12:00:00Z'),
-      tags: ['ai-safety', 'automation-bias', 'over-trust', 'home-ai', 'governance', 'research'],
+      tags: ['ai-safety', 'automation-bias', 'over-trust', 'village-ai', 'governance', 'research'],
       moderation: {
         ai_analysis: null,
         human_reviewer: 'john-stroh',
diff --git a/scripts/publish-steering-vectors-blog-post.js b/scripts/publish-steering-vectors-blog-post.js
index 7c3f6440..e0883a8c 100644
--- a/scripts/publish-steering-vectors-blog-post.js
+++ b/scripts/publish-steering-vectors-blog-post.js
@@ -59,7 +59,7 @@ const post = {

      Sovereign local deployment — running open-weight models like Llama on your own hardware — provides full access to model weights, intermediate activations, and per-layer analysis. Every steering technique described above is architecturally available.

-      The Village Home AI platform, using QLoRA-fine-tuned Llama 3.1/3.2 models with a two-tier training architecture, is structurally positioned to apply these techniques. The paper proposes a four-phase implementation path integrating steering vectors into the existing training pipeline and Tractatus governance framework.
+      The Village platform's Village AI, using QLoRA-fine-tuned Llama 3.1/3.2 models with a two-tier training architecture, is structurally positioned to apply these techniques. The paper proposes a four-phase implementation path integrating steering vectors into the existing training pipeline and Tractatus governance framework.

      Who Steers? The Governance Question

      @@ -106,7 +106,7 @@ const post = {
       excerpt: 'Some AI biases fire before reasoning engages — like a driver reaching for the wrong indicator stalk. Prompt-level fixes cannot reach them. Steering vector techniques can, but only if you have access to model weights. This is the structural advantage of sovereign deployment — and it raises the question: who decides what bias to correct?',
       status: 'published',
       published_at: new Date('2026-02-09T12:00:00Z'),
-      tags: ['steering-vectors', 'mechanical-bias', 'sovereign-ai', 'home-ai', 'debiasing', 'governance', 'research'],
+      tags: ['steering-vectors', 'mechanical-bias', 'sovereign-ai', 'village-ai', 'debiasing', 'governance', 'research'],
       moderation: {
         ai_analysis: null,
         human_reviewer: 'john-stroh',
diff --git a/scripts/publish-taonga-governance-blog-post.js b/scripts/publish-taonga-governance-blog-post.js
index ad56855e..74028528 100644
--- a/scripts/publish-taonga-governance-blog-post.js
+++ b/scripts/publish-taonga-governance-blog-post.js
@@ -36,7 +36,7 @@ const post = {

      Full paper: Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models (STO-RES-0010)

      `,
       excerpt: 'Our steering vectors paper treated bias correction as a platform affordance. Critique revealed a deeper question: whose norms do steering vectors enforce? This companion paper proposes polycentric governance. Draft awaiting Māori peer review.',
-      tags: ['taonga', 'polycentric-governance', 'sovereign-ai', 'steering-vectors', 'indigenous-data-sovereignty', 'tikanga', 'home-ai', 'research'],
+      tags: ['taonga', 'polycentric-governance', 'sovereign-ai', 'steering-vectors', 'indigenous-data-sovereignty', 'tikanga', 'village-ai', 'research'],
       status: 'published',
       featured: false,
       publishedAt: new Date('2026-02-09T14:00:00Z'),
diff --git a/scripts/seed-blog-posts.js b/scripts/seed-blog-posts.js
index c4488c9a..23981aea 100644
--- a/scripts/seed-blog-posts.js
+++ b/scripts/seed-blog-posts.js
@@ -98,7 +98,7 @@ const posts = [

      The most significant architectural evolution was recognising that these services must coordinate through mutual validation ("Deep Interlock" in Alexander's terms). A single-service bypass doesn't compromise the whole system — multiple services must be circumvented simultaneously, which is exponentially harder.

      The Village Case Study

-      The Village platform — a community platform with a sovereign locally-trained language model (SLL) called Home AI — became the primary production test of the framework. Every user interaction with Home AI passes through all six governance services before a response is generated.
+      The Village platform — a community platform with a sovereign locally-trained language model (SLL) called Village AI — became the primary production test of the framework. Every user interaction with Village AI passes through all six governance services before a response is generated.
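The "all services must pass" shape described above can be sketched as follows. The service names and return shape here are placeholders, not the six real governance services — the point is the Deep Interlock pattern: every service applies its own check, and a response is generated only when all of them approve, so bypassing one service authorises nothing.

```javascript
// Hypothetical sketch of mutual validation ("Deep Interlock").
// Each service returns its own verdict; a single rejection blocks the response.
function govern(interaction, services) {
  const verdicts = services.map((service) => service.check(interaction));
  const failures = verdicts.filter((verdict) => !verdict.ok);
  return failures.length === 0
    ? { allowed: true }
    : { allowed: false, reasons: failures.map((verdict) => verdict.reason) };
}
```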

      Observed metrics from this deployment:

diff --git a/scripts/update-cache-version.js b/scripts/update-cache-version.js
index c1419962..8ce2c32b 100644
--- a/scripts/update-cache-version.js
+++ b/scripts/update-cache-version.js
@@ -53,7 +53,7 @@ const HTML_FILES = [
   'public/case-submission.html',
   'public/koha.html',
   'public/check-version.html',
-  'public/home-ai.html',
+  'public/village-ai.html',
   'public/architecture.html',
   'public/village-case-study.html',
   'public/architectural-alignment.html',