
Overview


What Is the Village

Platform Purpose

The Village is a member-owned platform for whānau, marae, clubs, and community organisations. Each community gets its own isolated tenant with sovereign data storage, AI-assisted features, and governance-protected privacy. The platform supports te reo Māori throughout.

All AI processing runs on the platform's own infrastructure: a locally fine-tuned Llama model with no data sent to external AI providers. Communities operate with full data ownership and can withdraw consent at any time.

Deployment Facts

• Duration: 18+ months in production
• Tenant Model: Multi-tenant (multiple communities)
• AI Model: Sovereign Llama (QLoRA fine-tuned)
• AI Features: 4 governed features live
• Infrastructure: NZ + EU (no US dependency)

Sovereign AI Architecture


The Village runs its own language model: not an API call to a US hyperscaler, but a locally fine-tuned model where the training data, model weights, and inference pipeline all remain under community control.


Local Language Model


Llama 3.1 8B and Llama 3.2 3B, fine-tuned with QLoRA on community-specific data. All inference runs on the platform's own GPU infrastructure.


Sovereign Infrastructure


Production servers in New Zealand and the EU. No data transits US jurisdiction. Community data never leaves the deployment it belongs to.


Community-Controlled Training


QLoRA fine-tuning on domain-specific data with consent tracking and provenance. Communities can withdraw training data and trigger model retraining.
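The withdraw-and-retrain loop described above can be pictured as a minimal consent registry. This is an illustrative stdlib sketch, not the Village's actual implementation; the `ConsentRegistry` and `Contribution` names are assumptions made for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Contribution:
    """One piece of community training data with its provenance."""
    item_id: str
    community: str
    source: str  # provenance: where the data came from

    consented: bool = True

@dataclass
class ConsentRegistry:
    """Tracks consent for training data; withdrawal flags the model for retraining."""
    contributions: dict = field(default_factory=dict)
    retraining_needed: bool = False

    def record(self, c: Contribution) -> None:
        self.contributions[c.item_id] = c

    def withdraw(self, item_id: str) -> None:
        # Withdrawal revokes consent and sets a retraining flag,
        # so the next fine-tuning run excludes this item.
        self.contributions[item_id].consented = False
        self.retraining_needed = True

    def training_set(self) -> list:
        # Only consented items are ever passed to fine-tuning.
        return [c for c in self.contributions.values() if c.consented]

registry = ConsentRegistry()
registry.record(Contribution("story-1", "marae-a", "oral-history upload"))
registry.record(Contribution("story-2", "marae-a", "newsletter archive"))
registry.withdraw("story-2")
print(len(registry.training_set()), registry.retraining_needed)  # 1 True
```

The essential property is that withdrawal is not a soft preference: once the flag is set, the withdrawn item can never reach the training pipeline again.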

For a detailed account of the model architecture, training approach, and governance integration, see Home AI / SLL: Sovereign Locally-Trained Language Model.


Polycentric Governance


The distinctive contribution of the Village is its governance architecture. Rather than a single operator making all decisions, the platform implements polycentric governance: multiple co-equal authorities that share structural control over how AI is used.


Co-Equal Authority


Communities maintain architectural co-governance — not just consultation rights, but structural authority over how their data is used. Drawn from te ao Māori concepts of rangatiratanga (self-determination) and kaitiakitanga (guardianship).


Right of Non-Participation


Members can opt out of any AI feature without losing access to the platform. AI governance defers to human judgment on values questions and never overrides community decisions.


Taonga-Centred Design


Cultural treasures (taonga) are governed as first-class objects with provenance tracking, withdrawal rights, and community authority over how they appear in AI contexts.


Tenant-Scoped Isolation


Each community operates in complete data isolation. No cross-tenant data sharing. Each tenant's governance decisions apply only within their own boundary.
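A minimal sketch of what tenant-scoped isolation means in practice, assuming a simple keyed store (the `TenantStore` name and shape are hypothetical, not the platform's actual API):

```python
class TenantStore:
    """Per-tenant data store: every read is scoped to exactly one tenant."""

    def __init__(self):
        self._data = {}  # tenant_id -> {key: value}

    def put(self, tenant_id, key, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        # A tenant can only ever see its own partition; there is no
        # cross-tenant query path at all.
        return self._data.get(tenant_id, {}).get(key)

store = TenantStore()
store.put("marae-a", "story", "Our founding story")
print(store.get("marae-a", "story"))  # Our founding story
print(store.get("club-b", "story"))   # None (complete isolation)
```

The design choice is that isolation is structural, not a permission check: there is simply no code path that reads across tenant boundaries.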

The research foundation is described in Taonga-Centred Steering Governance: Polycentric AI for Indigenous Data Sovereignty.


How Governance Works in Practice


When a member uses any AI feature, the request passes through six governance checks before a response is generated. Each check is independent and can block or modify the request.

1. Member request received
   A member asks for help, requests OCR, or uses story assistance.

2. Values boundary check
   Is this a values question that requires human judgment? If so, the AI defers rather than answering.

3. Intent validation
   Does the request conflict with stored governance rules or attempt prompt injection? Cross-references against known instruction sets.

4. Context and session health
   Is the session within acceptable bounds? Monitors for context pressure and triggers graceful handoff when needed.

5. Permission-filtered retrieval and response
   The sovereign Llama model generates a response using RAG context filtered by the member's permissions. All processing stays on-infrastructure.

6. Scope verification
   Is the response appropriate to what was asked? Detects scope creep and blocks responses that exceed the original request.

7. Delivery with attribution
   Response delivered to the member with source attribution. Every step is logged for audit.
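The check-then-generate flow can be sketched as a short pipeline in which the first failing check blocks the request before any model call happens. The check functions and thresholds here are illustrative stand-ins, not the Village's real governance services:

```python
# Each check returns (ok, reason). Checks run in order, and a failure
# short-circuits the pipeline before the model is ever invoked.
def values_boundary(req):
    return ("values" not in req["intent"], "defers values questions to humans")

def intent_validation(req):
    return ("ignore previous" not in req["text"].lower(), "possible prompt injection")

def context_pressure(req):
    return (req["session_turns"] < 50, "session handoff triggered")

CHECKS = [values_boundary, intent_validation, context_pressure]

def governed_request(req):
    for check in CHECKS:
        ok, reason = check(req)
        if not ok:
            return {"blocked": True, "reason": reason}
    # Only now would the sovereign model generate a response.
    return {"blocked": False}

print(governed_request({"intent": "help",
                        "text": "How do I upload a photo?",
                        "session_turns": 3}))  # {'blocked': False}
```

Because each check is an independent function, any single authority can add or tighten a check without touching the others, which is the property the polycentric design relies on.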

What the Platform Delivers
Help Centre

Members ask questions in natural language and get answers drawn from help content, stories, and documentation, filtered by their permissions.

Governance: Values boundary check prevents AI from making judgments; intent validation blocks prompt injection attempts.
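One way to picture permission filtering is that access control runs before retrieval, never after generation. A toy sketch, where the documents, group names, and keyword matching are invented for illustration:

```python
# Each document carries an access list; retrieval intersects it with the
# member's groups BEFORE anything can enter the model's context window.
DOCS = [
    {"text": "How to upload photos", "groups": {"members"}},
    {"text": "Committee meeting minutes", "groups": {"committee"}},
]

def retrieve(query, member_groups):
    # Permission filter first; only allowed documents are candidates for
    # similarity search (a trivial keyword match stands in here).
    allowed = [d for d in DOCS if d["groups"] & member_groups]
    words = query.lower().split()
    return [d["text"] for d in allowed
            if any(w in d["text"].lower() for w in words)]

print(retrieve("upload photos", {"members"}))    # ['How to upload photos']
print(retrieve("meeting minutes", {"members"}))  # [] (no permission)
```

Filtering before retrieval means a member's question can never surface content they were not allowed to read, even indirectly through the model's answer.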
Document OCR

Upload a document and get the text extracted automatically. Useful for digitising letters, certificates, and historical records.

Governance: Requires explicit consent before processing. All operations are audit-logged with full provenance.
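The consent-and-audit contract can be sketched as a guard around the OCR call. Function names and log shape are hypothetical; the point is that refusals are logged exactly like successes:

```python
import datetime

AUDIT_LOG = []

class ConsentRequired(Exception):
    pass

def run_ocr(document, member, consented):
    # No processing without explicit consent; the refusal itself is audited.
    entry = {"op": "ocr", "doc": document, "member": member,
             "time": datetime.datetime.now(datetime.timezone.utc).isoformat()}
    if not consented:
        entry["outcome"] = "refused: no consent"
        AUDIT_LOG.append(entry)
        raise ConsentRequired(document)
    entry["outcome"] = "processed"
    AUDIT_LOG.append(entry)
    return f"[extracted text of {document}]"

run_ocr("certificate.jpg", "aroha", consented=True)
print(len(AUDIT_LOG), AUDIT_LOG[0]["outcome"])  # 1 processed
```

Logging the refusal as well as the success is what makes the trail useful for provenance: the dashboard can show not only what was processed, but what was declined and why.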

Story Assistance

AI-assisted writing suggestions for community stories and family histories. Helps with structure, prompts, and gentle editing.

Governance: Values boundary check prevents inappropriate content suggestions; scope verification ensures the AI stays within what was asked.

AI Memory Transparency

Members can see, edit, and delete what the AI "remembers" about them. Full audit dashboard shows every AI interaction.

Governance: Multi-stakeholder consent required. Persistence decisions classified and auditable. Members control their own data.
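A minimal sketch of member-controlled memory with an audit trail, assuming a hypothetical `MemoryStore` (the real dashboard is richer than this):

```python
class MemoryStore:
    """Member-controlled AI memory: view, edit, and delete, all audited."""

    def __init__(self):
        self.memories = {}  # member -> {key: text}
        self.audit = []     # every access is recorded for the dashboard

    def _log(self, member, action, key):
        self.audit.append((member, action, key))

    def remember(self, member, key, text):
        self.memories.setdefault(member, {})[key] = text
        self._log(member, "write", key)

    def view(self, member):
        self._log(member, "view", "*")
        return dict(self.memories.get(member, {}))

    def delete(self, member, key):
        # Deletion is immediate, and the deletion itself leaves a trail entry.
        self.memories.get(member, {}).pop(key, None)
        self._log(member, "delete", key)

store = MemoryStore()
store.remember("aroha", "interests", "whakapapa research")
store.delete("aroha", "interests")
print(store.view("aroha"), len(store.audit))  # {} 3
```

Deleting a memory removes it from the model's reach but not from the audit trail, so members keep a verifiable record of what the AI once held.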

Honest Limitations

This case study documents preliminary evidence from a production multi-tenant deployment. We are transparent about the following limitations:

• Small Scale: The Village currently serves a small number of community tenants. Generalisability to larger deployments or different community types is unknown.
• Self-Reported Metrics: No independent verification of logged data has been conducted.
• Operator-Developer Overlap: Framework developer also operates the Village (conflict of interest).
• Limited Adversarial Testing: No formal red-team evaluation has been conducted.
• Voluntary Invocation: AI could theoretically bypass governance if not configured to use it.
What This Demonstrates

Evidence Supports

• Sovereign AI deployment is technically feasible for small community organisations
• Polycentric governance can operate in production without prohibitive overhead
• Multi-tenant isolation with per-community governance is achievable
• Governance violations are detectable and auditable
• The framework learns from failures (documented incident responses)
Evidence Does NOT Support

• Framework effectiveness at scale (thousands of concurrent users)
• Generalisability across different AI systems or model architectures
• Resistance to sophisticated adversarial attacks
• Regulatory sufficiency (EU AI Act compliance untested)
Explore Further

Dive deeper into the technical architecture, read the research, or see the Village platform in action.