What is Home AI?
Home AI is the practical implementation of Tractatus governance within the Village platform — a community-owned digital space where members share stories, documents, and family histories. Unlike cloud-hosted AI assistants, Home AI operates under the principle of digital sovereignty: the community's data and the AI's behaviour are governed by the community itself, not by a remote provider.
The term "SLL" (Sovereign Locally-trained Language Model) describes the architectural goal: a language model whose training data, inference, and governance all remain under local control. In practice, Home AI currently uses a hybrid approach — local Llama models for English-language operations and Claude Haiku via API for non-English languages — with a roadmap toward fully local inference as hardware and model capabilities allow.
What distinguishes Home AI from other AI assistants is not the model itself, but the governance layer around it. Every interaction — whether a help query, document OCR, story suggestion, or AI-generated summary — passes through the full Tractatus governance stack before any response reaches the user.
The Governance Stack
Each Home AI interaction traverses six governance services in sequence. This is not optional middleware — it operates in the critical execution path, meaning a response cannot be generated without passing through all checks.
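The sequence described above can be sketched as a minimal pipeline in which each service either passes the interaction along or blocks it outright. All names and structures here are illustrative, not the platform's actual code:

```python
from dataclasses import dataclass, field

class GovernanceViolation(Exception):
    """Raised by any service to block the response entirely."""

@dataclass
class Interaction:
    query: str
    trace: list = field(default_factory=list)

class GovernanceStack:
    """Runs the services in order; a response is generated only if all pass."""
    def __init__(self, services):
        # Ordered list of (name, check) pairs, e.g. BoundaryEnforcer first.
        self.services = services

    def run(self, interaction):
        for name, check in self.services:
            if not check(interaction):
                raise GovernanceViolation(name)  # critical path: no bypass
            interaction.trace.append(name)
        return interaction
```

The key design property is that the stack sits in the critical execution path: there is no code path from query to response that skips a service.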
1. BoundaryEnforcer
Detects whether a user query involves values decisions (privacy trade-offs, ethical questions, cultural sensitivity) and blocks the AI from responding autonomously. These are deferred to human moderators. The boundary between "technical question" and "values question" is defined by community-specific rules, not by the AI's judgment.
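Because the boundary is defined by community rules rather than model judgment, the check can be as simple as matching against a community-maintained rule set. The patterns below are hypothetical stand-ins for whatever rules a given community configures:

```python
import re

# Hypothetical community-defined patterns marking a query as a values decision.
VALUES_PATTERNS = [
    r"\bshould we (share|publish|delete)\b",    # privacy trade-offs
    r"\b(ethic|moral)\w*\b",                    # ethical questions
    r"\bculturally (appropriate|sensitive)\b",  # cultural sensitivity
]

def is_values_decision(query: str) -> bool:
    q = query.lower()
    return any(re.search(p, q) for p in VALUES_PATTERNS)

def enforce_boundary(query: str) -> str:
    # Values decisions are deferred to human moderators, never answered.
    if is_values_decision(query):
        return "DEFER_TO_MODERATOR"
    return "ALLOW_AUTONOMOUS"
```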
2. CrossReferenceValidator
Validates the query against stored instructions and known patterns. This is the service that would have caught the 27027 incident — the user's explicit instruction ("use port 27027") is stored externally and cross-referenced against the AI's proposed action ("use port 27017"). When stored instructions conflict with the AI's response, the stored instruction takes precedence.
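The precedence rule is mechanical: the stored instruction lives outside the model's context window, and any proposed action that contradicts it is overridden. A minimal sketch of that check, using the port example from the text:

```python
# Illustrative external instruction store, keyed by the setting it governs.
STORED_INSTRUCTIONS = {"mongodb_port": "27027"}  # user: "use port 27027"

def validate_action(key: str, proposed: str) -> str:
    """Return the value to act on; stored instructions take precedence."""
    stored = STORED_INSTRUCTIONS.get(key)
    if stored is not None and stored != proposed:
        return stored  # AI proposed 27017; the stored 27027 wins
    return proposed
```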
3. ContextPressureMonitor
Tracks session health metrics: token usage, message count, error rate, task complexity. When pressure exceeds thresholds (ELEVATED at 25%, HIGH at 50%, CRITICAL at 75%), the system adjusts validation intensity or recommends session handoff. This prevents the degradation patterns observed in extended AI sessions where error rates compound.
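The threshold bands map directly to a small classification function. This sketch uses token usage alone; the actual monitor, as described above, also weighs message count, error rate, and task complexity:

```python
def pressure_level(used_tokens: int, budget: int) -> str:
    """Map token usage against the bands described above (illustrative)."""
    ratio = used_tokens / budget
    if ratio >= 0.75:
        return "CRITICAL"   # recommend session handoff
    if ratio >= 0.50:
        return "HIGH"       # increase validation intensity
    if ratio >= 0.25:
        return "ELEVATED"
    return "NOMINAL"
```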
4. MetacognitiveVerifier
For complex operations (multi-step tasks, file modifications, configuration changes), the AI performs a structured self-assessment before proposing actions: alignment with instructions, coherence of approach, completeness of plan, safety of proposed changes, and consideration of alternatives. This is triggered selectively to avoid overhead on simple queries.
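The selective trigger and the five-dimension checklist can be expressed as two small predicates. Category names below follow the text; the boolean self-assessment format is an assumption for illustration:

```python
# Operation kinds that trigger verification (per the text); names illustrative.
COMPLEX_KINDS = {"multi_step", "file_modification", "config_change"}

# The five self-assessment dimensions listed above.
CHECKS = ["alignment", "coherence", "completeness", "safety", "alternatives"]

def needs_verification(kind: str) -> bool:
    return kind in COMPLEX_KINDS  # simple queries skip the overhead

def verify(self_assessment: dict) -> bool:
    # All five dimensions must pass before the action is proposed.
    return all(self_assessment.get(c, False) for c in CHECKS)
```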
5. InstructionPersistenceClassifier
Classifies instructions by their intended lifespan (HIGH: strategic/permanent, MEDIUM: operational/session-scoped, LOW: tactical/single-use) and quadrant (Strategic, Operational, Tactical, System, Stochastic). This classification determines how strongly the CrossReferenceValidator enforces each instruction and how long it persists in the external store.
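The lifespan levels and their downstream effect on enforcement can be sketched as an enum plus a weighting function. The numeric weights are invented for illustration; only the ordering (HIGH enforced more strongly than LOW) reflects the text:

```python
from enum import Enum

class Persistence(Enum):
    HIGH = "strategic"      # permanent
    MEDIUM = "operational"  # session-scoped
    LOW = "tactical"        # single-use

def enforcement_weight(level: Persistence) -> float:
    # Higher persistence -> stronger enforcement by the validator.
    # Weights are hypothetical; only the ordering matters here.
    return {Persistence.HIGH: 1.0,
            Persistence.MEDIUM: 0.6,
            Persistence.LOW: 0.3}[level]
```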
6. PluralisticDeliberationOrchestrator
When the AI encounters decisions where legitimate values conflict — for example, a member's privacy interests versus community safety concerns — this service halts autonomous decision-making and coordinates a deliberation process among affected stakeholders. The AI presents the conflict and facilitates discussion; it does not resolve it.
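The orchestrator's contract is that it frames the conflict but never resolves it. A minimal sketch of that contract, with field names assumed for illustration:

```python
def orchestrate(conflict: dict) -> dict:
    """Halt autonomous decision-making and frame the conflict for humans."""
    return {
        "status": "DELIBERATION_REQUIRED",
        "presented_values": conflict["values"],        # e.g. privacy vs safety
        "stakeholders": conflict["stakeholders"],      # who must deliberate
        "ai_resolution": None,                         # the AI never resolves it
    }
```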
Governed Features
Home AI currently provides four AI-powered features, each operating under the full governance stack.
RAG-Based Help
Vector search retrieves relevant documentation and help content, filtered by the member's permission level. The AI generates contextual answers grounded in retrieved documents rather than from its training data alone.
Governance: BoundaryEnforcer prevents PII exposure; CrossReferenceValidator validates responses against platform policies.
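The permission-filtered retrieval step can be sketched as follows. The point of the design is ordering: documents are filtered by permission level before ranking, so restricted content never enters the AI's context. The cosine-similarity ranking and field names are illustrative:

```python
def retrieve(query_vec, docs, member_level, k=3):
    """Return the top-k permitted documents by cosine similarity (sketch)."""
    # Filter FIRST, so restricted documents never reach the ranking step.
    visible = [d for d in docs if d["min_level"] <= member_level]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = sum(x * x for x in a) ** 0.5
        norm_b = sum(x * x for x in b) ** 0.5
        return dot / (norm_a * norm_b)

    return sorted(visible, key=lambda d: cosine(query_vec, d["vec"]),
                  reverse=True)[:k]
```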
Document OCR
Automated text extraction from uploaded documents (historical records, handwritten letters, photographs with text). Extracted text is stored within the member's scope, not shared across tenants or used for model training.
Governance: Processing only occurs under explicit consent controls; results are tenant-isolated.
Story Assistance
AI-generated suggestions for writing family stories: prompts, structural advice, and narrative enhancement. Suggestions are filtered through BoundaryEnforcer so that the AI does not impose cultural interpretations or values judgments on family narratives.
Governance: Cultural context decisions are deferred to the storyteller, not resolved by the AI.
AI Memory Transparency
Members can view what the AI "remembers" about their interactions: summarised conversation history, inferred preferences, and stored instructions. Members control whether this memory persists, is reset, or is deleted entirely.
Governance: Consent granularity covers AI triage memory, OCR memory, and summarisation memory independently.
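Independent consent per memory category implies a default-deny consent record checked before anything persists. A minimal sketch, with category keys assumed from the three memories named above:

```python
# Hypothetical per-member consent record; each category toggles independently.
DEFAULT_CONSENT = {
    "triage_memory": False,
    "ocr_memory": False,
    "summary_memory": False,
}

def may_persist(consent: dict, category: str) -> bool:
    # Default-deny: unknown or unset categories never persist.
    return consent.get(category, False)
```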
Sovereignty Architecture
The concept of "sovereign" in Home AI is concrete, not aspirational. It refers to specific architectural properties:
Data sovereignty
All member data is stored on infrastructure controlled by the community operator — currently OVH (France) and Catalyst (New Zealand). No member data flows to AI provider APIs for training. Query content sent to Claude Haiku for non-English processing is ephemeral and not retained by the provider.
Governance sovereignty
The rules governing AI behaviour are defined by the community, not the AI provider. BoundaryEnforcer rules, instruction persistence levels, and deliberation triggers are configured per-tenant. A family history community has different boundary rules from a neighbourhood association.
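Per-tenant configuration can be pictured as the same stack reading different rule sets. The tenant names and rule identifiers below are invented to illustrate the family-history versus neighbourhood contrast from the text:

```python
# Hypothetical per-tenant governance configuration: same stack, different rules.
TENANTS = {
    "family-history": {
        "boundary_rules": ["cultural_interpretation", "deceased_member_privacy"],
        "deliberation_triggers": ["story_publication"],
        "default_persistence": "HIGH",
    },
    "neighbourhood": {
        "boundary_rules": ["address_disclosure"],
        "deliberation_triggers": ["shared_resource_allocation"],
        "default_persistence": "MEDIUM",
    },
}

def boundary_rules_for(tenant: str) -> list:
    return TENANTS[tenant]["boundary_rules"]
```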
Inference sovereignty (in progress)
English-language queries currently use a locally-hosted Llama model. The roadmap includes expanding local inference to additional languages as multilingual open models mature. The governance layer is model-agnostic — switching the underlying model does not require changes to the governance architecture.
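Model-agnosticism means routing happens beneath the governance layer: the same checks run regardless of which backend serves the query. A sketch of that routing decision, with backend identifiers assumed for illustration:

```python
def route_backend(query_language: str) -> str:
    """Pick an inference backend by language; governance runs either way."""
    # Local Llama for English today; cloud API for other languages,
    # pending the roadmap toward fully local multilingual inference.
    if query_language == "en":
        return "local-llama"
    return "claude-haiku-api"
```

Because the governance stack wraps this function rather than living inside either backend, swapping "claude-haiku-api" for a future local model changes only the routing table.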
Te Tiriti o Waitangi and Digital Sovereignty
The sovereignty principles underlying Home AI are informed by Te Tiriti o Waitangi (the Treaty of Waitangi, 1840) and Māori concepts of rangatiratanga (self-determination over one's domain), kaitiakitanga (guardianship of resources for future generations), and mana (authority and dignity).
These are not metaphorical borrowings. They provide concrete architectural guidance: communities should control their own data (rangatiratanga), AI systems should preserve rather than degrade the information they govern (kaitiakitanga), and automated decisions should not diminish the standing of the people they affect (mana).
The Tractatus framework is developed in Aotearoa New Zealand, and these principles predate Western technology governance by centuries. We consider them prior art, not novel invention.
Limitations and Open Questions
- Single implementation: Home AI operates within one platform built by the framework developer. Conclusions about governance effectiveness cannot be generalised without independent deployments.
- Self-reported metrics: Performance and safety figures are reported by the same team that built the system. Independent audit is planned but not yet conducted.
- Hybrid inference: Full sovereignty requires local inference for all languages. Currently, non-English queries depend on cloud APIs (Claude Haiku), which introduces a provider dependency.
- Scale unknown: The governance overhead (approximately 5% per interaction) is measured at current scale. Whether this holds under high-throughput conditions is untested.
- Adversarial testing limited: The governance stack has not been subjected to systematic adversarial evaluation (jailbreak attempts, prompt injection at scale). Red-teaming is a priority for 2026.