What’s New
Guardian Agents and the Philosophy of AI Accountability
How Wittgenstein, Berlin, Ostrom, and Te Ao Māori converge on the same architectural requirements for governing AI in community contexts.

Guardian Agents in Production
Four-phase verification using mathematical similarity, not generative checking. Confidence badges, claim-level analysis, and adaptive learning — all tenant-scoped.

Village Beta Pilot
Village is accepting beta applications from communities ready to participate in constitutional AI governance. Community Basic from $10/mo.
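The "mathematical similarity, not generative checking" idea can be illustrated with a minimal sketch: score a claim against source passages with a deterministic similarity measure and map the best score to a confidence badge. Everything here is an illustrative assumption — the function names, the thresholds, and the toy bag-of-words embedding stand in for whatever model and API the product actually uses.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def verify_claim(claim: str, sources: list[str]) -> str:
    # Deterministic check: score the claim against each source and map
    # the best similarity to a badge. No generative model is asked to
    # judge its own output. Thresholds are hypothetical.
    best = max((cosine(embed(claim), embed(s)) for s in sources), default=0.0)
    if best >= 0.8:
        return "verified"
    if best >= 0.5:
        return "partial"
    return "unsupported"

badge = verify_claim(
    "the service listens on port 8080",
    ["config: the service listens on port 8080", "unrelated release notes"],
)
print(badge)  # → verified
```

The point of the design is that similarity scoring is repeatable and auditable, where asking a second generative pass "is this correct?" inherits the same training-data biases being guarded against.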
+The Problem
From Code to Conversation: The Same Mechanism
In code, this bias produces measurable failures — wrong port, connection refused, incident logged in 14.7ms. But the same architectural flaw operates in every AI conversation, where it is far harder to detect.

When a user from a collectivist culture asks for family advice, the model defaults to Western individualist framing — because that is what 95% of the training data reflects. When a Māori user asks about data guardianship, the model offers property-rights language instead of kaitiakitanga. When someone asks about end-of-life decisions, the model defaults to utilitarian calculus rather than the user’s religious or cultural framework.

The mechanism is identical: training data distributions override the user’s actual context. In code, the failure is binary and detectable. In conversation, it is gradient and invisible — culturally inappropriate advice looks like “good advice” to the system, and often to the user. There is no CrossReferenceValidator catching it in 14.7ms.

This is not an edge case, and it is not limited to code. It is a category of failure that gets worse as models become more capable: stronger patterns produce more confident overrides — whether the override substitutes a port number or a value system. Safety through training alone is insufficient. The failure mode is structural, it operates across every domain where AI acts, and the solution must be structural.
Governance Architecture
Every AI action passes through six governance services before execution, with Guardian Agents verifying every AI response. Governance operates in the critical path — bypasses require explicit flags and are logged.
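A critical-path pipeline of this shape can be sketched as follows. This is a minimal illustration, not the actual implementation: the check functions, the `bypass_governance` flag name, and the log format are all assumptions standing in for the six real services.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("governance")

# Hypothetical checks standing in for the six governance services.
def policy_check(action: str) -> bool:
    return True

def consent_check(action: str) -> bool:
    return True

SERVICES = [policy_check, consent_check]  # plus the remaining services

def execute(action: str, bypass_governance: bool = False) -> str:
    # Governance sits in the critical path: every action is checked
    # before execution unless an explicit bypass flag is passed.
    if bypass_governance:
        # Bypasses are never silent: the flag is explicit and logged.
        log.warning("governance bypassed for action=%r", action)
    else:
        for check in SERVICES:
            if not check(action):
                raise PermissionError(f"{check.__name__} rejected {action!r}")
    return f"executed {action}"

print(execute("send_reply"))  # → executed send_reply
```

Putting governance in the call path, rather than as an optional wrapper, is what makes a bypass an auditable event instead of a silent default.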