Core Values
Human Sovereignty
AI must never make value-laden decisions without human approval. Some choices, such as privacy versus convenience, user agency, and cultural context, cannot be systematized. They require human judgment, always.
"What cannot be systematized must not be automated."
Digital Sovereignty
Communities and individuals must control their own data and AI systems. No corporate surveillance, no centralized control. Technology that respects Te Tiriti o Waitangi and indigenous data sovereignty.
"Technology serves communities, not corporations."
Radical Transparency
All AI decisions must be explainable, auditable, and reversible. No black boxes. Users deserve to understand why AI systems make the choices they do, and to have the power to override them.
"Transparency builds trust, opacity breeds harm."
Community Empowerment
AI safety is not merely a technical problem; it is a social one. Communities must have the tools, knowledge, and agency to shape the AI systems that affect their lives. No tech paternalism.
"Those affected by AI must have power over AI."