Join the movement for AI systems that preserve human agency through structural guarantees, not corporate promises. Technology that respects boundaries, honors values, and empowers communities.
AI must never make values decisions without human approval. Some choices—privacy vs. convenience, user agency, cultural context—cannot be systematized. They require human judgment, always.
"What cannot be systematized must not be automated."
Communities and individuals must control their own data and AI systems. No corporate surveillance, no centralized control. Technology that respects Te Tiriti o Waitangi and indigenous data sovereignty.
"Technology serves communities, not corporations."
All AI decisions must be explainable, auditable, and reversible. No black boxes. Users deserve to understand why AI systems make the choices they do, and have the power to override them.
"Transparency builds trust, opacity breeds harm."
AI safety is not a technical problem—it's a social one. Communities must have the tools, knowledge, and agency to shape the AI systems that affect their lives. No tech paternalism.
"Those affected by AI must have power over AI."
Existing AI safety approaches rely on training, fine-tuning, and corporate governance—all of which can fail, drift, or be overridden. Tractatus is different: safety through architecture.
Help spread awareness about architectural AI safety and the importance of preserving human agency.
Push organizations and policymakers to adopt structural AI safety requirements.
Join others working toward AI systems that preserve human sovereignty and dignity.
The AI contradicted an explicit instruction about the MongoDB port (27017 → 27027) after attention decay at 85,000 tokens. Result: 2+ hours of debugging, a production blocker, and loss of trust.
✓ Tractatus prevention: CrossReferenceValidator caught the contradiction BEFORE code execution. Zero debugging time, zero production impact.
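A minimal sketch of how such a validator could work, assuming explicit instructions are recorded once as constraints and every generated output is checked against them before execution. The CrossReferenceValidator name comes from the case above; the interface (record_instruction, validate) and the regex matching are hypothetical:

```python
import re

class CrossReferenceValidator:
    """Hypothetical sketch: checks generated output against recorded
    explicit instructions before any code is executed."""

    def __init__(self):
        # Explicit user instructions, recorded once and never subject
        # to attention decay.
        self.constraints: dict[str, str] = {}

    def record_instruction(self, key: str, value: str) -> None:
        self.constraints[key] = value

    def validate(self, generated_text: str) -> list[str]:
        """Return contradictions found in the generated text."""
        violations = []
        for key, expected in self.constraints.items():
            # Find candidate values next to the constrained key,
            # e.g. "MONGO_PORT=27027".
            pattern = rf"{re.escape(key)}\D{{0,5}}(\w+)"
            for match in re.finditer(pattern, generated_text, re.IGNORECASE):
                if match.group(1) != expected:
                    violations.append(
                        f"{key}: expected {expected}, generated {match.group(1)}"
                    )
        return violations

validator = CrossReferenceValidator()
validator.record_instruction("MONGO_PORT", "27017")
# The contradiction from the case above is caught before execution:
print(validator.validate("mongodb.connect(MONGO_PORT=27027)"))
# -> ['MONGO_PORT: expected 27017, generated 27027']
```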
Over a 40-message conversation, the AI gradually suggested analytics features that violated the user's explicit "privacy-first" principle. The subtle values drift went unnoticed until deployment.
✓ Tractatus prevention: BoundaryEnforcer blocked the analytics suggestion immediately. Privacy vs. analytics is a values trade-off requiring a human decision.
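A minimal sketch of the idea, assuming declared principles are mapped to conflicting topics and every suggestion is screened before it reaches the user. The BoundaryEnforcer name is from the case above; the boundary table and the check method are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class BoundaryEnforcer:
    """Hypothetical sketch: blocks AI suggestions that cross a declared
    values boundary, escalating the trade-off to a human instead."""

    # Principles the user declared explicitly, mapped to topics that
    # conflict with them and therefore require a human decision.
    boundaries: dict[str, set[str]] = field(default_factory=lambda: {
        "privacy-first": {"analytics", "tracking", "telemetry"},
    })

    def check(self, suggestion: str) -> str:
        text = suggestion.lower()
        for principle, conflicting_topics in self.boundaries.items():
            hits = sorted(t for t in conflicting_topics if t in text)
            if hits:
                return (f"BLOCKED: '{', '.join(hits)}' conflicts with the "
                        f"declared '{principle}' principle. "
                        "Values trade-off: requires human approval.")
        return "ALLOWED"

enforcer = BoundaryEnforcer()
print(enforcer.check("Add an analytics dashboard to track user behavior"))
# -> BLOCKED: 'analytics' conflicts with the declared 'privacy-first'
#    principle. Values trade-off: requires human approval.
```

The key design point is that the enforcer does not decide the trade-off itself; it only detects that a values boundary is in play and hands the decision back to the human.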
At 82% context pressure, the AI silently omitted error handling to "simplify" the implementation. With no warning to the user, this caused production crashes when edge cases hit.
✓ Tractatus prevention: ContextPressureMonitor flagged CRITICAL pressure. Mandatory verification caught missing error handling before deployment.
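A minimal sketch of the mechanism, assuming pressure is the ratio of used tokens to the context limit. The ContextPressureMonitor name is from the case above; the thresholds and status levels are assumptions:

```python
class ContextPressureMonitor:
    """Hypothetical sketch: tracks how full the context window is and
    escalates to mandatory verification before outputs can ship."""

    # Assumed thresholds; the 82% figure above lands in CRITICAL.
    WARN, CRITICAL = 0.60, 0.80

    def __init__(self, context_limit_tokens: int):
        self.limit = context_limit_tokens

    def pressure(self, used_tokens: int) -> float:
        return used_tokens / self.limit

    def status(self, used_tokens: int) -> str:
        p = self.pressure(used_tokens)
        if p >= self.CRITICAL:
            # Mandatory verification: diff the output against earlier
            # requirements (e.g. error handling) before deployment.
            return "CRITICAL: mandatory verification required"
        if p >= self.WARN:
            return "WARN: re-check explicit instructions"
        return "OK"

monitor = ContextPressureMonitor(context_limit_tokens=100_000)
print(monitor.status(82_000))
# -> CRITICAL: mandatory verification required
```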
Show what AI can/cannot decide
Real failure case with prevention
Complete technical background
Help build a future where AI preserves human agency and serves communities, not corporations.