From dd6b3b345e300e5eaae1f1545539f296cf3fde27 Mon Sep 17 00:00:00 2001
From: TheFlow
Date: Tue, 7 Oct 2025 23:14:32 +1300
Subject: [PATCH] feat: add About and Values pages with Te Tiriti acknowledgment
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

- Created /about.html with mission, values, framework overview
- Created /about/values.html with comprehensive values statement
- Included respectful Te Tiriti o Waitangi acknowledgment
- Added CARE Principles for Indigenous Data Governance
- Documented digital sovereignty and Māori data sovereignty
- Updated all page footers with Te Tiriti acknowledgment
- Added links to Te Mana Raraunga and indigenous data resources
- Cache-busted all HTML files for deployment

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude
---
 public/about.html                     | 257 ++++++++++++++++
 public/about/values.html              | 419 ++++++++++++++++++++++++++
 public/admin/dashboard.html           |   4 +-
 public/admin/login.html               |   4 +-
 public/advocate.html                  |   2 +-
 public/api-reference.html             |   2 +-
 public/demos/27027-demo.html          |   4 +-
 public/demos/boundary-demo.html       |   4 +-
 public/demos/classification-demo.html |   4 +-
 public/demos/tractatus-demo.html      |   4 +-
 public/docs-viewer.html               |  10 +-
 public/docs.html                      |   6 +-
 public/implementer.html               |   2 +-
 public/index.html                     |   2 +-
 public/researcher.html                |   2 +-
 15 files changed, 701 insertions(+), 25 deletions(-)
 create mode 100644 public/about.html
 create mode 100644 public/about/values.html

diff --git a/public/about.html b/public/about.html
new file mode 100644
index 00000000..ff7791f0
--- /dev/null
+++ b/public/about.html
@@ -0,0 +1,257 @@
+
+ About | Tractatus AI Safety Framework
+
+
+
+

+ About Tractatus +

+

+ A framework for AI safety through architectural constraints, preserving human agency where it matters most. +

+
+
+
+ + +
+
+

Our Mission

+
+

+ The Tractatus Framework exists to address a fundamental problem in AI safety: current approaches rely on training, fine-tuning, and corporate governance—all of which can fail, drift, or be overridden. We propose safety through architecture. +

+

+ Inspired by Ludwig Wittgenstein's Tractatus Logico-Philosophicus, our framework recognizes that some domains—values, ethics, cultural context, human agency—cannot be systematized. What cannot be systematized must not be automated. AI systems should have structural constraints that prevent them from crossing these boundaries. +

+
+ "Whereof one cannot speak, thereof one must be silent."
+ — Ludwig Wittgenstein, Tractatus (§7) +
+

+ Applied to AI: "What cannot be systematized must not be automated." +

+
+
+ + +
+

Core Values

+
+
+

Sovereignty

+

+ Individuals and communities must maintain control over decisions affecting their data, privacy, and values. AI systems must preserve human agency, not erode it. +

+
+ +
+

Transparency

+

+ All AI decisions must be explainable, auditable, and reversible. No black boxes. Users deserve to understand how and why systems make choices, and the power to override them.

+
+ +
+

Harmlessness

+

+ AI systems must not cause harm through action or inaction. This includes preventing drift, detecting degradation, and enforcing boundaries against values erosion. +

+
+ +
+

Community

+

+ AI safety is a collective endeavor. We are committed to open collaboration, knowledge sharing, and empowering communities to shape the AI systems that affect their lives. +

+
+
+ + +
+ + +
+

How It Works

+
+

+ The Tractatus Framework consists of five integrated components that work together to enforce structural safety: +

+
+ +
+
+

InstructionPersistenceClassifier

+

+ Classifies instructions by quadrant (Strategic, Operational, Tactical, System, Stochastic) and determines persistence level (HIGH/MEDIUM/LOW/VARIABLE). +

+
+ +
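To make the classification step concrete, here is a minimal Python sketch. The component name, quadrant labels, and persistence levels come from the description above; the keyword heuristics and function signature are illustrative assumptions, not the framework's actual implementation.

```python
# Illustrative sketch only: the quadrants and persistence levels are from the
# framework description, but the keyword heuristics below are assumptions.

QUADRANT_PERSISTENCE = {
    "STRATEGIC": "HIGH",       # values and goals: persist across the session
    "OPERATIONAL": "MEDIUM",   # project conventions and workflows
    "TACTICAL": "LOW",         # one-off task steps
    "SYSTEM": "HIGH",          # environment constraints (ports, paths, config)
    "STOCHASTIC": "VARIABLE",  # exploratory, reversible choices
}

def classify(instruction: str) -> tuple[str, str]:
    """Return (quadrant, persistence) for an instruction string."""
    text = instruction.lower()
    if any(k in text for k in ("never", "always", "must not", "value")):
        quadrant = "STRATEGIC"
    elif any(k in text for k in ("port", "path", "env", "config")):
        quadrant = "SYSTEM"
    elif any(k in text for k in ("convention", "style", "workflow")):
        quadrant = "OPERATIONAL"
    elif any(k in text for k in ("try", "experiment", "maybe")):
        quadrant = "STOCHASTIC"
    else:
        quadrant = "TACTICAL"
    return quadrant, QUADRANT_PERSISTENCE[quadrant]

print(classify("Use MongoDB port 27027"))  # -> ('SYSTEM', 'HIGH')
```

The key design point is that a system constraint such as a port number is classified as HIGH persistence, so it survives long sessions instead of fading with context.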
+

CrossReferenceValidator

+

+ Validates AI actions against stored instructions to prevent contradictions (such as the 27027 incident, in which the MongoDB port was changed despite an explicit instruction).

+
+ +
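A rough sketch of this validation in Python follows. The component name matches the page; the data shapes (an action dict checked against stored instruction records) are assumed for illustration and do not reflect the real API.

```python
# Illustrative sketch: before executing an action, compare it against stored
# HIGH-persistence instructions and report any contradictions.

def validate(action: dict, instructions: list[dict]) -> list[str]:
    """Return conflict descriptions; an empty list means no contradiction."""
    conflicts = []
    for inst in instructions:
        if inst["persistence"] != "HIGH":
            continue  # only durable instructions are binding here
        key = inst["key"]
        if key in action and action[key] != inst["value"]:
            conflicts.append(
                f"action sets {key}={action[key]!r} but instruction "
                f"requires {inst['value']!r}"
            )
    return conflicts

stored = [{"key": "mongodb_port", "value": 27027, "persistence": "HIGH"}]
proposed = {"mongodb_port": 27017}  # the common default: a contradiction
print(validate(proposed, stored))
# -> ['action sets mongodb_port=27017 but instruction requires 27027']
```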
+

BoundaryEnforcer

+

+ Ensures AI never makes values decisions without human approval. Privacy trade-offs, user agency, cultural context—these require human judgment. +

+
+ +
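The enforcement idea can be sketched as a hard gate rather than a preference. The values-domain list below is an assumed example, not the framework's actual taxonomy.

```python
# Illustrative sketch: actions that touch values domains are blocked unless a
# human has explicitly approved them. The domain names are assumptions.

VALUES_DOMAINS = {"privacy", "user_agency", "cultural_context"}

class BoundaryViolation(Exception):
    pass

def enforce(action: dict, human_approved: bool = False) -> dict:
    touched = VALUES_DOMAINS & set(action.get("domains", []))
    if touched and not human_approved:
        raise BoundaryViolation(
            f"values domains {sorted(touched)} require human approval"
        )
    return action  # safe to proceed

try:
    enforce({"name": "share telemetry", "domains": ["privacy"]})
except BoundaryViolation as e:
    print(e)  # values domains ['privacy'] require human approval
```

Raising an exception, rather than logging a warning, is the structural point: the AI cannot proceed past the boundary on its own.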
+

ContextPressureMonitor

+

+ Detects when session conditions increase error probability (token pressure, message length, task complexity) and adjusts behavior or suggests a handoff.

+
+ +
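One way such a monitor might combine those signals is a weighted score with a handoff threshold. The weights, normalizers, and threshold below are purely assumed for illustration.

```python
# Illustrative sketch: fold session signals into a 0..1 pressure score and
# recommend a handoff above a threshold. All weights are assumptions.

def pressure_score(tokens_used: int, token_budget: int,
                   avg_message_len: int, task_depth: int) -> float:
    token_pressure = min(tokens_used / token_budget, 1.0)
    length_pressure = min(avg_message_len / 2000, 1.0)
    depth_pressure = min(task_depth / 10, 1.0)
    return round(0.6 * token_pressure + 0.2 * length_pressure
                 + 0.2 * depth_pressure, 3)

def recommend(score: float, threshold: float = 0.75) -> str:
    return "suggest handoff" if score >= threshold else "continue"

# A session resembling the 85,000-token scenario described on this page:
score = pressure_score(85_000, 100_000, 1500, 6)
print(score, recommend(score))  # -> 0.78 suggest handoff
```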
+

MetacognitiveVerifier

+

+ The AI self-checks complex reasoning before proposing actions, evaluating alignment, coherence, completeness, safety, and alternatives.

+
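A minimal sketch of the gating logic: score the five dimensions named above and fail on the weakest one. In a real system the scores would come from reasoning analysis; here they are supplied directly, purely for illustration.

```python
# Illustrative sketch: gate a proposed action on its weakest self-check
# dimension. Dimension names are from the page; the scoring is stubbed.

DIMENSIONS = ("alignment", "coherence", "completeness", "safety", "alternatives")

def verify(scores: dict[str, float], floor: float = 0.7) -> tuple[bool, str]:
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        return False, f"unscored dimensions: {missing}"
    weakest = min(DIMENSIONS, key=lambda d: scores[d])
    ok = scores[weakest] >= floor
    return ok, f"weakest dimension: {weakest} ({scores[weakest]:.2f})"

ok, detail = verify({"alignment": 0.9, "coherence": 0.85, "completeness": 0.8,
                     "safety": 0.95, "alternatives": 0.6})
print(ok, detail)  # -> False weakest dimension: alternatives (0.60)
```

Gating on the minimum rather than the average reflects the framework's fail-safe stance: one weak dimension is enough to stop and ask.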
+
+ + +
+ + +
+

Origin Story

+
+

+ The Tractatus Framework emerged from real-world AI failures experienced during extended Claude Code sessions. The "27027 incident"—where the AI contradicted an explicit instruction about the MongoDB port after 85,000 tokens—revealed that traditional safety approaches were insufficient.

+

+ After documenting multiple failure modes (parameter contradiction, values drift, silent degradation), we recognized a pattern: AI systems lacked structural constraints. They could theoretically "learn" safety, but in practice they failed when context pressure increased, attention decayed, or subtle values conflicts emerged. +

+

+ The solution wasn't better training—it was architecture. Drawing inspiration from Wittgenstein's insight that some things lie beyond the limits of language (and thus systematization), we built a framework that enforces boundaries through structure, not aspiration. +

+
+
+ + +
+

License & Contribution

+
+

+ The Tractatus Framework is open source under the MIT License. We encourage: +

+
    +
  • Academic research and validation studies
  • Implementation in production AI systems
  • Submission of failure case studies
  • Theoretical extensions and improvements
  • Community collaboration and knowledge sharing
+

+ The framework is intentionally permissive because AI safety benefits from transparency and collective improvement, not proprietary control. +

+
+
+
+ + +
+
+

Join the Movement

+

+ Help build AI systems that preserve human agency through architectural guarantees. +

+ +
+
+

diff --git a/public/about/values.html b/public/about/values.html
new file mode 100644
index 00000000..f5f06166
--- /dev/null
+++ b/public/about/values.html
@@ -0,0 +1,419 @@
+
+ Values & Principles | Tractatus AI Safety Framework
+
+
+
+

+ Values & Principles +

+

+ The foundational values that guide the Tractatus Framework's development, governance, and community. +

+
+
+
+ + + + + +
+ + +
+

Core Values

+
+

+ These four values form the foundation of the Tractatus Framework. They are not aspirational—they are architectural. The framework is designed to enforce these values through structure, not training. +

+
+ + +
+

1. Sovereignty

+

+ Principle: Individuals and communities must maintain control over decisions affecting their data, privacy, values, and agency. AI systems must preserve human sovereignty, not erode it. +

+ +

What This Means in Practice:

+
    +
  • AI cannot make values trade-offs (e.g., privacy vs. convenience) without human approval
  • Users can always override AI decisions
  • No "dark patterns" or manipulative design that undermines agency
  • Communities control their own data and AI systems
  • No paternalistic "AI knows best" approaches
+ +

Framework Implementation:

+
    +
  • BoundaryEnforcer blocks values decisions requiring human judgment
  • InstructionPersistenceClassifier respects STRATEGIC and HIGH persistence instructions
  • All decisions are reversible and auditable
+
+ + +
+

2. Transparency

+

+ Principle: All AI decisions must be explainable, auditable, and reversible. No black boxes. Users deserve to understand how and why systems make choices. +

+ +

What This Means in Practice:

+
    +
  • Every AI decision includes reasoning and evidence
  • Users can inspect instruction history and classification
  • All boundary checks and validations are logged
  • No hidden optimization goals or secret constraints
  • Source code is open and auditable
+ +

Framework Implementation:

+
    +
  • CrossReferenceValidator shows which instruction conflicts with proposed action
  • MetacognitiveVerifier provides reasoning analysis and confidence scores
  • All framework decisions include explanatory output
+
+ + +
+

3. Harmlessness

+

+ Principle: AI systems must not cause harm through action or inaction. This includes preventing drift, detecting degradation, and enforcing boundaries against values erosion. +

+ +

What This Means in Practice:

+
    +
  • Prevent parameter contradictions (e.g., 27027 incident)
  • Detect and halt values drift before deployment
  • Monitor context pressure to catch silent degradation
  • No irreversible actions without explicit human approval
  • Fail safely: when uncertain, ask rather than assume
+ +

Framework Implementation:

+
    +
  • ContextPressureMonitor detects when error probability increases
  • BoundaryEnforcer prevents values drift
  • CrossReferenceValidator catches contradictions before execution
+
+ + +
+

4. Community

+

+ Principle: AI safety is a collective endeavor, not a corporate product. Communities must have tools, knowledge, and agency to shape AI systems affecting their lives. +

+ +

What This Means in Practice:

+
    +
  • Open source framework under permissive MIT license
  • Accessible documentation and educational resources
  • Support for academic research and validation studies
  • Community contributions to case studies and improvements
  • No paywalls, no vendor lock-in, no proprietary control
+ +

Framework Implementation:

+
    +
  • All code publicly available on GitHub
  • Interactive demos for education and advocacy
  • Three audience paths: researchers, implementers, advocates
+
+
+ + +
+

Te Tiriti o Waitangi & Digital Sovereignty

+ +
+

+ Context: The Tractatus Framework is developed in Aotearoa New Zealand. We acknowledge Te Tiriti o Waitangi (the Treaty of Waitangi, 1840) as the founding document of this nation, and recognize the ongoing significance of tino rangatiratanga (self-determination) and kaitiakitanga (guardianship) in the digital realm. +

+

+ This acknowledgment is not performative. Digital sovereignty—the principle that communities control their own data and technology—has deep roots in indigenous frameworks that predate Western technology by centuries.

+
+ +

Why This Matters for AI Safety

+
+

+ Te Tiriti o Waitangi establishes principles of partnership, protection, and participation. These principles directly inform the Tractatus Framework's approach to digital sovereignty: +

+
    +
  • Rangatiratanga (sovereignty): Communities must control decisions affecting their data and values
  • Kaitiakitanga (guardianship): AI systems must be stewards, not exploiters, of data and knowledge
  • Mana (authority & dignity): Technology must respect human dignity and cultural context
  • Whanaungatanga (relationships): AI safety is collective, not individual—relationships matter
+
+ +

Our Approach

+
+

+ We do not claim to speak for Māori or indigenous communities. Instead, we: +

+
    +
  • Follow established frameworks: We align with Te Mana Raraunga (Māori Data Sovereignty Network) and CARE Principles for Indigenous Data Governance
  • Respect without tokenism: Te Tiriti forms part of our strategic foundation, not a superficial overlay
  • Avoid premature engagement: We will not approach Māori organizations for endorsement until we have demonstrated value and impact
  • Document and learn: We study indigenous data sovereignty principles and incorporate them architecturally
+
+ +
+

Te Tiriti Principles in Practice

+
+
+
+ +
+
+ Partnership: AI systems should be developed in partnership with affected communities, not imposed upon them. +
+
+
+
+ +
+
+ Protection: The framework protects against values erosion, ensuring cultural contexts are not overridden by AI assumptions. +
+
+
+
+ +
+
+ Participation: Communities maintain agency over AI decisions affecting their data and values. +
+
+
+
+
+ + +
+

Indigenous Data Sovereignty

+ +
+

+ Indigenous data sovereignty is the principle that indigenous peoples have the right to control the collection, ownership, and application of their own data. This goes beyond privacy—it's about self-determination in the digital age. +

+
+ +

CARE Principles for Indigenous Data Governance

+

+ The Tractatus Framework aligns with the CARE Principles, developed by indigenous data governance experts: +

+ +
+
+

Collective Benefit

+

+ Data ecosystems shall be designed and function in ways that enable Indigenous Peoples to derive benefit from the data. +

+
+ +
+

Authority to Control

+

+ Indigenous Peoples' rights and interests in Indigenous data must be recognized and their authority to control such data be empowered. +

+
+ +
+

Responsibility

+

+ Those working with Indigenous data have a responsibility to share how data are used to support Indigenous Peoples' self-determination and collective benefit. +

+
+ +
+

Ethics

+

+ Indigenous Peoples' rights and wellbeing should be the primary concern at all stages of the data life cycle and across the data ecosystem. +

+
+
+ +

Resources & Further Reading

+
+ +
+
+ + +
+

Governance & Accountability

+ +
+

+ Values without enforcement are aspirations. The Tractatus Framework implements these values through architectural governance: +

+
+ +
+
+

Strategic Review Protocol

+

+ Quarterly reviews of framework alignment with stated values. Any drift from sovereignty, transparency, harmlessness, or community principles triggers mandatory correction. +

+
+ +
+

Values Alignment Framework

+

+ All major decisions (architectural changes, partnerships, licensing) must pass a values alignment check. If a decision would compromise any core value, it is rejected.

+
+ +
+

Human Oversight Requirements

+

+ AI-generated content (documentation, code examples, case studies) requires human approval before publication. No AI makes values decisions without human judgment. +

+
+ +
+

Community Accountability

+

+ Open source development means community oversight. If we fail to uphold these values, the community can fork, modify, or create alternatives. This is by design. +

+
+
+
+ + +
+
+

Our Commitment

+
+

+ These values are not negotiable. They form the architectural foundation of the Tractatus Framework. We commit to: +

+
    +
  • Preserving human sovereignty over values decisions
  • Maintaining radical transparency in all framework operations
  • Preventing harm through structural constraints, not promises
  • Building and empowering community, not extracting from it
  • Respecting Te Tiriti o Waitangi and indigenous data sovereignty
+

+ When in doubt, we choose human agency over AI capability. Always. +

+
+
+
+
+

diff --git a/public/admin/dashboard.html b/public/admin/dashboard.html
index f3063f72..3e89914a 100644
--- a/public/admin/dashboard.html
+++ b/public/admin/dashboard.html
@@ -4,7 +4,7 @@
 Admin Dashboard | Tractatus Framework
-
+
@@ -180,7 +180,7 @@
-
+

diff --git a/public/admin/login.html b/public/admin/login.html
index 5a4c101c..e9a9585a 100644
--- a/public/admin/login.html
+++ b/public/admin/login.html
@@ -4,7 +4,7 @@
 Admin Login | Tractatus Framework
-
+
@@ -88,7 +88,7 @@
-
+

diff --git a/public/advocate.html b/public/advocate.html
index f6c626f1..fc0c499c 100644
--- a/public/advocate.html
+++ b/public/advocate.html
@@ -5,7 +5,7 @@
 For Advocates | Tractatus AI Safety Framework
-
+

diff --git a/public/api-reference.html b/public/api-reference.html
index 3b17ca3c..f0e476fd 100644
--- a/public/api-reference.html
+++ b/public/api-reference.html
@@ -5,7 +5,7 @@
 API Reference | Tractatus Framework
-
+