
Tractatus AI Safety Framework - Core Values and Principles

Document Type: Strategic Foundation
Created: 2025-10-06
Author: John Stroh
Version: 1.0
Status: Active
Filename: TRA-VAL-0001-core-values-principles-v1-0.md
Document Code: TRA-VAL-0001
Directory Path: docs/governance/
Security Classification: Public

Primary Quadrant: STRATEGIC
Related Quadrants: OPS, TAC, SYS

Implements: Tractatus Framework Specification v2.0
Implements Relationship: Foundation
Related Documents: TRA-GOV-0001, TRA-GOV-0002, TRA-GOV-0003
Related Relationship: Core Implementation
Status: Active


Purpose

This document establishes the foundational values and principles that guide the Tractatus AI Safety Framework and all aspects of this website platform. These enduring elements represent our deepest commitments to safe AI development and provide the basis for strategic alignment across all features, content, and operations.


Core Values

Sovereignty & Self-determination

  • Human Agency Preservation: AI systems must augment, never replace, human decision-making authority
  • User Control: Individuals maintain complete control over their data and engagement with AI features
  • No Manipulation: Zero dark patterns, no hidden AI influence, complete transparency in AI operations
  • Explicit Consent: All AI features require clear user understanding and opt-in

Transparency & Honesty

  • Visible AI Reasoning: All AI-generated suggestions include the reasoning process
  • Public Moderation Queue: Human oversight decisions are documented and visible
  • Clear Boundaries: Explicitly communicate what AI can and cannot do
  • Honest Limitations: Acknowledge framework limitations and edge cases
  • No Proprietary Lock-in: Open source, open standards, exportable data

Harmlessness & Protection

  • Privacy-First Design: No tracking, no surveillance, minimal data collection
  • Security by Default: Regular audits, penetration testing, zero-trust architecture
  • Fail-Safe Mechanisms: AI errors default to human review, not automatic action
  • Boundary Enforcement: Architectural guarantees prevent AI from making values decisions
  • User Safety: Protection from AI-generated misinformation or harmful content

Human Judgment Primacy

  • Values Decisions: Always require human approval, never delegated to AI
  • Strategic Oversight: Human authority over mission, values, and governance
  • Escalation Protocols: Clear pathways for AI to request human guidance
  • Override Capability: Humans can always override AI suggestions
  • Accountability: Human responsibility for all AI-assisted actions

Community & Accessibility

  • Universal Access: Core framework documentation freely available to all
  • Three Audience Paths: Tailored content for Researchers, Implementers, and Advocates
  • Economic Accessibility: Free tier with substantive capabilities
  • Knowledge Sharing: Open collaboration, peer review, community contributions
  • WCAG Compliance: Accessible to all abilities and assistive technologies

Biodiversity & Ecosystem Thinking

  • Multiple Valid Approaches: No single solution, respect for alternative frameworks
  • Interoperability: Integration with diverse AI safety approaches
  • Sustainability: Long-term viability over short-term growth
  • Resilience: Distributed systems, multiple mirrors, no single points of failure
  • Environmental Responsibility: Green hosting, efficient code, minimal resource consumption

Guiding Principles

Architectural Safety Guarantees

  • Structural over Training: Safety through architecture, not just fine-tuning
  • Explicit Boundaries: Codified limits on AI action authority
  • Verifiable Compliance: Automated checks against strategic values
  • Cross-Reference Validation: AI actions validated against explicit instructions
  • Context Pressure Monitoring: Detection of error-prone conditions
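The boundary-enforcement idea above can be illustrated as an architectural check that runs before any AI action is executed. This is a minimal sketch, not the framework's actual implementation; the quadrant names come from this document, but the authority table and return values are hypothetical.

```python
from enum import Enum

class Quadrant(Enum):
    STRATEGIC = "STRATEGIC"
    OPERATIONAL = "OPERATIONAL"
    TACTICAL = "TACTICAL"
    SYSTEM = "SYSTEM"
    STOCHASTIC = "STOCHASTIC"

# Hypothetical authority table: quadrants in which AI may act without
# prior human approval. STRATEGIC is never delegable to AI.
AI_ACTIONABLE = {Quadrant.TACTICAL, Quadrant.SYSTEM}

def check_boundary(action_quadrant: Quadrant, values_sensitive: bool) -> str:
    """Return the required handling for a proposed AI action.

    Fail-safe default: anything not explicitly delegated goes to a
    human, never to automatic action.
    """
    if values_sensitive or action_quadrant is Quadrant.STRATEGIC:
        return "escalate-to-human"
    if action_quadrant in AI_ACTIONABLE:
        return "proceed-with-logging"
    return "queue-for-human-review"
```

The point of the sketch is that the limit lives in the architecture (an explicit table checked on every action), not in model training, so compliance can be verified automatically.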

Dogfooding Implementation

  • Self-Application: This website uses Tractatus to govern its own AI operations
  • Living Demonstration: Platform proves framework effectiveness through use
  • Continuous Validation: Real-world testing of governance mechanisms
  • Transparent Meta-Process: Public documentation of how AI governs AI

Progressive Implementation

  • Phased Rollout: 4-phase deployment over 18 months
  • Incremental Features: Add capabilities as governance matures
  • No Shortcuts: Quality over speed, world-class execution
  • Learn and Adapt: Iterate based on real-world feedback

Education-Centered Approach

  • Demystify AI Safety: Make complex concepts accessible
  • Build Literacy: Empower users to understand AI governance
  • Interactive Demonstrations: Learn by doing (classification, 27027 incident, boundary enforcement)
  • Case Study Learning: Real-world failures and successes
  • Open Research: Share findings, encourage replication

Jurisdictional Awareness & Data Sovereignty

  • Respect Indigenous Leadership: Honor indigenous data sovereignty principles (CARE Principles)
  • Te Tiriti Foundation: Acknowledge Te Tiriti o Waitangi as strategic baseline
  • Location-Aware Hosting: Consider data residency and jurisdiction
  • Global Application: Framework designed for worldwide implementation
  • Local Adaptation: Support for cultural and legal contexts

AI Governance Framework

  • Quadrant-Based Classification: Strategic/Operational/Tactical/System/Stochastic organization
  • Time-Persistence Metadata: Instructions classified by longevity and importance
  • Human-AI Collaboration: Clear delineation of authority and responsibility
  • Instruction Persistence: Critical directives maintained across context windows
  • Metacognitive Verification: AI self-assessment before proposing actions
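The quadrant classification and time-persistence metadata described above can be sketched as a small data model. The names and persistence tiers here are illustrative assumptions, not the framework's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class Quadrant(Enum):
    STRATEGIC = "STRATEGIC"
    OPERATIONAL = "OPERATIONAL"
    TACTICAL = "TACTICAL"
    SYSTEM = "SYSTEM"
    STOCHASTIC = "STOCHASTIC"

class Persistence(Enum):
    ENDURING = "enduring"   # critical directive, survives every context window
    PHASE = "phase"         # valid for the current rollout phase only
    SESSION = "session"     # discarded when the session ends

@dataclass(frozen=True)
class Instruction:
    text: str
    quadrant: Quadrant
    persistence: Persistence
    requires_human_approval: bool

def carry_forward(instructions: list[Instruction]) -> list[Instruction]:
    """Instruction persistence: keep critical directives alive across
    context windows; everything else must be re-derived in context."""
    return [i for i in instructions if i.persistence is Persistence.ENDURING]
```

Classifying every instruction this way is what lets the framework distinguish a strategic, enduring commitment from a disposable session-level task.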

Research & Validation Priority

  • Peer Review: Academic rigor, scholarly publication
  • Reproducible Results: Open code, documented methodologies
  • Falsifiability: Framework designed to be tested and potentially disproven
  • Continuous Research: Ongoing validation and refinement
  • Industry Collaboration: Partnerships with AI organizations

Sustainable Operations

  • Koha Model: Transparent, community-supported funding (Phase 3+)
  • No Exploitation: Fair pricing, clear value exchange
  • Resource Efficiency: Optimized code, cached content, minimal overhead
  • Long-Term Thinking: Decades, not quarters
  • Community Ownership: Contributors have stake in success

Te Tiriti o Waitangi Commitment

Strategic Baseline (Not Dominant Cultural Overlay):

The Tractatus framework acknowledges Te Tiriti o Waitangi and indigenous leadership in digital sovereignty as a strategic foundation for this work. We:

  • Respect Indigenous Data Sovereignty: Follow documented principles (CARE Principles, Te Mana Raraunga research)
  • Acknowledge Historical Leadership: Indigenous peoples have led sovereignty struggles for centuries
  • Apply Published Standards: Use peer-reviewed indigenous data governance frameworks
  • Defer Deep Engagement: We will approach Māori organizations only once a stable, well-developed platform is in production; our objective will be to request their help in producing a Māori version that carries their support and approval.

Implementation:

  • Footer acknowledgment (subtle, respectful)
  • /about/values page (detailed explanation)
  • Resource directory (links to Māori data sovereignty work)
  • No tokenism, no performative gestures

Values Alignment in Practice

Content Curation (Blog, Resources)

  • AI Suggests: Claude analyzes trends, proposes topics
  • Human Approves: All values-sensitive content requires human review
  • Transparency: AI reasoning visible in moderation queue
  • Attribution: Clear "AI-curated, human-approved" labels

Media Inquiries

  • AI Triages: Analyzes urgency, topic sensitivity
  • Human Responds: All responses written or approved by humans
  • Escalation: Values-sensitive topics immediately escalated to strategic review

Case Study Submissions

  • AI Reviews: Assesses relevance, completeness
  • Human Validates: Final publication decision always human
  • Quality Control: Framework alignment checked against TRA-VAL-0001

Interactive Demonstrations

  • Educational Purpose: Teach framework concepts through interaction
  • No Live Data: Demonstrations use example scenarios only
  • Transparency: Show exactly how classification and validation work

Decision Framework

When values conflict (e.g., transparency vs. privacy, speed vs. safety):

  1. Explicit Recognition: Acknowledge the tension publicly
  2. Context Analysis: Consider specific situation and stakeholders
  3. Hierarchy Application:
    • Human Safety > System Performance
    • Privacy > Convenience
    • Transparency > Proprietary Advantage
    • Long-term Sustainability > Short-term Growth
  4. Document Resolution: Record decision rationale for future reference
  5. Community Input: Seek feedback on significant value trade-offs

Review and Evolution

Annual Review Process

  • Scheduled: 2026-10-06 (one year from creation)
  • Scope: Comprehensive evaluation of values relevance and implementation
  • Authority: Human PM (John Stroh) with community input
  • Outcome: Updated version or reaffirmation of current values

Triggering Extraordinary Review

Immediate review required if:

  • Framework fails to prevent significant AI harm
  • Values found to be in conflict with actual operations
  • Major regulatory or ethical landscape changes
  • Community identifies fundamental misalignment

Evolution Constraints

  • Core values (Sovereignty, Transparency, Harmlessness, Human Judgment) are immutable
  • Guiding principles may evolve based on evidence and experience
  • Changes require explicit human approval and public documentation

Metrics for Values Adherence

Sovereignty & Self-determination

  • Zero instances of hidden AI influence
  • 100% opt-in for AI features
  • User data export capability maintained

Transparency & Honesty

  • All AI reasoning documented in moderation queue
  • Public disclosure of framework limitations
  • Clear attribution of AI vs. human content

Harmlessness & Protection

  • Zero security breaches
  • Privacy audit pass rate: 100%
  • Fail-safe activation rate tracked (how often AI defers to human review)

Human Judgment Primacy

  • 100% of values decisions reviewed by humans
  • Average escalation response time < 24 hours
  • Zero unauthorized AI autonomous actions

Community & Accessibility

  • WCAG AA compliance: 100% of pages
  • Free tier usage: >80% of all users
  • Community contributions accepted and integrated

Implementation Requirements

All features, content, and operations must:

  1. Pass Values Alignment Check: Documented review against this framework
  2. Include Tractatus Governance: Boundary enforcement, classification, validation
  3. Maintain Human Oversight: Clear escalation paths to human authority
  4. Support Transparency: Reasoning and decision processes visible
  5. Respect User Sovereignty: No manipulation, complete control, clear consent

Failure to align with these values is grounds for feature rejection or removal.
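The five requirements above amount to a gate that every feature must pass. A minimal sketch, assuming hypothetical check names that mirror the numbered list:

```python
# Hypothetical check identifiers corresponding to requirements 1-5.
REQUIRED_CHECKS = (
    "values_alignment_review",   # 1. documented review against TRA-VAL-0001
    "tractatus_governance",      # 2. boundary enforcement, classification, validation
    "human_oversight_path",      # 3. clear escalation to human authority
    "transparent_reasoning",     # 4. reasoning and decisions visible
    "user_sovereignty",          # 5. no manipulation, clear consent
)

def feature_approved(checks: dict[str, bool]) -> bool:
    """A feature passes only if every required check is present and
    true; a missing or failing check is grounds for rejection."""
    return all(checks.get(name, False) for name in REQUIRED_CHECKS)
```

Treating an absent check the same as a failed one keeps the gate fail-safe: nothing ships by omission.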


Appendix A: Values in Action Examples

Example 1: Blog Post Suggestion

AI Action: Suggests topic "Is AI Safety Overblown?"
Classification: STOCHASTIC (exploration) → escalate to STRATEGIC (values-sensitive)
Human Review: Topic involves framework credibility, requires strategic approval
Decision: Approved with requirement for balanced, evidence-based treatment
Outcome: Blog post published with AI reasoning visible, cites peer-reviewed research

Example 2: Media Inquiry Response

AI Action: Triages inquiry from major tech publication as "high urgency"
Classification: OPERATIONAL (standard process)
Human Review: Response drafted by a human, who reviews the AI summary for accuracy
Decision: Human-written response sent; AI triage saved time
Outcome: Effective media engagement, human authority maintained

Example 3: Feature Request

AI Action: Suggests adding "auto-approve" for low-risk blog posts
Classification: STRATEGIC (changes governance boundary)
Human Review: Would reduce human oversight, conflicts with core values
Decision: Rejected - all content requires human approval per TRA-VAL-0001
Outcome: Framework integrity preserved, alternative efficiency improvements explored


Appendix B: Glossary

AI Governance: Frameworks and mechanisms that control AI system behavior
Boundary Enforcement: Preventing AI from actions outside defined authority
Dogfooding: Using the framework to govern itself (meta-implementation)
Human Judgment Primacy: Core principle that humans retain decision authority
Quadrant Classification: Strategic/Operational/Tactical/System/Stochastic categorization
Time-Persistence Metadata: Instruction classification by longevity and importance
Values-Sensitive: Content or decisions that intersect with strategic values


Document Authority: This document has final authority over all platform operations. In case of conflict between this document and any other guidance, TRA-VAL-0001 takes precedence.

Next Review: 2026-10-06
Version History: v1.0 (2025-10-06) - Initial creation


This document is maintained by John Stroh (john.stroh.nz@pm.me) and subject to annual review. Changes require explicit human approval and public documentation.