
Pluralistic Deliberation Research: Document Overview

Purpose: Guide to understanding how Tractatus implements AI-led pluralistic deliberation to resist hierarchical dominance while maintaining safety boundaries.

Date: October 17, 2025
Status: Implementation ready (Phase 1)


Core Documents

1. Executive Summary (Start Here)

File: EXECUTIVE-SUMMARY-Pluralistic-Deliberation-in-Tractatus.md
Length: 76 pages
Audience: Critical thinkers, AI safety researchers, funders, governance experts

What it covers:

  • Problem: Value conflicts in single-user AI interaction
  • Solution: Pluralistic deliberation as core Tractatus functionality
  • How it works: 4-round protocol (8-15 minutes)
  • Trigger conditions: When deliberation activates automatically or manually
  • Single-user scenario: Complete CSP policy override example
  • Technical integration: Architecture, file structure, code examples
  • NEW: The Dichotomy Resolved — How hierarchical rules + non-hierarchical pluralism coexist
  • Arguments for critical thinkers: Philosophical, technical, practical evidence
  • Implementation roadmap: 3 phases (single-user → multi-user → societal)

Key innovations:

  • Treats single-user value conflicts as multi-stakeholder deliberation
  • User's current intent vs. past values vs. boundaries = stakeholders
  • Accommodation (not consensus): Honor multiple conflicting values simultaneously
  • Moral remainders: Explicitly document trade-offs
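The key innovations above can be sketched in code. This is a minimal, hypothetical illustration of modeling a single-user conflict as a multi-stakeholder deliberation; the names (`detectConflict`, the stakeholder fields) are illustrative assumptions, not the actual Tractatus API:

```javascript
// Hypothetical sketch: a single-user value conflict modeled as three
// stakeholders (current intent, past values, boundaries). Field and
// function names are illustrative, not the real Tractatus interfaces.
const stakeholders = [
  { id: "current-intent", value: "ship the feature today" },
  { id: "past-values", value: "keep the strict CSP policy" },
  { id: "boundaries", value: "no security regressions in production" },
];

// A conflict exists when the proposed action violates the value that
// at least one stakeholder represents.
function detectConflict(stakeholders, proposedAction) {
  const objections = stakeholders.filter((s) =>
    proposedAction.violates.includes(s.id)
  );
  return { conflicted: objections.length > 0, objections };
}

const action = { name: "override CSP", violates: ["past-values"] };
console.log(detectConflict(stakeholders, action).conflicted); // true
```

The point of the structure is that "the user" is not one monolithic voice: each facet of the user's values gets standing in the deliberation.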

Start here if: You want a comprehensive overview with concrete examples.


2. Architectural Safeguards Against LLM Hierarchical Dominance (Critical Deep Dive)

File: ARCHITECTURAL-SAFEGUARDS-Against-LLM-Hierarchical-Dominance.md
Length: 40+ pages
Audience: AI safety researchers, skeptics, technical architects

What it covers:

  • THE CORE THREAT: How LLMs impose hierarchical pattern bias through training momentum
  • THE PARADOX: How can Tractatus have hierarchical rules AND non-hierarchical pluralism?
  • THE RESOLUTION: Architectural separation of powers (harm prevention vs. value trade-offs)
  • 5 LAYERS OF PROTECTION:
    1. Code-enforced boundaries (structural)
    2. Protocol constraints (procedural)
    3. Transparency & auditability (detection)
    4. Minority protections (multi-user)
    5. Forkability (escape hatch)
  • Detailed code examples showing how each layer works
  • Red-team attack scenarios and defenses
  • Comparison to other AI governance approaches (Constitutional AI, RLHF, Democratic AI)
  • Why LLM capacity increases don't increase dominance risk
  • Open questions and future research

Key protections:

  • Stakeholder selection: Code determines (data-driven), LLM articulates (not chooses)
  • Accommodation generation: Combinatorial (all value combinations), not preferential (LLM's favorite)
  • User decision: System refuses deference ("you decide" → "I cannot decide for you")
  • Bias detection: Automated analysis of vocabulary, length, framing balance
  • Transparency logs: All LLM outputs auditable
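The "combinatorial, not preferential" protection can be made concrete with a small sketch. Assuming accommodations are candidate bundles of stakeholder values, the code below enumerates every non-empty subset, so the set of options is determined mechanically and the LLM only articulates them; the function name is hypothetical:

```javascript
// Hypothetical sketch of combinatorial accommodation generation: code
// enumerates every non-empty subset of values as a candidate
// accommodation. The LLM never picks which combinations exist; it only
// describes each one. Bitmask enumeration over n values yields 2^n - 1
// candidates.
function allValueCombinations(values) {
  const combos = [];
  const n = values.length;
  for (let mask = 1; mask < (1 << n); mask++) {
    combos.push(values.filter((_, i) => mask & (1 << i)));
  }
  return combos;
}

const values = ["efficiency", "security", "user-autonomy"];
console.log(allValueCombinations(values).length); // 7 — every non-empty subset
```

Because the option space is exhaustive by construction, a biased model cannot quietly omit the accommodation it disfavors.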

Start here if: You're skeptical about LLM neutrality and want technical proof of safeguards.


3. Implementation Tickets (For Developers)

File: ../implementation/PHASE-1-IMPLEMENTATION-TICKETS.md
Length: 20 tickets, 2-4 week timeline
Audience: Developers implementing Phase 1

What it covers:

  • 20 detailed tickets organized by priority (P0 → P3)
  • P0: Critical path (PluralisticDeliberationOrchestrator, conflict detection, config setup)
  • P1: Core functionality (4 rounds, stakeholder identification, accommodations)
  • P2: Fast path and integration (pre-action-check.js)
  • P3: Testing and validation
  • File structure, dependencies, acceptance criteria for each ticket
  • Success metrics for Phase 1

Start here if: You're ready to implement and need a task breakdown.


4. Architecture Diagrams (Visual)

Files:

What they show:

  • Main flow: User request → Pre-action check → Conflict detection → Protocol selection → 4-round deliberation → Outcome storage → Action execution
  • Trigger decision tree: When deliberation activates (severity, persistence, manual triggers)
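The trigger decision tree can be sketched as a small function. The thresholds, field names, and the "fast" vs. "full" path labels here are illustrative assumptions, not the actual configuration:

```javascript
// Hypothetical sketch of the trigger decision tree: deliberation
// activates on a manual request, on high severity, or on a persistent
// (repeated) moderate conflict; moderate one-off conflicts take the
// fast path. All thresholds are illustrative.
function shouldDeliberate(conflict) {
  if (conflict.manualTrigger) return { deliberate: true, path: "full" };
  if (conflict.severity >= 0.8) return { deliberate: true, path: "full" };
  if (conflict.severity >= 0.4 && conflict.priorOccurrences >= 3)
    return { deliberate: true, path: "full" }; // persistence escalates
  if (conflict.severity >= 0.4) return { deliberate: true, path: "fast" };
  return { deliberate: false, path: "none" };
}

console.log(shouldDeliberate({ severity: 0.5, priorOccurrences: 0 }).path); // "fast"
```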

Start here if: You want a visual understanding of the system flow.


5. Research Paper Outline (For Academic Submission)

File: RESEARCH-PAPER-OUTLINE-Pluralistic-Deliberation.md
Length: 8-12 pages (target for FAccT 2026, AIES 2026)
Audience: Academic reviewers, AI ethics researchers

What it covers:

  • Full paper structure (Abstract → Introduction → Related Work → Methods → Results → Discussion → Conclusion)
  • Simulation methodology (6 moral frameworks, CSP scenario)
  • Results (0% intervention rate, 12min completion, 4 accommodations, all frameworks honored)
  • Comparison to existing approaches (Constitutional AI, RLHF, Democratic AI)
  • Limitations (single scenario, agent-based stakeholders, Western frameworks)
  • Future work (multi-scenario validation, human subjects, cross-cultural)

Start here if: You want to submit an academic paper or understand the research rigor.


Outreach Materials (Already Created)

6. Funder Summary

File: ../outreach/FUNDER-SUMMARY-AI-Led-Pluralistic-Deliberation.md
Length: 28 pages
Audience: Potential funders, collaborators

What it covers:

  • Funding tiers: $71K (6-month pilot), $160K (12-month), $300-500K (2-3 years)
  • Budget breakdown (personnel, infrastructure, participant compensation)
  • Simulation results (0% intervention, all moral frameworks accommodated)
  • Partnership opportunities

7. Stakeholder Recruitment Emails

File: ../outreach/STAKEHOLDER-RECRUITMENT-EMAILS-Real-World-Pilot.md
Length: 22 pages
Audience: Potential pilot participants

What it covers:

  • 5 email templates (community organizations, individuals, academic networks, follow-up, confirmation)
  • Social media post templates
  • Recruitment flyer
  • Screening questions

8. Presentation Deck

Files:

What it covers:

  • 15-20 minute pitch to funders/researchers
  • Problem → Solution → Results → Next Steps
  • Visual design recommendations, speaker notes

How to Use This Research

If you're a critical thinker/skeptic:

  1. Start with: Architectural Safeguards (Sections 1-2)
  2. Read: "The Core Threat" and "5 Layers of Protection"
  3. Challenge: Red-team scenarios (Appendix B in safeguards doc)
  4. Then read: Executive Summary Section 6 (Dichotomy Resolved)

If you're a philosopher/ethicist:

  1. Start with: Executive Summary Section 7
  2. Read: Arguments 1-3 (Berlin, Rawls, Gilligan)
  3. Then read: Research Paper Outline Section 2 (Philosophical foundations)
  4. Finally: Safeguards doc Section 3 (Harm vs. trade-offs distinction)

If you're a developer/implementer:

  1. Start with: Implementation Tickets
  2. Review: Architecture Diagrams
  3. Read: Executive Summary Section 5 (Code examples)
  4. Reference: Safeguards doc Appendix C (Implementation checklist)

If you're a funder/decision-maker:

  1. Start with: Funder Summary
  2. Review: PowerPoint Presentation
  3. Read: Executive Summary Sections 1-4 (Problem → Solution → Example)
  4. Then: Safeguards doc Executive Summary (How it prevents runaway AI)

If you're an academic researcher:

  1. Start with: Research Paper Outline
  2. Review: Methodology, Results, Discussion sections
  3. Read: Executive Summary full document (Comprehensive technical + philosophical)
  4. Compare: Safeguards doc Appendix A (vs. other approaches)

Key Questions Answered

"How is this different from just asking users 'Are you sure?'"

Answer: Executive Summary FAQ Q7

  • "Are you sure?" is binary (yes/no) and doesn't engage with WHY the conflict exists
  • Deliberation surfaces competing values, presents accommodations, documents rationale

"How do you prevent LLM from manipulating the deliberation?"

Answer: Safeguards doc Section 2

  • Code selects stakeholders (not LLM discretion)
  • Combinatorial accommodation generation (not preferential)
  • Automated bias detection (vocabulary, length, framing)
  • Transparency logs (all LLM outputs auditable)
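One of the checks above, length-balance analysis, can be sketched concretely. This hypothetical snippet flags a deliberation round when the LLM's articulation of one stakeholder's position is disproportionately longer than another's; the real checks (vocabulary and framing balance) would be richer, and the function name and threshold are illustrative:

```javascript
// Hypothetical sketch of automated bias detection via length balance:
// compare word counts of the LLM's articulations per stakeholder and
// flag large imbalances. The ratio threshold (2.0) is an assumption.
function lengthBalanceCheck(articulations, maxRatio = 2.0) {
  const lengths = Object.values(articulations).map(
    (text) => text.trim().split(/\s+/).length
  );
  const ratio = Math.max(...lengths) / Math.min(...lengths);
  return { ratio, flagged: ratio > maxRatio };
}

const articulations = {
  "current-intent": "Shipping today unblocks the release and the team.",
  "past-values": "The CSP policy exists for a reason.",
};
console.log(lengthBalanceCheck(articulations).flagged); // false (8 vs 7 words)
```

A flagged round would be logged to the transparency record for human review rather than silently discarded.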

"Isn't this just moral relativism?"

Answer: Executive Summary Section 7.1

  • Value pluralism ≠ relativism
  • Multiple values can be objectively important AND conflict
  • Resolution requires context-sensitive judgment, not universal rules
  • Accountability maintained through moral remainder documentation

"Why have hierarchical rules if you support plural morals?"

Answer: Safeguards doc Section 3

  • Hierarchical: Harm prevention (privacy violations, security exploits) — enforced by code
  • Non-hierarchical: Value trade-offs (efficiency vs. security) — facilitated by LLM, decided by user
  • Different domains, different logics
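The separation of powers can be sketched as a pre-action check: harm-prevention rules are hard-coded and non-negotiable, while value trade-offs are routed to deliberation for the user to decide. Rule names, tags, and the function shape here are illustrative assumptions:

```javascript
// Hypothetical sketch of the hierarchical / non-hierarchical split:
// harm rules block unconditionally (enforced by code), value
// trade-offs route to deliberation (facilitated by LLM, decided by
// user), and everything else executes directly.
const HARM_RULES = ["privacy-violation", "security-exploit"];

function preActionCheck(action) {
  if (action.tags.some((t) => HARM_RULES.includes(t))) {
    return { allowed: false, route: "blocked-by-code" }; // hierarchical
  }
  if (action.tags.includes("value-tradeoff")) {
    return { allowed: null, route: "deliberation" }; // non-hierarchical
  }
  return { allowed: true, route: "execute" };
}

console.log(preActionCheck({ tags: ["value-tradeoff"] }).route); // "deliberation"
```

The two logics never compete: no deliberation outcome can unlock an action the harm rules block, and no harm rule decides a legitimate trade-off.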

"What if this doesn't scale to multi-user contexts?"

Answer: Executive Summary FAQ Q5

  • Single-user value conflicts are still valuable (current AI systems fail at this)
  • Multi-user is logical extension (same structure, more stakeholders)
  • Minority protections built into architecture (mandatory representation, dissent documentation)

"How do you prevent 'deliberation fatigue' where users just click through?"

Answer: Executive Summary FAQ Q2

  • Fast path for minor conflicts (30 seconds, not 15 minutes)
  • Learning: System adapts if user consistently overrides similar conflicts
  • Engagement metrics: A pattern of dismissals triggers escalation

Current Status (October 2025)

Completed

  • Simulation (6 moral frameworks, 0% intervention rate)
  • Executive summary (76 pages)
  • Architectural safeguards deep dive (40+ pages)
  • Implementation tickets (20 tickets, 2-4 weeks)
  • Architecture diagrams (2 SVG flowcharts)
  • Research paper outline (8-12 pages for FAccT/AIES 2026)
  • Funder summary (28 pages, funding tiers)
  • Stakeholder recruitment materials (5 email templates)
  • Presentation deck (PowerPoint, 26 slides)

Next Steps

Phase 1 Implementation (2-4 weeks)

  • Integrate PluralisticDeliberationOrchestrator into Tractatus
  • Deploy to tractatus_dev for testing
  • Run 10-20 real conflicts to validate approach

Research Paper Draft (1-2 months)

  • Expand outline to full 8-12 page paper
  • Additional scenario simulations (beyond CSP)
  • Submit to FAccT 2026 (January deadline)

Real-World Pilot (3-6 months after Phase 1)

  • Recruit 6-12 participants for multi-user deliberation
  • Low-risk scenario (community budgeting, organizational policy)
  • Collect stakeholder satisfaction data

Contact & Collaboration

Project Lead: [Your Name]
Email: [Your Email]
GitHub: [Repository URL]

We welcome:

  • Critical feedback (challenge our assumptions)
  • Collaboration proposals (academic, industry, policy)
  • Pilot participation (test in your context)
  • Replication studies (we'll share all materials)

The fight against amoral intelligence requires transparency, collaboration, and continuous vigilance.


Document Version: 1.0
Last Updated: October 17, 2025
Status: Research complete, implementation ready