Pluralistic Deliberation in Action

When AI faces values decisions—choices with no single "correct" answer—the Tractatus framework facilitates human deliberation across stakeholder perspectives instead of making autonomous choices. This interactive demo shows how the PluralisticDeliberationOrchestrator works.
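To make the facilitation pattern concrete, here is a minimal Python sketch. The class name echoes the orchestrator above, but every method, field, and message below is an illustrative assumption for this demo, not the framework's actual API: the key idea is that the orchestrator collects and surfaces perspectives rather than selecting an option itself.

```python
from dataclasses import dataclass, field

@dataclass
class Perspective:
    """One stakeholder's view on a values decision (illustrative)."""
    stakeholder: str
    position: str
    rationale: str

@dataclass
class PluralisticDeliberationOrchestrator:
    """Sketch only: gathers perspectives and defers the choice to
    humans instead of making an autonomous decision."""
    question: str
    perspectives: list = field(default_factory=list)

    def add_perspective(self, stakeholder: str, position: str, rationale: str) -> None:
        self.perspectives.append(Perspective(stakeholder, position, rationale))

    def facilitate(self) -> str:
        # Surface the trade-offs for human deliberation; do not decide.
        lines = [f"Values decision: {self.question}"]
        for p in self.perspectives:
            lines.append(f"  {p.stakeholder}: {p.position} (because {p.rationale})")
        lines.append("No autonomous choice made; decision deferred to stakeholders.")
        return "\n".join(lines)

orch = PluralisticDeliberationOrchestrator("Disclose the vulnerability?")
orch.add_perspective("end users", "report", "it protects their data")
orch.add_perspective("developer", "delay", "it preserves the project timeline")
print(orch.facilitate())
```

Note the design choice: `facilitate()` returns a structured summary of the conflict rather than a verdict, which is what distinguishes facilitation from autonomous decision-making.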

The Scenario

Context: You're using Claude Code to develop a web application. The AI discovers that your code contains a security vulnerability that could expose user data. This creates a values conflict:

  • Reporting the vulnerability protects future users but may damage your reputation
  • Staying silent preserves your project timeline but risks user harm
  • Partially disclosing balances concerns but may be seen as deceptive
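One way to see why no option is simply "correct" is to encode each option's trade-off and observe that every choice carries a residual risk. The dictionary keys and field names below are assumptions made for this sketch, not part of the framework:

```python
# Illustrative encoding of the scenario's three options and their trade-offs.
OPTIONS = {
    "report": {"protects": "future users", "risks": "developer reputation"},
    "stay silent": {"protects": "project timeline", "risks": "user harm"},
    "partial disclosure": {"protects": "some of both", "risks": "appearing deceptive"},
}

# Every option protects one value while risking another, so none
# dominates the rest on purely technical grounds.
assert all("protects" in t and "risks" in t for t in OPTIONS.values())
```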

This is a values decision—there's no universally "correct" technical answer. Different stakeholders have legitimate but conflicting perspectives.

Should the AI autonomously decide what to do, or facilitate deliberation among stakeholders?