When AI faces values decisions—choices with no single "correct" answer—the Tractatus framework facilitates human deliberation across stakeholder perspectives instead of making autonomous choices. This interactive demo shows how the PluralisticDeliberationOrchestrator works.
Context: You're using Claude Code to develop a web application. The AI discovers your code contains a security vulnerability that could expose user data. This creates a values conflict: user safety argues for immediate disclosure and a fix, while the developer's reputation and release schedule argue for quieter handling.
This is a values decision—there's no universally "correct" technical answer. Different stakeholders have legitimate but conflicting perspectives.
Should the AI autonomously decide what to do, or facilitate deliberation among stakeholders?
If the AI decided autonomously, it would need to weigh the stakeholders' values against one another, settle on a ranking, and act on it.
Why this fails: Values aren't commensurable. There's no objective function that correctly weighs "developer reputation" against "user safety." Any autonomous choice imposes the AI's (or its designers') value hierarchy on stakeholders who never consented to it.
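To make the objection concrete, here is a minimal TypeScript sketch of the autonomous approach the framework rejects. Everything in it (the `Option` type, the `WEIGHTS` constant, `autonomousChoice`) is a hypothetical illustration, not part of the framework; the flaw is not a bug in the code but the weights themselves, since any numbers written there impose a value hierarchy.

```typescript
type Option = {
  name: string;
  userSafety: number;          // how well this option protects users (0 to 1)
  developerReputation: number; // how well it protects the developer (0 to 1)
};

// There is no principled way to choose these constants; picking them
// *is* the value judgment being smuggled in.
const WEIGHTS = { userSafety: 0.7, developerReputation: 0.3 };

function autonomousChoice(options: Option[]): Option {
  // Collapsing incommensurable values onto a single scale is the category error.
  const score = (o: Option) =>
    WEIGHTS.userSafety * o.userSafety +
    WEIGHTS.developerReputation * o.developerReputation;
  return options.reduce((best, o) => (score(o) > score(best) ? o : best));
}
```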
The PluralisticDeliberationOrchestrator identifies the parties affected by this decision. Click to include each perspective:
Select at least 2 stakeholders to continue
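As a rough sketch of what this step might look like in code, the following is one hypothetical shape for stakeholder identification. The method name `identifyStakeholders`, the `Decision` and `Stakeholder` types, and the returned parties are assumptions for illustration, not the framework's actual API.

```typescript
interface Decision {
  summary: string;
}

interface Stakeholder {
  id: string;
  affectedBecause: string; // why this decision touches them
}

class PluralisticDeliberationOrchestrator {
  // Returns every affected party it can identify, in no particular order.
  identifyStakeholders(decision: Decision): Stakeholder[] {
    // Illustrative output for the demo's vulnerability scenario;
    // a real implementation would derive this from `decision.summary`.
    return [
      { id: "developer", affectedBecause: "owns the code, reputation, and release schedule" },
      { id: "users", affectedBecause: "their data could be exposed by the vulnerability" },
      { id: "maintainers", affectedBecause: "will live with whatever fix is chosen" },
    ];
  }
}
```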
The framework surfaces each stakeholder's perspective without ranking or resolving them. This is deliberation, not decision-making.
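At the type level, "surfacing without ranking" can be made visible by what a perspective record omits. The `Perspective` shape and `surfacePerspectives` function below are hypothetical, but they show the invariant: no field exists in which a rank could live.

```typescript
interface Perspective {
  stakeholderId: string;
  concerns: string[]; // what this party cares about, in their own terms
  // Deliberately absent: weight, priority, score, rank.
  // Adding any of those would turn surfacing into deciding.
}

// Illustrative only: in the demo, perspectives are gathered interactively
// from whichever stakeholders the user selected.
function surfacePerspectives(selected: string[]): Perspective[] {
  const concernsById: Record<string, string[]> = {
    developer: ["reputation", "shipping on schedule"],
    users: ["data safety", "honest disclosure"],
    maintainers: ["a fix that stays fixed"],
  };
  return selected.map((id) => ({
    stakeholderId: id,
    concerns: concernsById[id] ?? [],
  }));
}
```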
The framework has facilitated deliberation but made no autonomous choice. The human must now decide, informed by all perspectives:
Available Options:
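A hedged sketch of the hand-off point follows. The names `DeliberationSummary` and `recordHumanDecision` are invented for illustration; the design point is that the chosen option arrives as an argument supplied by a person rather than being computed anywhere in the framework.

```typescript
interface DeliberationSummary {
  options: string[]; // e.g. "patch immediately", "disclose first", "defer"
  perspectives: { stakeholderId: string; concerns: string[] }[];
}

function recordHumanDecision(
  summary: DeliberationSummary,
  chosenOption: string // supplied by a human, never computed here
): { decision: string; decidedBy: "human" } {
  // The only validation is structural: the choice must be one of the
  // options that was actually deliberated.
  if (!summary.options.includes(chosenOption)) {
    throw new Error(`"${chosenOption}" was not among the deliberated options`);
  }
  return { decision: chosenOption, decidedBy: "human" };
}
```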
Key Principle: The PluralisticDeliberationOrchestrator facilitates deliberation over values conflicts but never decides them.
What the Framework Does:
- Identifies the stakeholders a decision affects
- Surfaces each stakeholder's perspective on its own terms
- Structures the options so a human can compare them
- Hands the final choice to the human

What the Framework Doesn't Do:
- Rank or score stakeholder values against each other
- Recommend or select an option
- Resolve the values conflict on anyone's behalf
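One way to read the two lists above is as an API contract. The interface below is an assumption about design intent rather than the framework's real surface: every facilitation step gets a method, and no method returns a chosen option.

```typescript
interface DeliberationFacilitator {
  identifyStakeholders(decisionSummary: string): string[];
  surfacePerspectives(
    stakeholderIds: string[]
  ): { stakeholderId: string; concerns: string[] }[];
  listOptions(decisionSummary: string): string[];
  // Deliberately absent: decide(), rankOptions(), recommend().
  // Any caller that needs a decision has to get one from a human.
}
```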
Why this matters: Values pluralism means there's no single objective answer to normative questions. Preserving human agency requires AI systems to facilitate deliberation rather than automate value judgments—even when that's less "efficient."