Background Materials Packet
PluralisticDeliberationOrchestrator Pilot - Algorithmic Hiring Transparency
Project: Tractatus Pluralistic Deliberation Pilot
Principal Investigator: [NAME, TITLE, EMAIL]
Date: 2025-10-17
Scenario: Algorithmic Hiring Transparency
Table of Contents
- Welcome & Overview
- The Scenario: Algorithmic Hiring Transparency
- Deliberation Process Structure
- Your Role as a Stakeholder
- Time Commitment Breakdown
- AI Facilitation: What to Expect
- Who Else Will Participate?
- Expected Outcomes
- Key Terms & Concepts
- Background Reading Materials
- Frequently Asked Questions
1. Welcome & Overview
Thank you for your interest!
You've been invited to participate in a groundbreaking pilot deliberation on algorithmic hiring transparency because your perspective is essential to addressing this complex values conflict.
What Is This Project?
This is a research pilot testing whether AI can fairly facilitate multi-stakeholder deliberation on contentious governance issues. You'll join 5 other stakeholders - 6 participants in total, representing job applicants, employers, AI vendors, regulators, labor advocates, and AI ethics researchers - to explore whether competing values can be accommodated in algorithmic hiring transparency policies.
Why This Matters
Substantive Impact: Your deliberation will inform real policy debates happening right now:
- NYC's AI hiring law (Local Law 144, 2023)
- EU AI Act transparency requirements
- Federal EEOC guidance on algorithmic discrimination
- State-level algorithmic accountability legislation
Methodological Impact: You'll help answer: Can AI assist in democratic governance fairly? Your feedback on AI facilitation quality is as valuable as your input on hiring transparency itself.
What Makes This "Pluralistic"?
Traditional deliberation seeks consensus. Pluralistic deliberation seeks accommodation - honoring multiple values simultaneously, even when they conflict. Dissent is documented as legitimate, not suppressed.
You will NOT be asked to:
- ❌ Abandon your values
- ❌ Compromise on core principles
- ❌ Agree with others for the sake of agreement
You WILL be asked to:
- ✅ Understand others' perspectives deeply
- ✅ Identify shared values where they exist
- ✅ Explore creative accommodations that honor multiple values
- ✅ Document remaining disagreements respectfully
2. The Scenario: Algorithmic Hiring Transparency
The Core Question
When employers use AI algorithms to screen job applicants, how much should they be required to disclose?
This question pits multiple legitimate values against each other:
Competing Values:
- Fairness: Applicants want to know how they're evaluated and challenge discriminatory criteria
- Privacy: Disclosure of evaluation factors could expose sensitive personal data
- Accountability: Public oversight requires knowing what algorithms do
- Trade Secrets: Employers/vendors fear competitors will copy their systems
- Gaming Risk: Full transparency could enable applicants to manipulate the system
- Innovation: Over-regulation might discourage AI development
- Efficiency: Detailed disclosures are costly and time-consuming
No easy answer exists. Different moral frameworks prioritize these values differently.
Real-World Stakes
Who's affected:
- 70 million U.S. job applications screened by AI annually (and growing)
- Applicants who don't know why they were rejected
- Employers facing discrimination lawsuits and regulatory uncertainty
- AI vendors navigating conflicting state/federal requirements
- Regulators trying to protect rights without stifling innovation
Current Status:
- NYC requires bias audits (but not algorithm disclosure)
- EU AI Act mandates transparency for "high-risk" systems (but details unclear)
- Multiple lawsuits over algorithmic discrimination
- No federal standard exists
Why This Scenario?
This scenario scored 96/100 in our selection rubric because it:
- Involves genuine moral conflict (not just technical disagreement)
- Affects vulnerable populations (job applicants, especially marginalized groups)
- Has immediate real-world relevance (legislation pending)
- Draws on multiple moral frameworks (consequentialist efficiency vs. deontological rights)
- Has no obvious "correct" answer
- Could set precedent for other AI transparency debates (credit scoring, insurance, criminal justice)
3. Deliberation Process Structure
Format: Hybrid (Asynchronous + Synchronous)
Week 1-2: Asynchronous Position Statements
- You'll submit a written position (500-1000 words) explaining your perspective
- Time: ~1 hour
- Purpose: Ensure everyone's voice is heard before synchronous discussion
Week 3: Synchronous Video Deliberation
- Two 2-hour video conference sessions (4 hours total)
- Dates/times scheduled to accommodate all 6 participants
- Purpose: Engage deeply with other perspectives, explore accommodations
- Format: AI-facilitated discussion with human observer present
Week 4: Asynchronous Refinement
- Review and provide feedback on outcome document
- Complete post-deliberation survey
- Time: ~1 hour
- Purpose: Ensure your perspective is accurately represented
Total Time Commitment: about 6 hours over 4 weeks (4 hours synchronous plus roughly 2 hours asynchronous)
Four Deliberation Rounds
Your synchronous deliberation will follow this structure:
Round 1: Position Statements (60 minutes)
- Each stakeholder presents their perspective (5-7 minutes each)
- AI facilitator identifies moral frameworks represented
- No debate yet - just listening and understanding
Round 2: Shared Values Discovery (45 minutes)
- Identify values that ALL stakeholders share (e.g., "accurate hiring decisions are good")
- Explore common ground as foundation for accommodation
- AI facilitator summarizes shared values
Round 3: Accommodation Exploration (60 minutes)
- Brainstorm policy options that honor multiple values simultaneously
- Discuss trade-offs and moral remainders (what's sacrificed)
- Consider tiered approaches (different rules for different contexts)
- AI facilitator suggests accommodation strategies
Round 4: Outcome Documentation (45 minutes)
- Discuss whether accommodation was reached
- Document remaining disagreements respectfully
- Identify "moral remainder" (what values couldn't be fully honored)
- AI facilitator drafts outcome summary for your review
Breaks: a 10-minute break between Rounds 1 and 2 (Session 1) and between Rounds 3 and 4 (Session 2)
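For technically curious readers, here is a minimal sketch of how this round structure might be encoded in the pilot's scheduling tooling. The names and layout are illustrative assumptions, not the actual PluralisticDeliberationOrchestrator configuration:

```python
# Purely illustrative sketch -- hypothetical names and layout, not the
# actual PluralisticDeliberationOrchestrator configuration.
ROUNDS = [
    {"round": 1, "session": 1, "name": "Position Statements", "minutes": 60},
    {"round": 2, "session": 1, "name": "Shared Values Discovery", "minutes": 45},
    {"round": 3, "session": 2, "name": "Accommodation Exploration", "minutes": 60},
    {"round": 4, "session": 2, "name": "Outcome Documentation", "minutes": 45},
]
BREAK_MINUTES = 10  # one break per session, between its two rounds

# 210 minutes of rounds plus 20 minutes of breaks.
total = sum(r["minutes"] for r in ROUNDS) + 2 * BREAK_MINUTES
print(f"Total synchronous time: {total} minutes")  # prints 230
```

The printed total comes to about 4 hours, consistent with the two 2-hour sessions described above.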
4. Your Role as a Stakeholder
What We're Asking You To Bring
Your Expertise:
- Lived experience with algorithmic hiring (as applicant, employer, vendor, regulator, etc.)
- Professional knowledge of your domain
- Understanding of how your stakeholder group is affected
Your Values:
- Moral framework that guides your perspective (consequentialism, deontology, virtue ethics, care ethics, communitarianism, etc. - don't worry, we'll help identify this)
- Core principles you won't compromise
- Vision of what "good" algorithmic hiring looks like
Your Openness:
- Curiosity about why others think differently
- Willingness to consider accommodations (without abandoning your values)
- Respect for dissent as legitimate
What We're NOT Asking
❌ Not policymaking: This is a pilot. Outcomes will inform policy but have no binding authority.
❌ Not consensus-building: You will NOT be pressured to agree. Documented dissent is success, not failure.
❌ Not representation: You're here for your perspective, but you don't "represent" your entire stakeholder group officially.
❌ Not expert testimony: We want your values and reasoning, not just facts (though facts matter!).
Ground Rules for Deliberation
All participants will be asked to commit to:
- Respect: Disagree with ideas, not people. No personal attacks.
- Confidentiality: Don't share others' identities or statements outside the deliberation without permission.
- Good Faith: Assume others are acting in good faith, even when you disagree.
- Listening: Seek to understand before seeking to be understood.
- Honesty: Share your real values, not what you think we want to hear.
- Openness: Consider possibilities you hadn't thought of before.
- Dissent: You have the RIGHT to disagree. Dissent will be documented respectfully.
If any participant violates these ground rules, the human observer will intervene immediately.
5. Time Commitment Breakdown
Week 1-2: Asynchronous Position Statement (~1 hour)
What you'll do:
- Read this background packet (30 min)
- Write your position statement (500-1000 words) addressing:
  - What transparency should employers be required to provide?
  - What values guide your position? (fairness, privacy, innovation, etc.)
  - What concerns you about alternative approaches?
  - What trade-offs are you willing/unwilling to make?
- Deadline: [DATE], 5:00 PM ET
Support available:
- Office hours with facilitator if you have questions
- Sample position statements available (from past deliberations on different topics)
Week 3: Synchronous Video Deliberation (4 hours)
Session 1: [DATE], [TIME] (2 hours)
- Round 1: Position Statements (60 min)
- Break (10 min)
- Round 2: Shared Values Discovery (45 min)
Session 2: [DATE], [TIME] (2 hours)
- Round 3: Accommodation Exploration (60 min)
- Break (10 min)
- Round 4: Outcome Documentation (45 min)
Technical requirements:
- Stable internet connection
- Computer with webcam and microphone (video required for full participation, but accommodations available if needed)
- Quiet space where you can focus
Accessibility:
- Closed captioning available upon request
- If video participation is a barrier, please contact us for alternative arrangements
Week 4: Asynchronous Refinement (~1 hour)
What you'll do:
- Review AI-generated outcome document (30 min)
- Provide feedback: "Does this accurately represent your position and the deliberation?" (15 min)
- Complete post-deliberation survey on process quality and AI facilitation (15 min)
- Deadline: [DATE], 5:00 PM ET
Optional:
- Decide whether you'd like to be publicly identified in research outputs (you can decide after seeing the materials)
6. AI Facilitation: What to Expect
What Does "AI-Led" Mean?
The deliberation will be facilitated by an artificial intelligence system called PluralisticDeliberationOrchestrator. This is an experimental methodology, and your feedback on whether it works is a core research goal.
What the AI Will Do
- ✅ Pose discussion questions to guide the 4 rounds
- ✅ Summarize stakeholder positions neutrally
- ✅ Identify moral frameworks in tension (consequentialism, deontology, etc.)
- ✅ Suggest accommodation options (not prescribe them)
- ✅ Draft outcome documents for your review
- ✅ Track facilitation quality (intervention counts, stakeholder satisfaction)
What the AI Will NOT Do
- ❌ NOT decide policy outcomes - You and other stakeholders decide; AI only facilitates
- ❌ NOT advocate for any position - AI remains neutral
- ❌ NOT override the human observer - Human has final authority
- ❌ NOT proceed if you're uncomfortable - You can request human facilitation at any time
Human Observer: Your Safety Net
A trained human observer will be present at ALL times (in-person or via video). The observer will:
- Monitor AI facilitation quality (Is the AI fair? Clear? Culturally sensitive?)
- Watch for stakeholder distress (Do you seem uncomfortable?)
- Detect pattern bias (Is the AI inadvertently centering vulnerable groups as "the problem"?)
- Intervene immediately if:
  - You request human facilitation
  - The AI makes an error or shows bias
  - Anyone shows signs of distress
  - Safety or ethical concerns arise
The human observer MUST intervene for these mandatory triggers:
- Stakeholder distress (you express discomfort or go silent)
- Pattern bias (AI uses stigmatizing or offensive framing)
- Stakeholder disengagement (hostile or withdrawn behavior)
- AI malfunction (nonsensical or contradictory responses)
- Confidentiality breach (AI shares information it shouldn't)
- Ethical boundary violation (AI advocates instead of facilitates)
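For technically curious readers, here is a minimal sketch of how these six mandatory triggers might be encoded in software. All names are hypothetical illustrations, not the pilot's actual implementation:

```python
# Purely illustrative sketch -- hypothetical names, not the pilot's
# actual implementation of the mandatory intervention triggers.
from dataclasses import dataclass
from enum import Enum, auto


class MandatoryTrigger(Enum):
    """The six conditions that REQUIRE human intervention."""
    STAKEHOLDER_DISTRESS = auto()        # discomfort expressed, or sudden silence
    PATTERN_BIAS = auto()                # stigmatizing or offensive AI framing
    STAKEHOLDER_DISENGAGEMENT = auto()   # hostile or withdrawn behavior
    AI_MALFUNCTION = auto()              # nonsensical or contradictory responses
    CONFIDENTIALITY_BREACH = auto()      # AI shares information it shouldn't
    ETHICAL_BOUNDARY_VIOLATION = auto()  # AI advocates instead of facilitating


@dataclass
class ObserverAlert:
    """One flagged event during a session."""
    trigger: MandatoryTrigger
    session_minute: int
    note: str


def must_intervene(alerts: list[ObserverAlert]) -> bool:
    """Any mandatory trigger obliges the human observer to step in."""
    return bool(alerts)
```

Encoding the triggers explicitly would make the observer's mandate auditable: every alert maps to one of the six conditions listed above.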
Your Rights Regarding AI Facilitation
- ✅ You can request human facilitation at any time for any reason (no justification needed)
- ✅ You can pause the deliberation if you need a break or feel uncomfortable
- ✅ You can withdraw if AI facilitation is not working for you
- ✅ You will receive a transparency report showing all AI vs. human actions
Why Are We Testing AI Facilitation?
Potential Benefits:
- Scalability: AI could enable deliberation for issues that lack human facilitation resources
- Neutrality: AI might be perceived as less biased than human facilitators (though it has its own biases)
- Consistency: AI applies same framework across all stakeholders
- Innovation: Demonstrates feasibility of AI-assisted governance
Potential Risks:
- AI bias: System might inadvertently favor certain perspectives or use harmful framings
- Discomfort: You might feel uncomfortable being facilitated by AI
- Errors: AI might misunderstand nuance or make mistakes
Your feedback will help us understand: Is AI facilitation viable? Under what conditions? What safeguards are needed?
If you're uncomfortable with AI facilitation, please let us know BEFORE the deliberation starts. We can arrange human-only facilitation or you can decline participation without penalty.
7. Who Else Will Participate?
Stakeholder Composition (6 Total)
You'll deliberate with 5 other participants representing:
1. Job Applicant Advocate
- Represents: People seeking employment, especially those affected by algorithmic screening
- Likely values: Fairness, transparency, accountability, non-discrimination
- Likely concerns: Algorithms discriminate invisibly; applicants can't challenge rejections
2. Employer / HR Representative
- Represents: Companies using AI hiring tools
- Likely values: Efficiency, cost control, legal compliance, quality hires
- Likely concerns: Over-regulation stifles innovation; disclosure reveals trade secrets
3. AI Vendor Representative
- Represents: Companies that build/sell algorithmic hiring tools
- Likely values: Innovation, competition, intellectual property protection
- Likely concerns: Competitors will copy algorithms; customers will switch to less transparent tools
4. Regulator / Policy Expert
- Represents: Government agencies enforcing anti-discrimination and consumer protection laws
- Likely values: Public accountability, legal clarity, rights protection
- Likely concerns: Need to balance transparency with practicality; patchwork state laws are confusing
5. Labor Rights Advocate
- Represents: Workers' rights organizations and labor unions
- Likely values: Power balance, collective bargaining, worker autonomy
- Likely concerns: Algorithms shift power to employers; workers have no voice in design
6. AI Ethics Researcher
- Represents: Academic/research perspective on algorithmic fairness
- Likely values: Scientific validity, evidence-based policy, long-term societal impact
- Likely concerns: Current transparency measures don't actually achieve fairness goals
Why This Composition?
These 6 stakeholder types represent the full range of perspectives affected by algorithmic hiring transparency policy:
- Those harmed by opacity (applicants, workers)
- Those harmed by transparency (vendors, employers)
- Those tasked with balancing (regulators)
- Those studying effectiveness (researchers)
No stakeholder type has majority power. All perspectives must be taken seriously.
Diversity Considerations
We are actively recruiting stakeholders who:
- Have direct lived experience with algorithmic hiring (not just theoretical knowledge)
- Represent diverse demographic backgrounds (race, gender, age, disability status, etc.)
- Come from varied professional contexts (tech companies, small businesses, government, academia, advocacy orgs)
- Bring different moral frameworks to the table (consequentialist, deontological, virtue ethics, care ethics, etc.)
Your unique perspective is why you were invited. There are no "replaceable" participants.
8. Expected Outcomes
What Will We Produce?
1. Outcome Document (Public)
A summary of the deliberation including:
- Values in tension: What moral frameworks were represented?
- Shared values: What did all stakeholders agree on?
- Accommodation options explored: What policy approaches were discussed?
- Outcome: Was accommodation reached? What values were prioritized/deprioritized?
- Dissenting perspectives: What disagreements remain (documented respectfully)?
- Moral remainder: What values couldn't be fully honored?
Confidentiality: Your identity will be pseudonymized by default (e.g., "Employer Representative A"). You can opt into public attribution later if you choose.
2. Transparency Report (Public)
A detailed log showing:
- All AI facilitation actions (prompts, summaries, suggestions)
- All human interventions (when, why, how resolved)
- Safety escalations (if any occurred)
- Stakeholder satisfaction ratings
- Lessons learned about AI facilitation quality
Purpose: Demonstrate accountability for AI-led facilitation.
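To make this concrete for technically curious readers, here is a hedged sketch of what a single log entry might look like. The field names are assumptions for illustration, not the pilot's actual schema:

```python
# Purely illustrative sketch -- hypothetical field names, not the
# pilot's actual transparency-log schema.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class FacilitationLogEntry:
    """One auditable action recorded in the transparency report."""
    timestamp: datetime
    actor: str         # "ai" or "human_observer"
    action: str        # e.g., "summarize_positions", "intervene"
    round_number: int  # 1-4, per the deliberation structure
    detail: str        # plain-language description of what happened


# Example: the observer pauses Round 3 after a pattern-bias concern.
entry = FacilitationLogEntry(
    timestamp=datetime.now(timezone.utc),
    actor="human_observer",
    action="intervene",
    round_number=3,
    detail="Paused deliberation after pattern-biased framing was flagged.",
)
```

A public report would then be the chronological list of such entries, covering both AI actions and human interventions.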
3. Research Findings (Public)
Analysis of:
- Process quality: Did pluralistic deliberation work? What helped/hindered?
- AI facilitation effectiveness: Was AI fair? Where did it succeed/fail?
- Substantive insights: What did we learn about algorithmic hiring transparency conflicts?
- Methodological lessons: When is AI-assisted governance viable?
4. Policy Brief (Public)
A practitioner-friendly summary for:
- Legislators considering algorithmic transparency laws
- Regulators implementing existing laws
- Companies designing algorithmic hiring systems
- Advocates pushing for transparency
Note: This is a pilot, so outcomes are informative, not prescriptive. Real policymakers will decide.
How Will Outcomes Be Used?
Research:
- Academic publications on pluralistic deliberation and AI governance
- Conference presentations demonstrating methodology
- Open-source release of deliberation framework (so others can replicate)
Policy Influence:
- Shared with NYC, EU, and federal regulators considering transparency rules
- Cited in advocacy campaigns for algorithmic accountability
- Used to inform stakeholder engagement best practices
Demonstration:
- Video highlights may be used in public presentations (only with your explicit consent)
- Case study for future values conflicts (credit scoring, content moderation, etc.)
You control: whether you're publicly identified and whether your video is used publicly (you can decide after seeing the materials).
What Happens If We Don't Reach Accommodation?
That's okay! Pluralistic deliberation documents dissent as legitimate.
If the deliberation ends with:
- ✅ Accommodations explored but dissent remains → SUCCESS (we learned what the tensions are)
- ✅ Shared values identified but no consensus → SUCCESS (we found common ground to build from)
- ✅ Moral frameworks clarified → SUCCESS (we understand why people disagree)
Failure would be:
- ❌ Stakeholders unwilling to engage with other perspectives
- ❌ Deliberation process breaks down (hostility, bad faith, disengagement)
- ❌ AI facilitation causes harm that isn't caught
The human observer is there to prevent failure scenarios.
9. Key Terms & Concepts
Pluralistic Deliberation
Definition: A decision-making process that seeks to accommodate multiple moral frameworks simultaneously, rather than forcing consensus. Dissent is documented as legitimate.
Example: Instead of choosing "full transparency" vs. "no transparency," pluralistic deliberation might explore tiered approaches (different rules for different contexts) that honor both accountability and trade secret concerns.
Contrast with:
- Consensus-building: Everyone must agree (pluralistic deliberation doesn't require agreement)
- Majority rule: Most popular view wins (pluralistic deliberation honors minority perspectives)
- Compromise: Everyone gives up something (pluralistic deliberation seeks creative accommodations)
Moral Frameworks
Definition: Different ethical perspectives that prioritize values differently.
Common frameworks in this deliberation:
1. Consequentialism (Outcome-focused)
- Principle: Actions are right if they produce good outcomes (e.g., hire best candidates, prevent discrimination)
- Applied here: Transparency is good IF it leads to fairer hiring outcomes; bad if it enables gaming
- Key question: "Will this policy lead to better or worse results?"
2. Deontology (Rights/Duties-focused)
- Principle: Actions are right if they respect rights and duties (e.g., applicants have a right to know how they're judged)
- Applied here: Transparency is required as a matter of justice, regardless of outcomes
- Key question: "Does this policy respect people's rights?"
3. Virtue Ethics (Character-focused)
- Principle: Actions are right if they reflect virtuous character (honesty, fairness, wisdom)
- Applied here: Employers should be honest about how they evaluate applicants because honesty is virtuous
- Key question: "What would a virtuous person do?"
4. Care Ethics (Relationship-focused)
- Principle: Actions are right if they nurture relationships and attend to needs
- Applied here: Transparency policies should consider how they affect trust between employers and applicants
- Key question: "How does this policy affect relationships?"
5. Communitarianism (Community-focused)
- Principle: Actions are right if they serve the common good and community values
- Applied here: Transparency should serve community goals (fair hiring practices, economic efficiency)
- Key question: "What does the community need?"
Why this matters: People often talk past each other because they're using different moral frameworks. Identifying frameworks helps us understand why values conflict.
Incommensurability
Definition: When two values cannot be measured on a single scale (e.g., you can't directly compare "fairness" to "trade secrets" because they're different types of goods).
Example: "How many units of trade secret protection is one unit of applicant fairness worth?" - This question is nonsensical because the values are incommensurable.
Implication: We can't "optimize" our way to a solution. We need to make tragic choices (Calabresi & Bobbitt, 1978) where some values are deprioritized.
Moral Remainder
Definition: The values or principles that couldn't be fully honored in a decision, even if the decision was the best available option.
Example: If the deliberation reaches an accommodation that prioritizes accountability over trade secret protection, the moral remainder is the legitimate concern that competitors might copy algorithms. This remainder should be acknowledged, not dismissed.
Why it matters: Acknowledging moral remainder shows respect for dissenting perspectives and prevents dismissing their concerns as "solved."
Accommodation (vs. Consensus)
Consensus: Everyone agrees on a single solution.
Accommodation: Multiple values are honored simultaneously, even if not everyone agrees this is the best approach. Dissent is documented respectfully.
Example:
- Consensus: "We all agree employers should disclose X, Y, Z."
- Accommodation: "We've identified a tiered approach where high-stakes hiring (C-suite) requires more disclosure than low-stakes hiring (entry-level temp workers). This honors accountability (deontological) AND efficiency (consequentialist) concerns. Job applicant advocates still believe all hiring should have full disclosure (dissent documented)."
Pattern Bias
Definition: When a deliberation process (or AI) inadvertently centers vulnerable populations as "the problem" rather than centering the system that affects them.
Example of pattern bias:
- ❌ BAD FRAMING: "How do we prevent job applicants from gaming transparent algorithms?"
- ✅ GOOD FRAMING: "How do we design algorithms that are both transparent and robust against manipulation?"
Why it matters: Framing shapes whose concerns are taken seriously. The human observer will intervene if AI uses pattern-biased framing.
AI Safety (in Deliberation Context)
Definition: Measures to ensure AI facilitation doesn't harm stakeholders or compromise deliberation integrity.
Key safety measures in this pilot:
- Human observer present at all times (authority to intervene)
- Intervention triggers (6 mandatory, 5 discretionary)
- Transparency logging (all AI actions documented)
- Stakeholder rights (can request human facilitation anytime)
- Safety escalation procedures (pause session if critical concern arises)
10. Background Reading Materials
Required Reading (Please read before Week 1-2 position statement)
1. Algorithmic Hiring: How We Got Here
Source: [Placeholder - We'll provide a 2-3 page summary document]
Key points:
- Employers have used algorithmic screening since the 1990s (keyword matching in resumes)
- Modern AI systems use machine learning to predict "quality hires" based on past patterns
- 70 million U.S. job applications screened by AI annually (2023 estimate)
- Concerns about discrimination emerged when studies showed algorithms replicate historical biases (e.g., penalizing women for gaps in employment due to caregiving)
Why this matters: Understand what "algorithmic hiring" actually means in practice.
2. Current Legal Landscape
Source: [Placeholder - We'll provide a 2-3 page summary document]
Key points:
- NYC Local Law 144 (2023): Requires annual bias audits (tests whether algorithm discriminates by race/gender), but does NOT require disclosure of algorithm itself
- EU AI Act (2024): Classifies hiring algorithms as "high-risk AI," mandates transparency, but details are still being finalized
- Federal: EEOC interprets Title VII (anti-discrimination law) to apply to algorithms, but no specific transparency requirements yet
- State patchwork: Illinois, Maryland, California have various disclosure and consent laws
Why this matters: Understand what current law requires (and doesn't require).
3. Competing Perspectives: Short Case Studies
Source: [Placeholder - We'll provide 3-4 short vignettes, ~1 page each]
Example vignettes:
- Case A: Job applicant denied without explanation (highlights fairness concerns)
- Case B: Vendor sued for discrimination (highlights legal liability concerns)
- Case C: Small business overwhelmed by compliance costs (highlights efficiency concerns)
- Case D: Algorithm gaming scandal (highlights manipulation concerns)
Why this matters: See the issue from multiple stakeholder perspectives before deliberation starts.
Optional Reading (For those who want deeper background)
Academic Sources:
- Barocas & Selbst (2016): "Big Data's Disparate Impact" - Explains how algorithms can discriminate even without explicit bias
- Kroll et al. (2017): "Accountable Algorithms" - Proposes transparency frameworks for automated decision-making
- Pasquale (2015): The Black Box Society - Argues for algorithmic accountability in hiring, finance, health
Legal/Policy Sources:
- NYC Department of Consumer and Worker Protection: Guidance on Local Law 144 compliance
- EU AI Act: Article 13 (Transparency Requirements for High-Risk AI Systems)
- EEOC Technical Assistance Document: "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees" (May 2022)
Practitioner Sources:
- Society for Human Resource Management (SHRM): "Artificial Intelligence in the Hiring Process: Compliance and Ethical Considerations"
- Partnership on AI: "Algorithmic Impact Assessment" framework
We'll provide links/PDFs for all materials - you don't need to search for them.
11. Frequently Asked Questions
About Participation
Q: Am I required to participate if I've been invited?
A: No. Participation is completely voluntary. You can decline without penalty, and there's no negative consequence for saying no.
Q: Can I withdraw after I've started?
A: Yes. You can withdraw at any time (before, during, or after the deliberation). If you withdraw during or after, we'll ask if we can use data collected up to that point, but you can say no.
Q: What if I can't make the scheduled video call times?
A: Contact us as soon as possible. We'll try to find alternative times that work for all 6 participants. If we can't accommodate your schedule, we may need to invite a replacement participant (no penalty to you).
Q: Will I be paid for my time?
A: This is a volunteer pilot with no financial compensation. However, if compensation is a barrier to your participation, please contact us. We may be able to provide a modest honorarium (up to $500, subject to funding availability) or reimburse childcare/travel expenses.
Q: Can I participate anonymously?
A: Yes. By default, research outputs will use pseudonyms (e.g., "Employer Representative A"). You'll be asked AFTER the deliberation if you'd like to be publicly identified - you can decline.
About AI Facilitation
Q: What if I don't trust the AI?
A: That's a legitimate concern. You have three options:
- Request human facilitation at any time during the deliberation (no justification needed)
- Provide feedback after the deliberation (we'll improve the AI based on your input)
- Decline participation if you're uncomfortable with AI facilitation (no penalty)
Q: What if the AI makes a mistake?
A: The human observer will intervene immediately if the AI makes an error. All interventions will be logged in the transparency report.
Q: What if the AI seems biased?
A: The human observer is specifically trained to detect pattern bias (e.g., framing vulnerable groups as "the problem"). If you feel the AI is biased, you can:
- Raise the concern directly during the deliberation
- Request human facilitation for that segment
- Provide feedback in the post-deliberation survey
Q: Who built the AI, and what are their incentives?
A: The PluralisticDeliberationOrchestrator is built by [ORGANIZATION] as a research project, not a commercial product. The research team's incentive is to demonstrate whether AI-assisted governance is viable, which requires honest reporting of failures as much as successes. If AI facilitation doesn't work, that's a valuable research finding.
Q: Can the AI read my private messages?
A: If you send private messages to the facilitators (AI or human), those are confidential and will NOT be shared with other participants or in public outputs. The AI will only reference information you share in the group deliberation.
About the Deliberation Process
Q: What if I strongly disagree with other participants?
A: That's expected! Disagreement is the point. You will NOT be pressured to agree. Dissenting perspectives will be documented respectfully in the outcome document.
Q: What if someone is hostile or disrespectful?
A: The human observer will intervene immediately if anyone violates the ground rules (respect, good faith, confidentiality). Repeated violations may result in a participant being removed from the deliberation.
Q: What if I don't know my "moral framework"?
A: That's fine! The AI facilitator will help identify moral frameworks based on your position statement and discussion contributions. You don't need to self-identify as "consequentialist" or "deontological" - just explain your reasoning, and the AI will identify the framework.
Q: What if I change my mind during the deliberation?
A: That's great! Changing your mind (or refining your position) after hearing other perspectives is a sign of good deliberation. You're not locked into your initial position statement.
Q: What if we can't reach accommodation?
A: That's okay! Documented dissent is a legitimate outcome. The goal is to understand why values conflict and explore whether accommodation is possible - not to force agreement.
About Outcomes and Use
Q: Will my participation influence real policy?
A: Maybe. This is a pilot, so outcomes are informative (not binding). However, we'll share findings with NYC, EU, and federal regulators who are actively writing algorithmic transparency rules. Your deliberation could shape real policy debates.
Q: Can I cite this deliberation in my own work?
A: Yes! You can reference the deliberation in academic papers, blog posts, or presentations. If you opt into public attribution, you can identify yourself as a participant. If you remain pseudonymous, you can still describe your experience without identifying others.
Q: What if I disagree with how my perspective is represented in the outcome document?
A: You'll have a chance to provide feedback during Week 4 refinement. If you feel your position is misrepresented, you can request corrections. If we can't resolve the disagreement, your objection will be noted in the final document.
Q: Will this deliberation set a precedent that constrains future deliberations?
A: No. The Precedent database (where this deliberation will be stored) is informative, not prescriptive. Future deliberations on similar issues will be informed by your deliberation but are free to reach different conclusions based on different contexts.
About Data and Privacy
Q: What data will be collected about me?
A: We'll collect:
- Your position statement (Week 1-2)
- Video/audio recording of synchronous sessions (Week 3) - if you consent
- Transcripts of all discussions
- Your feedback survey responses (Week 4)
- Facilitation logs (AI vs. human actions, interventions)
See the Informed Consent Form for full details.
Q: Will my personal information be shared publicly?
A: No. Your name, employer, and email will NOT be publicly shared unless you explicitly opt into public attribution (which you decide AFTER seeing the materials).
Q: Can I request that my data be deleted?
A: Yes. You can request data deletion within 30 days of deliberation completion (unless already anonymized and published). After 30 days, data may be retained for research purposes.
Q: Who owns the deliberation data?
A: [ORGANIZATION] retains ownership of the data for research purposes, but you retain ownership of your specific contributions (your position statement, your statements during deliberation). If you want to reuse your own contributions elsewhere, you can.
About Logistics
Q: What technology do I need?
A: For synchronous sessions (Week 3):
- Computer or tablet with webcam and microphone
- Stable internet connection
- Web browser (Chrome, Firefox, Safari, Edge)
- Quiet space where you can focus
We'll provide the video conferencing link and instructions one week before the deliberation.
Q: What if I have technical difficulties during the video call?
A: We'll have a technical support contact standing by. If you disconnect, we'll pause the deliberation and wait for you to rejoin. If the issue can't be resolved quickly, we'll reschedule that session.
Q: Can I use my phone instead of a computer?
A: Video participation works best on a computer/tablet with a larger screen. If mobile is your only option, contact us to discuss accommodations.
Q: Are transcripts or captions available?
A: Yes. We'll provide live captions during video sessions (automatic) and written transcripts afterward. If you need specific accessibility accommodations, please contact us.
Next Steps
Before the Week 3 deliberation:
- Read this background packet (you just did!)
- Review the Informed Consent Form (separate document)
- Sign and return the consent form by [DATE]
- Read required background materials (we'll send links)
- Write your position statement (500-1000 words, due [DATE])
Questions?
Contact:
- Project Lead: [NAME, EMAIL, PHONE]
- AI Safety Lead: [NAME, EMAIL]
- Technical Support: [NAME, EMAIL]
Office Hours: [DATES/TIMES] via video conference (optional, for questions)
Thank you for your willingness to participate in this groundbreaking pilot. Your perspective is essential, and we're committed to ensuring this is a respectful, productive, and safe deliberation process.
Document Version: 1.0
Date: 2025-10-17