# Informed Consent Form

## PluralisticDeliberationOrchestrator Pilot - AI-Led Deliberation

**Project:** Tractatus Pluralistic Deliberation Pilot

**Scenario:** Algorithmic Hiring Transparency

**Principal Investigator:** [NAME, TITLE, EMAIL]

**Date:** 2025-10-17

---
## Purpose of This Document

This form explains what you're agreeing to if you participate in this deliberation. **Please read carefully.** You can ask questions before signing, and you can withdraw at any time.

---
## 1. What Is This Study/Project?

You are invited to participate in a **pilot deliberation on algorithmic hiring transparency**. The purpose is twofold:

1. **Substantive Goal:** Explore whether competing values (fairness, efficiency, privacy, accountability, innovation) can be accommodated in algorithmic hiring transparency policies
2. **Methodological Goal:** Test whether AI can facilitate multi-stakeholder deliberation fairly and effectively

This is a **research pilot**, not formal policymaking. Any outcomes will be shared publicly but have no binding authority.

---
## 2. What Will I Be Asked to Do?

If you consent to participate, you will:

### Weeks 1-2 (Asynchronous):

- Submit a **written position statement** (500-1000 words) explaining your perspective on algorithmic hiring transparency
- **Time:** ~1 hour

### Week 3 (Synchronous):

- Attend **two 2-hour video conference sessions** (dates/times scheduled to accommodate all 6 participants)
- Engage in facilitated discussion with 5 other stakeholders representing different perspectives
- **Time:** 4 hours total, plus travel time if applicable

### Week 4 (Asynchronous):

- Review and provide feedback on the **outcome document** summarizing the deliberation
- Complete a **post-deliberation survey** on process quality and AI facilitation
- **Time:** ~1 hour

### Total Time Commitment: 4-6 hours over 4 weeks

---
## 3. AI-Led Facilitation (IMPORTANT)

### What Does "AI-Led" Mean?

The deliberation will be **facilitated by an artificial intelligence system** called PluralisticDeliberationOrchestrator. This means:

- **The AI will:**
  - Pose discussion questions
  - Summarize stakeholder positions
  - Identify moral frameworks and values in tension
  - Suggest accommodation options
  - Draft outcome documents

- **A human observer will:**
  - Be present at all times (in-person or via video)
  - Monitor AI facilitation quality
  - **Intervene immediately** if:
    - You request human facilitation
    - The AI makes an error or shows bias
    - Any participant shows signs of distress
    - Safety or ethical concerns arise

### Your Rights Regarding AI Facilitation:

- ✅ **You can request human facilitation at any time** for any reason (no justification needed)
- ✅ **You can pause the deliberation** if you need a break or feel uncomfortable
- ✅ **You can withdraw** if AI facilitation is not working for you
- ✅ **You will receive a transparency report** showing all AI vs. human actions

### Why AI Facilitation?

We're testing whether AI can assist in governance processes fairly. Your feedback on this question is as valuable as your input on algorithmic hiring transparency itself.

**If you're uncomfortable with AI facilitation**, please let us know. We can arrange human-only facilitation, or you can decline participation without penalty.

---
## 4. What Data Will Be Collected?

### Data Collected:

1. **Your written position statement** (Weeks 1-2)
2. **Video/audio recordings** of synchronous sessions (Week 3) - if you consent
3. **Transcripts** of all discussions
4. **Your feedback survey** responses
5. **Facilitation logs** (AI vs. human actions, interventions, safety escalations)

### How Data Will Be Used:

- **Research:** Analysis of the deliberation process, AI facilitation quality, and stakeholder satisfaction
- **Publication:** Anonymized excerpts may be quoted in research papers, blog posts, or demonstrations
- **Demonstration:** Video highlights may be used in public presentations (only with your explicit consent - see Section 6)

### How Data Will NOT Be Used:

- ❌ Your personal information (name, employer, email) will NOT be publicly shared without your consent
- ❌ Data will NOT be sold or shared with third parties for commercial purposes
- ❌ Your participation will NOT be used to endorse any specific policy position

---
## 5. Confidentiality & Anonymization

### What Is Confidential:

- **Your identity:** Research publications will use pseudonyms (e.g., "Employer Representative A," "Labor Advocate B")
- **Private communications:** Any private messages to facilitators are confidential
- **Sensitive information:** If you share confidential business information or personal details, we will redact it from public materials

### What Is NOT Confidential:

- **Your stated position:** What you say during group deliberation will be heard by the other 5 participants
- **Public attribution (if you opt in):** See Section 6

### Deliberation Ground Rules:

All participants will be asked to respect confidentiality:

- Don't share others' identities or statements outside the deliberation without permission
- Focus on ideas, not individuals
- Respect disagreement as legitimate

**Note:** We cannot guarantee that other participants will maintain confidentiality, but it will be an explicit ground rule.

---
## 6. Public Attribution (Optional)

After the deliberation, we may ask if you'd like to be **publicly identified** in research outputs. This is **entirely optional** and you can decide after seeing the final materials.

**If you opt in to public attribution:**

- Your name, title, and organization may be listed as a participant
- You may be quoted by name in publications or presentations
- Video of your participation may be used in demonstrations

**If you decline public attribution:**

- You will be referred to by pseudonym only
- No identifying information will be shared
- You can still be quoted, but anonymously

**You are NOT required to decide now.** We will ask again after the deliberation when you can review the final materials.

---
## 7. Risks & Discomforts

### Potential Risks:

1. **Time commitment:** 4-6 hours over 4 weeks (may conflict with work/personal obligations)
2. **Emotional discomfort:** Engaging with perspectives you strongly disagree with may be frustrating or stressful
3. **AI facilitation concerns:** You may feel uncomfortable being facilitated by AI, or feel the AI is biased
4. **Confidentiality breach:** Other participants might share your identity or statements despite ground rules (we mitigate this by screening participants and setting clear expectations)
5. **Professional risk:** If publicly identified, your stated position might be used against you professionally (we mitigate this by giving you full control over public attribution)

### Mitigation Measures:

- **Human observer** present at all times to intervene if needed
- **Right to withdraw** at any time without penalty
- **Optional attribution:** You control whether you're publicly identified
- **Participant screening:** We select participants committed to good-faith deliberation

---
## 8. Benefits

### Direct Benefits to You:

- **Exposure to diverse perspectives:** Understand how employers, advocates, regulators, and others think
- **Skill development:** Experience in pluralistic deliberation and conflict resolution
- **Network:** Relationships with stakeholders in your field
- **Recognition:** Optional public attribution in research/media
- **Contribution:** Help advance responsible AI governance

### Broader Benefits:

- **Policy influence:** Findings may inform real regulations (NYC, EU, federal)
- **Methodological advancement:** Demonstrate feasibility of AI-assisted governance
- **Precedent-setting:** Model for future values conflicts (credit scoring, content moderation, etc.)

---
## 9. Compensation

**This is a volunteer pilot.** You will NOT receive financial compensation for your participation.

**If compensation is a barrier to your participation**, please contact us. We may be able to provide:

- A modest honorarium (up to $500, subject to funding availability)
- Reimbursement for childcare or other expenses enabling participation
- An acknowledgment letter for your employer (documenting your contribution to public interest research)

---
## 10. Voluntary Participation & Withdrawal

### Your Participation Is Completely Voluntary

- You may **decline to participate** without any penalty
- You may **skip questions** you don't want to answer
- You may **take breaks** during sessions
- You may **withdraw at any time** (before, during, or after the deliberation)

### If You Withdraw:

- **Before deliberation starts:** No data collected
- **During deliberation:** We will ask if we can use data collected up to that point (you can say no)
- **After deliberation:** You can request that your data be excluded from research (within 30 days of completion)

### What Happens If You Withdraw:

- **No penalty:** Your decision is respected, no questions asked
- **Data handling:** We will delete your data if you request it (unless already anonymized and published)
- **Replacement:** We may invite another participant to fill your role

---
## 11. Questions & Concerns

### Before Signing:

If you have questions about:

- **The deliberation process:** Contact [PROJECT LEAD NAME, EMAIL]
- **AI facilitation or safety:** Contact [AI SAFETY LEAD NAME, EMAIL]
- **Your rights as a participant:** Contact [IRB or ETHICS CONTACT, if applicable]

### During the Deliberation:

- **Immediate concerns:** Tell the human observer or request a break
- **Ongoing concerns:** Contact [PROJECT LEAD] between sessions

### After the Deliberation:

- **Feedback:** Complete the post-deliberation survey
- **Complaints:** Contact [PROJECT LEAD] or [INSTITUTIONAL CONTACT if applicable]

---
## 12. Future Contact

### May we contact you in the future?

☐ **Yes**, you may contact me about:

- Follow-up questions about this deliberation
- Future deliberations on related topics
- Research findings and publications

☐ **No**, please do not contact me after this deliberation ends

**Preferred contact method:** ☐ Email ☐ Phone ☐ Other: __________

---
## 13. Consent Statement

**I have read and understood this consent form. I have had the opportunity to ask questions and my questions have been answered. I understand:**

☐ **The purpose** of this deliberation (substantive and methodological goals)

☐ **AI-led facilitation** and my right to request human facilitation or withdraw

☐ **Time commitment** (4-6 hours over 4 weeks)

☐ **Data collection** (recordings, transcripts, surveys, facilitation logs)

☐ **Confidentiality** (pseudonymous by default, public attribution optional)

☐ **Risks** (time, emotional discomfort, AI concerns, confidentiality breach risk)

☐ **Benefits** (exposure, skills, network, contribution to research)

☐ **Voluntary participation** and my right to withdraw at any time

☐ **No compensation** (volunteer pilot, with possible modest honorarium if needed)

**By signing below, I consent to participate in this deliberation under the conditions described above.**

---
**Participant Name (printed):** _____________________________________________

**Signature:** ___________________________________ **Date:** _______________

**Email:** ___________________________________________________________________

**Phone (optional):** _________________________________________________________

**Organization/Affiliation:** __________________________________________________

**Role in this deliberation:** ☐ Job Applicant Rep ☐ Employer/HR Rep ☐ AI Vendor Rep
☐ Regulator Rep ☐ Labor Advocate ☐ AI Ethics Researcher

---
## Video/Audio Recording Consent (Optional)

**Do you consent to being video/audio recorded during synchronous sessions?**

☐ **Yes**, I consent to video/audio recording for:

- ☐ Research purposes only (transcription, analysis)
- ☐ Public demonstration (video clips may be shown in presentations) - I understand I can revoke this later

☐ **No**, I do not consent to recording. (We will take notes instead; you can still participate)

---

**Signature (if consenting to recording):** ______________________ **Date:** ________

---
## Researcher/Facilitator Statement

I certify that:

- This consent form was explained to the participant
- The participant had the opportunity to ask questions
- All questions were answered to the participant's satisfaction
- The participant signed this form voluntarily

**Researcher/Facilitator Name:** _____________________________________________

**Signature:** ___________________________________ **Date:** _______________

---
## Participant Copy

**Please keep a copy of this form for your records.**

If you have questions later, contact:

**Project Lead:** [NAME]
**Email:** [EMAIL]
**Phone:** [PHONE]

**Institutional Contact (if applicable):** [NAME, TITLE, EMAIL]

---

**Document Version:** 1.0
**Date:** 2025-10-17
**IRB Approval (if applicable):** [PROTOCOL NUMBER, DATE]

---
## Appendix: Key Terms Defined

**PluralisticDeliberationOrchestrator:** AI system designed to facilitate multi-stakeholder deliberation by identifying moral frameworks in tension and exploring accommodations (not forcing consensus).

**Moral Frameworks:** Different ethical perspectives (e.g., consequentialism focuses on outcomes, deontology focuses on rights/duties, virtue ethics focuses on character, care ethics focuses on relationships).

**Pluralistic Accommodation:** A resolution that honors multiple values simultaneously, even when they conflict. Dissent is documented as legitimate, not suppressed.

**Transparency Report:** Document showing all AI vs. human facilitation actions, interventions, safety escalations, and stakeholder feedback. Demonstrates accountability.

**Human Intervention:** When the human observer steps in to take over from AI facilitation due to safety concerns, quality issues, or stakeholder request.

**Pattern Bias:** When the AI (or the process) inadvertently centers vulnerable populations as "the problem" or uses stigmatizing framing. A mandatory intervention trigger.