# Stakeholder Recruitment Email Templates

## PluralisticDeliberationOrchestrator - Algorithmic Hiring Transparency

**Project:** Tractatus Pluralistic Deliberation Pilot

**Scenario:** Algorithmic Hiring Transparency

**Date:** 2025-10-17

**Compensation:** Volunteer participation (no payment)

---

## Usage Notes

- **Personalize** each email with the recipient's name, organization, and relevant work
- **Attach** the background materials packet (separate document)
- **Include** the informed consent form (separate document)
- **Send** from an official Tractatus email address
- **Follow up** within 1 week if no response

---

## Template 1: Job Applicant Representative

**Subject:** Invitation: Multi-Stakeholder Deliberation on AI Hiring Transparency

---

Dear [NAME],

I'm reaching out to invite you to participate in a **pilot deliberation on algorithmic hiring transparency**—a pressing issue for job seekers navigating AI-screened applications.

**Why you?** We're seeking someone with recent job-seeking experience who can represent the applicant perspective in a multi-stakeholder dialogue about whether and how companies should disclose their use of AI in hiring decisions.

### What This Is

This is a **pluralistic deliberation**—not a debate, not a focus group, but a structured conversation among 6 stakeholders with different perspectives (job applicants, employers, AI vendors, regulators, labor advocates, and researchers). The goal is to explore whether competing values (fairness, efficiency, privacy, innovation) can be accommodated, not to reach forced consensus.

### What's Unique

**This deliberation will be facilitated by an AI** (with a human observer present for safety). We're piloting a new approach to AI-assisted governance, and your feedback on the process will be as valuable as your input on the issue itself.

### Time Commitment

**Total: 4-6 hours over 4 weeks (hybrid format)**

- **Weeks 1-2 (Asynchronous):** Submit a written position statement (1 hour)
- **Week 3 (Synchronous):** Two 2-hour video sessions (scheduled to your availability)
- **Week 4 (Asynchronous):** Review and refine the outcome document (1 hour)

### Compensation

This is a **volunteer** pilot. Your participation contributes to research on pluralistic AI governance. If this creates financial hardship, please let me know—we may be able to provide modest reimbursement for your time.

### What You'll Do

- Share your perspective on algorithmic hiring transparency as someone who has applied for jobs
- Listen to and engage with perspectives from employers, vendors, regulators, and advocates
- Help identify shared values and explore accommodations for competing concerns
- Provide feedback on the AI facilitation process

### What You'll Get

- Insight into how employers and AI vendors think about hiring algorithms
- Exposure to cutting-edge AI governance methodology
- Recognition in any publications or demonstrations resulting from this work
- A full transparency report showing how the deliberation unfolded

### AI Facilitation Disclosure

**Important:** The facilitation will be led by an AI system (PluralisticDeliberationOrchestrator). A trained human observer will be present at all times and will intervene if the AI makes any mistakes, shows bias, or if you request human facilitation. You can withdraw at any time if you're uncomfortable.

### Next Steps

If you're interested:

1. **Review** the attached background materials (scenario overview, deliberation process)
2. **Complete** the attached informed consent form
3. **Reply** by [DATE - 2 weeks from send] with:
   - Confirmation of interest
   - Your availability for Week 3 video sessions (we'll find a time that works for all 6 participants)
   - Any questions or concerns

### Questions?

Please don't hesitate to reach out. I'm happy to discuss this further by email or to schedule a brief call.

Thank you for considering this. Your voice matters, and we'd be honored to have you at the table.

Best regards,

[YOUR NAME]
[TITLE]
Tractatus Project
[EMAIL]
[PHONE]

---

**Attachments:**

- Background Materials Packet (scenario-overview-algorithmic-hiring.pdf)
- Informed Consent Form (consent-form-ai-led-deliberation.pdf)

---

## Template 2: Employer / HR Representative

**Subject:** Invitation: Industry Input on AI Hiring Transparency Standards

---

Dear [NAME],

I'm reaching out to invite you to participate in a **pilot deliberation on algorithmic hiring transparency** that could inform emerging standards in this rapidly evolving regulatory landscape.

**Why you?** We're seeking an HR or Talent Acquisition leader with experience using (or evaluating) AI screening tools who can represent employer interests in balancing efficiency, compliance, and fairness.

### What This Is

This is a **multi-stakeholder deliberation** convening 6 participants: job applicants, employers, AI vendors, regulators, labor advocates, and researchers. We're exploring whether algorithmic hiring transparency can be designed to address applicant fairness concerns without undermining business efficiency or IP protection.

### Why This Matters for Employers

With NYC Local Law 144, the EU AI Act, and proposed federal legislation, transparency requirements are coming. This deliberation is an opportunity to:

- **Shape the conversation** before regulations are finalized
- **Explore practical solutions** that work for businesses, not just regulators
- **Understand applicant and advocate concerns** directly
- **Test whether tiered transparency** (some disclosure to applicants, more to regulators, protected trade secrets) is viable

### What's Unique

**This deliberation will be facilitated by an AI** (with a human observer present). We're piloting AI-assisted governance methods and want industry feedback on whether this approach can produce more nuanced, less adversarial policy than traditional comment periods.

### Time Commitment

**Total: 4-6 hours over 4 weeks (hybrid format)**

- **Weeks 1-2 (Asynchronous):** Submit a written position (1 hour)
- **Week 3 (Synchronous):** Two 2-hour video sessions
- **Week 4 (Asynchronous):** Review the outcome (1 hour)

### Compensation

This is a **volunteer** pilot. Participation is in the spirit of industry contribution to responsible AI standards. If your organization requires compensation for executive time, please let me know.

### What You'll Do

- Articulate employer concerns: efficiency, trade secrets, gaming, compliance burden
- Engage with applicant, regulator, and advocate perspectives
- Explore whether accommodations exist (e.g., transparency to regulators but not the public, bias audits without revealing weights)
- Provide feedback on AI facilitation quality

### What You'll Get

- Early insight into likely regulatory directions
- Relationships with key stakeholders (regulators, advocates, researchers)
- Opportunity to shape a pluralistic policy framework
- Recognition in research publications and potential media coverage

### AI Facilitation Disclosure

**Important:** An AI system will facilitate, with human oversight. You can request human facilitation at any time. See the attached materials for full details.

### Next Steps

If interested:

1. **Review** the attached background materials
2. **Complete** the informed consent form
3. **Reply** by [DATE] with:
   - Confirmation of interest
   - Week 3 availability
   - Any questions

I'm also happy to discuss this with your legal/compliance team if needed.

Best regards,

[YOUR NAME]
[TITLE]
Tractatus Project
[EMAIL]
[PHONE]

---

**Attachments:**

- Background Materials Packet
- Informed Consent Form

---

## Template 3: AI Vendor Representative

**Subject:** Invitation: Shape the Future of Algorithmic Hiring Transparency

---

Dear [NAME],

I'm reaching out to invite [COMPANY] to participate in a **pilot deliberation on algorithmic hiring transparency**—an issue at the intersection of innovation, regulation, and fairness.

**Why [COMPANY]?** As a leading provider of AI hiring tools, your technical expertise and industry perspective are essential to grounding this conversation in what's actually feasible vs. what sounds good in theory.

### What This Is

A **multi-stakeholder deliberation** among 6 participants: applicants, employers (your clients), vendors, regulators, labor advocates, and researchers. We're exploring whether transparency frameworks can be designed to address bias and fairness concerns without stifling innovation or forcing IP disclosure.

### Why This Matters for Vendors

Transparency mandates are spreading (NYC, EU, proposed federal). Vendors face a challenge:

- **How much transparency is too much?** (Where does accountability become IP theft?)
- **What transparency actually helps?** (Detailed explanations vs. performative compliance?)
- **Can technical solutions bridge the gap?** (Explainable AI, third-party audits, etc.)

This deliberation is a chance to:

- **Articulate feasibility constraints** (what's technically possible vs. science fiction)
- **Propose alternative solutions** (if full transparency isn't viable, what is?)
- **Build relationships** with regulators and advocates before adversarial litigation
- **Demonstrate responsible AI leadership**

### What's Unique

**This deliberation will be AI-facilitated** (with human oversight). Since we're discussing AI governance, it seems fitting to use AI as a tool in the process itself. Your feedback on whether AI can facilitate sensitive multi-stakeholder discussions will be valuable.

### Time Commitment

**Total: 4-6 hours over 4 weeks (hybrid format)**

- **Weeks 1-2 (Asynchronous):** Written position (1 hour)
- **Week 3 (Synchronous):** Two 2-hour video sessions
- **Week 4 (Asynchronous):** Review outcome (1 hour)

### Compensation

This is a **volunteer** pilot. Participation positions [COMPANY] as a thought leader in responsible AI. If your company policy requires compensation for executive participation, please let me know.

### What You'll Do

- Explain technical constraints (accuracy vs. explainability trade-offs, adversarial gaming risks)
- Engage with applicant concerns (bias, fairness, recourse)
- Explore whether tiered solutions exist (e.g., audit-based transparency vs. source code disclosure)
- Provide feedback on the AI facilitation process

### What You'll Get

- Regulatory intelligence (what concerns drive policy)
- Stakeholder relationships (build trust with advocates and regulators)
- Opportunity to shape standards (before mandates become rigid)
- Research recognition and potential media coverage

### AI Facilitation Disclosure

**Important:** AI-led facilitation with human safety oversight. See the attached materials for the full protocol.

### Next Steps

If interested:

1. **Review** the attached materials
2. **Complete** the consent form
3. **Reply** by [DATE] with:
   - Confirmation
   - Week 3 availability
   - Any concerns

Happy to discuss with your legal/PR teams if helpful.

Best regards,

[YOUR NAME]
[TITLE]
Tractatus Project
[EMAIL]
[PHONE]

---

**Attachments:**

- Background Materials Packet
- Informed Consent Form

---

## Template 4: Regulator Representative

**Subject:** Invitation: Pilot Multi-Stakeholder Deliberation on AI Hiring Transparency

---

Dear [COMMISSIONER / DIRECTOR NAME],

I'm reaching out to invite [AGENCY] to participate in a **pilot deliberation on algorithmic hiring transparency**—directly relevant to [AGENCY]'s mandate to enforce fair employment practices.

**Why [AGENCY]?** Regulatory perspectives are critical to ensuring any transparency framework is both enforceable and aligned with anti-discrimination law.

### What This Is

A **multi-stakeholder deliberation** convening applicants, employers, AI vendors, regulators, labor advocates, and researchers to explore whether algorithmic hiring transparency can be designed to:

- Address disparate impact concerns
- Provide enforceable standards (not just voluntary guidelines)
- Balance compliance burden with accountability
- Be consistent across jurisdictions (federal, state, EU)

This is **not a comment period or rulemaking**. It's an exploratory dialogue to identify whether pluralistic policy solutions exist before adversarial processes begin.

### Why This Matters for Regulators

You're already seeing this issue:

- Complaints about opaque AI rejections
- Questions about bias audit requirements (NYC LL144)
- Industry requests for safe harbors
- Jurisdictional fragmentation (state vs. federal vs. EU)

This deliberation offers:

- **Stakeholder intelligence:** What do employers, vendors, and advocates actually care about (vs. what formal comments say)?
- **Feasibility testing:** Can proposed frameworks actually work?
- **Relationship-building:** Collaborative rather than adversarial engagement
- **Precedent:** A model for future AI governance challenges

### What's Unique

**This deliberation will be AI-facilitated** (with human oversight). We're piloting whether AI can assist in governance processes themselves—relevant given [AGENCY]'s role in overseeing AI employment tools.

### Time Commitment

**Total: 4-6 hours over 4 weeks (hybrid format)**

- **Weeks 1-2 (Asynchronous):** Written position (1 hour)
- **Week 3 (Synchronous):** Two 2-hour video sessions
- **Week 4 (Asynchronous):** Review outcome (1 hour)

### Compensation

This is a **volunteer** pilot (participation could alternatively be counted as official duties if appropriate). No personal compensation.

### What You'll Do

- Articulate enforcement priorities (disparate impact, procedural fairness, recourse)
- Engage with employer and vendor concerns (burden, feasibility, trade secrets)
- Assess whether proposed solutions are enforceable
- Provide feedback on the AI facilitation process

### What You'll Get

- An early look at stakeholder positions
- Multi-stakeholder relationships
- Input on emerging standards
- Research recognition

### AI Facilitation Disclosure

**Important:** AI-led with human oversight. See the attached protocol.

### Ethics / Conflict Review

Please confirm with your ethics office if needed. We can structure participation as official duties, personal capacity, or an observer role, depending on your preference.

### Next Steps

If interested:

1. **Review** the attached materials
2. **Complete** the consent form (or coordinate with your ethics office)
3. **Reply** by [DATE] with:
   - Confirmation
   - Week 3 availability
   - Any clearance needs

Happy to discuss further by call.

Best regards,

[YOUR NAME]
[TITLE]
Tractatus Project
[EMAIL]
[PHONE]

---

**Attachments:**

- Background Materials Packet
- Informed Consent Form

---

## Template 5: Labor Advocate

**Subject:** Invitation: Worker Voice in AI Hiring Transparency Deliberation

---

Dear [NAME],

I'm reaching out to invite [ORGANIZATION] to participate in a **pilot deliberation on algorithmic hiring transparency**—a critical issue for worker power and dignity.

**Why [ORGANIZATION]?** Your advocacy for worker rights and algorithmic accountability positions you as an essential voice in shaping fair standards for AI hiring.

### What This Is

A **multi-stakeholder deliberation** among applicants, employers, AI vendors, regulators, labor advocates (you), and researchers. We're exploring whether algorithmic hiring transparency can protect workers without being undermined by employer resistance or technical constraints.

### Why This Matters for Workers

AI hiring tools are proliferating. Without transparency:

- **Workers can't challenge bias:** How do you prove discrimination if you don't know the algorithm?
- **Workers can't improve:** There's no feedback on why they were rejected
- **Workers have no recourse:** Black-box systems are unaccountable

This deliberation is a chance to:

- **Assert worker rights** (explanation, recourse, human review)
- **Engage employers and vendors directly** (not through litigation)
- **Shape standards before they're set** (proactive, not reactive)
- **Test whether accommodation is possible** (or whether full confrontation is necessary)

### What's Unique

**This deliberation will be AI-facilitated** (with human oversight). We recognize the irony: using AI to discuss AI governance. But this lets us test whether AI can facilitate fairly across power imbalances—critical for worker contexts.

### Time Commitment

**Total: 4-6 hours over 4 weeks (hybrid format)**

- **Weeks 1-2 (Asynchronous):** Written position (1 hour)
- **Week 3 (Synchronous):** Two 2-hour video sessions
- **Week 4 (Asynchronous):** Review outcome (1 hour)

### Compensation

This is a **volunteer** pilot. If [ORGANIZATION] requires compensation for staff time, please let me know—we may be able to accommodate.

### What You'll Do

- Articulate worker concerns: fairness, dignity, recourse, power imbalance
- Engage with employer concerns (efficiency, gaming) to test whether they're legitimate or excuses
- Advocate for strong standards (not a watered-down compromise)
- Provide feedback on whether AI facilitation is fair across power asymmetries

### What You'll Get

- Direct engagement with employers and vendors (relationship-building or intelligence-gathering)
- Opportunity to shape standards (a pluralistic framework could be a model for advocacy)
- Research recognition and media coverage
- A full transparency report showing the deliberation process

### AI Facilitation Disclosure

**Important:** AI-led with human oversight. **We take seriously the concern that AI might favor powerful stakeholders (employers and vendors).** The human observer is explicitly trained to watch for and correct any fairness imbalances. You can request human facilitation at any time, and you'll receive a transparency report showing every AI vs. human action.

### Next Steps

If interested:

1. **Review** the attached materials (including the AI safety protocol)
2. **Complete** the consent form
3. **Reply** by [DATE] with:
   - Confirmation
   - Week 3 availability
   - Any concerns (especially about AI facilitation)

I'm happy to discuss concerns about AI bias or power dynamics by call.

Best regards,

[YOUR NAME]
[TITLE]
Tractatus Project
[EMAIL]
[PHONE]

---

**Attachments:**

- Background Materials Packet
- Informed Consent Form
- AI Safety & Human Intervention Protocol

---

## Template 6: AI Ethics Researcher

**Subject:** Invitation: Participate in Pluralistic Deliberation Pilot (AI-Facilitated)

---

Dear [PROFESSOR / DR. NAME],

I'm reaching out to invite you to participate in a **pilot deliberation on algorithmic hiring transparency** that serves dual purposes: addressing a real policy question AND testing a novel AI governance methodology.

**Why you?** Your research on [SPECIFIC WORK: fairness in ML / algorithmic accountability / AI ethics / etc.] makes you an ideal participant and a potential collaborator on the research itself.

### What This Is

A **multi-stakeholder deliberation** among applicants, employers, AI vendors, regulators, labor advocates, and researchers (you). We're exploring:

1. **Substantive question:** Can algorithmic hiring transparency frameworks accommodate competing values (fairness, efficiency, privacy, innovation)?
2. **Methodological question:** Can AI facilitate pluralistic deliberation across stakeholders with asymmetric power?

### Why This Matters (Substantively)

Algorithmic hiring transparency is a **live policy question**:

- NYC Local Law 144 (2023), the EU AI Act (2024), and proposed federal legislation
- Tension between employer interests (IP, efficiency) and applicant rights (explanation, recourse)
- No clear "right answer"—a genuine values conflict

This deliberation could produce a **pluralistic framework** (e.g., tiered transparency: minimal to applicants, more to regulators, audited by third parties) that informs actual policy.

### Why This Matters (Methodologically)

We're piloting **PluralisticDeliberationOrchestrator**: an AI system designed to:

- Detect moral frameworks in tension (consequentialist, deontological, virtue, care, communitarian)
- Facilitate non-hierarchical deliberation (no framework dominates by default)
- Accommodate competing values (not force consensus)
- Document dissent as legitimate (moral remainder)

**Your role:** Provide both substantive input (AI ethics expertise) AND methodological feedback (does this process actually work?).

### What's Unique

**AI-facilitated deliberation with human oversight.** Since we're discussing AI governance, it's fitting to use AI as a tool. This lets us:

- Test the feasibility of AI facilitation
- Observe AI behavior in a sensitive multi-stakeholder context
- Collect data on intervention rates, fairness, and stakeholder satisfaction

### Time Commitment

**Total: 4-6 hours over 4 weeks (hybrid format)**

- **Weeks 1-2 (Asynchronous):** Written position (1 hour)
- **Week 3 (Synchronous):** Two 2-hour video sessions
- **Week 4 (Asynchronous):** Review outcome + methodological debrief (1 hour)

### Compensation

This is a **volunteer** pilot. If you're interested in co-authoring research on the deliberation methodology, we can discuss that separately.

### What You'll Do

- Provide AI ethics expertise on hiring transparency
- Engage with stakeholder perspectives (employers, applicants, advocates)
- Observe AI facilitation quality (and request human intervention if needed)
- Provide methodological feedback post-deliberation

### What You'll Get

- Novel research data (AI-facilitated deliberation with real stakeholders)
- Potential co-authorship on a methodology paper (if interested)
- Access to the full deliberation transcript and transparency report
- A precedent database (if we scale this to multiple scenarios)

### AI Facilitation Disclosure

**Important:** AI-led with human oversight. Given your expertise, your feedback on AI behavior is especially valuable. See the attached protocol for intervention triggers and safety mechanisms.

### Research Ethics

This pilot has been designed with research ethics in mind:

- Informed consent required
- Stakeholder welfare prioritized over research goals
- Right to withdraw at any time
- Full transparency about AI vs. human actions

If your institution requires IRB approval for your participation, please let me know.

### Next Steps

If interested:

1. **Review** the attached materials (including methodology notes)
2. **Complete** the consent form
3. **Reply** by [DATE] with:
   - Confirmation
   - Week 3 availability
   - Interest in co-authorship (optional)

I'm also happy to discuss the methodology in depth by call or video.

Best regards,

[YOUR NAME]
[TITLE]
Tractatus Project
[EMAIL]
[PHONE]

---

**Attachments:**

- Background Materials Packet
- Informed Consent Form
- AI Safety & Human Intervention Protocol
- Methodology Notes (PluralisticDeliberationOrchestrator)

---

## Follow-Up Sequence

**If no response within 1 week:**

**Subject:** Re: Invitation to AI Hiring Transparency Deliberation

> Hi [NAME],
>
> Following up on my invitation below. I know you're busy, so I just wanted to check:
>
> 1. Did this reach you? (Sometimes emails get filtered.)
> 2. Are you interested but need more info?
> 3. Or should I follow up with someone else at [ORGANIZATION]?
>
> No pressure either way—I just want to make sure my email isn't sitting in a spam folder.
>
> Best,
> [YOUR NAME]

**If no response within 2 weeks:**

Consider this stakeholder unavailable and move to the alternate candidate on your list.

---

**Document Status:** Ready for Use

**Next Step:** Customize with actual names, dates, and attachments