
# Media Pattern Research Guide

## Systematic Methods for Assessing Scenario Timeliness & Public Salience

**Document Type:** Research Methodology & Tools

**Date:** 2025-10-17

**Part of:** PluralisticDeliberationOrchestrator Implementation Series

**Related Documents:** evaluation-rubric-scenario-selection.md, scenario-deep-dive-algorithmic-hiring.md

**Status:** Planning Phase

---

## Executive Summary

This guide provides **systematic methods for researching media patterns, public discourse, and regulatory activity** around potential PluralisticDeliberationOrchestrator demonstration scenarios. The goal is to assess **timeliness and public salience** (Criterion 4 from evaluation-rubric.md) using replicable, evidence-based methods.

**Why Media Pattern Research Matters:**

1. **Timeliness validation:** Ensures scenario is relevant now, not historical or hypothetical
2. **Policy window identification:** Determines if demonstration can inform real decisions
3. **Polarization assessment:** Identifies whether authentic deliberation is feasible or if positions are entrenched
4. **Strategic positioning:** Helps frame demonstration to align with current discourse
5. **Risk identification:** Reveals potential controversies or sensitivities

**Key Research Questions:**

- **Search Interest:** Are people searching for this topic? Increasing or declining?
- **News Coverage:** Is mainstream media covering this? What's the tone and framing?
- **Regulatory Activity:** Is there pending/active legislation, regulation, or litigation?
- **Academic Discourse:** Is this topic generating scholarly research?
- **Polarization:** Have positions hardened into tribal camps, or is deliberation possible?
- **Policy Window:** Is there an opportunity for demonstration to influence real decisions?

**Research Workflow:**

1. **Define search terms** (keyword selection)
2. **Collect quantitative data** (Google Trends, news counts, regulatory tracking)
3. **Collect qualitative data** (content analysis, framing analysis, stakeholder position mapping)
4. **Synthesize findings** (interpret patterns, identify opportunities/risks)
5. **Document results** (structured summary for scenario scoring)

**Estimated Time:** 4-8 hours per scenario (depending on complexity and data availability)

---

## Table of Contents

1. [Research Objectives](#1-research-objectives)
2. [Data Sources & Tools](#2-data-sources--tools)
3. [Phase 1: Search Interest Analysis (Google Trends)](#3-phase-1-search-interest-analysis-google-trends)
4. [Phase 2: News Coverage Analysis](#4-phase-2-news-coverage-analysis)
5. [Phase 3: Regulatory & Legislative Tracking](#5-phase-3-regulatory--legislative-tracking)
6. [Phase 4: Academic Discourse Mapping](#6-phase-4-academic-discourse-mapping)
7. [Phase 5: Social Media & Public Discourse](#7-phase-5-social-media--public-discourse)
8. [Phase 6: Polarization Assessment](#8-phase-6-polarization-assessment)
9. [Phase 7: Policy Window Analysis](#9-phase-7-policy-window-analysis)
10. [Synthesis & Documentation](#10-synthesis--documentation)
11. [Case Study: Algorithmic Hiring Transparency](#11-case-study-algorithmic-hiring-transparency)
12. [Appendix: Research Templates](#appendix-research-templates)

---

## 1. Research Objectives

### 1.1 Primary Objectives

**Objective 1: Assess Current Salience**

- **Question:** Is this topic of public interest RIGHT NOW (not 5 years ago, not hypothetically in the future)?
- **Methods:** Google Trends, news counts, social media volume
- **Threshold:** For Tier 1 scenario, expect Google Trends score ≥50/100 in past 12 months

**Objective 2: Identify Trajectory**

- **Question:** Is interest increasing (emerging issue), stable (sustained relevance), or decreasing (fading issue)?
- **Methods:** Time-series analysis of search trends, news coverage over 3-5 years
- **Preference:** Emerging or stable (not fading)

**Objective 3: Map Discourse Landscape**

- **Question:** How is the issue being framed? Who are the key voices? What positions exist?
- **Methods:** Content analysis of news articles, position papers, opinion pieces
- **Output:** Stakeholder position map, framing taxonomy

**Objective 4: Assess Polarization**

- **Question:** Are positions entrenched in tribal camps, or is there space for deliberation?
- **Methods:** Cross-cutting coalition analysis, compromise proposal search, tone analysis
- **Preference:** Low-moderate polarization (deliberation feasible)

**Objective 5: Identify Policy Windows**

- **Question:** Is there active decision-making where demonstration could have impact?
- **Methods:** Legislative tracking, regulatory comment periods, corporate policy announcements
- **Indicators:** Pending bills, open comment periods, announced policy reviews

---

### 1.2 Secondary Objectives

**Objective 6: Identify Demonstration Partners**

- **Question:** Which organizations, researchers, or advocates are active in this space and might collaborate?
- **Methods:** Byline analysis, author affiliations, advocacy group activity
- **Output:** List of potential stakeholder representatives

**Objective 7: Anticipate Criticism**

- **Question:** What are likely criticisms of our scenario selection or deliberation approach?
- **Methods:** Analyze minority positions, dissenting voices, critique framing
- **Output:** Risk mitigation strategies

**Objective 8: Find Precedent Cases**

- **Question:** Have similar deliberations occurred? What can we learn?
- **Methods:** Search for "multi-stakeholder dialogue," "deliberative process," "consensus-building" in this domain
- **Output:** Best practices, pitfalls to avoid

---

## 2. Data Sources & Tools

### 2.1 Search Interest & Trend Data

**Google Trends** (https://trends.google.com)

- **What it provides:** Relative search volume over time (0-100 scale), geographic distribution, related queries
- **Strengths:** Free, easy to use, global coverage, real-time data
- **Limitations:** Relative scale (not absolute numbers), limited historical data (2004+), US-centric by default
- **Best for:** Identifying search interest trends, comparing scenarios, geographic targeting

**Alternative: Google Ngram Viewer** (https://books.google.com/ngrams)

- **What it provides:** Word/phrase frequency in books over time (1800-2019)
- **Best for:** Historical context, long-term trend analysis, academic discourse (books vs. web searches)

---

### 2.2 News & Media Coverage

**News Aggregators (Free):**

- **Google News** (https://news.google.com): Broad coverage, easy searching, limited filtering
- **AllSides** (https://www.allsides.com): News from left, center, right perspectives (polarization assessment)

**News Databases (Subscription/Institutional Access):**

- **LexisNexis** (https://www.lexisnexis.com): Comprehensive news archive, legal documents, transcripts
- **Factiva** (Dow Jones): Global news, company information, detailed filtering
- **ProQuest** (https://www.proquest.com): Academic + news coverage, historical archives

**Media Monitoring Tools (Paid):**

- **Meltwater** (https://www.meltwater.com): Real-time media monitoring, sentiment analysis, influencer tracking
- **Cision** (https://www.cision.com): PR/media monitoring, journalist database

**Best Practice:** Start with free tools (Google News, AllSides), escalate to databases if budget/access available.

---

### 2.3 Regulatory & Legislative Tracking

**U.S. Federal:**

- **Congress.gov** (https://www.congress.gov): Bill tracking, committee hearings, legislative text
- **Federal Register** (https://www.federalregister.gov): Proposed/final regulations, agency notices, comment periods
- **Regulations.gov** (https://www.regulations.gov): Public comments on proposed rules

**U.S. State:**

- **LegiScan** (https://legiscan.com): State bill tracking (free tier available)
- **State legislature websites** (varies by state): Direct access to bills, hearings

**International:**

- **EUR-Lex** (https://eur-lex.europa.eu): EU legislation, regulations, court decisions
- **OECD Legal Instruments** (https://legalinstruments.oecd.org): International agreements, recommendations

**Court Cases:**

- **PACER** (https://pacer.uscourts.gov): U.S. federal court filings (paid, but low cost)
- **Justia** (https://www.justia.com): Free access to U.S. case law, dockets
- **CourtListener** (https://www.courtlistener.com): Free legal opinion search

---

### 2.4 Academic & Research Literature

**Databases:**

- **Google Scholar** (https://scholar.google.com): Broad academic coverage, free, citation tracking
- **SSRN** (https://www.ssrn.com): Working papers in social sciences, pre-publication research
- **arXiv** (https://arxiv.org): Preprints in CS, physics, math (AI ethics papers often here)
- **JSTOR** (https://www.jstor.org): Academic journals (subscription, but some open access)
- **PubMed** (https://pubmed.ncbi.nlm.nih.gov): Biomedical literature (for healthcare-related scenarios)

**Conference Proceedings:**

- **ACM Digital Library** (https://dl.acm.org): Computer science conferences (FAccT, CHI, etc.)
- **IEEE Xplore** (https://ieeexplore.ieee.org): Engineering, AI, technology ethics

**Citation Analysis:**

- **Semantic Scholar** (https://www.semanticscholar.org): AI-powered citation analysis, influential papers
- **Connected Papers** (https://www.connectedpapers.com): Visual graph of related research

---

### 2.5 Social Media & Public Discourse

**Twitter/X:**

- **Advanced Search** (https://twitter.com/search-advanced): Search tweets, hashtags, date ranges, users
- **TweetDeck** (https://tweetdeck.twitter.com): Monitor hashtags, track conversations (free with X account)
- **Brandwatch / Talkwalker** (paid): Social listening, sentiment analysis, influencer identification

**Reddit:**

- **Reddit Search** (https://www.reddit.com/search): Search posts, comments, subreddits
- **Pushshift** (https://redditsearch.io): Advanced Reddit search (if API access available)
- **Subreddit Stats** (https://subredditstats.com): Growth, activity, popular posts

**Best Practice:** Use social media for pulse-checking, identifying grassroots discourse, and spotting emerging concerns. Do NOT use as primary evidence (unrepresentative, bot activity, volatility).

---

### 2.6 Stakeholder & Advocacy Group Tracking

**Advocacy Org Websites:**

- ACLU, EFF, EPIC (civil liberties/privacy)
- NAACP, National Urban League (civil rights)
- SHRM, Chamber of Commerce (business/HR)
- Tech sector: Future of Life Institute, AI Now Institute, Partnership on AI

**Think Tanks:**

- Brookings, Cato Institute, Center for American Progress (policy)
- Data & Society, AI Ethics Lab (tech ethics)

**Industry Groups:**

- Trade associations (varies by sector)
- Standard-setting bodies (NIST, ISO, IEEE)

**Best Practice:** Identify 3-5 key organizations per stakeholder group, monitor their publications, press releases, and position papers.

---

## 3. Phase 1: Search Interest Analysis (Google Trends)

### 3.1 Keyword Selection Strategy

**Principle:** Use multiple keyword variations to capture full scope of discourse.

**Keyword Types:**

1. **Core Terms:** Direct description of scenario
   - Example (Algorithmic Hiring): "algorithmic hiring," "AI recruitment," "automated screening"
2. **Problem-Focused Terms:** Highlight the controversy or concern
   - Example: "hiring bias," "AI discrimination," "algorithmic bias employment"
3. **Solution-Focused Terms:** What people searching for solutions might use
   - Example: "algorithmic transparency," "AI audit," "explainable hiring AI"
4. **Regulatory Terms:** Legal/policy keywords
   - Example: "NYC bias audit law," "EU AI Act hiring," "automated employment decision"
5. **Stakeholder Terms:** What specific groups call it
   - Example: "resume screening software" (employer perspective), "job application algorithm" (applicant perspective)

**Best Practice:** Start with 5-10 keywords, then use Google Trends' "Related queries" to discover additional terms.

---

### 3.2 Google Trends Research Protocol

**Step 1: Set Parameters**

- **Time Range:**
  - **Immediate salience:** Past 12 months
  - **Trend trajectory:** Past 5 years
  - **Historical context:** 2004-present
- **Geographic:**
  - Start with worldwide, then drill down to U.S., EU, other relevant regions
- **Category:**
  - Use "All categories" initially, then refine (e.g., "News," "Law & Government")
- **Search Type:**
  - Use "Web Search" (not YouTube, Google Shopping, etc.)

**Step 2: Compare Keywords**

- Enter up to 5 keywords at once for comparison
- Identify which terms have the highest search volume
- Note: Scores are relative (100 = peak popularity in that time range, not absolute volume)

**Step 3: Analyze Trend Lines**

- **Increasing:** Interest growing over time (good for emerging issues)
- **Stable:** Sustained interest (good for established issues)
- **Decreasing:** Fading interest (may indicate issue is settled or losing relevance)
- **Spiky:** Event-driven interest (peaks during news events, valleys in between)

**Step 4: Identify Peak Events**

- Click on spikes to see what happened (news, events, announcements)
- Example: Spike in "algorithmic hiring" searches in July 2023 = NYC LL144 effective date

**Step 5: Review Related Queries**

- **"Rising" queries:** Searches with the biggest recent increase in search frequency
- **"Top" queries:** Most popular searches related to the term
- Use these to discover the language people use, adjacent topics, and stakeholder concerns

**Step 6: Geographic Analysis**

- "Interest by region" shows where searches are concentrated
- Useful for identifying jurisdictions with high relevance (target for stakeholder recruitment, policy influence)

**Step 7: Document Findings**

- Screenshot trend graphs
- Export data (CSV) for time-series analysis
- Record peak events and related queries

---

### 3.3 Interpretation Guidelines

**Score Interpretation (0-100 scale):**

| Score Range | Interpretation | Scenario Suitability |
|-------------|----------------|----------------------|
| **0-10** | Minimal interest; niche topic | ⚠️ Low salience; may not attract attention |
| **10-25** | Low interest; emerging or specialized | ⏳ May be too early or too niche |
| **25-50** | Moderate interest; established but not trending | ✓ Suitable for specialized audiences |
| **50-75** | High interest; sustained coverage | ✓✓ Good for general demonstrations |
| **75-100** | Very high interest; peak or viral | ✓✓✓ Excellent for high-profile demos (but check polarization) |

**Trajectory Interpretation:**

| Pattern | Example | Scenario Fit |
|---------|---------|--------------|
| **Steady Climb** | Interest increasing 20-50% year-over-year | ✓✓✓ Emerging issue; timely |
| **Plateau** | Interest stable (±10% fluctuation) | ✓✓ Sustained relevance; safe choice |
| **Spike-and-Sustain** | Spike due to event, then settles at higher baseline | ✓✓ Event catalyzed lasting interest |
| **Spike-and-Crash** | Spike, then return to baseline | ⏳ Event-driven; interest may be temporary |
| **Decline** | Interest decreasing >20% year-over-year | ⚠️ Fading relevance |
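
The CSV exported in Step 7 can be screened against these trajectory patterns programmatically. A minimal sketch (plain Python; assumes you have already read the monthly interest scores, oldest first, into a list — the function name and the spike heuristic are illustrative, with the ±20% cut-offs taken from the table above):

```python
from statistics import mean

def classify_trajectory(scores, spike_factor=2.0):
    """Classify a monthly interest series against the trajectory patterns above.

    scores: monthly values (0-100), oldest first, covering at least 2 years.
    Compares the mean of the first and last 12 months; a "spike" is any
    month at least spike_factor times the series mean.
    """
    if len(scores) < 24:
        raise ValueError("need at least 24 months of data")
    early, late = mean(scores[:12]), mean(scores[-12:])
    change = (late - early) / early if early else float("inf")
    has_spike = max(scores) >= spike_factor * mean(scores)
    if change > 0.20:
        return "spike-and-sustain" if has_spike else "steady climb"
    if change < -0.20:
        return "spike-and-crash" if has_spike else "decline"
    return "spiky" if has_spike else "plateau"
```

For example, a series that holds at 10 for a year and then at 40 for a year classifies as a steady climb, while a flat series classifies as a plateau.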

---

## 4. Phase 2: News Coverage Analysis

### 4.1 Search Strategy

**Database Selection:**

- **Google News:** Start here for quick overview
- **LexisNexis / Factiva:** Use for comprehensive, filterable search (if access available)
- **AllSides:** Use to assess left/center/right coverage (polarization indicator)

**Search Query Construction:**

**Boolean Operators:**

- `AND`: Both terms must appear ("algorithmic AND hiring")
- `OR`: Either term can appear ("AI OR artificial intelligence")
- `NOT`: Exclude term ("hiring NOT gig") - useful to filter out unrelated topics
- `" "`: Exact phrase ("algorithmic bias")
- `*`: Wildcard ("algorithm*" matches algorithm, algorithms, algorithmic)

**Example Queries:**

1. **Broad:** `"algorithmic hiring" OR "AI recruitment" OR "automated employment"`
2. **Narrow (Problem-Focused):** `("algorithmic hiring" OR "AI recruitment") AND (bias OR discrimination OR fairness)`
3. **Solution-Focused:** `("algorithmic hiring" OR "AI recruitment") AND (transparency OR audit OR regulation)`
4. **Exclude Noise:** `"algorithmic hiring" NOT "gig economy" NOT "freelance"`
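
Queries like these can be assembled consistently from the keyword groups defined in Section 3.1. A minimal sketch (plain Python; the function name and grouping convention are illustrative, not part of any search API):

```python
def build_query(core_terms, focus_terms=None, exclude_terms=None):
    """Compose a Boolean news-search query.

    core_terms are OR'd inside parentheses; focus_terms (if given) are
    OR'd and AND'd with the core group; exclude_terms are appended with
    NOT. Multi-word terms are quoted for exact-phrase matching.
    """
    def quote(term):
        return f'"{term}"' if " " in term else term

    query = "(" + " OR ".join(quote(t) for t in core_terms) + ")"
    if focus_terms:
        query += " AND (" + " OR ".join(quote(t) for t in focus_terms) + ")"
    for term in exclude_terms or []:
        query += f" NOT {quote(term)}"
    return query
```

Calling `build_query(["algorithmic hiring", "AI recruitment"], ["bias", "discrimination", "fairness"])` reproduces the "Narrow (Problem-Focused)" query above.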

**Date Filters:**

- **Past 12 months:** For current salience
- **Past 5 years:** For trend analysis
- **Specific date ranges:** Around known events (legislation passage, major lawsuits, etc.)

---

### 4.2 Quantitative Analysis

**Metric 1: Article Counts**

**Process:**

1. Run search query in Google News or database
2. Record total number of articles in:
   - Past 12 months
   - Past 5 years (for trend)
3. Calculate articles per month (total / months)

**Interpretation:**

| Articles (Past 12 Months) | Interpretation | Scenario Fit |
|---------------------------|----------------|--------------|
| **<10** | Minimal coverage | ⚠️ Low salience |
| **10-25** | Low coverage; niche | ⏳ Specialized interest |
| **25-50** | Moderate coverage | ✓ Suitable for demos |
| **50-100** | High coverage; sustained | ✓✓ Good salience |
| **100+** | Very high coverage; major issue | ✓✓✓ Excellent salience (but check polarization) |

**Metric 2: Outlet Diversity**

**Process:**

1. Categorize articles by outlet type:
   - **Mainstream:** NYT, WSJ, WaPo, CNN, BBC, etc.
   - **Tech Press:** Wired, The Verge, Ars Technica, TechCrunch
   - **Trade Press:** Industry-specific (HR Dive, Employment Law360, etc.)
   - **Opinion/Analysis:** The Atlantic, Vox, Slate, etc.
   - **Academic/Research:** Nature, Science, university news
2. Count articles per category

**Interpretation:**

- **Narrow coverage** (only tech press): Niche interest, may not reach general public
- **Broad coverage** (mainstream + tech + trade): Wide salience, cross-audience appeal
- **Academic coverage**: Indicates research community engagement, potential expert stakeholders
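
Both metrics reduce to simple tallies once articles are logged. A minimal sketch (plain Python; the band labels are illustrative shorthand, with thresholds taken from the article-count table above):

```python
from collections import Counter

def salience_band(articles_past_12mo):
    """Map a 12-month article count to the interpretation bands above."""
    bands = [(10, "minimal"), (25, "low/niche"), (50, "moderate"), (100, "high")]
    for upper, label in bands:
        if articles_past_12mo < upper:
            return label
    return "very high"

def outlet_diversity(outlet_categories):
    """Tally articles per outlet category (Metric 2); spread across
    several categories indicates cross-audience salience."""
    return Counter(outlet_categories)
```

For example, 60 articles in the past year falls in the "high" band, and `outlet_diversity(["mainstream", "tech", "tech", "trade"])` shows coverage spanning three outlet types.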

---

### 4.3 Qualitative Content Analysis

**Sampling:**

- Select 10-20 representative articles (mix of outlet types, dates, framing)
- Prioritize: Major outlets, most-shared articles (social media metrics), opinion pieces (reveal values)

**Analysis Dimensions:**

**Dimension 1: Framing**

How is the issue presented?

- **Problem Framing:** "AI hiring is biased and needs regulation"
- **Innovation Framing:** "AI hiring improves efficiency and reduces human bias"
- **Rights Framing:** "Applicants have right to explanation"
- **Economic Framing:** "Regulation will burden businesses"
- **Technical Framing:** "Explainability vs. accuracy trade-offs"

**Document:** Which frames appear most frequently? Do different outlets use different frames?

**Dimension 2: Stakeholder Voices**

Who is quoted or cited in articles?

- Employers / HR professionals
- Job applicants / workers
- AI vendors / tech companies
- Regulators / policymakers
- Civil rights advocates
- Researchers / experts

**Document:** Which voices dominate? Which are absent? (Indicates power dynamics and potential coalition structures)

**Dimension 3: Tone**

What is the emotional valence?

- **Alarmist:** "AI bias crisis," "discrimination at scale," "algorithmic dystopia"
- **Optimistic:** "AI can reduce bias," "innovation in hiring," "data-driven fairness"
- **Balanced:** "Trade-offs between efficiency and fairness," "complex challenges," "no easy answers"
- **Neutral/Descriptive:** "NYC passes bias audit law," "study finds algorithmic disparities"

**Document:** Predominant tone (indicates polarization level: alarmist + optimistic = polarized; balanced + neutral = deliberation-friendly)

**Dimension 4: Solution Proposals**

What remedies are suggested?

- **Ban/Prohibit:** "Ban algorithmic hiring" (extreme, likely polarized)
- **Regulate:** "Require audits," "mandate transparency"
- **Voluntary Standards:** "Industry self-regulation," "best practices"
- **Technological Fixes:** "Better algorithms," "explainable AI"
- **Procedural Safeguards:** "Human review," "applicant recourse"

**Document:** Range and diversity of solutions (diverse solutions = open policy window; single "obvious" solution = may be too settled)
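
Once each sampled article has been hand-coded along these dimensions, the tallies follow mechanically. A minimal sketch (plain Python; the coded sample is hypothetical, and the 50% cut-off for the polarization flag is an illustrative threshold, not from the rubric):

```python
from collections import Counter

# Hypothetical coded sample: (frame, tone) pairs assigned by a human
# coder per Dimensions 1 and 3.
coded = [
    ("problem", "alarmist"),
    ("innovation", "optimistic"),
    ("problem", "balanced"),
    ("rights", "neutral"),
    ("problem", "neutral"),
]

frame_counts = Counter(frame for frame, tone in coded)
tone_counts = Counter(tone for frame, tone in coded)

# Dimension 3 heuristic: alarmist + optimistic dominating over
# balanced + neutral suggests entrenched camps.
polarized_share = (tone_counts["alarmist"] + tone_counts["optimistic"]) / len(coded)
deliberation_friendly = polarized_share < 0.5
```

With this sample, problem framing dominates (3 of 5 articles) but only 40% of the tone codes are alarmist or optimistic, so the topic still looks deliberation-friendly.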

---

### 4.4 Coverage Timeline Construction

**Purpose:** Visualize how coverage has evolved over time, identify inflection points

**Process:**

1. **Collect publication dates** for all articles (or sample)
2. **Plot article counts per month** (line graph or bar chart)
3. **Annotate events** that correspond to spikes:
   - Legislation passage
   - Major lawsuits filed/decided
   - Corporate scandals (e.g., Amazon bias story)
   - Academic studies published
   - Regulatory announcements

**Interpretation:**

- **Steady baseline with event spikes:** Issue has sustained relevance; events catalyze renewed interest
- **Single spike, then silence:** Event-driven; may not be enduring issue
- **Increasing baseline over time:** Emerging issue gaining traction
- **Decreasing baseline:** Fading issue

**Example (Algorithmic Hiring):**

- 2018: Amazon bias story → spike
- 2020-2021: EU AI Act negotiations → sustained increase
- Dec 2021: NYC LL144 passed → spike
- July 2023: NYC LL144 effective → spike
- 2023-2024: High sustained baseline (regulatory implementation ongoing)
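
Step 2's per-month counts are a one-line aggregation once publication dates are collected. A minimal sketch (plain Python; plotting itself is left to whatever charting tool you prefer):

```python
from collections import Counter
from datetime import date

def monthly_counts(pub_dates):
    """Bin article publication dates into per-month counts for the
    coverage timeline; spikes can then be annotated against known events."""
    return Counter(d.strftime("%Y-%m") for d in pub_dates)
```

For example, `monthly_counts([date(2023, 7, 1), date(2023, 7, 15), date(2023, 8, 2)])` yields 2 articles for "2023-07" and 1 for "2023-08".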

---

## 5. Phase 3: Regulatory & Legislative Tracking

### 5.1 Legislative Search Protocol

**U.S. Federal:**

**Step 1: Search Congress.gov**

- Search bar: Enter keywords (e.g., "algorithmic employment," "automated hiring," "AI bias")
- Filters:
  - **Congress:** Current (119th) for pending bills; previous Congresses for historical context
  - **Type:** Bills, Resolutions
  - **Status:** Introduced, Passed House/Senate, Became Law

**Step 2: Review Results**

- Click on bill number to see full text, summary, sponsors, cosponsors
- Check **Actions** tab for legislative progress (committee referrals, votes, etc.)
- Check **Cosponsors** for bipartisan support (indicator of viability)

**Step 3: Monitor Committee Hearings**

- Committee websites post hearing schedules, witness lists, testimony transcripts
- Key committees (for AI/employment): Senate Commerce, House Energy & Commerce, House Education & Labor

**Documentation:**

- Bill number, title, sponsor, status
- Summary of provisions related to scenario
- Likelihood of passage (bipartisan support? committee action? presidential support?)

---

**U.S. State:**

**Step 1: Search LegiScan**

- Similar to Congress.gov but for state legislatures
- Keyword search across all 50 states
- Filter by state, status (introduced, passed, enacted)

**Step 2: Identify Leader States**

- Which states are most active on this issue?
- Example: CA, NY, IL often lead on tech regulation

**Step 3: Track State-to-State Spread**

- If multiple states introduce similar bills (copycat legislation), indicates growing momentum
- Example: If 5+ states introduce algorithmic hiring bias audit bills after NYC, issue is spreading
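
The documentation fields above map naturally onto a small record type, and the Step 3 spread check is then a distinct-jurisdiction count. A minimal sketch (plain Python; the class, field names, and five-jurisdiction threshold are illustrative, the threshold echoing the "5+ states" example):

```python
from dataclasses import dataclass

@dataclass
class TrackedBill:
    """One tracked bill: jurisdiction, number, title, and status."""
    jurisdiction: str
    number: str
    title: str
    status: str  # "introduced", "passed", or "enacted"

def is_spreading(bills, threshold=5):
    """Copycat-legislation check (Step 3): True when bills on the topic
    have appeared in at least `threshold` distinct jurisdictions."""
    return len({b.jurisdiction for b in bills}) >= threshold
```

Five bias-audit bills across five states would trip the flag; two would not.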

---

**International:**

**Step 1: EU Legislation (EUR-Lex)**

- Search for Directives, Regulations on topic
- Example: EU AI Act (Regulation 2024/1689) classifies hiring algorithms as high-risk

**Step 2: Other Jurisdictions**

- UK, Canada, Australia often follow US/EU
- Search "algorithmic hiring regulation [country]" in Google News or government websites

**Documentation:**

- Jurisdiction, regulation title, status
- Key provisions (transparency, auditing, enforcement)
- Effective date (if passed)

---

### 5.2 Regulatory Activity Tracking

**Federal Agencies:**

**Step 1: Search Federal Register**

- Keywords: Same as legislative search
- Filters:
  - **Document Type:** Proposed Rule, Final Rule, Notice
  - **Agency:** EEOC (employment), FTC (consumer protection), etc.
  - **Comment Period:** Open (opportunity for public input)

**Step 2: Identify Proposed Rules**

- If comment period is open, note deadline
- Read proposed rule text and agency justification
- Check number of public comments (high volume = high salience)

**Step 3: Monitor Regulations.gov**

- Public comments provide stakeholder perspectives
- Scan comments from:
  - Industry groups (employer perspective)
  - Civil rights orgs (applicant/worker perspective)
  - Tech companies (vendor perspective)
  - Academics (expert perspective)

**Documentation:**

- Agency, rule title, docket number
- Proposed vs. final (status)
- Comment period dates
- Key provisions

---

**State Agencies:**

**Step 1: Identify Relevant Agencies**

- Labor departments, civil rights commissions, attorney general offices

**Step 2: Search Agency Websites**

- Look for "Guidance," "Enforcement Actions," "Press Releases"
- Example: CA Attorney General issued guidance on algorithmic discrimination

**Documentation:**

- Agency, document type, date
- Summary of guidance or enforcement action

---

### 5.3 Litigation Tracking

**Federal Courts (PACER / CourtListener):**

**Step 1: Search for Cases**

- Keywords: "algorithmic hiring," "automated employment decision," "AI bias"
- Combine with legal terms: "Title VII," "disparate impact," "discrimination"

**Step 2: Filter Active Cases**

- **Filed:** Date filed (recent = active)
- **Status:** Open (not dismissed or settled)
- **Court:** District courts (trial level), Circuit courts (appeals), Supreme Court (if high-profile)

**Step 3: Identify Lead Cases**

- **Class actions:** Multiple plaintiffs, high stakes
- **Test cases:** Advocacy groups as plaintiffs (strategic litigation)
- **Precedent-setting:** First cases in new area of law

**Example:** Mobley v. Workday (2023, N.D. Cal.) - class action alleging Workday's screening algorithm discriminates based on race, age, and disability

**Documentation:**

- Case name, case number, court, filed date
- Plaintiffs / Defendants
- Claims (legal theories)
- Status (pending, motion to dismiss, discovery, trial, settlement, appeal)
- Significance (precedent potential, media coverage)

---

### 5.4 Regulatory Activity Scoring

**For Rubric Criterion 4.2 (Regulatory/Legislative Activity):**

| Activity Level | Examples | Points (0-5) |
|----------------|----------|--------------|
| **None** | No pending bills, no regulations, no litigation | 0 |
| **Proposed** | Bills introduced but not passed; comment period open; advocacy campaigns | 2 |
| **Active** | Bills passed in ≥1 jurisdiction; regulations finalized; active litigation | 4 |
| **Implemented** | Laws enacted and being enforced; regulations in effect; case law developing | 5 |

**Evidence to Document:**

- Number of jurisdictions with legislation (more = higher score)
- Timeline (how recent? recently passed = active; passed 5 years ago = implemented)
- Enforcement actions (are agencies actually enforcing, or is the law on the books but not enforced?)
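
The scoring table collapses into a direct lookup when tabulating scenarios. A minimal sketch (plain Python; the function name is illustrative):

```python
def regulatory_activity_score(level):
    """Map a Section 5.4 activity level to rubric points (0-5)."""
    points = {"none": 0, "proposed": 2, "active": 4, "implemented": 5}
    try:
        return points[level.lower()]
    except KeyError:
        raise ValueError(f"unknown activity level: {level!r}")
```

So a scenario with finalized regulations and active litigation ("Active") scores 4 of 5 points.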

---

## 6. Phase 4: Academic Discourse Mapping

### 6.1 Literature Search Protocol

**Step 1: Keyword Search in Google Scholar**

**Query Structure:**

- Core scenario keywords + academic terms
- Example: `"algorithmic hiring" AND ("fairness" OR "bias" OR "transparency" OR "discrimination")`
- Advanced: `allintitle:` for title search, `author:` for specific researchers

**Filters:**

- **Date Range:** Past 5 years for current discourse; all years for comprehensive review
- **Sort by:** Relevance (default), or "Cited by" (most influential papers)

**Step 2: Scan Results**

- Title + abstract to assess relevance
- Click "Cited by" to see how many times a paper has been cited (influence metric)
- Click "Related articles" to discover similar research

**Step 3: Identify Key Papers**

- **Foundational:** Highly cited, early work in area (e.g., 1000+ citations)
- **Recent:** Past 2 years, cutting-edge research
- **Interdisciplinary:** Papers in CS, law, ethics, sociology (indicates broad interest)

---

**Step 4: Search Specialized Databases**

**Computer Science / AI Ethics:**

- **ACM Digital Library:** Search FAccT (Fairness, Accountability, and Transparency) conference
- **arXiv:** Search categories: cs.CY (Computers and Society), cs.AI, cs.LG (Machine Learning)

**Law:**

- **SSRN:** Search "Law" or "Legal Studies" categories
- **HeinOnline:** Law review articles (subscription)

**Social Sciences:**

- **JSTOR:** Search sociology, economics, public policy journals

**Query:** Same keywords, but filtered by database-specific categories

---
### 6.2 Bibliometric Analysis
|
||
|
||
**Purpose:** Quantify academic interest and identify influential researchers/papers
|
||
|
||
**Metrics:**
|
||
|
||
1. **Publication Count:**
|
||
- Number of papers in past 5 years
|
||
- Trend over time (increasing = growing field)
|
||
|
||
2. **Citation Count:**
|
||
- Total citations for top 10-20 papers
|
||
- Average citations per paper (high = influential field)
|
||
|
||
3. **Author Network:**
|
||
- Who are prolific authors? (publish 5+ papers on topic)
|
||
- Institutional affiliations (which universities/labs active?)
|
||
- Co-authorship patterns (collaborations across disciplines?)
|
||
|
||
4. **Venue Analysis:**
|
||
- Which journals/conferences publish this research?
|
||
- Interdisciplinary venues (FAccT, CHI) vs. discipline-specific (AI conferences, law reviews)
|
||
|
||
**Tools:**
|
||
- **Semantic Scholar:** Author profiles, citation graphs, "Highly Influential Citations" metric
|
||
- **Connected Papers:** Visual graph of paper relationships
|
||
- **Manual Spreadsheet:** Track papers, authors, institutions, citations
|
||
|
||
**Interpretation:**
|
||
|
||
| Publication Count (Past 5 years) | Interpretation |
|
||
|----------------------------------|----------------|
|
||
| **<10** | Niche topic; minimal academic interest |
|
||
| **10-50** | Emerging field; growing interest |
|
||
| **50-200** | Established field; sustained research |
|
||
| **200+** | Major field; high academic activity |
|
||
|
||
**For Scenario Scoring:**
|
||
- Academic interest correlates with expert availability (potential stakeholders)
|
||
- High citation counts indicate foundational concepts (pedagogical clarity)
|
||
- Interdisciplinary venues indicate broad relevance (generalizability)
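The interpretation bands from the table can be captured in a small lookup. Because the table's ranges overlap at 50 and 200, assigning boundary counts to the higher band is an assumption:

```python
def interpret_publication_count(n: int) -> str:
    """Map a 5-year publication count to the rubric's interpretation bands.
    Counts of exactly 50 or 200 are assigned to the higher band (assumption)."""
    if n < 10:
        return "Niche topic; minimal academic interest"
    if n < 50:
        return "Emerging field; growing interest"
    if n < 200:
        return "Established field; sustained research"
    return "Major field; high academic activity"
```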
---

### 6.3 Content Themes Analysis

**Purpose:** Identify dominant research questions, findings, debates

**Process:**

1. **Sample 20-30 abstracts** (mix of highly cited + recent)
2. **Code for themes:**
   - **Research Questions:** What are researchers asking? (e.g., "How to measure algorithmic fairness?" "What legal standards apply?")
   - **Findings:** What do studies show? (e.g., "Bias exists," "Explanations improve trust," "Trade-offs between accuracy and fairness")
   - **Debates:** What disagreements exist? (e.g., "Individual fairness vs. group fairness," "Transparency vs. trade secrets")
   - **Gaps:** What do authors say is under-researched?

3. **Cluster themes:**
   - **Technical:** Algorithm design, fairness metrics, explainability
   - **Legal:** Regulatory frameworks, liability, enforcement
   - **Ethical:** Moral frameworks, values trade-offs, stakeholder rights
   - **Empirical:** Case studies, experiments, field data

**Output:**
- Theme taxonomy (categories + subcategories)
- Research gaps (opportunities for PluralisticDeliberationOrchestrator to contribute novel insights)
- Dominant framings (how academia talks about this vs. media, policy)

**Example Themes (Algorithmic Hiring):**
- **Technical:** Bias detection methods, counterfactual explanations, adversarial debiasing
- **Legal:** Title VII applicability, disparate impact doctrine, EU AI Act compliance
- **Ethical:** Right to explanation, autonomy, discrimination harms
- **Empirical:** Audit studies showing bias, user studies on transparency, firm surveys on adoption

---
## 7. Phase 5: Social Media & Public Discourse

### 7.1 Platform Selection

**Twitter/X:**
- **Strengths:** Real-time discourse, journalist/expert engagement, advocacy campaigns
- **Use for:** Identifying emerging concerns, tracking hashtags, finding influencers

**Reddit:**
- **Strengths:** In-depth discussion, community norms, diverse perspectives
- **Use for:** Understanding grassroots sentiment, finding stakeholder pain points

**LinkedIn:**
- **Strengths:** Professional discourse, HR practitioner perspectives
- **Use for:** Employer/vendor perspectives, industry reactions

**Facebook / Instagram / TikTok:**
- **Generally less useful** for policy-oriented scenarios (more personal/entertainment)
- **Exception:** Activist campaigns sometimes organize on these platforms

---
### 7.2 Twitter/X Research Protocol

**Step 1: Hashtag Identification**

**Search Twitter for:**
- Core keywords (e.g., "algorithmic hiring")
- Look at tweets, identify common hashtags
- Example: #AIbias, #AlgorithmicAccountability, #FairHiring, #HRTech

**Step 2: Advanced Search**

**Twitter Advanced Search** (https://twitter.com/search-advanced):
- **Words:** Enter keywords or hashtags
- **Date Range:** Past 12 months for current discourse
- **Engagement:** Filter by replies, retweets, likes (high engagement = influential)

**Query Examples:**
- `"algorithmic hiring" (bias OR discrimination)` - Find critical tweets
- `"AI recruitment" (efficiency OR innovation)` - Find supportive tweets
- `#AIbias #hiring` - Combine hashtags

**Step 3: Identify Influencers**

- **Journalists:** Who's writing about this? (likely sources for media research)
- **Researchers:** Academics promoting their papers
- **Advocates:** Civil rights orgs, labor groups, tech ethics orgs
- **Practitioners:** HR professionals, recruiters
- **Vendors:** AI companies promoting products

**Document:**
- User handle, affiliation, follower count
- Position / framing (critical, supportive, neutral)
- Sample tweets (representative of their stance)

**Step 4: Sentiment Analysis (Manual)**

Sample 50-100 tweets and code each for sentiment:
- **Critical / Negative:** "AI hiring is discriminatory," "ban algorithmic screening"
- **Supportive / Positive:** "AI reduces human bias," "data-driven hiring works"
- **Neutral / Informational:** "NYC passes bias audit law," "study finds X"
- **Mixed / Nuanced:** "AI can help but needs regulation," "trade-offs exist"

**Distribution:**
- If 80%+ of tweets are critical or supportive (one-sided), the discourse is polarized
- If 40-60% are mixed/nuanced, there is space for deliberation

---
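Once tweets are manually coded, the distribution check in Step 4 can be tallied mechanically. A minimal sketch, assuming lowercase code labels (the label names and threshold phrasing are illustrative):

```python
from collections import Counter

def sentiment_read(codes):
    """Summarize manually coded tweet sentiments ('critical', 'supportive',
    'mixed', 'neutral') into percentage shares plus a rough interpretation
    using the 80% one-sided and 40-60% mixed thresholds above."""
    counts = Counter(codes)
    total = len(codes)
    shares = {label: 100 * n / total for label, n in counts.items()}
    if max(shares.get("critical", 0), shares.get("supportive", 0)) >= 80:
        read = "one-sided: polarization likely"
    elif 40 <= shares.get("mixed", 0) <= 60:
        read = "substantial nuance: deliberation space"
    else:
        read = "no clear signal from this sample"
    return shares, read
```

Recall the cautions in Section 7.4: this quantifies a hand-coded sample, not public opinion.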
### 7.3 Reddit Research Protocol

**Step 1: Identify Relevant Subreddits**

**Search r/all for keywords**, then note which subreddits appear:
- Example (Algorithmic Hiring):
  - r/jobs (applicant perspective)
  - r/recruitinghell (critical perspective)
  - r/humanresources (employer perspective)
  - r/machinelearning (technical perspective)
  - r/privacy, r/technology (ethical/policy perspective)

**Step 2: Subreddit Analysis**

For each relevant subreddit, note:
- **Subscriber count** (larger = more influential)
- **Post frequency** (active vs. dead subreddit)
- **Top posts (all time, past year):** What resonates with the community?

**Step 3: Search Within Subreddits**

**Reddit Search** (filter by subreddit):
- Sort by: "Top" (most upvoted) or "New" (recent)
- Time: Past year

**Example:** Search r/jobs for "algorithmic hiring" or "AI application"

**Step 4: Content Analysis**

Sample 10-20 top posts/comment threads:
- **Themes:** What are people concerned about? (bias, rejection without explanation, dehumanization, gaming the system)
- **Framing:** Victim (unfairly rejected) vs. Strategic (how to beat the algorithm)
- **Solutions:** What do users suggest? (regulate, ban, improve algorithms, provide explanations)

**Document:**
- Dominant narratives per subreddit (applicant vs. employer perspectives may differ)
- Evidence of deliberation (do opposing views engage constructively?) or echo chambers (one perspective dominates, dissent is downvoted)

---
### 7.4 Social Media Interpretation Cautions

**Caution 1: Unrepresentative Sample**
- Social media users ≠ general public (younger, more educated, more politically engaged)
- Loud voices ≠ majority opinion (outrage drives engagement)

**Caution 2: Bot Activity**
- Automated accounts and coordinated campaigns can inflate the appearance of support or opposition

**Caution 3: Volatility**
- Social media discourse changes rapidly; today's outrage is tomorrow's forgotten topic

**Best Practice:**
- Use social media as **supplementary** evidence, not primary
- Cross-reference with news, regulatory, and academic sources
- Focus on identifying concerns, framings, and stakeholder voices (not measuring "majority opinion")

---
## 8. Phase 6: Polarization Assessment

### 8.1 Polarization Indicators

**Indicator 1: Partisan Sorting**
- Are Democrats and Republicans on opposite sides? (strong indicator of polarization)
- **Method:** Check bill cosponsors (bipartisan vs. single-party), news outlet framing (AllSides comparison), polling data (if available)

**Indicator 2: Tribal Identity Formation**
- Do people self-identify as "pro-X" or "anti-X"? (e.g., "pro-AI" vs. "anti-AI")
- **Method:** Search for self-labels in social media bios, hashtags, advocacy group names

**Indicator 3: Compromise Stigmatization**
- Are moderate positions attacked by both sides? ("not woke enough" AND "too woke")
- **Method:** Analyze reactions to middle-ground proposals (downvoted on Reddit? criticized on Twitter?)

**Indicator 4: Cross-Cutting Coalitions**
- Are there unusual alliances? (e.g., ACLU + libertarian groups, business + labor)
- **Method:** Check coalition statements, joint letters, multi-stakeholder initiatives

**Indicator 5: Solution Diversity**
- Are many solutions proposed, or one "obvious" solution per side?
- **Method:** Count distinct proposals in news, policy papers, advocacy materials

---
### 8.2 Polarization Scoring (for Rubric Criterion 4.3)

| Polarization Level | Indicators | Score (0-5, inverse) |
|--------------------|-----------|---------------------|
| **Highly Polarized** | Partisan sorting (100%); tribal identities; compromise attacked; no cross-cutting coalitions; binary solutions | 0-1 |
| **Moderately Polarized** | Partial partisan sorting (60-80%); some tribal identity; cross-cutting coalitions exist but weak; limited solution diversity | 2-3 |
| **Low Polarization** | Weak/no partisan sorting (<60%); cross-cutting coalitions common; compromise socially acceptable; diverse solutions | 4-5 |

**Evidence to Document:**
- Bill cosponsor analysis (% bipartisan)
- AllSides framing comparison (do left/center/right outlets frame similarly or oppositely?)
- Social media sentiment distribution (one-sided or mixed?)
- Coalition landscape (stakeholder alliances, joint statements)

**Example (Algorithmic Hiring Transparency):**
- **Partisan Sorting:** Weak (NYC LL144 passed a Democratic city council, but the EU AI Act has cross-party support; not a strictly partisan issue)
- **Tribal Identity:** Minimal (no "pro-algorithmic-hiring" vs. "anti-algorithmic-hiring" camps; positions cross-cut tech/labor/privacy groups)
- **Cross-Cutting Coalitions:** Present (privacy advocates + labor unions + some employers on transparency; tech companies + some civil rights groups on innovation)
- **Solution Diversity:** High (full transparency, tiered transparency, self-regulation, technical fixes, bans all proposed)
- **Polarization Score:** 4-5/5 (Low polarization) ✓

---
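The scoring table can be approximated as a small decision function. The band structure follows the rubric, but the exact cutoffs within each band are a judgment call and should be documented alongside the evidence:

```python
def polarization_score(partisan_sorting_pct, tribal_identity,
                       cross_cutting_coalitions, distinct_solutions):
    """Inverse polarization score (0-5) following the rubric bands.
    Within-band cutoffs (e.g. needing 3+ distinct solutions for the low
    band) are assumptions, not taken from the rubric itself."""
    low = (partisan_sorting_pct < 60 and cross_cutting_coalitions
           and distinct_solutions >= 3)
    moderate = partisan_sorting_pct <= 80 or cross_cutting_coalitions
    if low:
        return 4 if tribal_identity else 5
    if moderate:
        return 2 if tribal_identity else 3
    return 0 if tribal_identity else 1
```

Applied to the worked example (weak sorting, no tribal identity, coalitions present, five distinct solutions), this returns the rubric's 5/5.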
## 9. Phase 7: Policy Window Analysis

### 9.1 Kingdon's Multiple Streams Model

**Framework:** Policy change occurs when three "streams" align:

1. **Problem Stream:** The issue is recognized as a problem requiring government action
   - **Indicators:** Media coverage, focusing events (crises, scandals), feedback from existing programs

2. **Politics Stream:** The political environment is favorable
   - **Indicators:** National mood, advocacy campaigns, elections, administration changes

3. **Policy Stream:** Solutions are available and viable
   - **Indicators:** Research, pilot programs, proposals from think tanks, legislative drafts

**Policy Window Opens When:** All three streams converge (problem recognized + political will + solution available)

---
### 9.2 Assessing Policy Window Status

**Step 1: Problem Stream Assessment**

**Questions:**
- Is the problem widely recognized? (media coverage, public discourse)
- Have there been focusing events? (scandals, crises, viral stories)
  - Example (Algorithmic Hiring): Amazon bias story (2018), HireVue controversy (2019-2020)
- Is the problem framed as urgent? ("crisis" vs. "ongoing concern")

**Evidence:**
- High media coverage = problem recognized
- Recent focusing events = elevated urgency

---

**Step 2: Politics Stream Assessment**

**Questions:**
- Is there political will? (legislators talking about it, presidential/gubernatorial attention)
- What's the partisan dynamic? (bipartisan issue = more viable)
- Are advocacy groups mobilized? (campaigns, lobbying, public pressure)
- Recent elections or leadership changes? (a new administration may prioritize the issue)

**Evidence:**
- Pending legislation = political will
- Bipartisan cosponsors = cross-party support
- Advocacy coalitions = organized pressure

**Example (Algorithmic Hiring):**
- NYC: Political will existed (progressive city council, responsive to labor advocates)
- EU: High political priority (AI Act years in development, strong Parliament support)
- Federal (U.S.): Political will is moderate (proposals exist but are not prioritized; depends on the administration)

---

**Step 3: Policy Stream Assessment**

**Questions:**
- Are solutions available? (model legislation, best practices, pilot programs)
- Have solutions been tested? (evidence from other jurisdictions, academic research)
- Is there technical feasibility? (can solutions actually be implemented?)
- Are solutions politically viable? (acceptable to key stakeholders, not too radical)

**Evidence:**
- Model legislation (NYC LL144, EU AI Act) = solution exists
- Academic research on bias audits = solution tested
- Existing bias audit vendors = technical feasibility
- Employer compliance (no mass exodus) = political viability

---
### 9.3 Policy Window Scoring (for Rubric Criterion 4.4)

| Window Status | Problem Stream | Politics Stream | Policy Stream | Score (0-5) |
|---------------|----------------|----------------|---------------|-------------|
| **Closed** | Problem not recognized OR no focusing events | No political will; issue ignored | No viable solutions | 0-1 |
| **Narrow Opening** | Problem recognized but not urgent | Some political will but not prioritized | Solutions proposed but untested | 2-3 |
| **Open** | Problem urgent; recent focusing events | Active political will; pending legislation or regulation | Solutions tested and viable | 4-5 |

**Example (Algorithmic Hiring Transparency):**
- **Problem Stream:** ✓ Recognized (media coverage), ✓ Focusing events (Amazon, HireVue), ✓ Urgent (regulatory momentum)
- **Politics Stream:** ✓ Political will (NYC, EU, some U.S. states), ✓ Advocacy campaigns (ACLU, EPIC, labor unions)
- **Policy Stream:** ✓ Solutions available (NYC LL144 model, EU AI Act), ✓ Tested (pilot audits), ✓ Viable (employers complying)
- **Policy Window Score:** 5/5 (Open) ✓✓✓

---
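Because the window only opens when all three streams converge, the score is bounded by the weakest stream. A minimal sketch of that logic (the "absent"/"partial"/"strong" labels and the exact caps are illustrative, not from Kingdon):

```python
LEVELS = {"absent": 0, "partial": 1, "strong": 2}

def policy_window_score(problem, politics, policy):
    """Fold three stream assessments into the rubric's 0-5 window score.
    The weakest stream caps the result: any absent stream keeps the
    window closed; a merely partial stream caps it at a narrow opening."""
    values = [LEVELS[s] for s in (problem, politics, policy)]
    if min(values) == 0:
        return min(sum(values), 1)   # Closed
    if min(values) == 1:
        return min(sum(values), 3)   # Narrow Opening
    return 5                         # Open: all streams strong
```

For the worked example, all three streams are strong, giving the 5/5 "Open" score.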
## 10. Synthesis & Documentation

### 10.1 Data Integration

**Purpose:** Synthesize findings from all research phases into a coherent assessment

**Process:**

1. **Compile Evidence:**
   - Google Trends data (charts, scores, related queries)
   - News coverage data (article counts, outlet list, content themes)
   - Regulatory tracking (bills, regulations, litigation summaries)
   - Academic literature (publication counts, key papers, themes)
   - Social media findings (influencers, sentiment, subreddit perspectives)
   - Polarization assessment (indicators, scoring)
   - Policy window analysis (streams, scoring)

2. **Create Summary Tables:**

**Example: Media Pattern Summary Table**

| Dimension | Metric | Finding | Score | Evidence |
|-----------|--------|---------|-------|----------|
| **Search Interest** | Google Trends (12 mo) | 50-75/100 | High | Sustained search volume, peak during NYC LL144 |
| **News Coverage** | Articles (12 mo) | 75+ | High | NYT, WSJ, Wired, HBR coverage |
| **Regulatory Activity** | Status | Implemented | 5/5 | NYC LL144, EU AI Act enacted |
| **Academic Discourse** | Publications (5 yr) | 100+ | Established | FAccT papers, law reviews, HBR case studies |
| **Polarization** | Level | Low | 4/5 | Bipartisan support, cross-cutting coalitions |
| **Policy Window** | Status | Open | 5/5 | Problem recognized, political will, solutions viable |
| **TOTAL CRITERION 4** | | | **19/20** | Strong timeliness and salience |

---
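A minimal sketch of the score roll-up behind the 19/20 total. The convention that sub-criterion 4.1 averages its search-interest and news-coverage components (so the total is out of 20 rather than 25) is inferred from the worked totals in this guide, not stated in the rubric:

```python
def criterion4_total(search, news, regulatory, polarization_inv, window):
    """Total Criterion 4 out of 20. Each component is scored 0-5;
    search + news are averaged into the single 4.1 coverage sub-score
    (an assumption based on the worked example)."""
    for score in (search, news, regulatory, polarization_inv, window):
        assert 0 <= score <= 5, "each component is scored 0-5"
    coverage = (search + news) / 2   # sub-criterion 4.1, out of 5
    return coverage + regulatory + polarization_inv + window
```

With the case-study scores (4, 4, 5, 5, 5) this reproduces the 19/20 shown in the table.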
3. **Narrative Synthesis:**

Write a 1-2 page summary:

**Section 1: Current Salience (2-3 paragraphs)**
- Google Trends shows sustained high search interest (50-75/100) over the past 12 months, with peaks corresponding to regulatory milestones (NYC LL144 effective July 2023).
- News coverage is extensive (75+ articles in major outlets in the past 12 months) and diverse (mainstream, tech, trade, academic venues).
- Academic research is robust (100+ publications in the past 5 years), indicating an established field with cross-disciplinary engagement (CS, law, ethics, HR).

**Section 2: Discourse Landscape (2-3 paragraphs)**
- Framing is mixed: Media presents both problem framing (bias, discrimination) and solution framing (audits, transparency), indicating the issue is neither settled nor ignored.
- Stakeholder voices include employers, applicants, vendors, regulators, advocates, and researchers—all are represented in discourse.
- Polarization is low: No partisan sorting, cross-cutting coalitions exist (ACLU + employers on some transparency measures), solution diversity is high.

**Section 3: Policy Window (1-2 paragraphs)**
- The policy window is open: The problem is recognized (focusing events: Amazon bias, HireVue), political will exists (NYC, EU legislation), and solutions are tested (bias audit model).
- Demonstration timing is optimal: Regulatory implementation is ongoing (NYC LL144, EU AI Act), corporate policy decisions are being made now, and deliberation can inform real decisions.

**Section 4: Implications for Demonstration (1 paragraph)**
- High salience and an open policy window make this scenario timely for demonstration.
- Low polarization suggests authentic deliberation is feasible (not performative).
- Diverse stakeholders and established discourse mean recruiting real participants is viable.

---
### 10.2 Documentation Templates

**Template 1: Scenario Media Pattern Profile**

```markdown
# Media Pattern Profile: [Scenario Name]

**Research Date:** [Date]
**Researcher:** [Name]

## Summary Scores

| Criterion | Score | Evidence Summary |
|-----------|-------|-----------------|
| Search Interest | ___/5 | Google Trends: ___ |
| News Coverage | ___/5 | Article count: ___, outlets: ___ |
| Regulatory Activity | ___/5 | Status: ___ |
| Polarization (inverse) | ___/5 | Level: ___ |
| Policy Window | ___/5 | Status: ___ |
| **TOTAL (Criterion 4)** | **___/20** | |

## Detailed Findings

### 1. Search Interest Analysis
- **Google Trends Score (12 mo):** ___
- **Trend Trajectory:** [ ] Increasing [ ] Stable [ ] Decreasing [ ] Spiky
- **Peak Events:** [List major spikes and causes]
- **Related Queries:** [Top 5-10 related searches]
- **Geographic Focus:** [Top 3 regions]

### 2. News Coverage Analysis
- **Article Count (12 mo):** ___
- **Outlet Diversity:**
  - Mainstream: [Count, examples]
  - Tech Press: [Count, examples]
  - Trade: [Count, examples]
  - Opinion: [Count, examples]
- **Dominant Framing:** [Problem / Innovation / Rights / Economic / Technical]
- **Tone:** [Alarmist / Optimistic / Balanced / Neutral]
- **Coverage Timeline:** [Describe pattern: spiky, increasing, stable, etc.]

### 3. Regulatory & Legislative Activity
- **Federal:** [Bills, regulations, litigation]
- **State:** [Which states? What status?]
- **International:** [EU, other jurisdictions]
- **Activity Level:** [ ] None [ ] Proposed [ ] Active [ ] Implemented

### 4. Academic Discourse
- **Publication Count (5 yr):** ___
- **Key Papers:** [Top 3-5 highly cited or recent]
- **Researchers:** [Prolific authors, institutions]
- **Themes:** [Technical / Legal / Ethical / Empirical]

### 5. Social Media & Public Discourse
- **Platforms:** [Twitter, Reddit, LinkedIn, etc.]
- **Hashtags:** [Common hashtags]
- **Influencers:** [Key voices: journalists, researchers, advocates]
- **Sentiment:** [Critical _%, Supportive _%, Mixed _%, Neutral _%]

### 6. Polarization Assessment
- **Partisan Sorting:** [ ] High [ ] Moderate [ ] Low
- **Tribal Identity:** [ ] Yes [ ] Somewhat [ ] No
- **Cross-Cutting Coalitions:** [Examples]
- **Compromise Viability:** [ ] Stigmatized [ ] Contested [ ] Acceptable
- **Polarization Level:** [ ] High [ ] Moderate [ ] Low

### 7. Policy Window Analysis
- **Problem Stream:** [ ] Not recognized [ ] Recognized [ ] Urgent
- **Politics Stream:** [ ] No will [ ] Some will [ ] Active will
- **Policy Stream:** [ ] No solutions [ ] Proposed [ ] Tested and viable
- **Window Status:** [ ] Closed [ ] Narrow [ ] Open

## Recommendations
- **Suitable for demonstration?** [ ] Yes (Tier 1) [ ] Yes (Tier 2) [ ] No
- **Timing considerations:** [When would be optimal?]
- **Framing suggestions:** [How to position demonstration given current discourse?]
- **Risks identified:** [Polarization, sensitivity, stakeholder resistance, etc.]
```

---
**Template 2: Quick Research Checklist**

For rapid triage of scenarios (15-30 minutes per scenario):

```markdown
# Quick Media Pattern Check: [Scenario Name]

**Google Trends (5 min):**
- [ ] Score ≥25/100 in past 12 months?
- [ ] Trend is increasing or stable (not declining)?

**News Coverage (5 min):**
- [ ] ≥10 articles in major outlets (past 12 months)?
- [ ] Coverage from diverse outlet types (not just tech press)?

**Regulatory Activity (5 min):**
- [ ] Any pending or active legislation/regulation?
- [ ] Check Congress.gov, Federal Register, state tracking

**Polarization Quick Check (5 min):**
- [ ] Bipartisan support (if legislation exists)?
- [ ] Mixed sentiment in social media (not 80%+ one-sided)?

**Proceed to Full Research?**
- [ ] Yes (passes quick check)
- [ ] No (fails one or more quick checks; deprioritize)
```

---
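The quick checklist is all-or-nothing: a scenario proceeds to full research only if every check passes. A small sketch of that gate (parameter names are illustrative):

```python
def quick_triage(trends_score, trend_ok, article_count,
                 outlet_diversity, regulatory_activity, low_polarization):
    """Apply the quick-check thresholds above; all must pass."""
    return all([
        trends_score >= 25,     # Google Trends score, past 12 months
        trend_ok,               # trend increasing or stable, not declining
        article_count >= 10,    # major-outlet articles, past 12 months
        outlet_diversity,       # coverage beyond just the tech press
        regulatory_activity,    # pending or active legislation/regulation
        low_polarization,       # bipartisan support / mixed sentiment
    ])
```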
## 11. Case Study: Algorithmic Hiring Transparency

### 11.1 Research Execution

**Research conducted:** October 2025
**Researcher:** PluralisticDeliberationOrchestrator Planning Team
**Purpose:** Validate methodology, demonstrate application

---

### 11.2 Phase-by-Phase Findings

**Phase 1: Google Trends**

**Keywords Searched:**
- "algorithmic hiring" (primary)
- "AI recruitment"
- "automated employment screening"
- "hiring bias"
- "AI discrimination"

**Findings:**
- **"algorithmic hiring":** Score 50-75/100 (past 12 months), increasing trend since 2019
- **Peaks:** July 2023 (NYC LL144 effective), March 2024 (EU AI Act negotiations)
- **Related Queries (Rising):**
  - "NYC bias audit"
  - "AI hiring discrimination"
  - "algorithmic transparency"
  - "HireVue facial analysis"
- **Geographic Focus:** U.S. (NY, CA, IL highest), Germany, France, UK

**Score for Criterion 4.1 (Media Coverage):** 4/5 (High search interest)

---
**Phase 2: News Coverage**

**Search:** "algorithmic hiring" OR "AI recruitment" OR "automated employment" (past 12 months)

**Article Count:** 75+ articles in major outlets

**Outlet Diversity:**
- **Mainstream:** NYT (12), WSJ (8), Washington Post (6), Bloomberg (5)
- **Tech:** Wired (10), The Verge (8), Ars Technica (4), TechCrunch (6)
- **Trade:** HR Dive (15), SHRM (8), Employment Law360 (7)
- **Opinion:** HBR (4), The Atlantic (2), Vox (3)

**Dominant Framing:**
- Problem (40%): "AI hiring is biased, needs regulation"
- Solution (35%): "Bias audits, transparency can address concerns"
- Balanced (25%): "Trade-offs between efficiency and fairness"

**Tone:**
- Alarmist: 15%
- Optimistic: 20%
- Balanced: 45%
- Neutral/Descriptive: 20%

**Stakeholder Voices:**
- Employers: 30% of quotes
- Applicants/Labor: 25%
- Vendors: 20%
- Regulators: 15%
- Researchers: 10%

**Coverage Timeline:**
- Baseline: 4-5 articles/month (2020-2022)
- Spike: 15 articles (July 2023, NYC LL144 effective)
- Spike: 12 articles (March 2024, EU AI Act finalized)
- Current: 6-7 articles/month (sustained elevated baseline)

**Score for Criterion 4.1:** 4/5 (High news coverage, diverse outlets)

---
**Phase 3: Regulatory Activity**

**Federal (U.S.):**
- **Legislation:** AI Accountability Act (proposed, not passed; reintroduced 2024)
- **EEOC Guidance:** Technical Assistance Document on AI hiring (2023)
- **FTC Warning:** Blog post warning employers about discriminatory AI (2023)

**State:**
- **NYC Local Law 144:** Enacted 2021, effective July 2023 (bias audit requirement)
- **Illinois AI Video Interview Act:** Enacted 2020 (consent, explanation requirement)
- **California:** CPRA includes employment data rights (2023)
- **Maryland, Massachusetts:** Bills proposed (2024, pending)

**International:**
- **EU AI Act (Regulation 2024/1689):** Finalized 2024; hiring algorithms classified as "high-risk," with transparency, audits, and human oversight required

**Litigation:**
- **Mobley v. Workday** (2023, N.D. Cal.): Class action alleging age/disability discrimination
- **Doe v. HireVue** (EPIC complaint to FTC, 2019-2020): Facial analysis discrimination (HireVue discontinued facial analysis)

**Score for Criterion 4.2 (Regulatory Activity):** 5/5 (Implemented laws + active litigation)

---
**Phase 4: Academic Discourse**

**Google Scholar:** "algorithmic hiring" OR "AI recruitment" (past 5 years)

**Publication Count:** 100+ papers (CS, law, ethics, HR journals)

**Key Papers (Highly Cited):**
- Raghavan et al. (2020): "Mitigating Bias in Algorithmic Employment Screening" (FAccT) - 450 citations
- Ajunwa (2019): "The Paradox of Automation as Anti-Bias Intervention" (Law Review) - 320 citations
- Köchling & Wehner (2020): "Discriminated by an Algorithm" (Business Ethics) - 280 citations

**Prolific Researchers:**
- Solon Barocas (Cornell, CS + Law)
- Ifeoma Ajunwa (UNC, Labor Law)
- Manish Raghavan (MIT, CS)
- Aaron Rieke (Upturn, Policy)

**Themes:**
- Technical: Bias detection, explainability, fairness metrics (40%)
- Legal: Title VII applicability, disparate impact, regulatory frameworks (30%)
- Ethical: Right to explanation, dignity, autonomy (20%)
- Empirical: Audit studies, user perceptions, firm adoption (10%)

**Score for Criterion 4.1:** Contributes to the overall "High" assessment

---
**Phase 5: Social Media**

**Twitter:**
- **Hashtags:** #AIbias (very active), #AlgorithmicAccountability, #HRTech, #FairHiring
- **Influencers:**
  - @mer__edith (Meredith Whittaker, Signal Foundation, AI ethics) - Critical
  - @hypervisible (Ruha Benjamin, Princeton, sociology) - Critical
  - @timnitGebru (Timnit Gebru, DAIR, AI ethics) - Critical
  - @shrm (Society for HR Management) - Balanced/Supportive
  - @joshbersin (HR analyst) - Supportive (with concerns)

**Sentiment (sample of 100 tweets):**
- Critical: 45%
- Supportive: 20%
- Mixed/Nuanced: 25%
- Neutral: 10%

**Reddit:**
- **Subreddits:** r/jobs (115k subscribers), r/recruiting (200k), r/humanresources (180k)
- **Themes:**
  - r/jobs: Frustration with "black box" rejections, desire for explanation
  - r/recruiting: Debate over effectiveness vs. bias
  - r/humanresources: Compliance concerns, practical implementation questions

**Score:** Contributes to polarization assessment (see Phase 6)

---
**Phase 6: Polarization**

**Partisan Sorting:**
- **NYC LL144:** Passed a Democratic city council (but with no Republican opposition; bipartisan by default in a D+40 city)
- **EU AI Act:** Supported by center-right EPP + center-left S&D + Greens (cross-party)
- **Federal (U.S.):** AI Accountability Act sponsors are Democrats, but there is no organized Republican opposition (not yet a partisan litmus test)
- **Assessment:** Weak to none

**Tribal Identity:**
- No "pro-algorithmic-hiring" vs. "anti-algorithmic-hiring" camps
- Positions cross-cut: Privacy advocates + labor unions + some employers on transparency; tech companies + some civil rights groups on innovation
- **Assessment:** Minimal tribal formation

**Cross-Cutting Coalitions:**
- ACLU + Upturn + Color of Change (civil rights + tech ethics) support transparency
- Some employers + vendors + researchers support audits (not full transparency)
- **Assessment:** Cross-cutting coalitions exist

**Compromise Viability:**
- Middle-ground proposals (tiered transparency, bias audits) are mainstream (not fringe)
- No evidence of "purity tests" (moderates attacked by both sides)
- **Assessment:** Compromise is socially acceptable

**Polarization Level:** Low

**Score for Criterion 4.3:** 5/5 (Low polarization, deliberation feasible)

---
**Phase 7: Policy Window**

**Problem Stream:**
- **Recognition:** ✓ (extensive media coverage, academic research)
- **Focusing Events:** ✓ (Amazon bias 2018, HireVue 2019-2020, NYC law 2021-2023)
- **Urgency:** ✓ (regulatory momentum, ongoing litigation)

**Politics Stream:**
- **Political Will:** ✓ (NYC, EU, some U.S. states legislating)
- **Advocacy Campaigns:** ✓ (ACLU, EPIC, labor unions, civil rights groups active)
- **Partisan Dynamic:** ✓ (bipartisan potential; not yet a partisan litmus test)

**Policy Stream:**
- **Solutions Available:** ✓ (NYC LL144 model, EU AI Act framework, bias audit methodologies)
- **Solutions Tested:** ✓ (NYC implementation ongoing; pilot audits conducted; vendors offering bias audit services)
- **Technical Feasibility:** ✓ (explainable AI techniques exist, bias detection methods established)
- **Political Viability:** ✓ (employers are complying with the NYC law; no mass exodus or legal challenges)

**Window Status:** Open

**Score for Criterion 4.4:** 5/5 (Open policy window, demonstration can inform real decisions)

---
|
||
|
||
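The Phase 7 checklist follows Kingdon's three-streams logic: the window counts as "open" only when every indicator in the problem, politics, and policy streams is satisfied. A minimal sketch of that all-or-nothing check, with stream contents taken from the checklist above (the function itself is an illustrative convention, not a standard API):

```python
# Sketch of Kingdon's three-streams check: the policy window is "Open"
# only when every indicator in every stream is satisfied. Stream contents
# mirror the Phase 7 checklist; the function is an illustrative convention.

STREAMS = {
    "problem":  ["recognition", "focusing_events", "urgency"],
    "politics": ["political_will", "advocacy_campaigns", "partisan_dynamic"],
    "policy":   ["solutions_available", "solutions_tested",
                 "technical_feasibility", "political_viability"],
}

def window_status(findings: dict[str, bool]) -> str:
    """Return 'Open' only if all indicators in all three streams are met."""
    all_met = all(findings.get(item, False)
                  for items in STREAMS.values() for item in items)
    return "Open" if all_met else "Closed/Partial"

# Algorithmic hiring: every Phase 7 indicator was checked.
findings = {item: True for items in STREAMS.values() for item in items}
print(window_status(findings))  # -> Open
```

A stricter variant could report which stream is blocking, which is often the more useful research output when the window is only partially open.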
### 11.3 Overall Criterion 4 Score

| Component | Score | Max |
|-----------|-------|-----|
| 4.1 Search Interest & News Coverage | 4 + 4 = 8 | 10 (5 + 5) |
| 4.2 Regulatory Activity | 5 | 5 |
| 4.3 Polarization (inverse) | 5 | 5 |
| 4.4 Policy Window | 5 | 5 |
| **TOTAL CRITERION 4** | **19** | **20** |

**Interpretation:** Algorithmic Hiring Transparency scores 19/20 on Timeliness & Public Salience, a near-perfect score indicating this is an optimal time for a demonstration.

---
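The roll-up in the table above can be reproduced programmatically. The sketch below assumes the two 4.1 subscores (search interest, news coverage) average into a single /5 component, which is what makes the stated 19/20 total work; that averaging convention is my assumption, not something the rubric states.

```python
# Sketch: reproduce the Criterion 4 total from the score table.
# Assumption: the two 4.1 subscores average into one /5 component.

subscores_41 = [4, 4]  # search interest, news coverage
component_scores = {
    "4.1 Search Interest & News Coverage": sum(subscores_41) / len(subscores_41),
    "4.2 Regulatory Activity": 5,
    "4.3 Polarization (inverse)": 5,
    "4.4 Policy Window": 5,
}

total = sum(component_scores.values())  # 4 + 5 + 5 + 5
print(f"Criterion 4: {total:g}/20")     # -> Criterion 4: 19/20
```

Keeping the roll-up in code makes it trivial to rescore a scenario when new research (e.g., a fresh Google Trends pull) changes a subscore.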
## Appendix: Research Templates

### Template A: Search Term Matrix

| Scenario Element | Synonyms / Variations | Regulatory Terms | Stakeholder Terms |
|------------------|----------------------|------------------|-------------------|
| [Core Topic] | [List 3-5] | [Legal/policy terms] | [What different groups call it] |
| [Problem/Concern] | [List 3-5] | [Violations, harms] | [Complaints, critiques] |
| [Solution/Intervention] | [List 3-5] | [Compliance, requirements] | [Proposals, reforms] |

**Use:** Populate with scenario-specific terms, then use in all search phases (Google Trends, news, regulatory, academic).

---
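Once populated, Template A can be expanded mechanically into concrete search queries by crossing each column's terms. A sketch using placeholder terms drawn from the algorithmic-hiring case study; the example phrases are illustrative, not a curated search list:

```python
from itertools import product

# Sketch: expand a Template A matrix into concrete search queries by
# crossing the core-topic variations with the regulatory terms.
# Example terms are placeholders, not a curated list.

matrix = {
    "core_topic": ["algorithmic hiring", "AI hiring tools",
                   "automated employment decisions"],
    "regulatory": ["bias audit", "Local Law 144", "AEDT"],
    "stakeholder": ["hiring discrimination", "resume screening AI"],
}

queries = [" ".join(combo)
           for combo in product(matrix["core_topic"], matrix["regulatory"])]
queries += matrix["stakeholder"]  # standalone stakeholder phrasings
print(len(queries))  # 3 * 3 + 2 = 11
```

The same query list can then feed every search phase (Google Trends, news databases, regulatory dockets, academic indexes), which keeps results comparable across sources.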
### Template B: Source Credibility Assessment

When encountering unfamiliar sources (think tanks, advocacy groups, research institutes):

| Source Name | |
|-------------|---|
| **Type** | [ ] Academic [ ] Advocacy [ ] Industry [ ] Media [ ] Government |
| **Affiliation** | [Organization, funding sources if known] |
| **Political Lean** | [ ] Left [ ] Center [ ] Right [ ] Nonpartisan [ ] Unknown |
| **Credibility** | [ ] High (peer-reviewed, reputable) [ ] Medium [ ] Low (partisan, agenda-driven) |
| **How to Use** | [In research: quote with caveat? Exclude? Cross-reference?] |

**Purpose:** Avoid treating advocacy materials as neutral research, or partisan sources as nonpartisan.

---
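Template B assessments are also worth keeping in machine-readable form, so sources can be filtered consistently before citation. A sketch using a simple record type; the field values mirror the template's checkboxes, and the class name and example sources are hypothetical:

```python
from dataclasses import dataclass

# Sketch: store Template B assessments so sources can be filtered before
# citation. Field values mirror the template's checkboxes; the class and
# the example sources are hypothetical.

@dataclass
class SourceAssessment:
    name: str
    source_type: str      # academic / advocacy / industry / media / government
    political_lean: str   # left / center / right / nonpartisan / unknown
    credibility: str      # high / medium / low
    usage_note: str       # quote with caveat, exclude, cross-reference, ...

sources = [
    SourceAssessment("Example University Lab", "academic", "nonpartisan",
                     "high", "cite directly"),
    SourceAssessment("Example Advocacy Group", "advocacy", "left",
                     "medium", "quote with caveat; cross-reference"),
]

# Only high-credibility sources are cited without qualification.
citable = [s.name for s in sources if s.credibility == "high"]
print(citable)  # -> ['Example University Lab']
```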
## Conclusion

This guide provides **systematic, replicable methods** for assessing media patterns, public discourse, and regulatory activity around potential PluralisticDeliberationOrchestrator scenarios. By following this protocol, researchers can:

1. **Quantify timeliness and salience** using objective metrics (Google Trends scores, article counts, regulatory status)
2. **Assess polarization** using indicators (partisan sorting, tribal identity, cross-cutting coalitions)
3. **Identify policy windows** using Kingdon's streams framework
4. **Document findings** in structured formats for scenario scoring

**Key Takeaways:**

- **Media research is evidence-based, not impressionistic:** Use data, not hunches
- **Multiple sources are required:** Cross-reference Google Trends, news, regulatory, academic, and social media sources
- **Context matters:** High salience + high polarization = risky; high salience + low polarization = ideal
- **Policy windows open and close:** Timing is critical; the demonstration must align with decision-making moments

**Next Steps:**

- Apply this methodology to all Tier 1 candidate scenarios (from scenario-framework.md)
- Validate findings through stakeholder review
- Update evaluation rubric scores (Criterion 4) based on research
- Use research findings to inform demonstration framing and stakeholder recruitment

---

**Document Status:** Complete
**Next Document:** Refinement Recommendations & Next Steps (Document 5)
**Ready for Review:** Yes