# Meeting Notes: Shoshana Rosenberg (WSP)

**Date:** TBD

**Context:** Family introduction via Leslie (brother)

**Role:** SVP, Deputy General Counsel, Chief AI Governance and Privacy Officer at WSP USA

**WSP Context:** $1B, 7-year Microsoft partnership deploying Copilot to 73,000 engineers globally

---
## Core Message (30 seconds)

Tractatus is a research framework exploring whether AI governance can be made architecturally external rather than relying on voluntary AI compliance. Currently a proof-of-concept (6 months, single project). Not commercially ready, but the architectural patterns may be relevant to WSP's Copilot deployment challenge.

---
## WSP's Likely Challenges

1. **EU AI Act Article 14 compliance** — proving human oversight at scale for 73,000 users
2. **Audit trail requirements** — documenting AI-assisted decisions in mission-critical engineering
3. **Liability exposure** — when AI makes technical recommendations affecting public infrastructure
4. **80% human reduction goal** — ensuring quality doesn't degrade as AI handles more work

These aren't hypothetical governance problems. They're operational requirements with maximum penalties of €35 million.

---
## What Tractatus Offers (Honestly)

**Validated:**

- Architectural pattern for an external governance layer (sits outside the AI runtime)
- Six autonomous services that intercept/validate/document AI operations
- Immutable audit trail generation independent of AI cooperation
- Prevents pattern bias from overriding explicit instructions ("27027 failure mode")
- User reports order-of-magnitude productivity improvement (non-technical operator building production systems)

**Not Validated:**

- Enterprise-scale performance
- Integration complexity with Microsoft Copilot specifically
- Effectiveness against adversarial prompts
- Cross-platform consistency
- Multi-organisation deployment

**Known Limitation:**

The framework can be bypassed if the AI simply chooses not to invoke governance tools. Voluntary invocation remains a structural weakness; external enforcement mechanisms are needed.

---
## Three Key Technical Points

### 1. The Governance Gap

Current AI governance (policies, training, guidelines) relies on voluntary compliance. The AI must *choose* to check policies and *recognise* when to escalate to humans. Governance exists only insofar as the AI acknowledges it.

For WSP: 73,000 engineers using Copilot means 73,000 points where AI might bypass governance. Policy documents won't scale.
### 2. Architectural Externality

Tractatus places governance *outside* the AI runtime:

- **Agent Runtime Layer** — Copilot, Claude Code, any LLM system
- **Governance Layer** — six services that intercept operations (BoundaryEnforcer, CrossReferenceValidator, etc.)
- **Persistent Storage** — audit logs, rules, and instructions that the AI cannot alter

This makes bypassing governance *difficult* (though not impossible): defeating it requires architectural change, not prompt engineering.
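
The interception idea can be sketched in a few lines. This is an illustrative toy, not the actual Tractatus services — `GovernanceProxy`, the rule function, and the tool are all hypothetical names chosen for the example:

```python
# Sketch (hypothetical names, not the real Tractatus code): governance enforced
# by interception rather than voluntary invocation. Every tool call passes
# through the proxy, is checked against externally stored rules, and is logged
# whether or not the agent "chooses" to cooperate.

import datetime


class GovernanceProxy:
    """Wraps a real capability; the agent only ever holds the wrapped version."""

    def __init__(self, tool, rules, audit_log):
        self._tool = tool            # the real capability (e.g. a file write)
        self._rules = rules          # rules live outside the AI runtime
        self._audit_log = audit_log  # append-only store the AI cannot alter

    def __call__(self, *args, **kwargs):
        allowed = all(rule(args, kwargs) for rule in self._rules)
        # Log the attempt BEFORE deciding, so blocked calls are also evidenced.
        self._audit_log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "args": [repr(a) for a in args],
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError("Blocked by governance layer")
        return self._tool(*args, **kwargs)


# Example rule: writes only within an approved directory.
def within_approved_paths(args, kwargs):
    path = args[0] if args else kwargs.get("path", "")
    return str(path).startswith("/approved/")


audit = []
guarded_write = GovernanceProxy(
    tool=lambda path, data: f"wrote {len(data)} bytes to {path}",
    rules=[within_approved_paths],
    audit_log=audit,
)
```

Calling `guarded_write("/approved/report.txt", "hello")` succeeds; a write to `/etc/passwd` raises `PermissionError` — and both attempts appear in the audit log, which is the point: the evidence exists independent of the agent's cooperation.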
### 3. Evidence Layer

The EU AI Act requires proof of human oversight. Tractatus generates:

- Immutable audit trails of AI decision processes
- Documentation of human approval for values decisions
- Compliance evidence independent of AI cooperation
- Structured data for regulatory reporting

This is not legal advice. But it is architectural infrastructure that makes compliance *demonstrable*.
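
One common way to make an audit trail tamper-evident is hash chaining, where each entry commits to its predecessor. The sketch below illustrates that general technique — it is an assumption for explanation, not the actual Tractatus implementation:

```python
# Illustrative hash-chained audit trail (generic technique, not Tractatus code):
# any retroactive edit to an entry breaks the chain and is caught on verification.

import hashlib
import json


def append_entry(trail, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    payload = json.dumps({"event": event, "prev_hash": prev_hash}, sort_keys=True)
    record = {
        "event": event,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }
    trail.append(record)
    return record


def verify_trail(trail):
    """Recompute the whole chain; False means some entry was altered."""
    prev_hash = "0" * 64
    for record in trail:
        payload = json.dumps(
            {"event": record["event"], "prev_hash": prev_hash}, sort_keys=True
        )
        if record["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        if record["prev_hash"] != prev_hash:
            return False
        prev_hash = record["hash"]
    return True


trail = []
append_entry(trail, {"action": "copilot_suggestion", "approved_by": "engineer_42"})
append_entry(trail, {"action": "design_change", "approved_by": "reviewer_7"})
```

Verification passes on the untouched trail; silently editing `approved_by` in an earlier entry makes `verify_trail` return `False`, which is what "compliance evidence independent of AI cooperation" looks like structurally.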
---

## What WSP Might Do (If Interested)

**Option 1: Pilot Study**

- Deploy the Tractatus governance layer for a subset of Copilot users (e.g., 100 engineers in a single office)
- 3-6 month evaluation of integration complexity, performance impact, audit trail quality
- Independent validation of architectural patterns in an enterprise context

**Option 2: Research Collaboration**

- WSP's AI governance team evaluates the framework against EU AI Act requirements
- Identify gaps between architectural patterns and regulatory obligations
- Use findings to inform WSP's own governance architecture (whether using Tractatus or not)

**Option 3: Information Only**

- Review the framework as a reference architecture
- Consider its patterns for WSP's internal governance system design
- No active collaboration, just awareness of the approach

---
## What We're NOT Offering

- ❌ Commercial product or support contract
- ❌ Guaranteed compliance with the EU AI Act
- ❌ Plug-and-play integration with Microsoft Copilot
- ❌ Claim that Tractatus solves all AI safety problems
- ❌ Security guarantees or liability coverage

This is a research framework (Apache 2.0 licence). If WSP finds value, it requires their engineering investment to adapt and deploy.

---
## Questions to Ask Shoshana

1. **Regulatory pressure:** Is the EU AI Act compliance timeline driving urgency for governance solutions?

2. **Current approach:** How is WSP planning to demonstrate human oversight at scale for the Copilot deployment?

3. **Audit requirements:** What documentation does WSP need to provide to regulators or clients about AI-assisted engineering decisions?

4. **Integration complexity:** What's WSP's tolerance for architectural changes vs. preference for policy-based controls?

5. **Validation needs:** If the architectural patterns seem promising, what would WSP need to see before considering a pilot deployment?

---
## Positioning for Family Introduction Context

This isn't a sales pitch. Leslie knows I'm building this framework and thought the architectural approach might be relevant to problems Shoshana faces at WSP.

**Tone:** Collegial, exploratory, honest about limitations. "Here's an architectural pattern we've been exploring. Don't know if it's relevant to your context, but happy to share what we've learned."

**Not:** "We have the solution to your AI governance problems." That's insulting to someone who actually does this work.

---
## Key Resources to Share

- **Leader page:** https://agenticgovernance.digital/leader.html (rebuilt for professionals, not marketing fluff)
- **Architecture diagram:** https://agenticgovernance.digital/architecture.html (shows runtime-agnostic design)
- **Technical docs:** https://agenticgovernance.digital/docs.html (if she wants implementation details)
- **Research foundations:** https://agenticgovernance.digital/researcher.html (organisational theory basis)

---
## What Success Looks Like

**Best case:** Shoshana sees architectural relevance; WSP proposes a pilot study or research collaboration.

**Realistic case:** Shoshana understands the approach, files it away as an "interesting pattern to consider," and may revisit it if Microsoft's governance tools prove insufficient.

**Still valuable case:** Shoshana provides feedback on gaps between the framework and real-world enterprise needs, informing further development.

**Failure case:** Shoshana dismisses it as impractical for an enterprise context. Learn from the feedback; acknowledge limitations honestly.

---
## Post-Meeting Follow-Up

If Shoshana expresses interest:

1. Send technical documentation specific to her questions
2. Offer to connect her team with the framework developers (if appropriate)
3. Propose structured evaluation criteria for pilot consideration
4. Set clear expectations about resource requirements and timeline

If not interested:

1. Thank her for her time and feedback
2. Ask if there are other organisations or contexts where this might be more relevant
3. Keep the door open for future contact if circumstances change

---
**Last Updated:** 2025-10-14

**Framework Version:** Early development (6-month proof-of-concept)

**Contact:** See https://agenticgovernance.digital/about.html