
TRA-OPS-0001: AI Content Generation Policy v1.0

Document ID: TRA-OPS-0001
Version: 1.0
Classification: OPERATIONAL
Status: DRAFT → ACTIVE (upon Phase 2 start)
Created: 2025-10-07
Owner: John Stroh
Review Cycle: Quarterly
Next Review: 2026-01-07


Purpose

This document establishes the operational policy governing all AI-assisted content generation on the Tractatus Framework website. It ensures that AI operations align with the Tractatus framework's core principle: "What cannot be systematized must not be automated."

Scope

This policy applies to all content generated or assisted by AI systems, including but not limited to:

  • Blog posts (topic suggestions, outlines, drafts)
  • Media inquiry responses (classification, prioritization, draft responses)
  • Case study analysis (relevance assessment, categorization)
  • Documentation summaries
  • Social media content (future)

Principles

1. Mandatory Human Approval

Principle: No AI-generated content shall be published, sent, or made public without explicit human approval.

Implementation:

  • All AI outputs routed through moderation queue
  • Two-person rule for sensitive content (admin + reviewer)
  • Audit trail: who approved, when, why
  • Rejection must include reason (for AI training)

Tractatus Mapping: TACTICAL quadrant (execution requires pre-approval)
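The approval rule above can be made machine-checkable. A minimal TypeScript sketch follows; the type and function names (`ModerationItem`, `approve`, `reject`) are illustrative, not the site's actual API. The one invariant it encodes is from this policy: a rejection without a reason is invalid.

```typescript
// Illustrative sketch only: a moderation decision that cannot be recorded
// without the fields this policy requires.
type Decision =
  | { kind: "approved"; reviewer: string; approvedAt: Date }
  | { kind: "rejected"; reviewer: string; reason: string; rejectedAt: Date };

interface ModerationItem {
  id: string;
  content: string;     // raw AI output awaiting review
  decision?: Decision; // absent until a human acts
}

function approve(item: ModerationItem, reviewer: string): ModerationItem {
  return { ...item, decision: { kind: "approved", reviewer, approvedAt: new Date() } };
}

function reject(item: ModerationItem, reviewer: string, reason: string): ModerationItem {
  // A rejection must carry a reason: the reason feeds AI training.
  if (!reason.trim()) throw new Error("Rejection requires a reason");
  return { ...item, decision: { kind: "rejected", reviewer, reason, rejectedAt: new Date() } };
}
```

Because the decision is a tagged union, downstream code cannot read a rejection reason off an approved item, and the audit trail fields (who, when, why) travel with the decision itself.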


2. Values Boundary Enforcement

Principle: AI systems must not make decisions involving values, ethics, or human agency.

Implementation:

  • BoundaryEnforcer.service validates all AI actions
  • Values decisions flagged for human review
  • AI may present options but not choose

Examples:

  • AI can suggest blog topics
  • AI cannot decide editorial policy
  • AI can classify inquiry priority
  • AI cannot decide whether to respond

Tractatus Mapping: STRATEGIC quadrant (values require human judgment per §12.1-12.7)
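A hypothetical sketch of a boundary check in the spirit of BoundaryEnforcer.service, assuming a static action-to-quadrant table (the table entries mirror the examples above but are otherwise invented). The safety property is that unknown actions default to the most restrictive bucket.

```typescript
// Quadrant names follow this policy; the classification table is illustrative.
type Quadrant = "STRATEGIC" | "TACTICAL" | "OPERATIONAL" | "SYSTEM";

const ACTION_QUADRANT: Record<string, Quadrant> = {
  "suggest-blog-topics": "TACTICAL",        // AI may present options
  "classify-inquiry-priority": "OPERATIONAL",
  "decide-editorial-policy": "STRATEGIC",   // values decision: humans only
  "decide-whether-to-respond": "STRATEGIC",
};

function checkBoundary(action: string): { allowed: boolean; reason: string } {
  // Unmapped actions fall into STRATEGIC, the safest default.
  const quadrant: Quadrant = ACTION_QUADRANT[action] ?? "STRATEGIC";
  if (quadrant === "STRATEGIC") {
    return { allowed: false, reason: `'${action}' is a values decision; route to human review` };
  }
  return { allowed: true, reason: `'${action}' is ${quadrant}; AI may proceed, pending approval` };
}
```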


3. Transparency & Attribution

Principle: Users must know when content is AI-assisted.

Implementation:

  • All AI-assisted content labeled "AI-Assisted, Human-Reviewed"
  • Disclosure in footer or metadata
  • Option to view human review notes (future)

Example Labels:

---
AI-Assisted: Claude Sonnet 4.5
Human Reviewer: John Stroh
Reviewed: 2025-10-15
Changes: Minor edits for tone
---
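Generating the disclosure block from structured metadata keeps labels consistent across posts. A sketch under the assumption that attribution is stored as a small record; field names are assumptions, not the site's schema.

```typescript
// Illustrative: render the attribution label shown in this policy from metadata.
interface Attribution {
  model: string;      // e.g. "Claude Sonnet 4.5"
  reviewer: string;
  reviewedOn: string; // ISO date
  changes: string;    // one-line summary of human edits
}

function renderLabel(a: Attribution): string {
  return [
    "---",
    `AI-Assisted: ${a.model}`,
    `Human Reviewer: ${a.reviewer}`,
    `Reviewed: ${a.reviewedOn}`,
    `Changes: ${a.changes}`,
    "---",
  ].join("\n");
}
```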

4. Quality & Accuracy Standards

Principle: AI-assisted content must meet the same quality standards as human-authored content.

Implementation:

  • Editorial guidelines (TRA-OPS-0002) apply to all content
  • Fact-checking required for claims
  • Citation validation (all sources verified by human)
  • Tone/voice consistency with brand

Rejection Criteria:

  • Factual errors
  • Unsupported claims
  • Inappropriate tone
  • Plagiarism or copyright violation
  • Hallucinated citations

5. Privacy & Data Protection

Principle: AI systems must not process personal data without consent.

Implementation:

  • No user data sent to Claude API without anonymization
  • Media inquiries: strip PII before AI analysis
  • Case submissions: explicit consent checkbox
  • Audit logs: no personal data retention

Compliance: GDPR-lite principles (even if not EU-based)
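The PII-stripping step before AI analysis can be sketched as a scrubbing pass. This is a minimal illustration only: regexes catch obvious emails and phone numbers but real anonymization needs named-entity handling beyond this; the function name and placeholders are invented.

```typescript
// Illustrative scrub applied to inquiry text before it is sent to the Claude API.
// NOT a complete anonymizer: it only demonstrates the workflow step.
function stripPII(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]")  // email addresses
    .replace(/\+?\d[\d\s().-]{7,}\d/g, "[PHONE]");   // phone-like digit runs
}
```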


6. Cost & Resource Management

Principle: AI usage must be cost-effective and sustainable.

Implementation:

  • Monthly budget cap: $200/month (see TRA-OPS-0005)
  • Rate limiting: 1000 requests/day max
  • Caching: 30-day TTL for identical queries
  • Monitoring: alert if >80% of budget used

Governance: Quarterly cost review, adjust limits as needed
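The limits above can be enforced by a single guard in front of every API call. The numbers below mirror this policy ($200/month, 1000 requests/day, alert at 80%); the class itself is a sketch, and real accounting would persist counters and reset them on day/month boundaries.

```typescript
// Illustrative usage guard; counters are in-memory for the sketch.
class UsageGuard {
  private requestsToday = 0;
  private spendThisMonth = 0;
  readonly dailyRequestCap = 1000;   // per TRA-OPS-0001
  readonly monthlyBudgetUsd = 200;   // per TRA-OPS-0005

  /** Returns null if the call may proceed, otherwise the reason it was blocked. */
  tryRequest(estimatedCostUsd: number): string | null {
    if (this.requestsToday >= this.dailyRequestCap) return "daily request cap reached";
    if (this.spendThisMonth + estimatedCostUsd > this.monthlyBudgetUsd) return "monthly budget exceeded";
    this.requestsToday += 1;
    this.spendThisMonth += estimatedCostUsd;
    return null;
  }

  /** True once 80% of the monthly budget is consumed (triggers an alert). */
  get shouldAlert(): boolean {
    return this.spendThisMonth >= 0.8 * this.monthlyBudgetUsd;
  }
}
```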


AI System Inventory

Approved AI Systems

| System | Provider | Model | Purpose | Status |
|---|---|---|---|---|
| Claude API | Anthropic | Sonnet 4.5 | Blog curation, media triage, case analysis | APPROVED |

Future Considerations

| System | Provider | Purpose | Status |
|---|---|---|---|
| GPT-4 | OpenAI | Fallback for Claude outages | EVALUATION |
| LLaMA 3 | Meta | Self-hosted alternative | RESEARCH |

Approval Process: Any new AI system requires:

  1. Technical evaluation (accuracy, cost, privacy)
  2. Governance review (Tractatus compliance)
  3. John Stroh approval
  4. 30-day pilot period

Operational Workflows

Blog Post Generation Workflow

graph TD
    A[News Feed Ingestion] --> B[AI Topic Suggestion]
    B --> C[Human Approval Queue]
    C -->|Approved| D[AI Outline Generation]
    C -->|Rejected| Z[End]
    D --> E[Human Review & Edit]
    E -->|Accept| F[Human Writes Draft]
    E -->|Reject| Z
    F --> G[Final Human Approval]
    G -->|Approved| H[Publish]
    G -->|Rejected| Z

Key Decision Points:

  1. Topic Approval: Human decides if topic is valuable (STRATEGIC)
  2. Outline Review: Human edits for accuracy/tone (OPERATIONAL)
  3. Draft Approval: Human decides to publish (STRATEGIC)

Media Inquiry Workflow

graph TD
    A[Inquiry Received] --> B[Strip PII]
    B --> C[AI Classification]
    C --> D[AI Priority Scoring]
    D --> E[AI Draft Response]
    E --> F[Human Review Queue]
    F -->|Approve & Send| G[Send Response]
    F -->|Edit & Send| H[Human Edits]
    F -->|Reject| Z[End]
    H --> G

Key Decision Points:

  1. Classification Review: Human verifies AI categorization (OPERATIONAL)
  2. Send Decision: Human decides whether to respond (STRATEGIC)

Case Study Workflow

graph TD
    A[Community Submission] --> B[Consent Check]
    B -->|No Consent| Z[Reject]
    B -->|Consent| C[AI Relevance Analysis]
    C --> D[AI Tractatus Mapping]
    D --> E[Human Moderation Queue]
    E -->|Approve| F[Publish to Portal]
    E -->|Request Edits| G[Contact Submitter]
    E -->|Reject| H[Notify with Reason]

Key Decision Points:

  1. Consent Validation: Automated check (SYSTEM)
  2. Relevance Assessment: Human verifies AI analysis (OPERATIONAL)
  3. Publication Decision: Human decides to publish (STRATEGIC)

Human Oversight Requirements

Minimum Oversight Levels

| Content Type | Minimum Reviewers | Review SLA | Escalation |
|---|---|---|---|
| Blog Posts | 1 (admin) | 48 hours | N/A |
| Media Inquiries (High Priority) | 1 (admin) | 4 hours | John Stroh |
| Media Inquiries (Low Priority) | 1 (admin) | 7 days | N/A |
| Case Studies | 1 (admin) | 7 days | N/A |
| Documentation Changes | 1 (admin) | 14 days | John Stroh |
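A simple breach check against the SLA table can drive escalation alerts. The hour values below come from the table; the content-type keys and function name are assumptions for the sketch.

```typescript
// Illustrative SLA lookup; keys are invented identifiers for the content types above.
const REVIEW_SLA_HOURS: Record<string, number> = {
  "blog-post": 48,
  "media-inquiry-high": 4,
  "media-inquiry-low": 7 * 24,
  "case-study": 7 * 24,
  "documentation-change": 14 * 24,
};

function slaBreached(contentType: string, hoursWaiting: number): boolean {
  const limit: number | undefined = REVIEW_SLA_HOURS[contentType];
  if (limit === undefined) throw new Error(`Unknown content type: ${contentType}`);
  return hoursWaiting > limit;
}
```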

Reviewer Qualifications

Admin Reviewer (minimum requirements):

  • Understands Tractatus framework
  • Technical background (AI/ML familiarity)
  • Editorial experience (writing, fact-checking)
  • Authorized by John Stroh

Future: Multiple reviewer roles (technical, editorial, legal)


Audit & Compliance

Audit Trail Requirements

All AI-assisted content must log:

  • Input: What was sent to AI (prompt + context)
  • Output: Raw AI response (unedited)
  • Review: Human changes (diff)
  • Decision: Approve/reject + reason
  • Metadata: Reviewer, timestamp, model version

Retention: 2 years minimum
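The five bullets above map directly onto one record per AI interaction. The shape below is an assumption (this policy does not prescribe a schema); what matters is that the raw output is stored unedited alongside the human diff and decision.

```typescript
// Illustrative audit record; field names are assumptions, not the production schema.
interface AuditEntry {
  input: { prompt: string; context: string }; // exactly what was sent to the AI
  output: string;                             // raw AI response, unedited
  reviewDiff: string;                         // human changes, as a diff
  decision: { approved: boolean; reason: string };
  metadata: { reviewer: string; timestamp: string; modelVersion: string };
}

function makeAuditEntry(
  prompt: string, context: string, output: string,
  reviewDiff: string, approved: boolean, reason: string,
  reviewer: string, modelVersion: string,
): AuditEntry {
  return {
    input: { prompt, context },
    output,
    reviewDiff,
    decision: { approved, reason },
    metadata: { reviewer, timestamp: new Date().toISOString(), modelVersion },
  };
}
```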

Compliance Monitoring

Monthly Review:

  • AI approval rate (target: 70-90%)
  • Rejection reasons (categorized)
  • Cost vs. budget
  • SLA compliance

Quarterly Review:

  • Policy effectiveness
  • User feedback on AI content quality
  • Boundary violations (should be 0)
  • Cost-benefit analysis

Annual Review:

  • Full policy revision
  • AI system evaluation
  • Governance alignment audit

Error Handling & Incidents

AI System Failures

Scenario: Claude API unavailable

Response:

  1. Graceful degradation: disable AI features
  2. Manual workflows: admins handle all tasks
  3. User notification: "AI features temporarily unavailable"
  4. Post-mortem: document incident, adjust SLAs
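Steps 1–3 above can be collapsed into a fallback wrapper around every AI call: on failure, the task routes to the manual workflow and the user sees the notice. The wrapper below is a sketch; names and the synchronous shape are illustrative.

```typescript
// Illustrative graceful-degradation wrapper for AI-backed features.
type AiResult =
  | { source: "ai"; text: string }
  | { source: "manual"; notice: string }; // task falls back to admin workflow

function withFallback(aiCall: () => string): AiResult {
  try {
    return { source: "ai", text: aiCall() };
  } catch {
    // API unavailable: disable the AI path and surface the user notice.
    return { source: "manual", notice: "AI features temporarily unavailable" };
  }
}
```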

Content Quality Issues

Scenario: AI-generated content contains factual error

Response:

  1. Immediate retraction/correction (if published)
  2. Root cause analysis: prompt issue, AI hallucination, review failure?
  3. Process update: improve review checklist
  4. Reviewer training: flag similar errors

Boundary Violations

Scenario: AI makes values decision without human approval

Response:

  1. CRITICAL INCIDENT: Escalate to John Stroh immediately
  2. Rollback: revert to manual workflow
  3. Investigation: How did BoundaryEnforcer fail?
  4. System audit: Test all boundary checks
  5. Policy review: Update TRA-OPS-0001

Tractatus Mandate: Zero tolerance for boundary violations


Revision & Amendment Process

Minor Revisions (v1.0 → v1.1)

  • Typos, clarifications, formatting
  • Approval: Admin reviewer
  • Notification: Email to stakeholders

Major Revisions (v1.0 → v2.0)

  • Policy changes, new workflows, scope expansion
  • Approval: John Stroh
  • Review: 30-day comment period
  • Notification: Blog post announcement

Emergency Amendments

  • Security/privacy issues requiring immediate change
  • Approval: John Stroh (verbal, documented within 24h)
  • Review: Retrospective within 7 days

Related Documents

Strategic:

  • STR-VAL-0001: Core Values & Principles (source: sydigital)
  • STR-GOV-0001: Strategic Review Protocol (source: sydigital)
  • STR-GOV-0002: Values Alignment Framework (source: sydigital)

Operational (Tractatus-specific):

  • TRA-OPS-0002: Blog Editorial Guidelines
  • TRA-OPS-0003: Media Inquiry Response Protocol
  • TRA-OPS-0004: Case Study Moderation Standards
  • TRA-OPS-0005: Human Oversight Requirements

Technical:

  • API Documentation: /docs/api-reference.html
  • Tractatus Framework Specification: /docs/technical-proposal.md

Glossary

AI-Assisted Content: Content where AI contributed to generation (topic, outline, draft) but human made final decisions and edits.

Boundary Violation: AI system making a decision in STRATEGIC quadrant (values, ethics, policy) without human approval.

Human Approval: Explicit action by authorized reviewer to publish/send AI-assisted content.

Moderation Queue: System where AI outputs await human review before publication.

Values Decision: Any decision involving ethics, privacy, user agency, editorial policy, or mission alignment.


Approval

| Role | Name | Signature | Date |
|---|---|---|---|
| Policy Owner | John Stroh | [Pending] | [TBD] |
| Technical Reviewer | Claude Code | [Pending] | 2025-10-07 |
| Final Approval | John Stroh | [Pending] | [TBD] |

Status: DRAFT (awaiting John Stroh approval to activate)
Effective Date: Upon Phase 2 deployment
Next Review: 2026-01-07 (3 months post-activation)