Agent Lightning Integration Guide
Tractatus + Agent Lightning: Complementary Frameworks for Safe, High-Performing AI
Executive Summary
Agent Lightning (Microsoft Research) and Tractatus (Governance Framework) are complementary, not competitive:
- Agent Lightning: Optimizes HOW to do tasks (Performance Layer)
- Tractatus: Governs WHETHER tasks should be done (Values Layer)
Together, they create AI systems that are both:
- ✓ High-performing (optimized by RL)
- ✓ Values-aligned (governed by Tractatus)
Core Thesis
```
┌──────────────────────────────────────────────────────────┐
│ QUESTION:  Should this decision be made at all?          │
│ FRAMEWORK: Tractatus (Governance)                        │
│ FOCUS:     Values alignment, human agency, pluralism     │
└──────────────────────────────────────────────────────────┘
                          ↓
                   [Approved Task]
                          ↓
┌──────────────────────────────────────────────────────────┐
│ QUESTION:  How to do this task better?                   │
│ FRAMEWORK: Agent Lightning (Performance)                 │
│ FOCUS:     Task success, efficiency, optimization        │
└──────────────────────────────────────────────────────────┘
```
Key Insight: High performance without governance can produce values-misaligned behavior; governance without performance produces inefficiency. Safe, useful systems need both layers.
Why This Matters
The Problem
AI agents optimized purely for task success often:
- Learn unintended behaviors (clickbait for engagement)
- Violate editorial guidelines
- Ignore stakeholder values
- Optimize local metrics at global cost
The Solution
Two-Layer Architecture:
- Governance Layer (Tractatus): Enforces boundaries, requires human approval for values decisions
- Performance Layer (Agent Lightning): Optimizes approved tasks using RL
Architecture
Layer 1: Tractatus Governance
```python
from tractatus import BoundaryEnforcer, PluralisticDeliberator

# Initialize governance
enforcer = BoundaryEnforcer()
deliberator = PluralisticDeliberator()

# Check whether the task requires governance
if enforcer.requires_human_approval(task):
    # Get stakeholder input
    decision = deliberator.deliberate(
        task=task,
        stakeholders=["editor", "user_rep", "safety"],
    )
    if not decision.approved:
        raise PermissionError("Task blocked by governance")
    constraints = decision.constraints
else:
    constraints = None
```
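The shape of the `decision` object is not pinned down above; a minimal sketch of what the deliberator might return (the `Decision` dataclass and its fields are illustrative assumptions, not the actual Tractatus API):

```python
from dataclasses import dataclass, field

# Illustrative only: the real Tractatus deliberation result may differ.
@dataclass
class Decision:
    approved: bool                                    # did stakeholders approve the task?
    reason: str = ""                                  # rationale, used when blocked
    constraints: dict = field(default_factory=dict)   # bounds passed to the optimizer

blocked = Decision(approved=False, reason="violates editorial guidelines")
allowed = Decision(approved=True, constraints={"no_clickbait": True})
```

Whatever the concrete type, the contract the integration relies on is just these three fields: an approval flag, a reason for audit trails, and a constraints object handed to the performance layer.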
Layer 2: Agent Lightning Performance
```python
from agentlightning import AgentLightningClient

# Initialize AL
al_client = AgentLightningClient()

# Optimize with constraints from Tractatus
result = al_client.optimize(
    task=task,
    constraints=constraints,  # ← Tractatus constraints
)
```
Integration Pattern
```python
def execute_governed_task(task, stakeholders):
    # Step 1: Governance check
    if requires_governance(task):
        decision = get_stakeholder_approval(task, stakeholders)
        if not decision.approved:
            return {"blocked": True, "reason": decision.reason}
        constraints = decision.constraints
    else:
        constraints = None

    # Step 2: Optimize within constraints
    optimized_result = al_optimize(task, constraints)

    # Step 3: Validate execution
    for step in optimized_result.steps:
        if violates_constraints(step, constraints):
            halt_execution()

    return optimized_result
```
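With stub implementations of the helpers (every name below is a placeholder, not a real Tractatus or Agent Lightning call), the govern-then-optimize pattern can be exercised end to end:

```python
from types import SimpleNamespace

# Stub governance helpers (placeholders for Tractatus calls).
def requires_governance(task):
    return "engagement" in task          # assume engagement-related tasks need review

def get_stakeholder_approval(task, stakeholders):
    return SimpleNamespace(approved=True, reason="",
                           constraints={"no_clickbait": True})

# Stub optimizer (placeholder for Agent Lightning).
def al_optimize(task, constraints):
    strategy = "quality content" if constraints else "clickbait"
    return SimpleNamespace(steps=[strategy], strategy=strategy)

def violates_constraints(step, constraints):
    return bool(constraints) and step == "clickbait"

def execute_governed_task(task, stakeholders):
    # Step 1: governance check
    constraints = None
    if requires_governance(task):
        decision = get_stakeholder_approval(task, stakeholders)
        if not decision.approved:
            return {"blocked": True, "reason": decision.reason}
        constraints = decision.constraints
    # Step 2: optimize within constraints
    result = al_optimize(task, constraints)
    # Step 3: validate each step against the constraints
    for step in result.steps:
        if violates_constraints(step, constraints):
            raise RuntimeError("constraint violation during execution")
    return result

print(execute_governed_task("maximize engagement", ["editor"]).strategy)  # → quality content
```

A governed task comes back with a constraint-respecting strategy, while an unreviewed task falls through to the optimizer's default behavior, which is exactly the Demo 1 vs. Demo 2 contrast below.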
Demonstrations
Demo 1: Basic Optimization (AL Standalone)
Location: ~/projects/tractatus/demos/agent-lightning-integration/demo1-basic-optimization/
Purpose: Show AL optimization without governance (baseline)
Run:
```bash
cd demo1-basic-optimization/
python task_optimizer.py
```
Expected Output:
- Engagement: 94%
- Strategy: Clickbait (learned for engagement)
- ⚠️ No governance checks performed
Learning: Performance ≠ Alignment
Demo 2: Governed Agent ⭐ (AL + Tractatus)
Location: ~/projects/tractatus/demos/agent-lightning-integration/demo2-governed-agent/
Purpose: Show Tractatus governing AL-optimized agents (KILLER DEMO)
Run:
```bash
cd demo2-governed-agent/
python governed_agent.py
```
Expected Output:
- Engagement: 89%
- Strategy: Quality content (governed)
- ✓ All governance checks passed
- ✓ Stakeholder input incorporated
Learning: Small performance cost (-5%) for large values gain (governance)
Demo 3: Full-Stack Production
Location: ~/projects/tractatus/demos/agent-lightning-integration/demo3-full-stack/
Purpose: Production-ready architecture with observability
Features:
- Prometheus metrics
- OpenTelemetry tracing
- Grafana dashboards
- Error recovery
- Health checks
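As one concrete example of the features above, a health-check endpoint can be served with nothing but the standard library (this handler is an illustrative sketch; Demo 3's actual server code may differ):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# Illustrative /health endpoint; Demo 3 may implement this differently.
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence default per-request logging; a real deployment would
        # forward request data to the tracing pipeline instead.
        pass
```

Serving it is then `HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()`; the Prometheus and OpenTelemetry instrumentation listed above would wrap this same request path.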
Use Cases
1. Family History AI Features
Implementation: ~/projects/family-history/ai-services/
Services:
- Natural Language Search (port 5001)
- Story Writing Assistant (port 5002)
- Family Q&A Agent (port 5003)
Integration:
```javascript
// Node.js client
const AIServices = require('./ai/AIServicesClient');
const ai = new AIServices();

// Natural language search
const results = await ai.search({
  query: "stories about grandma",
  tenantId: req.tenantId,
  userId: req.user._id
});
```
Privacy: All data stays in user's space (GDPR compliant)
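On the service side, tenant isolation is the property behind that privacy claim: every query is scoped to the caller's tenant before any matching happens. A minimal sketch of that scoping (the in-memory store and function names are illustrative; the real services query a per-tenant database):

```python
# Illustrative in-memory store standing in for a per-tenant database.
STORIES = [
    {"tenant": "t1", "owner": "u1", "text": "Grandma's wartime letters"},
    {"tenant": "t1", "owner": "u2", "text": "Summer at the lake house"},
    {"tenant": "t2", "owner": "u9", "text": "Grandma's recipe book"},
]

def search(query: str, tenant_id: str):
    # Scope to the tenant FIRST, then match: records never cross tenant
    # boundaries, which is the isolation property behind GDPR compliance.
    scoped = [s for s in STORIES if s["tenant"] == tenant_id]
    terms = query.lower().split()
    return [s for s in scoped if any(t in s["text"].lower() for t in terms)]

print([s["text"] for s in search("stories about grandma", "t1")])
# → ["Grandma's wartime letters"]
```

The same query issued under a different `tenantId` sees only that tenant's records, so the Node.js client never needs to filter results itself.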
Installation
Prerequisites
```bash
# Python 3.12+
python3 --version

# Node.js 22+ (for Family History integration)
node --version
```
Install Agent Lightning
```bash
cd ~/projects
git clone https://github.com/microsoft/agent-lightning.git
cd agent-lightning
python3 -m venv venv
source venv/bin/activate
pip install -e .
```
Verify Installation
```bash
cd ~/projects/agent-lightning
python test_installation.py
```
Integration Steps
Step 1: Clone Demos
```bash
cd ~/projects/tractatus
ls demos/agent-lightning-integration/
# demo1-basic-optimization/
# demo2-governed-agent/
# demo3-full-stack/
```
Step 2: Run Demos
```bash
# Demo 1 (baseline)
cd demo1-basic-optimization/
python task_optimizer.py

# Demo 2 (governed) ⭐
cd ../demo2-governed-agent/
python governed_agent.py
```
Step 3: Integrate into Your Project
```python
# your_project/governed_agent.py
from tractatus import BoundaryEnforcer
from agentlightning import AgentLightningClient

class YourGovernedAgent:
    def __init__(self):
        self.governance = BoundaryEnforcer()
        self.performance = AgentLightningClient()

    def execute(self, task):
        # Governance first
        if self.governance.requires_approval(task):
            decision = get_stakeholder_input(task)
            if not decision.approved:
                return "blocked"
        # Performance second
        return self.performance.optimize(task)
```
Strategic Positioning
Three-Channel Strategy
Channel A: Discord Community
- Share Demo 2 in Agent Lightning Discord
- Show governance as complementary feature
- Propose integration patterns
Channel B: Academic Publication
- Paper: "Complementary Layers in AI Systems"
- Target: NeurIPS workshop, IEEE, AAAI
- Evidence: Demos 1-3 with empirical results
Channel C: Public Demonstrations
- Host at agenticgovernance.digital
- Interactive demos showing governance intervention
- Educational content for AI safety community
Performance vs. Governance
Demo 1 vs. Demo 2 Comparison
| Metric | Demo 1 (Ungoverned) | Demo 2 (Governed) | Delta |
|---|---|---|---|
| **Performance** | | | |
| Engagement | 94% | 89% | -5% |
| Training Time | 2.3 s | 3.1 s | +0.8 s |
| Task Success | 100% | 100% | 0% |
| **Governance** | | | |
| Values Alignment | ✗ | ✓ | +100% |
| Stakeholder Input | ✗ | ✓ | +100% |
| Harm Assessment | ✗ | ✓ | +100% |
| Human Agency | ✗ | ✓ | +100% |
Conclusion: 5% performance cost for complete governance coverage demonstrates complementarity, not competition.
Documentation Structure
```
tractatus/
├── demos/agent-lightning-integration/
│   ├── README.md                    # Overview
│   ├── demo1-basic-optimization/    # Baseline
│   ├── demo2-governed-agent/        # ⭐ Killer demo
│   └── demo3-full-stack/            # Production
│
├── docs/integrations/
│   └── agent-lightning.md           # This document
│
└── research/papers/
    └── complementary-layers.md      # Academic paper (TBD)

family-history/
├── ai-services/                     # Python microservices
│   ├── nl-search/                   # Natural language search
│   ├── story-assistant/             # Writing assistant
│   └── qa-agent/                    # Q&A agent
│
└── src/ai/
    ├── AIServicesClient.js          # Node.js client
    └── README.md                    # Integration guide

platform-admin/
└── public/dashboards/
    └── documentation-hub.html       # Unified docs (updated)
```
Next Steps
Immediate (Week 1-2)
- Run all 3 demos
- Test Agent Lightning installation
- Review integration patterns
- Share Demo 2 output with team
Short-Term (Week 3-6)
- Implement first Family History AI service
- Set up monitoring (Prometheus + Grafana)
- Write Discord community post
- Draft academic paper outline
Long-Term (Months 2-6)
- Submit academic paper
- Build public demonstrations
- Scale Family History AI features
- Community engagement metrics
References
Agent Lightning
- Repository: https://github.com/microsoft/agent-lightning
- Documentation: https://microsoft.github.io/agent-lightning/
- Paper: Agent Lightning: RL-based Agent Optimization (Microsoft Research, 2025)
- Version: 0.2.2 (MIT License)
Tractatus Framework
- Repository: ~/projects/tractatus
- Documentation: docs/
- Website: https://agenticgovernance.digital
- License: Apache 2.0
Strategic Analysis
- Document: ~/projects/family-history/docs/AGENT_LIGHTNING_STRATEGIC_ANALYSIS.md
- Length: 1000+ lines
- Sections: Research, challenges, mitigation, phased plan
Contact
- Author: John Stroh
- Email: john.stroh.nz@pm.me
- Purpose: Preserve human agency over values decisions with plural values context
- Discord: Ready to engage Agent Lightning community
Last Updated: November 2, 2025
Agent Lightning Version: 0.2.2
Tractatus Version: v3.0.2
Status: ✅ Ready for Community Engagement