- Full WCAG accessibility: ARIA attributes (`aria-expanded`, `aria-controls`), keyboard navigation (Enter/Space)
- Reframed research context: Berlin/Weil as primary intellectual foundation (moral pluralism, categorical imperative)
- Bibliography with proper academic citations: Weil (The Need for Roots, Gravity and Grace), Berlin (Four Essays on Liberty)
- Fixed footer i18n: Implemented a recursive `deepMerge()` to preserve nested translation objects
- Root cause: The shallow merge `{...obj1, ...obj2}` was overwriting the entire footer object from `common.json`
- Consolidated all footer translations in `common.json`, removed them from page-specific files
- Mobile optimization: 44px/48px touch targets, `touch-action: manipulation`, responsive design
- Progressive enhancement: `<noscript>` fallback for users with JavaScript disabled
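The footer i18n fix above can be sketched as follows. This is a hypothetical TypeScript reconstruction, not the project's actual code, assuming translation files are plain nested objects of strings (no arrays, nulls, or class instances):

```typescript
// Hypothetical sketch of the recursive deepMerge() described above.
// Assumes values are either strings or plain nested objects.
type Translations = { [key: string]: string | Translations };

function deepMerge(base: Translations, override: Translations): Translations {
  const result: Translations = { ...base };
  for (const key of Object.keys(override)) {
    const baseVal = result[key];
    const overVal = override[key];
    if (typeof baseVal === "object" && typeof overVal === "object") {
      // Both sides are objects: merge recursively instead of replacing.
      result[key] = deepMerge(baseVal, overVal);
    } else {
      // Strings, and keys missing from base: the override wins.
      result[key] = overVal;
    }
  }
  return result;
}

// Why the shallow merge lost translations (illustrative keys only):
const common: Translations = { footer: { links: "Links", legal: "Legal" } };
const page: Translations = { footer: { legal: "Imprint" } };

const shallow = { ...common, ...page }; // footer becomes { legal: "Imprint" }, "links" is lost
const deep = deepMerge(common, page);   // footer keeps "links" and takes the override for "legal"
```

With this in place, `common.json` can hold the canonical footer strings and page-specific files only need to override individual keys.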
- Version 1.3.0
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
52 lines · 3.5 KiB · JSON
{
  "page": {
    "title": "For Researchers | Tractatus AI Safety Framework",
    "description": "Research foundations, empirical observations, and theoretical basis for architectural approaches to AI governance. Early-stage framework exploring structural constraints on LLM systems."
  },
  "header": {
    "badge": "Research Framework • Empirical Observations",
    "title": "Research Foundations & Empirical Observations",
    "subtitle": "Tractatus explores architectural approaches to AI governance through empirical observation of failure modes and the application of organisational theory. This page documents the research foundations, observed patterns, and theoretical basis for the framework."
  },
  "sections": {
    "research_context": {
      "heading": "Research Context & Scope",
      "development_note": "Development Context",
      "development_text": "Tractatus was developed over six months (April–October 2025) in progressive stages that evolved into a live demonstration of its capabilities in a single-project context (https://agenticgovernance.digital). Observations derive from direct engagement with Claude Code (Anthropic's Sonnet 4.5 model) across approximately 500 development sessions. This is exploratory research, not a controlled study."
    },
    "theoretical_foundations": {
      "heading": "Theoretical Foundations",
      "org_theory_title": "Organisational Theory Basis",
      "values_pluralism_title": "Values Pluralism & Moral Philosophy"
    },
    "empirical_observations": {
      "heading": "Empirical Observations: Documented Failure Modes",
      "intro": "Three failure patterns were observed repeatedly during framework development. These are not hypothetical scenarios—they are documented incidents that occurred during this project's development.",
      "failure_1_title": "Pattern Recognition Bias Override (The 27027 Incident)",
      "failure_2_title": "Gradual Values Drift Under Context Pressure",
      "failure_3_title": "Silent Quality Degradation at High Context Pressure",
      "research_note": "These patterns emerged from direct observation, not hypothesis testing. We don't claim they're universal to all LLM systems or deployment contexts. They represent the empirical basis for the framework's design decisions—problems we actually encountered, and architectural interventions that actually worked in this specific context."
    },
    "architecture": {
      "heading": "Six-Component Architecture",
      "services_title": "Framework Services & Functions",
      "principle": "Services operate external to the AI runtime with autonomous triggering. The AI doesn't decide \"should I check governance rules?\"—the architecture enforces checking by default. This addresses the voluntary compliance problem inherent in prompt-based governance."
    },
    "demos": {
      "heading": "Interactive Demonstrations",
      "classification_title": "Instruction Classification",
      "classification_desc": "Explore how instructions are classified across quadrants with persistence levels and temporal scope.",
      "incident_title": "27027 Incident Timeline",
      "incident_desc": "Step through the pattern recognition bias failure and the architectural intervention that prevented it.",
      "boundary_title": "Boundary Evaluation",
      "boundary_desc": "Test decisions against boundary enforcement to see which require human judgment vs. AI autonomy."
    },
    "resources": {
      "heading": "Research Documentation"
    },
    "limitations": {
      "heading": "Limitations & Future Research Directions",
      "title": "Known Limitations & Research Gaps"
    }
  }
}