Case Submission Portal (Admin Moderation Queue):
- Add statistics endpoint (GET /api/cases/submissions/stats)
- Enhance filtering: status, failure_mode, AI relevance score
- Add sorting options: date, relevance, completeness
- Create admin moderation interface (case-moderation.html)
- Implement CSP-compliant admin UI (no inline event handlers)
- Deploy moderation actions: approve, reject, request-info
- Fix API parameter mapping for different action types

Internationalization (i18n):
- Implement lightweight i18n system (i18n-simple.js, ~5KB)
- Add language selector component with flag emojis
- Create German and French translations for homepage
- Document Te Reo Māori translation requirements
- Add i18n attributes to homepage
- Integrate language selector into navbar

Bug Fixes:
- Fix search button modal display on docs.html (remove conflicting flex class)

Page Enhancements:
- Add dedicated JS modules for researcher, leader, koha pages
- Improve page-specific functionality and interactions

Documentation:
- Add I18N_IMPLEMENTATION_SUMMARY.md (implementation guide)
- Add TE_REO_MAORI_TRANSLATION_REQUIREMENTS.md (cultural sensitivity guide)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
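The commit introduces a lightweight i18n system (i18n-simple.js) that resolves keys from a nested translation object like the JSON file below. A minimal sketch of how such a dotted-key lookup might work — the function name `t`, the fallback-to-key behaviour, and the signature are assumptions for illustration, not taken from the actual i18n-simple.js implementation:

```javascript
// Hypothetical sketch of an i18n-simple.js-style lookup: resolve a dotted
// key such as "hero.title" against a nested translation object, falling
// back gracefully when any segment is missing.
function t(translations, key, fallback) {
  // Walk the object one path segment at a time; a missing segment
  // yields undefined instead of throwing.
  const value = key.split('.').reduce(
    (node, part) => (node && typeof node === 'object' ? node[part] : undefined),
    translations
  );
  // Untranslated keys degrade to the fallback, or to the key itself,
  // so the UI never renders "undefined".
  return typeof value === 'string' ? value : (fallback ?? key);
}

const en = { hero: { title: 'Tractatus AI Safety Framework' } };
t(en, 'hero.title');   // "Tractatus AI Safety Framework"
t(en, 'hero.missing'); // no translation: falls back to "hero.missing"
```

Falling back to the key itself (rather than an empty string) is a common choice for locale files like the German and French ones mentioned above, since partially translated pages then show readable English-ish keys instead of blanks.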
106 lines
5.2 KiB
JSON
{
  "hero": {
    "title": "Tractatus AI Safety Framework",
    "subtitle": "Structural constraints that require AI systems to preserve human agency for values decisions—tested on Claude Code",
    "cta_architecture": "System Architecture",
    "cta_docs": "Read Documentation",
    "cta_faq": "FAQ"
  },
  "value_prop": {
    "heading": "A Starting Point",
    "text": "Instead of hoping AI systems \"behave correctly,\" we propose structural constraints where certain decision types require human judgment. These architectural boundaries can adapt to individual, organizational, and societal norms—creating a foundation for bounded AI operation that may scale more safely with capability growth."
  },
  "paths": {
    "intro": "We recognize this is one small step in addressing AI safety challenges. Explore the framework through the lens that resonates with your work.",
    "researcher": {
      "title": "Researcher",
      "subtitle": "Academic & technical depth",
      "tooltip": "For AI safety researchers, academics, and scientists investigating LLM failure modes and governance architectures",
      "description": "Explore the theoretical foundations, architectural constraints, and scholarly context of the Tractatus framework.",
      "features": [
        "Technical specifications & proofs",
        "Academic research review",
        "Failure mode analysis",
        "Mathematical foundations"
      ],
      "cta": "Explore Research"
    },
    "implementer": {
      "title": "Implementer",
      "subtitle": "Code & integration guides",
      "tooltip": "For software engineers, ML engineers, and technical teams building production AI systems",
      "description": "Get hands-on with implementation guides, API documentation, and reference code examples.",
      "features": [
        "Working code examples",
        "API integration patterns",
        "Service architecture diagrams",
        "Deployment best practices"
      ],
      "cta": "View Implementation Guide"
    },
    "leader": {
      "title": "Leader",
      "subtitle": "Strategic AI Safety",
      "tooltip": "For AI executives, research directors, startup founders, and strategic decision makers setting AI safety policy",
      "description": "Navigate the business case, compliance requirements, and competitive advantages of structural AI safety.",
      "features": [
        "Executive briefing & business case",
        "Risk management & compliance (EU AI Act)",
        "Implementation roadmap & ROI",
        "Competitive advantage analysis"
      ],
      "cta": "View Leadership Resources"
    }
  },
  "capabilities": {
    "heading": "Framework Capabilities",
    "items": [
      {
        "title": "Instruction Classification",
        "description": "Quadrant-based classification (STR/OPS/TAC/SYS/STO) with time-persistence metadata tagging"
      },
      {
        "title": "Cross-Reference Validation",
        "description": "Validates AI actions against explicit user instructions to prevent pattern-based overrides"
      },
      {
        "title": "Boundary Enforcement",
        "description": "Implements Tractatus 12.1-12.7 boundaries - values decisions architecturally require humans"
      },
      {
        "title": "Pressure Monitoring",
        "description": "Detects degraded operating conditions (token pressure, errors, complexity) and adjusts verification"
      },
      {
        "title": "Metacognitive Verification",
        "description": "AI self-checks alignment, coherence, safety before execution - structural pause-and-verify"
      },
      {
        "title": "Pluralistic Deliberation",
        "description": "Multi-stakeholder values deliberation without hierarchy - facilitates human decision-making for incommensurable values"
      }
    ]
  },
  "validation": {
    "heading": "Real-World Validation",
    "subtitle": "Framework validated in 6-month deployment across ~500 sessions with Claude Code",
    "case_27027": {
      "badge": "Pattern Bias Incident",
      "type": "Interactive Demo",
      "title": "The 27027 Incident",
      "description": "Real production incident where Claude Code defaulted to port 27017 (training pattern) despite explicit user instruction to use port 27027. CrossReferenceValidator detected the conflict and blocked execution—demonstrating how pattern recognition can override instructions under context pressure.",
      "why_matters": "Why this matters: This failure mode gets worse as models improve—stronger pattern recognition means stronger override tendency. Architectural constraints remain necessary regardless of capability level.",
      "cta": "View Interactive Demo"
    },
    "resources": {
      "text": "Additional case studies and research findings documented in technical papers",
      "cta": "Browse Case Studies →"
    }
  },
  "footer": {
    "description": "Reference implementation of architectural AI safety constraints—structural governance validated in single-project deployment.",
    "tagline": "Safety Through Structure, Not Aspiration",
    "built_with": "Built with",
    "acknowledgment": "This framework acknowledges Te Tiriti o Waitangi and indigenous leadership in digital sovereignty. Built with respect for CARE Principles and Māori data sovereignty."
  }
}