tractatus/public/locales/en/homepage.json
TheFlow 23e254a965 feat(messaging): revise homepage value proposition for strategic impact
SUMMARY:
Completely rewrote "A Starting Point" section to emphasize the magnitude
of the AI safety challenge and position Tractatus as a potential turning
point. Removed self-deprecating "We recognize this is one small step..."
paragraph and absorbed its invitation into the value proposition.

STRATEGIC CHANGES:

1. New Opening Paragraph:
   - "Aligning advanced AI with human values is among the most
     consequential challenges we face"
   - Names antagonist: "big tech momentum"
   - Frames as "categorical imperative" (echoing Kant, fitting for Tractatus)
   - Stakes: "preserve human agency or risk ceding control entirely"

2. Core Value Proposition (maintained):
   - "Instead of hoping AI systems 'behave correctly'..."
   - Structural constraints requiring human judgment
   - Architectural boundaries adapting to norms

3. Turning Point Positioning (new):
   - "If this approach can work at scale, Tractatus may represent
     a turning point"
   - "AI enhances human capability without compromising human sovereignty"
   - Absorbed invitation: "Explore the framework through the lens
     that resonates with your work"

4. Removed Section:
   - Deleted "We recognize this is one small step..." paragraph
   - Reduced padding above Three Audience Paths (py-16 → pt-4 pb-16)

TRANSLATIONS:
Updated all 3 language versions (en/de/fr) with equivalent messaging:
- English: "categorical imperative" / "turning point"
- German: "kategorischen Imperativ" / "Wendepunkt"
- French: "impératif catégorique" / "tournant"
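The three locale files share one key structure; as a sketch of how the right file might be selected per language (the en/de/fr codes and the `public/locales` layout come from this commit, while the helper function and English fallback are assumptions about the setup):

```javascript
// Illustrative helper: map a language code to its homepage.json path.
// The en/de/fr codes and the public/locales layout come from this commit;
// the helper itself and the English fallback are assumptions.
const SUPPORTED_LANGS = ["en", "de", "fr"];

function homepageLocalePath(lang) {
  // Unsupported codes fall back to the English strings.
  const code = SUPPORTED_LANGS.includes(lang) ? lang : "en";
  return `public/locales/${code}/homepage.json`;
}

console.log(homepageLocalePath("de")); // "public/locales/de/homepage.json"
```

Keeping the fallback in one place means a missing translation degrades to English rather than to a blank string.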

IMPACT:
The homepage is the first chance to capture visitor interest and
summarize the core argument: we must explore Tractatus to break out of
current big tech momentum. The new messaging is urgent, not self-deprecating.

FILES MODIFIED:
- public/index.html (removed intro paragraph, reduced padding)
- public/locales/en/homepage.json (3-paragraph value_prop.text)
- public/locales/de/homepage.json (3-paragraph value_prop.text)
- public/locales/fr/homepage.json (3-paragraph value_prop.text)

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-19 21:18:31 +13:00

{
  "hero": {
    "title": "Tractatus AI Safety Framework",
    "subtitle": "Structural constraints that require AI systems to preserve human agency for values decisions—tested on Claude Code",
    "cta_architecture": "System Architecture",
    "cta_docs": "Read Documentation",
    "cta_faq": "FAQ"
  },
  "value_prop": {
    "heading": "A Starting Point",
    "text": "Aligning advanced AI with human values is among the most consequential challenges we face. As capability growth accelerates under big tech momentum, we confront a categorical imperative: preserve human agency over values decisions, or risk ceding control entirely.<br><br>Instead of hoping AI systems \"behave correctly,\" we propose structural constraints where certain decision types require human judgment. These architectural boundaries can adapt to individual, organizational, and societal norms—creating a foundation for bounded AI operation that may scale more safely with capability growth.<br><br>If this approach can work at scale, Tractatus may represent a turning point—a path where AI enhances human capability without compromising human sovereignty. Explore the framework through the lens that resonates with your work."
  },
  "paths": {
    "intro": "",
    "researcher": {
      "title": "Researcher",
      "subtitle": "Academic & technical depth",
      "tooltip": "For AI safety researchers, academics, and scientists investigating LLM failure modes and governance architectures",
      "description": "Explore the theoretical foundations, architectural constraints, and scholarly context of the Tractatus framework.",
      "features": [
        "Technical specifications & proofs",
        "Academic research review",
        "Failure mode analysis",
        "Mathematical foundations"
      ],
      "cta": "Explore Research"
    },
    "implementer": {
      "title": "Implementer",
      "subtitle": "Code & integration guides",
      "tooltip": "For software engineers, ML engineers, and technical teams building production AI systems",
      "description": "Get hands-on with implementation guides, API documentation, and reference code examples.",
      "features": [
        "Working code examples",
        "API integration patterns",
        "Service architecture diagrams",
        "Deployment best practices"
      ],
      "cta": "View Implementation Guide"
    },
    "leader": {
      "title": "Leader",
      "subtitle": "Strategic AI Safety",
      "tooltip": "For AI executives, research directors, startup founders, and strategic decision makers setting AI safety policy",
      "description": "Navigate the business case, compliance requirements, and competitive advantages of structural AI safety.",
      "features": [
        "Executive briefing & business case",
        "Risk management & compliance (EU AI Act)",
        "Implementation roadmap & ROI",
        "Competitive advantage analysis"
      ],
      "cta": "View Leadership Resources"
    }
  },
  "capabilities": {
    "heading": "Framework Capabilities",
    "items": [
      {
        "title": "Instruction Classification",
        "description": "Quadrant-based classification (STR/OPS/TAC/SYS/STO) with time-persistence metadata tagging"
      },
      {
        "title": "Cross-Reference Validation",
        "description": "Validates AI actions against explicit user instructions to prevent pattern-based overrides"
      },
      {
        "title": "Boundary Enforcement",
        "description": "Implements Tractatus 12.1-12.7 boundaries - values decisions architecturally require humans"
      },
      {
        "title": "Pressure Monitoring",
        "description": "Detects degraded operating conditions (token pressure, errors, complexity) and adjusts verification"
      },
      {
        "title": "Metacognitive Verification",
        "description": "AI self-checks alignment, coherence, safety before execution - structural pause-and-verify"
      },
      {
        "title": "Pluralistic Deliberation",
        "description": "Multi-stakeholder values deliberation without hierarchy - facilitates human decision-making for incommensurable values"
      }
    ]
  },
  "validation": {
    "heading": "Real-World Validation",
    "subtitle": "Framework validated in 6-month deployment across ~500 sessions with Claude Code",
    "case_27027": {
      "badge": "Pattern Bias Incident",
      "type": "Interactive Demo",
      "title": "The 27027 Incident",
      "description": "Real production incident where Claude Code defaulted to port 27017 (training pattern) despite explicit user instruction to use port 27027. CrossReferenceValidator detected the conflict and blocked execution—demonstrating how pattern recognition can override instructions under context pressure.",
      "why_matters": "Why this matters: This failure mode gets worse as models improve—stronger pattern recognition means stronger override tendency. Architectural constraints remain necessary regardless of capability level.",
      "cta": "View Interactive Demo"
    },
    "resources": {
      "text": "Additional case studies and research findings documented in technical papers",
      "cta": "Browse Case Studies →"
    }
  },
  "footer": {
    "about_heading": "Tractatus Framework",
    "about_text": "Architectural constraints for AI safety that preserve human agency through structural, not aspirational, guarantees.",
    "documentation_heading": "Documentation",
    "documentation_links": {
      "framework_docs": "Framework Docs",
      "about": "About",
      "core_values": "Core Values",
      "interactive_demo": "Interactive Demo"
    },
    "support_heading": "Support",
    "support_links": {
      "koha": "Support (Koha)",
      "transparency": "Transparency",
      "media_inquiries": "Media Inquiries",
      "submit_case": "Submit Case Study"
    },
    "legal_heading": "Legal",
    "legal_links": {
      "privacy": "Privacy Policy",
      "contact": "Contact Us",
      "github": "GitHub"
    },
    "te_tiriti_label": "Te Tiriti o Waitangi:",
    "te_tiriti_text": "We acknowledge Te Tiriti o Waitangi and our commitment to partnership, protection, and participation. This project respects Māori data sovereignty (rangatiratanga) and collective guardianship (kaitiakitanga).",
    "copyright": "John G Stroh. Licensed under",
    "license": "Apache 2.0",
    "location": "Made in Aotearoa New Zealand 🇳🇿"
  }
}
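The nested keys above are typically consumed through dot-separated lookups; a minimal sketch of such a resolver (the `t` helper and the inlined locale object are illustrative, not taken from the repository; a real setup would more likely use an i18n library that resolves the same key shape):

```javascript
// Minimal sketch: resolve dot-separated keys ("hero.title") against a
// locale object. Only two keys from homepage.json are inlined here.
const homepage = {
  hero: { title: "Tractatus AI Safety Framework", cta_faq: "FAQ" },
  value_prop: { heading: "A Starting Point" }
};

// Walk the object one path segment at a time; missing keys yield undefined
// rather than throwing, so callers can supply their own fallback.
function t(key, locale = homepage) {
  return key
    .split(".")
    .reduce((node, part) => (node == null ? undefined : node[part]), locale);
}

console.log(t("hero.title")); // "Tractatus AI Safety Framework"
console.log(t("value_prop.heading")); // "A Starting Point"
```

Returning `undefined` for absent keys makes it easy to detect untranslated strings when comparing the de/fr files against the English source.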