fix: Remove absolute assurance language per inst_017 across codebase

Replace "ensures", "guarantee", "foolproof", "world-class" and similar
absolute terms with evidence-based language throughout public pages, JS
components, and FAQ content. Changes apply inst_017 (no absolute
assurance terms) consistently.

Replacements:
- "ensures X" → "validates X", "so that X", "supports X", "maintains X"
- "guarantee" → removed or rephrased with qualified language
- "foolproof" → "infallible"
- "architecturally impossible" → "architecture prevents without
  explicit override flags"

Preserved: published research papers (architectural-alignment*.html),
EU AI Act quotes, Te Tiriti treaty language, and FAQ meta-commentary
that deliberately critiques this language (lines 2842-2896).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
TheFlow 2026-02-07 14:44:45 +13:00
parent 074906608d
commit df8c6ccb03
14 changed files with 26 additions and 26 deletions


@@ -258,7 +258,7 @@
 Governance is <strong>woven into the deployment architecture</strong>, not bolted on as afterthought. PreToolUse hooks intercept actions before execution. Services run in the critical path. Bypasses require explicit <code>--no-verify</code> flags and are logged. Enforcement is structural, not voluntary.
 </p>
 <p class="text-gray-700" data-i18n-html="architectural_principles.not_separateness.connection">
-<strong>Connects to Sovereignty:</strong> Not-separateness ensures AI cannot bypass governance to override human agency. The architecture makes it structurally difficult to erode boundaries, preserving decision-making authority where it belongs—with affected humans.
+<strong>Connects to Sovereignty:</strong> Not-separateness makes it structurally difficult for AI to bypass governance or override human agency, preserving decision-making authority where it belongs—with affected humans.
 </p>
 </div>
 </div>
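The first context line in the hunk above mentions explicit `--no-verify` bypass flags that are always logged. A minimal sketch of that flag-plus-audit pattern; all names here (`run_checks`, the check signatures) are invented for illustration, not the framework's actual API:

```python
# Illustrative sketch only: hypothetical names, not the framework's real API.
import logging

logger = logging.getLogger("governance")

def run_checks(action, checks, flags=()):
    """Run governance checks before an action; honour an explicit,
    logged override flag rather than allowing any silent bypass."""
    if "--no-verify" in flags:
        # Bypass is permitted, but it always leaves an audit record.
        logger.warning("governance bypassed via --no-verify for %s", action)
        return True
    # Every check must pass; any failure blocks the action.
    return all(check(action) for check in checks)
```

The point of the sketch is that the override path exists but is loud: it can only be taken with an explicit flag, and it always writes an audit record.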
@@ -324,7 +324,7 @@
 <span class="text-green-700 font-bold"></span>
 </div>
 <div>
-<strong data-i18n="te_tiriti_section.protection_label">Protection:</strong> <span data-i18n="te_tiriti_section.protection_text">The framework protects against values erosion, ensuring cultural contexts are not overridden by AI assumptions.</span>
+<strong data-i18n="te_tiriti_section.protection_label">Protection:</strong> <span data-i18n="te_tiriti_section.protection_text">The framework protects against values erosion so that cultural contexts are not overridden by AI assumptions.</span>
 </div>
 </div>
 <div class="flex items-start">


@@ -450,7 +450,7 @@
 </button>
 </div>
 <div class="text-sm text-gray-600 mb-4">
-These rules protect the framework from unsafe operations and ensure governance compliance.
+These rules protect the framework from unsafe operations and support governance compliance.
 </div>
 <div id="legend-content" class="space-y-4 max-h-96 overflow-y-auto">
 <!-- Will be populated by JS -->


@@ -183,7 +183,7 @@
 <div class="bg-gradient-to-r from-blue-50 to-purple-50 rounded-xl p-8 border border-blue-200">
 <h3 class="text-2xl font-bold text-gray-900 mb-4 text-center" data-i18n="comparison.hypothesis_title"></h3>
 <p class="text-lg text-gray-700 text-center max-w-4xl mx-auto" data-i18n-html="comparison.hypothesis_text">
-<strong>Jailbreaks often work by manipulating the AI's internal reasoning.</strong> Tractatus boundaries operate <em>external</em> to that reasoning—the AI doesn't directly evaluate governance rules. While not foolproof, this architectural separation makes manipulation significantly harder.
+<strong>Jailbreaks often work by manipulating the AI's internal reasoning.</strong> Tractatus boundaries operate <em>external</em> to that reasoning—the AI doesn't directly evaluate governance rules. While not infallible, this architectural separation makes manipulation significantly harder.
 </p>
 </div>
 </section>
@@ -213,7 +213,7 @@
 <div class="bg-white rounded-lg p-4 mb-4">
 <p class="text-sm font-semibold text-gray-900 mb-2" data-i18n="principles.not_separateness.example_label">Example: PreToolUse Hook</p>
 <p class="text-sm text-gray-600" data-i18n="principles.not_separateness.example">
-When the AI attempts to edit a file, the PreToolUse hook intercepts <em>before execution</em>. BoundaryEnforcer, CrossReferenceValidator, and other services validate the action. If any service blocks, the edit never happens—architecturally impossible to bypass.
+When the AI attempts to edit a file, the PreToolUse hook intercepts <em>before execution</em>. BoundaryEnforcer, CrossReferenceValidator, and other services validate the action. If any service blocks, the edit does not proceed—the hook architecture prevents bypass without explicit override flags.
 </p>
 </div>
 <p class="text-sm text-gray-600 italic" data-i18n="principles.not_separateness.contrast">
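The PreToolUse example in this hunk describes a gate in which any blocking service stops the edit. A rough sketch of that all-must-allow loop; the service functions and the "allow"/"block" verdict strings are stand-ins, not taken from the framework's code:

```python
# Hedged sketch of the hook flow described in the hunk above; names are stand-ins.
def pre_tool_use(action, services):
    """Ask every governance service to validate the action before execution.
    If any service blocks, the action is rejected and never executed."""
    for service in services:
        verdict = service(action)  # each service returns "allow" or "block"
        if verdict == "block":
            return {"allowed": False, "blocked_by": service.__name__}
    return {"allowed": True, "blocked_by": None}

def boundary_enforcer(action):
    # Toy rule standing in for real boundary checks.
    return "block" if action.get("path", "").startswith("/etc") else "allow"

def cross_reference_validator(action):
    return "allow"
```

Because the loop returns on the first "block", the edit itself is simply never reached, which is the structural property the page is describing.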


@@ -29,7 +29,7 @@
 <h2 class="text-xl font-semibold text-blue-900 mb-2">What is the Tractatus Framework?</h2>
 <p class="text-blue-800">
 The Tractatus-Based LLM Safety Framework implements <strong>architectural constraints</strong>
-that ensure AI systems preserve human agency regardless of capability level. Instead of hoping
+designed to preserve human agency regardless of capability level. Instead of hoping
 AI "behaves correctly," we build systems where certain decisions <em>structurally require</em>
 human judgment.
 </p>


@@ -220,7 +220,7 @@
 <h2 class="text-2xl font-bold text-gray-900 mb-4" data-i18n="section_5.title">5. Security Measures (Article 32)</h2>
 <p class="text-gray-700 mb-4" data-i18n="section_5.intro">
-We implement appropriate technical and organizational measures to ensure data security:
+We implement appropriate technical and organisational measures for data security:
 </p>
 <h3 class="text-xl font-semibold text-gray-900 mt-6 mb-3" data-i18n="section_5.technical_heading">Technical Measures</h3>


@@ -152,7 +152,7 @@
 Vector search retrieves relevant documentation and help content, filtered by the member's permission level. The AI generates contextual answers grounded in retrieved documents rather than from its training data alone.
 </p>
 <p class="text-gray-500 text-xs italic">
-Governance: BoundaryEnforcer prevents PII exposure; CrossReferenceValidator ensures responses align with platform policies.
+Governance: BoundaryEnforcer prevents PII exposure; CrossReferenceValidator validates responses against platform policies.
 </p>
 </div>
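The first context line of this hunk describes vector search filtered by the member's permission level. A toy sketch of the filter-then-rank step; the document fields (`min_level`, `text`) are invented, and naive keyword overlap stands in for real vector similarity:

```python
# Illustrative permission-filtered retrieval; field names are invented.
def retrieve(query, documents, member_level):
    """Return documents the member may see, ranked by naive keyword overlap.
    A real deployment would use vector similarity; this shows the filtering step."""
    # Permission filter happens BEFORE ranking, so hidden documents
    # can never leak into the candidate set.
    visible = [d for d in documents if d["min_level"] <= member_level]
    terms = set(query.lower().split())
    scored = [(len(terms & set(d["text"].lower().split())), d) for d in visible]
    return [d for score, d in sorted(scored, key=lambda s: -s[0]) if score > 0]
```

Applying the filter before ranking is the design choice the page alludes to: documents above the member's level are excluded from retrieval entirely, not merely down-ranked.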
@@ -169,7 +169,7 @@
 <div class="bg-white rounded-xl shadow-md p-6 border border-gray-200">
 <h3 class="text-lg font-bold text-gray-900 mb-3">Story Assistance</h3>
 <p class="text-gray-700 text-sm mb-3">
-AI-generated suggestions for writing family stories: prompts, structural advice, and narrative enhancement. Suggestions are filtered through BoundaryEnforcer to ensure the AI does not impose cultural interpretations or values judgments on family narratives.
+AI-generated suggestions for writing family stories: prompts, structural advice, and narrative enhancement. Suggestions are filtered through BoundaryEnforcer so that the AI does not impose cultural interpretations or values judgments on family narratives.
 </p>
 <p class="text-gray-500 text-xs italic">
 Governance: Cultural context decisions are deferred to the storyteller, not resolved by the AI.


@@ -683,7 +683,7 @@ const result = await verify(action, reasoning)
 <div class="bg-white rounded-lg p-6 shadow-lg">
 <h3 class="text-xl font-bold text-gray-900 mb-3" data-i18n="services.service_6_name">PluralisticDeliberationOrchestrator</h3>
 <p class="text-sm text-gray-600 mb-4" data-i18n="services.service_6_desc">
-Manages multi-stakeholder deliberation ensuring value pluralism in decisions.
+Manages multi-stakeholder deliberation to support value pluralism in decisions.
 </p>
 <div class="text-sm text-gray-700 mb-3">
 <strong data-i18n="services.service_6_features">Features:</strong>
@@ -1175,7 +1175,7 @@ for user_message in conversation:
 # Governance audit logs the training update</code></pre>
 <div class="mt-4 text-xs text-gray-600">
-<strong>Pattern:</strong> Tractatus ensures safety boundaries are never crossed, while Agent Lightning learns to optimize within those safe boundaries.
+<strong>Pattern:</strong> Tractatus maintains safety boundaries through architectural enforcement, while Agent Lightning learns to optimise within those boundaries.
 </div>
 </div>
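The pattern named in this hunk, a fixed safety boundary wrapping a learner that optimises only inside it, can be sketched roughly as follows. `boundary` and `learner_update` are placeholder callables, not Agent Lightning's or Tractatus's real interfaces:

```python
# Sketch of "boundary wraps learner": the boundary decides execution,
# the learner only receives a signal. Names are illustrative.
def safe_step(candidate_action, boundary, learner_update):
    """Apply the learner's proposed action only if the boundary allows it;
    blocked proposals yield a penalty signal instead of execution."""
    if boundary(candidate_action):
        learner_update(candidate_action, reward=1.0)
        return "executed"
    learner_update(candidate_action, reward=-1.0)  # learn from the refusal
    return "blocked"
```

The separation of concerns is the point: the boundary never learns and the learner never executes, so optimisation pressure cannot move the boundary itself.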


@@ -1,6 +1,6 @@
 /**
 * Enhanced Submission Modal for Blog Post Submissions
-* World-class UI/UX with tabs, content preview, validation
+* UI with tabs, content preview, validation
 * CSP-compliant: Uses event delegation instead of inline handlers
 */


@@ -129,7 +129,7 @@ class ActivityTimeline {
 <!-- Timeline Explanation -->
 <div class="mb-4 p-3 bg-blue-50 border border-blue-200 rounded-lg">
 <p class="text-sm text-gray-700 leading-relaxed mb-2">
-This shows the framework's governance components working together to validate and process each request. Each component has a specific role in ensuring safe, values-aligned AI operation.
+This shows the framework's governance components working together to validate and process each request. Each component has a specific role in supporting safe, values-aligned AI operation.
 </p>
 <p class="text-xs text-gray-600 italic">
 Note: Timing values are estimates based on current performance statistics and may vary in production.


@@ -195,7 +195,7 @@ class TractausFeedback {
 </svg>
 <div class="text-sm">
 <p class="font-semibold text-blue-900">How this works</p>
-<p class="text-blue-700 mt-1">Your feedback is automatically classified by our <strong>BoundaryEnforcer</strong> to determine the appropriate response pathway. This ensures you get the right type of response while maintaining governance.</p>
+<p class="text-blue-700 mt-1">Your feedback is automatically classified by our <strong>BoundaryEnforcer</strong> to determine the appropriate response pathway, directing your feedback to the right channel while maintaining governance.</p>
 </div>
 </div>
 </div>


@@ -80,7 +80,7 @@ class InteractiveDiagram {
 'Instruction storage and validation work together to prevent directive fade',
 'Boundary enforcement and deliberation coordinate on values decisions',
 'Pressure monitoring adjusts verification requirements dynamically',
-'Metacognitive gates ensure AI pauses before high-risk operations',
+'Metacognitive gates require AI to pause before high-risk operations',
 'Each service addresses a different failure mode in AI safety'
 ],
 promise: 'External architectural enforcement that is structurally more difficult to bypass than behavioral training alone.'


@@ -63,7 +63,7 @@ const stakeholders = [
 perspective: {
 concern: 'Compliance & User Rights',
 view: 'GDPR and similar frameworks require prompt notification of data breaches. If user data is at risk, you may have legal obligations to disclose within specific timeframes (typically 72 hours).',
-priority: 'Ensure compliance with data protection law'
+priority: 'Comply with data protection law'
 }
 }
 ];


@@ -46,7 +46,7 @@ Prompts guide behaviour. Tractatus enforces it architecturally.`,
 - MetacognitiveVerifier: 50-200ms (selective, complex operations only)
 **Design trade-off:**
-Governance services operate synchronously to ensure enforcement cannot be bypassed. This adds latency but provides architectural safety enforcement that asynchronous approaches cannot.
+Governance services operate synchronously so that enforcement cannot be bypassed. This adds latency but provides architectural safety enforcement that asynchronous approaches cannot.
 **Development context:**
 Framework validated in 6-month, single-project deployment. No systematic performance benchmarking conducted. Overhead estimates based on service architecture, not controlled studies.
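The synchronous trade-off discussed in this hunk can be illustrated with a small ordering sketch (a hypothetical helper, not framework code): because the checks gate the action in the critical path, the action only ever runs after every check has returned, and a failing check can never be raced past as it could in a fire-and-forget async design.

```python
# Synchronous gating sketch: the event list records the ordering guarantee.
def guarded(action, checks):
    """Run the action only after every check returns.

    "action:<name>" can never appear before all checks, and never
    appears at all once a check blocks.
    """
    events = []
    for check in checks:
        events.append(f"check:{check.__name__}")
        if not check():
            events.append("blocked")
            return events  # the action is never reached
    events.append(f"action:{action}")
    return events
```

The cost, as the page notes, is that total latency is the sum of all check latencies plus the action itself.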
@@ -612,7 +612,7 @@ Validator sensitivity tunable in \`governance_rules\` collection:
 \`\`\`
 **Why this matters:**
-LLMs have two knowledge sources: explicit instructions vs training patterns. Under context pressure, pattern recognition often overrides instructions. CrossReferenceValidator ensures explicit instructions always win.
+LLMs have two knowledge sources: explicit instructions vs training patterns. Under context pressure, pattern recognition often overrides instructions. CrossReferenceValidator gives explicit instructions precedence over training patterns.
 See [27027 Incident Demo](/demos/27027-demo.html) for interactive visualization.`,
 audience: ['researcher', 'implementer'],
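The precedence rule this hunk describes, explicit instructions winning over training-pattern defaults when they conflict, reduces to a simple lookup order. The keys and structure below are invented for illustration (the Python-version example echoes the "AI defaults: Python 3.9" scenario mentioned later on this page):

```python
# Sketch of instruction-over-default precedence; keys are invented.
def resolve(explicit_instructions, training_default, key):
    """Return the explicitly instructed value if one exists, else the default,
    tagging each result with its source for auditability."""
    if key in explicit_instructions:
        return {"value": explicit_instructions[key], "source": "instruction"}
    return {"value": training_default[key], "source": "training_pattern"}
```

Tagging the source is the useful part: an audit log can then show exactly when a training-pattern default was used because no instruction covered the key.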
@@ -707,7 +707,7 @@ node scripts/check-session-pressure.js --tokens 0/200000 --messages 0
 Token count (resets to 0)
 **Why handoff matters:**
-Without handoff, all HIGH persistence instructions could be lost. This is the exact failure mode Tractatus is designed to prevent. The handoff protocol ensures governance continuity across session boundaries.
+Without handoff, all HIGH persistence instructions could be lost. This is the exact failure mode Tractatus is designed to prevent. The handoff protocol maintains governance continuity across session boundaries.
 **Production practice:**
 Most projects handoff at 150k-180k tokens (75-90%) to avoid degradation entirely rather than waiting for mandatory 100% handoff.
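The proactive handoff practice described above (hand off at 75-90% of a 200k-token budget instead of waiting for 100%) is just a threshold check. This sketch uses invented names; the 200k budget and 0.75 default come from the figures quoted in this hunk:

```python
# Sketch of the proactive handoff trigger; names are illustrative.
def should_handoff(tokens_used, token_budget=200_000, threshold=0.75):
    """True once context pressure reaches the proactive handoff threshold,
    well before the hard 100% limit where instructions would be lost."""
    return tokens_used / token_budget >= threshold
```

Triggering early is the design choice: degradation is avoided entirely rather than detected after HIGH-persistence instructions have already faded.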
@@ -1025,7 +1025,7 @@ AI defaults: Python 3.9 (more common in training data)
 **Tractatus complements these:**
 - Enforces that human review happens for values decisions
-- Ensures RAG instructions aren't forgotten under pressure
+- Preserves RAG instructions under context pressure
 - Maintains audit trail of what AI was instructed to do
 **Real example of what Tractatus caught:**
@@ -1059,7 +1059,7 @@ It cannot know ground truth about the external world. That requires:
 - Human domain expertise
 **When to use Tractatus for reliability:**
-Ensure AI follows explicit technical requirements
+Enforce explicit technical requirements on AI
 Detect contradictions within a single session
 Verify multi-step operations are complete
 Maintain consistency across long conversations
@@ -1070,7 +1070,7 @@ It cannot know ground truth about the external world. That requires:
 Validate API responses
 Check mathematical correctness
-**Bottom line**: Tractatus prevents governance failures, not knowledge failures. It ensures AI does what you told it to do, not that what you told it is factually correct.
+**Bottom line**: Tractatus prevents governance failures, not knowledge failures. It enforces that AI does what you told it to do, not that what you told it is factually correct.
 For hallucination detection, use RAG + human review + test-driven development.`,
 audience: ['researcher', 'implementer'],
@@ -1552,7 +1552,7 @@ AI facilitates deliberation, humans decide. No values decisions are automated.
 **Why this is necessary:**
 AI systems deployed in diverse communities will encounter value conflicts. Imposing one moral framework (e.g., Western liberal individualism) excludes other legitimate perspectives (e.g., communitarian, Indigenous relational ethics).
-Value pluralism ensures AI governance respects moral diversity while enabling decisions.
+Value pluralism provides a basis for AI governance that respects moral diversity while enabling decisions.
 See [Value Pluralism FAQ](/downloads/value-pluralism-faq.pdf) for detailed Q&A`,
 audience: ['researcher', 'leader'],
@@ -2106,7 +2106,7 @@ When stakeholder's preferred language detected:
 - **inst\_032**: Multilingual Engagement Protocol (language accommodation)
 **Integration:**
-AdaptiveCommunicationOrchestrator supports PluralisticDeliberationOrchestrator ensuring communication doesn't exclude stakeholders through linguistic/cultural barriers.
+AdaptiveCommunicationOrchestrator supports PluralisticDeliberationOrchestrator so that communication does not exclude stakeholders through linguistic/cultural barriers.
 See [Value Pluralism FAQ](/downloads/value-pluralism-faq.pdf) Section "Communication & Culture"`,
 audience: ['researcher', 'implementer', 'leader'],
@@ -2782,7 +2782,7 @@ AI Act not yet in force. Tractatus architecture designed to support anticipated
 **Tractatus support**: Audit logs demonstrate governance transparency
 **Fairness**: "AI should not discriminate."
-**Tractatus support**: PluralisticDeliberationOrchestrator ensures diverse stakeholder input
+**Tractatus support**: PluralisticDeliberationOrchestrator coordinates diverse stakeholder input
 **Accountability**: "Companies accountable for AI harms."
 **Tractatus support**: Audit trail demonstrates due diligence


@@ -442,7 +442,7 @@
 </ul>
 <p class="mb-4" data-i18n="sections.theoretical_foundations.values_conclusion">
-This approach recognises that <strong>governance isn't solving values conflicts—it's ensuring they're addressed through appropriate deliberative process with genuine human attention</strong> rather than AI imposing resolution through training data bias or efficiency metrics.
+This approach recognises that <strong>governance isn't solving values conflicts—it's structuring how they're addressed through appropriate deliberative process with genuine human attention</strong> rather than AI imposing resolution through training data bias or efficiency metrics.
 </p>
 <div class="text-sm text-gray-600 border-t border-gray-200 pt-4 mt-4">