diff --git a/public/about/values.html b/public/about/values.html
index 03fe90c0..075aa4cb 100644
--- a/public/about/values.html
+++ b/public/about/values.html
@@ -258,7 +258,7 @@
Governance is woven into the deployment architecture, not bolted on as an afterthought. PreToolUse hooks intercept actions before execution. Services run in the critical path. Bypasses require explicit --no-verify flags and are logged. Enforcement is structural, not voluntary.
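The interception-with-logged-bypass pattern described above can be sketched as follows. This is a minimal illustration; the names (`preToolUseHooks`, `auditLog`, `runAction`) and data shapes are assumed for the example, not the framework's actual API.

```javascript
// Illustrative sketch: hooks run before execution; bypass requires an
// explicit flag and is always recorded in the audit log.
const auditLog = [];
const preToolUseHooks = [
  (action) =>
    action.path.startsWith('/protected')
      ? { allowed: false, reason: 'protected path' }
      : { allowed: true },
];

function runAction(action, { noVerify = false } = {}) {
  if (noVerify) {
    // Bypass is possible, but only explicitly, and it leaves a trace.
    auditLog.push({ action: action.name, bypassed: true });
    return { executed: true };
  }
  for (const hook of preToolUseHooks) {
    const verdict = hook(action);
    if (!verdict.allowed) {
      return { executed: false, reason: verdict.reason };
    }
  }
  return { executed: true };
}
```

The point of the sketch is that enforcement sits outside the caller: an action either passes every hook or carries an explicit, logged override.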
diff --git a/public/admin/audit-analytics.html b/public/admin/audit-analytics.html
index ef7787d0..eda29528 100644
--- a/public/admin/audit-analytics.html
+++ b/public/admin/audit-analytics.html
@@ -450,7 +450,7 @@
- These rules protect the framework from unsafe operations and ensure governance compliance.
+ These rules protect the framework from unsafe operations and support governance compliance.
diff --git a/public/architecture.html b/public/architecture.html
index 096251e6..29b360bd 100644
--- a/public/architecture.html
+++ b/public/architecture.html
@@ -183,7 +183,7 @@
- Jailbreaks often work by manipulating the AI's internal reasoning. Tractatus boundaries operate external to that reasoning—the AI doesn't directly evaluate governance rules. While not foolproof, this architectural separation makes manipulation significantly harder.
+ Jailbreaks often work by manipulating the AI's internal reasoning. Tractatus boundaries operate external to that reasoning—the AI doesn't directly evaluate governance rules. While not infallible, this architectural separation makes manipulation significantly harder.
@@ -213,7 +213,7 @@
Example: PreToolUse Hook
- When the AI attempts to edit a file, the PreToolUse hook intercepts before execution. BoundaryEnforcer, CrossReferenceValidator, and other services validate the action. If any service blocks, the edit never happens—architecturally impossible to bypass.
+ When the AI attempts to edit a file, the PreToolUse hook intercepts before execution. BoundaryEnforcer, CrossReferenceValidator, and other services validate the action. If any service blocks, the edit does not proceed—the hook architecture prevents bypass without explicit override flags.
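The all-must-allow validation described in this hunk can be sketched briefly. The service names come from the text, but their interfaces here are invented for illustration.

```javascript
// Hypothetical sketch: an edit proceeds only if no governance service blocks it.
const services = {
  BoundaryEnforcer: (edit) => !edit.touchesBoundary,
  CrossReferenceValidator: (edit) => edit.referencesValid,
};

function validateEdit(edit) {
  // Collect every service that vetoes the edit.
  const blockers = Object.keys(services).filter((name) => !services[name](edit));
  return { allowed: blockers.length === 0, blockers };
}
```

A single blocking service is sufficient to stop the edit; agreement from all services is required for it to proceed.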
diff --git a/public/demos/tractatus-demo.html b/public/demos/tractatus-demo.html
index e99d086d..1a301e05 100644
--- a/public/demos/tractatus-demo.html
+++ b/public/demos/tractatus-demo.html
@@ -29,7 +29,7 @@
What is the Tractatus Framework?
The Tractatus-Based LLM Safety Framework implements architectural constraints
- that ensure AI systems preserve human agency regardless of capability level. Instead of hoping
+ designed to preserve human agency regardless of capability level. Instead of hoping
AI "behaves correctly," we build systems where certain decisions structurally require
human judgment.
diff --git a/public/gdpr.html b/public/gdpr.html
index 82f7dbba..3ce55582 100644
--- a/public/gdpr.html
+++ b/public/gdpr.html
@@ -220,7 +220,7 @@
5. Security Measures (Article 32)
- We implement appropriate technical and organizational measures to ensure data security:
+ We implement appropriate technical and organisational measures for data security:
Technical Measures
diff --git a/public/home-ai.html b/public/home-ai.html
index 77ebccac..814afead 100644
--- a/public/home-ai.html
+++ b/public/home-ai.html
@@ -152,7 +152,7 @@
Vector search retrieves relevant documentation and help content, filtered by the member's permission level. The AI generates contextual answers grounded in retrieved documents rather than from its training data alone.
- Governance: BoundaryEnforcer prevents PII exposure; CrossReferenceValidator ensures responses align with platform policies.
+ Governance: BoundaryEnforcer prevents PII exposure; CrossReferenceValidator validates responses against platform policies.
@@ -169,7 +169,7 @@
Story Assistance
- AI-generated suggestions for writing family stories: prompts, structural advice, and narrative enhancement. Suggestions are filtered through BoundaryEnforcer to ensure the AI does not impose cultural interpretations or values judgments on family narratives.
+ AI-generated suggestions for writing family stories: prompts, structural advice, and narrative enhancement. Suggestions are filtered through BoundaryEnforcer so that the AI does not impose cultural interpretations or values judgments on family narratives.
Governance: Cultural context decisions are deferred to the storyteller, not resolved by the AI.
diff --git a/public/implementer.html b/public/implementer.html
index 46ff7c64..0c045a8c 100644
--- a/public/implementer.html
+++ b/public/implementer.html
@@ -683,7 +683,7 @@ const result = await verify(action, reasoning)
PluralisticDeliberationOrchestrator
- Manages multi-stakeholder deliberation ensuring value pluralism in decisions.
+ Manages multi-stakeholder deliberation to support value pluralism in decisions.
Features:
@@ -1175,7 +1175,7 @@ for user_message in conversation:
# Governance audit logs the training update
- Pattern: Tractatus ensures safety boundaries are never crossed, while Agent Lightning learns to optimize within those safe boundaries.
+ Pattern: Tractatus maintains safety boundaries through architectural enforcement, while Agent Lightning learns to optimise within those boundaries.
diff --git a/public/js/admin/submission-modal-enhanced.js b/public/js/admin/submission-modal-enhanced.js
index ac2e0f26..00579ff4 100644
--- a/public/js/admin/submission-modal-enhanced.js
+++ b/public/js/admin/submission-modal-enhanced.js
@@ -1,6 +1,6 @@
/**
* Enhanced Submission Modal for Blog Post Submissions
- * World-class UI/UX with tabs, content preview, validation
+ * UI with tabs, content preview, validation
* CSP-compliant: Uses event delegation instead of inline handlers
*/
diff --git a/public/js/components/activity-timeline.js b/public/js/components/activity-timeline.js
index d3d57ecd..74dc525a 100644
--- a/public/js/components/activity-timeline.js
+++ b/public/js/components/activity-timeline.js
@@ -129,7 +129,7 @@ class ActivityTimeline {
- This shows the framework's governance components working together to validate and process each request. Each component has a specific role in ensuring safe, values-aligned AI operation.
+ This shows the framework's governance components working together to validate and process each request. Each component has a specific role in supporting safe, values-aligned AI operation.
Note: Timing values are estimates based on current performance statistics and may vary in production.
diff --git a/public/js/components/feedback.js b/public/js/components/feedback.js
index 71c35fd1..89fff9c3 100644
--- a/public/js/components/feedback.js
+++ b/public/js/components/feedback.js
@@ -195,7 +195,7 @@ class TractausFeedback {
How this works
- Your feedback is automatically classified by our BoundaryEnforcer to determine the appropriate response pathway. This ensures you get the right type of response while maintaining governance.
+ Your feedback is automatically classified by our BoundaryEnforcer to determine the appropriate response pathway, directing your feedback to the right channel while maintaining governance.
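The pathway classification mentioned above can be sketched as a simple router. The categories and keyword rules here are invented for illustration; they are not the BoundaryEnforcer's real classification logic.

```javascript
// Hypothetical routing sketch: map feedback text to a response pathway.
function classifyFeedback(text) {
  if (/vulnerab|exploit|security/i.test(text)) return 'security-triage';
  if (/bug|error|broken|crash/i.test(text)) return 'issue-tracker';
  return 'general-inbox';
}
```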
diff --git a/public/js/components/interactive-diagram.js b/public/js/components/interactive-diagram.js
index 93bb3cb6..af898965 100644
--- a/public/js/components/interactive-diagram.js
+++ b/public/js/components/interactive-diagram.js
@@ -80,7 +80,7 @@ class InteractiveDiagram {
'Instruction storage and validation work together to prevent directive fade',
'Boundary enforcement and deliberation coordinate on values decisions',
'Pressure monitoring adjusts verification requirements dynamically',
- 'Metacognitive gates ensure AI pauses before high-risk operations',
+ 'Metacognitive gates require AI to pause before high-risk operations',
'Each service addresses a different failure mode in AI safety'
],
promise: 'External architectural enforcement that is structurally more difficult to bypass than behavioral training alone.'
diff --git a/public/js/demos/deliberation-demo.js b/public/js/demos/deliberation-demo.js
index f24835a8..bfe3eaef 100644
--- a/public/js/demos/deliberation-demo.js
+++ b/public/js/demos/deliberation-demo.js
@@ -63,7 +63,7 @@ const stakeholders = [
perspective: {
concern: 'Compliance & User Rights',
view: 'GDPR and similar frameworks require prompt notification of data breaches. If user data is at risk, you may have legal obligations to disclose within specific timeframes (typically 72 hours).',
- priority: 'Ensure compliance with data protection law'
+ priority: 'Comply with data protection law'
}
}
];
diff --git a/public/js/faq.js b/public/js/faq.js
index 8fd13c73..11bd9465 100644
--- a/public/js/faq.js
+++ b/public/js/faq.js
@@ -46,7 +46,7 @@ Prompts guide behaviour. Tractatus enforces it architecturally.`,
- MetacognitiveVerifier: 50-200ms (selective, complex operations only)
**Design trade-off:**
-Governance services operate synchronously to ensure enforcement cannot be bypassed. This adds latency but provides architectural safety enforcement that asynchronous approaches cannot.
+Governance services operate synchronously so that enforcement cannot be bypassed. This adds latency but provides architectural safety enforcement that asynchronous approaches cannot.
**Development context:**
Framework validated in 6-month, single-project deployment. No systematic performance benchmarking conducted. Overhead estimates based on service architecture, not controlled studies.
@@ -612,7 +612,7 @@ Validator sensitivity tunable in \`governance_rules\` collection:
\`\`\`
**Why this matters:**
-LLMs have two knowledge sources: explicit instructions vs training patterns. Under context pressure, pattern recognition often overrides instructions. CrossReferenceValidator ensures explicit instructions always win.
+LLMs have two knowledge sources: explicit instructions vs training patterns. Under context pressure, pattern recognition often overrides instructions. CrossReferenceValidator gives explicit instructions precedence over training patterns.
See [27027 Incident Demo](/demos/27027-demo.html) for interactive visualization.`,
audience: ['researcher', 'implementer'],
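The precedence rule in the hunk above (explicit instructions win over training patterns) can be sketched as follows. The data shapes are assumed; this is not the CrossReferenceValidator's real implementation.

```javascript
// Sketch of the precedence idea: when an explicit instruction and a
// pattern-derived default conflict, the explicit instruction wins; the
// pattern default applies only where no explicit instruction exists.
function resolveDirective(explicitInstructions, patternDefaults, key) {
  if (key in explicitInstructions) {
    return { value: explicitInstructions[key], source: 'explicit' };
  }
  return { value: patternDefaults[key], source: 'pattern' };
}
```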
@@ -707,7 +707,7 @@ node scripts/check-session-pressure.js --tokens 0/200000 --messages 0
❌ Token count (resets to 0)
**Why handoff matters:**
-Without handoff, all HIGH persistence instructions could be lost. This is the exact failure mode Tractatus is designed to prevent. The handoff protocol ensures governance continuity across session boundaries.
+Without handoff, all HIGH persistence instructions could be lost. This is the exact failure mode Tractatus is designed to prevent. The handoff protocol maintains governance continuity across session boundaries.
**Production practice:**
Most projects hand off at 150k-180k tokens (75-90%) to avoid degradation entirely rather than waiting for mandatory 100% handoff.
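The handoff behaviour described in this hunk (HIGH-persistence instructions carry over; token and message counts reset) can be sketched minimally. The `persistence` field and session shape are assumed for the example.

```javascript
// Minimal handoff sketch: only HIGH-persistence instructions survive the
// session boundary, while token and message counters reset to zero.
function handoff(session) {
  return {
    instructions: session.instructions.filter((i) => i.persistence === 'HIGH'),
    tokens: 0,
    messages: 0,
  };
}
```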
@@ -1025,7 +1025,7 @@ AI defaults: Python 3.9 (more common in training data)
**Tractatus complements these:**
- Enforces that human review happens for values decisions
-- Ensures RAG instructions aren't forgotten under pressure
+- Preserves RAG instructions under context pressure
- Maintains audit trail of what AI was instructed to do
**Real example of what Tractatus caught:**
@@ -1059,7 +1059,7 @@ It cannot know ground truth about the external world. That requires:
- Human domain expertise
**When to use Tractatus for reliability:**
-✅ Ensure AI follows explicit technical requirements
+✅ Enforce explicit technical requirements on AI
✅ Detect contradictions within a single session
✅ Verify multi-step operations are complete
✅ Maintain consistency across long conversations
@@ -1070,7 +1070,7 @@ It cannot know ground truth about the external world. That requires:
❌ Validate API responses
❌ Check mathematical correctness
-**Bottom line**: Tractatus prevents governance failures, not knowledge failures. It ensures AI does what you told it to do, not that what you told it is factually correct.
+**Bottom line**: Tractatus prevents governance failures, not knowledge failures. It enforces that AI does what you told it to do, not that what you told it is factually correct.
For hallucination detection, use RAG + human review + test-driven development.`,
audience: ['researcher', 'implementer'],
@@ -1552,7 +1552,7 @@ AI facilitates deliberation, humans decide. No values decisions are automated.
**Why this is necessary:**
AI systems deployed in diverse communities will encounter value conflicts. Imposing one moral framework (e.g., Western liberal individualism) excludes other legitimate perspectives (e.g., communitarian, Indigenous relational ethics).
-Value pluralism ensures AI governance respects moral diversity while enabling decisions.
+Value pluralism provides a basis for AI governance that respects moral diversity while enabling decisions.
See [Value Pluralism FAQ](/downloads/value-pluralism-faq.pdf) for detailed Q&A`,
audience: ['researcher', 'leader'],
@@ -2106,7 +2106,7 @@ When stakeholder's preferred language detected:
- **inst\_032**: Multilingual Engagement Protocol (language accommodation)
**Integration:**
-AdaptiveCommunicationOrchestrator supports PluralisticDeliberationOrchestrator—ensuring communication doesn't exclude stakeholders through linguistic/cultural barriers.
+AdaptiveCommunicationOrchestrator supports PluralisticDeliberationOrchestrator—so that communication does not exclude stakeholders through linguistic/cultural barriers.
See [Value Pluralism FAQ](/downloads/value-pluralism-faq.pdf) Section "Communication & Culture"`,
audience: ['researcher', 'implementer', 'leader'],
@@ -2782,7 +2782,7 @@ AI Act not yet in force. Tractatus architecture designed to support anticipated
**Tractatus support**: Audit logs demonstrate governance transparency
**Fairness**: "AI should not discriminate."
-**Tractatus support**: PluralisticDeliberationOrchestrator ensures diverse stakeholder input
+**Tractatus support**: PluralisticDeliberationOrchestrator coordinates diverse stakeholder input
**Accountability**: "Companies accountable for AI harms."
**Tractatus support**: Audit trail demonstrates due diligence
diff --git a/public/researcher.html b/public/researcher.html
index 41c24b75..34ed5bbb 100644
--- a/public/researcher.html
+++ b/public/researcher.html
@@ -442,7 +442,7 @@
- This approach recognises that governance isn't solving values conflicts—it's ensuring they're addressed through appropriate deliberative process with genuine human attention rather than AI imposing resolution through training data bias or efficiency metrics.
+ This approach recognises that governance isn't solving values conflicts—it's structuring how they're addressed through appropriate deliberative process with genuine human attention rather than AI imposing resolution through training data bias or efficiency metrics.