feat: reading-mode toggle on all 3 architectural-alignment papers
Overview/Standard/Deep density modes applied to the academic, community, and policymakers editions. Reuses the existing reading-mode.js component. Pre-existing inline-style CSP warnings in the licence sections are unchanged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in:
parent f8169c4d50
commit 64dd237628
3 changed files with 56 additions and 0 deletions
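The diff below wraps sections in `data-reading-level` attributes that the shared reading-mode.js component toggles. A minimal sketch of the assumed visibility rule follows — the actual `/js/reading-mode.js` implementation is not part of this diff, so the function and constant names here are hypothetical:

```javascript
// Hypothetical sketch only: the real /js/reading-mode.js is not shown in
// this commit. Density ranks for the three modes.
const LEVELS = { overview: 1, standard: 2, deep: 3 };

// A section tagged data-reading-level="X" stays visible in mode M when its
// rank does not exceed the mode's rank: "deep" shows every section,
// "overview" shows only the overview sections. Unknown values fall back
// to "standard".
function isVisible(sectionLevel, mode) {
  return (LEVELS[sectionLevel] ?? 2) <= (LEVELS[mode] ?? 2);
}

// Applying the rule to the markup added in this commit:
function applyReadingMode(mode) {
  document.querySelectorAll('[data-reading-level]').forEach((el) => {
    el.hidden = !isVisible(el.dataset.readingLevel, mode);
  });
}
```

Under this assumed rule, selecting Overview collapses each paper to its executive summary and conclusion, while Deep additionally reveals the references.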
@@ -28,6 +28,7 @@
 <link rel="stylesheet" href="/css/fonts.css?v=0.1.2.1776366945602">
 <link rel="stylesheet" href="/css/tailwind.css?v=0.1.2.1776366945602">
 <link rel="stylesheet" href="/css/tractatus-theme.min.css?v=0.1.2.1776366945602">
+<link rel="stylesheet" href="/css/reading-mode.css?v=0.1.2">

 <style>
 .article-container { max-width: 800px; margin: 0 auto; padding: 2rem 1.5rem 4rem; }
@@ -99,6 +100,7 @@
 </p>
 </div>

+<div data-reading-level="overview">
 <section class="executive-summary">
 <h2>Executive Summary</h2>
 <p>The question is no longer whether AI will be part of community life, but <strong>who will govern it when it arrives</strong>.</p>
@@ -121,7 +123,9 @@
 </blockquote>
 <p>The underlying research addresses serious questions about AI safety and alignment. We believe communities benefit from understanding this context—not because your household AI poses existential risks, but because building governance capacity now prepares for a future where such capacity will matter more.</p>
 </section>
+</div>

+<div data-reading-level="standard">
 <h2>1. The Problem: Who Governs Your AI?</h2>

 <h3>1.1 The Current Reality</h3>
@@ -321,6 +325,9 @@
 </blockquote>
 <p>This doesn't solve all problems. Platform-level accommodation is not a substitute for legislative recognition. But it demonstrates that constitutional governance can respect rather than override indigenous sovereignty.</p>

+</div>
+
+<div data-reading-level="deep">
 <h2>7. Practical Considerations</h2>

 <h3>7.1 What You Need</h3>
@@ -416,6 +423,9 @@
 <p>Critique that helps us improve</p>
 </blockquote>

+</div>
+
+<div data-reading-level="overview">
 <h2>10. Conclusion</h2>
 <p>AI is coming to communities whether communities prepare or not. The question is whether that AI will be governed by vendor terms of service, by constitutional frameworks reflecting community values, or by nothing at all.</p>
 <p>The Tractatus Framework offers one answer: architectural governance that makes AI accountable to the communities it serves. Not through trust in vendor training, but through visible, auditable, democratically-determined rules.</p>
@@ -429,7 +439,9 @@
 <p>—Māori proverb</p>
 </blockquote>
 </div>
+</div>

+<div data-reading-level="deep">
 <h2 class="references">References</h2>
 <div class="references">
 <p>IBM Institute for Business Value. (2026). <em>The enterprise in 2030</em>. IBM Corporation.</p>
@@ -439,6 +451,7 @@
 <p>Hubinger, E., van Merwijk, C., Mikulik, V., Skalse, J., & Garrabrant, S. (2019). Risks from learned optimization in advanced machine learning systems. arXiv preprint arXiv:1906.01820.</p>
 <p>Alexander, C., Ishikawa, S., & Silverstein, M. (1977). <em>A Pattern Language: Towns, Buildings, Construction</em>. Oxford University Press.</p>
 </div>
+</div>

 <hr style="margin: 3rem 0;">
 <div style="background: #f9fafb; border: 1px solid #e5e7eb; border-radius: 0.5rem; padding: 1.5rem; margin: 2rem 0;">
@@ -455,5 +468,6 @@
 <!-- Footer -->
 <script src="/js/components/footer.js?v=0.1.2.1776366945602"></script>

+<script src="/js/reading-mode.js?v=0.1.2" defer></script>
 </body>
 </html>
@@ -28,6 +28,7 @@
 <link rel="stylesheet" href="/css/fonts.css?v=0.1.2.1776366945602">
 <link rel="stylesheet" href="/css/tailwind.css?v=0.1.2.1776366945602">
 <link rel="stylesheet" href="/css/tractatus-theme.min.css?v=0.1.2.1776366945602">
+<link rel="stylesheet" href="/css/reading-mode.css?v=0.1.2">

 <style>
 .article-container { max-width: 800px; margin: 0 auto; padding: 2rem 1.5rem 4rem; }
@@ -99,6 +100,7 @@
 </p>
 </div>

+<div data-reading-level="overview">
 <section class="executive-summary">
 <h2>Executive Summary</h2>
 <p>AI deployment is outpacing regulatory capacity. While policymakers debate frameworks for large language models operated by major technology companies, a parallel transformation is underway: the migration of AI capabilities to <strong>small, locally-deployed models in homes, communities, and small organisations</strong>. Recent industry research indicates that 72% of enterprise executives expect small language models (SLMs) to surpass large language models in prominence by 2030 (IBM Institute for Business Value, 2026). This shift creates an urgent governance challenge: <strong>who controls AI deployed at the edge, and under what rules?</strong></p>
@@ -118,7 +120,9 @@
 <p><strong>5. Preparation must precede capability.</strong> Governance frameworks for advanced AI cannot be developed after such systems exist. Building constitutional infrastructure at accessible scales now creates the foundation for higher-stakes governance later.</p>
 </blockquote>
 </div>
+</div>

+<div data-reading-level="standard">
 <h2>1. The Governance Gap</h2>

 <h3>1.1 Regulatory Lag</h3>
@@ -296,6 +300,9 @@
 </blockquote>
 <p><strong>Regulatory strategy:</strong> mandate the architectural infrastructure, then specify constitutional content through secondary instruments (guidance, standards, sector-specific rules).</p>

+</div>
+
+<div data-reading-level="deep">
 <h2>5. Certification Infrastructure</h2>

 <h3>5.1 The Need for Standards</h3>
@@ -341,6 +348,9 @@
 <p><strong>Insurance:</strong> Insurers can offer favourable terms for certified deployments</p>
 </blockquote>

+</div>
+
+<div data-reading-level="standard">
 <h2>6. Indigenous Data Sovereignty</h2>

 <h3>6.1 Constitutional Requirements in Aotearoa New Zealand</h3>
@@ -428,6 +438,9 @@
 </blockquote>
 </div>

+</div>
+
+<div data-reading-level="deep">
 <h2>9. Honest Assessment of Limitations</h2>

 <h3>9.1 What Constitutional Gating Cannot Do</h3>
@@ -456,12 +469,18 @@
 </blockquote>
 <p>The alternative—waiting for certainty before acting—guarantees that governance frameworks arrive after the need has become acute.</p>

+</div>
+
+<div data-reading-level="overview">
 <h2>10. Conclusion</h2>
 <p>The governance gap in AI deployment is widening. As capabilities migrate to distributed, locally-deployed systems, traditional regulatory approaches face fundamental challenges of scale, jurisdiction, and verification.</p>
 <p>Constitutional gating offers a regulatory strategy: mandate auditable architectural infrastructure rather than unverifiable behavioural requirements. The Tractatus Framework provides a concrete specification that can be implemented across deployment paradigms—from cloud LLMs to sovereign home SLLs.</p>
 <p><strong>The policy window is now.</strong> Within five years, if industry projections hold, AI deployment will be characterised by thousands of small, domain-specific models operating in homes, communities, and small organisations. Governance frameworks developed now will shape that landscape; frameworks developed later will struggle to retrofit.</p>
 <p>We offer this analysis in the spirit of contribution to ongoing policy deliberation. The questions are hard, the uncertainties substantial, and the stakes significant. Policymakers, researchers, and communities must work together to develop governance frameworks adequate to the challenge.</p>

+</div>
+
+<div data-reading-level="deep">
 <h2 class="references">References</h2>
 <div class="references">
 <p>IBM Institute for Business Value. (2026). <em>The enterprise in 2030</em>. IBM Corporation.</p>
@@ -474,6 +493,7 @@
 <p>Reason, J. (1990). <em>Human Error</em>. Cambridge University Press.</p>
 <p>Sastry, G., et al. (2024). Computing power and the governance of artificial intelligence. arXiv preprint arXiv:2402.08797.</p>
 </div>
+</div>

 <hr style="margin: 3rem 0;">
 <div style="background: #f9fafb; border: 1px solid #e5e7eb; border-radius: 0.5rem; padding: 1.5rem; margin: 2rem 0;">
@@ -490,5 +510,6 @@
 <!-- Footer -->
 <script src="/js/components/footer.js?v=0.1.2.1776366945602"></script>

+<script src="/js/reading-mode.js?v=0.1.2" defer></script>
 </body>
 </html>
@@ -28,6 +28,7 @@
 <link rel="stylesheet" href="/css/fonts.css?v=0.1.2.1776366945602">
 <link rel="stylesheet" href="/css/tailwind.css?v=0.1.2.1776366945602">
 <link rel="stylesheet" href="/css/tractatus-theme.min.css?v=0.1.2.1776366945602">
+<link rel="stylesheet" href="/css/reading-mode.css?v=0.1.2">

 <style>
 .article-container { max-width: 800px; margin: 0 auto; padding: 2rem 1.5rem 4rem; }
@@ -98,6 +99,7 @@
 </p>
 </div>

+<div data-reading-level="overview">
 <div class="collaboration-note">
 This document was developed through human-AI collaboration. The authors believe this collaborative process is itself relevant to the argument: if humans and AI systems can work together to reason about AI governance, the frameworks they produce may have legitimacy that neither could achieve alone.
 </div>
@@ -112,7 +114,9 @@

 <p>The paper contributes: (1) a formal architecture for inference-time constitutional gating; (2) capability threshold specifications with escalation logic; (3) validation methodology for layered containment; (4) an argument connecting existential risk preparation to edge deployment; and (5) a call for sustained deliberation (kōrero) as the epistemically appropriate response to alignment uncertainty.</p>
 </section>
+</div>

+<div data-reading-level="standard">
 <h2>1. The Stakes: Why Probabilistic Risk Assessment Fails</h2>

 <h3>1.1 The Standard Framework and Its Breakdown</h3>
@@ -310,6 +314,9 @@
 <p><strong>Small Language Model (SLM).</strong> A technical descriptor for models with fewer parameters than frontier LLMs, designed for efficiency.</p>
 <p><strong>Situated Language Layer (SLL).</strong> An architectural layer comprising a small language model that is sovereign (locally trained, locally deployed, community-controlled) and situated (shaped by the specific context, values, and vocabulary of the community it serves). The term draws on situated knowledge theory: understanding that emerges from a particular context rather than claiming universality.</p>

+</div>
+
+<div data-reading-level="deep">
 <h2>7. Capability Thresholds and Escalation</h2>

 <h3>7.1 The Faithful Translation Problem</h3>
@@ -357,6 +364,9 @@
 </tbody>
 </table>

+</div>
+
+<div data-reading-level="standard">
 <h2>9. Implementation: The Village Platform</h2>

 <h3>9.1 Platform as Research Testbed</h3>
@@ -383,6 +393,9 @@
 <h3>11.2 Te Mana Raraunga Principles</h3>
 <p>Te Mana Raraunga principles include whakapapa (relational context), mana (authority over data), and kaitiakitanga (guardianship responsibilities). The CARE Principles for Indigenous Data Governance extend this framework internationally.</p>

+</div>
+
+<div data-reading-level="deep">
 <h2>12. What Remains Unknown: A Call for Kōrero</h2>

 <h3>12.1 The Limits of This Analysis</h3>
@@ -406,6 +419,9 @@
 <p>5. Capability threshold specification</p>
 </blockquote>

+</div>
+
+<div data-reading-level="overview">
 <h3>12.4 Conclusion</h3>
 <p>The Tractatus Framework provides meaningful containment for AI systems operating in good faith within human-comprehensible parameters. It is worth building and deploying—not because it solves the alignment problem, but because it develops the infrastructure, patterns, and governance culture that may be needed for challenges we cannot yet fully specify.</p>
@@ -418,6 +434,9 @@
 <p style="margin-top: 1.5rem; font-style: normal; font-weight: 500;"><strong>The conversation continues.</strong></p>
 </div>

+</div>
+
+<div data-reading-level="deep">
 <h2 class="references">References</h2>
 <div class="references">
 <p>Acquisti, A., Brandimarte, L., & Loewenstein, G. (2015). Privacy and human behavior in the age of information. <em>Science</em>, 347(6221), 509-514.</p>
@@ -444,6 +463,7 @@
 <p>Te Mana Raraunga. (2018). <em>Māori Data Sovereignty Principles</em>.</p>
 <p>Wittgenstein, L. (1921/1961). <em>Tractatus Logico-Philosophicus</em>. Routledge & Kegan Paul.</p>
 </div>
+</div>

 <hr style="margin: 3rem 0;">
 <div style="background: #f9fafb; border: 1px solid #e5e7eb; border-radius: 0.5rem; padding: 1.5rem; margin: 2rem 0;">
@@ -460,5 +480,6 @@
 <!-- Footer -->
 <script src="/js/components/footer.js?v=0.1.2.1776366945602"></script>

+<script src="/js/reading-mode.js?v=0.1.2" defer></script>
 </body>
 </html>