diff --git a/public/architectural-alignment-community.html b/public/architectural-alignment-community.html
index 057b0f04..5d464608 100644
--- a/public/architectural-alignment-community.html
+++ b/public/architectural-alignment-community.html
@@ -441,6 +441,14 @@
 Copyright © 2026 John Stroh.
+This work is licensed under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0).
+You are free to share, copy, redistribute, adapt, remix, transform, and build upon this material for any purpose, including commercially, provided you give appropriate attribution, provide a link to the licence, and indicate if changes were made.
+Suggested citation: Stroh, J., & Claude (Anthropic). (2026). Architectural Alignment: Community-Governed AI Through Constitutional Infrastructure (STO-INN-0003, v2.1-C). Agentic Governance Digital. https://agenticgovernance.digital
+Note: The Tractatus AI Safety Framework source code is separately licensed under the Apache License 2.0. This Creative Commons licence applies to the research paper text and figures only.
+— End of Document —
diff --git a/public/architectural-alignment-policymakers.html b/public/architectural-alignment-policymakers.html
index 6edc75d1..3de9f0b1 100644
--- a/public/architectural-alignment-policymakers.html
+++ b/public/architectural-alignment-policymakers.html
@@ -476,6 +476,14 @@
 Copyright © 2026 John Stroh.
+This work is licensed under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0).
+You are free to share, copy, redistribute, adapt, remix, transform, and build upon this material for any purpose, including commercially, provided you give appropriate attribution, provide a link to the licence, and indicate if changes were made.
+Suggested citation: Stroh, J., & Claude (Anthropic). (2026). Architectural Alignment: Constitutional Governance for Distributed AI Systems (STO-INN-0003, v2.1-P). Agentic Governance Digital. https://agenticgovernance.digital
+Note: The Tractatus AI Safety Framework source code is separately licensed under the Apache License 2.0. This Creative Commons licence applies to the research paper text and figures only.
+— End of Document —
diff --git a/public/architectural-alignment.html b/public/architectural-alignment.html
index e451764a..973d11f8 100644
--- a/public/architectural-alignment.html
+++ b/public/architectural-alignment.html
@@ -446,7 +446,15 @@
 — End of Document —
+Copyright © 2026 John Stroh.
+This work is licensed under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0).
+You are free to share, copy, redistribute, adapt, remix, transform, and build upon this material for any purpose, including commercially, provided you give appropriate attribution, provide a link to the licence, and indicate if changes were made.
+Suggested citation: Stroh, J., & Claude (Anthropic). (2026). Architectural Alignment: Interrupting Neural Reasoning Through Constitutional Inference Gating (STO-INN-0003, v2.1-A). Agentic Governance Digital. https://agenticgovernance.digital
+Note: The Tractatus AI Safety Framework source code is separately licensed under the Apache License 2.0. This Creative Commons licence applies to the research paper text and figures only.
+— End of Document —
diff --git a/public/downloads/architectural-alignment-academic-de.html b/public/downloads/architectural-alignment-academic-de.html
index 6b86bba6..23a4ca51 100644
--- a/public/downloads/architectural-alignment-academic-de.html
+++ b/public/downloads/architectural-alignment-academic-de.html
@@ -78,6 +78,12 @@
 Acquisti, A., Brandimarte, L., & Loewenstein, G. (2017). Privacy and human behavior in the age of information. Science, 347(6221), 509-514.
 Alexander, C., Ishikawa, S., & Silverstein, M. (1977). A Pattern Language. Oxford University Press.
 Bai, Y., et al. (2022). Constitutional AI: Harmlessness from AI feedback. arXiv:2212.08073.
 Bostrom, N. (2014). Superintelligence. Oxford University Press.
 Carlsmith, J. (2022). Is power-seeking AI an existential risk? arXiv:2206.13353.
 Christiano, P. F., et al. (2017). Deep reinforcement learning from human preferences. NeurIPS, 30.
 Conmy, A., et al. (2023). Towards automated circuit discovery. arXiv:2304.14997.
 Elhage, N., et al. (2021). A mathematical framework for transformer circuits.
 Gardiner, S. M. (2006). A core precautionary principle. J. Political Philosophy, 14(1), 33-60.
 Goodhart, C. A. (1984). Problems of monetary management.
 Hansson, S. O. (2020). How to be cautious but open to learning. Risk Analysis, 40(8).
 Hubinger, E., et al. (2019). Risks from learned optimization. arXiv:1906.01820.
 IBM IBV. (2026). The enterprise in 2030.
 Olah, C., et al. (2020). Zoom in: An introduction to circuits. Distill.
 Ouyang, L., et al. (2022). Training language models to follow instructions. NeurIPS, 35.
 Park, P. S., et al. (2023). AI deception. arXiv:2308.14752.
 Rawls, J. (1971). A Theory of Justice. Harvard University Press.
 Reason, J. (1990). Human Error. Cambridge University Press.
 Sastry, G., et al. (2024). Computing power and AI governance. arXiv:2402.08797.
 Scheurer, J., et al. (2023). Large language models can strategically deceive. arXiv:2311.07590.
 Simon, H. A. (1956). Rational choice. Psych. Review, 63(2).
 Te Mana Raraunga. (2018). Māori Data Sovereignty Principles.
 Wittgenstein, L. (1921/1961). Tractatus Logico-Philosophicus.
 — End of Document —
+Copyright © 2026 John Stroh.
+Dieses Werk ist lizenziert unter der Creative Commons Namensnennung 4.0 International Lizenz (CC BY 4.0).
+Es steht Ihnen frei, das Material zu teilen, zu kopieren, weiterzuverbreiten, anzupassen, zu remixen, zu transformieren und darauf aufzubauen, auch kommerziell, sofern Sie eine angemessene Quellenangabe machen, einen Link zur Lizenz angeben und kenntlich machen, ob Änderungen vorgenommen wurden.
+Hinweis: Der Quellcode des Tractatus AI Safety Framework ist separat unter der Apache License 2.0 lizenziert. Diese Creative-Commons-Lizenz gilt nur für den Text und die Abbildungen der Forschungsarbeit.