docs: Add scholar outreach materials for Taonga paper review

Draft emails and tailored precis documents for Kukutai, Hudson,
Carroll, and Biasiny-Tule, seeking critical review of STO-RES-0010.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
TheFlow 2026-02-11 21:27:59 +13:00
parent 77d1db41f0
commit 40cc27785b
10 changed files with 294 additions and 0 deletions

# Draft Emails to Scholars — Taonga-Centred Steering Governance Paper
## 1. Professor Tahu Kukutai — University of Waikato
**To:** tahuk@waikato.ac.nz
**Subject:** Request for critical review — Taonga-Centred Steering Governance (STO-RES-0010)
---
Tēnā koe Professor Kukutai,
My name is John Stroh. I am a retired technologist based in North Canterbury, now working on a small research programme concerned with sovereign AI deployment for communities in Aotearoa. The programme operates under the name Tractatus and is documented at agenticgovernance.digital.
I am writing to you because we have produced a draft paper that draws substantially on your work — in particular your co-edited volume *Indigenous Data Sovereignty: Toward an Agenda* (2016) and the principles established by Te Mana Raraunga, of which I understand you are a founding member. The paper is entitled "Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models" and proposes an architectural and governance model in which steering vectors — the mathematical instruments used to adjust language model behaviour at inference time — are treated as governed cultural objects rather than engineering affordances.
The central argument is that some domains of cultural knowledge are structurally outside the platform operator's authority to define or correct, and that the governance of model behaviour should be polycentric rather than hierarchical. We draw on the concepts of taonga, tikanga, kaitiakitanga, and tino rangatiratanga to develop this argument.
I must be candid about the paper's principal limitation: it is written by a non-Māori author in collaboration with an AI assistant. It has not been reviewed by Māori scholars or practitioners. We say as much in the paper itself, and we mean it. The governance concepts from te ao Māori that we invoke are complex, living concepts that carry authority far beyond what we can adequately represent. There is a genuine risk that we have misapplied, oversimplified, or inappropriately instrumentalised them.
It is precisely for this reason that I am writing to you. I should be most grateful if you would be willing to read the paper and offer your critical assessment — not as endorsement, but as the kind of rigorous scrutiny the work requires before it can claim to serve the communities it describes. If the proposals are fundamentally misconceived, I would rather know that now than discover it after implementation.
The paper is available here:
https://agenticgovernance.digital/docs-viewer.html?slug=taonga-centred-steering-governance-polycentric-authority-for-sovereign-small-language-models
I attach a short precis that summarises the argument and its relevance to your work. It may be useful for assessing whether the full paper warrants your time.
I am also writing separately to Associate Professor Maui Hudson and to Dr Stephanie Russo Carroll, whose CARE Principles the paper draws upon. I mention this in the interest of transparency, not to imply any prior coordination.
I recognise that your time is heavily committed and that unsolicited requests of this kind arrive frequently. If this is not something you are able to take on, I entirely understand.
Ngā mihi nui,
John Stroh
agenticgovernance.digital
Balcairn, North Canterbury
---
## 2. Associate Professor Maui Hudson — University of Waikato
**To:** maui.hudson@waikato.ac.nz
**Subject:** Request for critical review — Taonga-Centred Steering Governance (STO-RES-0010)
---
Tēnā koe Associate Professor Hudson,
My name is John Stroh. I am a retired technologist based in North Canterbury, working on a small research programme concerned with sovereign AI deployment for communities in Aotearoa, documented at agenticgovernance.digital.
I am writing to you because we have produced a draft paper that is, in a sense, an attempt to address architecturally what your "Tikanga in Technology" programme addresses from within te ao Māori. The paper, "Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models," proposes a governance model for steering vectors — the mathematical instruments used to adjust language model behaviour at inference time — in which iwi and community authorities operate as co-equal peers to the platform operator, not as downstream consumers of its corrections.
Your work is woven through the paper in ways both explicit and structural. You are a co-author of the CARE Principles (Carroll et al., 2020), which we cite directly. You are a founding member of Te Mana Raraunga, whose charter informs our framing of Māori data as taonga. And your development of Biocultural Labels represents a practical precedent for the "taonga steering registries" we propose — governed metadata systems that encode provenance, access conditions, and cultural authority over digital objects.
I must be straightforward about what the paper is and what it is not. It is written by a non-Māori author in collaboration with an AI assistant. It has not been reviewed by Māori scholars or practitioners. We acknowledge this limitation explicitly in the paper, and we do not treat it as a formality. The concepts from te ao Māori that we draw upon — taonga, tikanga, kaitiakitanga, tino rangatiratanga, mana — carry meanings and obligations that we cannot fully represent. The risk of misappropriation or oversimplification is real.
I should be most grateful if you would be willing to read the paper and offer your critical assessment. I am not seeking endorsement. I am seeking the kind of correction that can only come from someone who works within the knowledge systems the paper attempts to engage with. If the direction is fundamentally wrong, that is a finding of equal value.
The paper is available here:
https://agenticgovernance.digital/docs-viewer.html?slug=taonga-centred-steering-governance-polycentric-authority-for-sovereign-small-language-models
I attach a short precis that summarises the argument and its connection to your work. It may be useful for assessing whether the full paper warrants your time.
I am also writing separately to Professor Tahu Kukutai and to Dr Stephanie Russo Carroll. I mention this for transparency.
I am aware that requests of this nature are frequent and that your commitments are substantial. If this does not fit your current programme of work, I entirely understand.
Ngā mihi nui,
John Stroh
agenticgovernance.digital
Balcairn, North Canterbury
---
## 3. Andrew Martinez — Collaboratory for Indigenous Data Governance, University of Arizona
*Note: Dr Stephanie Russo Carroll is on sabbatical (July 2025 to June 2026). Per her auto-reply, correspondence should be directed to Andrew Martinez, Research Coordinator.*
**To:** andrewmartinez@arizona.edu
**Subject:** For Dr Carroll's consideration when time permits — Taonga-Centred Steering Governance (STO-RES-0010)
---
Dear Mr Martinez,
My name is John Stroh. I am a retired technologist based in New Zealand, working on a small research programme concerned with sovereign AI deployment for indigenous and community governance (agenticgovernance.digital).
I wrote to Dr Carroll regarding a draft paper that builds directly on the CARE Principles (Carroll et al., 2020) and on the Collaboratory's recent IEEE 2890-2025 standard. Her sabbatical auto-reply directed me to you.
The paper, "Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models," proposes a polycentric governance architecture for AI steering vectors — the mathematical instruments used to adjust language model behaviour at inference time. It argues that indigenous and community authorities should maintain co-equal jurisdiction over model behaviour alongside the platform operator, with explicit provenance tracking for every steering intervention. The paper draws on concepts from te ao Māori but is intended to be generalisable to other indigenous governance contexts.
I am not asking for an immediate response. I understand Dr Carroll's time is committed to funded projects and service to Indigenous Peoples, and this falls outside those areas. I should simply be grateful if you could pass the attached precis and paper link to her for consideration when her schedule permits — whether during or after her sabbatical.
The paper is available here:
https://agenticgovernance.digital/docs-viewer.html?slug=taonga-centred-steering-governance-polycentric-authority-for-sovereign-small-language-models
I attach a short precis summarising the argument and its connection to the CARE Principles and IEEE 2890-2025. I am also writing to Professor Tahu Kukutai and Associate Professor Maui Hudson at the University of Waikato, whose work the paper draws upon.
Thank you for your time.
With respect,
John Stroh
agenticgovernance.digital
Balcairn, North Canterbury
---
## 4. Potaua Biasiny-Tule — Digital Natives Academy / UNESCO HILEG-ELT
**To:** [via introduction — confirm address]
**Subject:** Request for critical review — Taonga-Centred Steering Governance (STO-RES-0010)
---
Tēnā koe Potaua,
My name is John Stroh. I am a retired technologist based in North Canterbury, working on a small research programme concerned with sovereign AI deployment for communities in Aotearoa, documented at agenticgovernance.digital.
I am writing to you — via [introducer's name]'s kind introduction — because we have produced a draft paper that attempts to address architecturally what you have been advocating for publicly: that tikanga should shape how AI operates for and with Māori. The paper, "Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models," proposes a governance architecture in which steering vectors — the mathematical instruments used to adjust language model behaviour at inference time — are governed polycentrically, with iwi and community authorities operating as co-equal peers to the platform operator rather than as downstream consumers of its corrections.
The central argument is that some domains of cultural knowledge are structurally outside the platform operator's authority to define or correct. Your work on UNESCO's High-Level Expert Lead Group on the Governance of Ecosystem-Level Transformation in AI addresses this at the ecosystem level — the question of how AI governance structures should accommodate, rather than subordinate, indigenous authority. Our paper arrives at a similar question from a different direction: what does a technical architecture look like that actually implements polycentric authority at the inference layer?
The paper draws on concepts from te ao Māori — taonga, tikanga, kaitiakitanga, tino rangatiratanga — to develop three architectural commitments: that steering packs encoding iwi knowledge are treated as governed cultural objects with iwi-controlled lifecycles; that iwi governance bodies operate as co-equal steering authorities alongside the platform; and that iwi hold an unconditional right of non-participation that the platform must respect as a governed absence, not a gap to fill.
I must be straightforward about what the paper is and what it is not. It is written by a non-Māori author in collaboration with an AI assistant. It has not been reviewed by Māori scholars or practitioners. We say this in the paper itself, and we mean it. The concepts from te ao Māori that we invoke carry authority and obligation far beyond what we can adequately represent. There is a genuine risk that we have misapplied, oversimplified, or inappropriately instrumentalised them.
It is for this reason that I am seeking critical review — not endorsement — from people whose work and practice give them the standing to judge whether these proposals respect the governance traditions they invoke or merely provide new mechanisms for their subordination.
I am also writing to Professor Tahu Kukutai, Associate Professor Maui Hudson, and Dr Stephanie Russo Carroll, whose published work the paper draws on directly. I mention this for transparency.
What your perspective offers that theirs does not — and I say this with full respect for their contributions — is operational. Your work building Digital Natives Academy and Digital Basecamp, your whānau's establishment of Native Tech as an NZQA-registered PTE, the launch of Google Māori, and your collaboration with the Alan Turing Institute on AI and data justice research — these demonstrate that Māori-led digital infrastructure is not a theoretical proposition but an operational reality. You have built and run these systems at a scale where governance frameworks either work or they don't. It is precisely this that makes your assessment of whether our governance proposals are viable or misconceived particularly valuable. A proposal that looks coherent on paper but fails at the coalface is not a contribution.
The paper is available here:
https://agenticgovernance.digital/docs-viewer.html?slug=taonga-centred-steering-governance-polycentric-authority-for-sovereign-small-language-models
I attach a short precis that summarises the argument and its connection to your work. It may be useful for assessing whether the full paper warrants your time.
I recognise that your commitments are substantial and that unsolicited requests of this kind arrive frequently. If this is not something you are able to take on, I entirely understand.
Ngā mihi nui,
John Stroh
agenticgovernance.digital
Balcairn, North Canterbury

# Precis: Taonga-Centred Steering Governance
## Polycentric Authority for Sovereign Small Language Models
*STO-RES-0010 v0.1 DRAFT — Stroh & Claude (2026)*
---
The paper addresses a governance problem that arises when communities deploy their own language models with full access to model weights, rather than consuming AI through commercial APIs.
Such sovereign deployments permit direct modification of the model's internal representations at inference time through steering vectors — interventions that determine how the model represents kinship, place, authority, grief, and spiritual practice. These are instruments of norm enforcement. The paper asks who should govern them.
The prevailing answer is: the platform operator. The operator defines bias, extracts the corrections, distributes them downward. Communities customise within the limits set from above. The paper argues that for domains of Māori cultural knowledge, this hierarchy is structurally wrong, and proposes a polycentric alternative.
The proposal has three elements that connect directly to your work:
First, the paper proposes that steering packs encoding iwi knowledge be treated as taonga — with iwi-controlled lifecycles, access conditions, and constraints on redistribution that the platform cannot override. These packs are governed cultural objects, not plugins. The governance architecture that protects them must be structural, not policy-based — the platform cannot circumvent access conditions or substitute its own values when an iwi declines to participate.
Second, the architecture is polycentric. Iwi governance bodies and community trusts operate as co-equal steering authorities alongside the platform operator, each with distinct jurisdiction. There is no single apex. This maps directly to the distinction between delegation and recognition that the paper develops: in the delegation model, the platform grants communities the ability to customise within limits it defines; in the recognition model, community authority exists independently and the architecture either accommodates it or fails to.
Third, iwi hold a right of non-participation. They may decline to publish packs, may withdraw them at any time, and the platform must not substitute its own values into the gap. This is the architectural expression of tino rangatiratanga: iwi sovereignty does not depend on the platform's existence or goodwill.
You have argued publicly that tikanga should shape how AI operates for and with Māori — what you have called "Tikanga AI." This paper is, in a sense, an attempt to describe from the platform side a technical substrate that would be compatible with that vision. It proposes an architecture where tikanga-based governance has co-equal authority over model behaviour, rather than operating as a cultural overlay on top of a platform-sovereign system.
Whether it succeeds is not a question we can answer ourselves. The paper is written by a non-Māori author in collaboration with an AI assistant. The concepts from te ao Māori that it draws upon carry authority and obligation beyond what we can represent.
What makes your assessment particularly valuable is that you have done what few others have: built sovereign Māori digital infrastructure at scale. Digital Natives Academy, Digital Basecamp, the launch of Google Māori — and more recently your whānau's establishment of Native Tech as an NZQA-registered PTE — demonstrate that Māori-led digital infrastructure is not a theoretical proposition but an operational reality. Your collaboration with the Alan Turing Institute on AI and data justice research shows this extends beyond education into precisely the governance questions the paper addresses. The scholars whose review we are also seeking — Kukutai, Hudson, Carroll — have developed the governance frameworks the paper draws on. You know where those frameworks meet the operational realities of actually running Māori-owned digital systems. It is precisely this that makes your judgment of whether the paper's proposals are viable, misconceived, or somewhere between the two so valuable.
---
Reference: Stroh, J. & Claude (2026). Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models. STO-RES-0010 v0.1 DRAFT. agenticgovernance.digital

# Precis: Taonga-Centred Steering Governance
## Polycentric Authority for Sovereign Small Language Models
*STO-RES-0010 v0.1 DRAFT — Stroh & Claude (2026)*
---
The paper addresses a governance problem that arises when communities deploy their own language models with full access to model weights, rather than consuming AI through commercial APIs.
Such sovereign deployments permit direct modification of the model's internal representations at inference time through steering vectors — interventions that determine how the model represents kinship, place, authority, grief, and spiritual practice. These are instruments of norm enforcement. The paper asks who should govern them, and argues that the answer should not default to the platform operator.
The CARE Principles provide the paper's normative foundation. The proposed architecture attempts to operationalise each:
**Collective Benefit** is addressed through polycentric governance: indigenous and community authorities maintain co-equal jurisdiction over model behaviour alongside the platform operator, ensuring that corrections to cultural representation serve the communities whose knowledge they encode.
**Authority to Control** is addressed through "taonga steering registries" — governed repositories, operated by indigenous institutions, that maintain independent control over the creation, versioning, access conditions, and withdrawal of steering packs encoding cultural knowledge. The platform integrates with these registries but cannot encapsulate, fork, or redistribute their contents.
**Responsibility** is addressed through mandatory steering provenance: every inference records which steering packs were active, from which authorities, at what magnitude, and under what governance terms. This makes norm enforcement attributable and contestable, rather than opaque.
**Ethics** is addressed through the rejection of a single bias ontology. The paper argues that different governance authorities will define bias differently — and that those definitions may legitimately conflict. The architecture supports multiple bias frameworks simultaneously without requiring reconciliation into a single schema.
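To make the provenance requirement above concrete, a minimal record for a single steered inference might carry fields like the following. This is an illustrative sketch only — the paper specifies no schema, and every field name here is our invention:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class SteeringProvenanceRecord:
    """Illustrative record of one steered inference.

    All field names are hypothetical; the paper requires only that each
    inference record which packs were active, from which authorities,
    at what magnitude, and under what governance terms.
    """
    inference_id: str
    timestamp: datetime
    pack_id: str           # which steering pack was active
    authority: str         # governing body that issued the pack
    magnitude: float       # steering strength applied at inference time
    governance_terms: str  # conditions the pack was applied under

# Example record (all values invented for illustration)
record = SteeringProvenanceRecord(
    inference_id="inf-0001",
    timestamp=datetime.now(timezone.utc),
    pack_id="pack-kinship-v1",
    authority="example-iwi-registry",
    magnitude=0.8,
    governance_terms="review-only; no redistribution",
)
print(record.pack_id)  # prints pack-kinship-v1
```

A record of this shape is what makes norm enforcement attributable and contestable: any output can be traced back to the authorities and terms under which it was steered.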
The paper is grounded in concepts from te ao Māori — taonga (treasured possessions subject to kaitiakitanga), tikanga (customary practice), tino rangatiratanga (self-determination) — as these are the indigenous governance frameworks most relevant to our context in Aotearoa. The architectural model, however, is intended to be adaptable beyond that context.
**Two questions arise on which your perspective would be particularly valuable.** First, the paper's "steering provenance" bears an obvious resemblance to what your IEEE 2890-2025 standard addresses at the standards level. We may have arrived independently at a narrower application of the same principle, or we may have missed structural requirements that the standard identifies. Second, we ask whether the polycentric governance architecture — co-equal authorities, governed registries, right of non-participation — transfers meaningfully to indigenous governance contexts beyond Aotearoa, or whether its Māori-specific framing limits its applicability.
The paper is a draft written by a non-indigenous author in collaboration with an AI assistant, without indigenous peer review. We are concurrently seeking review from Professor Tahu Kukutai and Associate Professor Maui Hudson at the University of Waikato. We recognise that the concepts we draw upon carry authority and obligation beyond what we can represent, and we invite correction.
---
Reference: Stroh, J. & Claude (2026). Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models. STO-RES-0010 v0.1 DRAFT. agenticgovernance.digital

# Precis: Why "Taonga-Centred Steering Governance" Matters
The dominant paradigm in AI alignment assumes a single normative authority. Whether the corrective mechanism is reinforcement learning from human feedback (RLHF), constitutional AI, or inference-time steering vectors, someone decides what counts as bias, defines the axes of correction, and distributes those corrections to all downstream users. The governance topology is a tree with one root. This paper argues that topology is the problem.
The core move is to reframe AI bias correction as a question of political authority rather than engineering optimisation. Steering vectors — mathematical interventions that adjust a model's internal representations at inference time — are not neutral technical affordances. They are instruments of norm enforcement. The question "how do we debias a model?" decomposes into prior questions that are irreducibly political: Who defines what bias is? Through what process? With what recourse for the governed?
The philosophical significance is threefold:
First, the paper makes an ontological argument against monism in bias definition. Bias is not a natural kind discoverable by measurement. It is a judgment made from within a normative framework. Different communities, operating from different frameworks, will define bias differently — and those definitions may legitimately conflict. The paper draws on Elinor Ostrom's polycentric governance to propose a structure in which multiple authorities maintain co-equal jurisdiction over model behaviour, with no single apex. This is a direct challenge to the implicit universalism of current alignment approaches, which treat "human values" as a singular object to be discovered and encoded.
Second, it introduces a distinction between delegation and recognition models of authority. In the delegation model, the platform operator grants downstream communities the ability to customise the model within limits the platform defines. In the recognition model, community authority exists independently of the platform; the architecture either accommodates that authority or fails to. This distinction matters because delegation preserves the platform as constitutional root — communities operate within its frame — while recognition requires the platform to accept structural limits on its own jurisdiction. The paper uses the concept of taonga (treasured possessions subject to kaitiakitanga/guardianship in te ao Māori) to make this concrete: when a steering pack encodes iwi knowledge of whakapapa, tikanga, or spiritual practice, it is not a "plugin" the platform hosts but a governed cultural object whose lifecycle, access conditions, and withdrawal rights belong entirely to iwi institutions.
Third, the paper embeds a right of non-participation as an architectural constraint, not a policy concession. An iwi steering authority can refuse to publish packs, can set conditions on their use, can withdraw them at any time — and the platform must function without them and must not substitute its own values into the resulting gap. This is the sharpest departure from current practice: in every existing AI system, the platform fills all governance vacuums by default. Here, the absence of an iwi pack is treated as a governed absence — a boundary the platform must respect, not a space it should colonise.
Why this matters beyond AI ethics: The paper is, at bottom, an argument that technological substrates can either entrench or accommodate political pluralism, and that the choice is architectural. Current AI systems architecturally enforce value monism -- a single set of guardrails, opaque to users, contestable by no one. The polycentric alternative proposed here would make norm enforcement visible (through steering provenance), distributed (across co-equal authorities), and contestable (through explicit conflict-resolution processes). It does not claim to resolve the tension between universal safety baselines and plural cultural norms -- it claims, more modestly, that this tension should be made explicit and politically navigable rather than hidden inside an engineering stack.
The paper is honest about its limits: it is conceptual, not implemented; it is written without Māori co-authorship and awaits indigenous peer review; and it acknowledges that the institutional trust required for polycentric governance cannot be architected into existence. But as a philosophical contribution, it demonstrates that the question of who steers a language model is not a feature request — it is a constitutional question, and current architectures have already answered it by defaulting to platform sovereignty.
---
Reference: Stroh, J. & Claude (2026). Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models. STO-RES-0010 v0.1 DRAFT. agenticgovernance.digital

# Precis: Taonga-Centred Steering Governance
## Polycentric Authority for Sovereign Small Language Models
*STO-RES-0010 v0.1 DRAFT — Stroh & Claude (2026)*
---
The paper addresses a governance problem that arises when communities deploy their own language models with full access to model weights, rather than consuming AI through commercial APIs.
Such sovereign deployments permit direct modification of the model's internal representations at inference time through steering vectors — interventions that determine how the model represents kinship, place, authority, grief, and spiritual practice. These are instruments of norm enforcement. The paper asks who should govern them.
The prevailing answer is: the platform operator. The operator defines bias, extracts the corrections, distributes them downward. Communities customise within the limits set from above. The paper argues that for domains of Māori cultural knowledge, this hierarchy is structurally wrong, and proposes a polycentric alternative.
The proposal has three elements that connect directly to your work:
First, steering packs that encode iwi knowledge are treated as taonga — with iwi-controlled lifecycles, access conditions, and constraints on redistribution. The paper proposes iwi-operated "taonga steering registries" that are functionally analogous to the Biocultural Labels and Local Contexts infrastructure you have developed, applied here to AI steering interventions rather than to collections metadata. These registries enforce governance at the API level: access conditions, provenance verification, and revocation rights that the platform cannot circumvent.
Second, the architecture is polycentric. Iwi governance bodies and community trusts operate as co-equal steering authorities alongside the platform operator, each with distinct jurisdiction. There is no single apex. The model's activation space is a shared substrate, not a constitutional order. This maps to the distinction between delegation (platform grants authority downward) and recognition (iwi authority exists independently; the architecture either accommodates it or fails to).
Third, iwi hold a right of non-participation. They may decline to publish packs, may withdraw them at any time, and the platform must not substitute its own values into the gap. This is the architectural expression of tino rangatiratanga: iwi sovereignty does not depend on the platform's existence or goodwill.
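The registry-level enforcement described in the first element above reduces, at its simplest, to the registry — not the platform — deciding every access request and holding an unconditional power of revocation. A toy sketch, with all class and identifier names invented for illustration (the paper proposes no implementation):

```python
class TaongaSteeringRegistry:
    """Toy model of an iwi-operated steering registry.

    The registry, not the platform, grants or refuses access to each
    pack and may withdraw a pack at any time. All names here are
    hypothetical; this is a sketch of the governance pattern only.
    """

    def __init__(self):
        self._packs = {}       # pack_id -> set of permitted purposes
        self._revoked = set()  # withdrawn packs

    def publish(self, pack_id, permitted_purposes):
        """Iwi-controlled publication with explicit access conditions."""
        self._packs[pack_id] = set(permitted_purposes)

    def withdraw(self, pack_id):
        """Right of withdrawal: the platform cannot override this."""
        self._revoked.add(pack_id)

    def request_access(self, pack_id, purpose):
        """Every platform request is decided here, at the registry."""
        if pack_id in self._revoked or pack_id not in self._packs:
            # A governed absence: the platform must not fill this gap
            # with its own values.
            return False
        return purpose in self._packs[pack_id]

registry = TaongaSteeringRegistry()
registry.publish("pack-kinship-v1", {"community-deployment"})
print(registry.request_access("pack-kinship-v1", "community-deployment"))  # True
registry.withdraw("pack-kinship-v1")
print(registry.request_access("pack-kinship-v1", "community-deployment"))  # False
```

The design point is that refusal and withdrawal return the same governed absence: the platform receives no pack and no licence to substitute its own correction.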
The paper is, in a sense, an attempt to describe from the platform side a technical substrate that would be compatible with what your "Tikanga in Technology" programme addresses from within te ao Māori. Whether it succeeds — whether the architecture genuinely respects tikanga-based governance or merely provides new mechanisms for its subordination — is not a question we can answer ourselves.
---
Reference: Stroh, J. & Claude (2026). Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models. STO-RES-0010 v0.1 DRAFT. agenticgovernance.digital

# Precis: Taonga-Centred Steering Governance
## Polycentric Authority for Sovereign Small Language Models
*STO-RES-0010 v0.1 DRAFT — Stroh & Claude (2026)*
---
The paper addresses a governance problem that arises when communities deploy their own language models rather than consuming commercial AI through APIs.
Sovereign small language models — locally hosted, with full access to model weights — permit a class of intervention unavailable to API consumers: direct modification of the model's internal representations at inference time through steering vectors. These interventions determine how the model represents kinship, place, authority, grief, and spiritual practice. They are, in substance, instruments of norm enforcement.
The question the paper poses is not technical but political: who governs these instruments?
The prevailing architecture assumes a single governance root. The platform operator defines what constitutes bias, extracts the corrections, and distributes them to all downstream deployments. Communities may customise within the limits the platform sets, but they cannot contest the root definitions. This is a delegation model: authority flows downward from the platform.
The paper argues that for domains of Māori cultural knowledge — whakapapa, tikanga, kawa, the mana of kaumātua and kuia — this hierarchy is structurally inappropriate regardless of the platform operator's intentions. It proposes a polycentric alternative, drawing on Ostrom and on the principles your work has established through *Indigenous Data Sovereignty* and Te Mana Raraunga.
The core of the proposal is a distinction between delegation and recognition. In the delegation model, the platform accommodates indigenous governance as a feature. In the recognition model, iwi authority exists prior to and independently of the platform, and the architecture either respects that independence or undermines it. The paper argues for recognition, and develops three architectural consequences:
First, steering packs that encode iwi knowledge are treated as taonga — subject to kaitiakitanga, with iwi-controlled lifecycles, access conditions, and constraints on redistribution that the platform cannot override.
Second, iwi governance bodies operate as co-equal steering authorities alongside the platform, not beneath it. Each maintains its own registry, its own bias ontology, its own review processes. The model's activation space is a shared technical substrate, not a constitutional order with the platform at its apex.
Third, iwi hold a right of non-participation. They may refuse to publish steering packs, may withdraw them at any time, and the platform must not fill the resulting space with its own values. The absence of an iwi pack is a governed absence, not a gap.
The paper is honest about what it cannot do. It cannot resolve the tension between platform safety baselines and plural cultural norms — it can only make that tension visible and politically navigable. It cannot create the institutional trust that polycentric governance requires. And it cannot, as a work of non-Māori authorship, speak with authority about the concepts from te ao Māori on which its argument depends.
It is that last limitation which brings us to you.
---
Reference: Stroh, J. & Claude (2026). Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models. STO-RES-0010 v0.1 DRAFT. agenticgovernance.digital
