Precis: Why "Taonga-Centred Steering Governance" Matters

The dominant paradigm in AI alignment assumes a single normative authority. Whether the corrective mechanism is reinforcement learning from human feedback (RLHF), constitutional AI, or inference-time steering vectors, someone decides what counts as bias, defines the axes of correction, and distributes those corrections to all downstream users. The governance topology is a tree with one root. This paper argues that topology is the problem.

The core move is to reframe AI bias correction as a question of political authority rather than engineering optimisation. Steering vectors -- mathematical interventions that adjust a model's internal representations at inference time -- are not neutral technical affordances. They are instruments of norm enforcement. The question "how do we debias a model?" decomposes into prior questions that are irreducibly political: Who defines what bias is? Through what process? With what recourse for the governed?
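The intervention the paper politicises can be made concrete. A minimal sketch, assuming a toy hidden state and a single steering direction (the function name, dimensions, and values are illustrative, not drawn from the paper):

```python
import numpy as np

def apply_steering(hidden_state: np.ndarray,
                   direction: np.ndarray,
                   strength: float) -> np.ndarray:
    """Shift a model's hidden activations along a chosen direction.

    This is the basic inference-time intervention: no weights change;
    only the activations flowing through the network at generation
    time are nudged.
    """
    # Normalise the direction so `strength` has a consistent meaning.
    unit = direction / np.linalg.norm(direction)
    return hidden_state + strength * unit

# Toy example: a 4-dimensional hidden state nudged along one axis.
h = np.array([0.2, -1.0, 0.5, 0.3])
d = np.array([1.0, 0.0, 0.0, 0.0])
steered = apply_steering(h, d, strength=0.8)
# Whoever chooses `direction` and `strength` decides what "corrected"
# means -- the value judgment is encoded directly in the arithmetic.
```

The point of the sketch is how little machinery separates the mathematics from the politics: the normative choices live entirely in who supplies `direction` and `strength`.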

The philosophical significance is threefold:

First, the paper makes an ontological argument against monism in bias definition. Bias is not a natural kind discoverable by measurement. It is a judgment made from within a normative framework. Different communities, operating from different frameworks, will define bias differently -- and those definitions may legitimately conflict. The paper draws on Elinor Ostrom's polycentric governance to propose a structure in which multiple authorities maintain co-equal jurisdiction over model behaviour, with no single apex. This is a direct challenge to the implicit universalism of current alignment approaches, which treat "human values" as a singular object to be discovered and encoded.

Second, it introduces a distinction between delegation and recognition models of authority. In the delegation model, the platform operator grants downstream communities the ability to customise the model within limits the platform defines. In the recognition model, community authority exists independently of the platform; the architecture either accommodates that authority or fails to. This distinction matters because delegation preserves the platform as constitutional root -- communities operate within its frame -- while recognition requires the platform to accept structural limits on its own jurisdiction. The paper uses the concept of taonga (treasured possessions subject to kaitiakitanga/guardianship in te ao Maori) to make this concrete: when a steering pack encodes iwi knowledge of whakapapa, tikanga, or spiritual practice, it is not a "plugin" the platform hosts but a governed cultural object whose lifecycle, access conditions, and withdrawal rights belong entirely to iwi institutions.
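The delegation/recognition distinction has an architectural fingerprint. As a hedged sketch (the `SteeringPack` type and its field names are this precis's illustration, not an interface defined by the paper), a recognition-model pack carries its governing authority, access conditions, and withdrawal status as first-class data set by that authority, rather than as settings the platform grants and can override:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SteeringPack:
    """A steering pack modelled as a governed cultural object, not a
    hosted plugin: its governance metadata belongs to the issuer."""
    pack_id: str
    issuing_authority: str               # e.g. an iwi institution, not the platform
    access_conditions: tuple[str, ...]   # conditions the authority sets
    withdrawn: bool = False              # withdrawal is the authority's right alone

def resolve_pack(pack: Optional[SteeringPack]) -> Optional[SteeringPack]:
    """Recognition model: the platform reads the authority's terms;
    it does not define them and cannot override a withdrawal."""
    if pack is None or pack.withdrawn:
        # A governed absence: return nothing, substitute nothing.
        return None
    return pack
```

Under a delegation model, by contrast, `withdrawn` and `access_conditions` would be fields the platform could edit; the recognition model makes them writable only by the issuing authority.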

Third, the paper embeds a right of non-participation as an architectural constraint, not a policy concession. An iwi steering authority can refuse to publish packs, can set conditions on their use, can withdraw them at any time -- and the platform must function without them and must not substitute its own values into the resulting gap. This is the sharpest departure from current practice: in every existing AI system, the platform fills all governance vacuums by default. Here, the absence of an iwi pack is treated as a governed absence -- a boundary the platform must respect, not a space it should colonise.
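The contrast between default-fill and governed absence comes down to a single branch. In this hedged sketch (the function names and the notion of a platform "default pack" are illustrative assumptions, not the paper's API), the two resolution policies differ only in what happens when the community has published nothing:

```python
from typing import Optional

PLATFORM_DEFAULT = "platform-default-values"

def resolve_current_practice(community_pack: Optional[str]) -> str:
    """Today's default: every governance vacuum is filled by the platform."""
    return community_pack if community_pack is not None else PLATFORM_DEFAULT

def resolve_governed_absence(community_pack: Optional[str]) -> Optional[str]:
    """The paper's constraint: absence is a boundary, not a vacancy.
    If the authority has not published (or has withdrawn) a pack,
    no steering is applied and nothing is substituted."""
    return community_pack  # may be None, and None is respected

# With no community pack available, the two policies diverge:
assert resolve_current_practice(None) == PLATFORM_DEFAULT
assert resolve_governed_absence(None) is None
```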

Why this matters beyond AI ethics: The paper is, at bottom, an argument that technological substrates can either entrench or accommodate political pluralism, and that the choice is architectural. Current AI systems architecturally enforce value monism -- a single set of guardrails, opaque to users, contestable by no one. The polycentric alternative proposed here would make norm enforcement visible (through steering provenance), distributed (across co-equal authorities), and contestable (through explicit conflict-resolution processes). It does not claim to resolve the tension between universal safety baselines and plural cultural norms -- it claims, more modestly, that this tension should be made explicit and politically navigable rather than hidden inside an engineering stack.
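Steering provenance, as gestured at above, would make the norm-enforcement chain inspectable. A minimal sketch of what such a record might carry (the field names and rendering are this precis's invention, not a format specified by the paper):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceRecord:
    """One entry in the visible chain of steering applied to an output."""
    authority: str      # which co-equal authority issued the intervention
    pack_id: str        # which pack it came from
    strength: float     # how strongly it was applied

def describe(chain: list[ProvenanceRecord]) -> str:
    """Render the chain so a user can see, and contest, who steered what."""
    if not chain:
        return "no steering applied"
    return "; ".join(f"{r.authority}:{r.pack_id}@{r.strength}" for r in chain)

chain = [ProvenanceRecord("iwi-authority", "tikanga-v1", 0.6)]
```

Visibility of this kind is the precondition for contestability: a correction no one can see is a correction no one can dispute.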

The paper is honest about its limits: it is conceptual, not implemented; it is written without Maori co-authorship and awaits indigenous peer review; and it acknowledges that the institutional trust required for polycentric governance cannot be architected into existence. But as a philosophical contribution, it demonstrates that the question of who steers a language model is not a feature request -- it is a constitutional question, and current architectures have already answered it by defaulting to platform sovereignty.


Reference: Stroh, J. & Claude (2026). Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models. STO-RES-0010 v0.1 DRAFT. agenticgovernance.digital