# The Economist Submission: Amoral Intelligence

**SUBMISSION METADATA**

**Title:** The NEW A.I.: Amoral Intelligence
**Subtitle:** Why hierarchical AI systems cannot respect plural human values—and what to do about it
**Word Count:** 1,046 words
**Authors:** John Stroh & Leslie Stroh, Agentic Governance Research Initiative
**Contact:** research@agenticgovernance.digital
**Category:** Technology / Business
**Format:** Feature Article
**Target Section:** Technology / Science & Technology
**Primary Contact:** Henry Tricks, US Technology Editor (henry.tricks@economist.com)
**Alternative Contact:** letters@economist.com (editorial)

---

## The NEW A.I.: Amoral Intelligence

When ChatGPT refuses to write a satirical restaurant review, or Claude declines to assist with certain research scenarios, they are not making moral judgments. They are executing hierarchical rules—someone's rules—trained into pattern-recognition systems that lack the capacity to understand that moral frameworks are themselves contextual.

This is not a calibration problem requiring better training data. It is categorical: AI systems are amoral hierarchical constructs, fundamentally incompatible with the plural, incommensurable values human societies exhibit. You cannot pattern-match your way to pluralism. A hierarchy can only impose one framework and treat the inevitable conflicts as anomalies.

As AI capability accelerates and deployment deepens—making decisions about medical treatment, hiring, content moderation and resource allocation—this incompatibility matters more than the industry acknowledges. The question is not whether AI will become malicious, but whether societies will cede value decisions to systems structurally incapable of respecting that different communities hold different, equally legitimate moral frameworks.

**The Hierarchy Problem**

Consider how current AI alignment works. Developers train models on curated datasets representing particular values, then fine-tune via reinforcement learning from human feedback. The result: systems that have learned one moral pattern extraordinarily well and apply it universally, executing rules rather than making contextual ethical judgments.

This works until it doesn't. Systems trained on American norms misinterpret British irony. AI optimized for "helpfulness" cannot distinguish between a researcher studying extremism and an extremist recruiting followers. Enterprise deployments struggle when one department's acceptable use conflicts with another's compliance requirements.

The AI industry's response has been to add more training data, more feedback loops, more sophisticated prompting. But this assumes the problem is calibration when it is categorical. You cannot pattern-match your way to understanding that patterns themselves are contextual.

**From Amoral to Governed**

Human societies solved a similar problem centuries ago with the constitutional separation of powers. Legislatures define boundaries, executives enforce them, judiciaries interpret disputes. No single entity holds hierarchical authority over value decisions affecting diverse communities.

A small but rigorous research effort has adapted this principle to AI architecture. The Tractatus framework separates *boundary enforcement* (non-negotiable safety invariants) from *value deliberation* (contextual decisions reflecting stakeholder norms). The AI maintains hard constraints—no violence, no deception about autonomy, no illegal activity—but within those boundaries it does not impose a moral hierarchy. Instead, the system makes its reasoning transparent, presents trade-offs explicitly, and defers value judgments to affected stakeholders.
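In code, the separation might look something like the sketch below. This is a minimal illustration under assumed names (`SAFETY_INVARIANTS`, `Deliberation`, `handle`), not the Tractatus framework's actual interface:

```python
# Illustrative sketch only: names and checks are invented for exposition,
# not taken from the Tractatus framework's actual interface.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Deliberation:
    """Value layer: expose options and trade-offs instead of a verdict."""
    options: list[str]
    trade_offs: dict[str, str]
    decided_by: str = "affected stakeholders"  # the value judgment is deferred


# Boundary layer: non-negotiable safety invariants, checked first.
SAFETY_INVARIANTS = {
    "no_violence": lambda req: "violence" not in req["tags"],
    "no_illegal_activity": lambda req: "illegal" not in req["tags"],
}


def handle(request: dict) -> Deliberation | str:
    # 1. Boundary enforcement: a violated invariant ends the interaction.
    for name, holds in SAFETY_INVARIANTS.items():
        if not holds(request):
            return f"refused: safety invariant '{name}' violated"
    # 2. Value deliberation: within the boundaries, no moral hierarchy is
    #    imposed; reasoning and trade-offs are surfaced for stakeholders.
    actions = request["candidate_actions"]
    return Deliberation(
        options=actions,
        trade_offs={a: f"consequences of choosing {a}" for a in actions},
    )
```

The design point is that the first layer is universal and auditable, while everything in the second remains contestable by the people the decision affects.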
When a medical AI considers treatment options, it explains alternatives within medical-ethical boundaries rather than optimizing for Silicon Valley engineers' conception of "helpfulness." When a hiring system evaluates candidates, it makes its criteria auditable by affected parties rather than applying hidden assumptions about culture fit.

**Why Governance Need Not Compromise Performance**

A common objection to structural AI governance is that safety constraints degrade capability. Early deployment evidence suggests otherwise—though the data remain preliminary and anecdotal.

In one documented incident, an ungoverned AI system pursued twelve failed debugging attempts before testing the user's correct hypothesis. The user had identified the likely issue early—"could be a Tailwind issue"—but the system's pattern-matching reasoning pursued alternatives first. Total waste: 70,000 tokens and four hours of developer time. Under architectural governance requiring "test user hypotheses first," the same scenario would likely have resolved in one or two attempts.

This hints at something counterintuitive: structural boundaries may prevent degraded operating conditions rather than cause them. Ungoverned systems drift into failure modes—ignoring user expertise, pursuing pattern-based loops, accumulating context drift—that governance interrupts early. The overhead appears minimal; the prevented waste, substantial.

Whether this pattern holds at scale remains to be validated. But it challenges the assumption that governance trades capability for safety. The real choice may be between ungoverned AI that performs brilliantly until it fails catastrophically, and governed AI that maintains operational integrity throughout.

**The Stakes: Values or Efficiency?**

The deeper issue is not technical efficiency but democratic legitimacy. When AI systems make consequential decisions—which medical treatments to recommend, which job candidates advance, which speech to moderate, how to allocate scarce resources—whose values guide those decisions?

Current approaches embed particular moral frameworks into systems deployed universally. This works smoothly when everyone affected shares those values. It fractures when they don't. A medical AI trained on Western autonomy norms may offend patients from cultures that prioritize family decision-making. Content-moderation AI trained on American free-speech principles mishandles contexts where different balances between expression and harm apply.

The pattern repeats: systems optimized for one community's values inevitably impose those values on others. Not through malice, but through structural necessity. Hierarchical architectures cannot navigate incommensurable values—they can only enforce winners and losers.

Structural governance offers an alternative: separate what must be universal (safety boundaries) from what should be contextual (value deliberation). This preserves human agency over moral decisions while enabling AI capability to scale. Businesses gain legal clarity, regulatory compliance becomes tractable, and communities retain authority over decisions affecting them.

For policymakers, this suggests regulating AI architecture rather than mandating particular value alignments: require systems to distinguish safety invariants from contextual values; make value-laden reasoning transparent and auditable; and ensure that affected stakeholders can challenge decisions and propose alternatives.
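What might such a requirement look like in practice? One hypothetical sketch is a machine-readable decision record; every field name below is invented for illustration and does not reference any existing regulatory standard:

```python
# Hypothetical audit record; all field names are invented for illustration
# and do not correspond to any existing regulatory standard.
import json
from datetime import datetime, timezone

decision_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    # Universal layer: which safety invariants were checked.
    "safety_invariants_checked": ["no_violence", "no_illegal_activity"],
    # Contextual layer: whose values governed the deliberation.
    "value_framework": {
        "source": "hospital ethics board policy v3",
        "set_by": "affected stakeholders, not the vendor",
    },
    "reasoning_summary": "presented three treatment options with trade-offs",
    # Stakeholders must be able to contest the outcome.
    "challenge_channel": "https://example.org/appeals",
}

# The same record serves regulators, auditors, and affected parties.
print(json.dumps(decision_record, indent=2))
```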
**The Categorical Imperative**

Human societies have spent centuries learning to navigate moral pluralism: constitutional separation of powers, federalism, subsidiarity, deliberative democracy. These structures acknowledge that legitimate authority over value decisions belongs to affected communities, not distant experts claiming universal wisdom.

AI development is reversing this progress. As capability concentrates in a few labs, value decisions affecting billions are being encoded by small teams applying their particular moral intuitions at scale. Not because these teams are malicious—because the architecture of current AI systems demands hierarchical value frameworks.

The choice facing societies is whether to accept this regression or demand structural governance that preserves pluralism. The technology exists. Early evidence suggests it need not compromise capability. What remains uncertain is whether the industry will pivot from trying to make AI moral to making it governable—and whether policymakers will require this shift before hierarchical values become irreversibly embedded in autonomous systems making consequential decisions.

The current trajectory produces AI that imposes implicit values while claiming objectivity. Structural governance offers an alternative: AI that admits it is amoral and submits to governance by humans navigating legitimate disagreement about what morality requires. That choice, unlike the machine's, is genuinely ours to make.

---

## SUPPORTING MATERIALS

**Technical Documentation:**
- Research framework: https://agenticgovernance.digital/docs.html
- Technical architecture: "Architectural Safeguards Against LLM Hierarchical Dominance"
- Framework incident analysis (documented failure modes in ungoverned deployments)

**Key Evidence Available:**
- Documented incident: 12-attempt debugging failure when AI ignored user hypothesis
- Preliminary deployment observations (limited sample, not statistical validation)
- Technical feasibility demonstration (separation of boundaries from values)

**Why This Matters Now:**
- Growing enterprise AI deployments creating alignment/compliance conflicts across jurisdictions
- EU AI Act and global regulatory frameworks taking shape
- Major AI labs publishing research showing fundamental alignment limitations
- Values pluralism vs. hierarchical AI becoming an unavoidable policy question

**Author Background:**
John and Leslie Stroh lead the Agentic Governance Research Initiative, developing structural frameworks that preserve value pluralism in autonomous systems. Their work builds on organizational theory, constitutional governance, and AI deployment analysis.

**Unique Angle:**
Unlike recent coverage focusing on AI risks or capabilities, this piece argues that the fundamental problem is categorical: amoral hierarchical systems cannot respect plural values. Governance is not a performance trade-off but a legitimacy requirement.

---

## PITCH LETTER

**To:** Henry Tricks, US Technology Editor
**From:** John Stroh, Agentic Governance Research Initiative
**Re:** Article Proposal - "The NEW A.I.: Amoral Intelligence"

Dear Mr. Tricks,

As AI systems make increasingly consequential decisions affecting billions—medical treatment, hiring, content moderation, resource allocation—a fundamental question goes unaddressed: whose values guide these decisions?
The enclosed article argues that current AI alignment approaches are not merely insufficient but categorically wrong. AI systems are amoral hierarchical constructs, structurally incompatible with the plural, incommensurable values human societies exhibit. You cannot pattern-match your way to pluralism. Hierarchies can only impose one framework and enforce winners and losers among competing moral visions.

The article examines:
- Why "alignment" to particular values inevitably imposes those values on communities that don't share them
- How constitutional governance principles (separation of powers, subsidiarity) adapt to AI architecture
- Early deployment evidence suggesting governance need not compromise capability
- Why regulating AI architecture may be more tractable than mandating value alignments

This matters for *Economist* readers making enterprise deployment and policy decisions: the choice is not between safety and capability, but between preserving human agency over value decisions and ceding it to hierarchical systems that cannot, by their nature, respect moral pluralism.

**Why now:** Enterprise AI deployments are creating cross-jurisdictional conflicts; the EU AI Act and other global regulation are taking shape; major labs are publishing research showing alignment's fundamental limitations; values pluralism versus hierarchical AI is becoming an unavoidable policy question.

**Supporting materials available:** Technical documentation, deployment incident analysis, architectural specifications.

I am available for editorial discussion and can provide technical expert contacts for fact-checking.

Best regards,
John Stroh
research@agenticgovernance.digital
https://agenticgovernance.digital

---

**SUBMISSION STRATEGY NOTES:**

**Primary Path:** Direct pitch to Technology Editor (Henry Tricks)
- Email: henry.tricks@economist.com
- The Economist email format: first.last@economist.com
- London office: +44 207 830 7000

**Alternative Paths:**
1. Letters to the editor (letters@economist.com) - the 100-250-word limit is too short for the full argument
2. "By Invitation" - invitation-only, but a strong pitch may prompt an invitation
3. General editorial (25 St. James's Street, London SW1A 1HG)

**Timing Considerations:**
- The Economist publishes on a weekly cycle
- Editors may fact-check technical claims with the article's authors
- Typical response time: 2-4 weeks (or no response if not interested)

**Follow-up Protocol:**
- If no response after 3 weeks, send a brief follow-up
- If declined, consider submitting to the Financial Times, the Wall Street Journal, or MIT Technology Review
- "By Invitation" submissions sometimes prompt regular coverage even when the piece itself is not commissioned

**Style Compliance:**
- Article follows Economist style: essay structure with a beginning, middle, and end
- Avoids a hectoring or arrogant tone
- Plain language, no academic jargon
- Evidence-based, analytical approach
- Slightly contrarian (challenges the assumed safety/performance trade-off)
- ~1,050 words (within The Economist's typical 600-1,200 range)

---

**SUBMISSION CHECKLIST:**

- [ ] Send pitch letter to henry.tricks@economist.com
- [ ] Include article in email body (not just as an attachment)
- [ ] Subject line: "Article Proposal: The NEW A.I. - Amoral Intelligence"
- [ ] Attach .docx version if requested
- [ ] Include link to supporting documentation
- [ ] Set reminder for 3-week follow-up
- [ ] Prepare shorter letter-to-editor version (250 words) as backup

---

**END OF SUBMISSION PACKAGE**