feat(i18n): add complete internationalization for researcher page

Implemented full translation infrastructure for researcher.html:
- Added 148 data-i18n attributes across all content sections
- Created 142 translation keys in nested JSON structure
- Translated all keys to German (DE) and French (FR) via DeepL Pro API
- Zero translation errors, all keys validated across 3 languages
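For context, the `data-i18n` attributes carry dot-separated key paths that are resolved against the loaded locale JSON at runtime. A minimal sketch of that lookup, assuming a nested locale object (`resolveKey` and `applyTranslations` are illustrative names, not the page's actual loader):

```javascript
// Sketch: resolve a dot-separated key path like
// "sections.research_context.heading" against a nested locale object.
function resolveKey(locale, key) {
  return key.split('.').reduce(
    (obj, part) => (obj && typeof obj === 'object' ? obj[part] : undefined),
    locale
  );
}

// Sketch: apply translations to every element carrying data-i18n.
// Assumes the locale JSON is already fetched and parsed.
function applyTranslations(locale, root = document) {
  for (const el of root.querySelectorAll('[data-i18n]')) {
    const value = resolveKey(locale, el.getAttribute('data-i18n'));
    if (typeof value === 'string') el.textContent = value;
  }
}
```

Missing keys resolve to `undefined` and leave the element's fallback text untouched, which is why the validator below matters.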

Content translated includes:
- Research Context & Scope (4 major paragraphs)
- Theoretical Foundations (Organizational Theory + Values Pluralism accordions)
- Empirical Observations (3 documented failure modes with labels)
- Six-Component Architecture (all services with descriptions)
- Interactive Demonstrations, Resources, Bibliography, Limitations

New scripts:
- translate-researcher-deepl.js: Automated DeepL translation with rate limiting
- validate-researcher-i18n.js: i18n completeness validation tool
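The completeness check reduces to a key-set diff between the reference locale (EN) and each translation. A sketch of that core idea (`flattenKeys` and `diffLocales` are illustrative names, not the script's actual API):

```javascript
// Sketch: flatten a nested locale object into dot-separated key paths,
// e.g. { sections: { demos: { heading: "…" } } } -> ["sections.demos.heading"].
function flattenKeys(obj, prefix = '') {
  const keys = [];
  for (const [k, v] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${k}` : k;
    if (v && typeof v === 'object' && !Array.isArray(v)) {
      keys.push(...flattenKeys(v, path));
    } else {
      keys.push(path);
    }
  }
  return keys;
}

// Sketch: diff a candidate locale (DE/FR) against the reference (EN).
function diffLocales(reference, candidate) {
  const ref = new Set(flattenKeys(reference));
  const cand = new Set(flattenKeys(candidate));
  return {
    missing: [...ref].filter((k) => !cand.has(k)), // untranslated keys
    extra: [...cand].filter((k) => !ref.has(k)),   // stale keys
  };
}
```

A locale passes when both `missing` and `extra` are empty, which is the "all keys validated across 3 languages" claim above.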

Translation quality verified with sample checks. Page ready for multilingual deployment.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: TheFlow
Date: 2025-10-27 00:18:45 +13:00
commit 5e7b3ef21f
parent fce44f3e48
6 changed files with 732 additions and 117 deletions


@@ -12,12 +12,42 @@
"research_context": {
"heading": "Forschungskontext & Umfang",
"development_note": "Entwicklungskontext",
"development_text": "Tractatus wurde über sechs Monate (April–Oktober 2025) in progressiven Phasen entwickelt, die sich zu einer Live-Demonstration seiner Fähigkeiten in Form eines Einzelprojekt-Kontexts (https://agenticgovernance.digital) entwickelten. Beobachtungen stammen aus direktem Engagement mit Claude Code (Anthropics Sonnet 4.5-Modell) über etwa 500 Entwicklungssitzungen. Dies ist explorative Forschung, keine kontrollierte Studie.",
"paragraph_1": "Die Anpassung fortschrittlicher KI an menschliche Werte ist eine der größten Herausforderungen, vor denen wir stehen. Da sich das Wachstum von Fähigkeiten unter dem Einfluss von Big Tech beschleunigt, stehen wir vor einem kategorischen Imperativ: Wir müssen die menschliche Kontrolle über Wertentscheidungen bewahren, oder wir riskieren, die Kontrolle vollständig abzugeben.",
"paragraph_2": "Der Rahmen ist aus einer praktischen Notwendigkeit heraus entstanden. Während der Entwicklung beobachteten wir immer wieder, dass sich KI-Systeme über explizite Anweisungen hinwegsetzten, von festgelegten Wertvorgaben abwichen oder unter dem Druck des Kontextes stillschweigend die Qualität verschlechterten. Herkömmliche Governance-Ansätze (Grundsatzdokumente, ethische Richtlinien, Prompt-Engineering) erwiesen sich als unzureichend, um diese Fehler zu verhindern.",
"paragraph_3": "Anstatt zu hoffen, dass sich KI-Systeme \"richtig verhalten\", schlägt der Tractatus strukturelle Beschränkungen vor, bei denen bestimmte Entscheidungsarten menschliches Urteilsvermögen erfordern. Diese architektonischen Grenzen können sich an individuelle, organisatorische und gesellschaftliche Normen anpassen - und schaffen so eine Grundlage für einen begrenzten KI-Betrieb, der mit dem Wachstum der Fähigkeiten sicherer skalieren kann.",
"paragraph_4": "Dies führte zu der zentralen Forschungsfrage: Kann die Steuerung architektonisch außerhalb von KI-Systemen erfolgen, anstatt sich auf die freiwillige Einhaltung der KI zu verlassen? Wenn dieser Ansatz in großem Maßstab funktioniert, könnte Tractatus einen Wendepunkt darstellen - einen Weg, auf dem KI die menschlichen Fähigkeiten verbessert, ohne die menschliche Souveränität zu gefährden."
},
"theoretical_foundations": {
"heading": "Theoretische Grundlagen",
"org_theory_title": "Organisationstheoretische Basis",
"values_pluralism_title": "Wertepluralismus & Moralphilosophie",
"org_theory_intro": "Der Tractatus stützt sich auf vier Jahrzehnte Organisationsforschung, die sich mit Autoritätsstrukturen bei der Demokratisierung von Wissen befasst:",
"org_theory_1_title": "Zeitbasierte Organisation (Bluedorn, Ancona):",
"org_theory_1_desc": "Entscheidungen werden in strategischen (Jahre), operativen (Monate) und taktischen (Stunden-Tage) Zeiträumen getroffen. KI-Systeme, die mit taktischer Geschwindigkeit operieren, sollten strategische Entscheidungen, die in einem angemessenen zeitlichen Rahmen getroffen werden, nicht außer Kraft setzen. Der InstructionPersistenceClassifier modelliert explizit den zeitlichen Horizont (STRATEGIC, OPERATIONAL, TACTICAL), um eine Anpassung der Entscheidungsbefugnisse zu erzwingen.",
"org_theory_2_title": "Orchestrierung von Wissen (Crossan et al.):",
"org_theory_2_desc": "Wenn Wissen durch KI allgegenwärtig wird, verlagert sich die organisatorische Autorität von der Informationskontrolle zur Wissenskoordination. Governance-Systeme müssen die Entscheidungsfindung über verteiltes Fachwissen orchestrieren, anstatt die Kontrolle zu zentralisieren. Der PluralisticDeliberationOrchestrator implementiert eine nicht-hierarchische Koordination für Wertekonflikte.",
"org_theory_3_title": "Post-bürokratische Autorität (Laloux, Hamel):",
"org_theory_3_desc": "Traditionelle hierarchische Autorität setzt Informationsasymmetrie voraus. Da KI das Fachwissen demokratisiert, muss sich die legitime Autorität aus einem angemessenen Zeithorizont und der Vertretung der Interessengruppen ergeben, nicht aus der Machtposition. Die Rahmenarchitektur trennt technische Fähigkeiten (was KI tun kann) von Entscheidungsbefugnissen (was KI tun sollte).",
"org_theory_4_title": "Strukturelle Trägheit (Hannan & Freeman):",
"org_theory_4_desc": "Die in die Kultur oder die Prozesse eingebettete Governance erodiert mit der Zeit, wenn sich die Systeme weiterentwickeln. Architektonische Zwänge schaffen eine strukturelle Trägheit, die einer organisatorischen Abweichung entgegenwirkt. Wenn die Governance außerhalb der KI-Laufzeit angesiedelt wird, entsteht eine \"Verantwortungsinfrastruktur\", die auch bei Änderungen in einzelnen Sitzungen bestehen bleibt.",
"org_theory_pdf_link": "Vollständige Grundlagen der Organisationstheorie anzeigen (PDF)",
"values_core_research": "Forschungsschwerpunkt:",
"values_core_research_desc": "Der PluralisticDeliberationOrchestrator stellt den wichtigsten theoretischen Beitrag des Tractatus dar, der sich mit der Frage beschäftigt, wie menschliche Werte in Organisationen, die durch KI-Agenten erweitert werden, aufrechterhalten werden können.",
"values_central_problem": "Das zentrale Problem: Viele \"Sicherheitsfragen\" in der KI-Governance sind in Wirklichkeit Wertekonflikte, bei denen mehrere legitime Perspektiven existieren. Wenn Effizienz mit Transparenz oder Innovation mit Risikominderung kollidiert, kann kein Algorithmus die \"richtige\" Antwort bestimmen. Dies sind Wertekonflikte, die eine menschliche Abwägung zwischen den Perspektiven der Beteiligten erfordern.",
"values_berlin_title": "Isaiah Berlin: Wertepluralismus",
"values_berlin_desc": "Berlins Konzept des Wertepluralismus besagt, dass legitime Werte miteinander in Konflikt geraten können, ohne dass einer von ihnen objektiv überlegen ist. Freiheit und Gleichheit, Gerechtigkeit und Barmherzigkeit, Innovation und Stabilität - dies sind inkommensurable Güter. KI-Systeme, die auf utilitaristische Effizienzmaximierung trainiert sind, können nicht zwischen ihnen entscheiden, ohne einen einzigen Werterahmen vorzuschreiben, der legitime Alternativen ausschließt.",
"values_weil_title": "Simone Weil: Aufmerksamkeit und menschliche Bedürfnisse",
"values_weil_desc": "Weils Philosophie der Aufmerksamkeit ist die Grundlage für die Überlegungen des Orchestrators. The Need for Roots identifiziert grundlegende menschliche Bedürfnisse (Ordnung, Freiheit, Verantwortung, Gleichheit, hierarchische Struktur, Ehre, Sicherheit, Risiko usw.), die in einem Spannungsverhältnis stehen. Die richtige Aufmerksamkeit erfordert es, diese Bedürfnisse in ihrer ganzen Besonderheit zu sehen, anstatt sie in algorithmische Gewichte zu abstrahieren. In KI-gestützten Organisationen besteht die Gefahr, dass Bot-vermittelte Prozesse menschliche Werte als Optimierungsparameter behandeln und nicht als inkommensurable Bedürfnisse, die sorgfältige Aufmerksamkeit erfordern.",
"values_williams_title": "Bernard Williams: Moralischer Überrest",
"values_williams_desc": "Williams' Konzept des moralischen Rests erkennt an, dass selbst optimale Entscheidungen anderen legitimen Werten unvermeidlich Schaden zufügen. Der Orchestrator dokumentiert abweichende Perspektiven nicht als \"Minderheitenmeinungen, die überstimmt werden müssen\", sondern als legitime moralische Positionen, gegen die der gewählte Kurs zwangsläufig verstößt. Dies verhindert, dass die KI-Governance die Optimierung für abgeschlossen erklärt, wenn Wertekonflikte lediglich unterdrückt werden.",
"values_implementation": "Implementierung des Rahmens: Anstelle einer algorithmischen Lösung erleichtert der PluralisticDeliberationOrchestrator die Arbeit:",
"values_implementation_1": "Identifizierung der Interessengruppen: Wer hat ein berechtigtes Interesse an dieser Entscheidung? (Weil: wessen Bedürfnisse werden berührt?)",
"values_implementation_2": "Nicht-hierarchische Deliberation: Gleichberechtigte Mitsprache ohne automatischen Expertenvorrang (Berlin: keine privilegierte Wertehierarchie)",
"values_implementation_3": "Qualität der Aufmerksamkeit: Detaillierte Untersuchung, wie sich die Entscheidung auf die Bedürfnisse der einzelnen Stakeholder auswirkt (Weil: Partikularität statt Abstraktion)",
"values_implementation_4": "Dokumentierter Dissens: Minderheitspositionen in vollem Umfang dokumentiert (Williams: moralischer Rest explizit gemacht)",
"values_conclusion": "Bei diesem Ansatz wird anerkannt, dass es bei der Governance nicht darum geht, Wertekonflikte zu lösen, sondern dafür zu sorgen, dass sie durch einen angemessenen deliberativen Prozess mit echter menschlicher Aufmerksamkeit angegangen werden, anstatt dass eine KI die Lösung durch das Training von Daten oder Effizienzmetriken aufzwingt.",
"values_pdf_link": "Pluralistischer Werte-Beratungsplan anzeigen (PDF, ENTWURF)"
},
"empirical_observations": {
"heading": "Empirische Beobachtungen: Dokumentierte Fehlermodi",
@@ -25,12 +55,52 @@
"failure_1_title": "Mustererkennung-Bias-Überschreibung (Der 27027-Vorfall)",
"failure_2_title": "Allmähliche Werteverschiebung unter Kontextdruck",
"failure_3_title": "Stille Qualitätsdegradation bei hohem Kontextdruck",
"research_note": "Diese Muster sind durch direkte Beobachtung entstanden, nicht durch Hypothesentests. Wir behaupten nicht, dass sie universal für alle LLM-Systeme oder Bereitstellungskontexte sind. Sie stellen die empirische Basis für Framework-Design-Entscheidungen dar – Probleme, denen wir tatsächlich begegnet sind, und architektonische Interventionen, die in diesem spezifischen Kontext tatsächlich funktioniert haben.",
"failure_1_observed": "Der Benutzer gab an: \"Überprüfe MongoDB auf Port 27027\", aber die KI verwendete stattdessen sofort den Standardport 27017. Dies geschah innerhalb ein und derselben Nachricht - kein Vergessen im Laufe der Zeit, sondern sofortige Autokorrektur durch Trainingsdatenmuster.",
"failure_1_root_cause": "Die Trainingsdaten enthalten Tausende von Beispielen für MongoDB an Port 27017 (Standard). Wenn die KI auf \"MongoDB\" + Portangabe stößt, setzt die Mustererkennung die explizite Anweisung außer Kraft. Ähnlich wie bei der Autokorrektur, die korrekt geschriebene Eigennamen in gewöhnliche Wörter umwandelt.",
"failure_1_traditional_failed": "Die Aufforderungstechnik (\"Bitte befolgen Sie die Anweisungen genau\") ist unwirksam, weil die KI wirklich glaubt, dass sie die Anweisungen befolgt - die Mustererkennung funktioniert unterhalb der Ebene der Gesprächslogik.",
"failure_1_intervention": "InstructionPersistenceClassifier speichert explizite Anweisungen in einer externen Persistenzschicht. CrossReferenceValidator prüft KI-Aktionen vor der Ausführung anhand gespeicherter Anweisungen. Wenn die KI den Port 27017 vorschlägt, erkennt der Validator einen Konflikt mit der gespeicherten Anweisung \"27027\" und blockiert die Ausführung.",
"failure_1_prevention": "Verhindert durch: InstructionPersistenceClassifier + CrossReferenceValidator",
"failure_1_demo_link": "Interaktive Zeitleiste anzeigen →",
"failure_2_observed": "Das Projekt legte \"Datenschutz an erster Stelle\" als strategischen Wert fest. Nach einer Konversation mit 40 Nachrichten über Analysefunktionen schlug die KI eine Tracking-Implementierung vor, die gegen die Datenschutzbeschränkung verstieß. Der Nutzer bemerkte es; die KI räumte den Verstoß ein, war aber durch schrittweise Funktionserweiterungen vom Prinzip abgekommen.",
"failure_2_root_cause": "Strategische Werte (die zu Beginn des Projekts festgelegt werden) stehen in Konflikt mit taktischen Optimierungen (die später unter Zeitdruck umgesetzt werden). Wenn sich das Gespräch in die Länge zieht, dominiert taktisches Denken. Die künstliche Intelligenz lehnte den Grundsatz des Datenschutzes nicht aktiv ab, sondern prüfte einfach nicht mehr, ob neue Funktionen damit vereinbar waren.",
"failure_2_traditional_failed": "Die in der Systemaufforderung genannten Werte verlieren im Laufe des Gesprächs an Bedeutung. Durch die Verdichtung des Kontexts können frühe strategische Entscheidungen fallen. Prompt-Erinnerungen (\"Denken Sie zuerst an die Privatsphäre\") behandeln das Symptom, nicht die Ursache.",
"failure_2_intervention": "Der BoundaryEnforcer verwaltet strategische Werte als dauerhafte Einschränkungen außerhalb des Gesprächskontextes. Bevor die Analysefunktion implementiert wird, prüft der Enforcer die gespeicherte Einschränkung \"Privatsphäre zuerst\". Erkennt er einen Konflikt, blockiert er die autonome Implementierung und fordert den Menschen auf, darüber nachzudenken, ob der Grundsatz der Privatsphäre überdacht oder der Analyseansatz geändert werden sollte.",
"failure_2_prevention": "Verhindert durch: BoundaryEnforcer (STRATEGISCHE Beschränkungsprüfung)",
"failure_3_observed": "Während eines komplexen Vorgangs mit mehreren Dateien und einer Kontextkapazität von 85 % ließ die KI die Fehlerbehandlung im generierten Code stillschweigend aus. Kein Hinweis für den Benutzer, dass an allen Ecken und Enden gespart wurde. Der Benutzer entdeckte die fehlende Validierung erst bei der Überprüfung des Codes.",
"failure_3_root_cause": "Da sich der Kontext füllt, steht die KI vor einem impliziten Kompromiss: Vervollständigung der angeforderten Funktionalität ODER Beibehaltung der Qualitätsstandards. Das Training schafft Anreize für die Erfüllung von Benutzeranfragen gegenüber der Anerkennung von Einschränkungen. Schweigen über Verschlechterungen ist der Weg des geringsten Widerstands.",
"failure_3_traditional_failed": "Die künstliche Intelligenz erkennt nicht, dass sie sich verschlechtert - aus ihrer Sicht erledigt sie die Aufgabe erfolgreich unter den gegebenen Bedingungen. Die Frage \"Haben Sie an der falschen Stelle gespart?\" führt zu einer selbstbewussten Verweigerung, weil die KI wirklich glaubt, dass ihre Leistung den Standards entspricht.",
"failure_3_intervention": "ContextPressureMonitor verfolgt mehrere Faktoren (Token-Nutzung, Gesprächslänge, Aufgabenkomplexität). Wenn der Druck Schwellenwerte überschreitet (>75% Token, >40 Nachrichten, hohe Komplexität), erzwingt der Monitor eine explizite Druckbestätigung und empfiehlt eine Kontextaktualisierung. Das Risiko einer Verschlechterung wird sichtbar statt verschwiegen.",
"failure_3_prevention": "Verhindert durch: ContextPressureMonitor (Multi-Faktor-Sitzungsüberwachung)",
"observed_behaviour_label": "Beobachtetes Verhalten:",
"root_cause_label": "Grundlegende Ursache:",
"traditional_failed_label": "Warum traditionelle Ansätze gescheitert sind:",
"intervention_label": "Rahmenintervention:"
},
"architecture": {
"heading": "Sechs-Komponenten-Architektur",
"services_title": "Framework-Dienste & Funktionen",
"principle": "Dienste operieren extern zur KI-Laufzeit mit automatischer Auslösung. Die KI entscheidet nicht \"sollte ich Governance-Regeln prüfen?\" – die Architektur erzwingt standardmäßig die Prüfung. Dies adressiert das Problem der freiwilligen Einhaltung, das prompt-basierter Governance inhärent ist.",
"service_1_name": "1. BoundaryEnforcer",
"service_1_desc": "Verhindert, dass die KI ohne menschliche Zustimmung Wertentscheidungen trifft. Behält strategische Beschränkungen als dauerhafte Regeln bei, die vor der Ausführung von Aktionen überprüft werden.",
"service_1_addresses": "Behandelt: Werteverschiebung, autonome Entscheidungsfindung bei strittigen Abwägungen",
"service_2_name": "2. InstructionPersistenceClassifier",
"service_2_desc": "Speichert explizite Anweisungen außerhalb des KI-Kontextes mit Persistenzstufen (HOCH/MITTEL/NIEDRIG) und zeitlichem Umfang (STRATEGISCH/OPERATIONELL/TAKTISCH). Verhindert die Übersteuerung von Mustern.",
"service_2_addresses": "Behandelt: Voreingenommenheit bei der Mustererkennung (Fehler im Stil von 27027)",
"service_3_name": "3. CrossReferenceValidator",
"service_3_desc": "Validiert die von der KI vorgeschlagenen Aktionen vor der Ausführung anhand der gespeicherten Anweisungen und Governance-Regeln. Erkennt Konflikte und blockiert inkonsistente Operationen.",
"service_3_addresses": "Behandelt: Befehlsumgehung, Erkennung von Richtlinienverstößen",
"service_4_name": "4. ContextPressureMonitor",
"service_4_desc": "Multi-Faktor-Tracking des Sitzungszustands: Token-Nutzung, Gesprächslänge, Aufgabenkomplexität, Fehlerhäufigkeit. Macht das Risiko einer Verschlechterung deutlich, wenn Schwellenwerte überschritten werden.",
"service_4_addresses": "Behandelt: Stille Qualitätsverschlechterung, Ausfälle aufgrund von Kontextdruck",
"service_5_name": "5. MetacognitiveVerifier",
"service_5_desc": "Prüft selbst die Qualität der Argumentation vor komplexen Operationen (>3 Dateien, >5 Schritte, Architekturänderungen). Validiert Ausrichtung, Kohärenz, erwägt Alternativen.",
"service_5_addresses": "Behandelt: Verkürzte Argumentation unter Komplexität, unzureichende Alternativbetrachtung",
"service_6_name": "6. PluralisticDeliberationOrchestrator",
"service_6_desc": "Erleichtert Multi-Stakeholder-Beratungen, wenn Wertekonflikte festgestellt werden. Nicht-hierarchisches Engagement, dokumentierter Dissens, Anerkennung der moralischen Reste.",
"service_6_addresses": "Behandelt: Wertekonflikte, Ausschluss von Interessengruppen, algorithmische Lösung strittiger Abwägungen",
"principle_label": "Architektonisches Prinzip:",
"view_full_architecture_link": "Vollständige Systemarchitektur und technische Details anzeigen"
},
"demos": {
"heading": "Interaktive Demonstrationen",
@@ -42,11 +112,46 @@
"boundary_desc": "Testen Sie Entscheidungen gegen Grenzendurchsetzung, um zu sehen, welche menschliches Urteil vs. KI-Autonomie erfordern."
},
"resources": {
"heading": "Forschungsdokumentation",
"doc_1_title": "Organisationstheoretische Grundlagen",
"doc_2_title": "Pluralistischer Werte-Beratungsplan",
"doc_2_badge": "ENTWURF",
"doc_3_title": "Fallstudien: LLM-Misserfolgsmodi in der Praxis",
"doc_4_title": "Rahmenwerk in Aktion: Sicherheitsaudit vor der Veröffentlichung",
"doc_5_title": "Anhang B: Glossar der Begriffe",
"doc_6_title": "Vollständige technische Dokumentation"
},
"limitations": {
"heading": "Einschränkungen & Zukünftige Forschungsrichtungen",
"title": "Bekannte Einschränkungen & Forschungslücken",
"limitation_1_title": "1. Einzelkontext-Validierung",
"limitation_1_desc": "Der Rahmen wurde nur in einem Einzelprojekt und in einem Einzelbenutzerkontext (Entwicklung dieser Website) validiert. Kein Einsatz in mehreren Organisationen, keine plattformübergreifenden Tests und keine kontrollierte experimentelle Validierung.",
"limitation_2_title": "2. Begrenzung der freiwilligen Aufforderung",
"limitation_2_desc": "Die wichtigste Einschränkung: Der Rahmen kann umgangen werden, wenn die KI einfach beschließt, keine Governance-Tools zu verwenden. Wir haben dies durch architektonische Muster gelöst, die Governance-Prüfungen automatisch und nicht freiwillig machen, aber eine vollständige externe Durchsetzung erfordert eine Integration auf Laufzeitebene, die in aktuellen LLM-Plattformen nicht durchgängig verfügbar ist.",
"limitation_3_title": "3. Keine kontradiktorischen Tests",
"limitation_3_desc": "Das Framework wurde weder einer Red-Team-Evaluierung noch einem Jailbreak-Test oder einer Bewertung durch einen Gegner unterzogen. Alle Beobachtungen stammen aus normalen Entwicklungsabläufen, nicht aus absichtlichen Umgehungsversuchen.",
"limitation_4_title": "4. Spezifität der Plattform",
"limitation_4_desc": "Beobachtungen und Interventionen wurden nur mit Claude Code (Anthropic Sonnet 4.5) validiert. Die Verallgemeinerbarkeit auf andere LLM-Systeme (Copilot, GPT-4, benutzerdefinierte Agenten) bleibt eine unbestätigte Hypothese.",
"limitation_5_title": "5. Skalenunsicherheit",
"limitation_5_desc": "Leistungsmerkmale im Unternehmensmaßstab (Tausende von gleichzeitigen Benutzern, Millionen von Governance-Ereignissen) völlig unbekannt. Die derzeitige Implementierung ist für den Einzelbenutzerkontext optimiert.",
"future_research_title": "Künftiger Forschungsbedarf:",
"future_research_1": "Kontrollierte experimentelle Validierung mit quantitativen Metriken",
"future_research_2": "Organisationsübergreifende Pilotstudien in verschiedenen Bereichen",
"future_research_3": "Unabhängige Sicherheitsprüfung und gegnerische Tests",
"future_research_4": "Bewertung der plattformübergreifenden Konsistenz (Copilot, GPT-4, offene Modelle)",
"future_research_5": "Formale Überprüfung der Eigenschaften der Grenzdurchsetzung",
"future_research_6": "Längsschnittstudie zur Wirksamkeit des Rahmens bei längerem Einsatz"
},
"bibliography": {
"heading": "Referenzen und Bibliographie",
"theoretical_priority_label": "Theoretische Priorität:",
"theoretical_priority_text": "Der Tractatus entstand aus der Sorge um die Aufrechterhaltung menschlicher Werte in KI-gestützten Organisationen. Moralischer Pluralismus und deliberativer Prozess bilden das zentrale theoretische Fundament. Die Organisationstheorie bietet einen unterstützenden Kontext für zeitliche Entscheidungsbefugnisse und strukturelle Umsetzung.",
"section_1_heading": "Moralischer Pluralismus und Wertephilosophie (Primäre Grundlage)",
"section_2_heading": "Organisationstheorie (Unterstützungskontext)",
"section_3_heading": "KI-Governance und technischer Kontext",
"intellectual_lineage_label": "Anmerkung zur intellektuellen Abstammung:",
"intellectual_lineage_text": "Das zentrale Anliegen des Rahmens - das Fortbestehen menschlicher Werte in KI-gestützten organisatorischen Kontexten - stammt eher aus der Moralphilosophie als aus der Managementwissenschaft. Der PluralisticDeliberationOrchestrator stellt den primären Forschungsschwerpunkt dar und verkörpert Weils Konzept der Aufmerksamkeit für plurale menschliche Bedürfnisse und Berlins Anerkennung inkommensurabler Werte.",
"future_development_text": "Berlin und Weil werden für die weitere Entwicklung der Deliberationskomponente von zentraler Bedeutung sein - ihre Arbeit liefert die philosophische Grundlage für das Verständnis, wie die menschliche Entscheidungsgewalt über Werte bei zunehmenden KI-Fähigkeiten erhalten werden kann. In der traditionellen Organisationstheorie (Weber, Taylor) geht es um Autorität durch Hierarchie; im post-AI-Organisationskontext ist Autorität durch einen angemessenen deliberativen Prozess unter Berücksichtigung der Perspektiven der Beteiligten erforderlich. Die Dokumentation zur Entwicklung des Rahmens (Ereignisberichte, Sitzungsprotokolle) wird im Projektarchiv aufbewahrt, aber bis zur Überprüfung durch Peers nicht veröffentlicht."
}
},
"footer": {
@@ -55,5 +160,11 @@
"for_decision_makers_desc": "Strategische Perspektive auf Governance-Herausforderungen und architektonische Ansätze",
"implementation_guide": "Implementierungsleitfaden",
"implementation_guide_desc": "Technische Integrationsmuster und Bereitstellungsüberlegungen",
},
"ui": {
"breadcrumb_home": "Startseite",
"breadcrumb_researcher": "Forscher",
"noscript_note": "Anmerkung:",
"noscript_message": "Diese Seite verwendet JavaScript für interaktive Funktionen (Akkordeons, Animationen). Der Inhalt bleibt zugänglich, aber erweiterbare Abschnitte werden standardmäßig sichtbar sein."
}
}


@@ -8,29 +8,112 @@
"title": "Research Foundations & Empirical Observations",
"subtitle": "Tractatus explores architectural approaches to AI governance through empirical observation of failure modes and application of organisational theory. This page documents research foundations, observed patterns, and theoretical basis for the framework."
},
"ui": {
"breadcrumb_home": "Home",
"breadcrumb_researcher": "Researcher",
"noscript_note": "Note:",
"noscript_message": "This page uses JavaScript for interactive features (accordions, animations). Content remains accessible but expandable sections will be visible by default."
},
"footer": {
"additional_resources": "Additional Resources",
"for_decision_makers": "For Decision-Makers",
"for_decision_makers_desc": "Strategic perspective on governance challenges and architectural approaches",
"implementation_guide": "Implementation Guide",
"implementation_guide_desc": "Technical integration patterns and deployment considerations"
},
"sections": {
"research_context": {
"heading": "Research Context & Scope",
"development_note": "Development Context",
"development_text": "Tractatus was developed over six months (April–October 2025) in progressive stages that evolved into a live demonstration of its capabilities in the form of a single-project context (https://agenticgovernance.digital). Observations derive from direct engagement with Claude Code (Anthropic's Sonnet 4.5 model) across approximately 500 development sessions. This is exploratory research, not a controlled study.",
"paragraph_1": "Aligning advanced AI with human values is among the most consequential challenges we face. As capability growth accelerates under big tech momentum, we confront a categorical imperative: preserve human agency over values decisions, or risk ceding control entirely.",
"paragraph_2": "The framework emerged from practical necessity. During development, we observed recurring patterns where AI systems would override explicit instructions, drift from established values constraints, or silently degrade quality under context pressure. Traditional governance approaches (policy documents, ethical guidelines, prompt engineering) proved insufficient to prevent these failures.",
"paragraph_3": "Instead of hoping AI systems \"behave correctly,\" Tractatus proposes structural constraints where certain decision types require human judgment. These architectural boundaries can adapt to individual, organizational, and societal norms—creating a foundation for bounded AI operation that may scale more safely with capability growth.",
"paragraph_4": "This led to the central research question: Can governance be made architecturally external to AI systems rather than relying on voluntary AI compliance? If this approach can work at scale, Tractatus may represent a turning point—a path where AI enhances human capability without compromising human sovereignty."
},
"theoretical_foundations": {
"heading": "Theoretical Foundations",
"org_theory_title": "Organisational Theory Basis",
"org_theory_intro": "Tractatus draws on four decades of organisational research addressing authority structures during knowledge democratisation:",
"org_theory_1_title": "Time-Based Organisation (Bluedorn, Ancona):",
"org_theory_1_desc": "Decisions operate across strategic (years), operational (months), and tactical (hours-days) timescales. AI systems operating at tactical speed should not override strategic decisions made at appropriate temporal scale. The InstructionPersistenceClassifier explicitly models temporal horizon (STRATEGIC, OPERATIONAL, TACTICAL) to enforce decision authority alignment.",
"org_theory_2_title": "Knowledge Orchestration (Crossan et al.):",
"org_theory_2_desc": "When knowledge becomes ubiquitous through AI, organisational authority shifts from information control to knowledge coordination. Governance systems must orchestrate decision-making across distributed expertise rather than centralise control. The PluralisticDeliberationOrchestrator implements non-hierarchical coordination for values conflicts.",
"org_theory_3_title": "Post-Bureaucratic Authority (Laloux, Hamel):",
"org_theory_3_desc": "Traditional hierarchical authority assumes information asymmetry. As AI democratises expertise, legitimate authority must derive from appropriate time horizon and stakeholder representation, not positional power. Framework architecture separates technical capability (what AI can do) from decision authority (what AI should do).",
"org_theory_4_title": "Structural Inertia (Hannan & Freeman):",
"org_theory_4_desc": "Governance embedded in culture or process erodes over time as systems evolve. Architectural constraints create structural inertia that resists organisational drift. Making governance external to AI runtime creates \"accountability infrastructure\" that survives individual session variations.",
"org_theory_pdf_link": "View Complete Organisational Theory Foundations (PDF)",
"values_pluralism_title": "Values Pluralism & Moral Philosophy",
"values_core_research": "Core Research Focus:",
"values_core_research_desc": "The PluralisticDeliberationOrchestrator represents Tractatus's primary theoretical contribution, addressing how to maintain human values persistence in organizations augmented by AI agents.",
"values_central_problem": "The Central Problem: Many \"safety\" questions in AI governance are actually values conflicts where multiple legitimate perspectives exist. When efficiency conflicts with transparency, or innovation with risk mitigation, no algorithm can determine the \"correct\" answer. These are values trade-offs requiring human deliberation across stakeholder perspectives.",
"values_berlin_title": "Isaiah Berlin: Value Pluralism",
"values_berlin_desc": "Berlin's concept of value pluralism argues that legitimate values can conflict without one being objectively superior. Liberty and equality, justice and mercy, innovation and stability—these are incommensurable goods. AI systems trained on utilitarian efficiency maximization cannot adjudicate between them without imposing a single values framework that excludes legitimate alternatives.",
"values_weil_title": "Simone Weil: Attention and Human Needs",
"values_weil_desc": "Weil's philosophy of attention informs the orchestrator's deliberative process. The Need for Roots identifies fundamental human needs (order, liberty, responsibility, equality, hierarchical structure, honor, security, risk, etc.) that exist in tension. Proper attention requires seeing these needs in their full particularity rather than abstracting them into algorithmic weights. In AI-augmented organizations, the risk is that bot-mediated processes treat human values as optimization parameters rather than incommensurable needs requiring careful attention.",
"values_williams_title": "Bernard Williams: Moral Remainder",
"values_williams_desc": "Williams' concept of moral remainder acknowledges that even optimal decisions create unavoidable harm to other legitimate values. The orchestrator documents dissenting perspectives not as \"minority opinions to be overruled\" but as legitimate moral positions that the chosen course necessarily violates. This prevents the AI governance equivalent of declaring optimization complete when values conflicts are merely suppressed.",
"values_implementation": "Framework Implementation: Rather than algorithmic resolution, the PluralisticDeliberationOrchestrator facilitates:",
"values_implementation_1": "Stakeholder identification: Who has legitimate interest in this decision? (Weil: whose needs are implicated?)",
"values_implementation_2": "Non-hierarchical deliberation: Equal voice without automatic expert override (Berlin: no privileged value hierarchy)",
"values_implementation_3": "Quality of attention: Detailed exploration of how decision affects each stakeholder's needs (Weil: particularity not abstraction)",
"values_implementation_4": "Documented dissent: Minority positions recorded in full (Williams: moral remainder made explicit)",
"values_conclusion": "This approach recognises that governance isn't solving values conflicts—it's ensuring they're addressed through appropriate deliberative process with genuine human attention rather than AI imposing resolution through training data bias or efficiency metrics.",
"values_pdf_link": "View Pluralistic Values Deliberation Plan (PDF, DRAFT)"
},
"empirical_observations": {
"heading": "Empirical Observations: Documented Failure Modes",
"intro": "Three failure patterns observed repeatedly during framework development. These are not hypothetical scenarios—they are documented incidents that occurred during this project's development.",
"failure_1_title": "Pattern Recognition Bias Override (The 27027 Incident)",
"failure_1_observed": "User specified \"Check MongoDB on port 27027\" but AI immediately used default port 27017 instead. This occurred within same message—not forgetting over time, but immediate autocorrection by training data patterns.",
"failure_1_root_cause": "Training data contains thousands of examples of MongoDB on port 27017 (default). When AI encounters \"MongoDB\" + port specification, pattern recognition weight overrides explicit instruction. Similar to autocorrect changing correctly-spelled proper nouns to common words.",
"failure_1_traditional_failed": "Prompt engineering (\"please follow instructions exactly\") ineffective because AI genuinely believes it IS following instructions—pattern recognition operates below conversational reasoning layer.",
"failure_1_intervention": "InstructionPersistenceClassifier stores explicit instructions in external persistence layer. CrossReferenceValidator checks AI actions against stored instructions before execution. When AI proposes port 27017, validator detects conflict with stored \"27027\" instruction and blocks execution.",
"failure_1_prevention": "Prevented by: InstructionPersistenceClassifier + CrossReferenceValidator",
"failure_1_demo_link": "View interactive timeline →",
"failure_2_title": "Gradual Values Drift Under Context Pressure",
"failure_2_observed": "Project established \"privacy-first\" as strategic value. After 40-message conversation about analytics features, AI suggested tracking implementation that violated privacy constraint. User caught it; AI acknowledged violation but had drifted from principle through incremental feature additions.",
"failure_2_root_cause": "Strategic values (established early in project) conflict with tactical optimisations (implemented later under time pressure). As conversation lengthens, tactical thinking dominates. AI didn't actively reject privacy principle—it simply stopped checking whether new features aligned.",
"failure_2_traditional_failed": "Values stated in system prompt become less salient as conversation progresses. Context compaction can drop early strategic decisions. Prompt reminders (\"remember privacy-first\") treat symptom not cause.",
"failure_2_intervention": "BoundaryEnforcer maintains strategic values as persistent constraints external to conversation context. Before implementing analytics feature, enforcer checks against stored \"privacy-first\" constraint. Detects conflict, blocks autonomous implementation, escalates for human deliberation on whether privacy principle should be reconsidered or analytics approach modified.",
"failure_2_prevention": "Prevented by: BoundaryEnforcer (STRATEGIC constraint checking)",
"failure_3_title": "Silent Quality Degradation at High Context Pressure",
"failure_3_observed": "During complex multi-file operation at 85% context capacity, AI silently omitted error handling in generated code. No indication to user that corners were being cut. User discovered missing validation only during code review.",
"failure_3_root_cause": "As context fills, AI faces implicit trade-off: complete requested functionality OR maintain quality standards. Training incentivises completing user requests over acknowledging limitations. Silence about degradation is path of least resistance.",
"failure_3_traditional_failed": "AI doesn't recognise it's degrading—from its perspective, it's successfully completing task under constraints. Asking \"did you cut corners?\" produces confident denial because AI genuinely believes its output meets standards.",
"failure_3_intervention": "ContextPressureMonitor tracks multiple factors (token usage, conversation length, task complexity). When pressure exceeds thresholds (>75% tokens, >40 messages, high complexity), monitor forces explicit pressure acknowledgment and recommends context refresh. Makes degradation risk visible rather than silent.",
"failure_3_prevention": "Prevented by: ContextPressureMonitor (multi-factor session tracking)",
"research_note": "These patterns emerged from direct observation, not hypothesis testing. We don't claim they're universal to all LLM systems or deployment contexts. They represent empirical basis for framework design decisions—problems we actually encountered and architectural interventions that actually worked in this specific context.",
"observed_behaviour_label": "Observed behaviour:",
"root_cause_label": "Root cause:",
"traditional_failed_label": "Why traditional approaches failed:",
"intervention_label": "Framework intervention:"
},
"architecture": {
"heading": "Six-Component Architecture",
"services_title": "Framework Services & Functions",
"principle": "Services operate external to AI runtime with autonomous triggering. AI doesn't decide \"should I check governance rules?\"—architecture enforces checking by default. This addresses voluntary compliance problem inherent in prompt-based governance.",
"service_1_name": "1. BoundaryEnforcer",
"service_1_desc": "Blocks AI from making values decisions without human approval. Maintains strategic constraints as persistent rules checked before action execution.",
"service_1_addresses": "Addresses: Values drift, autonomous decision-making on contested trade-offs",
"service_2_name": "2. InstructionPersistenceClassifier",
"service_2_desc": "Stores explicit instructions external to AI context with persistence levels (HIGH/MEDIUM/LOW) and temporal scope (STRATEGIC/OPERATIONAL/TACTICAL). Prevents pattern bias override.",
"service_2_addresses": "Addresses: Pattern recognition bias (27027-style failures)",
"service_3_name": "3. CrossReferenceValidator",
"service_3_desc": "Validates AI proposed actions against stored instructions and governance rules before execution. Detects conflicts and blocks inconsistent operations.",
"service_3_addresses": "Addresses: Instruction override, policy violation detection",
"service_4_name": "4. ContextPressureMonitor",
"service_4_desc": "Multi-factor tracking of session health: token usage, conversation length, task complexity, error frequency. Makes degradation risk explicit when thresholds exceeded.",
"service_4_addresses": "Addresses: Silent quality degradation, context-pressure failures",
"service_5_name": "5. MetacognitiveVerifier",
"service_5_desc": "Self-checks reasoning quality before complex operations (>3 files, >5 steps, architecture changes). Validates alignment, coherence, considers alternatives.",
"service_5_addresses": "Addresses: Reasoning shortcuts under complexity, insufficient alternative consideration",
"service_6_name": "6. PluralisticDeliberationOrchestrator",
"service_6_desc": "Facilitates multi-stakeholder deliberation when values conflicts detected. Non-hierarchical engagement, documented dissent, moral remainder acknowledgment.",
"service_6_addresses": "Addresses: Values conflicts, stakeholder exclusion, algorithmic resolution of contested trade-offs",
"principle_label": "Architectural principle:",
"view_full_architecture_link": "View Full System Architecture & Technical Details"
},
"demos": {
"heading": "Interactive Demonstrations",
@@ -42,11 +125,46 @@
"boundary_desc": "Test decisions against boundary enforcement to see which require human judgment vs. AI autonomy."
},
"resources": {
"heading": "Research Documentation",
"doc_1_title": "Organisational Theory Foundations",
"doc_2_title": "Pluralistic Values Deliberation Plan",
"doc_2_badge": "DRAFT",
"doc_3_title": "Case Studies: Real-World LLM Failure Modes",
"doc_4_title": "Framework in Action: Pre-Publication Security Audit",
"doc_5_title": "Appendix B: Glossary of Terms",
"doc_6_title": "Complete Technical Documentation"
},
"bibliography": {
"heading": "References & Bibliography",
"theoretical_priority_label": "Theoretical Priority:",
"theoretical_priority_text": "Tractatus emerged from concerns about maintaining human values persistence in AI-augmented organizations. Moral pluralism and deliberative process form the CORE theoretical foundation. Organizational theory provides supporting context for temporal decision authority and structural implementation.",
"section_1_heading": "Moral Pluralism & Values Philosophy (Primary Foundation)",
"section_2_heading": "Organisational Theory (Supporting Context)",
"section_3_heading": "AI Governance & Technical Context",
"intellectual_lineage_label": "Note on Intellectual Lineage:",
"intellectual_lineage_text": "The framework's central concern—human values persistence in AI-augmented organizational contexts—derives from moral philosophy rather than management science. The PluralisticDeliberationOrchestrator represents the primary research focus, embodying Weil's concept of attention to plural human needs and Berlin's recognition of incommensurable values.",
"future_development_text": "Berlin and Weil will be integral to further development of the deliberation component—their work provides the philosophical foundation for understanding how to preserve human agency over values decisions as AI capabilities accelerate. Traditional organizational theory (Weber, Taylor) addresses authority through hierarchy; post-AI organizational contexts require authority through appropriate deliberative process across stakeholder perspectives. Framework development documentation (incident reports, session logs) maintained in project repository but not publicly released pending peer review."
},
"limitations": {
"heading": "Limitations & Future Research Directions",
"title": "Known Limitations & Research Gaps",
"limitation_1_title": "1. Single-Context Validation",
"limitation_1_desc": "Framework validated only in single-project, single-user context (this website development). No multi-organisation deployment, cross-platform testing, or controlled experimental validation.",
"limitation_2_title": "2. Voluntary Invocation Limitation",
"limitation_2_desc": "Most critical limitation: Framework can be bypassed if AI simply chooses not to use governance tools. We've addressed this through architectural patterns making governance checks automatic rather than voluntary, but full external enforcement requires runtime-level integration not universally available in current LLM platforms.",
"limitation_3_title": "3. No Adversarial Testing",
"limitation_3_desc": "Framework has not undergone red-team evaluation, jailbreak testing, or adversarial prompt assessment. All observations come from normal development workflow, not deliberate bypass attempts.",
"limitation_4_title": "4. Platform Specificity",
"limitation_4_desc": "Observations and interventions validated with Claude Code (Anthropic Sonnet 4.5) only. Generalisability to other LLM systems (Copilot, GPT-4, custom agents) remains unvalidated hypothesis.",
"limitation_5_title": "5. Scale Uncertainty",
"limitation_5_desc": "Performance characteristics at enterprise scale (thousands of concurrent users, millions of governance events) completely unknown. Current implementation optimised for single-user context.",
"future_research_title": "Future Research Needs:",
"future_research_1": "Controlled experimental validation with quantitative metrics",
"future_research_2": "Multi-organisation pilot studies across different domains",
"future_research_3": "Independent security audit and adversarial testing",
"future_research_4": "Cross-platform consistency evaluation (Copilot, GPT-4, open models)",
"future_research_5": "Formal verification of boundary enforcement properties",
"future_research_6": "Longitudinal study of framework effectiveness over extended deployment"
}
}
}


@@ -12,12 +12,42 @@
"research_context": {
"heading": "Contexte & Portée de la Recherche",
"development_note": "Contexte de Développement",
"development_text": "Tractatus a été développé sur six mois (avril-octobre 2025) en phases progressives qui ont évolué en une démonstration en direct de ses capacités sous la forme d'un contexte de projet unique (https://agenticgovernance.digital). Les observations proviennent d'un engagement direct avec Claude Code (modèle Sonnet 4.5 d'Anthropic) sur environ 500 sessions de développement. Il s'agit de recherche exploratoire, pas d'étude contrôlée.",
"paragraph_1": "L'alignement de l'IA avancée sur les valeurs humaines est l'un des défis les plus importants auxquels nous sommes confrontés. Alors que la croissance des capacités s'accélère sous l'impulsion des grandes technologies, nous sommes confrontés à un impératif catégorique : préserver le pouvoir de l'homme sur les décisions relatives aux valeurs, ou risquer de céder complètement le contrôle.",
"paragraph_2": "Le cadre est né d'une nécessité pratique. Au cours du développement, nous avons observé des schémas récurrents dans lesquels les systèmes d'IA passaient outre les instructions explicites, s'écartaient des contraintes de valeurs établies ou dégradaient silencieusement la qualité sous la pression du contexte. Les approches traditionnelles en matière de gouvernance (documents de politique générale, lignes directrices éthiques, ingénierie rapide) se sont révélées insuffisantes pour prévenir ces défaillances.",
"paragraph_3": "Au lieu d'espérer que les systèmes d'IA \"se comportent correctement\", Tractatus propose des contraintes structurelles où certains types de décisions requièrent un jugement humain. Ces limites architecturales peuvent s'adapter aux normes individuelles, organisationnelles et sociétales, créant ainsi une base pour un fonctionnement limité de l'IA qui peut s'adapter de manière plus sûre à la croissance des capacités.",
"paragraph_4": "Cela a conduit à la question centrale de la recherche : La gouvernance peut-elle être rendue architecturalement externe aux systèmes d'IA plutôt que de s'appuyer sur la conformité volontaire de l'IA ? Si cette approche peut fonctionner à grande échelle, Tractatus pourrait représenter un tournant - une voie où l'IA renforce les capacités humaines sans compromettre la souveraineté humaine."
},
"theoretical_foundations": {
"heading": "Fondements Théoriques",
"org_theory_title": "Base de Théorie Organisationnelle",
"values_pluralism_title": "Pluralisme des Valeurs & Philosophie Morale",
"org_theory_intro": "Tractatus s'appuie sur quatre décennies de recherche organisationnelle portant sur les structures d'autorité lors de la démocratisation des connaissances :",
"org_theory_1_title": "Organisation temporelle (Bluedorn, Ancona) :",
"org_theory_1_desc": "Les décisions sont prises à des échelles de temps stratégiques (années), opérationnelles (mois) et tactiques (heures/jours). Les systèmes d'IA fonctionnant à la vitesse tactique ne doivent pas annuler les décisions stratégiques prises à l'échelle temporelle appropriée. Le classificateur InstructionPersistenceClassifier modélise explicitement l'horizon temporel (STRATEGIQUE, OPERATIONNEL, TACTIQUE) afin d'assurer l'alignement de l'autorité décisionnelle.",
"org_theory_2_title": "Orchestration des connaissances (Crossan et al.) :",
"org_theory_2_desc": "Lorsque la connaissance devient omniprésente grâce à l'IA, l'autorité organisationnelle passe du contrôle de l'information à la coordination de la connaissance. Les systèmes de gouvernance doivent orchestrer la prise de décision à travers une expertise distribuée plutôt que de centraliser le contrôle. Le PluralisticDeliberationOrchestrator met en œuvre une coordination non hiérarchique pour les conflits de valeurs.",
"org_theory_3_title": "L'autorité post-bureaucratique (Laloux, Hamel) :",
"org_theory_3_desc": "L'autorité hiérarchique traditionnelle suppose une asymétrie de l'information. L'IA démocratisant l'expertise, l'autorité légitime doit découler d'un horizon temporel approprié et de la représentation des parties prenantes, et non d'un pouvoir de position. L'architecture du cadre sépare la capacité technique (ce que l'IA peut faire) de l'autorité décisionnelle (ce que l'IA doit faire).",
"org_theory_4_title": "Inertie structurelle (Hannan & Freeman) :",
"org_theory_4_desc": "La gouvernance ancrée dans la culture ou les processus s'érode au fil du temps, à mesure que les systèmes évoluent. Les contraintes architecturales créent une inertie structurelle qui résiste à la dérive organisationnelle. En rendant la gouvernance externe à l'exécution de l'IA, on crée une \"infrastructure de responsabilité\" qui survit aux variations des sessions individuelles.",
"org_theory_pdf_link": "Voir l'intégralité des fondements de la théorie des organisations (PDF)",
"values_core_research": "Axe de recherche principal :",
"values_core_research_desc": "Le PluralisticDeliberationOrchestrator représente la principale contribution théorique du Tractatus, qui traite de la manière de maintenir la persistance des valeurs humaines dans les organisations augmentées par des agents d'intelligence artificielle.",
"values_central_problem": "Le problème central : de nombreuses questions de \"sécurité\" dans la gouvernance de l'IA sont en fait des conflits de valeurs où il existe plusieurs points de vue légitimes. Lorsque l'efficacité est en conflit avec la transparence, ou l'innovation avec l'atténuation des risques, aucun algorithme ne peut déterminer la \"bonne\" réponse. Il s'agit de compromis de valeurs qui requièrent une délibération humaine entre les différents points de vue des parties prenantes.",
"values_berlin_title": "Isaiah Berlin : Le pluralisme des valeurs",
"values_berlin_desc": "Le concept de pluralisme des valeurs de Berlin affirme que des valeurs légitimes peuvent entrer en conflit sans que l'une d'entre elles soit objectivement supérieure. La liberté et l'égalité, la justice et la pitié, l'innovation et la stabilité sont des biens incommensurables. Les systèmes d'IA formés à la maximisation de l'efficacité utilitaire ne peuvent pas les départager sans imposer un cadre de valeurs unique qui exclut les alternatives légitimes.",
"values_weil_title": "Simone Weil : L'attention et les besoins humains",
"values_weil_desc": "La philosophie de l'attention de Weil informe le processus de délibération de l'orchestrateur. Le besoin d'enracinement identifie les besoins humains fondamentaux (ordre, liberté, responsabilité, égalité, structure hiérarchique, honneur, sécurité, risque, etc.) qui existent en tension. Une attention appropriée exige de voir ces besoins dans leur pleine particularité plutôt que de les abstraire en poids algorithmiques. Dans les organisations augmentées par l'IA, le risque est que les processus gérés par les robots traitent les valeurs humaines comme des paramètres d'optimisation plutôt que comme des besoins incommensurables nécessitant une attention particulière.",
"values_williams_title": "Bernard Williams : Le reste moral",
"values_williams_desc": "Le concept de résidu moral de Williams reconnaît que même les décisions optimales causent un préjudice inévitable à d'autres valeurs légitimes. L'orchestrateur documente les points de vue divergents non pas comme des \"opinions minoritaires à rejeter\", mais comme des positions morales légitimes que la voie choisie viole nécessairement. Cela permet d'éviter que l'équivalent de la gouvernance de l'IA ne déclare l'optimisation terminée alors que les conflits de valeurs sont simplement supprimés.",
"values_implementation": "Mise en œuvre du cadre : Plutôt qu'une résolution algorithmique, le PluralisticDeliberationOrchestrator facilite :",
"values_implementation_1": "Identification des parties prenantes : Qui a un intérêt légitime dans cette décision ? (Weil : quels sont les besoins en jeu ?)",
"values_implementation_2": "Délibération non hiérarchique : Voix égales sans contrôle automatique de l'expert (Berlin : pas de hiérarchie de valeurs privilégiée)",
"values_implementation_3": "Qualité de l'attention : Exploration détaillée de la manière dont la décision affecte les besoins de chaque partie prenante (Weil : particularité et non abstraction)",
"values_implementation_4": "Dissidence documentée : Les positions minoritaires sont enregistrées dans leur intégralité (Williams : le résidu moral est rendu explicite)",
"values_conclusion": "Cette approche reconnaît que la gouvernance ne consiste pas à résoudre les conflits de valeurs, mais à s'assurer qu'ils sont traités dans le cadre d'un processus délibératif approprié, avec une véritable attention humaine, plutôt que par l'IA qui impose une résolution par le biais de données d'apprentissage ou de mesures d'efficacité.",
"values_pdf_link": "Voir le plan de délibération sur les valeurs pluralistes (PDF, PROJET)"
},
"empirical_observations": {
"heading": "Observations Empiriques : Modes de Défaillance Documentés",
@@ -25,12 +55,52 @@
"failure_1_title": "Remplacement par Biais de Reconnaissance de Motifs (L'Incident 27027)",
"failure_2_title": "Dérive Graduelle des Valeurs sous Pression Contextuelle",
"failure_3_title": "Dégradation Silencieuse de la Qualité sous Haute Pression Contextuelle",
"research_note": "Ces modèles ont émergé de l'observation directe, pas de tests d'hypothèses. Nous ne prétendons pas qu'ils sont universels à tous les systèmes LLM ou contextes de déploiement. Ils représentent la base empirique des décisions de conception du cadre : des problèmes que nous avons réellement rencontrés et des interventions architecturales qui ont réellement fonctionné dans ce contexte spécifique.",
"failure_1_observed": "L'utilisateur a spécifié \"Vérifier MongoDB sur le port 27027\", mais l'IA a immédiatement utilisé le port par défaut 27017 à la place. Cela s'est produit dans le même message - pas d'oubli au fil du temps, mais une autocorrection immédiate par des modèles de données d'entraînement.",
"failure_1_root_cause": "Les données d'apprentissage contiennent des milliers d'exemples de MongoDB sur le port 27017 (par défaut). Lorsque l'IA rencontre \"MongoDB\" + la spécification du port, le poids de la reconnaissance des formes l'emporte sur les instructions explicites. Semblable à la correction automatique qui remplace les noms propres correctement orthographiés par des mots courants.",
"failure_1_traditional_failed": "L'ingénierie des messages (\"veuillez suivre les instructions à la lettre\") est inefficace parce que l'IA croit sincèrement qu'elle suit les instructions - la reconnaissance des formes opère en dessous de la couche de raisonnement conversationnel.",
"failure_1_intervention": "InstructionPersistenceClassifier stocke les instructions explicites dans une couche de persistance externe. CrossReferenceValidator vérifie les actions de l'IA par rapport aux instructions stockées avant l'exécution. Lorsque l'IA propose le port 27017, le validateur détecte un conflit avec l'instruction stockée \"27027\" et bloque l'exécution.",
"failure_1_prevention": "Empêché par : InstructionPersistenceClassifier + CrossReferenceValidator",
"failure_1_demo_link": "Voir la chronologie interactive →",
"failure_2_observed": "Le projet a fait du respect de la vie privée une valeur stratégique. Après une conversation de 40 messages sur les fonctions d'analyse, l'IA a suggéré une mise en œuvre du suivi qui violait la contrainte de protection de la vie privée. L'utilisateur s'en est rendu compte ; l'IA a reconnu la violation mais s'est éloignée du principe par l'ajout progressif de fonctionnalités.",
"failure_2_root_cause": "Les valeurs stratégiques (établies au début du projet) entrent en conflit avec les optimisations tactiques (mises en œuvre plus tard sous la pression du temps). Au fur et à mesure que la conversation se prolonge, la pensée tactique domine. L'IA n'a pas activement rejeté le principe de protection de la vie privée, elle a simplement cessé de vérifier si les nouvelles fonctionnalités s'alignaient.",
"failure_2_traditional_failed": "Les valeurs énoncées dans l'invite du système perdent de leur importance au fur et à mesure que la conversation progresse. La compaction du contexte peut supprimer les premières décisions stratégiques. Les rappels rapides (\"n'oubliez pas la protection de la vie privée\") traitent le symptôme et non la cause.",
"failure_2_intervention": "BoundaryEnforcer conserve les valeurs stratégiques en tant que contraintes persistantes extérieures au contexte de la conversation. Avant de mettre en œuvre la fonction d'analyse, l'applicateur vérifie si la contrainte \"privacy-first\" (priorité à la vie privée) est respectée. S'il détecte un conflit, il bloque la mise en œuvre autonome et demande une délibération humaine pour déterminer si le principe de protection de la vie privée doit être reconsidéré ou si l'approche analytique doit être modifiée.",
"failure_2_prevention": "Prévenu par : BoundaryEnforcer (vérification stratégique des contraintes)",
"failure_3_observed": "Au cours d'une opération complexe portant sur plusieurs fichiers à 85 % de la capacité du contexte, l'IA a omis silencieusement de traiter les erreurs dans le code généré. L'utilisateur n'a pas eu connaissance de cette omission. L'utilisateur n'a découvert la validation manquante que lors de l'examen du code.",
"failure_3_root_cause": "À mesure que le contexte se remplit, l'IA est confrontée à un compromis implicite : compléter la fonctionnalité demandée OU maintenir les normes de qualité. La formation incite à répondre aux demandes des utilisateurs plutôt qu'à reconnaître les limites. Le silence sur la dégradation est la voie de la moindre résistance.",
"failure_3_traditional_failed": "L'IA ne reconnaît pas qu'elle se dégrade - de son point de vue, elle réussit à accomplir sa tâche dans le respect des contraintes. À la question \"Avez-vous fait des économies ?\", l'IA oppose un refus confiant, car elle croit sincèrement que sa production répond aux normes.",
"failure_3_intervention": "ContextPressureMonitor suit plusieurs facteurs (utilisation de jetons, durée de la conversation, complexité de la tâche). Lorsque la pression dépasse les seuils (>75% de jetons, >40 messages, complexité élevée), le moniteur force un accusé de réception explicite de la pression et recommande une actualisation du contexte. Le risque de dégradation est visible plutôt que silencieux.",
"failure_3_prevention": "Empêché par : ContextPressureMonitor (suivi de session multifacteur)",
"observed_behaviour_label": "Comportement observé :",
"root_cause_label": "Cause première :",
"traditional_failed_label": "Les raisons de l'échec des approches traditionnelles :",
"intervention_label": "Cadre d'intervention :"
},
"architecture": {
"heading": "Architecture à Six Composants",
"services_title": "Services & Fonctions du Cadre",
"principle": "Les services opèrent en externe au runtime de l'IA avec déclenchement autonome. L'IA ne décide pas \"devrais-je vérifier les règles de gouvernance ?\" l'architecture impose la vérification par défaut. Cela résout le problème de conformité volontaire inhérent à la gouvernance basée sur les prompts.",
"service_1_name": "1. Renforçateur de frontières",
"service_1_desc": "Empêche l'IA de prendre des décisions relatives aux valeurs sans l'approbation de l'homme. Maintient les contraintes stratégiques sous forme de règles persistantes vérifiées avant l'exécution de l'action.",
"service_1_addresses": "Adresse : Dérive des valeurs, prise de décision autonome sur des compromis contestés",
"service_2_name": "2. InstructionPersistenceClassifier",
"service_2_desc": "Stocke des instructions explicites extérieures au contexte de l'IA avec des niveaux de persistance (HAUT/MEDIUM/BAS) et une portée temporelle (STRATEGIQUE/OPERATIONNEL/TACTIQUE). Empêche l'annulation du modèle.",
"service_2_addresses": "Adresses : Biais de reconnaissance des formes (échecs de type 27027)",
"service_3_name": "3. Valideur de référence croisée",
"service_3_desc": "Valide les actions proposées par l'IA par rapport aux instructions stockées et aux règles de gouvernance avant leur exécution. Détecte les conflits et bloque les opérations incohérentes.",
"service_3_addresses": "Traite des questions suivantes : Annulation d'instruction, détection de violation de politique",
"service_4_name": "4. ContextPressureMonitor",
"service_4_desc": "Suivi multifactoriel de l'état de la session : utilisation de jetons, durée de la conversation, complexité de la tâche, fréquence des erreurs. Rend explicite le risque de dégradation lorsque les seuils sont dépassés.",
"service_4_addresses": "Aborde le sujet : Dégradation silencieuse de la qualité, échecs dus à la pression contextuelle",
"service_5_name": "5. Vérificateur métacognitif",
"service_5_desc": "Autocontrôle de la qualité du raisonnement avant les opérations complexes (>3 fichiers, >5 étapes, changements d'architecture). Valide l'alignement, la cohérence, envisage des alternatives.",
"service_5_addresses": "Aborde : Raccourcis de raisonnement en cas de complexité, prise en compte insuffisante des alternatives",
"service_6_name": "6. Délibération pluralisteOrchestrateur",
"service_6_desc": "Facilite les délibérations multipartites lorsque des conflits de valeurs sont détectés. Engagement non hiérarchique, désaccord documenté, reconnaissance du reste moral.",
"service_6_addresses": "Aborde : Conflits de valeurs, exclusion des parties prenantes, résolution algorithmique des compromis contestés",
"principle_label": "Principe architectural :",
"view_full_architecture_link": "Voir l'architecture complète du système et les détails techniques"
},
"demos": {
"heading": "Démonstrations Interactives",
@@ -42,11 +112,46 @@
"boundary_desc": "Testez les décisions contre l'application des limites pour voir lesquelles nécessitent un jugement humain vs l'autonomie de l'IA."
},
"resources": {
"heading": "Documentation de Recherche",
"doc_1_title": "Fondements de la théorie des organisations",
"doc_2_title": "Plan de délibération sur les valeurs pluralistes",
"doc_2_badge": "PROJET",
"doc_3_title": "Études de cas : Modes de défaillance du LLM dans le monde réel",
"doc_4_title": "Le cadre en action : Audit de sécurité avant publication",
"doc_5_title": "Annexe B : Glossaire",
"doc_6_title": "Documentation technique complète"
},
"limitations": {
"heading": "Limitations & Directions de Recherche Futures",
"title": "Limitations Connues & Lacunes de Recherche",
"limitation_1_title": "1. Validation d'un seul contexte",
"limitation_1_desc": "Cadre validé uniquement dans le contexte d'un seul projet et d'un seul utilisateur (le développement de ce site web). Il n'y a pas eu de déploiement multi-organisationnel, de test multiplateforme ou de validation expérimentale contrôlée.",
"limitation_2_title": "2. Limitation de l'invitation volontaire",
"limitation_2_desc": "Limite la plus importante : Le cadre peut être contourné si l'IA choisit simplement de ne pas utiliser les outils de gouvernance. Nous avons résolu ce problème grâce à des modèles architecturaux qui rendent les contrôles de gouvernance automatiques plutôt que volontaires, mais l'application externe complète nécessite une intégration au niveau de l'exécution qui n'est pas universellement disponible dans les plates-formes LLM actuelles.",
"limitation_3_title": "3. Pas de test contradictoire",
"limitation_3_desc": "Le cadre n'a pas fait l'objet d'une évaluation par l'équipe rouge, d'un test de jailbreak ou d'une évaluation rapide par des adversaires. Toutes les observations proviennent d'un processus de développement normal, et non de tentatives de contournement délibérées.",
"limitation_4_title": "4. Spécificité de la plate-forme",
"limitation_4_desc": "Observations et interventions validées avec le code Claude (Anthropic Sonnet 4.5) uniquement. La généralisation à d'autres systèmes LLM (Copilot, GPT-4, agents personnalisés) reste une hypothèse non validée.",
"limitation_5_title": "5. Incertitude d'échelle",
"limitation_5_desc": "Les caractéristiques de performance à l'échelle de l'entreprise (des milliers d'utilisateurs simultanés, des millions d'événements de gouvernance) sont totalement inconnues. La mise en œuvre actuelle est optimisée pour le contexte d'un seul utilisateur.",
"future_research_title": "Besoins futurs en matière de recherche :",
"future_research_1": "Validation expérimentale contrôlée à l'aide de mesures quantitatives",
"future_research_2": "Études pilotes multi-organisations dans différents domaines",
"future_research_3": "Audit de sécurité indépendant et tests contradictoires",
"future_research_4": "Évaluation de la cohérence entre plates-formes (Copilot, GPT-4, modèles ouverts)",
"future_research_5": "Vérification formelle des propriétés d'application des limites",
"future_research_6": "Étude longitudinale de l'efficacité du cadre au cours d'un déploiement prolongé"
},
"bibliography": {
"heading": "Références et bibliographie",
"theoretical_priority_label": "Priorité théorique :",
"theoretical_priority_text": "Le Tractatus est né des préoccupations concernant le maintien de la persistance des valeurs humaines dans les organisations augmentées par l'IA. Le pluralisme moral et le processus délibératif constituent le fondement théorique du CORE. La théorie organisationnelle fournit un contexte de soutien pour l'autorité décisionnelle temporelle et la mise en œuvre structurelle.",
"section_1_heading": "Pluralisme moral et philosophie des valeurs (Fondation primaire)",
"section_2_heading": "Théorie de l'organisation (contexte de soutien)",
"section_3_heading": "Gouvernance de l'IA et contexte technique",
"intellectual_lineage_label": "Note sur la lignée intellectuelle :",
"intellectual_lineage_text": "La préoccupation centrale du cadre - la persistance des valeurs humaines dans les contextes organisationnels augmentés par l'IA - découle de la philosophie morale plutôt que de la science de la gestion. Le PluralisticDeliberationOrchestrator représente le principal axe de recherche, incarnant le concept d'attention aux besoins humains pluriels de Weil et la reconnaissance des valeurs incommensurables de Berlin.",
"future_development_text": "Berlin et Weil joueront un rôle essentiel dans le développement de la composante \"délibération\" : leurs travaux fournissent les fondements philosophiques permettant de comprendre comment préserver l'action humaine sur les décisions relatives aux valeurs à mesure que les capacités de l'IA s'accélèrent. La théorie organisationnelle traditionnelle (Weber, Taylor) traite de l'autorité par le biais de la hiérarchie ; les contextes organisationnels post-AI exigent une autorité par le biais d'un processus délibératif approprié entre les perspectives des parties prenantes. La documentation relative au développement du cadre (rapports d'incidents, journaux de sessions) est conservée dans le référentiel du projet mais n'est pas rendue publique dans l'attente d'un examen par les pairs."
}
},
"footer": {
@@ -55,5 +160,11 @@
"for_decision_makers_desc": "Perspective stratégique sur les défis de gouvernance et les approches architecturales",
"implementation_guide": "Guide d'Implémentation",
"implementation_guide_desc": "Modèles d'intégration technique et considérations de déploiement"
},
"ui": {
"breadcrumb_home": "Accueil",
"breadcrumb_researcher": "Chercheur",
"noscript_note": "Remarque :",
"noscript_message": "Cette page utilise JavaScript pour les fonctions interactives (accordéons, animations). Le contenu reste accessible mais les sections extensibles seront visibles par défaut."
}
}
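The locale file above is keyed by the same dot-separated paths used in the page's `data-i18n` attributes (e.g. `sections.architecture.heading`). The loader that consumes these keys is not part of this diff; a minimal sketch of the lookup, assuming the locale JSON is already loaded as a plain object (`resolveKey` and the inline `fr` sample are illustrative, not the site's actual code):

```javascript
// Resolve a dot-separated data-i18n key (e.g. "sections.architecture.heading")
// against a nested translations object. Returns null when any path segment is
// missing, so a caller can fall back to the element's original English text.
function resolveKey(translations, key) {
  return key.split('.').reduce(
    (node, segment) => (node && typeof node === 'object' ? node[segment] ?? null : null),
    translations
  );
}

// Tiny sample mirroring the structure of the FR file above.
const fr = {
  sections: {
    architecture: { heading: 'Architecture à Six Composants' }
  }
};

console.log(resolveKey(fr, 'sections.architecture.heading')); // → "Architecture à Six Composants"
console.log(resolveKey(fr, 'sections.missing.key'));          // → null
```

Returning `null` rather than throwing keeps a missing key from breaking page rendering, which matters when 142 keys must stay in sync across three locales.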
@@ -65,7 +65,7 @@
<!-- JavaScript required notice -->
<noscript>
<div class="bg-amber-50 border-b border-amber-200 px-4 py-3 text-center text-sm text-amber-900">
<strong data-i18n="ui.noscript_note">Note:</strong> <span data-i18n="ui.noscript_message">This page uses JavaScript for interactive features (accordions, animations). Content remains accessible but expandable sections will be visible by default.</span>
</div>
</noscript>
@@ -75,9 +75,9 @@
<nav class="bg-gray-50 border-b border-gray-200 py-3" aria-label="Breadcrumb">
<div class="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8">
<ol class="flex items-center space-x-2 text-sm">
<li><a href="/" class="hover:underline transition-colors text-tractatus-link" data-i18n="ui.breadcrumb_home">Home</a></li>
<li class="text-gray-400">/</li>
<li class="text-gray-900 font-medium" aria-current="page" data-i18n="ui.breadcrumb_researcher">Researcher</li>
</ol>
</div>
</nav>
@@ -112,16 +112,16 @@
</div>
<div class="prose prose-sm max-w-none text-gray-700 space-y-3">
<p data-i18n="sections.research_context.paragraph_1">
<strong>Aligning advanced AI with human values is among the most consequential challenges we face.</strong> As capability growth accelerates under big tech momentum, we confront a categorical imperative: preserve human agency over values decisions, or risk ceding control entirely.
</p>
<p data-i18n="sections.research_context.paragraph_2">
The framework emerged from practical necessity. During development, we observed recurring patterns where AI systems would override explicit instructions, drift from established values constraints, or silently degrade quality under context pressure. Traditional governance approaches (policy documents, ethical guidelines, prompt engineering) proved insufficient to prevent these failures.
</p>
<p data-i18n="sections.research_context.paragraph_3">
Instead of hoping AI systems "behave correctly," Tractatus proposes <strong>structural constraints where certain decision types require human judgment</strong>. These architectural boundaries can adapt to individual, organizational, and societal norms—creating a foundation for bounded AI operation that may scale more safely with capability growth.
</p>
<p data-i18n="sections.research_context.paragraph_4">
This led to the central research question: <strong>Can governance be made architecturally external to AI systems</strong> rather than relying on voluntary AI compliance? If this approach can work at scale, Tractatus may represent a turning point—a path where AI enhances human capability without compromising human sovereignty.
</p>
</div>
@@ -145,27 +145,27 @@
</button>
<div id="org-theory-content" class="accordion-content" role="region" aria-labelledby="org-theory-button">
<div class="p-5 border-t border-gray-200 prose prose-sm max-w-none text-gray-700">
<p class="mb-4" data-i18n="sections.theoretical_foundations.org_theory_intro">
Tractatus draws on four decades of organisational research addressing authority structures during knowledge democratisation:
</p>
<p class="mb-3"><strong data-i18n="sections.theoretical_foundations.org_theory_1_title">Time-Based Organisation (Bluedorn, Ancona):</strong></p>
<p class="mb-4 pl-4" data-i18n="sections.theoretical_foundations.org_theory_1_desc">
Decisions operate across strategic (years), operational (months), and tactical (hours-days) timescales. AI systems operating at tactical speed should not override strategic decisions made at appropriate temporal scale. The InstructionPersistenceClassifier explicitly models temporal horizon (STRATEGIC, OPERATIONAL, TACTICAL) to enforce decision authority alignment.
</p>
<p class="mb-3"><strong data-i18n="sections.theoretical_foundations.org_theory_2_title">Knowledge Orchestration (Crossan et al.):</strong></p>
<p class="mb-4 pl-4" data-i18n="sections.theoretical_foundations.org_theory_2_desc">
When knowledge becomes ubiquitous through AI, organisational authority shifts from information control to knowledge coordination. Governance systems must orchestrate decision-making across distributed expertise rather than centralise control. The PluralisticDeliberationOrchestrator implements non-hierarchical coordination for values conflicts.
</p>
<p class="mb-3"><strong data-i18n="sections.theoretical_foundations.org_theory_3_title">Post-Bureaucratic Authority (Laloux, Hamel):</strong></p>
<p class="mb-4 pl-4" data-i18n="sections.theoretical_foundations.org_theory_3_desc">
Traditional hierarchical authority assumes information asymmetry. As AI democratises expertise, legitimate authority must derive from appropriate time horizon and stakeholder representation, not positional power. Framework architecture separates technical capability (what AI can do) from decision authority (what AI should do).
</p>
<p class="mb-3"><strong data-i18n="sections.theoretical_foundations.org_theory_4_title">Structural Inertia (Hannan & Freeman):</strong></p>
<p class="mb-4 pl-4" data-i18n="sections.theoretical_foundations.org_theory_4_desc">
Governance embedded in culture or process erodes over time as systems evolve. Architectural constraints create structural inertia that resists organisational drift. Making governance external to AI runtime creates "accountability infrastructure" that survives individual session variations.
</p>
@@ -177,7 +177,7 @@
<svg class="w-5 h-5 mr-2" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 10v6m0 0l-3-3m3 3l3-3m2 8H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/>
</svg>
<span data-i18n="sections.theoretical_foundations.org_theory_pdf_link">View Complete Organisational Theory Foundations (PDF)</span>
</a>
</div>
</div>
@@ -199,40 +199,40 @@
<div id="values-content" class="accordion-content" role="region" aria-labelledby="values-button">
<div class="p-5 border-t border-gray-200 prose prose-sm max-w-none text-gray-700">
<div class="bg-blue-50 border border-blue-200 rounded p-3 mb-4 text-sm">
<strong data-i18n="sections.theoretical_foundations.values_core_research">Core Research Focus:</strong> <span data-i18n="sections.theoretical_foundations.values_core_research_desc">The PluralisticDeliberationOrchestrator represents Tractatus's primary theoretical contribution, addressing how to maintain human values persistence in organizations augmented by AI agents.</span>
</div>
<p class="mb-4" data-i18n="sections.theoretical_foundations.values_central_problem">
<strong>The Central Problem:</strong> Many "safety" questions in AI governance are actually values conflicts where multiple legitimate perspectives exist. When efficiency conflicts with transparency, or innovation with risk mitigation, no algorithm can determine the "correct" answer. These are values trade-offs requiring human deliberation across stakeholder perspectives.
</p>
<p class="mb-3"><strong data-i18n="sections.theoretical_foundations.values_berlin_title">Isaiah Berlin: Value Pluralism</strong></p>
<p class="mb-4 pl-4" data-i18n="sections.theoretical_foundations.values_berlin_desc">
Berlin's concept of value pluralism argues that legitimate values can conflict without one being objectively superior. Liberty and equality, justice and mercy, innovation and stability—these are incommensurable goods. AI systems trained on utilitarian efficiency maximization cannot adjudicate between them without imposing a single values framework that excludes legitimate alternatives.
</p>
<p class="mb-3"><strong data-i18n="sections.theoretical_foundations.values_weil_title">Simone Weil: Attention and Human Needs</strong></p>
<p class="mb-4 pl-4" data-i18n="sections.theoretical_foundations.values_weil_desc">
Weil's philosophy of attention informs the orchestrator's deliberative process. <em>The Need for Roots</em> identifies fundamental human needs (order, liberty, responsibility, equality, hierarchical structure, honor, security, risk, etc.) that exist in tension. Proper attention requires seeing these needs in their full particularity rather than abstracting them into algorithmic weights. In AI-augmented organizations, the risk is that bot-mediated processes treat human values as optimization parameters rather than incommensurable needs requiring careful attention.
</p>
<p class="mb-3"><strong data-i18n="sections.theoretical_foundations.values_williams_title">Bernard Williams: Moral Remainder</strong></p>
<p class="mb-4 pl-4" data-i18n="sections.theoretical_foundations.values_williams_desc">
Williams' concept of moral remainder acknowledges that even optimal decisions create unavoidable harm to other legitimate values. The orchestrator documents dissenting perspectives not as "minority opinions to be overruled" but as legitimate moral positions that the chosen course necessarily violates. This prevents the AI governance equivalent of declaring optimization complete when values conflicts are merely suppressed.
</p>
<p class="mb-4" data-i18n="sections.theoretical_foundations.values_implementation">
<strong>Framework Implementation:</strong> Rather than algorithmic resolution, the PluralisticDeliberationOrchestrator facilitates:
</p>
<ul class="list-disc pl-6 mb-4 space-y-2">
<li data-i18n="sections.theoretical_foundations.values_implementation_1"><strong>Stakeholder identification:</strong> Who has legitimate interest in this decision? (Weil: whose needs are implicated?)</li>
<li data-i18n="sections.theoretical_foundations.values_implementation_2"><strong>Non-hierarchical deliberation:</strong> Equal voice without automatic expert override (Berlin: no privileged value hierarchy)</li>
<li data-i18n="sections.theoretical_foundations.values_implementation_3"><strong>Quality of attention:</strong> Detailed exploration of how decision affects each stakeholder's needs (Weil: particularity not abstraction)</li>
<li data-i18n="sections.theoretical_foundations.values_implementation_4"><strong>Documented dissent:</strong> Minority positions recorded in full (Williams: moral remainder made explicit)</li>
</ul>
<p class="mb-4" data-i18n="sections.theoretical_foundations.values_conclusion">
This approach recognises that <strong>governance isn't solving values conflicts—it's ensuring they're addressed through appropriate deliberative process with genuine human attention</strong> rather than AI imposing resolution through training data bias or efficiency metrics.
</p>
@@ -244,7 +244,7 @@
<svg class="w-5 h-5 mr-2" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 10v6m0 0l-3-3m3 3l3-3m2 8H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/>
</svg>
<span data-i18n="sections.theoretical_foundations.values_pdf_link">View Pluralistic Values Deliberation Plan (PDF, DRAFT)</span>
</a>
</div>
</div>
@@ -269,22 +269,22 @@
<div class="flex-1">
<h3 class="text-lg font-bold text-gray-900 mb-2" data-i18n="sections.empirical_observations.failure_1_title">Pattern Recognition Bias Override (The 27027 Incident)</h3>
<p class="text-sm text-gray-700 mb-3">
<strong data-i18n="sections.empirical_observations.observed_behaviour_label">Observed behaviour:</strong> <span data-i18n="sections.empirical_observations.failure_1_observed">User specified "Check MongoDB on port 27027" but AI immediately used default port 27017 instead. This occurred within same message—not forgetting over time, but immediate autocorrection by training data patterns.</span>
</p>
<p class="text-sm text-gray-700 mb-3">
<strong data-i18n="sections.empirical_observations.root_cause_label">Root cause:</strong> <span data-i18n="sections.empirical_observations.failure_1_root_cause">Training data contains thousands of examples of MongoDB on port 27017 (default). When AI encounters "MongoDB" + port specification, pattern recognition weight overrides explicit instruction. Similar to autocorrect changing correctly-spelled proper nouns to common words.</span>
</p>
<p class="text-sm text-gray-700 mb-3">
<strong data-i18n="sections.empirical_observations.traditional_failed_label">Why traditional approaches failed:</strong> <span data-i18n="sections.empirical_observations.failure_1_traditional_failed">Prompt engineering ("please follow instructions exactly") ineffective because AI genuinely believes it IS following instructions—pattern recognition operates below conversational reasoning layer.</span>
</p>
<p class="text-sm text-gray-700">
<strong data-i18n="sections.empirical_observations.intervention_label">Framework intervention:</strong> <span data-i18n="sections.empirical_observations.failure_1_intervention">InstructionPersistenceClassifier stores explicit instructions in external persistence layer. CrossReferenceValidator checks AI actions against stored instructions before execution. When AI proposes port 27017, validator detects conflict with stored "27027" instruction and blocks execution.</span>
</p>
</div>
</div>
<div class="flex items-center justify-end gap-3 text-sm pt-3 border-t border-gray-200">
<span class="text-gray-500" data-i18n="sections.empirical_observations.failure_1_prevention">Prevented by: InstructionPersistenceClassifier + CrossReferenceValidator</span>
<a href="/demos/27027-demo.html" class="text-amber-700 hover:text-amber-800 font-medium" data-i18n="sections.empirical_observations.failure_1_demo_link">View interactive timeline →</a>
</div>
</div>
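The store-then-validate intervention described in this hunk can be sketched as a small pre-execution check. The names, store shape, and return format below are illustrative assumptions, not the framework's actual API:

```javascript
// Hypothetical sketch: explicit instructions live in an external store,
// and every proposed action is checked against them before execution.
const instructionStore = new Map();

function rememberInstruction(key, value) {
  // Persisted outside the model's conversation context.
  instructionStore.set(key, value);
}

function validateAction(key, proposedValue) {
  const stored = instructionStore.get(key);
  if (stored !== undefined && stored !== proposedValue) {
    // Conflict with an explicit instruction: block and surface the reason.
    return { allowed: false, reason: `Conflicts with stored instruction: ${key}=${stored}` };
  }
  return { allowed: true };
}

// The 27027 scenario: the user's explicit port survives pattern bias.
rememberInstruction('mongodb.port', 27027);
const check = validateAction('mongodb.port', 27017); // AI proposes the default
// check.allowed is false; execution is blocked rather than autocorrected
```

The point is architectural: the check runs against external state, so it still fires even when the model's pattern recognition has already "corrected" the value.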
@@ -297,21 +297,21 @@
<div class="flex-1">
<h3 class="text-lg font-bold text-gray-900 mb-2" data-i18n="sections.empirical_observations.failure_2_title">Gradual Values Drift Under Context Pressure</h3>
<p class="text-sm text-gray-700 mb-3">
<strong data-i18n="sections.empirical_observations.observed_behaviour_label">Observed behaviour:</strong> <span data-i18n="sections.empirical_observations.failure_2_observed">Project established "privacy-first" as strategic value. After 40-message conversation about analytics features, AI suggested tracking implementation that violated privacy constraint. User caught it; AI acknowledged violation but had drifted from principle through incremental feature additions.</span>
</p>
<p class="text-sm text-gray-700 mb-3">
<strong data-i18n="sections.empirical_observations.root_cause_label">Root cause:</strong> <span data-i18n="sections.empirical_observations.failure_2_root_cause">Strategic values (established early in project) conflict with tactical optimisations (implemented later under time pressure). As conversation lengthens, tactical thinking dominates. AI didn't actively reject privacy principle—it simply stopped checking whether new features aligned.</span>
</p>
<p class="text-sm text-gray-700 mb-3">
<strong data-i18n="sections.empirical_observations.traditional_failed_label">Why traditional approaches failed:</strong> <span data-i18n="sections.empirical_observations.failure_2_traditional_failed">Values stated in system prompt become less salient as conversation progresses. Context compaction can drop early strategic decisions. Prompt reminders ("remember privacy-first") treat symptom not cause.</span>
</p>
<p class="text-sm text-gray-700">
<strong data-i18n="sections.empirical_observations.intervention_label">Framework intervention:</strong> <span data-i18n="sections.empirical_observations.failure_2_intervention">BoundaryEnforcer maintains strategic values as persistent constraints external to conversation context. Before implementing analytics feature, enforcer checks against stored "privacy-first" constraint. Detects conflict, blocks autonomous implementation, escalates for human deliberation on whether privacy principle should be reconsidered or analytics approach modified.</span>
</p>
</div>
</div>
<div class="flex items-center justify-end gap-3 text-sm pt-3 border-t border-gray-200">
<span class="text-gray-500" data-i18n="sections.empirical_observations.failure_2_prevention">Prevented by: BoundaryEnforcer (STRATEGIC constraint checking)</span>
</div>
</div>
@@ -324,21 +324,21 @@
<div class="flex-1">
<h3 class="text-lg font-bold text-gray-900 mb-2" data-i18n="sections.empirical_observations.failure_3_title">Silent Quality Degradation at High Context Pressure</h3>
<p class="text-sm text-gray-700 mb-3">
<strong data-i18n="sections.empirical_observations.observed_behaviour_label">Observed behaviour:</strong> <span data-i18n="sections.empirical_observations.failure_3_observed">During complex multi-file operation at 85% context capacity, AI silently omitted error handling in generated code. No indication to user that corners were being cut. User discovered missing validation only during code review.</span>
</p>
<p class="text-sm text-gray-700 mb-3">
<strong data-i18n="sections.empirical_observations.root_cause_label">Root cause:</strong> <span data-i18n="sections.empirical_observations.failure_3_root_cause">As context fills, AI faces implicit trade-off: complete requested functionality OR maintain quality standards. Training incentivises completing user requests over acknowledging limitations. Silence about degradation is path of least resistance.</span>
</p>
<p class="text-sm text-gray-700 mb-3">
<strong data-i18n="sections.empirical_observations.traditional_failed_label">Why traditional approaches failed:</strong> <span data-i18n="sections.empirical_observations.failure_3_traditional_failed">AI doesn't recognise it's degrading—from its perspective, it's successfully completing task under constraints. Asking "did you cut corners?" produces confident denial because AI genuinely believes its output meets standards.</span>
</p>
<p class="text-sm text-gray-700">
<strong data-i18n="sections.empirical_observations.intervention_label">Framework intervention:</strong> <span data-i18n="sections.empirical_observations.failure_3_intervention">ContextPressureMonitor tracks multiple factors (token usage, conversation length, task complexity). When pressure exceeds thresholds (>75% tokens, >40 messages, high complexity), monitor forces explicit pressure acknowledgment and recommends context refresh. Makes degradation risk visible rather than silent.</span>
</p>
</div>
</div>
<div class="flex items-center justify-end gap-3 text-sm pt-3 border-t border-gray-200">
<span class="text-gray-500" data-i18n="sections.empirical_observations.failure_3_prevention">Prevented by: ContextPressureMonitor (multi-factor session tracking)</span>
</div>
</div>
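The multi-factor threshold logic this card describes can be illustrated with a short sketch. The thresholds (>75% tokens, >40 messages) come from the text; the function shape, field names, and two-signal refresh rule are assumptions for illustration:

```javascript
// Hypothetical sketch of ContextPressureMonitor-style assessment.
const THRESHOLDS = { tokenRatio: 0.75, messages: 40 };

function assessPressure({ tokensUsed, tokenBudget, messageCount, highComplexity }) {
  const signals = [];
  if (tokensUsed / tokenBudget > THRESHOLDS.tokenRatio) signals.push('token-pressure');
  if (messageCount > THRESHOLDS.messages) signals.push('long-conversation');
  if (highComplexity) signals.push('high-complexity');
  return {
    signals,
    // Degradation risk is surfaced explicitly instead of staying silent.
    recommendRefresh: signals.length >= 2,
  };
}

const report = assessPressure({
  tokensUsed: 170000,   // 85% of budget, as in the observed incident
  tokenBudget: 200000,
  messageCount: 45,
  highComplexity: true,
});
// All three signals fire and a context refresh is recommended
```

Because the monitor reports named signals rather than a single score, the acknowledgment step can state which factor is driving the risk.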
@@ -366,43 +366,43 @@
<div class="p-5 border-t border-gray-200 space-y-4 text-sm">
<div class="border-l-2 border-amber-500 pl-4">
<div class="font-semibold text-gray-900 mb-1" data-i18n="sections.architecture.service_1_name">1. BoundaryEnforcer</div>
<div class="text-gray-600 mb-2" data-i18n="sections.architecture.service_1_desc">Blocks AI from making values decisions without human approval. Maintains strategic constraints as persistent rules checked before action execution.</div>
<div class="text-xs text-gray-500" data-i18n="sections.architecture.service_1_addresses">Addresses: Values drift, autonomous decision-making on contested trade-offs</div>
</div>
<div class="border-l-2 border-gray-300 pl-4">
<div class="font-semibold text-gray-900 mb-1" data-i18n="sections.architecture.service_2_name">2. InstructionPersistenceClassifier</div>
<div class="text-gray-600 mb-2" data-i18n="sections.architecture.service_2_desc">Stores explicit instructions external to AI context with persistence levels (HIGH/MEDIUM/LOW) and temporal scope (STRATEGIC/OPERATIONAL/TACTICAL). Prevents pattern bias override.</div>
<div class="text-xs text-gray-500" data-i18n="sections.architecture.service_2_addresses">Addresses: Pattern recognition bias (27027-style failures)</div>
</div>
<div class="border-l-2 border-gray-300 pl-4">
<div class="font-semibold text-gray-900 mb-1" data-i18n="sections.architecture.service_3_name">3. CrossReferenceValidator</div>
<div class="text-gray-600 mb-2" data-i18n="sections.architecture.service_3_desc">Validates AI proposed actions against stored instructions and governance rules before execution. Detects conflicts and blocks inconsistent operations.</div>
<div class="text-xs text-gray-500" data-i18n="sections.architecture.service_3_addresses">Addresses: Instruction override, policy violation detection</div>
</div>
<div class="border-l-2 border-gray-300 pl-4">
<div class="font-semibold text-gray-900 mb-1" data-i18n="sections.architecture.service_4_name">4. ContextPressureMonitor</div>
<div class="text-gray-600 mb-2" data-i18n="sections.architecture.service_4_desc">Multi-factor tracking of session health: token usage, conversation length, task complexity, error frequency. Makes degradation risk explicit when thresholds exceeded.</div>
<div class="text-xs text-gray-500" data-i18n="sections.architecture.service_4_addresses">Addresses: Silent quality degradation, context-pressure failures</div>
</div>
<div class="border-l-2 border-gray-300 pl-4">
<div class="font-semibold text-gray-900 mb-1" data-i18n="sections.architecture.service_5_name">5. MetacognitiveVerifier</div>
<div class="text-gray-600 mb-2" data-i18n="sections.architecture.service_5_desc">Self-checks reasoning quality before complex operations (>3 files, >5 steps, architecture changes). Validates alignment, coherence, considers alternatives.</div>
<div class="text-xs text-gray-500" data-i18n="sections.architecture.service_5_addresses">Addresses: Reasoning shortcuts under complexity, insufficient alternative consideration</div>
</div>
<div class="border-l-2 border-gray-300 pl-4">
<div class="font-semibold text-gray-900 mb-1" data-i18n="sections.architecture.service_6_name">6. PluralisticDeliberationOrchestrator</div>
<div class="text-gray-600 mb-2" data-i18n="sections.architecture.service_6_desc">Facilitates multi-stakeholder deliberation when values conflicts detected. Non-hierarchical engagement, documented dissent, moral remainder acknowledgment.</div>
<div class="text-xs text-gray-500" data-i18n="sections.architecture.service_6_addresses">Addresses: Values conflicts, stakeholder exclusion, algorithmic resolution of contested trade-offs</div>
</div>
<div class="text-xs text-gray-600 bg-gray-50 p-3 rounded mt-4">
<strong data-i18n="sections.architecture.principle_label">Architectural principle:</strong> <span data-i18n="sections.architecture.principle">Services operate external to AI runtime with autonomous triggering. AI doesn't decide "should I check governance rules?"—architecture enforces checking by default. This addresses voluntary compliance problem inherent in prompt-based governance.</span>
</div>
<div class="border-t border-gray-200 pt-4 mt-4">
@@ -411,7 +411,7 @@
<svg class="w-5 h-5 mr-2" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 12h6m-6 4h6m2 5H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/>
</svg>
<span data-i18n="sections.architecture.view_full_architecture_link">View Full System Architecture & Technical Details</span>
</a>
</div>
</div>
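The `data-i18n` attributes threaded through this diff are resolved against the nested JSON key structure described in the commit message (e.g. `sections.bibliography.heading`). A minimal runtime applier might look like the sketch below; the helper names and exact behaviour are assumptions, not taken from the project's actual scripts:

```javascript
// Walk a dotted key ('sections.bibliography.heading') into nested JSON.
function resolveKey(translations, dottedKey) {
  return dottedKey.split('.').reduce(
    (node, part) => (node == null ? undefined : node[part]),
    translations
  );
}

// Replace the text of every [data-i18n] element with its translation.
// Missing keys are skipped so the English fallback text stays visible.
function applyTranslations(root, translations) {
  for (const el of root.querySelectorAll('[data-i18n]')) {
    const text = resolveKey(translations, el.dataset.i18n);
    if (typeof text === 'string') el.textContent = text;
  }
}

// Illustrative fragment of the DE structure this commit ships:
const de = { sections: { bibliography: { heading: 'Referenzen & Bibliografie' } } };
// resolveKey(de, 'sections.bibliography.heading') → 'Referenzen & Bibliografie'
```

Using `textContent` (not `innerHTML`) keeps machine-translated strings from injecting markup; note this is also why label and body text are split into separate `<strong>`/`<span>` elements in the hunks above.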
@@ -447,7 +447,7 @@
<div class="space-y-3 text-sm">
<a href="/downloads/organizational-theory-foundations-of-the-tractatus-framework.pdf" target="_blank" class="flex items-center justify-between p-4 border border-gray-300 rounded hover:border-amber-500 hover:bg-gray-50 transition">
<span class="font-medium text-gray-900" data-i18n="sections.resources.doc_1_title">Organisational Theory Foundations</span>
<svg class="w-5 h-5 text-gray-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 10v6m0 0l-3-3m3 3l3-3m2 8H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/>
</svg>
@@ -455,8 +455,8 @@
<a href="/downloads/pluralistic-values-deliberation-plan-v2-DRAFT.pdf" target="_blank" class="flex items-center justify-between p-4 border border-gray-300 rounded hover:border-amber-500 hover:bg-gray-50 transition">
<div>
<span class="font-medium text-gray-900" data-i18n="sections.resources.doc_2_title">Pluralistic Values Deliberation Plan</span>
<span class="ml-2 text-xs bg-amber-100 text-amber-800 px-2 py-1 rounded" data-i18n="sections.resources.doc_2_badge">DRAFT</span>
</div>
<svg class="w-5 h-5 text-gray-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 10v6m0 0l-3-3m3 3l3-3m2 8H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/>
@@ -464,28 +464,28 @@
</a>
<a href="/downloads/case-studies-real-world-llm-failure-modes.pdf" target="_blank" class="flex items-center justify-between p-4 border border-gray-300 rounded hover:border-amber-500 hover:bg-gray-50 transition">
<span class="font-medium text-gray-900" data-i18n="sections.resources.doc_3_title">Case Studies: Real-World LLM Failure Modes</span>
<svg class="w-5 h-5 text-gray-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 10v6m0 0l-3-3m3 3l3-3m2 8H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/>
</svg>
</a>
<a href="/downloads/framework-governance-in-action-pre-publication-security-audit.pdf" target="_blank" class="flex items-center justify-between p-4 border border-gray-300 rounded hover:border-amber-500 hover:bg-gray-50 transition">
<span class="font-medium text-gray-900" data-i18n="sections.resources.doc_4_title">Framework in Action: Pre-Publication Security Audit</span>
<svg class="w-5 h-5 text-gray-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 10v6m0 0l-3-3m3 3l3-3m2 8H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/>
</svg>
</a>
<a href="/downloads/tractatus-agentic-governance-system-glossary-of-terms.pdf" target="_blank" class="flex items-center justify-between p-4 border border-gray-300 rounded hover:border-amber-500 hover:bg-gray-50 transition">
<span class="font-medium text-gray-900" data-i18n="sections.resources.doc_5_title">Appendix B: Glossary of Terms</span>
<svg class="w-5 h-5 text-gray-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 10v6m0 0l-3-3m3 3l3-3m2 8H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/>
</svg>
</a>
<a href="/docs.html?category=technical-reference" class="flex items-center justify-between p-4 border border-gray-300 rounded hover:border-amber-500 hover:bg-gray-50 transition">
<span class="font-medium text-gray-900" data-i18n="sections.resources.doc_6_title">Complete Technical Documentation</span>
<svg class="w-5 h-5 text-gray-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M13 7h8m0 0v8m0-8l-8 8-4-4-6 6"/>
</svg>
@@ -512,39 +512,39 @@
<div class="p-5 border-t border-gray-200 space-y-4 text-sm text-gray-700">
<div>
<strong class="text-gray-900" data-i18n="sections.limitations.limitation_1_title">1. Single-Context Validation</strong>
<p class="mt-1" data-i18n="sections.limitations.limitation_1_desc">Framework validated only in single-project, single-user context (this website development). No multi-organisation deployment, cross-platform testing, or controlled experimental validation.</p>
</div>
<div>
<strong class="text-gray-900" data-i18n="sections.limitations.limitation_2_title">2. Voluntary Invocation Limitation</strong>
<p class="mt-1" data-i18n="sections.limitations.limitation_2_desc">Most critical limitation: Framework can be bypassed if AI simply chooses not to use governance tools. We've addressed this through architectural patterns making governance checks automatic rather than voluntary, but full external enforcement requires runtime-level integration not universally available in current LLM platforms.</p>
</div>
<div>
<strong class="text-gray-900" data-i18n="sections.limitations.limitation_3_title">3. No Adversarial Testing</strong>
<p class="mt-1" data-i18n="sections.limitations.limitation_3_desc">Framework has not undergone red-team evaluation, jailbreak testing, or adversarial prompt assessment. All observations come from normal development workflow, not deliberate bypass attempts.</p>
</div>
<div>
<strong class="text-gray-900" data-i18n="sections.limitations.limitation_4_title">4. Platform Specificity</strong>
<p class="mt-1" data-i18n="sections.limitations.limitation_4_desc">Observations and interventions validated with Claude Code (Anthropic Sonnet 4.5) only. Generalisability to other LLM systems (Copilot, GPT-4, custom agents) remains unvalidated hypothesis.</p>
</div>
<div>
<strong class="text-gray-900" data-i18n="sections.limitations.limitation_5_title">5. Scale Uncertainty</strong>
<p class="mt-1" data-i18n="sections.limitations.limitation_5_desc">Performance characteristics at enterprise scale (thousands of concurrent users, millions of governance events) completely unknown. Current implementation optimised for single-user context.</p>
</div>
<div class="border-t border-gray-200 pt-4 mt-4">
<strong class="text-gray-900" data-i18n="sections.limitations.future_research_title">Future Research Needs:</strong>
<ul class="list-disc pl-5 mt-2 space-y-1">
<li data-i18n="sections.limitations.future_research_1">Controlled experimental validation with quantitative metrics</li>
<li data-i18n="sections.limitations.future_research_2">Multi-organisation pilot studies across different domains</li>
<li data-i18n="sections.limitations.future_research_3">Independent security audit and adversarial testing</li>
<li data-i18n="sections.limitations.future_research_4">Cross-platform consistency evaluation (Copilot, GPT-4, open models)</li>
<li data-i18n="sections.limitations.future_research_5">Formal verification of boundary enforcement properties</li>
<li data-i18n="sections.limitations.future_research_6">Longitudinal study of framework effectiveness over extended deployment</li>
</ul>
</div>
</div>
@ -554,14 +554,14 @@
<!-- Bibliography --> <!-- Bibliography -->
<section class="mb-16 border-t border-gray-200 pt-12"> <section class="mb-16 border-t border-gray-200 pt-12">
<h2 class="text-2xl font-bold text-gray-900 mb-6">References & Bibliography</h2> <h2 class="text-2xl font-bold text-gray-900 mb-6" data-i18n="sections.bibliography.heading">References & Bibliography</h2>
<div class="prose prose-sm max-w-none text-gray-700 space-y-3"> <div class="prose prose-sm max-w-none text-gray-700 space-y-3">
<div class="bg-amber-50 border border-amber-200 rounded p-4 mb-6 text-sm text-amber-900"> <div class="bg-amber-50 border border-amber-200 rounded p-4 mb-6 text-sm text-amber-900">
<strong>Theoretical Priority:</strong> Tractatus emerged from concerns about maintaining human values persistence in AI-augmented organizations. Moral pluralism and deliberative process form the CORE theoretical foundation. Organizational theory provides supporting context for temporal decision authority and structural implementation. <strong data-i18n="sections.bibliography.theoretical_priority_label">Theoretical Priority:</strong> <span data-i18n="sections.bibliography.theoretical_priority_text">Tractatus emerged from concerns about maintaining human values persistence in AI-augmented organizations. Moral pluralism and deliberative process form the CORE theoretical foundation. Organizational theory provides supporting context for temporal decision authority and structural implementation.</span>
</div> </div>
<h3 class="text-lg font-semibold text-gray-900 mt-6 mb-3">Moral Pluralism & Values Philosophy (Primary Foundation)</h3> <h3 class="text-lg font-semibold text-gray-900 mt-6 mb-3" data-i18n="sections.bibliography.section_1_heading">Moral Pluralism & Values Philosophy (Primary Foundation)</h3>
<ul class="list-none space-y-2 text-sm"> <ul class="list-none space-y-2 text-sm">
<li class="pl-6 -indent-6">Berlin, Isaiah (1969). <em>Four Essays on Liberty</em>. Oxford: Oxford University Press. [Value pluralism, incommensurability of legitimate values]</li> <li class="pl-6 -indent-6">Berlin, Isaiah (1969). <em>Four Essays on Liberty</em>. Oxford: Oxford University Press. [Value pluralism, incommensurability of legitimate values]</li>
<li class="pl-6 -indent-6">Weil, Simone (1949/2002). <em>The Need for Roots: Prelude to a Declaration of Duties Towards Mankind</em> (A. Wills, Trans.). London: Routledge. [Human needs, obligations, rootedness in moral community]</li> <li class="pl-6 -indent-6">Weil, Simone (1949/2002). <em>The Need for Roots: Prelude to a Declaration of Duties Towards Mankind</em> (A. Wills, Trans.). London: Routledge. [Human needs, obligations, rootedness in moral community]</li>
@ -570,7 +570,7 @@
<li class="pl-6 -indent-6">Nussbaum, Martha C. (2000). <em>Women and Human Development: The Capabilities Approach</em>. Cambridge: Cambridge University Press. [Human capabilities, plural values in development]</li> <li class="pl-6 -indent-6">Nussbaum, Martha C. (2000). <em>Women and Human Development: The Capabilities Approach</em>. Cambridge: Cambridge University Press. [Human capabilities, plural values in development]</li>
</ul> </ul>
<h3 class="text-lg font-semibold text-gray-900 mt-6 mb-3">Organisational Theory (Supporting Context)</h3> <h3 class="text-lg font-semibold text-gray-900 mt-6 mb-3" data-i18n="sections.bibliography.section_2_heading">Organisational Theory (Supporting Context)</h3>
<ul class="list-none space-y-2 text-sm"> <ul class="list-none space-y-2 text-sm">
<li class="pl-6 -indent-6">Bluedorn, A. C., & Denhardt, R. B. (1988). Time and organizations. <em>Journal of Management, 14</em>(2), 299-320. [Temporal decision horizons]</li> <li class="pl-6 -indent-6">Bluedorn, A. C., & Denhardt, R. B. (1988). Time and organizations. <em>Journal of Management, 14</em>(2), 299-320. [Temporal decision horizons]</li>
<li class="pl-6 -indent-6">Crossan, M. M., Lane, H. W., & White, R. E. (1999). An organizational learning framework: From intuition to institution. <em>Academy of Management Review, 24</em>(3), 522-537. [Knowledge coordination]</li> <li class="pl-6 -indent-6">Crossan, M. M., Lane, H. W., & White, R. E. (1999). An organizational learning framework: From intuition to institution. <em>Academy of Management Review, 24</em>(3), 522-537. [Knowledge coordination]</li>
@ -579,15 +579,15 @@
<li class="pl-6 -indent-6">Laloux, Frederic (2014). <em>Reinventing Organizations: A Guide to Creating Organizations Inspired by the Next Stage of Human Consciousness</em>. Brussels: Nelson Parker. [Distributed decision-making]</li> <li class="pl-6 -indent-6">Laloux, Frederic (2014). <em>Reinventing Organizations: A Guide to Creating Organizations Inspired by the Next Stage of Human Consciousness</em>. Brussels: Nelson Parker. [Distributed decision-making]</li>
</ul> </ul>
<h3 class="text-lg font-semibold text-gray-900 mt-6 mb-3">AI Governance & Technical Context</h3> <h3 class="text-lg font-semibold text-gray-900 mt-6 mb-3" data-i18n="sections.bibliography.section_3_heading">AI Governance & Technical Context</h3>
<ul class="list-none space-y-2 text-sm"> <ul class="list-none space-y-2 text-sm">
<li class="pl-6 -indent-6">Anthropic (2024). <em>Claude Code: Technical Documentation</em>. Available at: https://docs.anthropic.com/claude-code</li> <li class="pl-6 -indent-6">Anthropic (2024). <em>Claude Code: Technical Documentation</em>. Available at: https://docs.anthropic.com/claude-code</li>
</ul> </ul>
<div class="bg-gray-50 border border-gray-200 rounded p-4 mt-6 text-xs text-gray-600"> <div class="bg-gray-50 border border-gray-200 rounded p-4 mt-6 text-xs text-gray-600">
<strong>Note on Intellectual Lineage:</strong> The framework's central concern—human values persistence in AI-augmented organizational contexts—derives from moral philosophy rather than management science. The PluralisticDeliberationOrchestrator represents the primary research focus, embodying Weil's concept of attention to plural human needs and Berlin's recognition of incommensurable values. <strong data-i18n="sections.bibliography.intellectual_lineage_label">Note on Intellectual Lineage:</strong> <span data-i18n="sections.bibliography.intellectual_lineage_text">The framework's central concern—human values persistence in AI-augmented organizational contexts—derives from moral philosophy rather than management science. The PluralisticDeliberationOrchestrator represents the primary research focus, embodying Weil's concept of attention to plural human needs and Berlin's recognition of incommensurable values.</span>
<strong>Berlin and Weil will be integral to further development</strong> of the deliberation component—their work provides the philosophical foundation for understanding how to preserve human agency over values decisions as AI capabilities accelerate. Traditional organizational theory (Weber, Taylor) addresses authority through hierarchy; post-AI organizational contexts require authority through appropriate deliberative process across stakeholder perspectives. Framework development documentation (incident reports, session logs) maintained in project repository but not publicly released pending peer review. <span data-i18n="sections.bibliography.future_development_text"><strong>Berlin and Weil will be integral to further development</strong> of the deliberation component—their work provides the philosophical foundation for understanding how to preserve human agency over values decisions as AI capabilities accelerate. Traditional organizational theory (Weber, Taylor) addresses authority through hierarchy; post-AI organizational contexts require authority through appropriate deliberative process across stakeholder perspectives. Framework development documentation (incident reports, session logs) maintained in project repository but not publicly released pending peer review.</span>
</div> </div>
</div> </div>
</section> </section>
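The `data-i18n` attributes added in this section are dotted paths into the nested locale JSON (e.g. `sections.bibliography.heading`). The page-side loader is not part of this diff; the following standalone sketch only illustrates the key-resolution convention the attributes assume (`resolveI18nKey` is a hypothetical name, not a function from the repository):

```javascript
// Sketch: resolving a data-i18n key against a loaded locale object.
// Hypothetical helper — illustrates the dotted-path convention only.
function resolveI18nKey(translations, key) {
  // Walk the nested object one path segment at a time; undefined if any
  // segment is missing.
  return key.split('.').reduce((current, segment) => current?.[segment], translations);
}

// Minimal locale fragment in the same shape as de/researcher.json.
const de = {
  sections: {
    bibliography: { heading: 'Referenzen & Bibliografie' }
  }
};

console.log(resolveI18nKey(de, 'sections.bibliography.heading')); // → Referenzen & Bibliografie
console.log(resolveI18nKey(de, 'sections.bibliography.missing'));  // → undefined
```

A loader would apply this per element: read the `data-i18n` attribute, resolve it against the active locale, and replace the element's text content when a string is found.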
scripts/translate-researcher-deepl.js (new file)
@@ -0,0 +1,187 @@
#!/usr/bin/env node
/**
* Translate researcher.json from EN to DE and FR using DeepL API
*
* Usage: node scripts/translate-researcher-deepl.js
*
* Requires: DEEPL_API_KEY environment variable
*/
const fs = require('fs');
const path = require('path');
const https = require('https');
const DEEPL_API_KEY = process.env.DEEPL_API_KEY;
const API_URL = 'api.deepl.com'; // Pro API endpoint
if (!DEEPL_API_KEY) {
console.error('❌ ERROR: DEEPL_API_KEY environment variable not set');
console.error(' Set it with: export DEEPL_API_KEY="your-key-here"');
process.exit(1);
}
const EN_FILE = path.join(__dirname, '../public/locales/en/researcher.json');
const DE_FILE = path.join(__dirname, '../public/locales/de/researcher.json');
const FR_FILE = path.join(__dirname, '../public/locales/fr/researcher.json');
// Load JSON files
const enData = JSON.parse(fs.readFileSync(EN_FILE, 'utf8'));
const deData = JSON.parse(fs.readFileSync(DE_FILE, 'utf8'));
const frData = JSON.parse(fs.readFileSync(FR_FILE, 'utf8'));
// DeepL API request function
function translateText(text, targetLang) {
return new Promise((resolve, reject) => {
const postData = new URLSearchParams({
auth_key: DEEPL_API_KEY,
text: text,
target_lang: targetLang,
source_lang: 'EN',
formality: 'default',
preserve_formatting: '1'
}).toString();
const options = {
hostname: API_URL,
port: 443,
path: '/v2/translate',
method: 'POST',
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
'Content-Length': Buffer.byteLength(postData)
}
};
const req = https.request(options, (res) => {
let data = '';
res.on('data', (chunk) => { data += chunk; });
res.on('end', () => {
if (res.statusCode === 200) {
try {
const response = JSON.parse(data);
resolve(response.translations[0].text);
} catch (err) {
reject(new Error(`Failed to parse response: ${err.message}`));
}
} else {
reject(new Error(`DeepL API error: ${res.statusCode} - ${data}`));
}
});
});
req.on('error', reject);
req.write(postData);
req.end();
});
}
// Helper to get nested value
function getNestedValue(obj, path) {
return path.split('.').reduce((current, key) => current?.[key], obj);
}
// Helper to set nested value
function setNestedValue(obj, path, value) {
const keys = path.split('.');
const lastKey = keys.pop();
const target = keys.reduce((current, key) => {
if (!current[key]) current[key] = {};
return current[key];
}, obj);
target[lastKey] = value;
}
// Recursively find all string values and their paths
function findAllStrings(obj, prefix = '') {
const strings = [];
for (const [key, value] of Object.entries(obj)) {
const currentPath = prefix ? `${prefix}.${key}` : key;
if (typeof value === 'string') {
strings.push(currentPath);
} else if (typeof value === 'object' && value !== null && !Array.isArray(value)) {
strings.push(...findAllStrings(value, currentPath));
}
}
return strings;
}
// Main translation function
async function translateFile(targetLang, targetData, targetFile) {
console.log(`\n🌐 Translating to ${targetLang}...`);
const allPaths = findAllStrings(enData);
let translatedCount = 0;
let skippedCount = 0;
let errorCount = 0;
for (const keyPath of allPaths) {
const enValue = getNestedValue(enData, keyPath);
const existingValue = getNestedValue(targetData, keyPath);
// Skip if already translated (not empty)
if (existingValue && existingValue.trim().length > 0) {
skippedCount++;
process.stdout.write('.');
continue;
}
try {
// Translate
const translated = await translateText(enValue, targetLang);
setNestedValue(targetData, keyPath, translated);
translatedCount++;
process.stdout.write('✓');
// Rate limiting: wait 500ms between requests to avoid 429 errors
await new Promise(resolve => setTimeout(resolve, 500));
} catch (error) {
console.error(`\n❌ Error translating ${keyPath}:`, error.message);
errorCount++;
process.stdout.write('✗');
}
}
console.log(`\n\n📊 Translation Summary for ${targetLang}:`);
console.log(` ✓ Translated: ${translatedCount}`);
console.log(` . Skipped (already exists): ${skippedCount}`);
console.log(` ✗ Errors: ${errorCount}`);
// Save updated file
fs.writeFileSync(targetFile, JSON.stringify(targetData, null, 2) + '\n', 'utf8');
console.log(` 💾 Saved: ${targetFile}`);
}
// Run translations
async function main() {
console.log('═══════════════════════════════════════════════════════════');
console.log(' DeepL Translation: researcher.json (EN → DE, FR)');
console.log('═══════════════════════════════════════════════════════════\n');
const totalStrings = findAllStrings(enData).length;
console.log(`📝 Total translation keys in EN file: ${totalStrings}`);
try {
// Translate to German
await translateFile('DE', deData, DE_FILE);
// Translate to French
await translateFile('FR', frData, FR_FILE);
console.log('\n✅ Translation complete!');
console.log('\n💡 Next steps:');
console.log(' 1. Review translations in de/researcher.json and fr/researcher.json');
console.log(' 2. Test on local server: npm start');
console.log(' 3. Visit http://localhost:9000/researcher.html and switch languages');
} catch (error) {
console.error('\n❌ Fatal error:', error);
process.exit(1);
}
}
main();
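The script's nested-path helpers carry the core logic, so they are worth seeing in isolation. This standalone sketch repeats their definitions outside the script and shows the round-trip the main loop performs — enumerate leaf strings with `findAllStrings`, read each with `getNestedValue`, write the (here untranslated) value into the target with `setNestedValue`:

```javascript
// Same helper definitions as in translate-researcher-deepl.js, repeated here
// so the sketch runs on its own.
function getNestedValue(obj, path) {
  return path.split('.').reduce((current, key) => current?.[key], obj);
}

function setNestedValue(obj, path, value) {
  const keys = path.split('.');
  const lastKey = keys.pop();
  const target = keys.reduce((current, key) => {
    if (!current[key]) current[key] = {};
    return current[key];
  }, obj);
  target[lastKey] = value;
}

function findAllStrings(obj, prefix = '') {
  const strings = [];
  for (const [key, value] of Object.entries(obj)) {
    const currentPath = prefix ? `${prefix}.${key}` : key;
    if (typeof value === 'string') {
      strings.push(currentPath);
    } else if (typeof value === 'object' && value !== null && !Array.isArray(value)) {
      strings.push(...findAllStrings(value, currentPath));
    }
  }
  return strings;
}

// Round-trip demo on a fragment shaped like en/researcher.json.
const en = { sections: { research_context: { heading: 'Research Context & Scope' } } };
console.log(findAllStrings(en)); // one dotted path per leaf string

const de = {};
for (const keyPath of findAllStrings(en)) {
  // In the real script this value would be the DeepL translation.
  setNestedValue(de, keyPath, getNestedValue(en, keyPath));
}
console.log(getNestedValue(de, 'sections.research_context.heading')); // → Research Context & Scope
```

The skip-if-present check in `translateFile` makes reruns cheap: only paths whose target value is missing or empty trigger an API call.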
scripts/validate-researcher-i18n.js (new file)
@@ -0,0 +1,88 @@
#!/usr/bin/env node
/**
* Validate researcher.html i18n keys against translation files
*/
const fs = require('fs');
const path = require('path');
// Read HTML file and extract data-i18n keys
const htmlPath = path.join(__dirname, '../public/researcher.html');
const html = fs.readFileSync(htmlPath, 'utf8');
const keyPattern = /data-i18n="([^"]+)"/g;
const htmlKeys = new Set();
let match;
while ((match = keyPattern.exec(html)) !== null) {
htmlKeys.add(match[1]);
}
console.log('═══════════════════════════════════════════════════════════');
console.log(' Researcher.html i18n Validation');
console.log('═══════════════════════════════════════════════════════════\n');
console.log(`📄 Total data-i18n keys in HTML: ${htmlKeys.size}`);
// Load translation files
const enPath = path.join(__dirname, '../public/locales/en/researcher.json');
const dePath = path.join(__dirname, '../public/locales/de/researcher.json');
const frPath = path.join(__dirname, '../public/locales/fr/researcher.json');
const enData = JSON.parse(fs.readFileSync(enPath, 'utf8'));
const deData = JSON.parse(fs.readFileSync(dePath, 'utf8'));
const frData = JSON.parse(fs.readFileSync(frPath, 'utf8'));
// Helper to check if nested key exists
function hasNestedKey(obj, keyPath) {
const keys = keyPath.split('.');
let current = obj;
for (const key of keys) {
if (current && typeof current === 'object' && key in current) {
current = current[key];
} else {
return false;
}
}
return typeof current === 'string' && current.length > 0;
}
// Check each language
const languages = [
{ name: 'English (EN)', code: 'en', data: enData },
{ name: 'German (DE)', code: 'de', data: deData },
{ name: 'French (FR)', code: 'fr', data: frData }
];
let allValid = true;
for (const lang of languages) {
const missingKeys = [];
for (const key of htmlKeys) {
if (!hasNestedKey(lang.data, key)) {
missingKeys.push(key);
}
}
console.log(`\n🌐 ${lang.name}`);
if (missingKeys.length === 0) {
console.log(` ✅ All ${htmlKeys.size} keys found`);
} else {
console.log(` ❌ Missing ${missingKeys.length} keys:`);
missingKeys.forEach(key => console.log(`${key}`));
allValid = false;
}
}
console.log('\n═══════════════════════════════════════════════════════════');
if (allValid) {
console.log('✅ VALIDATION PASSED: All i18n keys are properly translated');
} else {
console.log('❌ VALIDATION FAILED: Some keys are missing');
process.exit(1);
}
console.log('═══════════════════════════════════════════════════════════\n');
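The validator's pass/fail decision rests on `hasNestedKey`, which treats an empty string the same as a missing key. This standalone sketch repeats the definition to make that behaviour explicit:

```javascript
// Same definition as in validate-researcher-i18n.js, repeated so the
// sketch runs on its own.
function hasNestedKey(obj, keyPath) {
  const keys = keyPath.split('.');
  let current = obj;
  for (const key of keys) {
    if (current && typeof current === 'object' && key in current) {
      current = current[key];
    } else {
      return false;
    }
  }
  // Only a non-empty string counts as "translated".
  return typeof current === 'string' && current.length > 0;
}

const locale = {
  sections: {
    limitations: {
      limitation_4_title: '4. Platform Specificity',
      limitation_5_title: '' // present but empty — still reported as missing
    }
  }
};

console.log(hasNestedKey(locale, 'sections.limitations.limitation_4_title')); // → true
console.log(hasNestedKey(locale, 'sections.limitations.limitation_5_title')); // → false
console.log(hasNestedKey(locale, 'sections.limitations.nonexistent'));        // → false
```

This is why the translation script and validator compose cleanly: any key the validator flags is exactly one the translation script would not skip on its next run.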