diff --git a/public/leader.html b/public/leader.html
index 7120a89f..f764d390 100644
--- a/public/leader.html
+++ b/public/leader.html
@@ -55,11 +55,11 @@

- Explore Framework
+ Assess Your Organization
  Read Documentation
@@ -185,6 +185,274 @@
+
+ AI Governance Readiness Assessment
+
+ Before implementing governance frameworks, organizations need honest answers to difficult questions.
+ This assessment helps identify gaps, risks, and organizational readiness challenges.

+
+ Current AI Tool Inventory
+
+ Do you have clear visibility into what AI systems are already in use?
+
+ • Have you catalogued all AI tools currently used across departments (ChatGPT, Claude, Copilot, internal LLMs, etc.)?
+ • Do you know which employees are using AI tools for customer-facing communications, code generation, or decision support?
+ • Can you identify which AI interactions involve proprietary data, customer information, or confidential business intelligence?
+ • Do you have visibility into shadow AI usage (employees using personal accounts for work tasks)?
+ • Have you documented which third-party vendors are using AI in services they provide to you?
+
+ Strategic AI Deployment Plans
+
+ What are you planning to build, and have you assessed the governance implications?
+
+ • Have you prioritized AI initiatives by risk level (customer-facing vs. internal, high-stakes vs. low-stakes)?
+ • For each planned AI system, have you identified who is accountable when it makes a mistake or causes harm?
+ • Do you have criteria for determining which decisions should remain human-controlled vs. AI-assisted vs. fully automated?
+ • Have you evaluated whether your planned AI deployments fall under EU AI Act "high-risk" categories?
+ • Can you articulate what "safe failure" looks like for each planned AI system?
+
+ Workflow & Process Integration
+
+ How will AI fit into existing processes, and what breaks when it fails?
+
+ • Have you mapped out which human roles will shift from "doer" to "reviewer/validator" of AI output?
+ • Do you have processes to detect when employees are blindly accepting AI recommendations without validation?
+ • Can your organization sustain critical operations if AI systems become unavailable for hours or days?
+ • Have you considered the handoff points between AI-generated work and human-controlled processes (e.g., draft → review → approval)?
+ • Do you know which workflows will require sequential AI operations, and how errors compound across multiple AI steps?
+ • Have you assessed whether introducing AI will create new bottlenecks (e.g., senior staff spending all day reviewing AI output)?
+
+ Decision Authority & Boundaries
+
+ Who decides what AI can and cannot do, and how are those boundaries enforced?
+
+ • Have you defined which types of decisions AI systems are prohibited from making (even with human oversight)?
+ • Do you have a governance board or designated owner responsible for AI safety and compliance decisions?
+ • Can you enforce AI usage policies technically (not just via policy documents employees may ignore)?
+ • Have you established clear escalation paths for when AI systems encounter edge cases or ambiguous situations?
+ • Do you have audit mechanisms to detect policy violations or unauthorized AI usage patterns?
+
+ Incident Preparedness
+
+ What happens when AI systems fail, hallucinate, or cause harm?
+
+ • Do you have incident response procedures specifically for AI failures (separate from general IT incidents)?
+ • Can you trace AI-generated content or decisions back to specific prompts, model versions, and responsible parties?
+ • Have you war-gamed scenarios where AI provides plausible-sounding but incorrect information that leads to business harm?
+ • Do you have kill switches or rollback procedures to disable AI systems that are behaving unpredictably?
+ • Have you assessed your liability exposure if AI systems discriminate, leak data, or violate regulations?
+
+ Human & Cultural Readiness
+
+ Is your organization culturally prepared for the messy reality of AI governance?
+
+ • Have you honestly addressed employee fears about job displacement or skill obsolescence?
+ • Do your teams have the skills to critically evaluate AI output, or do they lack the domain expertise to spot errors?
+ • Are employees empowered to challenge or override AI recommendations without career risk?
+ • Have you created incentives that reward thoughtful AI use over speed or cost savings alone?
+ • Does your organization have realistic expectations about AI limitations, or is there pressure to treat it as infallible?
+ • Have you allocated time and resources for governance work, or is it expected "on top of" existing responsibilities?
+
+ What Your Answers Reveal
+
+ If you checked most boxes: You're ahead of most organizations, but you're likely discovering how complex AI governance truly is. The hard work ahead is implementation and cultural change.
+
+ If you checked some boxes: You have awareness but significant gaps. Those gaps represent risk, but they also give you clarity about where to focus your governance efforts.
+
+ If you checked few boxes: You're in good company; most organizations are here. The challenge is building governance capability while AI deployment accelerates around you.
+
+ Note: This assessment is designed to provoke strategic thinking, not to sell you a solution. Effective AI governance requires organizational commitment, not just technology purchases. Tractatus is a research framework exploring architectural approaches to some of these challenges; it is not a comprehensive answer to all of the questions above.