AI Governance & Compliance Self-Assessment Questionnaire

DISCLAIMER: This tool is to be used as a guide only. It draws on ISO 42001 and EU and AU guidelines for the responsible use of AI; however, it does not take individual circumstances into account.

Section 1: AI Management System & Governance

Section 2: Data Governance & Classification

Section 3: Testing & Validation - AI/ML Development

Section 4: Transparency & Human Oversight

Section 5: Third-Party AI/ML Management

Section 6: Incident Response & Rights

Mission Control (Apollo)

AI governance is embedded and largely digitised across risk, testing, data management, transparency, and third-party oversight. Processes are well-documented, with minimal manual effort required. The organisation is well-positioned to scale AI safely, efficiently, and with confidence. Remaining gaps are minor and can likely be resolved through focused sprints or targeted process automation.


Orbiting (Climate Orbiter)

Foundational controls are in place, and the organisation is moving in the right direction, but it is still dependent on manual workflows or lacks consistency across functions. AI is not a material risk today, but without maturing documentation, oversight, and tooling, inefficiencies and risk exposure may increase as usage grows. Targeted investments in process alignment and digital enablement would unlock better scalability and assurance.

Lift-Off Pending (Vanguard TV-3)

AI systems are governed through informal, ad hoc, or siloed practices. Documentation, monitoring, and role clarity may be limited or absent. These gaps create moderate to high operational and compliance risk, particularly in high-stakes applications. A structured uplift program is recommended, beginning with governance structure, risk classification, and digitised monitoring to reduce administrative burden and establish confidence in AI system integrity.
