Governance
Capabilities at the outset of an AI project. Does the organization have the leadership, roles and responsibilities, risk management policies and processes, and platform capabilities to oversee and assess risk?
1. Have leaders been designated who have the authority to develop, implement, and enforce AI governance policies, processes, and systems?
2. For supporting roles (e.g., data scientists, managers, governance officers), has responsibility and accountability been assigned for identifying, assessing, mitigating, documenting, monitoring, and remediating risks?
3. Is there a formal policy and process for assessing risks (business, legal, societal, 3rd party) at the beginning of AI/ML projects?
4. Are there capabilities for systematically collecting and documenting project risks, metrics, risk mitigation plans, and approvals at the beginning of a project and as it evolves?
5. Is there a way of creating, modifying, or assigning a governance policy to a project to define the steps and requirements for compliance?
6. Is there automation to streamline governance activities during the planning phase of the project? (See the sketch below.)
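For illustration only, a minimal sketch of what such planning-phase automation could look like, assuming a Python environment. All names here (GovernancePolicy, Project, compliance_gaps) are hypothetical, not a real product API:

```python
# Hypothetical sketch: a governance policy expressed as code, checked
# against a project during planning. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class GovernancePolicy:
    name: str
    required_steps: list[str]        # steps every project must complete
    required_approvals: list[str]    # roles that must sign off

@dataclass
class Project:
    name: str
    completed_steps: set[str] = field(default_factory=set)
    approvals: set[str] = field(default_factory=set)

def compliance_gaps(project: Project, policy: GovernancePolicy) -> dict[str, list[str]]:
    """Return the steps and approvals the project is still missing."""
    return {
        "missing_steps": [s for s in policy.required_steps
                          if s not in project.completed_steps],
        "missing_approvals": [a for a in policy.required_approvals
                              if a not in project.approvals],
    }

policy = GovernancePolicy(
    name="high-risk-ml",
    required_steps=["risk_assessment", "data_review", "mitigation_plan"],
    required_approvals=["governance_officer", "legal"],
)
project = Project(name="churn-model", completed_steps={"risk_assessment"})
print(compliance_gaps(project, policy))
# {'missing_steps': ['data_review', 'mitigation_plan'],
#  'missing_approvals': ['governance_officer', 'legal']}
```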
Access
Capabilities to grant and limit access to risky data, tools, models, infrastructure and systems. Does the organization have the policies, processes and controls to manage access to risky artifacts and systems?
1. Is there a formal policy and process for granting access to data, infrastructure, code & analysis, models, and environments based on the riskiness of the project?
2. Are there access control capabilities for all AI/ML project artifacts: data sets, pipelines, code & analysis, models, environments, and infrastructure?
3. Can infrastructure access be allocated and restricted based on policy and/or a risk assessment?
4. Are there security, privacy, and audit capabilities to ensure that data and other artifacts cannot be accessed by unauthorized parties?
5. Is there credential management to provide secure access to other systems and services (e.g., APIs and 3rd party hosted LLMs)?
6. Are there capabilities to streamline, automate, and administer allocation of access based on a policy? (See the sketch below.)
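A minimal sketch of policy-based access allocation, again in Python; the risk tiers, permission levels, and grant_access function are illustrative assumptions, not an actual IAM interface:

```python
# Hypothetical sketch: resolving access to project artifacts from a risk
# tier assigned by the project's risk assessment. Tiers and levels are
# illustrative, not a real access-control schema.
RISK_TIER_PERMISSIONS = {
    "low":    {"data": "read-write", "models": "read-write",   "infrastructure": "self-service"},
    "medium": {"data": "read-write", "models": "read-write",   "infrastructure": "approved-only"},
    "high":   {"data": "read-only",  "models": "approved-only", "infrastructure": "approved-only"},
}

def grant_access(user_role: str, risk_tier: str, artifact: str) -> str:
    """Resolve the access level a role receives for an artifact at a risk tier."""
    level = RISK_TIER_PERMISSIONS[risk_tier].get(artifact, "none")
    # A real system would call the IAM backend and write an audit record;
    # here we just print the decision for illustration.
    print(f"audit: {user_role} granted '{level}' on {artifact} (tier={risk_tier})")
    return level

grant_access("data_scientist", "high", "data")   # -> "read-only"
```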
Observe
Capabilities for overseeing the development process and production use cases. Does the organization have the policies, processes, and platforms for collecting evidence, tracking lineage, and monitoring models during development and production?
1. Is there a formal policy and process that stipulates and enforces what evidence must be collected for governance throughout the model development process?
2. Are there policies and processes around the collection of telemetry data (e.g., from models, pipelines, or applications) and monitoring of production systems (e.g., for drift, bias, etc.)?
3. Can standard and custom metrics (e.g., model performance, bias, fairness, business performance) be collected during development and in production?
4. Are there capabilities for tracking and providing visibility into project costs (e.g., infrastructure) and ensuring alignment with intended usage?
5. Are there capabilities to create snapshots of datasets, code, analytic results, models, or pipelines and to establish lineage between different artifacts?
6. Is there automation to streamline the collection of versioned artifacts, lineage, and other evidence, and for alerting and/or notifications? (See the sketch below.)
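A minimal sketch of automated evidence collection, assuming content-addressed snapshots and a simple in-memory lineage store; snapshot and record_lineage are hypothetical names:

```python
# Hypothetical sketch: content-addressed snapshots plus a simple lineage
# store, so versioned artifacts and their relationships are captured
# automatically. In practice this would live in a metadata/evidence store.
import hashlib
import json
import time

LINEAGE: list[dict] = []

def snapshot(content: bytes, kind: str) -> str:
    """Return an immutable, content-addressed snapshot ID for an artifact."""
    digest = hashlib.sha256(content).hexdigest()[:12]
    return f"{kind}:{digest}"

def record_lineage(parent: str, child: str, relation: str) -> None:
    """Record that `child` was derived from `parent` (e.g., model from dataset)."""
    LINEAGE.append({"parent": parent, "child": child,
                    "relation": relation, "timestamp": time.time()})

data_id = snapshot(b"...training data bytes...", "dataset")
model_id = snapshot(b"...serialized model bytes...", "model")
record_lineage(data_id, model_id, "trained_on")
print(json.dumps(LINEAGE, indent=2))
```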
Control
Capabilities for re-evaluating and remediating risk. Does the organization have the policies, processes and tools to evaluate changes in risk and adverse events, take remediation steps, and approve, terminate or decommission models based on risk?
1. Is there active participation from the relevant parts of the business (e.g., legal, engineering, R&D, BUs) on committees that are responsible for overall oversight of AI/ML risk management and policies?
2. Is there a formal policy and process for periodically reviewing governance evidence, collecting risk and incident information, reassessing project risks, and ensuring that resources are used as intended?
3. Is there a formal policy and process for mitigating newly identified risks and remediating realized risks, and for decommissioning AI/ML models and pipelines when risk or performance requirements are no longer met?
4. Are there capabilities for managing governance, compliance, remediation, and approval tasks?
5. Are there capabilities for comparing evidence and deliverables against governance policy requirements?
6. Are there capabilities for automating compliance, remediation, comparison of evidence, and other control-related tasks? (See the sketch below.)
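A minimal sketch of a control-phase check that compares collected evidence against the policy's required evidence and opens remediation tasks for any gaps; REQUIRED_EVIDENCE and control_review are illustrative assumptions:

```python
# Hypothetical sketch: diff collected governance evidence against policy
# requirements and generate remediation tasks for whatever is missing.
REQUIRED_EVIDENCE = {"bias_report", "drift_report", "cost_review", "incident_log"}

def control_review(collected: set[str]) -> list[str]:
    """Return remediation tasks for any required evidence that is missing."""
    missing = REQUIRED_EVIDENCE - collected
    return [f"remediate: collect {item}" for item in sorted(missing)]

for task in control_review({"bias_report", "incident_log"}):
    print(task)
# remediate: collect cost_review
# remediate: collect drift_report
```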
Deliver
Capabilities for assessing risk before production. Does the organization have the capabilities to test, document, validate, and ensure reproducibility before final approval and deployment?
1. Is there a formal policy and process for evaluating, documenting, and approving project deliverables before putting them into production?
2. Is there a formal policy and process for ensuring auditability and reproducibility of the project at a later date and/or by third parties?
3. Are there capabilities for staging and testing models, pipelines, and applications (e.g., for fairness, bias, user acceptance, and operational performance)?
4. Are there capabilities for packaging evidence and artifacts and creating documentation for reproducibility and review by independent assessors and/or auditors?
5. Are there capabilities for managing deployment tasks and final approvals?
6. Are there capabilities for automating staging, testing, packaging, approvals, and other delivery-related steps? (See the sketch below.)
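A minimal sketch of an automated delivery gate that runs staged checks and blocks final approval until all pass; the check names, hardcoded values, and delivery_gate function are hypothetical stand-ins for real measurements and agreed thresholds:

```python
# Hypothetical sketch: a pre-deployment gate over staged checks
# (fairness, performance, reproducibility). The numbers stand in for
# real measurements; thresholds would come from governance policy.
from typing import Callable

CHECKS: dict[str, Callable[[], bool]] = {
    "fairness": lambda: abs(0.82 - 0.79) < 0.05,  # e.g., accuracy parity across groups
    "performance": lambda: 0.91 >= 0.85,          # e.g., AUC above an agreed threshold
    "reproducible": lambda: True,                 # e.g., rebuild hash matches snapshot
}

def delivery_gate() -> bool:
    """Run all staged checks; deployment proceeds only if every check passes."""
    results = {name: check() for name, check in CHECKS.items()}
    for name, passed in results.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return all(results.values())

if delivery_gate():
    print("ready for final approval and deployment")
else:
    print("blocked: remediate failing checks before approval")
```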
About You
Before we share your results, we would like to know a bit more about you and your organization.