AI Governance & Risk
Is Your Business Exposed to AI Risk?
Our free diagnostic assesses your organisation's AI governance maturity across 7 key areas — producing a personalised risk report aligned to UK and EU regulatory expectations.
Complete in Under 10 Minutes
Seven focused sections covering the areas that matter most to regulators and risk teams.
Confidential & Unbiased
Your responses are used solely to produce your risk report — never sold or shared.
Professional Risk Report
Free results summary — with a full 8-page board-ready PDF available for £79.
About You
Tell us about yourself & your organisation
This helps us tailor your assessment results accurately. Your information is handled in confidence.
Section 1 of 7
AI Inventory & Classification
Identifying AI systems in scope and establishing potential regulatory exposure.
1.1 Does your organisation maintain an up-to-date inventory of all AI systems currently in use — including third-party AI tools integrated into business processes?
1.2 Which of the following categories apply to your AI systems?
These categories align with EU AI Act Annex III high-risk classifications. Select all that apply — selections affect your exposure score.
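For illustration, the kind of weighted exposure scoring such a question might feed can be sketched in a few lines. This is not the diagnostic's actual scoring model; every category name and weight below is a hypothetical assumption.

```python
# Illustrative sketch only: hypothetical Annex III category weights,
# not this diagnostic's real scoring model.
ANNEX_III_WEIGHTS = {
    "biometric_identification": 3,
    "credit_scoring": 3,
    "employment_screening": 2,
    "essential_services_access": 2,
}

def exposure_score(selected):
    """Sum weights for each selected category; unrecognised ones count 1."""
    return sum(ANNEX_III_WEIGHTS.get(cat, 1) for cat in selected)

print(exposure_score(["credit_scoring", "employment_screening"]))  # 5
```

A real scoring model would also calibrate by sector and system criticality rather than treating categories as independent.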
1.3 Have your AI systems been formally assessed against legal and regulatory requirements specific to your sector?
1.4 Does your organisation have a documented process for onboarding new AI systems — including a risk assessment before deployment?
Section 2 of 7
Governance, Leadership & Accountability
Assessing the maturity of internal ownership and oversight structures.
2.1 Is there a formally appointed individual or function responsible for AI governance — with defined, documented responsibilities and a clear reporting line?
2.2 Are AI systems formally classified by risk level within your organisation?
2.3 Does executive leadership receive regular, documented reporting on AI risk, performance, and governance status?
2.4 Are documented policies in place governing the acceptable and prohibited use of AI systems across your organisation?
2.5 Are roles and responsibilities for AI oversight explicitly defined across risk, compliance, legal, and technical functions?
2.6 Is AI awareness or literacy training provided to staff who interact with, rely upon, or are affected by AI-generated outputs?
Section 3 of 7
Data Quality, Lineage & Management
Confirming that your AI systems are supported by clear, auditable data practices.
3.1 Is the origin of data used by AI systems documented and accessible — including how it is collected, processed, and updated?
3.2 Do you maintain records of data lineage for core AI systems?
Sources, transformations applied, and version history
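A minimal sketch of what one lineage record might capture. The field names and values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LineageRecord:
    """One entry in a data lineage log for an AI system (illustrative)."""
    dataset: str                     # logical name of the dataset
    source: str                      # where the data originates
    transformations: list = field(default_factory=list)  # steps applied
    version: str = "v1"              # version history marker
    last_updated: date = date(2024, 1, 1)

# Hypothetical example entry
record = LineageRecord(
    dataset="loan_applications",
    source="core_banking_export",
    transformations=["deduplicate", "normalise_income"],
    version="v3",
)
```

In practice such records would live in a data catalogue or lineage tool rather than ad-hoc code, but the three elements the question names (sources, transformations, versions) are the minimum to capture.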
3.3 Do you perform regular data quality assessments for inputs feeding AI decisions — covering completeness, accuracy, and representativeness?
3.4 Is there a documented process for identifying, escalating, and resolving data quality issues that affect AI system inputs or outputs?
Section 4 of 7
Human Oversight & Decision Controls
Evaluating how human judgement integrates with AI-driven decisions.
4.1 Do AI decision processes include human review for high-impact outcomes — where the AI output materially affects an individual’s rights, access to services, or financial position?
4.2 Are override protocols formally documented for circumstances where a human reviewer disagrees with or wishes to countermand an AI-driven outcome?
4.3 Do staff responsible for reviewing AI-assisted decisions receive adequate, documented training — covering both the system’s capabilities and its known limitations?
4.4 Are escalation pathways defined and documented for AI-assisted decisions that require additional scrutiny, challenge, or senior review?
Section 5 of 7
Explainability & Transparency
Assessing whether AI-assisted decisions are explainable, traceable, and defensible.
5.1 Can your organisation provide a clear, meaningful explanation for AI-assisted decisions when required — by an individual, a regulator, or in a legal context?
5.2 Do you maintain logs of AI input features, decision outputs, and confidence or probability scores where applicable?
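The logging this question describes can be sketched as an append-only JSON-lines audit log. A minimal illustration, with the model version and file name as hypothetical placeholders:

```python
import json
import time
import uuid

def log_decision(features, output, confidence=None):
    """Append one AI decision record to a JSON-lines audit log.

    Illustrative sketch: the model version string and log file name
    are hypothetical placeholders, not a required format.
    """
    entry = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": "credit-model-2.4",  # hypothetical version tag
        "features": features,                 # inputs to the decision
        "output": output,                     # decision output
        "confidence": confidence,             # score, where applicable
    }
    line = json.dumps(entry)
    with open("ai_decision_log.jsonl", "a") as f:
        f.write(line + "\n")
    return line

# Hypothetical usage
logged = log_decision({"income": 52000, "age": 41}, "approved", confidence=0.93)
```

Recording the model version alongside each decision is what later makes the traceability asked about in 5.4 possible.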
5.3 Are decision logic, model versions, and model change history recorded and accessible for internal audit and regulatory examination?
5.4 Can a specific AI-assisted decision be traced back to its source inputs, the model version that produced it, and the logic applied — for any decision made in the past 12 months?
5.5 Are AI systems tested for bias, fairness, or discriminatory outputs — before deployment and on an ongoing basis — with results documented?
Section 6 of 7
Compliance & Legal Alignment
Confirming awareness and active alignment with UK and EU legal obligations.
6.1 Has your organisation conducted a formal Data Protection Impact Assessment (DPIA) for AI systems that process personal data — and has it been reviewed within the last 12 months?
6.2 Are individuals informed — at the point of interaction or decision — when automated or AI-assisted processes are used in ways that materially affect them?
6.3 Is there a documented process allowing individuals to request human review of, or formally challenge, an AI-assisted decision that affects them?
6.4 Has your organisation assessed which of its AI systems fall within the scope of the EU AI Act — including any that may qualify as high-risk under Annex III?
6.5 Are individuals able to exercise their data rights in relation to AI-assisted decisions — including the right to explanation, objection, and erasure?
Section 7 of 7
Operational Controls & Resilience
Assessing the robustness of risk controls, third-party management, and operational resilience around AI systems.
7.1 Are AI systems included in your organisation’s formal risk and control self-assessment (RCSA) or equivalent risk register?
7.2 Do third-party AI providers and AI-enabled services undergo structured due diligence — covering data handling, contractual obligations, and ongoing performance review?
7.3 Are cybersecurity and adversarial vulnerabilities assessed for AI systems?
e.g. prompt injection, model manipulation, data poisoning — these are AI-specific and not captured by standard cybersecurity assessments
7.4 Do you monitor AI model performance on an ongoing basis — tracking accuracy, drift, and degradation — with defined thresholds that trigger review?
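A threshold-triggered review of the kind this question describes can be sketched in a few lines; the metric and tolerance here are illustrative assumptions, not recommended values.

```python
def check_drift(baseline_accuracy, recent_accuracy, threshold=0.05):
    """Flag a model for review when accuracy degrades beyond a threshold.

    Illustrative sketch: real monitoring would also track input
    distribution drift (e.g. PSI or KS statistics), not accuracy alone.
    """
    degradation = baseline_accuracy - recent_accuracy
    return degradation > threshold

print(check_drift(0.91, 0.84))  # True: a 7-point drop triggers review
print(check_drift(0.91, 0.89))  # False: within tolerance
```

The point the question tests is that the threshold is defined in advance and documented, so a breach triggers review automatically rather than relying on ad-hoc judgement.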
7.5 Does your organisation have a defined incident response process specifically for AI system failures, harmful outputs, or unexpected behaviour?