01 — Assessment Philosophy

Constructed from
enforcement logic.

Regulatory scrutiny of AI deployment has moved from principle to enforcement. The FCA, ICO, and European Commission have each demonstrated that organisations without documented governance frameworks face material exposure. The burden of proof rests with the organisation.

Generic AI checklists and self-certification exercises do not meet this standard. They produce a summary, not a defensible position. They generate internal awareness, not board-level accountability. They fail the moment a supervisory reviewer asks: what is the legal basis for this control?

The MLA diagnostic was constructed from the inverse direction — beginning with the specific obligations imposed by applicable regulatory frameworks, then designing the instrument to surface those gaps precisely. Every question in the diagnostic references a specific article, guidance paragraph, or supervisory expectation. Every finding in the output carries the legal citation that makes it usable in a board pack, supervisory submission, or procurement response.

On what regulators examine. Regulatory review of AI governance does not begin with a score. It begins with documentation of accountability structures, decision-making records, evidence of human oversight controls, and the specific legal basis cited for each governance measure. The MLA output is structured to address each of these requirements directly.

35 — Weighted diagnostic questions. Across seven governance domains, each assessed against a documented regulatory rationale.
7 — Differentially weighted domains. Domain weights reflect regulatory enforcement priority, not an equal distribution.
5 — Risk band classifications. A five-band structure that produces precise, unambiguous risk classification.
6+ — Regulatory frameworks cross-referenced. EU AI Act, UK GDPR, FCA SM&CR, ICO, ISO/IEC 42001, SRA, CQC.
02 — Architecture Overview

Seven domains.
Differentially weighted.

The diagnostic is structured across seven governance domains, identified by mapping the principal areas of AI-related examination across applicable regulatory frameworks. Each domain corresponds to a distinct area of regulatory expectation and enforcement activity.

Domains are not weighted equally. Weighting reflects the relative enforcement priority assigned to each domain by applicable regulators. Governance and Accountability carries the highest weight — it is the first area examined in any supervisory review and the source of the most consequential enforcement actions to date. Precise weighting ratios are proprietary and are not published.

Prior to any domain assessment, the instrument collects organisational context through a structured pre-screen. Sector, organisation size, and active regulatory regimes are not background data — they are structural inputs that determine mandatory flag activation, scoring treatment of certain responses, and additional domain requirements for high-risk AI deployments.

Domain | Primary Regulatory Basis | Questions | Relative Weight
Governance & Accountability | FCA SM&CR, EU AI Act Art.27, ICO Accountability Framework | 6 | Primary
Compliance & Legal Alignment | UK GDPR Art.35, EU AI Act Art.9, FCA DP5/22 | 5 | High
Human Oversight & Controls | UK GDPR Art.22, EU AI Act Art.14, FCA SM&CR | 4 | High
Explainability & Transparency | UK GDPR Art.13/14/22, EU AI Act Art.12, ICO Guidance | 5 | High
AI Inventory & Classification | EU AI Act Art.9/16, ICO Accountability Framework | 4 | Standard
Data Quality & Lineage | UK GDPR Art.30, EU AI Act Art.10, ICO Guidance | 4 | Standard
Operational Controls & Resilience | EU AI Act Art.72/73, FCA PS7/21, Operational resilience frameworks | 5 | Baseline
03 — Scoring Engine Logic

From response to
risk position.

The scoring model converts individual question responses into a composite risk classification through a structured sequence. Each step applies a distinct analytical layer; the final output reflects all of them simultaneously. What follows is a summary at a level appropriate for professional review. Specific calibration parameters — weighting ratios, band boundaries, and threshold values — are proprietary.

01

Pre-Screen Contextualisation

Before any question is answered, the instrument collects sector, organisation size, and active regulatory regimes. These inputs activate mandatory disclosure flags, determine how specific response types are handled, and trigger additional domain requirements for organisations operating high-risk AI categories under EU AI Act Annex III.
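As a minimal sketch of this step (every field name, sector label, and trigger rule below is hypothetical, invented for illustration rather than taken from the instrument's actual logic), the pre-screen can be modelled as a context record that switches on downstream requirements before any question is scored:

```python
from dataclasses import dataclass

@dataclass
class PreScreenContext:
    """Hypothetical pre-screen record; real fields and rules are proprietary."""
    sector: str
    organisation_size: str
    regimes: set
    high_risk_annex_iii: bool = False  # EU AI Act Annex III category selected

def mandatory_flags(ctx: PreScreenContext) -> set:
    """Illustrative flag activation driven purely by context,
    before any diagnostic question is answered."""
    flags = set()
    if "FCA" in ctx.regimes:
        flags.add("NAMED_GOVERNANCE_OWNER")      # SM&CR-style accountability
    if ctx.sector == "legal":
        flags.add("CLIENT_AI_DISCLOSURE")        # SRA-style disclosure
    if ctx.high_risk_annex_iii:
        flags.add("ANNEX_III_EXTENDED_DOMAINS")  # extra domain requirements
    return flags
```

Under this sketch, a legal-sector respondent carries the disclosure flag into the output regardless of how any scored question is later answered.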

02

Weighted Question Scoring

Each of the 35 questions is scored on a defined range. Partial compliance receives a distinct score — explicitly differentiated from full compliance and from absence of the relevant control. This distinction is substantive: partial controls that cannot be evidenced carry different regulatory weight from controls that are fully documented and consistently applied.
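The full/partial/absent distinction can be illustrated with a three-tier response scale. The numeric values and the weight below are invented for the example; the instrument's actual question-level scales and domain weights are proprietary:

```python
# Illustrative response tiers; the real scoring scales are proprietary.
RESPONSE_SCORES = {
    "documented_and_applied": 2,   # fully evidenced control
    "partial_or_unevidenced": 1,   # control exists but cannot be fully evidenced
    "absent": 0,                   # no control in place
}

def question_score(response: str, domain_weight: float) -> float:
    """Weighted contribution of a single question to its domain score."""
    return RESPONSE_SCORES[response] * domain_weight

# A partially evidenced control in a heavily weighted domain can still
# contribute less than a fully documented one in a lighter domain:
question_score("partial_or_unevidenced", 1.5)  # 1.5
question_score("documented_and_applied", 1.0)  # 2.0
```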

03

Uncertainty Signal Tracking

Responses indicating uncertainty — where the respondent cannot confirm whether a control exists or is required — are scored at zero and tracked as a discrete signal. Where the aggregate uncertainty signal reaches a defined threshold, it produces an independent adjustment to the risk band assignment regardless of the numeric score. Governance uncertainty is itself a regulatory risk, and the model treats it accordingly.
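In sketch form, assuming a purely illustrative threshold of five uncertain responses (the real threshold is proprietary), the signal accumulates alongside the numeric score rather than inside it:

```python
def apply_uncertainty_signal(responses, threshold=5):
    """Count 'unknown' responses; past an (illustrative) threshold,
    mark the result for band elevation independent of the numeric score."""
    unknown_count = sum(1 for r in responses if r == "unknown")
    return {
        "uncertainty_count": unknown_count,
        "elevate_band": unknown_count >= threshold,
    }

result = apply_uncertainty_signal(
    ["documented"] + ["unknown"] * 5
)
# Five or more unknowns trip the elevation marker under this sketch.
```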

04

High-Risk Category Adjustment

Organisations that identify high-risk AI deployments during the pre-screen — as defined by EU AI Act Annex III — face adjusted scoring requirements across the domains most directly implicated. Identical governance responses produce different results depending on the nature of the AI deployment: a firm using AI for automated credit decisions faces materially different obligations from one using AI for internal operational tasks.

05

Sector Exposure Calibration

The weighted composite score is adjusted by a sector-specific calibration factor. Identical governance gaps carry different regulatory consequences across industries — financial services and healthcare carry the greatest density of AI-specific regulatory obligations. The calibration produces the Adjusted Exposure Score, which determines the risk band assignment.
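Structurally, this step reduces to a per-sector multiplier applied to the weighted composite. The factors below are invented for illustration, since the real calibration parameters are proprietary:

```python
# Hypothetical calibration factors; the real per-sector values are proprietary.
SECTOR_FACTORS = {
    "financial_services": 1.3,
    "healthcare": 1.3,
    "legal": 1.15,
    "general": 1.0,
}

def adjusted_exposure_score(weighted_composite: float, sector: str) -> float:
    """Scale the weighted composite by a sector calibration factor
    (illustrative values) to produce the Adjusted Exposure Score."""
    return weighted_composite * SECTOR_FACTORS.get(sector, 1.0)

adjusted_exposure_score(40.0, "financial_services")  # 52.0
```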

06

Priority Flag Assignment

Certain responses trigger Priority Flags that appear in the output independent of the overall score. A high Adjusted Exposure Score does not suppress a Priority Flag. These flags correspond to binary legal obligations — controls that are either in place and documented, or not. Each is presented with the specific regulatory basis cited.
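Structurally, this means Priority Flags bypass the score entirely. A sketch of that output shape (field names hypothetical, not the instrument's actual report schema):

```python
def report_findings(score: float, priority_flags: list) -> dict:
    """Priority Flags pass through to the output unconditionally;
    the score never filters them (illustrative structure only)."""
    return {
        "adjusted_exposure_score": score,
        "priority_flags": [
            {"flag": name, "basis": basis} for name, basis in priority_flags
        ],
    }

out = report_findings(12.0, [("NO_DPIA", "UK GDPR Art.35"),
                             ("NO_NAMED_OWNER", "FCA SM&CR")])
# Both flags appear in the output regardless of the score value.
```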

07

Risk Band Assignment

The Adjusted Exposure Score is mapped to one of five risk bands. Where the uncertainty signal threshold has been triggered, the band is elevated accordingly. The output — risk band classification, domain breakdown, priority flags, and remediation roadmap — is generated immediately and is structured for board submission and regulatory review.
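Assuming, purely for illustration, that a higher Adjusted Exposure Score indicates greater exposure, and using invented boundaries (the real thresholds are proprietary), the final mapping with uncertainty elevation might look like:

```python
# Illustrative band boundaries only; the real thresholds are proprietary.
BANDS = [
    (20, "Governance Established"),   # Low
    (40, "Foundations Present"),      # Low-Mod
    (60, "Governance Gaps"),          # Moderate
    (80, "Significant Exposure"),     # High
]

def risk_band(adjusted_score: float, uncertainty_triggered: bool) -> str:
    """Map the Adjusted Exposure Score to one of five bands, elevating
    one band where the uncertainty threshold has been tripped."""
    names = [name for _, name in BANDS] + ["Governance Absent"]  # Critical
    index = next((i for i, (upper, _) in enumerate(BANDS)
                  if adjusted_score < upper), len(BANDS))
    if uncertainty_triggered:
        index = min(index + 1, len(names) - 1)
    return names[index]

risk_band(35.0, False)  # "Foundations Present"
risk_band(35.0, True)   # elevated one band: "Governance Gaps"
```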

04 — Risk Band Framework

Five bands.
Precisely defined.

The diagnostic produces one of five risk band classifications. Each band carries a specific heading written to communicate regulatory consequence with precision — designed to produce accurate interpretation by board members, compliance officers, and legal reviewers without translation.

The five-band structure removes the ambiguity of conventional four-band models, where the midpoint is consistently misread as an acceptable position. Every band in this model represents a distinct governance position requiring a distinct response. Numeric band boundaries are proprietary.

Band | Classification | Description | Report Framing
Low | Governance Established | Controls documented, current, and evidenceable. Maintenance posture appropriate. | Maintenance Mode
Low–Mod | Foundations Present | Core controls in place. Targeted gaps require structured, prioritised remediation. | Targeted Gaps Remain
Moderate | Governance Gaps | Material control gaps across one or more domains. Regulatory exposure is likely. | Regulatory Exposure Likely
High | Significant Exposure | Multiple control failures identified. Regulatory intervention cannot be excluded without prompt remediation. | Intervention Required
Critical | Governance Absent | Fundamental governance infrastructure is not in place. Board-level escalation is required immediately. | Urgent Escalation Needed
05 — Regulatory Alignment

Grounded in
applicable law.

Regulatory alignment in the MLA diagnostic is not asserted at a product level — it is visible in the output. Every question references a specific article, guidance paragraph, or supervisory expectation. The instrument does not map generic governance principles retrospectively to regulation. It begins with regulatory text.

The frameworks listed below are those from which the diagnostic's questions and scoring rationale are directly derived. Each finding in the report output carries the specific regulatory citation that makes it usable in a board submission, supervisory response, or procurement disclosure.

EU AI Act · In force August 2024. High-risk obligations from August 2026.

Annex III High-Risk Classification & Governance Obligations

Articles 9, 10, 11, 12, 14, 15, 16, and 27 are referenced directly across the assessment. High-risk AI category selection in the pre-screen activates adjusted domain requirements across the Human Oversight and Compliance sections. Annex III classification is used to determine whether additional scoring weight applies to affected areas.

UK GDPR · ICO enforcement ongoing.

Automated Decision-Making, Transparency & Individual Rights

Article 22 obligations around automated decision-making and human oversight are assessed directly. Article 35 DPIA requirements, Articles 13 and 14 transparency obligations, the Article 5(2) accountability principle, and Chapter III individual rights provisions are referenced across multiple domains. Absence of a DPIA where personal data is processed at scale constitutes a Priority Flag.

Financial Conduct Authority · DP5/22 · PS7/21 · Consumer Duty PS22/9

SM&CR Accountability & Board Governance Requirements

FCA expectations under DP5/22 and PS7/21 require AI risk to be embedded in operational risk frameworks with named Senior Manager accountability. Consumer Duty requires boards to receive management information on AI-driven outcomes. Absence of a named governance owner is a Priority Flag reflecting the binary nature of SM&CR accountability obligations.

Information Commissioner's Office · ICO AI & Data Protection Guidance 2023

Accountability Framework & Automated Decision Transparency

The ICO Accountability Framework's requirements for documented policies, privacy by design, and records of processing are assessed across the Compliance and Governance domains. ICO guidance on automated decision-making transparency and explainability informs the Explainability domain. ICO enforcement priorities around automated processing are reflected in the flag logic.

ISO/IEC 42001 · International AI Management System Standard

AI Management System Architecture

ISO/IEC 42001 provides the structural reference for AI governance documentation, risk assessment cycles, and continuous improvement requirements that complement the regulatory obligations above. The diagnostic assesses alignment with the standard's core requirements as a component of the Governance and Operational Controls domains.

Sector-Specific · SRA · CQC · NHS DTAC · DCB0129

Profession-Specific Obligations

SRA AI Guidance (2024) requirements for client disclosure of AI use are activated for legal sector respondents. NHS DTAC, DCB0129 clinical safety requirements, and CQC oversight obligations are activated for healthcare respondents. Sector selection in the pre-screen determines which profession-specific flags are mandatory in the output regardless of scored question responses.

06 — Controlled Transparency

What this disclosure
does not include.

The methodology disclosure above is complete at the level required for professional due diligence and procurement review. It describes the structural architecture of the instrument, the seven domains assessed, the nature of the scoring model, the regulatory frameworks applied, and the logic governing risk band assignment and Priority Flag generation.

Exact domain weighting ratios, numeric band boundaries, sector calibration parameters, uncertainty flag thresholds, and question-level scoring scales are not included. These elements constitute the proprietary calibration of the instrument.

An AI governance diagnostic whose scoring parameters are fully public cannot produce accurate results. Organisations would construct responses to produce a preferred outcome rather than an accurate one. The integrity of the output depends on the integrity of the calibration.

Institutional reviewers requiring additional technical disclosure for procurement purposes may request a structured briefing through MLA Group directly.

Proprietary Elements — Not Disclosed
  • Domain weighting ratios — Exact percentage weights assigned to each of the seven domains.
  • Band boundary thresholds — Numeric score ranges governing risk band assignment.
  • Sector calibration parameters — The specific calibration factor applied per industry sector.
  • Uncertainty flag thresholds — The point at which accumulated uncertainty signals produce band elevation.
  • Question-level scoring scales — The exact score assigned to each individual response option.
Legal Notice

This diagnostic provides governance maturity assessment and does not constitute legal advice. The MLA AI Governance Diagnostic is an analytical instrument designed to assess organisational posture against published regulatory frameworks. References to the EU AI Act, UK GDPR, Financial Conduct Authority guidance, the Information Commissioner's Office, and related frameworks are provided for contextual alignment purposes only.

MLA Group Ltd is not a law firm and does not provide legal advice or legal opinions. The outputs of this diagnostic — including risk band classifications, domain scores, priority flags, and remediation guidance — should not be relied upon as legal opinion or as confirmation of regulatory compliance. They do not constitute a legal assessment of obligations, a guarantee of regulatory standing, or a substitute for qualified legal counsel.

Organisations are advised to engage qualified legal and regulatory compliance professionals when implementing governance changes, responding to regulatory obligations, or making submissions to regulatory bodies. MLA Group Ltd accepts no liability for regulatory outcomes arising from reliance on diagnostic outputs in the absence of such professional advice.

Apply the methodology
to your organisation.

The diagnostic completes in under 20 minutes. The output — Adjusted Exposure Score, domain breakdown, priority flags, and sequenced remediation roadmap — is generated immediately and structured for board submission and regulatory review.
