The UK needs a far more rigorous approach to evaluating AI governance if it is to maintain public trust and safely scale AI adoption across public services, according to a new report from the Ada Lovelace Institute.

The report, *Measuring Up*, argues that while policymakers are focusing on creating AI principles, frameworks and voluntary commitments, they have paid far less attention to whether those measures can be meaningfully put into practice.
Researchers warn that without better evidence, metrics and accountability mechanisms, governments and regulators may struggle to determine whether AI governance interventions are genuinely reducing harm or just creating the appearance of oversight.
“Despite rapid developments in AI governance, there is little evidence about whether governance interventions are achieving their intended aims,” the report notes.
## What “measuring” AI governance means
A central theme of the report is that governments increasingly need ways to assess whether AI governance systems are effective in practice, rather than simply measuring whether organisations have adopted policies or frameworks on paper.
According to the institute, current approaches often focus on process indicators – such as whether organisations have published principles, completed risk assessments or adopted governance tools – rather than measuring real-world outcomes. The report argues that this creates a significant blind spot for policymakers and regulators attempting to evaluate the societal impact of AI systems.
Researchers said effective measurement should instead examine whether governance mechanisms are:
- Reducing harms
- Improving accountability
- Increasing transparency
- Protecting rights
- Building public trust
The report notes that this is particularly important in high-impact public sector contexts such as healthcare, welfare, policing and local government services, where AI systems can directly affect citizens’ access to services and opportunities.
“Good governance cannot simply be assumed from the existence of governance mechanisms,” according to the report.
## Public sector implications
The report raises concerns that governments internationally – including the UK – are rapidly introducing AI governance frameworks without establishing clear methods for evaluating success. The institute said this risks creating fragmented oversight systems with limited understanding of what interventions are most effective.
For the UK public sector, the report highlights several emerging challenges:
- Limited evidence on the effectiveness of current AI assurance approaches
- Inconsistent reporting and transparency practices
- Difficulties measuring societal impacts and downstream harms
- Limited public visibility into how AI governance decisions are made
It also warns that many governance approaches remain reliant on self-assessment and voluntary disclosures by organisations developing or deploying AI systems.
Researchers argue that stronger independent scrutiny and standardised measurement approaches will be needed as AI becomes more deeply embedded across public administration and public services.
## Recommendations for regulators and policymakers
The Ada Lovelace Institute calls for governments and regulators to move beyond high-level AI principles and develop more mature evaluation capabilities. Among the report’s key recommendations are:
- Developing clearer definitions of what successful AI governance outcomes look like
- Investing in shared metrics and evaluation methodologies
- Improving transparency around governance performance
- Supporting independent auditing and assessment
- Creating mechanisms for ongoing monitoring rather than one-off compliance exercises
The report also stresses the need for greater interdisciplinary collaboration between regulators, policymakers, researchers, civil society and technical experts.
It argues that AI governance measurement should not be treated merely as a technical exercise, but as a broader public policy challenge involving social, legal and democratic considerations.
“Measuring AI governance is inherently political,” says the report, adding that decisions about what to measure “reflect societal priorities and values”.
The report concludes that stronger evaluation frameworks will become increasingly important as governments move from experimentation with AI toward wider operational deployment.
It maintains: “Without effective measurement, it will be difficult to know whether AI governance efforts are succeeding, where they are failing, and how they should evolve over time.”