Editorial

Ensuring AI Governance in Government

AI offers the public sector an opportunity to increase productivity and ensure that benefits and services are directed to those most in need. However, it needs guard rails and guidance to ensure that policies are not abused, says Colin Gray, Principal Consultant at SAS UK.

Posted 20 January 2025 by Christine Horton


No one can have failed to notice the buzz around large language models (LLMs), generative AI (gen AI) and tools such as Copilot. Behind the buzz lie serious capabilities for both the public and private sectors to reduce costs and improve services.

In addition, the Prime Minister has just announced The AI Opportunities Action Plan, which sets out the government’s plans to use AI across the UK to boost growth and deliver services more efficiently.

However, the capabilities of AI may now exceed existing governance structures – and this is particularly true for the public sector, said Gray.

“Previously, if an organisation wanted to use models and analytics it had full control over data provenance and data selection. With the explosion of gen AI models and similar, these are trained on data from all corners of the internet and may include everything from bias and incorrect facts to hate speech,” explained Gray.

Gray said another danger is that models can hallucinate, producing false but plausible-sounding responses.

“There are examples where law cases and academic references have simply been fabricated. There is a risk that, if not checked and validated, such fabrications could make it into case law and medical citations,” he said.

Strong governance needed to counter discrimination

Gray maintained that AI could be a real benefit in ensuring that the most vulnerable people in society get access to the benefits and services they need.

But equally, AI could discriminate based on any bias in the data that feeds it, entrenching or exacerbating prejudice against certain groups of individuals. A recent investigation by The Guardian found the DWP’s AI-based system for identifying fraudulent claims for Universal Credit disproportionately flags individuals from certain demographic groups.

“There is an additional risk that policy makers either do not understand the technology or leave it to a small group of analytically minded individuals who may not see the wider policy decisions. The first question that an organisation should ask itself isn’t ‘can we?’, but ‘should we?’,” said Gray.

“This is where a strong governance framework is needed and SAS, with nearly 50 years of experience in this space, can help and support organisations through their first steps and beyond.”

Gray said a robust AI governance framework will:

  • Establish consistent standards for all AI initiatives
  • Bring together multiple disciplines from across the organisation including IT, compliance, data science and customer champions
  • Embed shared risk control processes throughout the AI development lifecycle
  • Clearly define ownership and AI accountabilities across stakeholder groups
  • Establish mechanisms for centralised reporting and issue remediation
  • Differentiate AI governance needs based on usage and risk

Much of this happens before the system goes live, but it is crucial to maintain diligence over the process whilst the system is in operation, said Gray. This includes:

  • Performance monitoring: Accuracy, reliability, robustness, resilience
  • Explainability: Local/individual, cohort and global
  • Bias & Fairness: Disparity metrics, model performance, selection rate
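The bias and fairness metrics above can be made concrete. As a minimal sketch (not SAS tooling, and using purely illustrative data rather than anything from the DWP system), the following computes per-group selection rates for a binary decision model and a disparity ratio between groups:

```python
# Illustrative sketch: selection rate and disparity ratio for a binary
# decision model. Group labels and decisions are made-up sample data.
from collections import defaultdict

def selection_rates(groups, decisions):
    """Fraction of positive (flagged/selected) decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, d in zip(groups, decisions):
        totals[g] += 1
        positives[g] += int(d)
    return {g: positives[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of lowest to highest selection rate (1.0 = perfect parity).
    A common rule of thumb treats ratios below 0.8 as needing review."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
decisions = [ 1,   0,   0,   0,   1,   1,   0,   1 ]  # 1 = flagged

rates = selection_rates(groups, decisions)
print(rates)                   # {'A': 0.25, 'B': 0.75}
print(disparity_ratio(rates))  # 0.25 / 0.75 ≈ 0.33 → warrants review
```

Reporting a single ratio like this is one way such metrics can be surfaced to non-specialists, in the spirit of Gray’s point below that monitoring output should not require deep analytical expertise to understand.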

“To the point earlier on, these metrics should be shareable across the business – with policy makers, auditors, customer service representatives and more widely – in a way that doesn’t require deep analytical expertise to understand. There should also be a segregation of duties, such that those who built the system are not responsible for monitoring and maintenance,” said Gray.

“AI and its variants offer the public sector an opportunity to increase productivity and ensure that benefits and services are directed to those most in need. However, it needs guard rails and guidance to ensure that policies are not abused and that as humans we understand where it is effective and where a human in the loop is needed.”


If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today.

