As the UK government increasingly explores the use of AI and generative AI (gen AI) technologies, the need for robust governance frameworks has become paramount. At the recent Think Data for Government event, experts delved into the intricacies of building responsible AI in the UK government.
Defining responsible AI

The discussion began with a clear definition of responsible AI, provided by Laura Petrone, principal analyst for strategic intelligence at GlobalData. She described ‘responsible AI’ as an umbrella term encompassing internationally recognised principles, such as transparency, accountability, and reliability. She stressed the importance of adhering to emerging international standards and obtaining relevant certifications to demonstrate an organisation’s commitment to responsible AI practices.
Proactive approach to governance
Dr. Iain Brown, head of data science in Northern Europe at SAS and adjunct professor at the University of Southampton, underscored the need for proactivity in AI governance.
“Legislation and regulations will come, but as we know, these take time to catch up with what we’re seeing in the movement of this, and you don’t want to be caught out,” he said.
Brown emphasised the importance of organisations taking a phased approach, starting small and scaling their AI applications. “This is a good way to get on board with this in a way that’s robust and trustworthy,” he said.
Developing a responsible AI framework
Dr. Shruti Kohli, head of data science innovation and AI at DWP Digital, shared DWP’s experience in developing a responsible AI framework.
Kohli cited DWP’s Lighthouse Programme, which aims to safely accelerate the use of AI. The programme centres around an evidence-based, ‘test and learn’ approach to the adoption of gen AI. “We started with some proof of concepts where we wanted to understand, can it help to query the policy and guidance documents better? Or can it help us to understand and build our code base better? So there was a lot of learnings,” she said.
The programme helped DWP identify six key principles: explainability, mitigation, control, understanding the value of AI, being led by values, and ensuring human involvement at different stages of the project.
Kohli also highlighted the importance of collecting evidence and learnings from proof-of-concept projects to shape the responsible AI framework. At the same time, an AI delivery board and an AI assurance group facilitate informed decision-making and ensure responsible practices.
Addressing data and AI literacy
Elsewhere, the panel acknowledged the significant gaps in data literacy and AI literacy, particularly within the public sector.
Brown shared findings from a study conducted by SAS, which revealed that only 38 percent of government leaders could define what gen AI is, compared to 49 percent across all sectors.
To address this challenge, Kohli called for targeted training and critical assessment of AI outputs, ensuring that users understand the limitations and potential biases of these technologies. She also highlighted the role of tools like Microsoft’s Copilot in helping users critically evaluate content.
Bridging the guidance gap
The panellists also recognised the need for more guidance and clarity on how to operationalise the principles of responsible AI.
Petrone pointed to regulatory sandboxes as valuable tools to bridge this gap, providing organisations with resources and a safe environment to experiment with AI innovations. She also stressed the importance of establishing governance frameworks for exchanging best practices and ensuring consistency in data quality and management standards across different countries and jurisdictions.
“It will be important in the future to also have governance when it comes to standards about data governance, data management and the quality of data,” she said.
Environmental sustainability considerations
The discussion also touched on the environmental impact of AI, with an audience question raising the concern that government use of AI and data storage accounts for the second-highest contribution to carbon emissions.
The panellists acknowledged the need to address this aspect, with Brown highlighting the importance of optimising processes and using smaller, more efficient language models.
Additionally, Kohli shared that DWP is actively working with partners to understand the environmental impact of different AI models and to ensure that they are using the most appropriate and sustainable solutions.