Editorial

Q&A: Making AI work for government

Dr Jennifer Barth, director of research at FSP, explores why AI’s real impact in government lies beyond pilots and productivity gains – and what it will take to embed it meaningfully and responsibly.

Posted 12 May 2026 by Christine Horton


While many organisations across government are experimenting with AI, far fewer have embedded AI in ways that genuinely transform how they operate.

In this Q&A, Dr Jennifer Barth, director of research at FSP, argues that government now faces a critical moment: move beyond incremental gains and hype, or risk missing the deeper opportunity AI presents to rethink how public services are designed, delivered and improved.

AI is everywhere in political speeches and digital strategies right now. From your perspective, where are we already seeing the most meaningful real-world uses of AI in government, and where is the hype still ahead of the reality?

In the research we did with Microsoft in 2025, we found that around 65 percent of public sector respondents were already experimenting with AI in some way. But "fully integrated AI use" scores remain far lower (around 30 percent), highlighting a big gap between pilots and scaled use. And, crucially, readiness, confidence and infrastructure vary widely between subsectors.

For the most part, the most meaningful uses of AI in government today are quiet productivity gains – summarising large volumes of evidence, drafting policy briefs, analysing consultations and improving citizen services through better triage and response. But many of these are business as usual, just a bit quicker.

Real change will come when organisations begin to transform their thinking, not merely augment their existing actions. We also need to implement agentic AI carefully – using it to accelerate analysis and surface insights, combined with human judgement, context and responsibility.

AI can automate analysis and even decision support. How should governments think about the balance between automation and human judgement – particularly when the decisions affect citizens’ lives?

AI should be thought of as a decision-support tool, not a decision maker. It can surface patterns in data that humans might miss and dramatically speed up analysis, but it can't understand lived experience, fairness, or the broader social context in which government operates. AI is particularly good at responding quickly to initial queries, or at points in a process that can be triaged quickly. But for decisions that materially affect citizens, human oversight – and human engagement – remains essential.

The operating principle should be augmentation and collaboration, not replacement. AI expands the capability of public servants, but accountability remains human – as does empathy. Even if AI can be argued to be capable of empathy, it is important for humans to stake their claim to this capability, and to build and manage AI systems in ways that preserve the balance between AI and humanness.

Beyond tools and technology, how might AI fundamentally change how government operates – things like policymaking, service design or frontline delivery?

What I would like to see is more real-time or near-real-time development cycles – policy moving from periodic reviews to continuous learning. Service design could become more personalised and proactive, anticipating needs rather than reacting to problems. But this will only happen if organisations adapt culturally. We build or use AI at the points in processes where change is possible and necessary, yet as these tools are used they create new ways of working. Governments need to seize that opportunity – to recognise and adapt to how AI changes the way institutions learn, make decisions and collaborate.

Most UK workers have had little formal AI education despite using the tools daily. What should governments be doing now to build confidence and capability across the workforce so that people can use AI well?

The Microsoft research we did showed that confidence with, and preparedness for, AI among public sector workers is uneven. About 41 percent of respondents say they feel unprepared or, crucially, unsupported in using AI effectively. In addition, 37 percent of public sector leaders say they would use AI more if given clear training and safeguards – a response that is consistent across all the public sector sub-sectors in the study. There is clearly a need to build confidence and to create clear learning pathways for AI.

To do this, I believe in building what we call ‘workforce resilience’. This is a holistic approach to building a sustainable relationship with AI. This includes five key aspects:

1. Resource for AI enablement – resource properly in a RACI and set out good governance

2. Create AI literacy and tooling foundations through a use-case or role-based curriculum

3. Create 'ways of working with AI agents' playbooks and quick-reference guides

4. Build a human skills development track

5. Put in place a comprehensive change management strategy, execution and measurement framework

It seems like a lot, but if you take a use-case-first approach you can start small and slowly build the resilience required.

Many organisations bring tools in and say "use them", or offer only baseline training on which button does what. AI is going to change the way you work, so you need a streamlined, deliberate strategy that develops through each use-case rollout and focuses on creating the best relationship: collaboration between AI and humans.

Women are still significantly underrepresented in AI and data roles, even though the UK needs a much larger digital workforce. Why is it so important that women are part of shaping AI in government, and what practical steps can organisations take to make sure they are?

We want to use AI that feels like it was developed for us – systems, and the ways we use them, reflect the perspectives of those who build them. If women are not involved in shaping those systems, we risk embedding blind spots into technologies that will influence public services and policymaking. And, like anything, we are drawn to things we believe were built with us in mind – that is the core of inclusivity.

At the same time, we can't come to these systems thinking we don't already belong – women are a crucial part of everything, even when they seem invisible. A large part of my career, and of my thinking from my PhD onwards, has been about "making things visible". We can't assume we are not there; we are just not seen. We do need a more inclusive sense of who counts as creators and engineers, and more women in STEM, but perhaps more important still, we need to recognise everything that goes into a technological build – the people, ideas and processes that surround the technology itself. We need to bring those to light through visibility, leadership pathways and interdisciplinary roles that allow policy, social science and technical expertise to work together – and to give each contribution equal importance. AI should not be shaped by technologists alone.

Looking ahead five or ten years, what do you think is the biggest opportunity AI presents for government, and what is the biggest risk if we get it wrong?

I think it’s inherently difficult for government organisations to be truly agile. Agility comes with costs – including the likelihood of failure – and the public service often struggles to justify those potential losses. Too often, the risk is seen in trying something that might not work, rather than recognising that experimentation is a crucial part of learning.

The real risk lies in not taking those lessons forward: in failing to build on them, to form cross‑functional teams that can support the process, welcome change, and adapt quickly. To move forward and embrace the opportunities that AI presents, the public sector needs to find a way to build agility into its structures and practices.

That could mean rapidly developing use cases to test, iterate, and either discard or scale. It could also mean starting small – strengthening workforce resilience through simple but effective governance frameworks, refreshed ways of working, or processes that enable some teams to experience change and share their insights with others.

Ultimately, agility is the opportunity that AI brings. The transformation required depends on making agility a foundational part of the public sector’s approach.

If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today?


Register Now