Editorial

Making AI work in public services

From weather forecasting to frontline services, AI is already reshaping government. But leaders warn the real challenge now isn’t just adoption – it’s using it well.

Posted 19 March 2026 by Christine Horton


Artificial intelligence (AI) is no longer a future ambition for government – it’s already being embedded in how services are designed, delivered and experienced. That was the message from a discussion at Think Women in Digital Government on the future of digital government.

As Kirstine Dale, chief AI officer at the Met Office, put it, AI has “come up the inside lane” and is now “fundamentally changing the way that we live our lives… in all respects”.

Dale advised combining high-impact, large-scale innovation with everyday practical use. At the Met Office, this ranges from advanced AI models for forecasting to making tools like Copilot available across the workforce.

“It’s part of our daily lives… but it’s also fundamentally changing how the whole organisation works,” she said, pointing to applications ranging from HR and finance to core scientific modelling.

However, the message for public sector leaders is to focus less on productivity headlines and more on meaningful improvements to services. To that end, Dr Jennifer Barth, director of research at FSP, said there should be a shift in thinking away from tools and towards outcomes.

“Rather than ‘here is your tool, try to use it’… what happens if you change what you do?” she asked. “How could you do it better? How could you do it in better quality?”

Build capability, not just use cases

Another recurring theme was the need to treat AI as a long-term capability, not a series of isolated pilots. Barth argued that government must move beyond experimentation: “The best way forward is thinking of AI as a strategic capability, rather than quick fixes.”

That requires investment in workforce confidence as much as technology. Research cited during the session found that around half of public sector staff feel unprepared to use AI effectively, and many leaders want stronger training and safeguards.

Key, she said, is closing that gap: “How do we move from fear… to being the creators of that change?”

Both speakers stressed that adoption must begin with tangible, low-risk use cases. Barth advocated a grassroots approach: “Start small… bring a team together, be creative.”

As part of that, demonstrating real examples is critical. “The amount of times I’ve seen people say, ‘I don’t know how that’s going to work’ – and then you show them something… and suddenly they can see it,” she explained.

Dale described a similar strategy at the Met Office, focused on shifting mindsets: “We’re trying to move people from concerned to curious to confident.”

That has required a structured programme of training, communities of practice and internal showcases, which she described as “raising the floor” of AI understanding across the organisation.

Public sector AI is different

Both experts highlighted a key distinction between public and private sector adoption.

For Barth, the difference lies in how value is defined: “In the private sector, return on investment is about productivity… in the public sector, it’s about experience.”

The central question is not just efficiency, but impact: “Are we making citizens’ lives better?”

This also reinforces the importance of human oversight. “We need a relationship with those machines to remember the human,” she said, warning against over-reliance on automated decision-making.

Don’t ignore the environmental cost

Alongside opportunity, the panel addressed growing concerns about the environmental impact of AI.

Dale acknowledged the tension directly: “AI has enormous potential… but it is not without its costs.”

Training large models is computationally intensive, raising questions about energy use and emissions. This creates what she described as a “green AI paradox” – where a technology that could help address climate change also contributes to it.

“The first step… is to get a really good grip on the measurement,” she said, calling for better data to inform responsible deployment.

Guard against “AI laziness”

Perhaps the most pointed warning from the session concerned the risk of over-reliance. Barth raised concerns about the long-term impact on skills and critical thinking: “How much are we losing that critical thinking ability?”

While experienced professionals may use AI to accelerate work, she warned that newer entrants risk skipping essential learning.

“The laziness argument is a job for every single person to not let that happen,” she said.

Instead, she encouraged openness and accountability: “Just tell people, ‘yes, I used AI’… own it.”

Dale reinforced this, calling for a mindset that combines curiosity with responsibility: “Be curious, be critical… be bold.”

Diversity will shape the future of AI

The session closed with a warning that the benefits of AI will be limited without greater diversity in the workforce.

Dale highlighted that women still make up only around 20% of the tech workforce in the UK – a figure that has barely shifted in recent years.

“We need to reset diversity in technology,” she said, linking it directly to innovation: “It stimulates innovation… and now is the moment we need innovation.”

Barth added that broadening participation does not mean everyone needs to become an engineer: “You don’t have to be coding… many more of us are part of this than we think.”


If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today?
