Editorial

From Reactive to Agentive: How AI is Redesigning Public Services

As the UK government accelerates efforts to embed AI into public service delivery, a shift is underway from reactive systems to agentive ones, capable of anticipating needs, personalising support, and scaling human capacity.

Posted 24 October 2025 by Christine Horton


Can AI enable the government to move beyond digitising processes to better anticipate needs, personalise experiences and augment the human element of public service?

The topic was up for discussion this week at AI for Government, where tech leaders examined how public services can go from being merely reactive to agentive. The past couple of years have seen digital government leaders challenged with moving from the promise of AI to production.

“This year, it’s been all about turning from generative to agentic solutions,” said Deepak Shukla, public sector data & AI strategy lead at AWS. “Agentic AI promises the acceleration of value for stakeholders – and the conversation is moving from small pilots to broader transformational agendas around AI.”

Acting as Customer Zero

For Matt Schutz, EVP for strategic alliances and AI solutions at Capita, the agentic era isn’t about replacing people but about amplifying their capacity.

“We see agentic AI as labour that scales infinitely,” Schutz explained. “It’s about doing more with the same number of people – not more with fewer. Once you get a process going, it can scale across countless interactions.”

Capita’s approach is to act as a test bed for innovation.

“Government understandably finds it hard to experiment at scale. So we’re trying to be customer zero – to pilot, deploy and refine agentic AI safely within Capita, and then share those proven models into our public sector partnerships,” said Schutz.

That approach aligns with government ambitions to use AI to accelerate outcomes, rather than reinvent services wholesale.

“It’s not about redesigning everything, but about accelerating what works – using agentic AI to get to better outcomes faster,” said Schutz.

Justice, Empathy and AI

For the Ministry of Justice, AI’s role is as much about enhancing empathy as efficiency. CDO and chief scientific adviser Hugh Stickland explained: “We serve some unique citizens – offenders taking their first steps out of prison, victims of serious crime, families going through separation. These are people in crisis. AI is not something they necessarily want at that moment.”

“The question for us,” he said, “is how can AI help us serve them better – not replace the human relationship?”

AI is already supporting the justice system in practical ways, from speeding up case processing to providing victims with clear information about their rights. “We’ve developed a chatbot that helps someone understand what they’re entitled to under the Victims’ Code,” he said. “Rather than replacing humans, it helps us respond more quickly and consistently.”

But ethical and data challenges remain. “It’s difficult in AI if our data is so fragmented,” said Stickland. “And we have to go from specifics to systems, because people’s needs often cut across justice, health, and welfare.”

Stickland said that the Ministry of Justice’s AI Ethics Framework provides the foundation for responsible innovation. “It’s about ensuring data is secure, and about user testing not just for usability, but for whether AI has actually improved people’s lives.”

Designing for Agents as Well as Humans

As AI agents begin to interact directly with government systems, the design paradigm itself is changing.

“We’re not designing future applications and systems just for humans,” said AWS’ Shukla. “We’re designing them for agents as well.”

Citizens, too, may use their own AI assistants to engage with public services – meaning governments must prepare for machine-to-machine interactions.

“If our citizens start adopting agentic approaches to engage with councils or departments, that’s going to happen. So the future systems we design must take those interactions into account,” said Shukla.

Schutz agreed that user-centred design now requires a new mindset.

“In traditional IT, you might spend 80 percent on development and 20 percent on testing. With AI, it’s inverted,” he said. “You can create a working model quickly – but real learning comes from testing it with users, understanding unpredictable outcomes, and improving iteratively. That’s how you build trust and safety into these tools.”

The Road Ahead

When the speakers were asked whether government is ready for agentic AI, the consensus was one of cautious optimism.

Stickland noted that much of the progress comes from quietly embedding AI into everyday workflows.

“We’re getting our staff to test these tools, but we don’t frame it as ‘AI’ – we just say, this will help you transcribe notes faster,” he said. “That’s what they need to know. Realistically, it’s about helping people do their jobs better.”

Meanwhile, Schutz underscored the need for a structured “scan” phase: dedicated time for exploration and cross-sector collaboration before deployment.

“The more you can do in that scanning phase – bringing multidisciplinary teams together to ask big questions – the better,” he said. “That’s where real transformation starts.”


If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today?
