Despite growing political focus on AI adoption – including the government’s AI Opportunity Action Plan – many public sector organisations are struggling to scale beyond pilots.

That was the message from digital leaders speaking at the ServiceNow AI Summit in London, where representatives from healthcare and higher education discussed implementing AI in complex public sector environments.
Aaron Neil, VP public sector UK at ServiceNow, maintained that while many organisations are experimenting with AI, few have embedded it into everyday operations.
“Whilst we see a lot of great things happening, particularly pilots across public sector, we see very few real use cases by putting AI to work, getting AI to finish tasks, resolve cases and autonomously drive execution,” he said.
“The tech is ready, the opportunity is recognised, but there are still obstacles that everyone in this room lives day in, day out. Legacy systems, siloed data and limited skills still make it very hard to integrate AI in a meaningful way to drive outcomes,” said Neil.
Building the right governance
Lee Massie, head of IT at Oxford University Hospitals NHS Foundation Trust, said the organisation deliberately focused on policy and engagement before scaling deployments.
“We have a strong policy and strategy that was developed in conjunction with staff and patients,” he said.
Massie said this approach was necessary because AI adoption raises significant questions around transparency and trust in healthcare settings.
“There’s so much technology out there that’s offering the opportunity of AI, but the reality is you need the governance and understanding in place first.”
The Trust has already begun experimenting with AI-assisted clinical documentation, including tools that listen to conversations between clinicians and patients and generate structured notes.
“In some clinics we’re seeing more time with patients. But there are real concerns around legitimacy of the transcription in complex medical language. You have to make sure you’re comfortable that it’s being done correctly,” said Massie.
From experimentation to outcomes
Richard Michel, chief information and digital officer at the University of London, said his organisation has taken a similar approach to governance. Rather than creating a standalone AI strategy, the university integrated AI into its broader digital transformation agenda.
“We chose to develop a digital vision for the university in five years’ time and embed AI within that,” he said.
At the same time, the institution developed an AI policy framework covering teaching, research and professional services.
“We consulted across the university and said: here’s where you can and should use AI, and here are the guardrails and considerations.”
One of the biggest challenges, said Michel, is identifying where AI will deliver real value. “Everyone is introducing AI into their products, so it’s really quite hard to be clear about what you want to achieve,” he said.
“You pick use cases where you can see benefit, map the value you’re trying to achieve, run the pilot and measure that benefit.”
However, he added that organisations must be prepared for governance processes – particularly around data protection – to take time.
“One of the challenges of pilots is that the contractual negotiations around GDPR often take longer than the delivery of the pilot itself.”
Data and culture barriers
Data quality and accessibility remain major barriers to AI adoption across public services.
Massie said generative AI (gen AI) tools have effectively exposed long-standing data challenges within organisations.
“A lot of technology in healthcare has been developed over many years for very specific purposes. Once generative AI can see your environment, it can see things you maybe didn’t realise it could see.”
While this can surface governance risks, he believes it also provides an opportunity to address longstanding issues.
“It’s definitely part of the work plan organisations need to understand – it’s going to surface those challenges and you’re going to have to tackle them.”
Michel agreed that organisations need to start with trusted data sources before expanding AI use.
“For example, in higher education a lot of people are looking at AI to support student queries. Much of that information is already publicly available and trusted, so AI can do a great job of helping students find the courses they want.”
However, more sensitive areas require a different approach. “In wellbeing services, absolutely not – those queries need to pass straight to a human,” he said.
Start with friction
Both leaders believe AI adoption is as much about organisational culture as technology. Massie advised organisations to focus on areas where existing processes cause frustration.
“I would start with friction. Look for the solutions and systems that have grown organically and are adding to the problem of work,” he said.
He also warned organisations not to underestimate the learning curve. “This isn’t a traditional technology. The hype has been amazing – people think AI will fix everything. But the reality is it’s a learning journey.”
Michel said giving staff hands-on access to AI tools is essential. “A lot of people are scared of AI. But once you get your hands on it and experiment a bit, you start to understand what it can do.”