Civil servants across government need a broad understanding of AI, but not everyone needs to become a technical expert, senior leaders told a panel on AI capability in the public sector.

Speaking during a discussion on the skills needed for AI adoption at Think AI for Government, Claudia Lundie, head of data learning, culture and engagement at the Home Office's Data and Identity Directorate, said departments should begin with confidence and judgment rather than software training.
“When we start asking colleagues about AI, we have to move away from ‘where do we press the button?’,” she said. “We need to start with AI literacy, which is about confidence… the questions to ask.”
Lundie said the Home Office had focused first on teaching staff the fundamentals of responsible AI use, including governance, ethics and safety, before moving on to practical tools and innovation.
“It’s only when you know what you can do that you can start showing them the tooling. Then you can really do the exciting innovation part and show them the art of the possible,” she said.
She added that leaders do not need to become technologists, but must understand enough to take ownership of AI systems. “It’s not about making them tech experts, it’s about making them effective owners,” she said.
A ‘T-shaped’ skills challenge
Gemma Elsworth, head of data capability at DWP Digital, said AI capability should be viewed as a “T-shaped” skills challenge: broad awareness for all staff, combined with deeper expertise in specialist teams.
“There’s a need for everyone – from the person who keys in data to our most senior leader – to have a broad understanding of AI and its capabilities,” she said. “And then there’s the deeper understanding and knowledge that people in our technology teams and our data teams need.”
Elsworth warned that simply giving staff access to generative AI tools without proper controls could create new risks.
“Yes, we can give everyone access to Copilot,” she said. “But should they do that? And have we put in the right guardrails around that so that we can enable people to do things safely?”
Richard Gurney, director of education & innovation at Sparta Global, said organisations should look beyond simply inserting AI into existing workflows and instead use it to rethink how services operate.
“We’re not going to plug an AI solution into this process,” he said. “We need to look at the bigger picture – what’s the opportunity to actually change process, make it more efficient and safer?”
He added that successful adoption would depend on combining technical skills with new ways of working, while ensuring staff did not simply accept whatever AI systems produced.
Lundie and Elsworth both stressed that data quality remains central to successful AI deployment.
“You do not have one without the other,” Lundie said of AI and data literacy. “Everyone needs to understand their responsibility.”
Elsworth said poor-quality data would inevitably undermine AI outputs.
“If the data that we’re giving it is rubbish, then what’s going to come out of the AI at the other side of it is rubbish,” she said.
She added that AI governance should become as embedded in government as data protection and cyber awareness training.
“A lack of knowledge is also a security risk. It’s going to become as important as our data protection training,” she said.
Training specific to the public sector
The panel also argued that public sector organisations need AI training tailored to government work, rather than relying solely on private-sector learning products.
Lundie said much commercial training focuses on sales or marketing use cases that do not resonate with officials. “If we collaborate together to design learning specific for the public sector, it’s definitely a thing,” she said.
Elsworth agreed, noting that government departments are driven by service outcomes rather than profit. “You’re not doing it in order to make profit,” she said. “You’re doing it in order to make a life of a citizen better.”








