In a session at the recent Think AI for Government event, top data science and government technology experts warned that without robust, high-quality data foundations, AI initiatives risk becoming ineffective, biased, and potentially damaging to public trust.

“Artificial intelligence holds immense potential to transform government operations,” said Dr Iain Brown, head of data science, Northern Europe at SAS. “But its success fundamentally depends on data readiness.”
Current statistics paint a stark picture: only 32 percent of government departments report being highly ready, in terms of data preparation and capability, to feed AI applications. The consequences of poor data management extend beyond operational inefficiencies. Degraded AI models can cost organisations significantly; in the private sector, data quality issues can impact up to six percent of revenue.
In government, where decisions directly affect citizens’ lives, the stakes are even higher. Dimitris Perdikou, government chief engineer at the Government Digital Service (GDS), said that data quality is not just an IT problem, but an organisational challenge.
“Everybody in the organisation needs to work together to resolve data quality issues,” he said. “Even once you feed data into an AI model, you can get very poor output based on poor data quality.”
Key challenges in government data readiness
Legacy systems and data silos represent significant obstacles. Many government departments still operate with outdated infrastructure that makes data sharing and integration difficult. Perdikou, drawing on his experience building the first data office at the Home Office, highlighted the need for comprehensive data governance.
“We need dedicated efforts in data cataloguing, continuous data sharing, and accessibility,” he said. “It doesn’t have to be a massive undertaking, but there must be consistent, focused attention.”
Innovative solutions: Synthetic data and collaboration
The government is exploring innovative approaches to address data limitations. Synthetic data generation is emerging as a promising technique, allowing organisations to extract behavioural patterns while maintaining privacy.
“You can start to identify where bias exists and potentially smooth out that bias,” said Dr Brown.
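The speakers did not describe specific tooling, but a minimal sketch of the underlying idea, assuming a purely hypothetical dataset and column names, is to learn only aggregate statistics from real records and then sample new records from the fitted distribution, so individual rows are never reproduced while overall patterns, including any skew worth examining, remain visible:

```python
# Minimal sketch of one common synthetic-data approach (illustrative only):
# learn aggregate statistics from a dataset, then sample fresh records from
# the fitted distribution rather than reusing any individual row.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical "real" dataset: applicant ages and case processing times.
real = pd.DataFrame({
    "age": rng.normal(40, 12, 1_000).clip(18, 90),
    "processing_days": rng.normal(30, 10, 1_000).clip(1, None),
})

# Capture only aggregate patterns: per-column means and the covariance matrix.
mean = real.mean().to_numpy()
cov = real.cov().to_numpy()

# Draw synthetic records from the fitted multivariate normal distribution.
synthetic = pd.DataFrame(
    rng.multivariate_normal(mean, cov, size=1_000),
    columns=real.columns,
)

# Comparing summary statistics helps surface where the source data is skewed.
print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```

In practice departments would rely on dedicated synthetic-data tooling and far stricter privacy validation; the sketch only shows why sampled records can mirror a dataset’s patterns, and its biases, without copying any individual.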
Collaboration across disciplines is equally crucial. The traditional siloed approach – with policy professionals working separately from digital teams – must evolve.
“Policy and digital teams can’t work in isolation,” said Perdikou. “We’re seeing increasingly complex domains where policy, systems, and data are merging.”
Transparency and accountability are paramount. As the EU AI Act approaches implementation, government agencies must develop robust data governance frameworks that go beyond mere compliance.
“It’s not just about following rules,” said Dr Brown. “It’s about building trust and accountability in these systems.”
Practical steps for government organisations
For agencies looking to improve their AI readiness, experts recommend starting small:
- Create a basic data catalogue, even if it’s just a spreadsheet initially (a minimal sketch follows this list)
- Explore synthetic data generation for specific, well-understood datasets
- Foster cross-functional collaboration between policy, digital, and data teams
- Invest in continuous learning and upskilling of existing workforce
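None of the panellists prescribed a format for that first catalogue, but as a minimal sketch of the “spreadsheet first” idea, the short script below scans a hypothetical folder of CSV files and writes one catalogue row per dataset; the folder name, owner field, and chosen metadata are illustrative assumptions, not a GDS standard.

```python
# Minimal sketch of a "spreadsheet first" data catalogue (illustrative only):
# scan a folder of CSV files and record basic metadata about each dataset.
import csv
from pathlib import Path

import pandas as pd

DATA_DIR = Path("data")                 # hypothetical folder of departmental CSVs
CATALOGUE = Path("data_catalogue.csv")  # the catalogue itself is just a CSV

rows = []
for path in sorted(DATA_DIR.glob("*.csv")):
    df = pd.read_csv(path)
    rows.append({
        "dataset": path.name,
        "rows": len(df),
        "columns": ", ".join(df.columns),
        "missing_values": int(df.isna().sum().sum()),
        "owner": "TBC",                 # to be assigned to a responsible team
    })

fieldnames = ["dataset", "rows", "columns", "missing_values", "owner"]
with CATALOGUE.open("w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)

print(f"Catalogued {len(rows)} datasets in {CATALOGUE}")
```

Even a simple listing like this gives a department a shared view of what data it holds, who owns it, and where the gaps are, which is the starting point for the governance work the panel described.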
With the government aiming to grow digital and data professionals from five percent of the civil service to a more substantial proportion, upskilling becomes crucial. Perdikou suggested an approach centred on curiosity and self-learning.
“Ask questions, be curious,” he said. “Most people are eager to learn and will invest their own time in understanding new technologies.”
Dr Brown concluded, “Preparing departments with robust data foundations is the key to unlocking meaningful, responsible AI applications.”