Artificial intelligence (AI) holds enormous promise for improving public services, a potential already recognised by ministers.

Plans to equip civil servants with AI tools (known as Humphrey), and enable data-sharing across departments, could transform how the sector operates – hopefully alleviating many of those well-known frustrations citizens experience when dealing with government agencies.
Faster service delivery and reduced waiting times, from hospital appointments to passport applications, represent only the tip of the iceberg. AI could significantly enhance outcomes in diverse areas, such as improved healthcare quality or more effective fraud detection within HMRC and DWP, ultimately saving taxpayer money through increased operational efficiency.
However, despite substantial private-sector adoption, the public sector faces significant barriers. Foremost among these is data quality. AI’s effectiveness hinges directly on the quality of its input data. Inaccurate or incomplete datasets risk flawed decisions that adversely impact individuals and erode public trust.
Capturing new, accurate data, while organising and cleansing historic data, is a mammoth task, especially for institutions like the NHS – the UK’s biggest employer – which serves 1.3 million people a day. Its sheer size means that data is often siloed in different departments and systems, including legacy software and locally stored files. Unless it has been regularly reviewed and cleansed, much of it could be out of date, incomplete, inaccurate or simply difficult to access.
Moreover, public-sector data frequently contains highly sensitive personal information. Any AI application must guarantee robust protection of privacy, security, and fairness, avoiding biases that could unjustly disadvantage certain individuals or groups.
Laying the foundations with synthetic data
AI is particularly suited to the enormous task of organising, structuring and segmenting data, and layering it with more detail, since it can identify patterns and anomalies far more quickly and accurately than humans can. Still, this doesn’t create a perfect dataset – especially when it comes to understanding data from under-represented groups, or where the data might identify an individual, as in the case of rare diseases.
However, synthetic data, which is already being trialled in some government departments, offers a solution to this.
By synthesising new data that replicates the patterns in records from multiple sources, including government departments, you can depersonalise it. When the generation process carries formal privacy guarantees – a framework known as differential privacy – the underlying patterns and relationships are preserved while anonymity is maintained, so you can get value from datasets that would otherwise be too small or too sensitive to use.
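To make the idea concrete, here is a deliberately simple sketch – not a description of any department’s actual pipeline. It estimates a dataset’s centre and spread through noisy, differentially private queries, then samples entirely new records from those noisy estimates; real systems fit far richer generative models, but the principle is the same:

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw one sample from a zero-mean Laplace distribution (inverse transform)."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, lower, upper, epsilon, rng):
    """Differentially private mean: clip each record to [lower, upper],
    then add Laplace noise calibrated to the query's sensitivity."""
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clipped)  # max change from one record
    return sum(clipped) / len(clipped) + laplace_noise(sensitivity / epsilon, rng)

def synthesise(values, n, lower, upper, epsilon, rng):
    """Generate n synthetic records from noisy estimates of centre and spread,
    splitting the privacy budget (epsilon) between the two queries."""
    centre = dp_mean(values, lower, upper, epsilon / 2, rng)
    spread = dp_mean([abs(v - centre) for v in values],
                     0, upper - lower, epsilon / 2, rng)
    return [centre + laplace_noise(spread, rng) for _ in range(n)]

# Invented example: synthetic "patient ages" that mimic the real
# distribution without exposing any individual record.
rng = random.Random(42)
real_ages = [rng.gauss(45, 12) for _ in range(500)]
fake_ages = synthesise(real_ages, 1000, 0, 100, epsilon=1.0, rng=rng)
```

The synthetic records track the real distribution’s shape, but no single synthetic “age” corresponds to a real person, and the privacy budget caps how much any one record can influence the output.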
Advances in generative AI mean we’re able to create this data at the speed, scale and, importantly, accuracy required. This allows us not only to understand current trends but also to make better predictions.
To borrow an example from financial services, more people pay their mortgage than default on it – but the far more limited data from defaults could hold the key to predicting when they might happen, so plans can be put in place to prevent them.
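One common remedy for exactly this kind of imbalance is to synthesise new examples of the rare class by interpolating between real ones – the idea behind techniques such as SMOTE. A minimal sketch, using invented feature vectors (the field names are purely illustrative, not a real lender’s schema):

```python
import random

def oversample_minority(minority, n_new, rng):
    """SMOTE-style sketch: build synthetic minority-class records by linear
    interpolation between random pairs of real minority records."""
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)   # two distinct real records
        t = rng.random()                 # interpolation weight in [0, 1)
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

# Hypothetical default records: [loan_to_value, missed_payments]
rng = random.Random(7)
defaults = [[0.90, 2.0], [0.85, 3.0], [0.95, 1.0], [0.80, 4.0]]
extra = oversample_minority(defaults, 100, rng)
```

Because each synthetic record is a convex blend of two real ones, the new examples stay inside the envelope of observed defaults – enough extra signal for a model to learn from, without inventing implausible cases.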
The industry regulator, the Financial Conduct Authority (FCA), is also exploring the potential of synthetic data to address longstanding issues such as financial crime and fraud. These are the kinds of organisations the public sector could look to, because they approach the technology with a suitable mix of innovation and caution.
Reducing bias and building public trust
One of the biggest – and legitimate – concerns around AI is bias.
In healthcare, this could mean that people with certain demographic characteristics don’t receive the right diagnosis or treatment because models have been trained on data from the majority of the population. In policing, it could lead to racial bias and wrongful arrests, as we’ve already seen with facial recognition software.
The solution to this is both human and technological. On the human side, we need to apply critical thinking to AI decisions, and not unquestioningly treat them as fact. From a technological perspective, we train models to identify and remove potential biases so that users, and the public, can trust the AI.
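One widely used check of this kind is demographic parity: compare the rate of positive decisions a model makes across groups and flag it when the rates diverge. A minimal sketch with invented decisions (group labels and outcomes are made up for illustration):

```python
def selection_rates(decisions):
    """decisions: list of (group, outcome) pairs, outcome True/False.
    Returns the positive-decision rate for each group."""
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if outcome else 0)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in selection rate between any two groups;
    a large gap is a signal for human review, not proof of bias."""
    rates = selection_rates(decisions).values()
    return max(rates) - min(rates)

# Invented example: group B is approved far less often than group A.
decisions = ([("A", True)] * 8 + [("A", False)] * 2 +
             [("B", True)] * 3 + [("B", False)] * 7)
```

Here group A’s approval rate is 0.8 and group B’s is 0.3, a gap of 0.5 – exactly the kind of disparity that should trigger the human scrutiny described above rather than be accepted as fact.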
AI-led synthetic data also has an important role to play here, in feeding and training AI models. It can be used to identify pockets of the population who are under-represented – whether because the group size is smaller or because there may be gaps in their health, education and other official records. Generating a more diverse and balanced set of data could reduce the number of people who are marginalised and promote greater social equality.
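As a sketch of the rebalancing step (with simple duplication standing in for genuine synthetic generation, which would create new, varied records rather than copies):

```python
import random

def rebalance(records, group_of, rng):
    """Upsample each group (with replacement) to the size of the largest,
    so every group is equally represented in the training set."""
    groups = {}
    for r in records:
        groups.setdefault(group_of(r), []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Invented records: a large majority group and a small, under-represented one.
rng = random.Random(0)
records = ([("majority", i) for i in range(90)] +
           [("minority", i) for i in range(10)])
balanced = rebalance(records, group_of=lambda r: r[0], rng=rng)
```

After rebalancing, both groups contribute equally to training, so the model is no longer free to optimise for the majority at the minority’s expense.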
However, synthetic data is not without its limitations, particularly given the unpredictability of human behaviour. A hybrid approach, combining AI-driven synthetic data with human expertise and critical thinking, offers the optimal pathway forward. This balanced strategy leverages AI’s strengths while preserving essential human judgment and oversight.
Looking ahead
To fully harness AI’s potential, public-sector leaders must prioritise synthetic data. Establishing robust governance frameworks to ensure data privacy, security, and fairness will be essential. Synthetic data offers a practical and innovative way to accelerate AI adoption, ultimately transforming public services to become more responsive, efficient, and equitable.
The time to act is now. By strategically integrating synthetic data into AI initiatives, government departments can achieve transformative results, unlocking efficiencies without compromising privacy, accuracy, or public trust.