
Data quality becomes a “core justice issue” in digital policing era

Data quality and accountable AI are emerging as priorities in policing – yet gaps in governance and transparency risk undermining trust, warns Datactics CEO Stuart Harvey.

Posted 6 May 2026 by Christine Horton


As policing becomes increasingly dependent on digital evidence, the quality and governance of data are moving from technical concerns to matters of justice and public trust, according to Stuart Harvey, chief executive of Datactics.

“Data quality has become a core justice issue,” said Harvey, pointing to the growing reliance on digital records throughout investigations and prosecutions. “Investigative decisions now rely on digital records being accurate, consistently classified and traceable from source to courtroom.”

Poor-quality data, he warns, can have systemic consequences, such as distorting crime patterns, misdirecting police resources and even creating inequities between forces.

A key initiative addressing this challenge is the National Data Quality Improvement Service (NDQIS), which is working to standardise crime classification across all 43 police forces in England and Wales. The aim is to ensure “one set of trusted crime records” is delivered to the UK Home Office, improving consistency and transparency nationwide.
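As a rough illustration of what this kind of standardisation involves in practice, the sketch below checks incoming crime records against a shared code list and flags inconsistencies before they are passed on. The field names, offence codes and rules are invented for illustration; they are not NDQIS's actual schema or specification.

```python
# Hypothetical sketch: flag crime records whose classification does not match
# a shared national code list, so they can be corrected before submission.
# Field names and offence codes are illustrative only, not NDQIS's real schema.

APPROVED_CODES = {"BURG-RES", "BURG-BUS", "THEFT-VEH", "ASSAULT-ABH"}

records = [
    {"record_id": "A-001", "force": "Force A", "offence_code": "BURG-RES"},
    {"record_id": "B-014", "force": "Force B", "offence_code": "Burglary (dwelling)"},  # free text, not a code
    {"record_id": "C-207", "force": "Force C", "offence_code": ""},                      # missing classification
]

def validate(record: dict) -> list[str]:
    """Return a list of data-quality issues for a single crime record."""
    issues = []
    code = record.get("offence_code", "").strip()
    if not code:
        issues.append("missing offence code")
    elif code not in APPROVED_CODES:
        issues.append(f"non-standard offence code: {code!r}")
    return issues

for rec in records:
    problems = validate(rec)
    if problems:
        print(f"{rec['record_id']} ({rec['force']}): {'; '.join(problems)}")
```

Checks like this are mechanically simple; the harder part, as Harvey notes elsewhere, is agreeing the shared classifications and applying them consistently across every force.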

AI’s promise and its limits

AI is already playing a growing role in reviewing evidence and identifying crime patterns. According to Harvey, its greatest strength lies in handling scale.

“AI adds the most value where the volume of digital evidence is overwhelming human capacity,” he said, highlighting its ability to identify links across datasets and surface patterns such as organised exploitation or repeat offending.

This is particularly relevant in complex cases like county lines operations, where AI can help “pinpoint likely correlations and causations” in how crimes are recorded and classified.

However, Harvey cautions against over-reliance on AI. “The risks increase when AI is relied upon without further human review,” he said, noting that unvalidated outputs from large language models can introduce “confirmation bias or error at speed.”

For this reason, he stresses the importance of a “human-in-the-loop” approach, where AI supports, but doesn’t replace, investigative judgement.

Harvey also warned that when flawed data is processed through “black box” tools, the consequences can escalate quickly.

“Investigators may be unable to explain how evidence was identified or prioritised,” he said. That opacity creates disclosure challenges and can undermine the admissibility of evidence in court.

The inability to interrogate algorithmic decisions also weakens accountability and makes outcomes harder to challenge. “This lack of transparency can damage public confidence, even if the technology was intended to help,” he added.

Harvey advocates clearer boundaries for such technologies, including a formal governance charter defining acceptable uses of AI within the criminal justice system.

Tackling bias requires continuous oversight

Avoiding bias in AI-driven policing systems is not a one-off exercise, said Harvey, but an ongoing process.

“Bias mitigation must be continuous,” he said, with forces regularly testing outcomes across demographic and geographic groups to identify unintended disparities.

He also highlighted the importance of explainability and oversight measures, such as monitoring model drift and reviewing repositories of AI outcomes, to detect when systems deviate from expected norms.
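To make the idea concrete, here is a minimal sketch of the kind of recurring check Harvey describes: comparing an AI tool's flag rates across groups and against an earlier baseline to surface emerging disparities or drift. The group labels, rates and thresholds are hypothetical assumptions, not a description of any force's actual monitoring regime.

```python
# Hypothetical sketch: periodic fairness/drift check on an AI triage tool.
# Compares the rate at which cases are flagged across demographic groups,
# and against a previously recorded baseline, reporting large deviations.
# Groups, rates and thresholds are illustrative assumptions only.

baseline_flag_rates = {"group_a": 0.12, "group_b": 0.11, "group_c": 0.13}
current_flag_rates  = {"group_a": 0.12, "group_b": 0.19, "group_c": 0.13}

DISPARITY_RATIO = 1.25   # flag if one group's rate exceeds the lowest by 25%
DRIFT_DELTA     = 0.05   # flag if a group's rate moved more than 5 points

lowest = min(current_flag_rates.values())
for group, rate in current_flag_rates.items():
    if rate > lowest * DISPARITY_RATIO:
        print(f"Disparity: {group} flagged at {rate:.0%} vs lowest {lowest:.0%}")
    drift = abs(rate - baseline_flag_rates[group])
    if drift > DRIFT_DELTA:
        print(f"Drift: {group} moved {drift:.0%} since last review")
```

A disparity flagged by such a check is a prompt for human review, not proof of bias; the point is that the review happens regularly rather than once at deployment.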

Crucially, the quality of training data remains a foundational issue. “Historical records may reflect legacy inequalities,” said Harvey, meaning careful curation is essential to avoid embedding those biases in future systems.

Aligning AI governance with forensic standards

To ensure AI tools stand up to legal scrutiny, Harvey believes policing must adopt governance frameworks similar to those used in forensic science.

“AI governance in policing should mirror established forensic principles of validation, documentation and the ability to withstand challenge,” he said.

This includes maintaining clear records of how systems are trained, tested and approved, alongside robust audit trails showing how outputs are generated. Such transparency is essential for courts and oversight bodies to scrutinise evidence effectively.
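As an illustration of what such an audit trail could capture for a single AI-assisted decision, the sketch below records the model version, an input fingerprint and the human reviewer alongside the output, so the chain can later be reconstructed and challenged. The fields and values are assumptions made for illustration, not a prescribed standard.

```python
# Hypothetical sketch: append-only audit record for one AI-assisted decision,
# capturing enough context (model version, input hash, human reviewer) for the
# output to be explained and challenged later. Fields are illustrative only.
import hashlib, json
from datetime import datetime, timezone

def audit_entry(case_id: str, model_version: str, input_text: str,
                output_summary: str, reviewed_by: str) -> dict:
    return {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "output_summary": output_summary,
        "human_reviewer": reviewed_by,   # human-in-the-loop sign-off
    }

entry = audit_entry(
    case_id="CASE-2026-0417",
    model_version="evidence-triage-v1.3",
    input_text="...digital evidence bundle...",
    output_summary="12 items prioritised for review",
    reviewed_by="DS J. Smith",
)
print(json.dumps(entry, indent=2))
```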

National standards for accreditation and ethical assurance would also help prevent inconsistent adoption across forces, he added.

Strengthening prosecutions through better data practices

Failures in evidence handling remain a common cause of cases collapsing at the prosecution stage. Here, Harvey sees a direct link to data quality and governance.

“Cases often fail because evidence handling cannot demonstrate continuity, completeness or reliability,” he explained.

Improved practices around provenance, lineage and governance can ensure that decisions are “reliably, robustly and ethically” aligned with the evidence.

Within this framework, AI can play a supportive role, helping investigators identify relevant material more comprehensively and reducing the risk of missed evidence or late disclosures.

“Transparent analytical workflows allow prosecutors to explain how material was located and assessed,” said Harvey, making it easier for courts to trust the process.

Eliminate “shadow AI” and strengthen infrastructure

Looking ahead, Harvey outlines several steps for police leaders and government over the next 12–24 months.

Top of the list is eliminating “shadow AI” – the use of unapproved or uncontrolled AI tools.

“Police need to be able to guarantee that no outcome has been reached via the use of an unapproved or non-locked-down AI tool,” he said, describing it as a critical policy issue for both employees and third-party vendors.

He also calls for an assessment of existing technology infrastructure. “If it is in any way ungoverned, unmeasured, unknown, it cannot be used for AI,” he said.

Finally, Harvey urges caution against rushing into AI adoption at the expense of established data practices. “We’re not starting from scratch here,” he said, noting that police forces already have years of data strategy built on proven tools.

“Downward pressure to use AI… should be resisted at all costs with the bigger picture of strengthened faith in the criminal justice system in mind.”
