Editorial

Bridging the Gaps: A Holistic Approach to Government AI Strategy

With AI high on the Government’s agenda, Snowflake global field CTO Fawad Qureshi is calling for a radical reimagining of how government departments collect, share, and utilise data.

Posted 4 April 2025 by Christine Horton


In an era of complex societal challenges, government departments are increasingly looking to artificial intelligence (AI) as a transformative tool. However, creating a unified AI strategy is far more nuanced than simply implementing new technology, according to Fawad Qureshi, global field CTO at Snowflake.

“Modern society is like a connected organism,” explained Qureshi. “Whatever policy decision you make on one side has a ripple effect – what we, in technical terms, call externalities. We are especially on the lookout for negative externalities, which are unintended costs to society.”

This interconnectedness demands a radical rethinking of how government agencies approach data and AI. The fundamental challenge lies in breaking down the traditional silos that have long separated government departments.

“Today, even data collaboration within a single agency is challenging,” said Qureshi.

While some tactical data sharing exists (such as photo sharing between the Passport Office and DVLA), what’s needed is a more holistic, strategic approach to data integration. Central to this approach is developing a comprehensive data governance framework. This requires clear, unified policies on data sharing, security, and compliance that transcend individual departmental boundaries.

Creating common standards for data sharing

Qureshi advocates creating common standards that allow different agencies to connect and understand each other’s data. As an example, he points to the General Transit Feed Specification (GTFS), originally developed by Google. This open standard has been adopted by more than 10,000 public transport agencies worldwide, enabling them to share information seamlessly.

“We need a GTFS for the public sector, where agencies can connect and know exactly what to expect from each other’s data,” he said.
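To illustrate why a shared specification matters, here is a minimal Python sketch that reads the stops.txt file a GTFS feed provides. The file path is a placeholder, but the column names (stop_id, stop_name, stop_lat, stop_lon) come from the specification itself – which is precisely what lets any consumer parse any agency’s feed in the same way.

    import csv

    def load_stops(path):
        """Read a GTFS stops.txt file into a list of dicts.

        Because GTFS fixes the column names, the same function works for a
        feed published by any of the 10,000+ agencies that use the standard.
        """
        with open(path, newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f))

    # Hypothetical path to a downloaded feed; any agency's feed has the same shape.
    stops = load_stops("feed/stops.txt")
    for stop in stops[:3]:
        print(stop["stop_id"], stop["stop_name"], stop["stop_lat"], stop["stop_lon"])

A public-sector equivalent would play the same role: agree the schema once, and every department can consume every other department’s data without bespoke integration work.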

Transparency and explainability are crucial components of this strategy. With increasing regulatory requirements like GDPR’s ‘right to explanation,’ government AI systems must be designed with traceability in mind.

“Transparency leads to trust,” said Qureshi. “This needs to be the cornerstone of everything we do with data and AI.”

This means moving beyond black-box AI models that provide opaque outputs. Instead, organisations should prioritise AI techniques with complete traceability, building explainability into the design from the ground up. When data is consolidated in a single system, tracking its lineage becomes significantly easier.
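As a minimal sketch of what “explainability by design” can mean in practice, the example below uses scikit-learn with synthetic data standing in for a real case-handling dataset (the feature names are hypothetical). It favours a linear model whose per-feature weights can be read off and audited directly, rather than a black-box model whose outputs cannot be traced back to inputs.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    feature_names = ["income", "time_in_system_months", "prior_applications"]  # hypothetical
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    # Every prediction can be decomposed into named, auditable contributions.
    for name, weight in zip(feature_names, model.coef_[0]):
        print(f"{name}: {weight:+.2f}")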

Keeping humans in the loop

Critically, Qureshi stresses the importance of keeping humans in the loop, too. AI should augment, not replace, human decision-making. “We can’t let algorithms run by themselves,” he warned, highlighting the risk of what he calls an “algorithm prison” – where automated systems can trap individuals in cycles of disadvantage.
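One common way to keep a human in the loop is a confidence gate: the system may only act automatically when the model is highly confident, and everything else is routed to a case worker. The sketch below is purely illustrative – the threshold and the routing logic are placeholders set by policy, not a prescribed design.

    REVIEW_THRESHOLD = 0.90  # hypothetical; set by policy, not by the model

    def route_decision(case_id, score):
        """Route a model score either to automation or to a human reviewer."""
        if score >= REVIEW_THRESHOLD:
            return {"case": case_id, "action": "auto_approve", "reviewed_by": None}
        # Low-confidence (or high-impact) cases always go to a person.
        return {"case": case_id, "action": "refer_to_case_worker", "reviewed_by": "pending"}

    print(route_decision("A-1042", 0.97))
    print(route_decision("A-1043", 0.61))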

Addressing potential biases is paramount. “If the data is biased, the outcome will be biased,” said Qureshi.

This requires constant monitoring, testing, and a commitment to fairness across diverse populations. For a multicultural society like the UK, this means ensuring AI systems do not disproportionately impact specific ethnic or social groups.
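Monitoring for bias can start with something as simple as comparing outcome rates across groups. The sketch below computes per-group approval rates and the gap between them – a rough demographic-parity check; the group labels and records are invented for illustration.

    from collections import defaultdict

    # Hypothetical decision log: (group, approved)
    decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
                 ("group_b", 1), ("group_b", 0), ("group_b", 0)]

    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved

    rates = {g: approvals[g] / totals[g] for g in totals}
    print("approval rates:", rates)
    print("largest gap:", max(rates.values()) - min(rates.values()))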

AI “not a one-and-done solution”

Within government, measuring success goes beyond traditional return-on-investment metrics. Instead, the focus should be on “return on total social value” – understanding the indirect and sometimes invisible impacts of policy decisions across different societal domains.

Cybersecurity presents another critical consideration. With increasing cloud infrastructure and sophisticated threat actors, governments must adopt a zero trust security model, said Qureshi. This means trusting nobody and sharing information on a strict need-to-know basis.
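In code terms, “need to know” usually means deny-by-default authorisation checked on every request, rather than broad trust granted once at the network edge. The sketch below is a toy attribute check with made-up roles and datasets, not a real zero trust implementation.

    # Hypothetical grants: which roles may read which datasets, and nothing more.
    GRANTS = {
        ("caseworker", "benefits_claims"): "read",
        ("analyst", "benefits_claims_aggregated"): "read",
    }

    def authorise(role, dataset, action):
        """Deny by default; allow only an explicit, narrowly scoped grant."""
        return GRANTS.get((role, dataset)) == action

    print(authorise("caseworker", "benefits_claims", "read"))  # True
    print(authorise("caseworker", "tax_records", "read"))      # False: no implicit trust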

The emerging challenges of synthetic data and AI-generated content add another layer of complexity. Qureshi advocates for robust watermarking and labelling systems to ensure transparency and prevent misuse.
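Labelling generated content can begin with attaching a provenance record at the point of generation. The sketch below adds a plain label with a content hash; it illustrates labelling only, not a watermarking scheme, and the model identifier and field names are hypothetical.

    import hashlib, json
    from datetime import datetime, timezone

    def label_generated_text(text, model_name):
        """Attach a provenance record to AI-generated text at creation time."""
        return {
            "content": text,
            "provenance": {
                "generated_by": model_name,  # hypothetical model identifier
                "created_at": datetime.now(timezone.utc).isoformat(),
                "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
            },
        }

    print(json.dumps(label_generated_text("Draft summary...", "dept-llm-v1"), indent=2))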

Ultimately, creating a unified AI strategy is an ongoing process. “AI is not a one-and-done solution,” maintained Qureshi. Continuous monitoring, testing, and iteration are essential to prevent AI systems from becoming self-reinforcing and potentially harmful.

The path forward, then, requires a fundamental reimagining of how government departments collect, share, and utilise data. By prioritising interconnectedness, transparency, and human oversight, governments can harness AI’s potential to create more responsive, efficient, and equitable public services.

If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today?

