The UK public sector is focusing on transparency in a bid to demystify the algorithmic tools at work across government.

At the heart of this work is the Algorithmic Transparency Reporting Standard (ATRS), a framework for capturing information about algorithmic tools, including AI systems.
“We know that particularly when algorithmic tools are being used in ways that either interact directly with members of the public, or that support or inform important decisions about them, the public has a right and a need to know about that, and that’s really what this project is about,” said Sarah Handyside, algorithmic transparency lead at the Department for Science, Innovation and Technology (DSIT).
Handyside was speaking at the recent Think AI for Government event. She was joined by Department for Business and Trade (DBT) chief data officer Sian Thomas MBE and Amazon Web Services (AWS) data and AI strategy lead for UK public sector Deepak Shukla to discuss the importance of public sector transparency in AI tool usage.
Shukla highlighted the critical nature of this transparency. “There is significant scepticism around AI technology, especially in citizen-facing applications,” he said. “Transparency is the route to building trust.”
At its core, ATRS is a comprehensive spreadsheet requiring government departments to publicly document their algorithmic tools, purposes, and potential implications. Currently, 55 records have been published, with more in development. By compelling departments to articulate the precise role and context of their algorithmic tools, the ATRS encourages a deeper internal examination of technological deployments.
Thomas shared some insights into the process. DBT’s first published tool, ‘Find Exporters’, generated minimal public interest. However, a subsequent expense claims verification algorithm unexpectedly attracted significant media attention, highlighting the unpredictable nature of public engagement with algorithmic transparency, she said.
The challenges are substantial. Large operational departments face complex tasks in comprehensively documenting their technological ecosystems. Yet, the benefits are significant, said Handyside.
“If you are experiencing speculation about your tools, it is better to have control of the information you publish and point to a single source of truth,” she said.
The transparency drive isn’t just about current tools. The discussions suggested moving towards a ‘shift left’ approach, where algorithmic projects are assessed and registered before implementation, similar to clinical trial protocols. By identifying potential biases and challenges early, departments could refine their approaches before public deployment.
AWS’ Shukla noted the collaborative nature of this challenge. “We all have to come together,” he said. “Amazon alone cannot solve these problems, nor can any single organisation.”
The initiative also addresses critical ethical considerations. How do we ensure algorithmic tools serve public interests without unintended discriminatory consequences? Here, transparency becomes a mechanism for ongoing accountability.
Practical implementations are already showing promise. Tools like expense claim verification demonstrate how algorithmic assistance can support, not replace, human decision-making. Humans remain central to the process, with algorithms serving as sophisticated support mechanisms, said the panel.
“We are in the initial stages of the AI revolution,” said Shukla. “We need constructive ways of absorbing feedback and ensuring the services we create genuinely serve public interests.”
As Handyside noted: “By getting ahead of speculation, we can be clear about our technological tools and their purposes.”
