Autonomous decision-making is one of the best-known characteristics of artificial intelligence (AI) – and the one that people are probably most nervous about.

It’s one thing for streaming services or retailers to use AI to make hyper-personalised recommendations with minimal human intervention, but what about healthcare, tax or policing, where poor decisions could have serious consequences for individuals?
This is why agentic AI – a type of AI in which decisions drive automatic actions towards a defined goal – may feel particularly jarring in the public sector. Surely taking humans out of the loop would mean losing the critical, expert-led oversight that guides decisions, or sacrificing empathy and nuance in complex and emotive cases? Just one wrong decision could undermine public trust and raise questions about transparency, accountability and liability.
Yet agentic AI has huge potential in the public sector, both to cut costs and to deliver convenient, effective services. In fact, the government is already exploring how an agentic AI system could provide personalised support for young people choosing education and career options.
As the technology continues to advance, we could see agentic AI analysing large amounts of disparate and unstructured data to flag potential fraud in the benefits system earlier, or to surface cases of tax evasion that would otherwise go unnoticed. The advantage of agentic AI is that it can make decisions, and take action, far faster than any human could, which is critical in areas like fraud detection. Outside the public sector, there have already been cases of fraudulent transactions being caught by real-time payment-fraud detection systems operating almost fully autonomously – an approach known as ‘humans out of the loop’.
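To make that idea concrete, here is a minimal sketch of how a largely autonomous fraud screen might score incoming transactions against historical patterns. It is not a description of any real government or banking system: the features, thresholds and the choice of an IsolationForest anomaly detector are all illustrative assumptions.

```python
# Illustrative sketch only: an anomaly-based fraud screen that acts
# autonomously on high-confidence cases and defers borderline ones.
# Feature names, thresholds and the IsolationForest choice are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in for historic transaction features (e.g. amount, hour, payee novelty).
historic = rng.normal(size=(5000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(historic)

def screen(transaction: np.ndarray) -> str:
    """Return an action for one incoming transaction (a 3-feature row)."""
    # decision_function: lower scores mean more anomalous.
    score = model.decision_function(transaction.reshape(1, -1))[0]
    if score < -0.15:        # clearly anomalous: block in real time
        return "block"
    if score < 0.0:          # borderline: route to a human investigator
        return "refer_to_human"
    return "allow"           # consistent with normal historic behaviour

print(screen(rng.normal(size=3)))          # typical transaction: likely "allow"
print(screen(np.array([8.0, 8.0, 8.0])))   # extreme outlier: likely "block"
```

Note that even this ‘humans out of the loop’ sketch keeps a middle band of borderline cases that are deliberately handed back to a person, which leads to the next point.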
But even if you move towards more autonomous decision-making, human expertise and oversight aren’t lost. You still need ‘humans in the loop’ at every stage of the AI model lifecycle, from development to testing and validation, to deployment and monitoring, and during the decision-making process itself. This helps to ensure that outcomes are trustworthy and explainable, which in turn builds public trust.
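One way to keep humans in the loop at decision time is to grant autonomy only within explicit bounds and escalate everything else. The sketch below is a hypothetical illustration of that pattern; the confidence threshold, impact levels and queue are assumptions, not a prescribed design.

```python
# Minimal sketch of a 'human in the loop' gate at decision time.
# All thresholds and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    action: str
    confidence: float   # the model's self-reported confidence, 0..1
    impact: str         # "low" or "high" (e.g. benefits, tax, policing)

review_queue: list[Decision] = []

def execute_or_escalate(decision: Decision) -> str:
    # Autonomy is granted only for low-impact, high-confidence decisions;
    # everything else is held for expert human review.
    if decision.impact == "low" and decision.confidence >= 0.95:
        return f"auto-executed: {decision.action}"
    review_queue.append(decision)
    return "escalated to human reviewer"

print(execute_or_escalate(Decision("case-101", "send reminder", 0.99, "low")))
print(execute_or_escalate(Decision("case-102", "suspend payment", 0.97, "high")))
```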
Understanding the value of AI
At the moment, there are still a lot of questions in the public sector about how AI can be used. Many are struggling to see where the true value lies, where it’s appropriate and, more importantly, where it’s not. Nobody wants to be inadvertently caught out – but with the government determined to make the UK an AI superpower, as well as to reduce Civil Service running costs by 15 percent, departments will need to work closely with the tech industry to ensure teams have the skills, tools and oversight to implement AI in responsible yet innovative ways.
We won’t know all the answers straight away, which is why we must closely monitor the outcomes of the pilots currently taking place. There are also steps departments can take right now to ensure they’re ready to implement and scale AI. In my last piece for Think Digital, I discussed the importance of preparing your data: putting systems in place not only to capture historic data but also to organise and cleanse it so that it’s both useful and accurate. If you don’t get the data right, the decisions won’t be right either.
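In practice, that capture-organise-cleanse step often looks like a simple, repeatable pipeline. The fragment below is a hypothetical sketch using pandas; the column names and cleansing rules are assumptions chosen for illustration.

```python
# Illustrative data-cleansing step: de-duplicate records, normalise
# formats, and quarantine values that can't be parsed.
import pandas as pd

raw = pd.DataFrame({
    "case_id": [1, 1, 2, 3],
    "postcode": [" sw1a 1aa", "SW1A 1AA", None, "M1 2AB "],
    "amount": ["100", "100", "250", "bad"],
})

clean = (
    raw.drop_duplicates(subset="case_id", keep="first")  # remove duplicate cases
       .assign(
           postcode=lambda d: d["postcode"].str.strip().str.upper(),
           amount=lambda d: pd.to_numeric(d["amount"], errors="coerce"),
       )
       .dropna(subset=["amount"])   # drop rows whose amount couldn't be parsed
)
print(clean)
```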
Transparency and trust
There’s risk in any kind of decision-making, including decisions made by humans. We only have a certain amount of data available, and we’re only capable of processing so much of it. AI, on the other hand, can make decisions based on vast amounts of data from multiple sources, identifying patterns or anomalies far faster than the most experienced experts.
But just as any team has to be accountable for its decisions, so too do the people developing and using AI.
You have to be able to trace the lineage of a decision and demonstrate how and why it was made: who was involved, and what data flowed into it. Decisions need to be explainable at the right level for anyone, including members of the public, and those affected need recourse in the event of an unfair decision or where there are mitigating circumstances.
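One concrete way to support that traceability is to write an auditable record alongside every automated decision. The sketch below is a hypothetical illustration; the field names and appeal route are assumptions, not a standard.

```python
# Illustrative decision audit record supporting lineage and recourse.
# All field names and values are assumptions for illustration.
import json
from datetime import datetime, timezone

def record_decision(subject_id, outcome, model_version,
                    data_sources, reviewers, explanation):
    """Build an auditable record of one automated decision."""
    return {
        "subject_id": subject_id,
        "outcome": outcome,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which model made the call
        "data_sources": data_sources,     # what data flowed into it
        "reviewers": reviewers,           # who was involved
        "explanation": explanation,       # plain-language reason
        "appeal_route": "/appeals/new",   # recourse for the person affected
    }

entry = record_decision(
    subject_id="claim-4711",
    outcome="flagged for manual review",
    model_version="fraud-screen v2.3",
    data_sources=["payments-2024", "declared-income"],
    reviewers=["case-officer-17"],
    explanation="Spending pattern inconsistent with declared income.",
)
print(json.dumps(entry, indent=2))  # would be persisted to an append-only log
```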
Ultimately, both decision-makers and members of the public need to be able to trust the technology as much as possible – and know when professional expertise should override it. AI might be capable of analysing data at scale, but it lacks human empathy and creativity, including the judgement to decide which challenges are worth solving in the first place. With strong safeguards in place for both vendors and government departments, we can build public confidence and engagement in AI-enabled systems, and unlock their benefits.