Editorial

The future of Artificial Intelligence in government is trust-first or not at all

Artificial Intelligence could help government deliver faster, more equitable services. But without trust, it won’t work – and in some cases, it could do real harm. Dr Jonathan Sykes, global head of AI products at Caution Your Blast Ltd, shares how trust shaped the UK’s first public-facing AI service – and what that means for the future.

Posted 14 April 2025 by Christine Horton


Everyone wants to talk about AI. Few want to talk about trust.

AI is everywhere in government conversations. White papers. Roadmaps. Vision statements. But what’s often missing is trust – not as a legal checkbox or an ethics paragraph at the end, but as a design principle.

Because if people don’t trust what your AI is saying, or what it’s doing with their data, it’s not just a failed product – it’s a failed service.

No room for hallucinations – or assumptions

At the Foreign, Commonwealth & Development Office (FCDO), trust was non-negotiable. The FCDO receives more than 500,000 consular enquiries a year – many complex, time-sensitive, and emotionally charged. Advice needed to be accurate, consistent, and immediate.

There was no room for a chatbot that might guess. No hallucinated policies. No off-the-shelf tools that couldn’t stand up to scrutiny. The team at Caution Your Blast Ltd (CYB) had to design and build a system that was fast, transparent and, above all, dependable.

We developed the UK’s first live, public-facing AI service in government: a consular support tool that runs 24/7 on 215 embassy pages worldwide. It interprets a query, explains what it understands, and provides a templated, trusted answer approved by the FCDO. If it’s not right, the user can rephrase their question or escalate to a human. The system is clear, fast, and keeps the user in control.
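
To make that flow concrete, here is a minimal sketch in Python. Everything in it is hypothetical – the topic labels, the keyword matcher standing in for a real intent model, and the APPROVED_ANSWERS store – but it shows the pattern the service follows: classify the query, reflect the understanding back, answer only from pre-approved content, and always leave a route to a human.

    # Illustrative only: the topic labels, keyword matcher and answer
    # store below are invented for this sketch; they stand in for a
    # real intent model and a department-approved content store.

    APPROVED_ANSWERS = {
        "a lost or stolen passport": (
            "You can apply for an emergency travel document "
            "at your nearest embassy or consulate."
        ),
        "arrest or detention abroad": (
            "Consular staff can explain the local system "
            "and keep your family informed."
        ),
    }

    KEYWORDS = {
        "a lost or stolen passport": ["passport", "lost", "stolen"],
        "arrest or detention abroad": ["arrest", "arrested", "detained"],
    }

    def classify_query(query: str) -> str | None:
        """Naive keyword matcher, standing in for the real intent model."""
        text = query.lower()
        scores = {topic: sum(kw in text for kw in kws)
                  for topic, kws in KEYWORDS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else None

    def triage(query: str) -> dict:
        topic = classify_query(query)
        if topic is None:
            # No confident match: hand over rather than guess.
            return {"action": "escalate_to_human"}
        return {
            # Show the working: say what we think the user is asking.
            "understanding": f"It sounds like you're asking about {topic}.",
            # Reply only with content signed off by the department.
            "answer": APPROVED_ANSWERS[topic],
            # The user stays in control at every step.
            "options": ["confirm", "rephrase", "escalate_to_human"],
        }

    print(triage("I've lost my passport in Spain"))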

Trust starts with listening

A lot of people think the safest way to use AI in government is to hold it back. Label it ‘experimental’. Keep it vague. Hand off to a human as soon as it gets tricky.

But that’s not trust. That’s just avoidance.

We spoke to real users to understand why they struggled to self-serve. The issue wasn’t just about getting answers – it was about confidence. People weren’t sure if they were even asking the right question. They didn’t know whether what they’d read applied to them, or if they were in the right section of GOV.UK at all.

So we designed the system to show its working. It tells the user what it thinks they’re asking, and checks that understanding in plain English. That moment matters. It says: we see you. We heard you. And we’re responding to this.

You can’t automate trust – but you can build for it

The service wasn’t built to be clever. It was built to be trustworthy:

  • No hallucinations: we separated understanding from response, grounding every answer in pre-approved guidance
  • No data grabs: we stripped personal data and didn’t train on user interactions (a sketch of that scrubbing step follows this list)
  • No black boxes: we tested every pathway and aligned to the UK government’s Generative AI Framework and AI Playbook
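
The second of those principles lends itself to a small illustration. Here is a minimal sketch of the kind of scrubbing step ‘no data grabs’ implies – the patterns are simplified, hypothetical stand-ins, not the service’s actual redaction rules – stripping personal data before a query is ever stored or analysed:

    import re

    # Illustrative only: simplified stand-ins for redaction rules,
    # not the service's actual patterns.
    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[email]"),
        (re.compile(r"\b\d{9}\b"), "[passport-number]"),
        (re.compile(r"\+?\d[\d \-()]{8,}\d"), "[phone]"),
    ]

    def scrub(text: str) -> str:
        """Strip personal data before a query is logged or analysed."""
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    # Only the scrubbed form is ever stored, and nothing is fed
    # back into model training.
    print(scrub("My passport 123456789 was stolen; email me at jo@example.com"))
    # -> My passport [passport-number] was stolen; email me at [email]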

And we didn’t let the AI block people from getting human help. Vulnerable users – those in distress or danger – can still go straight to a person. The AI supports scale, but it never replaces compassion.

Because trust isn’t about what the system can do. It’s about knowing when it should step aside.

It’s already working

The FCDO AI service is now live worldwide. In just three months, it reduced written enquiries by 80 percent and calls by up to 50 percent. Over five years, it’s expected to save the FCDO millions – without compromising accuracy or safety.

Most importantly, it’s helping British nationals get support faster, more reliably, and with confidence that someone is looking out for them.

What comes next?

To make AI a trusted part of public services, we need more than ambition. We need to bake trust into everything – from system design to team processes.

That means:

  • Designing with people, not just for them
  • Prioritising safety over novelty
  • Choosing simplicity over scale
  • Being honest about what AI can and can’t do

The future of AI in government won’t be shaped by bigger models or flashier demos. It will be shaped by how well we serve the people who rely on us.

That’s why it’s encouraging to see leadership from DSIT and GDS on algorithmic transparency. The FCDO Consular Triage service is fully aligned – and its transparency record is publicly available.

The FCDO service shows that when AI is developed openly, tested carefully, and built with users in mind, trust isn’t just possible – it’s earned.

If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today?

