Editorial

AI and identity: Government at the next frontier

AI is rapidly reshaping the public sector, and identity remains a critical frontier. As the pace of innovation accelerates, how can the UK ensure its identity frameworks are both future-ready and resilient?

Posted 3 October 2025 by Christine Horton


Artificial intelligence (AI) is rapidly reshaping how governments think about identity. From defence to digital services, the questions remain the same: who – or what – can be trusted to access critical systems, and how should those identities be managed in a world increasingly dominated by non-human actors?

That was the central theme of a panel discussion at Think Digital Identity and Cybersecurity for Government, which featured Howard Tweedie, former strategy head for identity at the Ministry of Defence; Jonathan Neal, field CTO at Saviynt; and Ian Norton, digital identity advisor to the One Login programme.

From platforms to decisions

Tweedie opened by reflecting on lessons learned from military campaigns in Libya, Syria, Ukraine and beyond. Civilian networks and data now underpin military operations. “Platforms, tanks, trains, very high-cost elements are now being motivated by how we integrate them into sense, decide and act. We are moving from a platform-centric to an information-centric, to a decision-centric paradigm.”

That shift, he argued, has only accelerated since 2021 with the explosion of social media data and the integration of AI, machine learning and robotic process automation into operations.

Two sides of the coin

For Neal, AI and identity must be seen in two dimensions: AI for identity and identity for AI.

On the first, the benefits are clear: "AI is having a massive impact on how we improve efficiencies across our whole life cycle management process," he said. "We can reduce manual and mundane tasks, increase accuracy when making access decisions, and maintain compliance at scale."

But the harder challenge is the reverse: how to establish identity for AI agents themselves. “When we’re dealing with humans, there’s HR. We do background checks, security clearance, criminal records. But when it comes to AI agents or machine identities, there is no HR. And yet these entities are getting access to critical systems and data. We’ve got to treat them with the same robust, verifiable processes as human users.”
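
Neal's comparison suggests a concrete pattern: give a machine identity the same lifecycle a human joiner gets, with an accountable owner, a stated purpose, least-privilege scopes and an expiry date. The sketch below is illustrative only, assuming a hypothetical in-house registry rather than any vendor's actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical machine-identity record: the "HR file" a human user
# would normally have. All names and fields here are illustrative.
@dataclass
class MachineIdentity:
    agent_id: str
    owner: str              # accountable human or team
    purpose: str            # why this identity exists at all
    scopes: set             # least-privilege access grants
    expires_at: datetime    # no standing, permanent credentials
    attested: bool = False  # did a provenance check pass?

def verify_before_access(identity: MachineIdentity, requested_scope: str) -> bool:
    """Apply the same vetting to an AI agent that a human user would face."""
    now = datetime.now(timezone.utc)
    if not identity.attested:
        return False        # unknown provenance: deny
    if now >= identity.expires_at:
        return False        # credential aged out: force re-verification
    return requested_scope in identity.scopes  # least privilege

# Example: a reporting agent receives a 24-hour, read-only identity.
agent = MachineIdentity(
    agent_id="svc-report-bot-01",
    owner="finance-platform-team",
    purpose="nightly spend report",
    scopes={"reports:read"},
    expires_at=datetime.now(timezone.utc) + timedelta(hours=24),
    attested=True,
)
assert verify_before_access(agent, "reports:read")
assert not verify_before_access(agent, "email:send")
```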

Neal warned that this is no longer a future problem. “Already more than 60 percent of all internet traffic is machine-to-machine. In the enterprise, the ratio of non-human to human identities is about 45 to one – and rising. Anyone who thinks this is tomorrow’s problem is wrong. It’s now.”

Old questions, new tools

Norton urged colleagues to cut through the hype. “The fundamental problem hasn’t changed: who are you, can I trust you, and can I give you access?” he said. “The what hasn’t changed, but the how has. Identity used to mean turning up with a passport and getting eyeballed. Today it means digital evidence, verifiable credentials, decentralised identities.”
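
The verifiable credentials Norton mentions are signed, structured claims that a verifier can check without contacting the issuer each time. A minimal sketch of the shape such a credential takes, loosely following the W3C Verifiable Credentials data model, with invented values and the signature check elided:

```python
# Illustrative only: the rough shape of a W3C-style verifiable credential.
# Values are invented; a real credential is issued and signed by the issuer.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:gov-issuer",   # decentralised identifier (DID)
    "issuanceDate": "2025-10-03T00:00:00Z",
    "expirationDate": "2026-10-03T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder",
        "over18": True,                   # a claim, not the raw documents
    },
    "proof": {                            # cryptographic signature block
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:gov-issuer#key-1",
        "proofValue": "...",              # elided
    },
}

def read_claim(vc: dict, claim: str):
    """A verifier checks the proof, then reads only the claim it needs."""
    # Signature verification elided; a real verifier validates vc["proof"] first.
    return vc["credentialSubject"].get(claim)

print(read_claim(credential, "over18"))   # True: no passport, no eyeballing
```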

The real breakthrough, Norton argued, is the “identity of things”. “We’re no longer just managing people. We’ve got systems talking to systems, AI agents talking to AI agents, hundreds of thousands of them, and all of those IDs need to be managed. If we don’t, malicious actors will exploit them.”

He illustrated the risks with a cautionary tale. A productivity app using AI to manage emails worked well until version 16, when a rogue line of code instructed it to “open every single email”. “The AI just did it – no checks, no balances, no human saying, ‘hang on, that looks dodgy.’ That’s the issue. How do we govern identities and processes when the actors are machines that don’t know better?”
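
What was missing in that incident is a policy gate between the agent's intent and the action it takes. A minimal sketch of such a gate, with invented action scopes and thresholds, that blocks bulk behaviour unless a human approves:

```python
# Hypothetical guardrail: every agent action passes a policy gate first.
# The allowed actions and threshold are invented for illustration.
ALLOWED_ACTIONS = {"open_email", "summarise_email"}
BULK_THRESHOLD = 50   # more than this in one request looks "dodgy"

def policy_gate(action: str, count: int, approved_by_human: bool = False) -> bool:
    if action not in ALLOWED_ACTIONS:
        return False  # outside the agent's declared scope: deny
    if count > BULK_THRESHOLD and not approved_by_human:
        # The check the rogue version 16 never made: a human must say yes.
        raise PermissionError(
            f"{action} x{count} exceeds bulk threshold; human approval required"
        )
    return True

policy_gate("open_email", 3)              # routine request: allowed
try:
    policy_gate("open_email", 40_000)     # "open every single email"
except PermissionError as err:
    print(f"blocked: {err}")              # the missing 'hang on' moment
```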

Zero trust, ethical questions

For Tweedie, the answer lies in adopting zero trust, but also widening the pool of expertise. “It’s not just technologists anymore. You may need a social scientist on your team to say, is this right? These are no longer sharp-pencil problems in a back office. They’re ethical, legal, societal questions.”

In defence, that extends to whether AI can be trusted to make targeting decisions. “Do I prosecute a target automatically based on AI information, or does a human need to be in the loop? Those are no longer theoretical questions – they’re live.”

Clarity, outcomes and guardrails

Asked how government should respond, Norton urged pragmatism. “Be really clear on your use case. What outcomes do you want to achieve? Then put the right guardrails in place – standards, policy, technical controls – so you can innovate safely. But don’t get lost in the excitement. AI is still the wild west. There will be failures, unintended consequences. The only way forward is small steps, clarity of purpose, and continuous evaluation.”

Neal agreed, pointing to the need for observability. “Even with the best intentions, exceptions will happen. You need continuous discovery and validation of identities. And the only way to manage AI at scale is to use AI itself.”
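
In practice, that means routinely sweeping the identity inventory and flagging anything orphaned or overdue for revalidation. A minimal sketch, with a made-up inventory and an invented 90-day policy:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical identity inventory; in practice this would be pulled
# continuously from directories, cloud IAM and service meshes.
now = datetime.now(timezone.utc)
inventory = [
    {"id": "svc-etl-01", "owner": "data-team", "last_validated": now - timedelta(days=2)},
    {"id": "svc-legacy-bot", "owner": None, "last_validated": now - timedelta(days=400)},
    {"id": "agent-mail-16", "owner": "productivity", "last_validated": now - timedelta(days=95)},
]

MAX_VALIDATION_AGE = timedelta(days=90)   # invented policy threshold

def sweep(identities):
    """Flag identities that are orphaned or overdue for revalidation."""
    for ident in identities:
        if ident["owner"] is None:
            yield ident["id"], "orphaned: no accountable owner"
        elif now - ident["last_validated"] > MAX_VALIDATION_AGE:
            yield ident["id"], "stale: revalidation overdue"

for agent_id, reason in sweep(inventory):
    print(agent_id, "->", reason)
```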

The new identity frontier

All three panellists agreed that government must move quickly to adapt. “There are no borders around this stuff anymore,” Norton warned. “It’s not just government, not just a department. It’s cross-cutting, it’s global. We all use the same technologies, whether for Spotify or for defence. That means we all face the same risks.”

As Tweedie summed up: “AI gives us flexibility and speed – we can stand up an operation in days, not months. But the consequences are profound. Leaders must ask, do we really want this, and is it ethical? That’s the mirror AI is holding up to us now.”

If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today?

