Agentic AI is moving rapidly from research labs into operational environments. In a joint interview with Think Digital Partners, Tom Pepper, partner at Avella Security and security lead at the UK Government’s AI Security Institute, and Jay Bangle, CTO at TPXimpact, set out how autonomous systems are altering the national security and public trust landscape.

Pepper argues that the technology is already crossing that threshold.
“Agentic AI, autonomous systems capable of independent action, is no longer a distant research concept,” he said. “These AI agents are beginning to make decisions, execute tasks, and act without continuous human oversight. Early deployments already demonstrate the transformative potential of this technology, but as capabilities grow, so too does the need for careful governance, security, and oversight, particularly in critical national infrastructure and highly regulated environments.”
For Pepper, the implications extend far beyond efficiency gains or automation targets. The core issue is scale, and the societal consequences if systems fail.
“The societal stakes are high. While agentic AI promises to boost productivity, improve public services, and enhance societal resilience, misuse, over-reliance, or operational failures could magnify human error, spread disinformation, or disrupt essential services. The question is simple, but profound: what will it take for agentic AI to be safely adopted at scale?”
Bangle notes how autonomy fundamentally reshapes the risk landscape itself. Moving from tools that assist humans to systems that act independently changes both the speed and nature of decision-making inside government.
“Agentic AI marks a real shift in how risk shows up at a national level. We’re moving away from systems that primarily support human decision-making toward systems that can act with a degree of independence. That changes the problem space quite severely,” he said.
That shift, he added, creates new pressures around accountability and public trust, especially when systems are embedded in critical services.
“Risks are no longer just about how people might misuse a tool. It is about speed, scale, and transparency. An autonomous system can take many actions in a short period of time, adapt its behaviour as it goes, and interact with other systems in ways that are difficult to fully anticipate. When something goes wrong, it can be genuinely hard to reconstruct what happened, why a particular decision was taken, or where responsibility sits. For governments, that uncertainty quickly becomes a trust issue as much as a technical one, particularly when critical infrastructure or public services are involved.”
Resilience beyond cybersecurity
Both argue that resilience must now be the organising principle. But resilience in this context goes beyond traditional cybersecurity or uptime metrics.
If you liked this content…
“When we talk about AI resilience, we are talking about something broader than traditional cyber or digital resilience,” said Bangle. “It is about whether AI systems can be operated safely even when they behave unexpectedly, are stressed, or are deliberately influenced in subtle ways.”
He draws a distinction between defending systems against intrusion and ensuring systems behave predictably under pressure.
“Cyber resilience tends to focus on breaches, outages, and recovery. Digital resilience is often about continuity and uptime. AI resilience adds a behavioural layer. It asks whether systems stay within acceptable bounds, whether they fail safely, whether humans can step in quickly, and whether their behaviour can be understood after the event.”
That behavioural layer introduces a new class of vulnerability, added Bangle.
“Just as cybersecurity has to deal with social engineering of people, we now face the social engineering of systems. In many cases, the primary interface is language itself, which creates new and less obvious ways to manipulate outcomes.”
Autonomy increases systemic exposure
Meanwhile, Pepper warns that autonomy increases systemic exposure if not tightly controlled.
“Resilience in the context of agentic AI is about harnessing its potential while limiting its blast radius. These systems are capable of autonomous optimisation at unprecedented scale, but that same autonomy introduces new systemic risks. Shadow agentic AI and unsanctioned systems deployed by employees further complicate the landscape, operating outside governance structures and increasing the likelihood of cascading failures.”
In practice, he said, resilience cannot rely on perimeter defences alone.
“Building resilience, therefore, requires more than perimeter security. It demands technical guardrails, cultural awareness, and robust governance frameworks that recognise autonomous behaviour as a critical risk.”








