The UK risks being outpaced in the next phase of AI unless it looks beyond simply building larger models, according to new research published under embargo today by the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS).

The report argues that future breakthroughs in so-called “frontier AI” are unlikely to come from scale alone. Instead, they will emerge from a mix of complementary techniques, hybrid systems and specialised models – requiring a more agile and anticipatory approach from government.
For digital and national security leaders, the organisation’s message is to prepare for a more complex and fragmented AI landscape, or risk losing strategic advantage.
Beyond the scaling paradigm
While recent advances in AI have largely been driven by increasing compute power and data, the research identifies 15 emerging approaches that could shape the next generation of systems. These span new model architectures, alternative training methods and novel hardware pathways.
Crucially, the report finds that future gains are likely to come from combining these approaches—particularly in areas such as post-training, tool use and system design—rather than relying on a single breakthrough.
However, access to large-scale compute will remain a key differentiator between nations.
The authors warn that the dominance of current methods may obscure the potential of newer approaches in the short term, especially as existing systems continue to be heavily optimised.
The report frames AI development as entering a more uncertain phase, where multiple technological paths could evolve in parallel. It highlights the growing importance of “strategic agility”—governments’ ability to monitor, assess and pivot quickly in response to emerging capabilities.
A key factor will be the automation of AI research itself, which could dramatically increase the number of research avenues pursued simultaneously and accelerate breakthroughs.
National security implications
From a national security perspective, the shift could reshape global power dynamics. The report assesses how each of the 15 approaches could address current limitations in performance, reliability, adaptability and originality – factors that underpin both economic and military applications.
It concludes that countries able to anticipate and exploit a broader range of AI developments will be better positioned to maintain influence.
Four priorities for government
To respond, CETaS sets out a series of recommendations aimed at strengthening the UK’s preparedness:
- Enhance anticipatory capabilities by monitoring emerging paradigms and creating a standing, cross-disciplinary AI assessment function for national security
- Build a deeper skills base, spanning large-scale model engineering, transferable technical expertise and niche specialisms
- Invest in enabling infrastructure, including secure sandboxes for testing advanced AI systems and shared access to hardware testbeds
- Accelerate adoption at scale, particularly in high-value sectors, while reducing barriers to public sector deployment
The report also calls for more secure information-sharing channels between government, frontier AI labs and trusted partners.
‘Prepare now or lose advantage’
Ardi Janjeva, senior research associate at CETaS, said the UK cannot afford to be reactive.
“An AI paradigm shift could rapidly alter economic power and security dynamics, so it is critical that the UK anticipates and prepares for the multiple eventualities outlined in this report,” he said.
“If we wait for impacts to materialise, we lose our ability to shape outcomes and thus our strategic advantage.”