Editorial

UK urged to develop AI crisis response strategy to counter disinformation

Ministers are being urged to develop an AI-specific crisis response strategy after a new report warned that AI is increasingly being weaponised following major incidents to spread disinformation, incite violence and undermine democratic stability.

Posted 11 February 2026 by Christine Horton


The UK Government is being urged to develop a dedicated, AI-specific crisis response strategy to address the growing threat posed by AI-enabled disinformation following major crisis events.

A new report from the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS) warns that AI tools are increasingly being weaponised in the aftermath of incidents such as terror attacks and national security crises to spread conspiracy theories, incite violence and undermine democratic stability.

The researchers argue that current government crisis response frameworks are not equipped to deal with the speed, scale and sophistication of AI-driven information threats. They call for clear protocols that define how government should respond when AI is used to manipulate public understanding during fast-moving crises.

Central to the recommendations is the creation of monitoring indicators and severity thresholds that would allow government to assess when AI information threats require escalation, alongside formalised data-sharing processes with AI companies to enable rapid intervention.
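The report stops short of prescribing a particular scoring scheme, but the general shape of indicator-plus-threshold escalation can be sketched in code. In the illustrative Python below, the indicator names, weights and severity bands are assumptions made for this example, not anything specified by CETaS:

```python
# Illustrative sketch only: indicator names, weights and thresholds are
# hypothetical assumptions, not taken from the CETaS report.
from dataclasses import dataclass


@dataclass
class IndicatorReading:
    name: str      # e.g. "synthetic_media_volume", "bot_amplification_rate"
    value: float   # normalised 0.0-1.0 score from monitoring tooling
    weight: float  # relative importance assigned by analysts


# Hypothetical severity bands mapping a combined score to an escalation tier.
SEVERITY_THRESHOLDS = [
    (0.8, "CRITICAL: escalate to crisis response team and notify AI companies"),
    (0.5, "ELEVATED: increase monitoring cadence and prepare public messaging"),
    (0.0, "ROUTINE: continue baseline monitoring"),
]


def assess_severity(readings: list[IndicatorReading]) -> str:
    """Combine indicator readings into a weighted score and map it to a tier."""
    total_weight = sum(r.weight for r in readings)
    score = sum(r.value * r.weight for r in readings) / total_weight
    for threshold, action in SEVERITY_THRESHOLDS:
        if score >= threshold:
            return action
    return SEVERITY_THRESHOLDS[-1][1]


# Example: a spike in synthetic media and bot amplification after an incident.
readings = [
    IndicatorReading("synthetic_media_volume", 0.95, 2.0),
    IndicatorReading("bot_amplification_rate", 0.80, 1.5),
    IndicatorReading("narrative_spread_velocity", 0.70, 1.0),
]
print(assess_severity(readings))  # weighted score ~0.84 -> CRITICAL tier
```

The point of pre-agreed thresholds is that escalation decisions during a fast-moving crisis follow a defined protocol rather than ad-hoc judgment, which is the gap in current frameworks the researchers identify.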

AI accelerating harm after crisis events

The report examines how the growing prevalence of AI chatbots and content generation tools across the digital ecosystem has changed the information environment during crisis events. Through a review of existing research, interviews with 25 experts across government, industry and academia, and an AI-driven security incident simulation, the study assessed how AI information threats can evolve in real time.

According to the findings, AI tools are now being used to create, curate and amplify harmful content following crises, increasing the persuasiveness and reach of misleading narratives. Since July 2024, the researchers have identified at least 15 major international crisis events in which AI information threats played a role, including the Southport murders in July 2024 and the Bondi Beach terrorist attack in December 2025.

In these cases, AI-enabled tactics included deepfakes designed to promote false narratives, data poisoning attacks aimed at corrupting the sources used to train AI models, and AI-powered bot networks that mimic human behaviour to influence public opinion.

The report highlights that misleading AI-generated content has, in some instances, complicated law enforcement responses, fuelled harmful conspiracy theories and encouraged real-world violence. This activity has been linked to both domestic actors and coordinated hostile foreign networks.

Role for regulators and industry

While the risks are significant, the researchers stress that AI tools could also play a constructive role in future crisis response. This includes detecting and removing harmful content before it spreads widely, and using chatbots to amplify accurate, authoritative information.
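As a rough illustration of the detection side, the Python sketch below triages posts by a harm score weighted for potential reach. The keyword heuristic, thresholds and routing labels are placeholder assumptions for this example, not tooling described in the report:

```python
# Minimal sketch, not production moderation tooling: the keyword scorer and
# thresholds below are placeholder assumptions used only for illustration.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    reach: int  # projected audience size, e.g. follower count


# Placeholder risk markers; a real system would use a trained classifier.
HIGH_RISK_MARKERS = ("fabricated", "staged", "crisis actors", "cover-up")


def score_harm_probability(text: str) -> float:
    """Crude stand-in for a model: fraction of risk markers present."""
    hits = sum(marker in text.lower() for marker in HIGH_RISK_MARKERS)
    return hits / len(HIGH_RISK_MARKERS)


def triage(post: Post, remove_at: float = 0.75, review_at: float = 0.4) -> str:
    """Route a post before it spreads widely, weighting risk by reach."""
    risk = score_harm_probability(post.text)
    if post.reach > 100_000:
        risk = min(1.0, risk * 1.5)  # high-reach accounts get stricter handling
    if risk >= remove_at:
        return "remove_and_log"
    if risk >= review_at:
        return "human_review"
    return "allow"


post = Post("p1", "The footage is staged by crisis actors - a cover-up.", 250_000)
print(triage(post))  # base risk 0.75, boosted for reach -> "remove_and_log"
```

A real deployment would swap the keyword stub for a trained classifier and send borderline cases to human moderators; the aim either way is the "before it spreads widely" intervention the researchers describe.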

Beyond government, the report calls on industry and regulators to act. It recommends that AI companies improve transparency around chatbot limitations during live crises, including the use of prominent warnings when users query unfolding events. It also urges the tech sector to strengthen incident response mechanisms, with the Frontier Model Forum establishing new channels for sharing threat intelligence.

The report further recommends that Ofcom examine the financial incentives behind AI-enabled disinformation as part of its upcoming consultation on fraudulent advertising.

Preparing for future threats

With further incidents likely, the researchers argue that sustained monitoring and information-sharing will be critical. Future research will explore how terrorists may use chatbots to support attack planning, how AI could support debunking efforts during crises, and how to counter AI data poisoning attacks.

“Crisis events are unpredictable and volatile scenarios,” said Sam Stockwell, Senior Research Associate at CETaS. “Combined with a poorly understood AI threat landscape, this means that we are not currently equipped to deal with this growing threat to public safety. Yet while we need to address the critical risks associated with AI tools in this context, we must also recognise that the same technology can help to strengthen democratic resilience in times of crisis.”

