Editorial

The rising risk of insider threats to the public sector in the AI era

Findlay Whitelaw, senior solutions engineer at Exabeam, shows how insider threats are becoming harder to manage as AI reshapes the cybersecurity landscape in the public sector.

Posted 13 March 2025 by Christine Horton


The public sector holds a vast network of sensitive personal and government data, making it a prime target not only for external cyberattacks but also for hard-to-detect insider threats. While this is not a new challenge, the rapid growth of AI is elevating the risk, transforming insider threats into a significant and evolving concern.

Traditional insider threats involve activities from individuals or systems with legitimate access to an organisation’s network, applications, databases, premises, and data. These threats can originate from malicious, accidental, negligent, or compromised insiders. Insider threats are a pressing issue within the public sector, accounting for 30 percent of all breaches in this space, according to the 2024 Data Breach Investigations Report, and the challenge is set to escalate as AI plays a growing role in these attacks.

The rapid growth of AI is not only transforming digital operations; it is reshaping the threat landscape. With the rise of AI, insider threats now include AI systems themselves. An AI insider threat can be an AI-enabled human actor, defined as “an individual who misuses AI-powered tools, algorithms, or automation to conduct malicious, deceptive, or harmful activities within an organisation”.

In addition, entity threats exist from autonomous AI systems, defined as “an AI system that operates autonomously, makes independent decisions, or is manipulated to make harmful decisions, whether through unintended bias or malicious reprogramming”.

AI and the evolution of insider threats

Whether intentional or accidental, insider threats pose a significant risk to the public sector, ranging from large-scale data breaches to financial losses and reputational damage. Unlike external threats, insider threat actors often have trusted credentials and legitimate access, allowing them to challenge or bypass traditional security controls. This complexity is heightened when AI is involved, as AI-driven activities can mimic legitimate actions or exploit system vulnerabilities. 

The evolution of traditional insider threats with AI-enabled tactics is creating more scalable, sophisticated, and harder-to-detect threats. AI’s increasing role in cyberthreats can be described as four levels of rising risk:

  • Level 1: AI-Augmented Basic Attacks – AI now supports traditional and basic cyberattacks like phishing and password cracking.
  • Level 2: AI-Enhanced Social Engineering – AI deepfakes and hyper-personalised deception techniques impersonate existing employees to bypass security measures, or to manipulate and deceive other employees into granting access to sensitive information.
  • Level 3: AI-Driven System Exploits – AI autonomously identifies and exploits system vulnerabilities.
  • Level 4: Agentic AI and Autonomous Decision Manipulation – agentic AI systems, which act independently without human intervention, make malicious decisions on their own or are manipulated into making them.

At the same time, AI-powered tools like ChatGPT are becoming more widely used within work environments, with 79 percent of UK employees now using generative AI to help them in the workplace, according to a 2024 Forbes Advisor poll. Clear policies for AI are critical for responsible and ethical use, particularly as AI systems become more integrated into decision-making, security, and business operations. Well-defined policies help address concerns like bias, transparency, accountability, and data privacy. They also provide a framework for organisations to navigate regulatory requirements, mitigate risks, and align AI adoption with organisational goals.

Understanding how AI both amplifies and mitigates insider threats is crucial to protecting the public sector. Without robust governance, organisations risk AI-driven breaches, bias and regulatory non-compliance. 

Proactively protecting the public sector

To address these growing threats, the public sector needs a multi-faceted approach, combining advanced technology, strong governance, and a security-first culture. 

A core element of building resilience involves exploring advanced technologies that leverage AI and machine learning (ML) to discover abnormal and risky behaviours that traditional tools miss. A well-rounded security strategy against AI insider threats consists of several key factors: 

  • Deploying AI-Based Defences – User and entity behaviour analytics (UEBA) enables real-time monitoring and anomaly detection, offering insights into abnormal and risky behaviours. UEBA tools deliver proactive risk scoring and correlated threat indicators to prioritise high-risk events; a simplified sketch of this scoring idea follows this list.
  • Leveraging Large Language Models (LLMs) – Embedding LLMs in security operations is a powerful way to gain guided recommendations against insider threats. LLMs can map attack chains and timelines faster, streamlining manual effort in incident response; the second sketch below illustrates the idea.
  • Fostering a Security-First Culture – Implementing AI-specific insider threat training focused on synthetic media, deepfakes, and social engineering tactics is vital for insider threat defence. Running simulated attacks and teaching employees how to use OpenAI models like ChatGPT responsibly, including validating their outputs, can help mitigate insider threats by building a culture of awareness.
  • Strengthening Governance and Controls – Defending against AI-powered insiders involves establishing clear policies for AI deployment that emphasise transparency and accountability while addressing ethical, data protection, and regulatory requirements. Regularly reviewing and auditing AI systems is essential to prevent manipulation and model drift.
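
To make the UEBA point concrete, the sketch below shows the baseline-and-deviation idea behind behavioural risk scoring. It is a minimal, hypothetical Python illustration rather than any vendor’s implementation; real UEBA platforms correlate many more signals (peer groups, time of day, asset sensitivity) with far richer models.

    import statistics
    from dataclasses import dataclass

    @dataclass
    class UserBaseline:
        """A user's normal activity, summarised from historical daily event counts."""
        mean: float
        stdev: float

    def build_baseline(daily_counts: list[int]) -> UserBaseline:
        return UserBaseline(
            mean=statistics.mean(daily_counts),
            stdev=statistics.stdev(daily_counts) if len(daily_counts) > 1 else 1.0,
        )

    def risk_score(baseline: UserBaseline, observed: int) -> float:
        # Distance from normal in standard deviations, mapped onto 0-100;
        # four or more standard deviations from baseline saturates at 100.
        z = abs(observed - baseline.mean) / max(baseline.stdev, 1e-6)
        return min(z / 4.0, 1.0) * 100

    # A caseworker who normally touches ~20 records a day suddenly pulls 400.
    history = [18, 22, 19, 25, 21, 17, 23]
    print(f"Risk score: {risk_score(build_baseline(history), 400):.1f}")  # ~100, flag for review

Scores like this, correlated across many indicators, are what allow analysts to triage the riskiest users and entities first.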
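
The second sketch shows how an LLM might be embedded in incident response to reconstruct an attack timeline from correlated alerts. The model name, prompt, and alert format are assumptions for illustration only, and sensitive incident data should be routed solely through a model deployment your organisation has approved and governs.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Illustrative alerts, e.g. as produced by UEBA correlation.
    alerts = [
        "09:02 user jdoe logged in from an unrecognised device",
        "09:14 jdoe downloaded 400 case records (baseline: ~20/day)",
        "09:21 jdoe emailed a 250 MB archive to a personal address",
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute your approved deployment
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Reconstruct a likely attack "
                        "timeline from these alerts and suggest response steps."},
            {"role": "user", "content": "\n".join(alerts)},
        ],
    )
    print(response.choices[0].message.content)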

Building resilience, defending data

As AI technologies continue to advance, maintaining awareness of the threat landscape and adapting security strategies is essential. The public sector must adopt a proactive approach, integrating AI defences while remaining vigilant against AI-powered insider threats.  

By combining technology, governance, and culture, the public sector can protect critical data, safeguard public trust, and build resilience against the next generation of insider threats.

