
Two-thirds of UK firms lack visibility over what staff share with AI, research finds

Despite heavy investment in AI skills and governance, many large UK organisations admit they cannot track whether employees are sharing sensitive information through AI tools, raising security and compliance concerns.

Posted 28 April 2026 by Christine Horton


More than two-thirds of large UK organisations do not know whether employees are sharing company information through approved AI platforms, according to new research from SailPoint, exposing what it describes as a major governance blind spot as workplace AI adoption accelerates.

The study found that 67 percent of UK organisations cannot verify whether staff are sharing information through the business's own secure AI systems or gated large language models. The findings suggest many businesses lack oversight of how internal AI tools are being used, even where formal platforms are already in place.

The issue comes as employees increasingly use generative AI tools such as OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini to improve productivity. When these tools are used outside approved corporate systems – often referred to as “shadow AI” – staff may unknowingly upload confidential files or sensitive data into external models.

Investment fails to close visibility gap

The research also found that 82 percent of organisations have invested in additional staff or skills to manage AI and data, while 41 percent have hired dedicated AI or analytics management roles. However, 45 percent of IT decision-makers said they still need greater visibility into where information is being shared and how it is being used.

Concerns are also growing around autonomous AI systems. Four in five organisations (80 percent) said their AI agents had already performed unintended actions, such as accessing or sharing inappropriate data. Meanwhile, 12 percent of UK firms reported adding as many as 10,000 AI agents or machine identities each month, increasing pressure on already stretched security teams.

Risks extend beyond AI tools

The lack of control is not limited to AI systems. More than a third (35 percent) of organisations said employees are also sharing files through third-party collaboration tools that may fall outside approved governance frameworks, adding to compliance and security risks.

Mark McClain, chief executive and founder of SailPoint, said organisations risk losing control of sensitive data if they fail to improve oversight.

“AI tools can enhance productivity, but they also create serious risk when they operate outside an organisation’s visibility and governance,” he said. “When sensitive information is entered into unapproved models, it can be exposed, mishandled, or even amplified through errors and hallucinations.”

He added that businesses need real-time insight into who – or what – is accessing company data, from which devices, and where information is being shared.

The research was conducted by Censuswide in January 2026 on behalf of SailPoint. It surveyed 333 IT decision-makers at UK organisations across multiple sectors, each employing 7,500 or more people.


If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today?

