Editorial

Governments need to be mindful of shadow AI risk

Shadow AI poses risks to governments’ data security, legal compliance, and the integrity of their operations, says Kolekti founder Adam Wignall.

Posted 18 December 2024 by Christine Horton


Shadow AI, where employees use generative AI (gen AI) tools like ChatGPT without proper oversight, is a growing concern across government.

That’s according to Adam Wignall, GM and founder of remote working specialist Kolekti.

He pointed to a recent horizon-scanning report that identified areas in health, transport, education and more where AI could offer societal benefits, and which estimated that AI-powered innovation could deliver a potential £550 billion in economic value to the UK economy by 2035. However, the report also warned that the speed of AI development was outstripping the pace of regulation.

“Shadow AI also poses risks to governments’ data security, legal compliance, and the integrity of their operations. This not only jeopardises the safeguarding of critical data but also exposes governments to legal liabilities and reputational harm,” said Wignall.

“Additionally, shadow AI can lead to unreliable data handling and processing. For governments, which rely heavily on data accuracy and consistency for policy-making and service delivery, discrepancies like this can undermine public trust and operational effectiveness. The absence of standard practices for gen AI use also makes it difficult for governments to display how their processes are compliant with regulatory standards. Governments need to manage these risks while harnessing the benefits of gen AI.”

Bridging the gap between innovation, governance and security

Research shows that 78 percent of knowledge workers use their own AI tools to complete work, yet 52 percent don’t disclose this to employers. This poses risks like data breaches, compliance violations, and security threats.

Kolekti recently launched Narus, a gen AI platform designed to enable and accelerate safe AI adoption. Wignall said Narus bridges the gap between the need for innovation and the demands of governance and security.

“By providing a platform with robust administrative controls, Narus ensures that AI usage is secure and compliant. We want to ensure Narus can integrate into existing workflows, and help users get the most from gen AI while having that reassurance,” he said.

“Additionally, Narus offers a great experience for staff – especially those familiar with gen AI tools – with features such as being able to send a prompt to multiple LLMs (large language models) at once, helping teams compare results and guard against the issues gen AI content can sometimes surface.”
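To make the multi-model idea concrete, the sketch below shows the general “fan-out” pattern Wignall describes: the same prompt is sent to several LLM providers concurrently and the answers are collected side by side for comparison. It is illustrative only; the provider functions are hypothetical stand-ins, not Narus’s API or any real vendor SDK.

    # Generic sketch of sending one prompt to multiple LLMs in parallel.
    # The provider functions are hypothetical placeholders, not real vendor calls.
    from concurrent.futures import ThreadPoolExecutor


    def ask_model_a(prompt: str) -> str:
        # Placeholder: in practice this would call one vendor's chat/completions API.
        return f"[model-a answer to: {prompt}]"


    def ask_model_b(prompt: str) -> str:
        # Placeholder: a second vendor's API would be called here.
        return f"[model-b answer to: {prompt}]"


    def fan_out(prompt: str, providers: dict) -> dict:
        """Send the same prompt to every provider in parallel; return name -> answer."""
        with ThreadPoolExecutor(max_workers=len(providers)) as pool:
            futures = {name: pool.submit(fn, prompt) for name, fn in providers.items()}
            return {name: future.result() for name, future in futures.items()}


    if __name__ == "__main__":
        answers = fan_out(
            "Summarise the key risks of shadow AI for a government department.",
            {"model-a": ask_model_a, "model-b": ask_model_b},
        )
        for name, answer in answers.items():
            print(f"{name}: {answer}")

A thread pool suffices here because the calls are I/O-bound; in a governed deployment, the administrative controls Wignall mentions would sit in front of a function like fan_out, logging prompts and enforcing policy before any model is called.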


If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today?

