The rapid evolution of generative AI is a polarising issue, especially in the context of business use. Many arguments against the use of generative AI centre around workforce reductions as a result of AI-driven automation, data source bias and privacy concerns. The latter is a prevalent concern within the public sector due to the huge amount of personal and sensitive data organisations handle on a daily basis.
Despite these concerns, AI adoption is surging: a recent Deloitte survey reveals that more than four million people have used generative AI for work, with or without formal permission. In addition, many UK public sector organisations are already exploiting the benefits of AI to automate processes and speed decision making.
It’s a trend that is difficult to ignore, as are the inevitable cost-saving and efficiency benefits. But without proper governance, the security risk of sharing sensitive data with large language models (LLMs) is significant and should be mitigated just as it is for other data-sharing workflows, such as email.
The similarities: Generative AI and email risks
When it comes to sharing sensitive information, the risks associated with generative AI are remarkably similar to those of an existing workflow that has become integral to every workplace: sending out emails containing sensitive data.
Both generative AI and email workflows involve human interaction, which introduces the potential for mistakes. An employee may inadvertently paste sensitive data into a generative AI tool, or send an email to a mistyped email address. In both scenarios, the data is at risk of being leaked, accessed by unauthorised individuals, or in the case of generative AI, even used to train the public model.
Similar risks call for similar data protections
Generative AI opens up a host of opportunities to enhance public-facing services and internal processes, just as sharing data securely via email improves communication speed, enhances collaboration and improves productivity. Both workflows equally pose risks to data security, which can be effectively mitigated by implementing proper measures, including:
- Employee Training: Educate your employees about the risks associated with sharing sensitive information externally with third parties via generative AI or email workflows. Instruct your employees not to disclose sensitive information to any generative AI tool. Similarly, make sure to educate employees on proper email security practices, such as leveraging encryption and double-checking recipients prior to sending sensitive information. This also requires employees to understand exactly what data is considered sensitive, such as Personally Identifiable Information (PII). Failing to clearly communicate this to employees could ultimately result in a breach or a violation of privacy regulations, such as GDPR.
- Upstream Data Discovery, Classification, and Tagging: It is critical to invest in proper data discovery to know what data you have and where it is located. Once discovery is done, you can then classify your data as sensitive or not sensitive. And, once data has been classified, you can apply tags or labels to identify (attribute) the data that is most sensitive. Collectively, these upstream data governance efforts will enable your organisation to adopt downstream data security controls so you can define and enforce policy and minimise leakage via channels like generative AI and email.
- Downstream Security Controls: Once your data has been tagged, it has attributes, and with attributes you can automatically apply policies to control who can access it, how it can be used, and for how long — even after it’s been shared with others via email workflows. You can implement similar controls for generative AI use cases, whereby filters act as guardrails and automatically prevent sensitive data from being shared.
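To make the classification and guardrail ideas above concrete, here is a minimal sketch of how a prompt filter might sit in front of an external LLM. The pattern names and regexes are illustrative assumptions only — a production deployment would rely on a dedicated data classification service rather than hand-rolled patterns:

```python
import re

# Hypothetical patterns for a few common PII types (illustrative only;
# real classification tooling covers far more categories and edge cases).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # UK National Insurance number, e.g. AB123456C
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    # UK phone number in a simple national or +44 form
    "phone": re.compile(r"\b(?:\+44\s?|0)\d{10}\b"),
}


def classify(text: str) -> set[str]:
    """Return the set of sensitivity tags detected in the text."""
    return {tag for tag, pattern in PII_PATTERNS.items() if pattern.search(text)}


def guard_prompt(prompt: str) -> str:
    """Block a prompt before it reaches an external LLM if it carries PII tags."""
    tags = classify(prompt)
    if tags:
        raise ValueError(
            f"Prompt blocked: contains sensitive data ({', '.join(sorted(tags))})"
        )
    return prompt
```

The same `classify` step could equally run on outbound email bodies, so one upstream tagging effort feeds guardrails across both workflows.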
By adopting these practices and leveraging the right tools, public sector organisations can confidently leverage the benefits of generative AI.