Editorial

Data privacy concerns and the need for AI regulation 

Dana Simberkoff, chief risk, privacy and information security officer at AvePoint, shares the three components she believes are necessary to use AI safely.

Posted 25 July 2024 by Christine Horton


Over the past few years, we’ve all seen firsthand the transformative power of artificial intelligence (AI) technology. At AvePoint, we’ve been piloting our own AI tools and Microsoft Copilot – and so far, we’ve seen a significant impact across many departments in the organisation. While AI has the potential to revolutionise the way we live and work, it also poses significant privacy risks for both consumers and businesses – some of which have been addressed, others swept aside – particularly when it comes to exposing personal and sensitive data.

That’s why organisations need to invest in information management and data governance, and in developing policies and education that protect against misuse and risky data leaks. But the onus does not fall solely on organisations. Governments are responsible too, yet regulation has not kept pace with the adoption of AI. In this article, I have outlined the three components I believe are necessary for safe AI usage, so that we can keep using it for good.

Information management and data governance are key

To ensure the safe and responsible adoption of AI, it is crucial that organisations implement proper data management and governance strategies. Many organisations lack archiving policies and data lifecycle controls, which can lead to data breaches and other security risks. In fact, 66 percent of executives believe they are below average in managing the information lifecycle, governing data properly and ensuring its compliance. On top of that, when implementing AI, 45 percent of organisations encountered unintended data exposure – incidents that, in many cases, could have been prevented if proper data management had been in place.
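To make the idea of a data lifecycle control a little more concrete, below is a minimal sketch in Python of what an automated retention check might look like. It is purely illustrative: the classifications, retention periods and function names are hypothetical examples of mine, not a description of AvePoint’s products or of any specific regulation.

from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per data classification
# (illustrative values only, not drawn from any regulation or product).
RETENTION_PERIODS = {
    "public": timedelta(days=365 * 7),
    "internal": timedelta(days=365 * 3),
    "personal": timedelta(days=365),  # personal and sensitive data kept shortest
}

def records_due_for_archiving(records, now=None):
    """Return the records whose age exceeds the retention period for
    their classification.

    Each record is a dict with 'id', 'classification' and 'created_at'
    (a timezone-aware datetime). Unknown classifications fall back to
    the strictest (shortest) period, so nothing lingers unmanaged.
    """
    now = now or datetime.now(timezone.utc)
    strictest = min(RETENTION_PERIODS.values())
    due = []
    for record in records:
        limit = RETENTION_PERIODS.get(record["classification"], strictest)
        if now - record["created_at"] > limit:
            due.append(record)
    return due

In practice, a check like this would feed an archiving or deletion workflow on a schedule, so that records containing personal data are retired automatically rather than left to accumulate.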

Furthermore, the rise of generative AI models, such as those from OpenAI and Google, raises ethical and legal concerns. These models can create realistic and misleading content from large amounts of data scraped from the internet, without obtaining user consent or respecting data rights. And fewer than half of organisations have an AI Acceptable Use Policy, meaning their employees are not being educated on how to use this technology properly. Organisations must invest in better information management and training to mitigate security risks and control outcomes.

Organisations need teams dedicated to AI policies

Despite widespread use, fewer than half of organisations feel they can safely use AI. Contributing to this is the fact that only 34 percent of organisations currently have a formal group or board to advise on generative AI-related risks, according to Deloitte. But as this technology grows, organisations should prioritise creating sub-committees focused on the ethical and safe use of AI within their workplace. As AI continues to evolve, it is important for all stakeholders to engage in ongoing dialogue and collaboration to address emerging challenges and opportunities. By working together, we can harness the power of AI to drive innovation and progress, while also safeguarding the privacy and security of individuals.

To make AI ethical and responsible, we need both good policies and good data. Organisations should have policies that hold them accountable for how they use data and technology, and those policies should guide the creation of AI programmes that follow ethical and societal norms. AI can serve everyone’s benefit, but only if we are careful and responsible with data and technology.

Regulation is required in the AI era

To protect privacy, we also need regulatory frameworks and data governance policies – and 78 percent of organisations agree that the widespread proliferation of generative AI tools and applications will require more government regulation of AI, according to Deloitte. Further, nearly three quarters of organisations believe there is not enough global collaboration when it comes to ensuring the responsible development of AI-powered systems. Clearly, there is room for improvement in global collaboration on how to develop and use AI safely.

The European Union’s AI Act is a pioneering example of a comprehensive AI law that aims to regulate the risks and benefits of AI technology. The United States and other countries should follow suit and establish clear and enforceable standards for data privacy and transparency, but as of right now, the EU is leading the pack. Today, AI development has outpaced regulation, but in the next 1-3 years, I believe rules will become more prominent and effective. They may cover AI data and intellectual property, and they may vary by country, state or territory, but they are required if we are to keep moving forward with this transformative technology.

In conclusion, AI has the potential to transform our world, but we must be mindful of the privacy risks it poses. By implementing proper data management strategies, providing education and training, and establishing regulatory frameworks, we can ensure the safe and responsible adoption of AI technology.

If you are interested in this article, why not register to attend our Think Digital Government conference, where digital leaders tackle the most pressing issues facing government today?

