From Tay, Microsoft’s infamous AI-powered Twitter chatbot, which began spouting racist responses within 24 hours of its release, to Amazon’s machine-learning recruitment tool that taught itself to discriminate against women, examples of artificial intelligence going rogue are plentiful.
Concerns about opaque black-box algorithms, as well as questions about the ethical use of personal data and responsibilities for security and privacy, have made AI a hotbed of modern ethical problems.
These problems must be addressed by public and private organisations that are now relying on AI to power innovation. Despite the prevalence of AI in the enterprise, many organisations still lack strong AI governance, which is critical to ensuring the integrity and security of data-driven systems.
Indeed, according to O’Reilly’s research, more than half of the AI products in production at global organisations still lack a governance plan defining how projects are created, measured, and monitored.
Concerningly, privacy and security – issues that may directly impact individuals – were among the risks that organisations cited least when asked how they evaluate the risks of AI applications. AI-enabled organisations report that the most significant risk to AI projects is ‘unexpected outcomes’, followed closely by model interpretability and model degradation – risks that represent business issues. Privacy, fairness, safety, and security were all ranked below these business concerns.
There may be AI applications in which privacy and fairness are not concerns (for example, an embedded system that decides whether the dishes in your dishwasher are clean). But companies that use AI must treat its human impact as both an ethical imperative and a core business priority.
As UKRI (UK Research and Innovation) highlights, ‘responsible use of AI is proving to be a competitive differentiator and key success factor for the adoption of AI technologies. However, cultural challenges, and particularly the lack of trust, are still deemed to be the main obstacles preventing broader and faster adoption of AI.’
Lack of governance is not just an ethical concern. Security is also a massive issue: AI is subject to a number of risks unique to it, including data poisoning, malicious inputs crafted to generate false predictions, and reverse engineering of models to expose private information. Yet security remains close to the bottom of the list of perceived AI risks.
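To make the first of those risks concrete, here is a minimal sketch of a label-flipping attack – a simple form of data poisoning – built on scikit-learn. The dataset, model, and 30 per cent poisoning rate are illustrative assumptions, not details from any real incident or from the research cited here.

```python
# Hypothetical sketch: label-flipping data poisoning against a simple
# classifier. All data here is synthetic and the attack is deliberately crude.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned copy: an attacker silently flips 30% of the training labels.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"clean accuracy:    {clean.score(X_test, y_test):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.3f}")
```

Even an attack this crude measurably degrades accuracy; subtler poisoning can bias specific predictions while leaving aggregate metrics looking healthy, which is why governance routines need to cover the provenance of training data.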
With cybercriminals and bad actors surging ahead in their adoption of sophisticated technology, cybersecurity cannot take a back seat in the race to realise AI’s promise; it is a vital strand of much-needed AI governance. Governance must climb the list of risk priorities for AI projects and become a cornerstone of any development and deployment programme.
What is AI governance?
With that in mind, what exactly is AI governance? According to Deloitte, it encompasses a ‘wide spectrum of capabilities focused on driving the responsible use of AI. It combines traditional governance constructs (policy, accountability, etc.) with differential ones such as ethics review, bias testing, and surveillance. The definition comes down to an operational view of AI and has three components: data, technique/algorithm, and business context.’
In summary, ‘achieving widespread use of AI requires effective governance of AI through active management of AI risks and implementation of enabling standards and routines.’
Without formalising AI governance, organisations are less likely to know when models become stale, results are biased, or data is improperly collected. Companies developing AI systems without stringent governance to tackle these issues are risking their businesses. They leave the way open for AI to effectively take control, with unpredictable results that could do irreparable damage to reputation and lead to large legal judgments.
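As a rough illustration of what one such governance routine might look like, the sketch below flags a stale model by comparing a feature’s training distribution with live data using the Population Stability Index (PSI). The data, helper function, and threshold are hypothetical, though PSI above 0.25 is a common rule of thumb for significant drift.

```python
# Hypothetical sketch of a model-staleness check: compute the Population
# Stability Index (PSI) between a feature's training distribution and the
# same feature observed in live traffic.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Widen the outer edges so live values outside the training range count.
    edges[0], edges[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero for empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training = rng.normal(0.0, 1.0, 10_000)  # distribution the model was trained on
live = rng.normal(0.4, 1.2, 10_000)      # live traffic that has drifted

score = psi(training, live)
print(f"PSI = {score:.3f} -> {'retrain or review' if score > 0.25 else 'stable'}")
```

In practice, a governance plan would run checks like this on a schedule across all model inputs and outputs, with results logged and escalations defined in advance.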
The least of these risks is that legislation will eventually impose governance, forcing those who have not been practising AI governance to catch up. In today’s rapidly shifting regulatory landscape, playing catch-up is a risk to reputation and business resilience.
The failure of AI governance
The reasons for the failure of AI governance are complex and interconnected. However, one thing is certain: the rapid development and adoption of AI has not been matched by education and awareness of its dangers. In short, AI has a people problem.
For example, one of the most significant bottlenecks to AI adoption is the scarcity of skilled workers. Our research reveals significant skill gaps in key technological areas such as ML modelling and data science, data engineering, and business use case maintenance. The AI skills gap has been well documented, and there has been much government discussion and policy aimed at building data skills through focused tertiary education, upskilling, and reskilling.
However, technological ability alone is insufficient to bridge the gap between innovation and governance, and it is neither prudent nor equitable to leave governance to technical talent alone. Those with the skills to develop AI must, of course, have the knowledge and values to make decisions and solve problems within the context in which they operate. But AI governance is truly a collaborative effort that brings an organisation’s values to life.
As a result, no organisation can afford to be complacent when it comes to incorporating ethics and security into AI projects from the start. That means that everyone in the organisation, from the CEO to the data analyst, the CIO to the project manager, must participate in AI governance. They must agree on why these issues are important and how to address them.
A strategy of this type begins with empowerment via education, awareness, and role-specific training. When it comes to artificial intelligence, vigilance is a holistic skill that everyone must master. Frameworks, principles, and policies serve as the foundation for sound innovation, but they are useless unless people are engaged, educated, and empowered to bring them to life.
Rachel Roumeliotis is VP of Data and AI at O’Reilly.