Editorial

OECD leads push for international ethics of AI standards

42 countries sign up to plan to support an AI ‘global governance framework’ – though China refuses to play

Posted 30 May 2019 by Gary Flood


42 nations – most of them OECD members, plus several others – have signed up to the first global attempt to set ethical principles for AI (Artificial Intelligence).

Last week the OECD adopted what it describes as its Principles on Artificial Intelligence, which it claims to be “the first international standards agreed by governments for the responsible stewardship of trustworthy AI”.

The Principles come out of the work of a 50-plus member expert group on AI made up of representatives of 20 governments as well as leaders from the business, labour, civil society, academic and science communities. The experts’ proposals were taken on by the OECD, and in turn developed into the OECD AI Principles.

It’s worth noting that OECD Recommendations are not legally binding, but they are highly influential: they have often formed the basis of international standards and helped governments design national legislation.

For example, the OECD Privacy Guidelines, adopted in 1980, which state that there should be limits to the collection of personal data, underlie many privacy laws and frameworks in the United States, Europe and Asia, the organisation points out.

In any case, the idea is straightforward enough: promote AI that is both innovative and trustworthy, and that respects human rights and democratic values.

The five principles are:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them
  • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed
  • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

The OECD Principles on AI include “concrete” recommendations for public policy and strategy, while their general scope ensures they can be applied to AI developments around the world, the organisation adds.

The OECD is also planning to launch a policy observatory later in the year to promote the “beneficial use of AI”. But while OECD members such as the US, UK and Japan were happy to sign up, as were non-members such as Brazil and Romania, China – which is investing heavily in AI – has not.

The Organisation for Economic Co-operation and Development (OECD) is a forum where the governments of 34 democracies with market economies work with each other, as well as with more than 70 non-member economies, to promote economic growth, prosperity and sustainable development.


If you are interested in this article, why not register to attend our Think Digital Government conference, where digital leaders tackle the most pressing issues facing government today.