The government will have to play a very active role in policing some tricky issues around the ethics and proper governance of Artificial Intelligence (AI) as it becomes more mainstream over the next few years.
Even more mainstream, it seems – a new study says AI is already widely used not just in the private sector, in areas like search engines, translation and speech recognition, but in the public sector too, in the shape of things like machine learning. Indeed, through the work of the Government Data Programme its use is growing, providing insights into everything from digital service delivery to agricultural land use through the analysis of satellite images.
The problem is securing public trust in the use of such technologies, which hold the promise of better-informed policy decisions through quick access to relevant information, vastly reduced fraud and error, and the ability to make government decisions more transparent.
UK public sector bodies using AI, then, have to be ready to answer questions about how it was used in a policy context, and be transparent about its role in decision making.
They will also need to understand how relevant legislation, like the Data Protection Act and the EU’s General Data Protection Regulation, applies in cases where machines process personal data, possibly without intention, and where it may not be clear whether consent has been obtained.
The ideas come in the shape of an interesting new report from the Government Office for Science – Artificial intelligence: opportunities and implications for the future of decision making.
The study’s authors set out to answer three questions: what is artificial intelligence, and how is it being used? What benefits is it likely to bring for society and for government? And how should any resulting ethical and legal risks be managed?
Along the way, the 21-page study shows how central getting that public trust will have to be, if AI’s potential is to be fully realised:
“Public trust is a vital condition for artificial intelligence to be used productively. Trust is underpinned by trustworthiness. But whilst this can be difficult to demonstrate in complex technical areas like artificial intelligence, it can be engendered from consistency of outcome, clarity of accountability, and straightforward routes of challenge,” says the study, which was co-written by Britain’s Chief Scientific Adviser, Sir Mark Walport.
The Government Office for Science works to ensure that government policies and decisions are informed by the best scientific evidence and strategic long-term thinking.