Editorial

‘If citizens don’t see your AI as legitimate, you’re going to have problems,’ warns think tank

Take risks, but strive to prove to citizens that these systems are there to help them, suggests Boston Consulting Group spin-off CPI.

Posted 17 October 2018 by Gary Flood


Government use of artificial intelligence (AI) will fail to help public services do a better job for people unless citizens genuinely trust that it is there to help them.

The warning comes from think tank the Centre for Public Impact (CPI), an offshoot of analysts Boston Consulting Group, which has just published a report, ‘How governments can secure legitimacy for their AI systems’.

In it, CPI Programme Associate Margot Gagliani argues that, given how often AI technology is over-hyped and the “public anxiety over the moral and ethical issues it raises”, AI must possess legitimacy if it is to be a valuable tool for government and citizens: the deep well of support that governments need in order to achieve positive public impact.

“While AI can already automate well-defined, repeatable tasks and augment human decision-making, governments ought to be very circumspect over its future direction,” she goes on.

“As AI expands into more sensitive and contentious domains, citizens are beginning to worry about the implications of such a far-reaching technology.”

To reassure citizens, Gagliani suggests a special five-point action plan:

  • Understand and empathise with the real needs of end-users
  • Focus on specific and doable tasks
  • Build AI literacy in the organisation and the public
  • Keep maintaining and improving AI systems
  • Design for and embrace extended scrutiny

Governments should also consult data scientists and AI developers about any investment in new infrastructure, so that it is compatible with their existing systems, databases, and AI workflows, says CPI. At the same time, the resulting technical infrastructure has to promote transparency, so that citizens and communities can access the data and reasoning of any AI systems that may affect their lives, and raise any concerns about their correctness and objectivity.

“In defining the processes and timescales for technology procurement and deployment, governments must not focus on risk management at the expense of experimentation,” the group concludes.

“Only by grasping the emerging opportunities for innovation can they make a fruitful and legitimate use of AI on behalf of their citizens.”

The group released its findings at the Tallinn Digital Summit in Estonia this week.