Voters like the idea of AI – but not if it means they might lose their jobs, says Boston Consulting Group

Citizens like the idea of using AI for tasks such as transport and traffic optimisation, predictive maintenance of public infrastructure and customer service activities – but they are far less keen on the technology being used for sensitive decisions in criminal justice, such as sentencing recommendations

Posted 13 March 2019 by Gary Flood

According to Boston Consulting Group (BCG), the global public likes the idea of using AI (Artificial Intelligence) to better manage transport and optimise traffic, predictively maintain public infrastructure, and support customer service activities.

What they don’t want: AI being used for sensitive decisions associated with the justice system, such as parole board and sentencing recommendations – and they are seriously bothered by potential ethical issues, as well as by a lack of transparency in decision making.

They also fear AI’s potential to increase automation and the resulting effect on employment.

The findings come in a new report from the respected advisory group, The Citizen’s Perspective on the Use of AI in Government, which was published at the start of the month.

Its team was attempting to gain insights into citizens’ attitudes towards, and perceptions of, the use of AI in government, which it did by surveying over 14,000 Internet users around the world as part of its biannual Digital Government Benchmarking study, asking respondents:

  • how comfortable they are with certain decisions being made by a computer rather than a human being
  • what concerns they have about the use of AI by governments
  • how concerned they are about the impact of AI on the economy and jobs.

And what it found: “Citizens generally feel positive about government use of AI, but the level of support varies widely by use case, and many remain hesitant.”

For example, voters who answered its pollsters’ questions expressed a positive net perception of all 13 potential use cases covered in the survey, bar decision making in the justice system: 51% of respondents disagreed with using AI to determine innocence or guilt in a criminal trial, and 46% disagreed with its use for making parole decisions.

When asked about potential concerns around the use of AI by governments, 32% of citizens expressed concern that significant ethical issues had not yet been resolved, and 25% were concerned about the potential for bias and discrimination. The other major concerns were the perceived lack of transparency in decision making (31%), the capability of the public sector to use AI (27%), and the accuracy of the results and analysis (25%).

The level of support is high, however, for using AI in many core government decision-making processes, says the study, such as tax and welfare administration, fraud and noncompliance monitoring, and, to a lesser extent, immigration and visa processing. Strong support emerged for less sensitive decisions such as traffic and transport optimisation.

Also well supported was the use of AI for the predictive maintenance of public infrastructure and equipment such as roads, trains, and buses – and support was strong for using AI in customer service channels, such as for virtual assistants, avatars, and virtual and augmented reality.

Other highlights:

  • people in emerging markets tend to be more positive about government use of AI
  • support for government use of AI correlates moderately with trust in government
  • younger citizens and city dwellers expressed the least worry when asked: “What concerns you most about the use of AI by governments?”

Finally, citizens are very concerned about the impact of AI on jobs. When asked about the implications of AI for the economy and society, citizens expressed significant concerns about the availability of work in the future (61% agree), the need to regulate AI to protect jobs (58% agree), and the potential impact of AI on jobs (54% agree), says BCG.

To address these issues, the study says governments should run pilots, but only ones that involve the public: “When identifying use cases that will deliver the greatest benefit from experimentation, governments will need to balance the difficulty of implementation with the benefits, including the potential impact for citizens, the reusability and applicability of a use case to other needs, and the opportunity to reduce costs and free up resources for other uses. Governments should also consider how to involve citizens in these pilots.”

And as for the Rise of the Robots, “Unless governments address fears of potential job insecurity and general uncertainty—through public dialogue and policies that provide a safety net for those most affected—these perceived threats could create a significant barrier to the development of AI.”

“Transparency into where and how AI will be used in government will be essential to establishing the legitimacy of the technology in citizens’ eyes and to mitigate their concerns about any negative effects it might have on their lives,” the study concludes.