Editorial

Maybe ‘No AI’ is better than ‘Bad AI’?

Respected American AI watchers the AI Now Institute come down pretty hard on the patchy success of the tech in everyday use

Posted 26 September 2018 by Gary Flood


The US government is finding that using AI (artificial intelligence) can be good for bureaucrats – but maybe less good for actual citizens.

Common failings: poorly designed algorithms that tend to discriminate against service users, a lack of transparency around how they arrive at decisions, and a lack of capability at the civil service end that can exacerbate these issues.

The findings come from a new report by the AI Now Institute, a respected New York University-based research hub focused on the social impact of advanced technologies like Machine Learning (ML).

Its study – Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems – reflects the findings of a number of case studies of ML in the US public sector that have thrown up some troubling questions.

In one case, disabled recipients of the US’s Medicaid programme in Arkansas were illegitimately thrown off the scheme, while in another, teachers in Texas were fired on the basis of software assessments, for reasons that were not clear to observers.

The problem seems to come down to incomplete or poorly designed training sets, says the Institute, or the simple adoption by one state of algorithms trained on very different data from what its own case really required.

There is also the issue of vendors of such systems either refusing to explain how their systems derived their decisions, or becoming so central to the decision loop that public servants cannot operate without their help, a situation that throws up accountability and transparency issues, the authors suggest.

“As evidence about the risks and harms of these systems grows, we’re hopeful we’ll see greater support for assessing and overseeing the use of algorithms in government,” the report concludes.

Go here to read the full report: