Policymakers should harness data to deliver public services that are responsive, efficient and fair, urges research recently published in Nature.
The piece argues that while businesses are harnessing, with Artificial Intelligence (AI), the vast amounts of data we all generate every day, “governments have been slow to apply [the technology] to hone their policies and services”.
That’s puzzling, as the data governments collect about citizens could, in theory, be used to tailor education to the needs of each child, or to fit health care to the genetics and lifestyle of each patient, the article argues – as well as to help predict and prevent traffic deaths, street crime or the need to take children into care, among other social benefits.
The main problem is that the public sector is not great at working with transactional data, contend the researchers, who draw on forward-looking AI work at The Alan Turing Institute in London:
“Policymaking processes were designed in very different times. Governments rely on custom-built data, collected through national statistical offices or surveys. They have no tradition of using transactional data about people’s actual behaviour to improve policy or services.”
But now, governments’ interactions with citizens generate trails of digital data – “vehicle-licensing authorities have databases containing information about our cars, how often we get stopped by the police, how many accidents we have, whether we pay our road taxes on time and when we obtained (or lost) our driving licences”, to take just one example.
Flip that model, say the authors, and AI could harness data about citizens’ behaviour to enable government in three ways: “First, personalised public services can be developed and adapted to individual circumstances. Second, AI [could enable] governments to make forecasts that are more accurate, helping them to plan [and finally] AI can also be used to target health and safety inspections rather than using randomisation.”
More controversially, it goes on, forecasts can be applied to individuals. Machine-learning algorithms might pinpoint which children are likely to drop out of school, or be deemed at risk, on the basis of data about their previous interactions with public-sector agencies, for instance – an ability that would “enable authorities to target scarce resources”.
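Concretely, the kind of targeting described above can be sketched in a few lines. This is a purely illustrative toy, not the method from the article: the feature names, weights and case records below are all invented, and a real system would learn its weights from historical data rather than have them hand-set.

```python
import math

# Hypothetical weights a trained model might have learned
# (invented for illustration, not from any real agency system).
WEIGHTS = {"unexcused_absences": 0.08, "school_moves": 0.45, "referrals": 0.30}
BIAS = -2.0

def dropout_risk(record):
    """Logistic risk score in (0, 1) from a child's interaction history."""
    z = BIAS + sum(WEIGHTS[k] * record[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# Invented example cases: rank them so scarce resources go to the top.
cases = {
    "A": {"unexcused_absences": 30, "school_moves": 2, "referrals": 1},
    "B": {"unexcused_absences": 2,  "school_moves": 0, "referrals": 0},
    "C": {"unexcused_absences": 15, "school_moves": 3, "referrals": 2},
}
ranked = sorted(cases, key=lambda c: dropout_risk(cases[c]), reverse=True)
print(ranked)  # → ['A', 'C', 'B']
```

The point of the sketch is the last line: the model’s output is a ranking, which is exactly what lets authorities “target scarce resources” at the highest-risk cases first.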
And even more intriguingly, government could simulate future scenarios in great detail: the article claims the Bank of England is modelling the UK housing market and simulating the effects of policy measures aimed at mitigating financial risk, for example.
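The Bank of England’s actual model is far richer, but the flavour of such policy simulation can be conveyed with a toy agent-based sketch. Everything here is invented for illustration – the household numbers, incomes, the crude price-adjustment rule and the loan-to-income figures – and is not drawn from the article or the Bank’s work:

```python
import random

random.seed(0)

def simulate_housing(lti_cap, years=10, n=200):
    """Toy agent-based sketch: households bid for homes up to a
    policy-imposed loan-to-income cap; a tighter cap damps prices."""
    price = 100.0
    for _ in range(years):
        incomes = [random.gauss(30, 5) for _ in range(n)]
        bids = [inc * lti_cap for inc in incomes]  # cap limits each bid
        # Average affordable bid acts as a crude demand signal.
        demand = sum(b for b in bids if b >= price) / n
        price *= 1 + 0.1 * (demand / price - 0.5)
    return price

loose = simulate_housing(lti_cap=6.0)   # permissive lending rules
tight = simulate_housing(lti_cap=3.5)   # stricter macroprudential cap
print(tight < loose)  # → True: the tighter cap ends with lower prices
```

Crude as it is, the sketch shows the shape of the exercise the article describes: run the same simulated market under different policy settings and compare the outcomes before imposing a rule on the real economy.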
The piece doesn’t shy away from discussing many of the serious ethical and practical issues that getting to this state would raise – but it does argue that “the pay-offs for policymakers using data science and AI go well beyond cutting costs and making government more citizen-focused [as] the biases revealed by machine-learning technologies have existed for centuries in governance systems.
“By laying them bare, data-intensive technologies could offer a way to overcome them. We hold some technologies to a higher standard than we do humans — we expect driverless cars to be safer than those driven by people, for example. As a society, we might accept less bias in a system of government that uses AI. In this way, a data-driven government might actually be more fair, transparent and responsive than the human face of officialdom.”