Turing researchers tackle over-reliance on blind trust in digital IDs

New framework to introduce measures for ethical considerations surrounding identity systems

Posted 28 May 2021

The Alan Turing Institute is introducing a new framework that will consider whether digital identity systems warrant the trust of the public.

The framework is part of a new publication, Facets of Trustworthiness in Digital Identity Systems, a Technical Briefing from the Trustworthy Digital Infrastructure for Identity Systems project.

Often when people speak of trust in the context of identity systems, the focus is on the technical security of the systems and data. This briefing introduces a framework that sets ethical considerations, such as fairness, explainability and the impact of design choices on individual rights of access, alongside more established technical criteria. It also covers recent technical developments, including the significant influence of machine learning in shaping these systems.

The Institute says the framework will be developed as a resource for and in consultation with governments, humanitarian organisations and industry stakeholders that are advancing digital identity systems. It will provide “a mechanism for determining whether systems warrant being trusted by the people and organisations that are increasingly relying on them”, it says.

Digital identity in modern society

The framework is being published in response to the growing use of digital identity in modern society, and in particular the role of governments in advancing digital identity programmes.

Examples span all economic settings, ranging from initiatives to improve the distribution of social support during the COVID-19 pandemic, to the introduction of vaccine passports, to the rejection by 64 percent of Swiss voters of their government’s plans for an ID system run by commercial companies.

“As digital identity systems progress the promise of new opportunities in public service, governance, and economic growth, we are seeing growing recognition for the need to justify the confidence people are being asked to have in these systems,” says Turing Fellow Carsten Maple, lead author of the Technical Briefing and a principal investigator for the project.

“Our aim is to enhance the many principle-based and trust frameworks currently guiding development today with tangible, practical mechanisms for objectively demonstrating the facets in an identity system that warrant it being trusted.”

Six pillars of trustworthiness

Facets of Trustworthiness in Digital Identity Systems details six pillars of trustworthiness. These are security, privacy, robustness, ethics, reliability and resiliency. They will be used to “define aspects that determine predictability of outputs, the appropriateness of information collected, and the sustainability of design in terms of the technology, social and economic environments in which they operate.”

It sets out opportunities for assessing ethical considerations, such as whether the introduction of a digital system creates bias or barriers that could impact inclusive and fair access to resources and services. The lens provided by the six facets also presents the opportunity to bring together and explore different criteria underpinning aspects such as usability, openness and explainability that are often considered in a more focussed context. 

The publication highlights that identity systems operate within an ecosystem of technologies, databases, networks and other infrastructure. The proliferation of machine learning in particular is revolutionising development in the field through artificial intelligence (AI)-based processes.

“We see significant emphasis on the need for providers to demonstrate security, and increasingly privacy measures within their own solutions. Current developments, however, underline a much broader imperative to provide arguments that speak to why users should trust the overall processes and procedures that govern ID management,” said Maple. 

The report is available here.