More than half of the British public are concerned about sharing biometric data – such as facial images – between the police and the private sector to tackle crimes like shoplifting. That’s according to new research by the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute.
The new UK study found that 85 percent of people were comfortable with police using the technology to verify identities at the UK border. This fell to around 60 percent when respondents were asked about police using it to identify criminal suspects in a crowd. And less than a third (29 percent) were comfortable with biometric data being used by the police to determine whether someone might be telling the truth.
Overall, people are more likely to trust the use of these systems by public sector organisations such as the police (79 percent) and the NHS (66 percent) than by private companies, particularly employers (42 percent) and retailers (38 percent).
Sam Stockwell, lead author and research associate at The Alan Turing Institute, said: “Our research shows that people are marginally optimistic about the benefits of biometric systems for reducing crime, but there’s also a clear acknowledgement that those using them need to provide the general public with greater confidence that appropriate safeguards are in place.”
Survey respondents suggested that in most cases, biometric systems should be explicitly regulated rather than outright banned. But when asked specifically about bans, nearly two thirds of respondents believed that the use of biometric systems in job interviews to assess performance should be banned. They were also supportive of a ban on tracking student or employee engagement, for example to understand whether someone is paying attention.
The study moves beyond debates focused on facial recognition systems, in order to account for emerging but scientifically contested systems that could be used to classify individuals into demographic groups (e.g. age estimation technology) and infer behavioural states (e.g. emotion recognition systems for gauging expressions like fear or anger).
The researchers acknowledge concerns, both within the literature and from the public, over the potentially discriminatory implications of these emerging systems. These include categorising individuals into demographic groups such as race and gender – which can be contextual, fluid, and political – as well as the particular problems of trying to infer emotions from neurodivergent individuals.
The researchers also emphasise the need to distinguish between different biometric data types to reflect how these new systems operate. The division is between established data types, which can be used to uniquely identify someone with a high degree of confidence (e.g. fingerprints and DNA), and novel data types – such as analysis of someone’s gait or keyboard strokes – which are weaker markers of unique identification.
Looking ahead to the potential integration of biometric systems across society, 42 percent of survey respondents predicted that the benefits will ‘somewhat’ outweigh the concerns. In contrast, 29 percent predicted that the concerns will either ‘somewhat’ or ‘far’ outweigh the benefits.