Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/34654
Appears in Collections:Psychology Journal Articles
Peer Review Status: Refereed
Title: Simulated Automated Facial Recognition Systems as Decision-Aids in Forensic Face Matching Tasks
Other Titles: Simulated AFRS as decision-aids in face matching
Author(s): Carragher, Daniel J
Hancock, Peter J B
Contact Email: p.j.b.hancock@stir.ac.uk
Keywords: human-algorithm teaming
face recognition
automation
verification
collaborative decision-making
Issue Date: 1-Dec-2022
Date Deposited: 4-Nov-2022
Citation: Carragher DJ & Hancock PJB (2022) Simulated Automated Facial Recognition Systems as Decision-Aids in Forensic Face Matching Tasks [Simulated AFRS as decision-aids in face matching]. <i>Journal of Experimental Psychology: General</i>. https://doi.org/10.1037/xge0001310
Abstract: Automated Facial Recognition Systems (AFRS) are used by governments, law enforcement agencies and private businesses to verify the identity of individuals. While previous research has compared the performance of AFRS and humans on tasks of one-to-one face matching, little is known about how effectively human operators can use these AFRS as decision-aids. Our aim was to investigate how the prior decision from an AFRS affects human performance on a face matching task, and to establish whether human oversight of AFRS decisions can lead to collaborative performance gains for the human-algorithm team. The identification decisions from our simulated AFRS were informed by the performance of a real, state-of-the-art, Deep Convolutional Neural Network (DCNN) AFRS on the same task. Across five pre-registered experiments, human operators used the decisions from highly accurate AFRS (>90%) to improve their own face matching performance compared to baseline (sensitivity gain: Cohen’s d = 0.71-1.28; overall accuracy gain: d = 0.73-1.46). Yet, despite this improvement, AFRS-aided human performance consistently failed to reach the level that the AFRS achieved alone. Even when the AFRS erred only on the face pairs with the highest human accuracy (>89%), participants often failed to correct the system’s errors, while also overruling many correct decisions, raising questions about the conditions under which human oversight might enhance AFRS operation. Overall, these data demonstrate that the human operator is a limiting factor in this simple model of human-AFRS teaming. These findings have implications for the “human-in-the-loop” approach to AFRS oversight in forensic face matching scenarios.
DOI Link: 10.1037/xge0001310
Rights: ©American Psychological Association, 2022. This paper is not the copy of record and may not exactly replicate the authoritative document published in the APA journal. The final article is available, upon publication, at: https://doi.org/10.1037/xge0001310
Notes: Output Status: Forthcoming/Available Online

Files in This Item:
File: Carragher_Hancock2022_SimulatedAFRS_accepted.pdf
Description: Fulltext - Accepted Version
Size: 3.91 MB
Format: Adobe PDF

Can humans use facial recognition algorithms to improve their identification decisions?

What is it about?

We often need to verify an individual's identity from their facial appearance. One common method of verification is the "one-to-one matching task", in which an observer is asked to decide whether a photo ID document (e.g., a passport or driver's licence) matches the person presenting it for inspection. Although this is a common task, average human performance is surprisingly poor, with error rates regularly between 20% and 30%. Recent technological advances mean that many facial recognition algorithms now outperform the average human on these verification tasks. Even so, the human operator is often responsible for reviewing the algorithm's response and then making the final identification decision. Despite such arrangements already being used for identity verification, very little is known about the collaborative performance of these human-algorithm teams. Here we investigate how knowing the decision of a facial recognition system influences the final identification decision made by the human operator in a one-to-one face matching task.
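As a rough illustration only (the paper simulated its AFRS from the responses of a real DCNN; the function name, embedding vectors, and threshold below are invented for this sketch), a one-to-one verification decision of this kind can be written as a similarity comparison between two face representations:

```python
import numpy as np

def afrs_verify(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.6) -> str:
    """Return a simulated identification decision for one face pair.

    emb_a / emb_b stand in for DCNN face embeddings of the ID photo and the
    live face; the cosine-similarity threshold of 0.6 is an arbitrary
    illustrative value, not a parameter taken from the study.
    """
    # Cosine similarity: 1.0 for identical directions, 0.0 for unrelated ones
    sim = float(np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return "match" if sim >= threshold else "non-match"
```

In the arrangement the study examines, a decision like this would be shown to the human operator, who then makes the final identification call.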

Why is it important?

We show that although humans can use the decisions from highly accurate facial recognition algorithms to improve their own performance, the decisions they make with the help of the system are actually less accurate than those the system makes alone. In other words, humans often failed to correct errors made by the facial recognition system, but also overruled many of the algorithm's correct decisions. While human oversight of facial recognition algorithms is vital, our research suggests that human ability might be a factor limiting the effectiveness of the human-algorithm team. Our findings have implications for the effective implementation and oversight of facial recognition technologies.


This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.