    Simulated Automated Facial Recognition Systems as Decision-Aids in Forensic Face Matching Tasks

    Automated Facial Recognition Systems (AFRS) are used by governments, law enforcement agencies and private businesses to verify the identity of individuals. While previous research has compared the performance of AFRS and humans on tasks of one-to-one face matching, little is known about how effectively human operators can use these AFRS as decision-aids. Our aim was to investigate how the prior decision from an AFRS affects human performance on a face matching task, and to establish whether human oversight of AFRS decisions can lead to collaborative performance gains for the human-algorithm team. The identification decisions from our simulated AFRS were informed by the performance of a real, state-of-the-art Deep Convolutional Neural Network (DCNN) AFRS on the same task. Across five pre-registered experiments, human operators used the decisions from a highly accurate AFRS (>90%) to improve their own face matching performance compared to baseline (sensitivity gain: Cohen’s d = 0.71-1.28; overall accuracy gain: d = 0.73-1.46). Yet, despite this improvement, AFRS-aided human performance consistently failed to reach the level that the AFRS achieved alone. Even when the AFRS erred only on the face pairs with the highest human accuracy (>89%), participants often failed to correct the system’s errors, while also overruling many correct decisions, raising questions about the conditions under which human oversight might enhance AFRS operation. Overall, these data demonstrate that the human operator is a limiting factor in this simple model of human-AFRS teaming. These findings have implications for the “human-in-the-loop” approach to AFRS oversight in forensic face matching scenarios.
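    As a hedged illustration of the sensitivity measure reported above: face matching studies of this kind conventionally quantify sensitivity with signal detection theory’s d′. The abstract does not specify the computation, so the sketch below uses one standard formulation; the trial counts are entirely hypothetical.

    ```python
    # Minimal sketch of the signal detection sensitivity measure (d')
    # commonly used in one-to-one face matching studies.
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity (d') for a match/mismatch task, with a log-linear
        correction so that perfect rates do not yield infinite z-scores."""
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical operator: 45 of 50 match pairs called "same",
    # 8 of 50 mismatch pairs incorrectly called "same".
    print(round(d_prime(45, 5, 8, 42), 2))  # ~2.21
    ```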

    Hey Siri, should I keep this title? How algorithms support (my) decision making under uncertainty

    Algorithms are increasingly present in everyday life. These tools have moved beyond their traditional online habitats to analysing data about how people choose to commute or browse a dinner menu. This thesis investigated when and why people use algorithms as decision aids to guide their choices. I examined this central question in three experimental designs, spanning applied investigations with radiologists, studies in the laboratory, and algorithm-based scenarios in the real world. Across 14 experiments, our results highlight the central role of the individual’s knowledge about the algorithm. In various formats, we provided individuals with algorithm performance information that allowed them to compare their own abilities (or the abilities of others) to the algorithm’s. We show that, once equipped with this information, individuals adopt a strategy of selective reliance on the decision aid: they ignore the algorithm when their abilities surpass it, and appropriately defer to it when faced with choices under uncertainty. Our systematic investigations show that further opportunities to learn about the algorithm encourage not only reliance on its recommendations but also experimentation and verification of one’s knowledge about its capabilities. Together, our findings emphasise the decision-maker’s capacity to learn about the algorithm, providing insights into how we can improve the use of decision aids.
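    The “selective reliance” strategy described above lends itself to a simple decision rule. The sketch below is only an illustration of that idea, not the thesis’s actual model; the function name, inputs and threshold are all hypothetical.

    ```python
    def rely_on_algorithm(own_accuracy: float,
                          algorithm_accuracy: float,
                          confidence_in_current_choice: float,
                          uncertainty_threshold: float = 0.5) -> bool:
        """Hypothetical selective-reliance rule: defer to the decision aid
        only when it is known to outperform the decision-maker and the
        current choice feels uncertain; otherwise rely on one's own judgement."""
        algorithm_is_better = algorithm_accuracy > own_accuracy
        choice_is_uncertain = confidence_in_current_choice < uncertainty_threshold
        return algorithm_is_better and choice_is_uncertain

    # Example: a decision-maker at 80% accuracy, an aid known to be at 90%,
    # facing a case they feel unsure about -> defer to the aid.
    print(rely_on_algorithm(0.80, 0.90, confidence_in_current_choice=0.4))  # True
    ```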