7 research outputs found
Metacognition, Numeracy, and Automation-aided Decision-making
Automated decision aids can improve human decision-making, but the benefits are often compromised by inefficient use. The current experiment examined whether metacognition—the ability to assess self-performance—and numeracy—the ability to understand and work with numbers—predict the efficiency of automation use in a signal detection task. Two hundred twenty-one participants classified random dot images as blue- or orange-dominant, receiving assistance from an 84% reliable decision aid on some trials. Type 1 and metacognitive signal detection measures were estimated from participants’ confidence ratings, and numeracy was measured using a subjective scale. The inefficiency of automation use was assessed by measuring the deviation from optimal bias following cues from the aid (bias error). Data gave strong evidence that metacognition was not associated with bias error, and anecdotal evidence that numeracy and suboptimality were weakly negatively correlated. These results suggest that operators used a strategy of combining the aid’s judgments with their own that is not metacognitively driven, but may depend on numeracy.
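The bias-error measure described in this abstract can be illustrated with standard equal-variance signal detection formulas. The Python sketch below is a minimal illustration, assuming equal base rates and the conventional definitions of d′ and c; all numbers are hypothetical, not the study’s data.

```python
import math
from statistics import NormalDist

Z = NormalDist().inv_cdf  # inverse standard normal CDF

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance SDT: sensitivity d' and criterion c."""
    d_prime = Z(hit_rate) - Z(fa_rate)
    c = -0.5 * (Z(hit_rate) + Z(fa_rate))
    return d_prime, c

def optimal_criterion_after_cue(d_prime, aid_reliability):
    """Criterion an ideal observer would adopt after a 'signal' cue
    from an aid of the given reliability (equal base rates assumed)."""
    prior_odds = aid_reliability / (1 - aid_reliability)
    return -math.log(prior_odds) / d_prime

# Illustrative numbers (not from the study): the observer's criterion
# after a cue, versus the optimal post-cue criterion for an 84% aid.
d, c = sdt_measures(0.80, 0.30)
c_opt = optimal_criterion_after_cue(d, 0.84)
bias_error = c - c_opt  # deviation from optimal bias following the cue
```

Under these assumptions, a positive bias error indicates the observer shifted less far toward the aid’s cue than an ideal observer would.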
Not Good Enough: Ironic Efficiency in Automation-Aided Signal Detection
During applied signal detection (e.g., airport baggage screening), human operators can be assisted in their decision-making process by automated devices. Automation implementation is aimed at increasing performance relative to unaided levels. Generally, this intended effect is empirically observed. However, operators consistently fall short of optimal levels of aided performance, indicating suboptimal aid-use efficiency. Previous research suggests aid-use efficiency might vary depending on the sensitivity levels of each agent in the human + automation team. In the present research we manipulated Task Difficulty (easy vs. difficult) and Aid Reliability (low vs. high) to examine how measures of sensitivity and aid-use efficiency vary across these factors. Participants completed a numerical signal detection task with automated support manipulated within subjects. Bayesian inference analyses suggested higher sensitivity gains were achieved at higher levels of difficulty and aid reliability. Interestingly, however, aid-use efficiency was lower under these conditions. These findings replicate and extend previously observed ironic patterns of aided performance, in which operators fall further short of optimal levels in conditions where empirical and potential levels of aid benefit are higher. These findings provide valuable insight for system designers and highlight the need to better understand the factors contributing to suboptimal human-automation interaction during aided signal detection, to promote safety and efficiency in naturalistic settings.
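Aid-use efficiency, i.e. obtained aided performance relative to a statistically optimal aided level, can be sketched with the classic combination rule for two independent equal-variance observers, d′_ideal = sqrt(d′_human² + d′_aid²). The Python sketch below is an illustration under that assumption; the squared-ratio efficiency definition and all numbers are illustrative choices, not taken from this study.

```python
import math

def ideal_team_dprime(d_human, d_aid):
    """Ideal sensitivity for two independent equal-variance observers."""
    return math.sqrt(d_human ** 2 + d_aid ** 2)

def aid_use_efficiency(d_aided, d_human, d_aid):
    """Squared ratio of obtained to ideal aided sensitivity
    (one common efficiency convention; an assumption, not the paper's)."""
    return (d_aided / ideal_team_dprime(d_human, d_aid)) ** 2

# Illustrative values for one condition (easy task, reliable aid).
d_ideal = ideal_team_dprime(2.0, 1.5)    # ideal combined d'
eff = aid_use_efficiency(2.2, 2.0, 1.5)  # obtained gain relative to ideal
```

The “ironic” pattern the abstract describes corresponds to efficiency values like `eff` dropping in exactly the conditions where `d_ideal` is furthest above unaided sensitivity.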
Simulated AFRS as decision-aids in face matching
Automated Facial Recognition Systems (AFRS) are used by governments, law enforcement agencies and private businesses to verify the identity of individuals. While previous research has compared the performance of AFRS and humans on tasks of one-to-one face matching, little is known about how effectively human operators can use these AFRS as decision-aids. Our aim was to investigate how the prior decision from an AFRS affects human performance on a face matching task, and to establish whether human oversight of AFRS decisions can lead to collaborative performance gains for the human-algorithm team. The identification decisions from our simulated AFRS were informed by the performance of a real, state-of-the-art, Deep Convolutional Neural Network (DCNN) AFRS on the same task. Across five pre-registered experiments, human operators used the decisions from highly accurate AFRS (>90%) to improve their own face matching performance compared to baseline (sensitivity gain: Cohen’s d = 0.71-1.28; overall accuracy gain: d = 0.73-1.46). Yet, despite this improvement, AFRS-aided human performance consistently failed to reach the level that the AFRS achieved alone. Even when the AFRS erred only on the face pairs with the highest human accuracy (>89%), participants often failed to correct the system’s errors, while also overruling many correct decisions, raising questions about the conditions under which human oversight might enhance AFRS operation. Overall, these data demonstrate that the human operator is a limiting factor in this simple model of human-AFRS teaming. These findings have implications for the “human-in-the-loop” approach to AFRS oversight in forensic face matching scenarios. Output Status: Forthcoming.
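The effect sizes reported above (Cohen’s d for within-subject gains over baseline) can be computed in several ways; the sketch below uses one common convention for paired designs, the mean paired difference divided by the standard deviation of the differences. All data are hypothetical, not the experiments’ results.

```python
from statistics import mean, stdev

def cohens_d_paired(aided, baseline):
    """Cohen's d for a within-subject gain: mean difference divided by
    the SD of the differences (one common convention; the exact formula
    used in the experiments is not specified in this abstract)."""
    diffs = [a - b for a, b in zip(aided, baseline)]
    return mean(diffs) / stdev(diffs)

# Illustrative per-participant sensitivity (d') scores, not study data.
baseline = [1.1, 1.4, 0.9, 1.6, 1.2]
aided    = [1.9, 1.5, 1.6, 1.8, 1.6]
effect = cohens_d_paired(aided, baseline)
```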
Automation-Aided Collaborative Strategies in Signal Detection Tasks
Automated systems have become increasingly important to human decision making and task performance in a wide variety of complex fields and situations. Decision aids, a specific form of automation, assist human operators in making complex decisions under conditions of uncertainty, such as air traffic control or combat identification. When humans collaborate with other humans in decision making tasks, they are typically plagued by inefficiencies in decision weighting, information integration, and confidence calibration. The collaborative process by which a human operator uses an automated decision aid is similarly inefficient; past research investigating the efficacy of decision aids has suggested that individuals fail to use the information provided by an aid in an ideal manner, reaching levels of performance below ideal statistical predictions of automation use. As automation is increasingly employed in both everyday and professional settings, it is of crucial importance to understand how operators use automated decision aids, and what decision-making strategies they employ. Research examining competing explanations of automation-aided decision making is conflicted, and has disagreed as to what information provided by an aid is used by human operators. To inform automation design, it is therefore important to understand the decision-making process used by operators during automation use. The present project examined automation-aided human performance in a pair of signal detection tasks, to contrast a series of cognitive process models that provide plausible explanations of aid use. Study 1 fit automation-aided human signal detection performance to psychometric curves in a hierarchical Bayesian parameter estimation procedure, to compare the competing contingent criterion and discrete state decision strategies.
Results from 143 participants found that the lack of increase in attentional lapses for aided performance was most consistent with a contingent criterion strategy of automation use, but changes in bias were observed that were inconsistent with both models. Study 2 fit automation-aided performance to a series of formal cognitive process models, comparing several plausible models of automation-use strategies and heuristics. Results from 123 participants indicated that a novel integrated confidence mixture model was the best fit to the observed data. Study 3 replicated these findings in 104 participants, with the integrated confidence model again providing the best fit to the observed data for both a 93% and an 84% reliable decision aid. The results of this project suggest that when performing a signal detection task with assistance from a decision aid, operators engage in a criterion shifting strategy, while occasionally lapsing and deferring to the aid’s judgment with a probability equal to the aid’s confidence on a given trial. Consistent with research suggesting that confidence sharing is an integral part of human-human collaboration, these results indicate that aid confidence information is valuable to operators, and is indeed used during automation-aided decision-making.
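The winning strategy described in this abstract, criterion shifting with occasional deference to the aid at a rate tied to the aid’s confidence, can be caricatured as a simple per-trial decision rule. The Python sketch below is an illustrative reading of that description, not the fitted model itself; the parameter names, default shift size, and the binary deference rule are all assumptions.

```python
import random

def aided_response(evidence, criterion, aid_says_signal, aid_confidence,
                   criterion_shift=0.5, u=None):
    """Caricature of an integrated confidence mixture strategy:
    with probability equal to the aid's confidence, lapse and defer to
    the aid; otherwise respond from a criterion shifted toward the cue.
    `u` is an optional uniform(0,1) draw for deterministic testing."""
    if u is None:
        u = random.random()
    if u < aid_confidence:
        return aid_says_signal  # lapse: defer to the aid's judgment
    # Criterion shifting: move the criterion toward the aid's cue.
    shift = -criterion_shift if aid_says_signal else criterion_shift
    return evidence > criterion + shift
```

As a usage sketch, weak evidence (`evidence = -0.2`) yields a “signal” response after a signal cue but a “noise” response after a noise cue, because the criterion shifts in opposite directions.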
Hey Siri, should I keep this title? How algorithms support (my) decision making under uncertainty
Algorithms are increasingly present in everyday life. These tools have moved beyond their traditional online habitats to analysing data about how people choose to commute or browse a dinner menu. This thesis investigated when and why people use algorithms as decision aids to guide their choices. I examined this central question in three experimental designs, spanning from applied investigations with radiologists to studies in the laboratory, and returning finally to algorithm-based scenarios in the real world. Across 14 experiments, our results highlight the central role of the individual’s knowledge about the algorithm. In various formats, we provided individuals with algorithm performance information that allowed them to compare their own abilities (or the abilities of others) to an algorithm. We show that once equipped with this information, individuals adopt a strategy of selective reliance on the decision aid. That is, individuals ignore the algorithm when their abilities surpass it, and appropriately defer to it when faced with choices under uncertainty. Our systematic investigations show that further opportunities to learn about the algorithm not only encourage reliance on its recommendations but also engagement in experimentation and verification of one’s knowledge about its capabilities. Together, our findings emphasise the decision-maker’s capacity to learn about the algorithm, providing insights into how we can improve the use of decision aids.
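The selective-reliance strategy summarised in this abstract amounts to a simple comparative rule: keep one’s own choice when one’s known accuracy exceeds the algorithm’s, otherwise defer. The Python sketch below is a deliberately minimal rendering under that reading; the function name and the hard threshold are illustrative assumptions, not the thesis’s formal model.

```python
def selective_reliance(own_accuracy, algorithm_accuracy,
                       own_choice, algorithm_choice):
    """Toy selective-reliance rule: rely on oneself when known to be
    at least as accurate as the algorithm, otherwise defer to it.
    (Names and the binary comparison are illustrative assumptions.)"""
    if own_accuracy >= algorithm_accuracy:
        return own_choice
    return algorithm_choice

# Usage sketch: a decision-maker who knows the aid is 80% accurate.
kept = selective_reliance(0.90, 0.80, "match", "mismatch")
ceded = selective_reliance(0.60, 0.80, "match", "mismatch")
```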