    Measures of Reliance and Compliance in Aided Visual Scanning

    Objective: We study the dependence or independence of reliance and compliance as two responses to alarms to understand the mechanisms behind these responses. Background: Alarms, alerts, and other binary cues affect user behavior in complex ways. It has been suggested that there are two different responses to alerts: compliance (the tendency to perform an action cued by the alert) and reliance (the tendency to refrain from actions as long as no alert is issued). The study tests the degree to which these two responses are indeed independent. Method: An experiment tested the effects of the positive and negative predictive values of the alerts (PPV and NPV) on measures of compliance and reliance based on cutoff settings, response times, and subjective confidence. Results: For cutoff settings and response times, compliance was unaffected by the irrelevant NPV, whereas reliance depended on the irrelevant PPV. For subjective estimates, there were no significant effects of the irrelevant variables. Conclusion: Results suggest that compliance is relatively stable and unaffected by irrelevant information (the NPV), whereas reliance is also affected by the PPV. The results support the notion that reliance and compliance are separate, but related, forms of trust. Application: False alarm rates, which affect the PPV, determine both the response to alerts (compliance) and the tendency to limit precautions when no alert is issued (reliance).
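
    For reference, PPV and NPV follow directly from an alert system's confusion counts. A minimal sketch in Python, using hypothetical counts rather than data from this study:

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive values from confusion counts.

    PPV = P(event | alert)       = TP / (TP + FP)
    NPV = P(no event | no alert) = TN / (TN + FN)
    """
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Hypothetical system: 80 hits, 20 false alarms, 880 correct rejections, 20 misses.
ppv, npv = predictive_values(tp=80, fp=20, tn=880, fn=20)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.80, NPV = 0.98
```

    In these terms, compliance should rationally track the PPV and reliance the NPV; the experiment asks whether each behavior also shifts with the value that is irrelevant to it.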

    Theoretical, Measured and Subjective Responsibility in Aided Decision Making

    When humans interact with intelligent systems, their causal responsibility for outcomes becomes equivocal. We analyze the descriptive abilities of a newly developed responsibility quantification model (ResQu) to predict actual human responsibility and perceptions of responsibility in the interaction with intelligent systems. In two laboratory experiments, participants performed a classification task, aided by classification systems with different capabilities. We compared the predicted theoretical responsibility values to the actual measured responsibility participants took on and to their subjective rankings of responsibility. The model predictions were strongly correlated with both measured and subjective responsibility. A bias existed only when participants with poor classification capabilities relied less than optimally on a system with superior classification capabilities and assumed higher-than-optimal responsibility. The study implies that when humans interact with advanced intelligent systems whose capabilities greatly exceed their own, their comparative causal responsibility will be small, even if the human is formally assigned major roles. Simply putting a human into the loop does not assure that the human will meaningfully contribute to the outcomes. The results demonstrate the descriptive value of the ResQu model in predicting behavior and perceptions of responsibility by considering the characteristics of the human, the intelligent system, the environment, and some systematic behavioral biases. The ResQu model is a new quantitative method that can be used in system design and can guide policy and legal decisions regarding human responsibility in events involving intelligent systems.
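
    The published ResQu model is information-theoretic; its exact formulation is given in the authors' work. Purely as an illustration, the sketch below computes one assumed simplification: the human's responsibility share as the fraction of outcome entropy uniquely explained by the human once the system's output is accounted for, I(Y; H | S) / H(Y). The joint distribution is hypothetical.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical joint distribution P(S, H, Y) over binary system output S,
# human action H, and outcome Y (axes: S, H, Y); sums to 1.
joint = np.array([
    [[0.40, 0.02], [0.03, 0.05]],  # S = 0
    [[0.05, 0.03], [0.02, 0.40]],  # S = 1
])

p_s = joint.sum(axis=(1, 2))
p_sy = joint.sum(axis=1)   # P(S, Y)
p_sh = joint.sum(axis=2)   # P(S, H)

h_y = entropy(joint.sum(axis=(0, 1)))  # H(Y)
h_y_given_s = sum(p_s[s] * entropy(p_sy[s] / p_s[s]) for s in range(2))
h_y_given_sh = sum(p_sh[s, h] * entropy(joint[s, h] / p_sh[s, h])
                   for s in range(2) for h in range(2))

# I(Y; H | S) = H(Y|S) - H(Y|S,H): outcome uncertainty the human uniquely resolves.
share = (h_y_given_s - h_y_given_sh) / h_y
print(f"Illustrative human responsibility share: {share:.2f}")  # ~0.20
```

    With a highly capable system, the conditional term I(Y; H | S) shrinks, which mirrors the abstract's point that the human's comparative causal responsibility becomes small.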

    On the Relation Between Reliance and Compliance in an Aided Visual Scanning Task

    Alarms, alerts, and other binary cues affect user behavior in complex ways. One relevant distinction is the suggestion that there are two different responses to alerts: compliance (the tendency to perform an action cued by the alert) and reliance (the tendency to refrain from actions as long as no alert is issued). An experiment tested the dependence of the two behaviors on the Positive and Negative Predictive Values of the alerts (PPV and NPV) to determine whether these are indeed two different behaviors. Results suggest that compliance is relatively stable and unaffected by irrelevant information (the NPV), while reliance is also affected by the PPV. The results are discussed in terms of multiple-process theories of trust in information sources.
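
    Operationally, the two behaviors can be scored as conditional rates over trials. A minimal sketch with hypothetical trial records (the field layout is an assumption, not the study's logging format):

```python
# Each trial is (alert_issued, operator_acted); hypothetical log.
trials = [
    (True, True), (True, True), (True, False), (False, False),
    (False, False), (False, True), (True, True), (False, False),
]

alerted = [acted for alert, acted in trials if alert]
quiet = [acted for alert, acted in trials if not alert]

# Compliance: rate of acting when an alert fires.
compliance = sum(alerted) / len(alerted)
# Reliance: rate of withholding action while no alert is issued.
reliance = quiet.count(False) / len(quiet)

print(f"compliance = {compliance:.2f}, reliance = {reliance:.2f}")  # 0.75, 0.75
```

    Independence would mean compliance moves only with the PPV and reliance only with the NPV; the finding that reliance also tracks the PPV is what argues for related, multiple-process forms of trust.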

    Asymmetric effects of false positive and false negative indications on the verification of alerts in different risk conditions

    Indications from alerts or alarm systems can trigger decisions, or they can elicit further information search. We report an experiment on the tendency to collect additional information after receiving system indications. We varied the proclivity of the alarm system towards false positive or false negative indications and the perceived risk of the situation. Results showed that false alarm-prone systems led to more frequent re-checking following both alarms and non-alarms in the high-risk condition, whereas miss-prone systems led to high re-checking rates only for non-alarms, representing an asymmetry effect. Increasing the risk led to more re-checks with all alarm systems, but it had a stronger impact in the false alarm-prone condition. Results regarding the relation of risk and the asymmetry effect of false negative and false positive indications are discussed.
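
    The asymmetry has a simple Bayesian reading: a miss-prone system mainly degrades the diagnostic value of its silences, so operators have particular reason to re-check after non-alarms. A sketch with hypothetical rates:

```python
def posteriors(p_event, hit_rate, fa_rate):
    """P(event | alarm) and P(event | no alarm) via Bayes' rule."""
    p_alarm = hit_rate * p_event + fa_rate * (1 - p_event)
    p_event_alarm = hit_rate * p_event / p_alarm
    p_event_quiet = (1 - hit_rate) * p_event / (1 - p_alarm)
    return p_event_alarm, p_event_quiet

# False-alarm-prone system: alarms are weak evidence, silence is trustworthy.
print(posteriors(p_event=0.1, hit_rate=0.95, fa_rate=0.30))  # (~0.26, ~0.008)
# Miss-prone system: alarms are trustworthy, silence still leaves real risk.
print(posteriors(p_event=0.1, hit_rate=0.60, fa_rate=0.02))  # (~0.77, ~0.043)
```

    Under these illustrative numbers, a non-alarm from the miss-prone system leaves roughly five times the residual event probability of the false-alarm-prone system, consistent with elevated re-checking of non-alarms.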

    The Effect of Task Load, Automation Reliability, and Environment Complexity on UAV Supervisory Control Performance

    Over the last decade, military unmanned aerial vehicles (UAVs) have experienced exponential growth and now comprise over 40% of military aircraft. However, since most military UAVs require multiple operators (usually an air vehicle operator, payload operator, and mission commander), the proliferation of UAVs has created a manpower burden within the U.S. military. Fortunately, simultaneous advances in UAV automation have enabled a switch from direct control to supervisory control; future UAV operators will no longer directly control a single UAV subsystem but, rather, will control multiple advanced, highly autonomous UAVs. However, research is needed to better understand operator performance in a complex UAV supervisory control environment. The Naval Research Lab (NRL) developed SCOUT™ (Supervisory Control Operations User Testbed) to realistically simulate the supervisory control tasks that a future UAV operator will likely perform in a dynamic, uncertain setting under highly variable time constraints.

    The study reported herein used SCOUT to assess the effects of task load, environment complexity, and automation reliability on UAV operator performance and automation dependence. It also explored the effects of automation reliability on participants' subjective trust ratings and a possible dissociation between task load and subjective workload ratings. Eighty-one Navy student pilots completed a 34:15-minute pre-scripted SCOUT scenario, during which they managed three helicopter UAVs. To meet mission goals, they decided how best to allocate the UAVs to locate targets while maintaining communications, updating UAV parameters, and monitoring their sensor feeds and airspace. After completing training on SCOUT, participants were randomly sorted into low and high automation reliability groups. Within each group, task load (the number of messages and vehicle status updates that had to be made and the number of new targets that appeared) and environment complexity (the complexity of the payload monitoring task) were varied between low and high levels over the course of the scenario. Participants' throughput, accuracy, and expected value in response to mission events were used to assess their performance. In addition, participants rated their subjective workload and fatigue using the Crew Status Survey. Finally, a four-item survey modeled after Lee and Moray's (1994) validated scale was used to assess participants' trust in the payload task automation and their self-confidence that they could have performed the payload task manually.

    This study contributed to the growing body of knowledge on operator performance in a UAV supervisory control setting. More specifically, it provided experimental evidence of the relationship between operator task load, task complexity, and automation reliability and their effects on operator performance, automation dependence, and operators' subjective experiences of workload and fatigue. It also explored the relationship between automation reliability and operators' subjective trust in that automation. The immediate goal of this research effort is to contribute to a suite of domain-specific performance metrics to support the development, testing, and evaluation of future UAV ground control stations (GCS), particularly new work support tools and data visualizations. Long-term goals include the potential augmentation of the current Aviation Selection Test Battery (ASTB) to better select future UAV operators and operational use of the metrics to determine mission-specific manpower requirements. In the far future, UAV-specific performance metrics could also contribute to the development of a dynamic task allocation algorithm for distributing control of UAVs amongst a group of operators.
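
    The performance measures named above (throughput and accuracy over mission events) can be computed from timestamped event logs. The sketch below is a hypothetical scoring scheme, not NRL's actual SCOUT metrics; all field and function names are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MissionEvent:
    onset_s: float              # when the event appeared (message, status update, target)
    handled_s: Optional[float]  # when the operator resolved it; None if never
    correct: bool               # whether the resolution met mission requirements

def score(events, scenario_s):
    """Throughput (handled events per minute) and accuracy over handled events."""
    handled = [e for e in events if e.handled_s is not None]
    throughput = len(handled) / (scenario_s / 60.0)
    accuracy = sum(e.correct for e in handled) / len(handled) if handled else 0.0
    return throughput, accuracy

events = [MissionEvent(10, 25, True), MissionEvent(40, 90, False),
          MissionEvent(70, None, False), MissionEvent(95, 110, True)]
print(score(events, scenario_s=34 * 60 + 15))  # the 34:15 scenario = 2055 s
```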

    The Effects of Alarm System Errors on Dependence: Moderated Mediation of Trust With and Without Risk

    Research on sensor-based signaling systems suggests that false alarms and misses affect operator dependence via two independent psychological processes, hypothesized as two types of trust. These two types of trust manifest in two categorically different behaviors: compliance and reliance. The current study links the theoretical perspective outlined by Lee and See (2004) to the compliance-reliance paradigm and argues that trust mediates the false alarm-compliance relationship but not the miss-reliance relationship. Specifically, the key conditions for trust to act as a mediator are that the operator is presented with a salient choice to depend on the signaling system and that the risk associated with non-dependence is recognized. Eighty-eight participants interacted with a primary flight simulation task and a secondary signaling system task. Participants evaluated their trust in the signaling system according to the informational bases of trust: performance, process, and purpose. Half of the participants were in a high-risk group and half were in a low-risk group. The signaling systems varied by reliability (90%, 60%) within subjects and by error bias (false alarm prone, miss prone) between subjects. Analyses generally supported the hypotheses. Reliability affected compliance, but only in the false alarm prone group; conversely, reliability affected reliance, but only in the miss prone group. Higher reliability led to higher subjective trust. Conditional indirect effects indicated that individual factors of trust mediated the relationship between false alarm rate and compliance (purpose) and reliance (process), but only in the high-risk groups. Serial mediation analyses indicated that the false alarm rate affected compliance and reliance through the sequential ordering of the factors of trust, all stemming from performance. Miss rate did not affect reliance through any of the factors of trust. Theoretically, these results suggest that the compliance-reliance paradigm does not reflect two independent types of trust. Practically, the findings could inform updates to training and design recommendations that rest on the assumption that trust drives operator responses regardless of error bias.
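
    The mediation logic here (error rate affecting dependence through trust) is commonly tested with a product-of-coefficients indirect effect and a percentile bootstrap. The sketch below runs that test on simulated data; it illustrates the method only and is not the study's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data: a higher false alarm rate lowers trust, and lower trust
# lowers compliance, so the effect of X on Y runs through M.
fa_rate = rng.uniform(0.0, 0.4, n)                       # X: false alarm rate
trust = 5.0 - 6.0 * fa_rate + rng.normal(0, 1.0, n)      # M: subjective trust
compliance = 0.2 + 0.12 * trust + rng.normal(0, 0.2, n)  # Y: compliance rate

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect a*b for X -> M -> Y."""
    a = np.polyfit(x, m, 1)[0]                        # a: X -> M slope
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # b: M -> Y slope, controlling X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample participants with replacement
    boot.append(indirect_effect(fa_rate[idx], trust[idx], compliance[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{ci_low:.3f}, {ci_high:.3f}]")  # excludes 0 -> mediation
```

    Moderated mediation, as in the study, adds the risk condition as a moderator of these paths and tests the indirect effect separately at each risk level.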