21,742 research outputs found

    Theoretical, Measured and Subjective Responsibility in Aided Decision Making

    When humans interact with intelligent systems, their causal responsibility for outcomes becomes equivocal. We analyze the descriptive abilities of a newly developed responsibility quantification model (ResQu) to predict actual human responsibility and perceptions of responsibility in interactions with intelligent systems. In two laboratory experiments, participants performed a classification task, aided by classification systems with different capabilities. We compared the predicted theoretical responsibility values to the actual measured responsibility participants took on and to their subjective rankings of responsibility. The model predictions were strongly correlated with both measured and subjective responsibility. A bias existed only when participants with poor classification capabilities relied less than optimally on a system with superior classification capabilities and consequently assumed higher-than-optimal responsibility. The study implies that when humans interact with advanced intelligent systems whose capabilities greatly exceed their own, their comparative causal responsibility will be small, even if the human is formally assigned a major role. Simply putting a human into the loop does not assure that the human will meaningfully contribute to the outcomes. The results demonstrate the descriptive value of the ResQu model for predicting behavior and perceptions of responsibility by considering the characteristics of the human, the intelligent system, the environment, and some systematic behavioral biases. The ResQu model is a new quantitative method that can be used in system design and can guide policy and legal decisions regarding human responsibility in events involving intelligent systems.

    User expectations of partial driving automation capabilities and their effect on information design preferences in the vehicle

    Partially automated vehicles present interface design challenges: the driver must remain alert in case the vehicle needs to hand back control at short notice, but without being exposed to cognitive overload. To date, little is known about driver expectations of partial driving automation and whether these expectations affect the information drivers require inside the vehicle. Twenty-five participants were presented with five partially automated driving events in a driving simulator. After each event, a semi-structured interview was conducted. The interview data were coded and analysed using grounded theory. From the results, two groupings of driver expectations were identified: High Information Preference (HIP) and Low Information Preference (LIP) drivers; the information preferences of these two groups differed. LIP drivers did not want detailed information about the vehicle presented to them, but the definition of partial automation means that this kind of information is required for safe use. Hence, the results suggest that careful thought about how information is presented to LIP drivers is required for them to use partial driving automation safely. Conversely, HIP drivers wanted detailed information about the system's status and driving and were found to be more willing to work with the partial automation and its current limitations. It was evident that drivers' expectations of the partial automation capability differed, and this affected their information preferences. Hence, this study suggests that HMI designers must account for these differing expectations and preferences to create a safe, usable system that works for everyone. [Abstract copyright: © 2019 The Authors. Published by Elsevier Ltd. All rights reserved.]

    The Effects of Alarm System Errors on Dependence: Moderated Mediation of Trust With and Without Risk

    Research on sensor-based signaling systems suggests that false alarms and misses affect operator dependence via two independent psychological processes, hypothesized as two types of trust. These two types of trust manifest in two categorically different behaviors: compliance and reliance. The current study links the theoretical perspective outlined by Lee and See (2004) to the compliance-reliance paradigm and argues that trust mediates the false alarm-compliance relationship but not the miss-reliance relationship. Specifically, the key conditions for trust to act as a mediator are that the operator is presented with a salient choice to depend on the signaling system and that the risk associated with non-dependence is recognized. Eighty-eight participants interacted with a primary flight simulation task and a secondary signaling system task. Participants were asked to evaluate their trust in the signaling system according to the informational bases of trust: performance, process, and purpose. Half of the participants were in a high-risk group and half were in a low-risk group. The signaling systems varied by reliability (90%, 60%) within subjects and error bias (false alarm prone, miss prone) between subjects. Analyses generally supported the hypotheses. Reliability affected compliance, but only in the false alarm prone group. Conversely, reliability affected reliance, but only in the miss prone group. Higher reliability led to higher subjective trust. Conditional indirect effects indicated that individual factors of trust mediated the relationship between false alarm rate and compliance (i.e., purpose) and reliance (i.e., process), but only in the high-risk groups. Serial mediation analyses indicated that the false alarm rate affected compliance and reliance through the sequential ordering of the factors of trust, all stemming from performance. Miss rate did not affect reliance through any of the factors of trust. Theoretically, these findings suggest that the compliance-reliance paradigm is not the reflection of two independent types of trust. Practically, this research could inform updates to training and design recommendations that currently assume trust causes operator responses regardless of error bias.
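
    The conditional indirect effects reported above can be probed, in the simplest case, by estimating the bootstrap indirect effect separately within each risk group. The sketch below only illustrates that general approach and is not the authors' analysis; the data file and column names (false_alarm_rate, purpose_trust, compliance, risk) are hypothetical, and it assumes the pingouin package.

```python
# Hedged sketch: probe a conditional indirect effect by running a bootstrap
# mediation analysis within each level of the moderator (risk).
# The data file and column names are hypothetical, not the study's actual code.
import pandas as pd
import pingouin as pg

df = pd.read_csv("alarm_study.csv")  # hypothetical long-format dataset

for risk_level, group in df.groupby("risk"):  # e.g., "high" vs. "low"
    med = pg.mediation_analysis(
        data=group,
        x="false_alarm_rate",   # predictor: false alarm rate of the system
        m="purpose_trust",      # mediator: purpose-based trust rating
        y="compliance",         # outcome: compliance with alarms
        n_boot=5000,
        seed=42,
    )
    print(f"Risk group: {risk_level}")
    print(med.round(3), end="\n\n")
```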

    Analysis of Human and Agent Characteristics on Human-Agent Team Performance and Trust

    The human-agent team represents a new construct in how the United States Department of Defense orchestrates mission planning and mission accomplishment. For mission planning and accomplishment to be successful, several requirements must be met: a firm understanding of human trust in automated agents, of how human and automated agent characteristics influence human-agent team performance, and of how humans behave. This thesis applies a combination of modeling techniques and human experimentation to understand these concepts. The modeling techniques used include static modeling in SysML activity diagrams and dynamic modeling of both human and agent behavior in IMPRINT. Additionally, this research included human experimentation in a dynamic, event-driven teaming environment known as Space Navigator. Both the modeling and the experimentation show that the agent's reliability has a significant effect on human-agent team performance. Additionally, this research found that the age, gender, and education level of the human user are related to the perceived trust the user has in the agent. Finally, it was found that patterns of compliant human behavior, or archetypes, can be created to classify human users.

    The Effects of Automation Transparency and Reliability on Task Shedding and Operator Trust

    Because automation use is common in many domains, understanding how to design it to optimize human-automation system performance is vital. Well-calibrated trust ensures good performance when using imperfect automation. Two factors that may jointly affect trust calibration are automation transparency and perceived reliability. Transparency information that explains automated processes and analyses to the operator may help the operator choose appropriate times to shed task control to automation. Because operator trust is positively correlated with automation use, behaviors such as task shedding to automation can indicate the presence of trust. This study used a 2 (reliability; between) × 3 (transparency; within) split-plot design to study the effects that reliability and amount of transparency information have on operators' subjective trust and task shedding behaviors. Results showed a significant effect of reliability on trust, in which high reliability resulted in more trust. There was no effect of transparency on trust, and no effect of either reliability or transparency on task shedding frequency or time to shed tasks. This may be due to the high workload of the primary task, which restricted participants' ability to use transparency information beyond the automation's recommendation. Another influence on these findings was participants' hesitance to shed tasks, which could have affected behavior regardless of automation reliability. These findings contribute to the understanding of automation trust and operator task shedding behavior. Consistent with the literature, reliability increased trust. However, there was no effect of transparency, demonstrating the complexity of the relationship between transparency and trust. Participants demonstrated a bias to retain personal control, even with highly reliable automation and at the cost of time-out errors. Future research should examine the relationship between workload and transparency and the influence of task importance on task shedding.
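
    A 2 (reliability; between) × 3 (transparency; within) split-plot design of this kind is commonly analyzed with a mixed ANOVA on the trust ratings. The sketch below illustrates that general analysis only; the data file and column names (trust, transparency, reliability, participant) are hypothetical, and it assumes the pingouin package rather than the authors' actual tooling.

```python
# Hedged sketch: mixed (split-plot) ANOVA with one between-subjects factor
# (reliability) and one within-subjects factor (transparency).
# The data file and column names are hypothetical.
import pandas as pd
import pingouin as pg

df = pd.read_csv("task_shedding_study.csv")  # hypothetical long-format data

aov = pg.mixed_anova(
    data=df,
    dv="trust",               # subjective trust score
    within="transparency",    # three transparency levels (repeated measures)
    between="reliability",    # high vs. low reliability group
    subject="participant",    # participant identifier
)
print(aov.round(3))
```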

    The microfoundations of audit quality

    The three essays collected in this dissertation relate to the microfoundations of audit quality. The first essay shows how auditors prioritize easy tasks and how this prioritization affects their judgment performance, and by extension audit quality. The second essay deals with how auditors learn in the workplace. The third essay investigates how auditors' usage of automated tools and techniques affects their professional skepticism. Together, these essays shed light on how individual auditor behaviors, judgments, and decision-making can impact audit quality. By examining auditing from an operational perspective, the findings provide a more realistic understanding of the complexities of modern audit engagements. As regulators and policymakers continue to express concerns about audit quality, these studies offer actionable interventions that can help improve it.

    Student Reliance on Simulations: The Extent That Engineering Students Rely on the Outcomes of Their Simulations

    The purpose of this research was to investigate the factors that contributed to engineering education students' reliance on technology while learning new concepts. The researcher hypothesized that students would place reliance on their technology even in the face of evidence that the technology was not working as intended. This research used a mixed-methods approach to answer three research questions: (1) How are a participant's level of automation complacency and the correctness of the simulation that participant is using related?; (2) How is automation bias related to a participant's ability to recognize errors in a simulation?; and (3) What factors explain the automation bias and automation complacency that the participants are experiencing? The third research question had two subquestions: (a) What factors explain the correlation between a participant's level of automation complacency and the correctness of the simulation that participant is using?; and (b) What factors explain the impact that automation bias has on a participant's ability to recognize errors in that simulation? This study was based on the Theory of Technology Dominance, which states that people are more likely to rely on their technology the less experience they have with the task, the higher the complexity of the task, the lower their familiarity with the technology, and the further the technology is from the skillsets needed to solve the problem. This framework is built on the automation bias and automation complacency an individual brings to technology. Automation bias is an overreliance on automation results despite contradictory information being produced by humans, while automation complacency is the acceptance of results from automation because of an unjustified assumption that the automation is working satisfactorily. To gather the necessary information, the mixed-methods study used deception techniques to divide participants into four groups: some participants were given a properly functioning simulation while others were given a faulty one, and half of each group were informed that the simulation might have errors while the other half were not. All participants who completed the study were debriefed about the real purpose of the study, but only after the information had been gathered for analysis. The simulation given to all participants was designed to help students learn and practice the Method of Joints. Students in the statics courses taught in the College of Engineering at Utah State University were invited to participate over the Spring and Fall semesters of 2022. Sixty-nine participants began the study, but only thirty-four remained through to completion. Each participant took a pre-questionnaire, worked with a provided simulation that was either correct or faulty, was possibly informed of potential errors in the simulation, and took a post-questionnaire. A few participants were invited to participate in an interview. The findings of this study revealed that students often have high levels of automation bias and automation complacency. Participants changed their answers from wrong to right more often when using correct simulations and from right to wrong more often when using faulty simulations. The accuracy of each participant's responses was also higher for those with correct simulations than for those with faulty simulations. Most participants also reported that they checked their work and changed their answers when the simulation asked them to. These findings were confirmed through the post-questionnaire results and the interview analyses across groups.
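
    For context, the Method of Joints that the simulation taught solves a statically determinate truss by writing force equilibrium (the sums of horizontal and vertical forces equal zero) at every joint and solving the resulting linear system for the member forces. The sketch below is a minimal, hypothetical illustration of that technique for a simple triangular truss; it is not the simulation used in the study.

```python
# Hedged sketch of the Method of Joints for a hypothetical 2D truss:
# build two equilibrium equations per joint and solve for member forces
# (positive = tension) and support reactions.
import numpy as np

joints = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (2.0, 3.0)}  # coordinates (m)
members = [("A", "B"), ("A", "C"), ("B", "C")]                # axial members
# Support reactions as unknown forces along unit directions:
# pin at A (x and y components), roller at B (y only).
reactions = [("A", (1.0, 0.0)), ("A", (0.0, 1.0)), ("B", (0.0, 1.0))]
loads = {"C": (0.0, -10.0)}                                   # 10 kN downward at C

names = list(joints)
n_unknowns = len(members) + len(reactions)  # statically determinate: 2 * joints
A = np.zeros((2 * len(names), n_unknowns))
b = np.zeros(2 * len(names))

for j, (start, end) in enumerate(members):
    (x1, y1), (x2, y2) = joints[start], joints[end]
    L = np.hypot(x2 - x1, y2 - y1)
    ux, uy = (x2 - x1) / L, (y2 - y1) / L          # unit vector start -> end
    i_s, i_e = names.index(start), names.index(end)
    # A tensile member force pulls each joint toward the member's other end.
    A[2 * i_s, j] += ux; A[2 * i_s + 1, j] += uy
    A[2 * i_e, j] -= ux; A[2 * i_e + 1, j] -= uy

for k, (joint, (rx, ry)) in enumerate(reactions):
    i = names.index(joint)
    A[2 * i, len(members) + k] += rx
    A[2 * i + 1, len(members) + k] += ry

for joint, (fx, fy) in loads.items():
    i = names.index(joint)
    b[2 * i] -= fx                                  # move applied loads to RHS
    b[2 * i + 1] -= fy

forces = np.linalg.solve(A, b)
for (start, end), f in zip(members, forces[: len(members)]):
    state = "tension" if f >= 0 else "compression"
    print(f"{start}{end}: {f:+.2f} kN ({state})")
```

    For this hypothetical geometry and load, the solver reports AB in tension (about +3.33 kN) and AC and BC in compression (about -6.01 kN each), which matches the hand calculation by symmetry.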

    Exploring the Efficacy of Social Trust Repair in Human-Automation Interactions

    Get PDF
    Trust is a critical component of both human-automation and human-human interactions. Interface manipulations, such as visual anthropomorphism and machine politeness, have been used to affect trust in automation. However, these design strategies have primarily been used to facilitate initial trust formation and have not been examined as means to actively repair trust that has been violated by a system failure. Previous research has shown that trust in another party can be effectively repaired after a violation using various strategies, but there is little evidence substantiating such strategies in a human-automation context. The current study examined the effectiveness of trust repair strategies, derived from human-human or human-organizational contexts, in human-automation interaction. During a taxi dispatching task, participants interacted with imperfect automation that either denied or apologized for committing competence- or integrity-based failures. Participants performed two experimental blocks (one for each failure type) and, after each block, reported their subjective trust in the automation. Consistent with the interpersonal literature, our analysis revealed that automation apologies repaired trust more successfully following competence-based failures than integrity-based failures. However, user trust in automation did not differ significantly when the automation denied committing competence- or integrity-based failures. These findings provide important insight into the unique ways in which humans interact with machines.