
    Current Concepts and Trends in Human-Automation Interaction

    This publication is freely accessible with the permission of the rights owner under an Alliance licence and a national licence (funded by the DFG, German Research Foundation). The purpose of this panel was to provide a general overview and discussion of some of the most current and controversial concepts and trends in human-automation interaction. The panel was composed of eight researchers and practitioners. The panelists are well-known experts in the area and offered differing views on a variety of human-automation topics. The concepts and trends discussed in this panel include: general taxonomies regarding stages and levels of automation and function allocation, individualized adaptive automation, automation-induced complacency, economic rationality and the use of automation, the potential utility of false alarms, the influence of different types of false alarms on trust and reliance, and a system-wide theory of trust in multiple automated aids.

    Influence of cultural factors in dynamic trust in automation

    The use of autonomous systems has increased rapidly in recent decades. To improve human-automation interaction, trust has been closely studied, and research shows trust is critical in the development of appropriate reliance on automation. To examine how trust mediates the human-automation relationship across cultures, the present study investigated the influences of cultural factors on trust in automation. Theoretically guided empirical studies were conducted in the U.S., Taiwan, and Turkey to examine how cultural dynamics affect various aspects of trust in automation. The results revealed significant cultural differences in human trust attitudes toward automation.

    The Dynamics of Human Trust in Aviation Automation Technology

    This quantitative, descriptive survey study aims to determine how automation systems' purpose, performance, and process influence US regional airline pilots' trust in automation technology. By investigating human perceptions of trust in automation technology, user education and system technology may be developed to facilitate appropriate use. The roles of a tool are to extend human physical and mental limits and to benefit humans by reducing workload, enhancing safety, and improving the quality of life. Automation was initially considered just another tool, but as technology developed it became less mechanical and more intelligent. On the flight deck of an airliner, there are two imperfect entities: humans and automation technology. Trust shapes, defines, and limits the human-automation technology relationship. Despite trust's ubiquity, its definition remains elusive, contextual, a matter of perspective, and controversial. The most cited definition suggests trust requires the user's uncertainty and vulnerability (risk). The risk lies in the imperfection of automation technology: the technology may not perform appropriately to meet the user's desired outcomes. Trust determines acceptance of and reliance on automation technology in situations characterized by user vulnerability and uncertainty. Furthermore, there is controversy in the relationship between trust and distrust: whether they form opposite ends of a unidimensional bipolar continuum, where an increase in trust requires a commensurate decrease in distrust, or are independent constructs, where an increase in trust does not necessarily affect distrust. Appropriate levels of human trust in automation technology are crucial. Calibrated trust occurs when the user's trust in automation technology accurately matches the technology's capability and trustworthiness, encouraging appropriate and timely use. Trust significantly influences automation use, misuse, abuse, and disuse. The distinction between trust and reliance is that trust is an attitude, whereas reliance is the resulting behavior. Despite the imperfections of automation, pilots are prone to trust and rely on automation.

    Task Load and Automation Use in an Uncertain Environment

    The purpose of this research was to investigate the effects that user task load level has on the relationship between an individual's trust in and subsequent use of a system's automation. Automation research has demonstrated a positive correlation between an individual's trust in and subsequent use of automation. Military decision-makers trust and use information system automation to make many tactical judgments and decisions. In situations of information uncertainty (information warfare environments), decision-makers must remain aware of information reliability issues and temper their use of system automation if necessary. An individual's task load may affect his or her use of a system's automation in environments of information uncertainty. It was hypothesized that user task load would moderate the positive relationship between system automation trust and use of system automation; specifically, that in situations of information uncertainty (low trust), high task load would weaken that relationship. To test this hypothesis, an experiment was conducted in a simulated command and control micro-world in which system automation trust and individual task load were manipulated. The findings support the positive relationship between automation trust and automation use found in previous research and suggest that task load does weaken that relationship.
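
    The moderation hypothesis above can be made concrete as an ordinary least squares regression with a trust-by-task-load interaction term: a negative interaction coefficient indicates that high task load weakens the trust-use relationship. The sketch below is illustrative only, using simulated data with invented effect sizes rather than the study's data, and assumes numpy and statsmodels are available.

        import numpy as np
        import statsmodels.api as sm

        # Simulated moderation analysis: automation use regressed on trust,
        # task load, and their interaction. All values are invented.
        rng = np.random.default_rng(0)
        n = 200
        trust = rng.uniform(0, 1, n)        # trust in the automation (manipulated)
        taskload = rng.integers(0, 2, n)    # 0 = low task load, 1 = high task load
        use = 0.8 * trust - 0.5 * trust * taskload + rng.normal(0, 0.1, n)

        # Design matrix: intercept, main effects, and the trust x task-load interaction.
        X = sm.add_constant(np.column_stack([trust, taskload, trust * taskload]))
        fit = sm.OLS(use, X).fit()
        print(fit.params)  # a negative last coefficient is consistent with moderation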

    Exploring the Efficacy of Social Trust Repair in Human-Automation Interactions

    Trust is a critical component of both human-automation and human-human interactions. Interface manipulations, such as visual anthropomorphism and machine politeness, have been used to affect trust in automation. However, these design strategies have primarily been used to facilitate initial trust formation and have not been examined as means to actively repair trust that has been violated by a system failure. Previous research has shown that trust in another party can be effectively repaired after a violation using various strategies, but there is little evidence substantiating such strategies in a human-automation context. The current study examined the effectiveness of trust repair strategies, derived from human-human or human-organizational contexts, in human-automation interaction. During a taxi dispatching task, participants interacted with imperfect automation that either denied or apologized for committing competence- or integrity-based failures. Participants performed two experimental blocks (one for each failure type) and, after each block, reported subjective trust in the automation. Consistent with the interpersonal literature, our analysis revealed that automation apologies repaired trust more successfully following competence-based failures than integrity-based failures. However, user trust in automation did not differ significantly when the automation denied committing competence- or integrity-based failures. These findings provide important insight into the unique ways in which humans interact with machines.

    Relation between Trust Attitudes Toward Automation, Hofstede’s Cultural Dimensions, and Big Five Personality Traits

    Automation has been widely used in interactions with smartphones, computers, and other machinery in recent decades. Studies have shown that inappropriate reliance on automation can lead to unexpected and even catastrophic results. Trust is conceived as an intervening variable between user intention and actions involving reliance on automation. It is generally believed that trust is dynamic and that an individual's culture or personality may influence automation use through changes in trust. To better understand how cultural and individual differences may affect a person's trust and resulting behaviors, the present study examined the effects of cultural characteristics and personality traits on reported trust in automation in U.S., Taiwanese, and Turkish populations. The results showed that individual differences significantly affected human trust in automation across the three cultures.

    The Effect of Intermediate Trust Ratings on Automation Reliance

    As automated systems are increasingly capable of augmenting human decision-makers, appropriate reliance on automation has the potential to increase safety and efficiency in several high-stakes domains. To that end, a solid understanding of how and under what conditions people rely on automation is needed to design decision aids that allow people to rely on them appropriately. Previous studies have used regular trust ratings during human–automation interactions to examine how trust develops and evolves, but such intermediate judgments might themselves affect subsequent reliance decisions. This dissertation addresses that knowledge gap by empirically exploring how intermediate trust ratings affect automation reliance in human–automation interactions. A laboratory experiment was conducted in which 118 participants, supported by automated decision aids, identified UAVs in images, to determine whether trust rating frequency, automation reliability, and participant motivation affected participant reliance behavior. Findings show that intermediate trust ratings increased automation reliance and retrospective trust ratings but did not affect response time. This dissertation proposes an extended theoretical model that might help explain and predict automation reliance. Additionally, it suggests that intermediate trust ratings might be suitable for calibrating automation reliance but not for research that seeks to measure trust without influencing reliance behavior.

    Effects of Take-Over Requests and Cultural Background on Automation Trust in Highly Automated Driving

    Appropriate automation trust is a prerequisite for safe, comfortable, and efficient use of highly automated driving systems (HADS). Earlier research indicates that a driver's nationality and Take-Over Requests (TOR) due to imperfect system reliability might affect trust, but this has never been investigated in the context of highly automated driving. A driving simulator study (N = 80) showed that TORs only temporarily lowered trust in HADSs and revealed similarities in trust formation between German and Chinese drivers. Trust was significantly higher after experiencing the system than before, both for German and Chinese participants. However, Chinese drivers reported significantly higher automation mistrust than German drivers. Self-report measures of automation trust were not connected to behavioral measures. The results support a distinction between automation trust and mistrust as separate constructs, short- and long-term effects of TORs on automation trust, and cultural differences in automation trust.

    Toward Adaptive Trust Calibration for Level 2 Driving Automation

    Properly calibrated human trust is essential for successful interaction between humans and automation. However, while human trust calibration can be improved by increased automation transparency, too much transparency can overwhelm human workload. To address this tradeoff, we present a probabilistic framework using a partially observable Markov decision process (POMDP) for modeling the coupled trust-workload dynamics of human behavior in an action-automation context. We specifically consider hands-off Level 2 driving automation in a city environment involving multiple intersections, where the human chooses whether or not to rely on the automation. We consider automation reliability, automation transparency, and scene complexity, along with human reliance and eye-gaze behavior, to model the dynamics of human trust and workload. We demonstrate that our model framework can appropriately vary automation transparency based on real-time human trust and workload belief estimates to achieve trust calibration.
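
    To make the belief-estimation idea concrete, here is a minimal sketch of a Bayes-filter update over discrete trust-workload states, the core inference step inside such a POMDP. The state space, transition matrix, and observation probabilities below are invented for illustration and are not the paper's fitted model; only a single fixed action (one transparency setting) is shown.

        import numpy as np

        # Hidden states: (trust, workload) in {low, high} x {low, high}.
        STATES = [("low", "low"), ("low", "high"), ("high", "low"), ("high", "high")]

        # Transition model P(s' | s) under one fixed action (assumed values);
        # rows are current states, columns are next states.
        T = np.array([
            [0.60, 0.20, 0.15, 0.05],
            [0.30, 0.50, 0.05, 0.15],
            [0.05, 0.05, 0.70, 0.20],
            [0.05, 0.15, 0.20, 0.60],
        ])

        # Observation model P(relied | s'): higher trust makes reliance more likely.
        P_RELY = np.array([0.20, 0.35, 0.75, 0.85])

        def belief_update(belief, relied):
            """One filter step: predict with T, then correct with the observation."""
            predicted = belief @ T
            likelihood = P_RELY if relied else 1.0 - P_RELY
            posterior = predicted * likelihood
            return posterior / posterior.sum()

        # Start uncertain, then observe the driver relying twice and overriding once.
        b = np.full(4, 0.25)
        for relied in (True, True, False):
            b = belief_update(b, relied)
            print({s: round(float(p), 3) for s, p in zip(STATES, b)})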

    A question of trust: can we build an evidence base to gain trust in systematic review automation technologies?

    Background: Although many aspects of systematic reviews use computational tools, systematic reviewers have been reluctant to adopt machine learning tools.

    Discussion: We argue that the reasons for the slow adoption of machine learning tools in systematic reviews are multifactorial. We focus on the current absence of trust in automation and on set-up challenges as major barriers to adoption. It is important that reviews produced using automation tools are considered non-inferior or superior to current practice. However, this standard alone will likely not be sufficient to lead to widespread adoption. As with many technologies, it is important that reviewers see "others" in the review community using automation tools. Adoption will also be slow if the automation tools are not compatible with the workflows and tasks currently used to produce reviews. Many automation tools being developed for systematic reviews address classification problems; therefore, the evidence that these tools are non-inferior or superior can be presented using methods similar to diagnostic test evaluations, i.e., precision and recall compared with a human reviewer. However, the assessment of automation tools does present unique challenges for investigators and systematic reviewers, including the need to clarify which metrics are of interest to the systematic review community and the unique documentation challenges of reproducible software experiments.

    Conclusion: We discuss adoption barriers with the goal of providing tool developers with guidance on how to design and report such evaluations, and end users with guidance on how to assess their validity. Further, we discuss approaches to formatting and announcing publicly available datasets suitable for the assessment of automation technologies and tools. Making these resources available will increase trust that tools are non-inferior or superior to current practice. Finally, we note that, even with evidence that automation tools are non-inferior or superior to current practice, substantial set-up challenges remain for mainstream integration of automation into the systematic review process.
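
    As a concrete illustration of the diagnostic-test-style evaluation mentioned above, the short sketch below computes precision and recall for a hypothetical abstract-screening classifier against human reviewer labels; both label lists are invented for illustration.

        # Hypothetical screening decisions: 1 = include the study, 0 = exclude it.
        human = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]  # human reviewer's judgments
        tool  = [1, 1, 0, 0, 0, 1, 1, 0, 0, 1]  # automation tool's predictions

        tp = sum(h == 1 and t == 1 for h, t in zip(human, tool))  # agreed inclusions
        fp = sum(h == 0 and t == 1 for h, t in zip(human, tool))  # tool over-included
        fn = sum(h == 1 and t == 0 for h, t in zip(human, tool))  # tool missed these

        precision = tp / (tp + fp)  # of the tool's inclusions, fraction the human agreed with
        recall = tp / (tp + fn)     # of the human's inclusions, fraction the tool caught

        print(f"precision={precision:.2f}, recall={recall:.2f}")

    In screening applications, recall is typically the more critical metric, since an eligible study excluded by the tool is unlikely to be recovered later in the review.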