
    The Effects of Alarm System Errors on Dependence: Moderated Mediation of Trust With and Without Risk

    Research on sensor-based signaling systems suggests that false alarms and misses affect operator dependence via two independent psychological processes, hypothesized to be two types of trust. These two types of trust manifest in two categorically different behaviors: compliance and reliance. The current study links the theoretical perspective outlined by Lee and See (2004) to the compliance-reliance paradigm and argues that trust mediates the false alarm-compliance relationship but not the miss-reliance relationship. Specifically, the key conditions for the mediation of trust are that the operator is presented with a salient choice to depend on the signaling system and that the risk associated with non-dependence is recognized. Eighty-eight participants interacted with a primary flight simulation task and a secondary signaling system task. Participants evaluated their trust in the signaling system according to the informational bases of trust: performance, process, and purpose. Half of the participants were in a high-risk group and half were in a low-risk group. The signaling systems varied by reliability (90%, 60%) within subjects and error bias (false alarm prone, miss prone) between subjects. Analyses generally supported the hypotheses. Reliability affected compliance, but only in the false alarm prone group; conversely, reliability affected reliance, but only in the miss prone group. Higher reliability led to higher subjective trust. Conditional indirect effects indicated that individual factors of trust mediated the relationship between false alarm rate and compliance (i.e., purpose) and reliance (i.e., process), but only in the high-risk groups. Serial mediation analyses indicated that the false alarm rate affected compliance and reliance through the sequential ordering of the factors of trust, all stemming from performance. Miss rate did not affect reliance through any of the factors of trust. Theoretically, these findings suggest that the compliance-reliance paradigm does not reflect two independent types of trust. Practically, this research could inform updated training and design recommendations that currently assume trust causes operator responses regardless of error bias.
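
    The conditional indirect effects reported above follow the standard product-of-paths logic: an indirect effect a x b (predictor -> mediator -> outcome) is tested separately at each level of the moderator, typically with a percentile bootstrap. Below is a minimal, hypothetical sketch of that computation; the column names (false_alarm_rate, trust_purpose, compliance, high_risk) and the data frame are assumptions for illustration, not the study's actual materials.

    ```python
    # Minimal sketch of a bootstrapped conditional indirect effect (moderated
    # mediation): false alarm rate -> trust (purpose basis) -> compliance,
    # tested per risk group. All column names are hypothetical placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def indirect_effect(df: pd.DataFrame) -> float:
        """Product-of-paths estimate a*b for one (resampled) dataset."""
        a = smf.ols("trust_purpose ~ false_alarm_rate", df).fit().params["false_alarm_rate"]
        b = smf.ols("compliance ~ trust_purpose + false_alarm_rate", df).fit().params["trust_purpose"]
        return a * b

    def bootstrap_ci(df: pd.DataFrame, n_boot: int = 5000, alpha: float = 0.05):
        """Percentile bootstrap CI; mediation is supported if the CI excludes 0."""
        rng = np.random.default_rng(42)
        estimates = [
            indirect_effect(df.iloc[rng.integers(0, len(df), len(df))])
            for _ in range(n_boot)
        ]
        return np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])

    # Conditional on the moderator: one bootstrap per risk group.
    # for risk, group in data.groupby("high_risk"):
    #     print(risk, bootstrap_ci(group))
    ```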

    Understanding the importance of trust and perceived risk to the adoption of automated driving systems

    Dissertation presented as a partial requirement for obtaining the Master's degree in Information Management, with a specialization in Marketing Intelligence.
    Automated Driving Systems (ADS) have piqued researchers' interest over the last few years. Notwithstanding, this technology is very new, and people are therefore far from sold on the safety or benefits of ADS, leading to uncertainty and distrust. This study extends this line of research by conjointly examining trust, risk, and adoption theories in the pre-adoption stage of ADS. We conducted a study of 311 European consumers using PLS-SEM. Results reveal that perceived behavioral control, performance expectancy, and trust are salient antecedents of intention to use ADS, while perceived risk is not. Implications for practice and research are discussed.
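
    For readers unfamiliar with the structural model implied above, the sketch below shows how such hypothesized paths could be specified in Python. Note the swap in technique: the paper used PLS-SEM, while semopy fits covariance-based SEM, though the latent constructs and structural paths are specified analogously. All indicator names and the survey file are hypothetical.

    ```python
    # Illustrative structural model for the adoption study above. The paper used
    # PLS-SEM; semopy fits covariance-based SEM instead, but the latent constructs
    # and structural paths are specified analogously. All indicator names and the
    # survey file are hypothetical placeholders.
    import pandas as pd
    from semopy import Model

    # Measurement part: each latent construct is reflected by survey indicators.
    # Structural part: the hypothesized antecedents of intention to use ADS.
    MODEL_DESC = """
    trust =~ t1 + t2 + t3
    perceived_risk =~ pr1 + pr2 + pr3
    performance_expectancy =~ pe1 + pe2 + pe3
    behavioral_control =~ pbc1 + pbc2 + pbc3
    intention =~ i1 + i2 + i3
    intention ~ trust + perceived_risk + performance_expectancy + behavioral_control
    """

    survey = pd.read_csv("ads_survey.csv")  # hypothetical 311-respondent dataset
    model = Model(MODEL_DESC)
    model.fit(survey)
    print(model.inspect())  # path estimates and p-values for each hypothesis
    ```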

    Whose Drive Is It Anyway? Using Multiple Sequential Drives to Establish Patterns of Learned Trust, Error Cost, and Non-Active Trust Repair While Considering Daytime and Nighttime Differences as a Proxy for Difficulty

    Semi-autonomous driving is a complex task domain with a broad range of problems to consider. The human operator's role in semi-autonomous driving is crucial because safety and performance depend on how the operator interacts with the system. Drive difficulty has not been extensively studied in automated driving systems and thus is not well understood. Additionally, few studies have examined trust development, decline, or repair over multiple drives with automated driving systems. The goal of this study was to test the effect of perceived driving difficulty on human trust in the automation and how trust is dynamically learned, reduced by automation errors, and repaired over a seven-drive series. The experiment used a 2 (task difficulty: easy vs. difficult) × 3 (error type: no error, takeover request [TOR], failure) × 7 (drive) mixed design. Lighting condition served as a proxy for driving difficulty because decreased visibility of potential hazards could make monitoring the road difficult. During the experiment, 122 undergraduate participants drove an automated vehicle seven times in either a daytime (i.e., "easy") or nighttime (i.e., "difficult") condition. Participants experienced a critical hazard event in the fourth drive, in which the automation perfectly avoided the hazard (no error condition), issued a takeover request (TOR condition), or failed to notice and respond to the hazard (failure condition). Participants completed trust ratings after each drive to track trust development. Results showed that trust improved through the first three drives, demonstrating proper trust calibration. The TOR and automation failure conditions showed significant decreases in trust after the critical hazard in drive four, whereas trust was unaffected in the no error condition. Trust naturally repaired in the TOR and failure conditions after the critical event but did not recover to pre-event levels. There was no evidence of perceived difficulty differences between the daytime and nighttime conditions, and accordingly trust did not differ between lighting conditions. This study demonstrated how trust develops and responds to errors in automated driving systems, informing future research on trust repair interventions and the design of automated driving systems.
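
    As a rough illustration of how trust trajectories in such a 2 × 3 × 7 design might be analyzed, the sketch below fits a linear mixed model with a random intercept per participant. This is a plausible approach, not necessarily the authors' analysis, and all column names are hypothetical.

    ```python
    # Plausible analysis sketch for a 2 x 3 x 7 mixed design: a linear mixed
    # model with a random intercept per participant handles the repeated trust
    # ratings. Column names (trust, difficulty, error_type, drive, participant)
    # are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    data = pd.read_csv("trust_ratings.csv")  # one row per participant per drive

    # difficulty and error_type vary between subjects; drive (1-7) within subjects
    model = smf.mixedlm(
        "trust ~ C(difficulty) * C(error_type) * C(drive)",
        data,
        groups=data["participant"],
    )
    result = model.fit()
    print(result.summary())
    ```

    Follow-up contrasts between drives four and five would then quantify the error cost and the subsequent non-active trust repair.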

    Effects of Visibility and Alarm Modality on Workload, Trust in Automation, Situation Awareness, and Driver Performance

    Driving demands sustained driver attention, and this attentional demand increases as field visibility decreases. Researchers have previously investigated how collision avoidance warning systems (CAWS) help improve driving performance. The goal of the present study was to determine whether auditory or tactile CAWS have a greater effect on driver performance, perceived workload, system trust, and situation awareness (SA). Sixty-three undergraduate students from Old Dominion University participated in this study. Participants completed two simulated driving sessions along with the Motion Sickness Susceptibility, Background Information, Trust, NASA Task Load Index, Situation Awareness Rating Technique, and Simulator Sickness questionnaires. Analyses indicated that drivers in the tactile modality condition had lower perceived workload. Drivers in the heavy fog visibility condition had the highest number of collisions and red-light tickets, yet also reported the highest overall situation awareness. Drivers in the clear visibility condition trusted tactile alarms more than auditory alarms, whereas drivers in the heavy fog condition trusted auditory alarms more than tactile alarms. The findings of this investigation could be applied to improve the design of CAWS, helping to improve driver performance and increase safety on the roadways.

    Ethical and Social Aspects of Self-Driving Cars

    As an envisaged future of transportation, self-driving cars are being discussed from various perspectives, including social, economic, engineering, computer science, design, and ethical ones. On the one hand, self-driving cars present new engineering problems that are gradually being solved. On the other hand, social and ethical problems are typically presented in the form of an idealized, unsolvable decision-making problem, the so-called trolley problem, which is grossly misleading. We argue that what is needed is an applied engineering-ethical approach to the development of new technology: the approach should be applied in the sense that it focuses on the analysis of complex real-world engineering problems. Software plays a crucial role in the control of self-driving cars; therefore, software engineering solutions should seriously address ethical and social considerations. In this paper we take a closer look at regulative instruments, standards, design, and implementations of components, systems, and services, and we present practical social and ethical challenges that have to be met, as well as novel expectations for software engineering.

    Real-Time Estimation of Drivers' Trust in Automated Driving Systems

    Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system in real time. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task (NDRT). We conducted a study (n = 80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
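
    To make the estimation idea concrete, the sketch below implements a generic scalar-state Kalman filter that fuses three noisy behavioral measurements into a single trust estimate. It is not the authors' identified model: the dynamics, observation map, and noise covariances are illustrative placeholders.

    ```python
    # Generic Kalman-filter sketch for estimating a scalar trust state from
    # noisy behavioral measurements, in the spirit of the framework above.
    # The dynamics (A, B), observation map (H), and noise covariances (Q, R)
    # are illustrative placeholders, not the authors' identified parameters.
    import numpy as np

    A, B = 0.95, 0.05                # trust decays, nudged by system performance
    H = np.array([[1.0],             # observation map: gaze-on-road ratio
                  [0.8],             # automation usage-time fraction
                  [0.6]])            # NDRT performance score
    Q = 0.01                         # process noise variance
    R = np.diag([0.05, 0.10, 0.10])  # measurement noise covariance

    def kalman_step(x, P, u, z):
        """One predict/update cycle for a scalar trust state.
        x: trust estimate, P: estimate variance,
        u: input (e.g., automation performance event), z: 3 sensed behaviors."""
        # Predict
        x_pred = A * x + B * u
        P_pred = A * P * A + Q
        # Update
        S = P_pred * (H @ H.T) + R              # innovation covariance, 3x3
        K = P_pred * H.T @ np.linalg.inv(S)     # Kalman gain, 1x3
        innovation = z - H.flatten() * x_pred   # measurement residual, length 3
        x_new = x_pred + (K @ innovation).item()
        P_new = ((1.0 - K @ H) * P_pred).item()
        return x_new, P_new

    # Example: trust rises after a clean interaction, drops after an error.
    x, P = 0.5, 1.0
    for u, z in [(1.0, np.array([0.7, 0.8, 0.6])),
                 (0.0, np.array([0.4, 0.3, 0.5]))]:
        x, P = kalman_step(x, P, u, z)
        print(f"estimated trust: {x:.2f}")
    ```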