5 research outputs found

    Context-Adaptive Management of Drivers' Trust in Automated Vehicles

    Full text link
    Automated vehicles (AVs) that intelligently interact with drivers must build a trustworthy relationship with them. A calibrated level of trust is fundamental for the AV and the driver to collaborate as a team. Techniques that allow AVs to perceive drivers' trust from drivers' behaviors and react accordingly are, therefore, needed for context-aware systems designed to avoid trust miscalibrations. This letter proposes a framework for the management of drivers' trust in AVs. The framework is based on the identification of trust miscalibrations (when drivers undertrust or overtrust the AV) and on the activation of different communication styles to encourage or warn the driver when deemed necessary. Our results show that the management framework is effective, increasing (decreasing) trust of undertrusting (overtrusting) drivers, and reducing the average trust miscalibration time periods by approximately 40%. The framework is applicable to the design of SAE Level 3 automated driving systems and has the potential to improve the performance and safety of driver–AV teams.
    U.S. Army CCDC/GVSC; Automotive Research Center; National Science Foundation. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/162571/1/Azevedo-Sa et al. 2020 with doi.pdf
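    The calibration logic described in this letter can be illustrated with a small sketch: when estimated trust drifts below or above the AV's capability, an encouraging or warning message is triggered. The thresholds, the shared 0-1 scale for trust and capability, and the function and message names below are illustrative assumptions, not the letter's actual framework parameters.

        from typing import Optional

        def select_message(estimated_trust: float, av_capability: float,
                           band: float = 0.1) -> Optional[str]:
            """Pick a communication style when trust drifts away from AV capability.

            estimated_trust and av_capability are assumed to share a 0-1 scale.
            """
            if estimated_trust < av_capability - band:
                return "encourage"  # undertrust: reassure the driver about the AV
            if estimated_trust > av_capability + band:
                return "warn"       # overtrust: caution the driver about AV limits
            return None             # trust is calibrated; stay quiet

        # Example: a capable AV (0.8) paired with a skeptical driver (0.5) gets encouragement.
        print(select_message(0.5, 0.8))  # -> "encourage"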

    Real-Time Estimation of Drivers' Trust in Automated Driving Systems

    Full text link
    Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task (NDRT). We conducted a study (n = 80) with a simulated SAE Level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
    National Science Foundation; Brazilian Army's Department of Science and Technology; Automotive Research Center (ARC) at the University of Michigan; U.S. Army CCDC/GVSC (government contract DoD-DoA W56HZV14-2-0001). Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/162572/1/Azevedo Sa et al. 2020.pdf
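    The abstract's Kalman-filter idea can be sketched as a scalar estimator: trust is treated as a slowly drifting state and corrected by a fused behavioral observation at each step. The random-walk dynamic, the noise values, and the way behaviors are fused into a single observation are assumptions for illustration, not the parameters identified in the paper.

        import numpy as np

        class TrustEstimator:
            def __init__(self, q: float = 0.01, r: float = 0.1):
                self.x = 0.5  # trust estimate on a 0-1 scale
                self.p = 1.0  # estimate variance
                self.q = q    # process noise: how fast trust can drift
                self.r = r    # measurement noise of the behavioral observation

            def step(self, z: float) -> float:
                """Update trust from an observation z in [0, 1], e.g., a weighted mix of
                road-monitoring gaze ratio, NDRT performance, and system usage time."""
                self.p += self.q                # predict: trust modeled as a random walk
                k = self.p / (self.p + self.r)  # Kalman gain
                self.x += k * (z - self.x)      # correct with the new observation
                self.p *= (1.0 - k)
                return float(np.clip(self.x, 0.0, 1.0))

        est = TrustEstimator()
        for z in [0.4, 0.6, 0.7, 0.7]:  # successive behavioral observations
            print(round(est.step(z), 3))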

    Trust-Based Control of (Semi)Autonomous Mobile Robotic Systems

    Get PDF
    Despite great achievements made in (semi)autonomous robotic systems, human participation is still an essential part, especially for decision-making about the autonomy allocation of robots in complex and uncertain environments. However, human decisions may not be optimal due to limited cognitive capacities and subjective human factors. In human-robot interaction (HRI), trust is a major factor that determines humans' use of autonomy. Overtrust or undertrust may lead to disproportionate autonomy allocation, resulting in decreased task performance and/or increased human workload. In this work, we develop automated decision-making aids utilizing computational trust models to help human operators achieve a more effective and unbiased allocation. Our proposed decision aids resemble the way that humans make autonomy allocation decisions but are unbiased, and they aim to reduce human workload, improve the overall performance, and achieve higher acceptance by the human. We consider two types of autonomy control schemes for (semi)autonomous mobile robotic systems. The first type is a two-level control scheme which switches between manual and autonomous control modes. For this type, we propose automated decision aids via a computational trust and self-confidence model. We provide analytical tools to investigate the steady-state effects of the proposed autonomy allocation scheme on robot performance and human workload. We also develop an autonomous decision pattern correction algorithm using nonlinear model predictive control to help the human gradually adapt to a better allocation pattern. The second type is a mixed-initiative bilateral teleoperation control scheme which requires mixing of autonomous and manual control. For this type, we utilize computational two-way trust models. Here, mixed initiative is enabled by scaling the manual and autonomous control inputs with a function of computational human-to-robot trust. The haptic force feedback cue sent by the robot is dynamically scaled with a function of computational robot-to-human trust to reduce the human's physical workload. Using the proposed control schemes, our human-in-the-loop tests show that the trust-based automated decision aids generally improve the overall robot performance and reduce the operator workload compared to a manual allocation scheme. The proposed decision aids are also generally preferred and trusted by the participants. Finally, the trust-based control schemes are extended to single-operator-multi-robot applications. A theoretical control framework is developed for these applications, and the stability and convergence issues under the switching scheme between different robots are addressed via passivity-based measures.
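    The mixed-initiative scheme described above lends itself to a short sketch: manual and autonomous commands are blended by a function of human-to-robot trust, and the haptic cue is scaled by robot-to-human trust. The linear blending law, the attenuation direction of the haptic scaling, and the variable names are illustrative assumptions rather than the dissertation's actual controller.

        import numpy as np

        def blend_command(u_manual: np.ndarray, u_auto: np.ndarray,
                          trust_h2r: float) -> np.ndarray:
            """Higher human-to-robot trust shifts control authority toward the autonomous input."""
            alpha = float(np.clip(trust_h2r, 0.0, 1.0))
            return alpha * u_auto + (1.0 - alpha) * u_manual

        def scale_haptic_feedback(force: np.ndarray, trust_r2h: float) -> np.ndarray:
            """Higher robot-to-human trust attenuates the corrective haptic cue,
            reducing the operator's physical workload (attenuation direction assumed)."""
            return (1.0 - float(np.clip(trust_r2h, 0.0, 1.0))) * force

        u = blend_command(np.array([1.0, 0.0]), np.array([0.6, 0.2]), trust_h2r=0.7)
        f = scale_haptic_feedback(np.array([2.0, -1.0]), trust_r2h=0.8)
        print(u, f)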

    Incorporating Trust and Self-Confidence Analysis in the Guidance and Control of (Semi)Autonomous Mobile Robotic Systems

    No full text

    Improving Collaboration Between Drivers and Automated Vehicles with Trust Processing Methods

    Full text link
    Trust has gained attention in the Human-Robot Interaction (HRI) field, as it is considered an antecedent of people's reliance on machines. In general, people are likely to rely on and use machines they trust, and to refrain from using machines they do not trust. Recent advances in robotic perception technologies open paths for the development of machines that can be aware of people's trust by observing their human behaviors. This dissertation explores the role of trust in the interactions between humans and robots, particularly Automated Vehicles (AVs). Novel methods and models are proposed for perceiving and processing drivers' trust in AVs and for determining both humans' natural trust and robots' artificial trust. Two high-level problems are addressed in this dissertation: (1) the problem of avoiding or reducing miscalibrations of drivers' trust in AVs, and (2) the problem of how trust can be used to dynamically allocate tasks between a human and a robot that collaborate. A complete solution is proposed for the problem of avoiding or reducing trust miscalibrations. This solution combines methods for estimating and influencing drivers' trust through interactions with the AV. Three main contributions stem from that solution: (i) the characterization of risk factors that affect drivers' trust in AVs, which provided theoretical evidence for the development of a linear model for driver trust in AVs; (ii) the development of a new method for real-time trust estimation, which leveraged the trust linear model mentioned above for the implementation of a Kalman-filter-based approach, able to provide numerical estimates from the processing of drivers' behavioral measurements; and (iii) the development of a new method for trust calibration, which identifies trust miscalibration instances from comparisons between drivers' trust in the AV and that AV's capabilities, and triggers messages from the AV to the driver. As shown by the obtained results, these messages are effective for encouraging or warning drivers who are undertrusting or overtrusting the AV's capabilities, respectively. Although the development of a trust-based solution for dynamically allocating tasks between a human and a robot (i.e., the second high-level problem addressed in this dissertation) remains an open problem, we take a step forward in that direction. The fourth contribution of this dissertation is the development of a unified bi-directional model for predicting natural and artificial trust. This trust model is based on mathematical representations of both the trustee agent's capabilities and the required capabilities for the execution of a task. Trust emerges from comparisons between the agent's capabilities and the task requirements, roughly replicating the following logic: if a trustee agent's capabilities exceed the requirements for executing a certain task, then the agent can be highly trusted (to execute that task); conversely, if that trustee agent's capabilities fall short of the task requirements, trust should be low. In this trust model, the agent's capabilities are represented by random variables that are dynamically updated over interactions between the trustor and the trustee, whenever the trustee succeeds or fails in the execution of a task. These capability representations allow for the numerical computation of the human's trust or the robot's trust, which is represented by the probability that a given trustee agent will execute a given task successfully.
    PHD Robotics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169615/1/azevedo_1.pd
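    The capability-versus-requirement logic of the bi-directional trust model can be sketched with a Beta-distributed capability that is updated on task successes and failures; trust for a task is then the probability that capability meets the task's requirement. The Beta parameterization and the 0-1 requirement scale are illustrative assumptions, not the dissertation's exact model.

        from scipy.stats import beta

        class TrusteeCapability:
            def __init__(self, successes: float = 1.0, failures: float = 1.0):
                self.a = successes  # Beta "success" pseudo-count
                self.b = failures   # Beta "failure" pseudo-count

            def update(self, succeeded: bool) -> None:
                """Shift the capability belief after observing a task outcome."""
                if succeeded:
                    self.a += 1.0
                else:
                    self.b += 1.0

            def trust_for(self, requirement: float) -> float:
                """Trust = P(capability >= requirement), with requirement in [0, 1]."""
                return float(beta.sf(requirement, self.a, self.b))

        cap = TrusteeCapability()
        for outcome in [True, True, False, True]:
            cap.update(outcome)
        print(round(cap.trust_for(0.6), 3))  # trust to execute a moderately demanding task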