
    Enhancing driving safety and user experience through unobtrusive and function-specific feedback

    Inappropriate trust in the capabilities of automated driving systems can result in misuse and insufficient monitoring behaviour that impedes safe manual driving performance following takeovers. Previous studies indicate that communicating system uncertainty can promote appropriate use and monitoring by calibrating trust. However, existing approaches require the driver to glance regularly at the instrument cluster to perceive changes in uncertainty, which may lead to missed uncertainty changes and user disruptions. Furthermore, the benefits of conveying the uncertainty of different vehicle functions, such as lateral and longitudinal control, have yet to be explored. This research addresses these gaps by investigating the impact of unobtrusive and function-specific feedback on driving safety and user experience. Transferring knowledge from other disciplines, several different techniques will be assessed in terms of their suitability for conveying uncertainty in a driving context.

    From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI

    This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely, manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies on user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for research in this area.
    https://deepblue.lib.umich.edu/bitstream/2027.42/153959/1/From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI.pdf

    Look Who's Talking Now: Implications of AV's Explanations on Driver's Trust, AV Preference, Anxiety and Mental Workload

    Explanations given by automation are often used to promote automation adoption. However, it remains unclear whether explanations promote acceptance of automated vehicles (AVs). In this study, we conducted a within-subject experiment in a driving simulator with 32 participants, using four different conditions: (1) no explanation, (2) an explanation given before the AV acted, (3) an explanation given after the AV acted, and (4) the option for the driver to approve or disapprove the AV's action after hearing the explanation. We examined four AV outcomes: trust, preference for the AV, anxiety, and mental workload. Results suggest that explanations provided before an AV acted were associated with higher trust in and preference for the AV, but there was no difference in anxiety and workload. These results have important implications for the adoption of AVs. (Comment: 42 pages, 5 figures, 3 tables)
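
    The four-condition, within-subject design described above is commonly analysed with a repeated-measures comparison. As a loose illustration only (the abstract does not state the authors' analysis; all column names and data below are hypothetical), a repeated-measures ANOVA on trust ratings could look like this in Python:

        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        # Synthetic illustration: 32 participants x 4 within-subject conditions,
        # one aggregated trust rating per cell (balanced, as AnovaRM requires).
        rng = np.random.default_rng(0)
        conditions = ["no_explanation", "before_action", "after_action", "approval"]
        df = pd.DataFrame(
            [
                {
                    "participant": p,
                    "condition": c,
                    # Give the 'before_action' condition a small bump, mirroring
                    # the reported pattern of higher trust for explanations
                    # given before the AV acted.
                    "trust": rng.normal(5.0 + (c == "before_action"), 1.0),
                }
                for p in range(32)
                for c in conditions
            ]
        )
        result = AnovaRM(df, depvar="trust", subject="participant", within=["condition"]).fit()
        print(result)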

    User expectations of partial driving automation capabilities and their effect on information design preferences in the vehicle

    Partially automated vehicles present interface design challenges in ensuring the driver remains alert should the vehicle need to hand back control at short notice, but without exposing the driver to cognitive overload. To date, little is known about driver expectations of partial driving automation and whether these affect the information they require inside the vehicle. Twenty-five participants were presented with five partially automated driving events in a driving simulator. After each event, a semi-structured interview was conducted. The interview data were coded and analysed using grounded theory. From the results, two groupings of driver expectations were identified: High Information Preference (HIP) and Low Information Preference (LIP) drivers; the information preferences differed between these two groups. LIP drivers did not want detailed information about the vehicle presented to them, but the definition of partial automation means that this kind of information is required for safe use. Hence, the results suggest that careful thought is required as to how information is presented so that LIP drivers can use partial driving automation safely. Conversely, HIP drivers wanted detailed information about the system's status and driving, and were found to be more willing to work with the partial automation and its current limitations. It was evident that drivers' expectations of the partial automation capability differed, and this affected their information preferences. Hence, this study suggests that HMI designers must account for these differing expectations and preferences to create a safe, usable system that works for everyone. [Abstract copyright: Copyright © 2019 The Authors. Published by Elsevier Ltd. All rights reserved.]

    Augmented reality displays for communicating uncertainty information in automated driving

    Safe manual driving performance following takeovers in conditionally automated driving systems is impeded by a lack of situation awareness, partly due to inappropriate trust in the system’s capabilities. Previous work has indicated that the communication of system uncertainties can aid the trust calibration process. However, it has yet to be investigated how this information is best conveyed to the human operator. The study outlined in this publication presents an interface layout for visualising function-specific uncertainty information in an augmented reality display and explores the suitability of 11 visual variables. Forty-six participants completed a sorting task and indicated their preference for each of these variables. The results demonstrate that colour-based and animation-based variables in particular, above all hue, convey a clear order in terms of urgency and are well received by participants. The presented findings have implications for all augmented reality displays that are intended to show content varying in urgency.
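
    The finding that hue conveys the clearest urgency ordering suggests a straightforward uncertainty-to-colour mapping for such displays. A minimal sketch, assuming a green-to-red hue ramp (the endpoints and function name are illustrative assumptions, not the paper's design):

        import colorsys

        def uncertainty_to_rgb(uncertainty: float) -> tuple[float, float, float]:
            """Map an uncertainty value in [0, 1] to an RGB colour.

            Low uncertainty -> green (hue 120 deg), high uncertainty -> red (hue 0 deg),
            so increasing urgency follows the familiar traffic-light ordering.
            """
            u = min(max(uncertainty, 0.0), 1.0)        # clamp to [0, 1]
            hue = (1.0 - u) * (120.0 / 360.0)          # colorsys expects hue in [0, 1]
            return colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # full saturation and value

        # Example: moderate uncertainty renders as an amber-like colour.
        print(uncertainty_to_rgb(0.5))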

    S(C)ENTINEL - monitoring automated vehicles with olfactory reliability displays

    Overreliance on technology is safety-critical, and it is assumed that this could have been a main cause of severe accidents with automated vehicles. To ease the complex task of permanently monitoring vehicle behavior in the driving environment, researchers have proposed implementing reliability/uncertainty displays. Such displays allow drivers to estimate whether or not an upcoming intervention is likely. However, presenting uncertainty visually just adds more visual workload for drivers, who might also be engaged in secondary tasks. We suggest using olfactory displays as a potential solution to communicate system uncertainty and conducted a user study (N=25) in a high-fidelity driving simulator. Results of the experiment (conditions: no reliability display, purely visual reliability display, and visual-olfactory reliability display), comparing both objective (task performance) and subjective (technology acceptance model, trust scales, semi-structured interviews) measures, suggest that olfactory notifications could become a valuable extension for calibrating trust in automated vehicles.

    Automation transparency: Implications of uncertainty communication for human-automation interaction and interfaces

    Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust due to an insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced and safe driving performance following emergency takeovers is impeded. A driving simulator study was used to assess the impact of dynamically communicating system uncertainties on monitoring, trust, workload, takeovers, and physiological responses. The uncertainty information was conveyed visually using a stylised heartbeat combined with a numerical display, and users were engaged in a visual search task. Multilevel analysis results suggest that uncertainty communication helps operators calibrate their trust and gain situation awareness prior to critical situations, resulting in safer takeovers. Additionally, eye tracking data indicate that operators can adjust their gaze behaviour in correspondence with the level of uncertainty. However, conveying uncertainties using a visual display significantly increases operator workload and impedes users in the execution of non-driving related tasks.
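
    The multilevel analysis mentioned above can be approximated with a linear mixed-effects model that treats repeated measurements as nested within participants. A minimal sketch using statsmodels (all column names and the synthetic data are hypothetical, not the study's):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic illustration: repeated trust ratings nested within participants.
        rng = np.random.default_rng(1)
        n_participants, n_trials = 30, 8
        df = pd.DataFrame({
            "participant": np.repeat(np.arange(n_participants), n_trials),
            "uncertainty_shown": np.tile([0, 1], n_participants * n_trials // 2),
        })
        df["trust"] = 4.0 + 0.8 * df["uncertainty_shown"] + rng.normal(0, 1, len(df))

        # A random intercept per participant accounts for the repeated measures.
        model = smf.mixedlm("trust ~ uncertainty_shown", df, groups=df["participant"])
        print(model.fit().summary())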

    The Interaction Gap: A Step Toward Understanding Trust in Autonomous Vehicles Between Encounters

    Shared autonomous vehicles (SAVs) will be introduced in greater numbers over the coming decade. Due to rapid advances in shared mobility and the slower development of fully autonomous vehicles (AVs), SAVs will likely be deployed before privately-owned AVs. Moreover, existing shared mobility services are transitioning their vehicle fleets toward those with increasingly higher levels of driving automation. Consequently, people who use shared vehicles on an "as needed" basis will have infrequent interactions with automated driving, thereby experiencing interaction gaps. Using human trust data from 25 participants, we show that interaction gaps can affect human trust in automated driving. Participants engaged in a simulator study consisting of two interactions separated by a one-week interaction gap. A moderate, inverse correlation was found between the change in trust during the initial interaction and the change in trust across the interaction gap, suggesting people "forget" some of their gained trust or distrust in automation during an interaction gap. (Comment: 5 pages, 3 figures)
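
    The reported inverse relationship can be checked with a standard correlation test. A minimal sketch with synthetic stand-in data (the values are hypothetical, not the study's measurements):

        import numpy as np
        from scipy.stats import pearsonr

        # Synthetic illustration: trust change during the first interaction vs.
        # trust change across the one-week gap, for 25 participants as in the study.
        rng = np.random.default_rng(2)
        initial_trust_change = rng.normal(0.0, 1.0, 25)
        gap_trust_change = -0.5 * initial_trust_change + rng.normal(0.0, 0.8, 25)

        r, p = pearsonr(initial_trust_change, gap_trust_change)
        print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r reflects the inverse relationship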

    Investigating what level of visual information inspires trust in a user of a highly automated vehicle

    The aim of this research is to investigate whether visual feedback alone can affect a driver’s trust in an autonomous vehicle and, in particular, what level of feedback (no feedback vs. moderate feedback vs. high feedback) will evoke the appropriate level of trust. Before the experiment, the Human Machine Interfaces (HMIs) were piloted with two sets of six participants (before and after iterations) to ensure the meaning of the displays could be understood by all. A static driving simulator experiment was conducted with a sample of 30 participants (aged 18 to 55). Participants completed two pre-study questionnaires to evaluate previous driving experience and attitudes to trust in automation. During the study, participants completed a trust questionnaire after each simulated scenario to assess their trust in the autonomous vehicle and the HMI displays, as well as intention to use and acceptance. The participants were shown 10 different driving scenarios that lasted approximately 2 minutes each. Results indicated that the 'high visual feedback' group recorded the highest trust ratings, significantly higher than both the 'no visual feedback' group (U = 0.00, p < 0.001) and the 'moderate visual feedback' group (U = 0.00, p < 0.001). Trust trended upward in all groups, attributable to growing familiarity with both the interfaces and the driving simulator over time. Participants’ trust was also influenced by the driving scenario, with trust reducing across all displays during safety-critical versus non-safety-critical situations.
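
    Between-group comparisons of ordinal trust ratings like those above are typically run as Mann-Whitney U tests, which SciPy provides. A minimal sketch with hypothetical ratings (not the study's data):

        import numpy as np
        from scipy.stats import mannwhitneyu

        # Synthetic illustration: post-scenario trust ratings (1-7 scale) for two
        # independent feedback groups of 10 participants each.
        rng = np.random.default_rng(3)
        high_feedback = rng.integers(5, 8, 10)  # 'high visual feedback' group
        no_feedback = rng.integers(1, 4, 10)    # 'no visual feedback' group

        u, p = mannwhitneyu(high_feedback, no_feedback, alternative="two-sided")
        print(f"U = {u}, p = {p:.4f}")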