
    Look Who's Talking Now: Implications of AV's Explanations on Driver's Trust, AV Preference, Anxiety and Mental Workload

    Explanations given by automation are often used to promote automation adoption. However, it remains unclear whether explanations promote acceptance of automated vehicles (AVs). In this study, we conducted a within-subject experiment in a driving simulator with 32 participants, using four different conditions: (1) no explanation; (2) an explanation given before the AV acted; (3) an explanation given after the AV acted; and (4) the option for the driver to approve or disapprove the AV's action after hearing the explanation. We examined four AV outcomes: trust, preference for the AV, anxiety and mental workload. Results suggest that explanations provided before the AV acted were associated with higher trust in and preference for the AV, but there was no difference in anxiety or workload. These results have important implications for the adoption of AVs.
    Comment: 42 pages, 5 figures, 3 tables

    User expectations of partial driving automation capabilities and their effect on information design preferences in the vehicle

    Partially automated vehicles present interface design challenges: the driver must remain alert in case the vehicle needs to hand back control at short notice, but without being exposed to cognitive overload. To date, little is known about driver expectations of partial driving automation and whether these affect the information drivers require inside the vehicle. Twenty-five participants were presented with five partially automated driving events in a driving simulator. After each event, a semi-structured interview was conducted. The interview data were coded and analysed using grounded theory. From the results, two groupings of driver expectations were identified: High Information Preference (HIP) and Low Information Preference (LIP) drivers, whose information preferences differed. LIP drivers did not want detailed information about the vehicle presented to them, yet the definition of partial automation means that this kind of information is required for safe use. Hence, the results suggest that careful thought about how information is presented is required if LIP drivers are to use partial driving automation safely. Conversely, HIP drivers wanted detailed information about the system's status and driving, and were found to be more willing to work with the partial automation and its current limitations. It was evident that drivers' expectations of the partial automation capability differed, and that this affected their information preferences. Hence, this study suggests that HMI designers must account for these differing expectations and preferences to create a safe, usable system that works for everyone. [Abstract copyright: © 2019 The Authors. Published by Elsevier Ltd. All rights reserved.]

    Investigating what level of visual information inspires trust in a user of a highly automated vehicle

    The aim of this research is to investigate whether visual feedback alone can affect a driver’s trust in an autonomous vehicle and, in particular, what level of feedback (no feedback vs. moderate feedback vs. high feedback) evokes the appropriate level of trust. Before the experiment, the Human Machine Interfaces (HMIs) were piloted with two sets of six participants (before and after design iterations) to ensure the meaning of the displays could be understood by all. A static driving simulator experiment was then conducted with a sample of 30 participants (aged between 18 and 55). Participants completed two pre-study questionnaires to evaluate previous driving experience and attitude to trust in automation. During the study, participants completed a trust questionnaire after each simulated scenario to assess their trust in the autonomous vehicle and the HMI displays, as well as their intention to use and acceptance of the system. Participants were shown 10 different driving scenarios, each lasting approximately 2 minutes. Results indicated that the ‘high visual feedback’ group recorded the highest trust ratings, significantly higher than both the ‘no visual feedback’ group (U = 0.000, p < 0.001) and the ‘moderate visual feedback’ group (U = 0.000, p < 0.001). Trust trended upward in all groups as participants became familiar with both the interfaces and the driving simulator over time. Participants’ trust was also influenced by the driving scenario, with trust in all display conditions reducing during safety-critical versus non-safety-critical situations.
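
    The U values reported above are consistent with Mann-Whitney U tests between feedback groups; a U of 0 means every trust rating in one group fell below every rating in the other. A minimal sketch of such a comparison, using invented ratings (the study's raw data are not given here), might look like this:

```python
from scipy.stats import mannwhitneyu

# Illustrative 7-point trust ratings; invented values, not the study's data.
high_feedback = [6, 7, 6, 7, 7, 6, 7, 6, 7, 7]
no_feedback   = [2, 3, 2, 1, 2, 3, 2, 2, 1, 3]

u1, p = mannwhitneyu(high_feedback, no_feedback, alternative="two-sided")

# SciPy returns U for the first sample; papers conventionally report the
# smaller of the two possible U values (U1 + U2 = n1 * n2).
u = min(u1, len(high_feedback) * len(no_feedback) - u1)
print(f"U = {u:.3f}, p = {p:.4f}")
# U = 0.000 here reflects complete separation of the two groups' ratings,
# the same extreme pattern the abstract reports.
```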

    Driving automation: Learning from aviation about design philosophies

    Full vehicle automation is predicted to be on British roads by 2030 (Walker et al., 2001). However, experience in aviation gives us some cause for concern for the 'drive-by-wire' car (Stanton and Marsden, 1996). Two different philosophies have emerged in aviation for dealing with the human factor: hard vs. soft automation, depending on whether the computer or the pilot has ultimate authority (Hughes and Dornheim, 1995). This paper speculates on whether hard or soft automation provides the better solution for road vehicles, and considers an alternative design philosophy for vehicles of the future based on coordination and cooperation.