
    User expectations of partial driving automation capabilities and their effect on information design preferences in the vehicle

    Partially automated vehicles present interface design challenges: the driver must remain alert in case the vehicle needs to hand back control at short notice, but without being exposed to cognitive overload. To date, little is known about driver expectations of partial driving automation and whether these affect the information drivers require inside the vehicle. Twenty-five participants were presented with five partially automated driving events in a driving simulator. After each event, a semi-structured interview was conducted. The interview data were coded and analysed using grounded theory. From the results, two groupings of driver expectations were identified: High Information Preference (HIP) and Low Information Preference (LIP) drivers; the information preferences of these two groups differed. LIP drivers did not want detailed information about the vehicle presented to them, yet the definition of partial automation means that this kind of information is required for safe use. Hence, the results suggest that careful thought about how information is presented is required if LIP drivers are to use partial driving automation safely. Conversely, HIP drivers wanted detailed information about the system's status and driving, and were found to be more willing to work with the partial automation and its current limitations. It was evident that drivers' expectations of the partial automation capability differed, and this affected their information preferences. Hence this study suggests that HMI designers must account for these differing expectations and preferences to create a safe, usable system that works for everyone. [Abstract copyright: Copyright © 2019 The Authors. Published by Elsevier Ltd. All rights reserved.]

    Driver Trust in Automated Driving Systems

    Vehicle automation is a prominent example of safety-critical AI-based task automation. Recent digital innovations have led to the introduction of partial vehicle automation, which can already give drivers a sense of what fully automated driving would feel like. In the context of current, imperfect vehicle automation, establishing an appropriate level of driver trust in automated driving systems (ADS) is seen as a key factor for their safe use and long-term acceptance. This paper thoroughly reviews and synthesizes the literature on driver trust in ADS, covering a wide range of academic disciplines. Pulling together knowledge on trustful user interaction with ADS, it offers a first classification of the main trust calibrators. Guided by this analysis, the paper identifies a lack of studies on adaptive, contextual trust calibration, in contrast to the numerous studies that focus on general trust calibration.

    A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction

    Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners. Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communication. However, a comprehensive understanding of the field is lacking, owing to the diversity of perspectives from the various backgrounds that influence it and the absence of a single definition of appropriate trust. To investigate this topic, this paper presents a systematic review to identify current practices in building appropriate trust, different ways to measure it, types of tasks used, and potential challenges associated with it. We also propose a Belief, Intentions, and Actions (BIA) mapping to study commonalities and differences in the concepts related to appropriate trust by (a) describing the existing disagreements on defining appropriate trust, and (b) providing an overview of the concepts and definitions related to appropriate trust in AI from the existing literature. Finally, the challenges identified in studying appropriate trust are discussed, and observations are summarized as current trends, potential gaps, and research opportunities for future work. Overall, the paper provides insights into the complex concept of appropriate trust in human-AI interaction and presents research opportunities to advance our understanding of this topic.
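
    To make the notion concrete, here is a minimal sketch (an illustration of the general concept, not code or a method from the review) of one common operationalization of appropriate trust: appropriate reliance, where a user follows AI advice when the system is likely right and overrides it when it is likely wrong, guided by a communicated confidence score. The `Advice` type, threshold value, and example data are all hypothetical.

```python
# Illustrative sketch only: "appropriate trust" operationalized as
# appropriate reliance. All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Advice:
    label: str         # the AI's recommendation
    confidence: float  # communicated confidence score in [0, 1]

def rely_on_ai(advice: Advice, threshold: float = 0.8) -> bool:
    """Follow the AI only when its stated confidence clears a threshold.

    A well-calibrated user adjusts the threshold as they learn how closely
    the system's confidence scores track its actual accuracy.
    """
    return advice.confidence >= threshold

def appropriate_reliance_rate(decisions: list[tuple[bool, bool]]) -> float:
    """Fraction of trials where reliance matched AI correctness.

    Each tuple is (relied_on_ai, ai_was_correct); relying on a correct AI
    and overriding an incorrect one both count as appropriate.
    """
    return sum(relied == correct for relied, correct in decisions) / len(decisions)

# Example: relied on a correct AI, overrode an incorrect one, relied on an
# incorrect one -> 2 of 3 decisions were appropriate.
print(appropriate_reliance_rate([(True, True), (False, False), (True, False)]))
```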

    Exploring the usability of a connected autonomous vehicle human machine interface designed for older adults

    Users of Level 4–5 connected autonomous vehicles (CAVs) should not need to intervene in the dynamic driving task or monitor the driving environment, as the system will handle all driving functions. CAV human-machine interface (HMI) dashboards for such vehicles should therefore offer features to support user situation awareness (SA) and provide additional functionality that would not be practical in non-autonomous vehicles. However, the exact features and functions, as well as their usability, might differ depending on factors such as user needs and context of use. The current paper presents findings from a simulator trial conducted to test the usability of a prototype CAV HMI designed for older adults and/or individuals with sensory and/or physical impairments: populations that will benefit enormously from the mobility afforded by CAVs. The HMI was developed to suit the needs and requirements of this demographic, based upon an extensive review of HMI and HCI principles focused on accessibility, usability and functionality [1, 2], as well as studies with target users. Thirty-one 50–88-year-olds (M = 67.52; three aged 50–59) participated in the study. They experienced four seven-minute simulated journeys involving inner and outer urban settings with mixed speed limits, and were encouraged to explore the HMI during journeys and interact with features including a real-time map display, vehicle status, emergency stop, and arrival time. Measures were taken pre-, during- and post-journey; key among them were the System Usability Scale [3] and measures of SA, task load, and trust in computers and automation. As predicted, SA decreased with journey experience; cognitive load did not, although consistent negative correlations were observed. System usability was also related to trust in technology, but not to trust in automation or attitudes towards computers. Overall, the findings are important for those designing, developing and testing CAV HMIs for older adults and individuals with sensory and/or physical impairments
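
    Since the System Usability Scale [3] was the key usability measure, the standard SUS scoring rule is sketched below. This is the published scoring procedure for the questionnaire, not analysis code from the study, and the example responses are hypothetical.

```python
# Standard System Usability Scale (SUS) scoring. `responses` holds one
# participant's answers to the ten SUS items on a 1-5 agreement scale,
# in the standard item order.

def sus_score(responses: list[int]) -> float:
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    # Odd-numbered items (1st, 3rd, ...) are positively worded: score - 1.
    # Even-numbered items are negatively worded: 5 - score.
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5  # rescale the 0-40 sum to 0-100

# Hypothetical example: a fairly positive participant scores 72.5.
print(sus_score([4, 2, 4, 2, 4, 2, 3, 2, 4, 2]))
```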

    Driving Style: How Should an Automated Vehicle Behave?

    This article reports on a study investigating how the driving behaviour of autonomous vehicles influences trust and acceptance. Two different designs were presented to two groups of participants (n = 22/21), using actual autonomously driving vehicles. The first vehicle was programmed to drive similarly to a human, “peeking” when approaching road junctions as if it were looking before proceeding. The second was programmed to convey the impression that it was communicating with other vehicles and infrastructure and “knew” whether the junction was clear, so it could proceed without ever stopping or slowing down. Results showed no significant differences in trust between the two vehicle behaviours. However, trust scores increased significantly overall for both designs as the trials progressed. Post-interaction interviews indicated pros and cons for both driving styles, and participants suggested which aspects of the driving styles could be improved. This paper presents user information recommendations for the design and programming of driving systems for autonomous vehicles, with the aim of improving their users’ trust and acceptance

    The Effects of Automation Transparency and Ethical Outcomes on User Trust and Blame Towards Fully Autonomous Vehicles

    The current study examined the effect of automation transparency on user trust and blame during forced moral outcomes. Participants read through moral scenarios in which an autonomous vehicle did or did not convey information about its decision prior to making a utilitarian or non-utilitarian decision. Participants also provided moral acceptance ratings for autonomous vehicles and humans making identical moral decisions. It was expected that trust would be highest for utilitarian outcomes and blame highest for non-utilitarian outcomes, and that both trust and blame would increase when the vehicle provided information about its decision. Results showed that moral outcome and transparency did not influence trust independently: trust was highest for non-transparent non-utilitarian outcomes and lowest for non-transparent utilitarian outcomes. Blame was not influenced by transparency, moral outcome, or their combined effects. Interestingly, acceptance was higher for autonomous vehicles that made the same utilitarian decision as humans, though no differences were found for non-utilitarian outcomes. This research highlights the importance of active versus passive harm and suggests that the type of automation transparency conveyed to an operator may be inappropriate in the presence of actively harmful moral outcomes. Theoretical insights into how ethical decisions are evaluated when different agents (human or autonomous) are responsible for active or passive moral decisions are discussed

    INVESTIGATING COLLABORATIVE EXPLAINABLE AI (CXAI)/SOCIAL FORUM AS AN EXPLAINABLE AI (XAI) METHOD IN AUTONOMOUS DRIVING (AD)

    Explainable AI (XAI) systems primarily focus on algorithms, integrating additional information into AI decisions and classifications to enhance user or developer comprehension of the system's behavior. These systems often incorporate untested concepts of explainability, lacking grounding in the cognitive and educational psychology literature (S. T. Mueller et al., 2021). Consequently, their effectiveness may be limited, as they may address problems that real users don't encounter or provide information that users do not seek. In contrast, an alternative approach called Collaborative XAI (CXAI), as proposed by S. Mueller et al. (2021), emphasizes generating explanations without relying solely on algorithms. CXAI centers on enabling users to ask questions and share explanations based on their knowledge and experience, to facilitate others' understanding of AI systems. Mamun, Hoffman, et al. (2021) developed a CXAI system akin to a Social Question and Answer (SQA) platform (S. Oh, 2018a), adapting it for AI system explanations. The system passed an evaluation based on the XAI metrics of Hoffman, Mueller, et al. (2018), as implemented in a master's thesis by Mamun (2021), which validated its effectiveness in a basic image classification domain and explored the types of explanations it generated. This Ph.D. dissertation builds upon this prior work, aiming to apply it in a novel context: users and potential users of self-driving semi-autonomous vehicles. This approach seeks to unravel communication patterns within a social QA platform (S. Oh, 2018a), the types of questions it can assist with, and the benefits it might offer users of widely adopted AI systems. Initially, the feasibility of using existing social QA platforms as explanatory tools for an existing AI system was investigated. The study found that users on these platforms collaboratively assist one another in problem-solving, with many resolutions being reached (Linja et al., 2022). An intriguing discovery was that anger directed at the AI system drove increased engagement on the platform. The subsequent phase leverages observations from social QA platforms in the autonomous driving (AD) sector to gain insights into an AI system within a vehicle. The dissertation includes two simulation studies employing these observations as training materials. The studies explore users' Level 3 Situational Awareness (Endsley, 1995) when the autonomous vehicle exhibits abnormal behavior, investigating detection rates and users' comprehension of abnormal driving situations. Additionally, these studies measure the perception of personalization within the context of the training process (Zhang & Curley, 2018), cognitive workload (Hart & Staveland, 1988), and trust and reliance (Körber, 2018) concerning the training process. The findings from these studies are mixed, showing higher detection rates of abnormal driving with training but diminished trust and reliance. The final study engages current Tesla FSD users in semi-structured interviews (Crandall et al., 2006) to explore their use of social QA platforms, their knowledge sources during the training phase, and their search for answers to abnormal driving scenarios. The results reveal extensive collaboration through social forums and group discussions, shedding light on differences in trust and reliance within this domain
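
    For readers unfamiliar with the cited workload measure (Hart & Staveland, 1988, i.e., the NASA-TLX), the sketch below shows the common unweighted "Raw TLX" scoring alongside a simple detection-rate computation of the kind the simulation studies report. This is standard-instrument scoring with hypothetical numbers, not code or data from the dissertation.

```python
# Unweighted "Raw TLX" workload scoring plus a basic detection rate.
# All example values below are hypothetical.

TLX_DIMENSIONS = (
    "mental", "physical", "temporal", "performance", "effort", "frustration",
)

def raw_tlx(ratings: dict[str, float]) -> float:
    """Unweighted NASA-TLX: mean of the six subscale ratings (0-100 each)."""
    assert set(ratings) == set(TLX_DIMENSIONS)
    return sum(ratings.values()) / len(TLX_DIMENSIONS)

def detection_rate(detected: list[bool]) -> float:
    """Share of abnormal-driving events a participant noticed."""
    return sum(detected) / len(detected)

print(raw_tlx({
    "mental": 70, "physical": 15, "temporal": 55,
    "performance": 40, "effort": 60, "frustration": 35,
}))                                               # 45.83...
print(detection_rate([True, True, False, True]))  # 0.75
```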

    Autonomous Vehicles Drive into Shared Spaces: eHMI Design Concept Focusing on Vulnerable Road Users

    In comparison to conventional traffic designs, shared spaces promote a more pleasant urban environment with slower motorized movement, smoother traffic, and less congestion. In the foreseeable future, shared spaces will be populated by a mixture of autonomous vehicles (AVs) and vulnerable road users (VRUs) such as pedestrians and cyclists. However, a driverless AV lacks a way to communicate with VRUs when they have to negotiate and reach an agreement, which brings new challenges to the safety and smoothness of traffic. To find a feasible solution for integrating AVs seamlessly into shared-space traffic, we first identified the issues that shared-space designs have not considered regarding the role of AVs. We then used an online questionnaire to ask participants how they would like the driver of a manually driven vehicle to communicate with VRUs in a shared space. We found that when the driver wanted to give suggestions to the VRUs in a negotiation, participants considered communication via the driver's body behaviors necessary. In addition, when the driver conveyed information about her/his intentions and cautions to the VRUs, participants selected different communication methods depending on their transport mode (driver, pedestrian, or cyclist). These results suggest that novel external human-machine interfaces (eHMIs) might be useful for AV-VRU communication when no driver is present. Hence, a potential eHMI design concept was proposed for different VRUs to meet their various expectations. Finally, we discuss the effects of eHMIs on improving sociality in shared spaces and on autonomous driving systems

    Investigating older adults’ preferences for functions within a human-machine interface designed for fully autonomous vehicles

    © Springer International Publishing AG, part of Springer Nature 2018. Compared to traditional cars, where most of the driver's attention is allocated to the road and to driving tasks, in fully autonomous vehicles the user is unlikely to need to intervene in driving-related functions, meaning there will be little need for HMIs to have features and functionality relating to driving. However, there will be an opportunity for a range of other interactions with the user. As such, designers and researchers need to understand what is actually needed or expected and how to balance the type of functionality they make available. In HMI design, design principles also need to be considered in relation to a range of user characteristics, such as age and sensory, cognitive and physical abilities or impairments. In this study, we proposed an HMI specially designed for connected autonomous vehicles (CAVs) with a focus on older adults. We examined older adults' preferences for CAV HMI functions and the degree to which individual differences (e.g., personality, attitudes towards computers, trust in technology, cognitive functioning) correlate with preferences for these functions. Thirty-one participants (M age = 67.52, SD = 7.29) took part in the study. They interacted with the HMI and rated its functions based on their importance and the likelihood of using them. Results suggest that participants prefer adaptive HMIs with journey-planner capabilities. As expected for a CAV HMI, the Information and Entertainment functions were also preferred. Individual differences had a limited relationship with HMI preferences
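
    As a minimal sketch of the kind of individual-differences analysis the abstract describes, the snippet below correlates a trait score (e.g., trust in technology) with function-preference ratings using Pearson's r. Neither the data nor the code come from the study; both are hypothetical.

```python
# Pearson correlation between a trait score and preference ratings.
# Hypothetical illustration, not the study's analysis.

import statistics

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: trust-in-technology scores vs. mean preference ratings
# for five participants.
trust = [3.2, 4.1, 2.8, 4.5, 3.9]
preference = [3.0, 4.4, 2.5, 4.2, 3.6]
print(round(pearson_r(trust, preference), 2))  # ~0.95, a strong positive r
```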