
    The Interaction Gap: A Step Toward Understanding Trust in Autonomous Vehicles Between Encounters

    Shared autonomous vehicles (SAVs) will be introduced in greater numbers over the coming decade. Due to rapid advances in shared mobility and the slower development of fully autonomous vehicles (AVs), SAVs will likely be deployed before privately owned AVs. Moreover, existing shared mobility services are transitioning their vehicle fleets toward vehicles with increasingly high levels of driving automation. Consequently, people who use shared vehicles on an "as needed" basis will have infrequent interactions with automated driving, thereby experiencing interaction gaps. Using trust data from 25 participants, we show that interaction gaps can affect human trust in automated driving. Participants engaged in a simulator study consisting of two interactions separated by a one-week interaction gap. A moderate, inverse correlation was found between the change in trust during the initial interaction and the change in trust across the interaction gap, suggesting people "forget" some of their gained trust or distrust in automation during an interaction gap.
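
    A minimal sketch of the kind of correlation reported above, on invented per-participant data; the variable definitions and the use of a Pearson correlation are illustrative assumptions, not the authors' exact analysis:

    ```python
    # Illustrative correlation between trust gained in a first session and the
    # trust change observed across a one-week interaction gap. Data are simulated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 25
    trust_change_session1 = rng.normal(0.0, 1.0, n)  # gain/loss during session 1
    # "Forgetting": the change across the gap tends to oppose the session-1 change.
    trust_change_over_gap = -0.5 * trust_change_session1 + rng.normal(0, 0.8, n)

    r, p = stats.pearsonr(trust_change_session1, trust_change_over_gap)
    print(f"r = {r:.2f}, p = {p:.3f}")
    # A moderate negative r mirrors the reported pattern: participants "forget"
    # part of the trust (or distrust) they gained during the first interaction.
    ```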

    Increasing the User Experience in Autonomous Driving through different Feedback Modalities

    Within the ongoing process of defining autonomous driving solutions, experience design may represent an important interface between humans and the autonomous vehicle. This paper presents an empirical study that uses different unimodal communication channels in autonomous driving to convey the awareness and intent of autonomous vehicles. The goal is to provide recommendations for feedback solutions within holistic autonomous driving experiences. Twenty-two test subjects took part in four simulated autonomous virtual reality shuttle rides and were presented with unimodal feedback in the form of light, sound, visualisation, text and vibration. The study showed that, compared to a no-feedback baseline ride, light and visualisation created a positive user experience.

    Trust in automated vehicles: constructs, psychological processes and assessment

    There is a growing body of research on trust in driving automation systems. In this paper, we seek to clarify the way trust is conceptualized, calibrated and measured, taking into account issues related to specific levels of driving automation. We find that: (1) experience plays a vital role in trust calibration; (2) experience should be measured not just in terms of distance traveled, but in terms of the range of situations encountered; and (3) system malfunctions, and recovery from such malfunctions, are a fundamental part of this experience. We summarize our findings in a framework describing the dynamics of trust calibration. We observe that methods used to quantify trust often lack objectivity, reliability, and validity, and propose a set of recommendations for researchers seeking to select suitable trust measures for their studies. In conclusion, we argue that the safe deployment of current and future automated vehicles depends on drivers developing appropriate levels of trust. Given the potentially severe consequences of miscalibrated trust, it is essential that drivers incorporate the possibility of new and unexpected driving situations in their mental models of system capabilities. It is vitally important that we develop methods that contribute to this goal.

    Selectively Providing Reliance Calibration Cues With Reliance Prediction

    For effective collaboration between humans and intelligent agents that employ machine learning for decision-making, humans must understand what agents can and cannot do in order to avoid over- and under-reliance. A solution to this problem is adjusting human reliance through communication, using reliance calibration cues (RCCs) to help humans assess agents' capabilities. Previous studies typically attempted to calibrate reliance by presenting RCCs continuously, and when an agent should provide RCCs remains an open question. To answer this, we propose Pred-RC, a method for selectively providing RCCs. Pred-RC uses a cognitive reliance model to predict whether a human will assign a task to an agent. By comparing the predictions for the cases with and without an RCC, Pred-RC evaluates the influence of the RCC on human reliance. We tested Pred-RC in a human-AI collaboration task and found that it can successfully calibrate human reliance with a reduced number of RCCs.
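
    A minimal sketch of the selective-provision idea described above; the reliance model, task representation, target level, and threshold are hypothetical stand-ins for illustration, not the authors' implementation:

    ```python
    # Hypothetical sketch of Pred-RC-style selective cue provision.
    # reliance_model is assumed to return P(human assigns the task to the agent).

    def should_provide_rcc(reliance_model, task, target_reliance, threshold=0.05):
        """Provide an RCC only if it meaningfully shifts predicted reliance
        toward the target level (all names here are illustrative)."""
        p_without = reliance_model(task, rcc=False)  # predicted reliance, no cue
        p_with = reliance_model(task, rcc=True)      # predicted reliance, with cue
        # Provide the cue only when it moves reliance closer to the target
        # by more than the threshold; otherwise stay silent.
        gain = abs(p_without - target_reliance) - abs(p_with - target_reliance)
        return gain > threshold

    # Toy reliance model: the cue lowers predicted over-reliance on hard tasks.
    def toy_model(task, rcc):
        base = 0.9 if task["difficulty"] == "hard" else 0.5
        return base - (0.3 if rcc and task["difficulty"] == "hard" else 0.0)

    print(should_provide_rcc(toy_model, {"difficulty": "hard"}, target_reliance=0.5))  # True
    print(should_provide_rcc(toy_model, {"difficulty": "easy"}, target_reliance=0.5))  # False
    ```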

    Personal space of autonomous car's passengers sitting in the driver's seat

    This article deals with the specific context of an autonomous car navigating in an urban center within a shared space between pedestrians and cars. The driver delegates control to the autonomous system while remaining seated in the driver's seat. The proposed study gives a first insight into how human perception of space applies to vehicles by testing for the existence of a personal space around the car, and measures proxemic information about the driver's comfort zone in such conditions. Proxemics, or human perception of space, has been explored extensively for humans and for robots, leading to the concept of personal space, but only rarely for vehicles. In this article, we highlight the existence and the characteristics of a zone of comfort around the car that is not correlated with the risk of a collision between the car and other road users. Our experiment includes 19 volunteers using a virtual reality headset to watch 30 scenarios filmed in 360° from the point of view of a passenger sitting in the driver's seat of an autonomous car. They were asked to say "stop" when they felt discomfort while viewing the scenarios. The scenarios deliberately avoid any impression of imminent collision, as we want to measure discomfort rather than fear. The scenarios involve one or three pedestrians walking past the car at different distances from the wings of the car, relative to the direction of motion of the car, on both sides. The car is either static or moving straight forward at different speeds. The results indicate the existence of a comfort zone around the car in which intrusion causes discomfort. The size of the comfort zone is sensitive neither to the side of the car where the pedestrian passes nor to the number of pedestrians. In contrast, the feeling of discomfort depends on whether the car is static or moving. Another outcome of this study is an illustration of the use of first-person 360° video and a virtual reality headset to evaluate the feelings of a passenger in an autonomous car.
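
    A hedged sketch of the kind of side-sensitivity comparison such a protocol yields; the data and the choice of a t-test are illustrative assumptions, not the authors' reported analysis:

    ```python
    # Hypothetical comfort-distance analysis for a study like the one above.
    # The "stop" distances (in meters) are simulated, not the study's data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    left_side = rng.normal(1.20, 0.3, 19)   # simulated "stop" distances, left pass
    right_side = rng.normal(1.25, 0.3, 19)  # simulated "stop" distances, right pass

    # If the comfort zone is insensitive to side, the two samples should not differ.
    t, p = stats.ttest_ind(left_side, right_side)
    print(f"side effect: t = {t:.2f}, p = {p:.3f}")  # expect a non-significant p
    ```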

    Why do drivers and automation disengage the automation? Results from a study among Tesla users

    A better understanding of automation disengagements can improve the safety and efficiency of automated systems. This study investigates the factors contributing to driver- and system-initiated disengagements by analyzing semi-structured interviews with 103 users of Tesla's Autopilot and FSD Beta. Through an examination of the data, main categories and sub-categories of disengagements were identified, which led to the development of a triadic model of automation disengagements. The model treats automation and human operators as equivalent agents. It suggests that human operators disengage automation when they anticipate failure, observe unnatural or unwanted automation behavior (e.g., erratic steering, running red lights), or believe the automation is not suited to certain environments (e.g., inclement weather, non-standard roads). Human operators' negative experiences, such as frustration, feelings of unsafety, and distrust, are also incorporated into the model, as these emotions can be triggered by (anticipated) automation behaviors. The automation, in turn, monitors human operators and may disengage itself if it detects insufficient vigilance or traffic rule violations. Moreover, human operators can be influenced by the reactions of passengers and other road users, leading them to disengage automation if they sense discomfort, anger, or embarrassment due to the system's actions. This research offers insights into the factors contributing to automation disengagements, highlighting not only the concerns of human operators but also the social aspects of the phenomenon. Furthermore, the findings provide information on potential edge cases of automated vehicle technology, which may help to enhance the safety and efficiency of such systems.

    Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems

    Explainable artificially intelligent (XAI) systems form part of sociotechnical systems, e.g., human+AI teams tasked with making decisions. Yet, current XAI systems are rarely evaluated by measuring the performance of human+AI teams on actual decision-making tasks. We conducted two online experiments and one in-person think-aloud study to evaluate two currently common techniques for evaluating XAI systems: (1) using proxy, artificial tasks such as how well humans predict the AI's decision from the given explanations, and (2) using subjective measures of trust and preference as predictors of actual performance. The results of our experiments demonstrate that evaluations with proxy tasks did not predict the results of the evaluations with the actual decision-making tasks. Further, the subjective measures on evaluations with actual decision-making tasks did not predict the objective performance on those same tasks. Our results suggest that by employing misleading evaluation methods, our field may be inadvertently slowing its progress toward developing human+AI teams that can reliably perform better than humans or AIs alone.
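
    A minimal sketch of the comparison at issue: if proxy-task scores (e.g., how well people predict the AI's decisions) tracked actual decision-making performance, the correlation below would be strong. The numbers are purely illustrative, not the paper's data:

    ```python
    # Illustrative check of whether a proxy metric predicts actual task performance.
    # All values are invented for demonstration.
    import numpy as np
    from scipy import stats

    # Per-condition scores for several hypothetical XAI interface variants.
    proxy_score = np.array([0.72, 0.81, 0.65, 0.90, 0.78])  # decision-prediction accuracy
    actual_perf = np.array([0.61, 0.58, 0.66, 0.59, 0.63])  # human+AI decision accuracy

    r, p = stats.pearsonr(proxy_score, actual_perf)
    print(f"proxy vs. actual: r = {r:.2f}, p = {p:.3f}")
    # A weak or non-significant r would illustrate the paper's warning: proxy-task
    # results need not transfer to real decision-making performance.
    ```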

    Research on the influence and mechanism of human–vehicle moral matching on trust in autonomous vehicles

    Introduction: Autonomous vehicles can have social attributes and make ethical decisions during driving. In this study, we investigated the impact of human-vehicle moral matching on trust in autonomous vehicles and its mechanism. Methods: A 2 × 2 experiment involving 200 participants was conducted. Results: The data analysis shows that individuals with utilitarian morals have greater trust than those with deontological morals. Perceived value and perceived risk play a double-edged role in people's trust in autonomous vehicles. People's moral type has a positive impact on trust through perceived value and a negative impact through perceived risk. Vehicle moral type moderates the impact of human moral type on trust through perceived value and perceived risk. Discussion: The conclusion shows that heterogeneous moral matching (utilitarian people, deontological vehicles) has a more positive effect on trust than homogeneous moral matching (both deontological or both utilitarian), which is consistent with the assumption that individuals have selfish preferences. The results provide theoretical expansion for fields related to human-vehicle interaction and AI social attributes, and offer exploratory suggestions for the functional design of autonomous vehicles.
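
    A rough sketch of the mediation logic the abstract describes (moral type → perceived value → trust), using a Baron-Kenny-style regression comparison on simulated data; the variables and analysis choice are assumptions, not the authors' exact method:

    ```python
    # Illustrative mediation check: does perceived value carry part of the effect
    # of moral type on trust? Data are simulated, not the study's.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    moral_type = rng.integers(0, 2, n)  # 0 = deontological, 1 = utilitarian
    perceived_value = 0.5 * moral_type + rng.normal(0, 1, n)
    trust = 0.3 * moral_type + 0.6 * perceived_value + rng.normal(0, 1, n)

    # Total effect: trust ~ moral_type
    total = sm.OLS(trust, sm.add_constant(moral_type)).fit()
    # Direct effect controlling for the mediator: trust ~ moral_type + perceived_value
    X = sm.add_constant(np.column_stack([moral_type, perceived_value]))
    direct = sm.OLS(trust, X).fit()

    print(f"total effect:  {total.params[1]:.2f}")
    print(f"direct effect: {direct.params[1]:.2f}")
    # A direct effect smaller than the total effect suggests partial mediation
    # through perceived value.
    ```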

    Towards a conceptual model of users’ expectations of an autonomous in-vehicle multimodal experience

    People are expected to have more opportunities to spend their free time inside the vehicle as vehicle automation advances in the near future. This will enable people to turn their attention to desirable activities other than driving and to have varied in-vehicle interactions through multimodal ways of conveying and receiving information. Previous studies on in-vehicle multimodal interactions have primarily asked users to evaluate the impact of particular multimodal integrations, which does not provide an overall understanding of user expectations of the multimodal experience in autonomous vehicles. This research was designed to fill that gap by posing the key question "What are the critical aspects that differentiate and characterise in-vehicle multimodal experiences?" To answer it, five design fiction workshops were conducted separately with 17 people to understand users' expectations of the multimodal experience in autonomous vehicles. Twenty-two sub-themes of users' expected multimodal-experience tasks were extracted through thematic analysis. The research found that two dimensions, attention and duration, are the critical aspects that shape in-vehicle multimodal interactions. With this knowledge, a conceptual model of users' in-vehicle multimodal experience was proposed as a two-dimensional spectrum comprising four layers: Sustained, Distinct, Concurrent, and Coherent. The proposed conceptual model could help designers understand and approach users' expectations more clearly, allowing them to make more informed decisions from the initial stages of the design process.
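
    A toy encoding of a two-dimensional (attention × duration) spectrum with four layers; the quadrant-to-layer assignment below is purely hypothetical, since the abstract does not specify which region of the spectrum each layer occupies:

    ```python
    # Hypothetical mapping of an interaction onto the model's four layers.
    # The assignment of layers to quadrants is an assumption for illustration.

    def layer(attention: float, duration: float) -> str:
        """Map an interaction's attention demand and duration (both in [0, 1])
        to one of the four layers (illustrative assignment only)."""
        if attention >= 0.5:
            return "Sustained" if duration >= 0.5 else "Distinct"
        return "Coherent" if duration >= 0.5 else "Concurrent"

    print(layer(0.9, 0.8))  # long, attention-heavy interaction -> "Sustained" (assumed)
    print(layer(0.2, 0.1))  # brief, low-attention interaction -> "Concurrent" (assumed)
    ```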