9 research outputs found

    Don't Worry, I'm in Control! Is Users’ Trust in Automated Driving Different When Using a Continuous Ambient Light HMI Compared to an Auditory HMI?

    Ambient LED displays have been used to provide peripheral light-based cues to drivers about a vehicle's current state, along with requests for a driver's attention or action. However, few studies have investigated the use of an ambient LED display to improve drivers' trust, perceived safety, and reactions during L3 automated driving. Due to the ambient nature of an LED lightband display, it could be anticipated to provide reassurance of the automation status while automation is on, along with a gentle cue for non-urgent transitions of control. This video submission presents a methodological overview of a driving simulator study designed to evaluate the effectiveness of an ambient peripheral light display (Lightband HMI) in terms of its potential to improve drivers' trust in L3 automation, along with a comparison of a Lightband and an Auditory HMI in terms of their effectiveness in facilitating transitions of control.

    Model of realism score for immersive VR systems

    A model of a realism score for immersive virtual reality and driving simulators is presented. First, we give an overview of the different definitions of "realism" and the different approaches that exist in the literature to objectively quantify it. Then, we present the method, the theoretical development of the score, and the proposed results. This realism score aims to objectively quantify the characteristics of the visual perception of a perfect (non-altered vision) observer experiencing an immersive VR system, as compared to the human visual system in a real (non-VR) situation. It addresses not only visual perception but also the immersivity of the experience. The approach differs from signal detection theory and quantum efficiency theory, which both rely on probability computations. It is made of several items, graded between 0 and 100, and divided into two sections: vision cues and immersion cues. These items represent, and are based on, the different skills of the human visual system. The realism score could be used as a supporting tool in many applications, such as objectively grading the performance of a VR system, defining the specifications of a new display, or choosing among several available simulators for a given experiment.
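    The abstract describes the score's structure (per-item grades between 0 and 100, grouped into vision cues and immersion cues) but not the individual items or their weighting. The Python sketch below only illustrates that structure; the item names and the equal-weight aggregation are assumptions, not the authors' actual scoring scheme.

```python
# Hypothetical sketch of the described realism-score structure:
# items graded 0-100, split into "vision cues" and "immersion cues".
# Item names and the equal-weight aggregation are assumptions, not the
# authors' actual scoring items or weighting scheme.

from statistics import mean

def realism_score(vision_cues: dict[str, float],
                  immersion_cues: dict[str, float]) -> float:
    """Aggregate per-item grades (0-100) into a single 0-100 score."""
    for grade in list(vision_cues.values()) + list(immersion_cues.values()):
        if not 0 <= grade <= 100:
            raise ValueError("each item must be graded between 0 and 100")
    # Assumed aggregation: average each section, then average the two sections.
    return mean([mean(vision_cues.values()), mean(immersion_cues.values())])

# Example usage with made-up items and grades for a hypothetical VR system.
score = realism_score(
    vision_cues={"resolution": 70, "contrast": 80, "field_of_view": 60},
    immersion_cues={"latency": 85, "stereoscopy": 75},
)
print(f"realism score: {score:.1f}")  # 75.0
```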

    From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI

    This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely, manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies on user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for research in this area.

    Is Users’ Trust during Automated Driving Different When Using an Ambient Light HMI, Compared to an Auditory HMI?

    The aim of this study was to compare the success of two different Human Machine Interfaces (HMIs) in attracting drivers' attention when they were engaged in a Non-Driving-Related Task (NDRT) during SAE Level 3 driving. We also assessed the effect of each on drivers' perceived safety and trust. A driving simulator experiment was used to investigate drivers' response to a non-safety-critical transition of control and five cut-in events (one hard, with a deceleration of 2.4 m/s², and four subtle, with a deceleration of ~1.16 m/s²) over the course of the automated drive. The experiment used two types of HMI to trigger a takeover request (TOR): a Light-band display that flashed whenever the drivers needed to take over control, and an auditory warning. Results showed that drivers' levels of trust in automation were similar for both HMI conditions in all scenarios, except during a hard cut-in event. Regarding the HMIs' capabilities to support a takeover process, the study found no differences in drivers' takeover performance or overall gaze distribution. However, with the Light-band HMI, drivers were more likely to focus their attention on the road centre first after a takeover request. Although a high proportion of glances towards the vehicle's dashboard was seen for both HMIs during the takeover process, ambient lighting signals that convey automation status and takeover messages may help drivers direct their visual attention to the most suitable area after a takeover, such as the forward roadway.
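    As a rough sense of scale for the two cut-in severities reported above, the sketch below converts the stated decelerations into speed lost over a short braking interval. The 3-second braking duration is an assumed illustrative value; the abstract only specifies the deceleration magnitudes.

```python
# Back-of-the-envelope comparison of the two cut-in deceleration levels
# reported in the study. The braking duration is an assumed illustrative
# value; the abstract only specifies the deceleration magnitudes.

MPS_TO_KMH = 3.6

def speed_drop_kmh(deceleration_mps2: float, duration_s: float) -> float:
    """Speed shed by a vehicle braking at a constant rate for duration_s."""
    return deceleration_mps2 * duration_s * MPS_TO_KMH

for label, decel in [("hard cut-in", 2.4), ("subtle cut-in", 1.16)]:
    drop = speed_drop_kmh(decel, duration_s=3.0)  # assumed 3 s of braking
    print(f"{label}: {decel} m/s^2 -> ~{drop:.0f} km/h lost over 3 s")
# hard cut-in: 2.4 m/s^2 -> ~26 km/h lost over 3 s
# subtle cut-in: 1.16 m/s^2 -> ~13 km/h lost over 3 s
```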

    Providing and assessing intelligible explanations in autonomous driving

    Intelligent vehicles with automated driving functionalities provide many benefits, but also raise serious concerns around human safety and trust. While the automotive industry has devoted enormous resources to realising vehicle autonomy, there remain uncertainties as to whether the technology will be widely adopted by society. Autonomous vehicles (AVs) are complex systems, and in challenging driving scenarios, they are likely to make decisions that could be confusing to end-users. As a way to bridge the gap between this technology and end-users, the provision of explanations is generally put forward. While explanations are considered helpful, this thesis argues that explanations must also be intelligible (as obligated by GDPR Article 12) to the intended stakeholders, and should make causal attributions in order to foster confidence and trust in end-users. Moreover, the methods for generating these explanations should be transparent for easy audit. To substantiate this argument, the thesis proceeds in four steps. First, we adopted a mixed-methods approach (in a user study, N = 101) to elicit passengers' requirements for effective explainability in diverse autonomous driving scenarios. Second, we explored different representations, data structures, and driving data annotation schemes to facilitate intelligible explanation generation and general explainability research in autonomous driving. Third, we developed transparent algorithms for post-hoc explanation generation. These algorithms were tested within a collision risk assessment case study and an AV navigation case study, using the Lyft Level5 dataset and our new SAX dataset, a dataset we introduced for AV explainability research. Fourth, we deployed these algorithms in an immersive physical simulation environment and assessed (in a lab study, N = 39) the impact of the generated explanations on passengers' perceived safety while varying the prediction accuracy of an AV's perception system and the specificity of the explanations. The thesis concludes by providing recommendations for realising more effective explainable autonomous driving and by setting out a future research agenda.
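    The abstract mentions transparent algorithms for post-hoc explanation generation but does not describe them. The sketch below is only a hypothetical illustration of template-based, causal explanation generation for a driving decision; the field names, rule, and wording are assumptions, not the thesis's actual method.

```python
# Minimal, hypothetical sketch of template-based post-hoc explanation
# generation for a driving decision. The thesis's actual algorithms, data
# schema, and wording are not described in the abstract; everything below
# (field names, template, example values) is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Observation:
    agent: str         # e.g. "pedestrian", "cyclist", "lead vehicle"
    action: str        # e.g. "is crossing", "is braking", "is cutting in"
    distance_m: float  # distance from the ego vehicle in metres

def explain(ego_action: str, obs: Observation) -> str:
    """Produce a causal, intelligible explanation for the ego vehicle's action."""
    cause = f"a {obs.agent} {obs.action} {obs.distance_m:.0f} m ahead"
    return f"The vehicle is {ego_action} because {cause}."

print(explain("stopping", Observation("pedestrian", "is crossing", 12.0)))
# The vehicle is stopping because a pedestrian is crossing 12 m ahead.
```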

    Cooperative speed assistance: interaction and persuasion design


    Gaze and Peripheral Vision Analysis for Human-Environment Interaction: Applications in Automotive and Mixed-Reality Scenarios

    This thesis studies eye-based user interfaces which integrate information about the user's perceptual focus of attention into multimodal systems to enrich the interaction with the surrounding environment. We examine two new modalities: gaze input and output in the peripheral field of view. All modalities are considered across the whole spectrum of the mixed-reality continuum. We show the added value of these new forms of multimodal interaction in two important application domains: Automotive User Interfaces and Human-Robot Collaboration. We present experiments that analyze gaze under various conditions and help to design a 3D model for peripheral vision. Furthermore, this work presents several new algorithms for eye-based interaction, such as deictic reference in mobile scenarios, non-intrusive user identification, and exploiting the peripheral field of view for advanced multimodal presentations. These algorithms have been integrated into a number of software tools for eye-based interaction, which are used to implement 15 use cases for intelligent environment applications. These use cases cover a wide spectrum of applications, from spatial interactions with a rapidly changing environment from within a moving vehicle, to mixed-reality interaction between teams of humans and robots.
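    The abstract refers to a 3D model for peripheral vision and to exploiting the peripheral field of view for presentations, without giving the underlying computation. The sketch below illustrates one plausible building block: classifying a target as foveal or peripheral from its angular eccentricity relative to the gaze direction. The 10-degree foveal boundary and the vector conventions are assumptions for illustration, not the thesis's model.

```python
# Illustrative classification of a target as foveal or peripheral based on
# its angular eccentricity from the current gaze direction. The threshold
# and coordinate conventions are assumptions, not taken from the thesis.

import math

def eccentricity_deg(gaze_dir, target_dir) -> float:
    """Angle (degrees) between the gaze direction and the direction to a target."""
    dot = sum(g * t for g, t in zip(gaze_dir, target_dir))
    norm = (math.sqrt(sum(g * g for g in gaze_dir))
            * math.sqrt(sum(t * t for t in target_dir)))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def is_peripheral(gaze_dir, target_dir, foveal_limit_deg: float = 10.0) -> bool:
    """True if the target lies outside the assumed foveal region."""
    return eccentricity_deg(gaze_dir, target_dir) > foveal_limit_deg

# Driver looking straight ahead; a light cue mounted ~35 degrees to the side.
gaze = (0.0, 0.0, 1.0)
light = (math.sin(math.radians(35)), 0.0, math.cos(math.radians(35)))
print(is_peripheral(gaze, light))  # True: the cue lands in peripheral vision
```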

    ADAS HMI using peripheral vision
