
    LiDAR-derived digital holograms for automotive head-up displays.

    A holographic automotive head-up display was developed to project 2D and 3D ultra-high-definition (UHD) images derived from LiDAR data into the driver's field of view. The LiDAR data were collected with a 3D terrestrial laser scanner and converted to computer-generated holograms (CGHs). Reconstructions were obtained with a HeNe laser and a UHD spatial light modulator with a panel resolution of 3840×2160 px for replay-field projections. By decreasing the focal distance of the CGHs, the zero-order spot was diffused into the holographic replay-field image. 3D holograms were observed floating as a ghost image at a variable focal distance by incorporating a digital Fresnel lens into the CGH together with a concave lens. This project was funded by the EPSRC Centre for Doctoral Training in Connected Electronic and Photonic Systems (CEPS) (EP/S022139/1), Project Reference: 2249444
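    The step from a LiDAR point cloud to a CGH with an added digital Fresnel lens can be pictured with a simple point-source (Fresnel) hologram sketch. This is a minimal illustration under assumed parameters, not the authors' pipeline: the wavelength, pixel pitch, focal length and the point_cloud_cgh/add_fresnel_lens helpers are assumptions made for the example.

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the paper)
WAVELENGTH = 633e-9          # HeNe laser wavelength in metres
PIXEL_PITCH = 3.74e-6        # assumed SLM pixel pitch in metres
RES_X, RES_Y = 3840, 2160    # UHD SLM panel resolution

def point_cloud_cgh(points, z_offset=0.5):
    """Sum paraxial spherical wavefronts from each LiDAR point (Fresnel approximation)."""
    ys, xs = np.indices((RES_Y, RES_X))
    u = (xs - RES_X / 2) * PIXEL_PITCH   # hologram-plane coordinates
    v = (ys - RES_Y / 2) * PIXEL_PITCH
    field = np.zeros((RES_Y, RES_X), dtype=np.complex128)
    k = 2 * np.pi / WAVELENGTH
    for x, y, z, a in points:            # each point: (x, y, depth, amplitude)
        r2 = (u - x) ** 2 + (v - y) ** 2
        field += a * np.exp(1j * k * r2 / (2 * (z + z_offset)))
    return field

def add_fresnel_lens(field, focal_length):
    """Multiply in a digital Fresnel lens phase to shift the replay-field focal distance."""
    ys, xs = np.indices(field.shape)
    u = (xs - field.shape[1] / 2) * PIXEL_PITCH
    v = (ys - field.shape[0] / 2) * PIXEL_PITCH
    lens = np.exp(-1j * np.pi * (u ** 2 + v ** 2) / (WAVELENGTH * focal_length))
    return field * lens

# Phase-only pattern for the SLM: keep only the phase of the complex field, e.g.
# cgh_phase = np.angle(add_fresnel_lens(point_cloud_cgh(points), focal_length=0.3))
```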

    The influence of system transparency on trust: Evaluating interfaces in a highly automated vehicle

    Previous studies indicate that, if an automated vehicle communicates its system status and intended behaviour, it can increase user trust and acceptance. However, it is still unclear which types of interface best portray this information. The present study evaluated different screen configurations, comparing how they communicated possible hazards in the environment (e.g. vulnerable road users) and vehicle behaviours (e.g. intended trajectory). These interfaces were presented in a fully automated vehicle tested by 25 participants in an indoor arena. Surveys and interviews measured trust, usability and experience after users were driven by an automated low-speed pod. Participants experienced four types of interface, from a simple journey tracker to a windscreen-wide augmented reality (AR) interface that overlays highlighted hazards in the environment and the trajectory of the vehicle. A combination of the survey and interview data showed a clear preference for the AR windscreen and an animated representation of the environment. Trust in the vehicle featuring these interfaces was significantly higher than pre-trial measurements. However, some users questioned whether they would want to see this information all the time. An additional finding was that some users experienced motion sickness when presented with the more engaging content. This paper provides recommendations for the design of interfaces with the potential to improve trust and user experience within highly automated vehicles.

    Gaze and Peripheral Vision Analysis for Human-Environment Interaction: Applications in Automotive and Mixed-Reality Scenarios

    This thesis studies eye-based user interfaces that integrate information about the user's perceptual focus of attention into multimodal systems to enrich interaction with the surrounding environment. We examine two new modalities: gaze input and output in the peripheral field of view. Both modalities are considered across the whole spectrum of the mixed-reality continuum. We show the added value of these new forms of multimodal interaction in two important application domains: Automotive User Interfaces and Human-Robot Collaboration. We present experiments that analyse gaze under various conditions and help to design a 3D model for peripheral vision. Furthermore, this work presents several new algorithms for eye-based interaction, such as deictic reference in mobile scenarios, non-intrusive user identification, and exploiting the peripheral field of view for advanced multimodal presentations. These algorithms have been integrated into a number of software tools for eye-based interaction, which are used to implement 15 use cases for intelligent-environment applications. These use cases cover a wide spectrum of applications, from spatial interaction with a rapidly changing environment from within a moving vehicle to mixed-reality interaction between teams of humans and robots.
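    As a rough illustration of how a peripheral-vision model might decide where a target falls relative to the current gaze, the sketch below computes the angular eccentricity between the gaze ray and the eye-to-target direction and maps it to a region of the visual field. The function names, thresholds and region labels are assumptions for illustration, not the model developed in the thesis.

```python
import numpy as np

def eccentricity_deg(gaze_dir, target_pos, eye_pos):
    """Angle (degrees) between the gaze ray and the eye-to-target direction."""
    to_target = np.asarray(target_pos, float) - np.asarray(eye_pos, float)
    to_target /= np.linalg.norm(to_target)
    gaze = np.asarray(gaze_dir, float) / np.linalg.norm(gaze_dir)
    cos_angle = np.clip(np.dot(gaze, to_target), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def field_region(angle_deg, foveal=2.0, near_periphery=30.0):
    """Classify a target by eccentricity; the threshold values are assumptions."""
    if angle_deg <= foveal:
        return "foveal"
    if angle_deg <= near_periphery:
        return "near periphery"
    return "far periphery"

# Example: a target 20 degrees off the gaze axis is classified as near periphery.
# field_region(eccentricity_deg([0, 0, 1], [0.36, 0, 1.0], [0, 0, 0]))
```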