
    Analysis of Disengagements in Semi-Autonomous Vehicles: Drivers’ Takeover Performance and Operational Implications

    This report analyzes the reactions of human drivers placed in simulated Autonomous Technology disengagement scenarios. The study was executed in a human-in-the-loop setting, within a high-fidelity integrated car simulator capable of handling both manual and autonomous driving. A population of 40 individuals was tested, with control takeover quantified by: i) response times (considering steering, throttle, and braking inputs); ii) vehicle drift from the lane centerline after takeover, as well as overall (integral) drift over an S-turn curve compared to a baseline obtained in manual driving; and iii) accuracy metrics to quantify human factors associated with the simulation experiment. The independent variables were the age of the driver, the speed at the time of disengagement, and the time at which the disengagement occurred (i.e., how long automation had been engaged). The study shows that changes in vehicle speed significantly affect all the variables investigated, pointing to the importance of setting thresholds for the maximum operational speed of vehicles driven in autonomous mode when the human driver serves as back-up. The results show that establishing such an operational threshold could reduce the maximum drift and lead to better control during takeover, perhaps warranting a lower speed limit than that of conventional vehicles. With regard to the age variable, neither the response-time analysis nor the drift analysis provides support for any claim to limit the age of drivers of semi-autonomous vehicles.
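    The takeover-quality metrics this abstract describes (response time from disengagement to first input, peak lateral drift, and overall integral drift relative to a manual-driving baseline) can be illustrated with a short sketch. The sampling rate, signal names, synthetic traces, and trapezoidal integration below are assumptions for illustration, not the study's actual pipeline.

    ```python
    import numpy as np

    def trapezoid(y, x):
        """Trapezoidal integral of y over x (written out to avoid
        version-specific NumPy function names)."""
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    def takeover_metrics(t, lateral_offset, baseline_offset,
                         input_time, disengage_time):
        """Illustrative takeover-quality metrics (assumed definitions).

        t               : timestamps (s) within the takeover window
        lateral_offset  : distance from the lane centerline (m) during takeover
        baseline_offset : the same signal recorded in manual driving
        input_time      : time of the first steering/throttle/brake input (s)
        disengage_time  : time at which automation disengaged (s)
        """
        response_time = input_time - disengage_time
        peak_drift = float(np.max(np.abs(lateral_offset)))
        # Overall (integral) drift over the curve, relative to the manual baseline.
        excess_drift = (trapezoid(np.abs(lateral_offset), t)
                        - trapezoid(np.abs(baseline_offset), t))
        return response_time, peak_drift, excess_drift

    t = np.linspace(0.0, 4.0, 401)                 # 4 s window sampled at 100 Hz
    takeover = 0.5 * np.exp(-t) * np.sin(2 * t)    # synthetic takeover trace
    baseline = 0.05 * np.sin(2 * t)                # synthetic manual-driving trace
    rt, peak, excess = takeover_metrics(t, takeover, baseline, 1.2, 0.0)
    ```

    A positive `excess` would indicate more accumulated drift than the driver's own manual-driving baseline over the same curve.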

    User expectations of partial driving automation capabilities and their effect on information design preferences in the vehicle

    Partially automated vehicles present interface design challenges in ensuring the driver remains alert should the vehicle need to hand back control at short notice, but without exposing the driver to cognitive overload. To date, little is known about driver expectations of partial driving automation and whether this affects the information they require inside the vehicle. Twenty-five participants were presented with five partially automated driving events in a driving simulator. After each event, a semi-structured interview was conducted. The interview data was coded and analysed using grounded theory. From the results, two groupings of driver expectations were identified: High Information Preference (HIP) and Low Information Preference (LIP) drivers; the information preferences of these two groups differed. LIP drivers did not want detailed information about the vehicle presented to them, but the definition of partial automation means that this kind of information is required for safe use. Hence, the results suggest that careful thought about how information is presented is required for LIP drivers to use partial driving automation safely. Conversely, HIP drivers wanted detailed information about the system's status and driving, and were found to be more willing to work with the partial automation and its current limitations. It was evident that the drivers' expectations of the partial automation capability differed, and this affected their information preferences. Hence this study suggests that HMI designers must account for these differing expectations and preferences to create a safe, usable system that works for everyone. [Abstract copyright: Copyright © 2019 The Authors. Published by Elsevier Ltd. All rights reserved.]

    Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses

    In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome by fusing the physical and virtual environments with multi-dimensional communications in autonomous driving systems. Assisted by digital twin (DT) technologies, connected autonomous vehicles (AVs), roadside units (RSUs), and virtual simulators can maintain the vehicular MR Metaverse via digital simulations for sharing data and making driving decisions collaboratively. However, large-scale traffic and driving simulation via realistic data collection and fusion from the physical world for online prediction and offline training in autonomous driving systems is difficult and costly. In this paper, we propose an autonomous driving architecture in which generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations for improving driving safety and traffic efficiency. First, we propose a multi-task DT offloading model for the reliable execution of heterogeneous DT tasks with different requirements at RSUs. Then, based on the preferences of AVs' DTs and collected realistic data, virtual simulators can synthesize unlimited conditioned driving and traffic datasets to further improve robustness. Finally, we propose a multi-task enhanced auction-based mechanism to provide fine-grained incentives for RSUs in providing resources for autonomous driving. The property analysis and experimental results demonstrate that the proposed mechanism and architecture are strategy-proof and effective, respectively.

    Is the driver ready to receive just car information in the windshield during manual and autonomous driving?

    Automation is changing the world. As in aviation, car manufacturers are currently developing autonomous vehicles. However, vehicle autonomy is not complete, and the driver's actions are still needed at certain moments. How the transition between manual and autonomous driving is made, and how this transition information is shown to the driver, is a challenge for ergonomics. New displays are being studied to facilitate these transitions. This study used a driving simulator to investigate whether augmented-reality information can positively influence the user experience during manual and autonomous driving. Two ways of presenting information to the driver were compared. The "AR concept" displayed all the information on the windshield, making it easier for the driver to access. The "IC concept" displayed the information that appears in today's cars, using the instrument cluster and the e-HUD. Results indicate that user experience (UX) is influenced by the concept, with the "AR concept" achieving better UX in all transition states. In terms of trust, the results also revealed higher scores for the "AR concept". The type of concept influenced neither the takeover times nor the takeover behaviour. In terms of situational awareness (SA), the "AR concept" left drivers more aware during function availability and activation. This study provides implications for automotive companies developing the next generation of car displays.

    Designing an Adaptive Interface: Using Eye Tracking to Classify How Information Usage Changes Over Time in Partially Automated Vehicles

    While partially automated vehicles can provide a range of benefits, they also bring about new Human Machine Interface (HMI) challenges around ensuring the driver remains alert and is able to take control of the vehicle when required. While humans are poor monitors of automated processes, specifically during 'steady state' operation, presenting the appropriate information to the driver can help. But to date, interfaces of partially automated vehicles have shown evidence of causing cognitive overload. Adaptive HMIs that automatically change the information presented (for example, based on workload, time, or physiology) have previously been proposed as a solution, but little is known about how information should adapt during steady-state driving. This study aimed to classify information usage based on driver experience to inform the design of a future adaptive HMI in partially automated vehicles. The unique feature of this study over the existing literature is that each participant attended for five consecutive days, enabling a first look at how information usage changes with increasing familiarity and providing a methodological contribution to future HMI user trial study design. Seventeen participants experienced a steady-state automated driving simulation for twenty-six minutes per day in a driving simulator, replicating a regularly driven route, such as a work commute. Nine information icons, representative of future partially automated vehicle HMIs, were displayed on a tablet, and eye tracking was used to record the information that the participants fixated on. The results found that information usage did change with increased exposure, with significant differences in what information participants looked at between the first and last trial days. With increasing experience, participants tended to view information as confirming technical competence rather than the future state of the vehicle. On this basis, interface design recommendations are made, particularly around the design of adaptive interfaces for future partially automated vehicles.
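    The core analysis this abstract describes, tallying which icons receive fixations and how each icon's share of attention shifts between the first and last trial days, can be sketched as below. The icon names, log format, and counts are invented for illustration and are not the study's data.

    ```python
    from collections import Counter

    # Hypothetical fixation logs: one icon name per recorded fixation, keyed by
    # trial day (day 1 = first exposure, day 5 = last). Values are invented.
    fixations = {
        1: ["speed", "lane_status", "nav_preview", "speed", "system_mode"],
        5: ["system_mode", "system_mode", "speed", "system_mode"],
    }

    def usage_share(day):
        """Fraction of all fixations each icon received on a given trial day."""
        counts = Counter(fixations[day])
        total = sum(counts.values())
        return {icon: n / total for icon, n in counts.items()}

    first, last = usage_share(1), usage_share(5)
    # Change in each icon's share of attention between the first and last day;
    # a large positive shift flags information whose role grows with familiarity.
    shift = {icon: last.get(icon, 0.0) - first.get(icon, 0.0)
             for icon in set(first) | set(last)}
    ```

    In this toy log, attention concentrates on the system-status icon by day 5, the kind of pattern an adaptive HMI could exploit by promoting competence-confirming information for experienced users.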

    Impact of Smart Phones’ Interaction Modality on Driving Performance for Conventional and Autonomous Vehicles

    Distracted driving related to cell phone usage ranks among the top three causes of fatal crashes on the road. Although 48 of the 50 U.S. states allow the use of personal devices if operated hands-free and secured in the vehicle, scientific studies have yet to quantify the safety improvement presumed to be introduced by voice-to-text interactions. Thus, this study investigated how different modes of driver interaction with a smart phone (i.e., manual texting vs. vocal input) affect drivers' distraction and performance in both conventional and semi-autonomous vehicles. The study was executed in a full-car integrated simulator and tested a population of 32 drivers. The study considered two scenarios: (1) conventional manual driving in a suburban environment with intersection stops; and (2) control takeover from an engaged autonomous vehicle that reverted to manual driving at a highway exit. The quality of execution of maneuvers, as well as the timing and tracking of eye-gaze focus areas, was assessed in both scenarios. Results demonstrated that while participants perceived an increased level of safety when using the hands-free interface, response times and drift did not significantly differ from those observed when manually texting. Furthermore, even though participants perceived a greater effort in accomplishing the text reply through the manual interface, none of the measured quantities for driving performance or eye-gaze focus revealed a statistical difference between the two interfaces, ultimately calling into question the assumption of greater safety implicit in the laws allowing hands-free devices.
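    The statistical comparison underlying this abstract's "no significant difference" finding can be sketched as a two-sample test on response times. The means, spread, and the 16-per-condition sample size below are illustrative assumptions, not the study's data, and Welch's t statistic stands in for whatever test the authors actually used.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic takeover response times in seconds; distributions are invented
    # to mimic two interaction modes with near-identical performance.
    manual_texting = rng.normal(2.4, 0.5, 16)
    hands_free = rng.normal(2.3, 0.5, 16)

    def welch_t(a, b):
        """Welch's t statistic for a two-sample, unequal-variance comparison."""
        va = a.var(ddof=1) / len(a)
        vb = b.var(ddof=1) / len(b)
        return float((a.mean() - b.mean()) / np.sqrt(va + vb))

    t_stat = welch_t(manual_texting, hands_free)
    # A small |t| (well below the roughly 2.04 critical value at ~30 degrees of
    # freedom) would be consistent with finding no significant mode difference.
    ```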