
    Situational Awareness, Driver’s Trust in Automated Driving Systems and Secondary Task Performance

    Driver assistance systems, also called automated driving systems, allow drivers to immerse themselves in non-driving-related tasks. Unfortunately, drivers may not trust the automated driving system, which prevents them from either handing over the driving task or fully focusing on the secondary task. We assert that enhancing situational awareness can increase a driver's trust in automation and, in turn, lead to better secondary task performance. This study manipulated drivers' situational awareness by providing them with different types of information: the control condition provided no information to the driver, the low condition provided a status update, and the high condition provided a status update plus a suggested course of action. Data collected included measures of trust, trusting behavior, and task performance through surveys, eye tracking, and heart rate data. Results show that situational awareness both promoted and moderated the impact of trust in the automated vehicle, leading to better secondary task performance. This result was evident in measures of both self-reported trust and trusting behavior. This research was supported in part by the Automotive Research Center (ARC) at the University of Michigan, with funding from government contract Department of the Army W56HZV-14-2-0001 through the U.S. Army Tank Automotive Research, Development, and Engineering Center (TARDEC). The authors acknowledge and greatly appreciate the guidance of Victor Paul (TARDEC), Ben Haynes (TARDEC), and Jason Metcalfe (ARL) in helping design the study. The authors would also like to thank Quantum Signal, LLC, for providing its ANVEL software and invaluable development support.

    Exploring how drivers perceive spatial earcons in automated vehicles

    Automated vehicles seek to relieve the human driver from primary driving tasks, but this substantially diminishes the connection between driver and vehicle compared to manual operation. At present, automated vehicles lack any form of continual, appropriate feedback to re-establish this connection and offer a feeling of control. We suggest that auditory feedback can be used to support the driver in this context. A preliminary field study that explored how drivers respond to existing auditory feedback in manual vehicles was first undertaken. We then designed a set of abstract, synthesised sounds presented spatially around the driver, known as Spatial Earcons, representing different primary driving sounds (e.g., acceleration). To evaluate their effectiveness, we undertook a driving simulator study in an outdoor setting using a real vehicle. Spatial Earcons performed as well as Existing Vehicle Sounds during automated and manual driving scenarios. Subjective responses suggested Spatial Earcons produced an engaging driving experience. This paper argues that entirely new synthesised primary driving sounds, such as Spatial Earcons, can be designed for automated vehicles to replace Existing Vehicle Sounds. This creates new possibilities for presenting primary driving information in automated vehicles using auditory feedback, in order to re-establish a connection between driver and vehicle.

    Using dynamic task allocation to evaluate driving performance, situation awareness, and cognitive load at different levels of partial autonomy

    Current autonomous vehicles require operators to remain vigilant while performing secondary tasks. The goal of this research was to investigate how dynamically allocated secondary tasks affected driving performance, cognitive load, and situation awareness. Secondary tasks were presented at rates based on the level of autonomy present and on whether the autonomous system was engaged. A rapid secondary task rate was also presented for two short periods regardless of whether autonomy was engaged. There was a three-minute familiarization phase followed by a data collection phase in which participants responded to secondary tasks while preventing the vehicle from colliding with random obstacles. After data collection, a brief survey gathered data on cognitive load, situation awareness, and relevant demographics. The data were compared with data gathered in a similar study by Cossitt [10], where secondary tasks were presented at a controlled frequency and at a gradually increasing frequency.

    Using Eye-tracking Data to Predict Situation Awareness in Real Time during Takeover Transitions in Conditionally Automated Driving

    Situation awareness (SA) is critical to improving takeover performance during the transition period from automated driving to manual driving. Although many studies have measured SA during or after the driving task, few have attempted to predict SA in real time in automated driving. In this work, we propose to predict SA during the takeover transition period in conditionally automated driving using eye-tracking and self-reported data. First, a tree ensemble machine learning model, LightGBM (Light Gradient Boosting Machine), was used to predict SA. Second, in order to understand which factors influenced SA and how, SHAP (SHapley Additive exPlanations) values of individual predictor variables in the LightGBM model were calculated. These SHAP values explained the prediction model by identifying the most important factors and their effects on SA, and further improved the model performance of LightGBM through feature selection. We standardized SA between 0 and 1 by aggregating three performance measures (i.e., placement, distance, and speed estimation of vehicles with regard to the ego-vehicle) of SA in recreating simulated driving scenarios, after 33 participants viewed 32 videos with six lengths between 1 and 20 s. Using only eye-tracking data, our proposed model outperformed other selected machine learning models, with a root-mean-squared error (RMSE) of 0.121, a mean absolute error (MAE) of 0.096, and a 0.719 correlation coefficient between the predicted SA and the ground truth. The code is available at https://github.com/refengchou/Situation-awareness-prediction. Our proposed model provides important insights into how to monitor and predict SA in real time in automated driving using eye-tracking data.
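    The abstract above describes standardizing SA into a [0, 1] score by aggregating three performance measures. A minimal sketch of that aggregation step is shown below; the equal weighting, the error-to-score normalisation, and the worst-case error bounds are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical aggregation of the three SA performance measures
# (placement, distance, speed estimation) into one score in [0, 1].
# Weights and worst-case error bounds are assumed for illustration.

def normalise(error, worst_error):
    """Map an error measure to a [0, 1] accuracy score (zero error -> 1.0)."""
    return max(0.0, 1.0 - error / worst_error)

def sa_score(placement_err, distance_err, speed_err,
             worst=(1.0, 50.0, 30.0)):
    """Average the three normalised measures into a single SA value in [0, 1]."""
    scores = [
        normalise(placement_err, worst[0]),   # placement error (fraction wrong)
        normalise(distance_err, worst[1]),    # distance estimation error (m)
        normalise(speed_err, worst[2]),       # speed estimation error (km/h)
    ]
    return sum(scores) / len(scores)

print(round(sa_score(0.2, 10.0, 6.0), 3))  # → 0.8
```

    A score built this way can serve directly as the regression target for a model such as LightGBM, since it is bounded and comparable across participants.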

    Centralised and decentralised sensor fusion‐based emergency brake assist

    Copyright: © 2021 by the authors. Many advanced driver assistance systems (ADAS) currently use multi-sensor architectures, in which the driver assistance algorithm receives data from a multitude of sensors. As mono-sensor systems cannot provide reliable and consistent readings under all circumstances because of errors and other limitations, fusing data from multiple sensors ensures that the environmental parameters are perceived correctly and reliably for most scenarios, thereby substantially improving the reliability of multi-sensor-based automotive systems. This paper first highlights the significance of efficiently fusing data from multiple sensors in ADAS features. An emergency brake assist (EBA) system is showcased using multiple sensors, namely a light detection and ranging (LiDAR) sensor and a camera. The architectures of the proposed 'centralised' and 'decentralised' sensor fusion approaches for EBA are discussed along with their constituents, i.e., the detection algorithms, the fusion algorithm, and the tracking algorithm. The centralised and decentralised architectures are built and analytically compared, and the performance of the two fusion architectures for EBA is evaluated in terms of speed of execution, accuracy, and computational cost. While both fusion methods drive the EBA application at an acceptable frame rate (~20 fps or higher) on an Intel i5-based Ubuntu system, the experiments and analytical comparisons showed that the decentralised fusion-driven EBA achieves higher accuracy, at the cost of a higher computational load. The centralised fusion-driven EBA yields comparatively less accurate results, but with the benefits of a higher frame rate and a lower computational cost.
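    To make the fusion-plus-EBA pipeline above concrete, here is a minimal sketch of the centralised pattern (fuse raw detections once, then decide) for a 1-D range-to-obstacle estimate. The inverse-variance weighting, the sensor variances, and the time-to-collision braking threshold are illustrative assumptions; the paper's actual detection, fusion, and tracking algorithms are far richer.

```python
# Sketch: inverse-variance fusion of two range estimates (e.g. LiDAR and
# camera), followed by a simple time-to-collision (TTC) braking decision.
# All numeric values are illustrative, not taken from the paper.

def fuse(measurements):
    """Inverse-variance weighted fusion of (value, variance) pairs."""
    weights = [1.0 / var for _, var in measurements]
    value = sum(v * w for (v, _), w in zip(measurements, weights)) / sum(weights)
    variance = 1.0 / sum(weights)  # fused variance is smaller than any input
    return value, variance

def eba_should_brake(range_m, ego_speed_mps, ttc_threshold_s=1.5):
    """Trigger emergency braking when TTC drops below the threshold."""
    if ego_speed_mps <= 0:
        return False
    return range_m / ego_speed_mps < ttc_threshold_s

# Centralised style: fuse the raw detections once, then decide.
lidar = (12.0, 0.1)   # (range in m, variance) - illustrative numbers
camera = (12.6, 0.4)
fused_range, fused_var = fuse([lidar, camera])
print(eba_should_brake(fused_range, ego_speed_mps=10.0))  # → True
```

    A decentralised variant would instead run a tracker per sensor and fuse the resulting tracks, which is roughly where the accuracy gain and extra computational cost described above come from.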

    Is the driver ready to receive just car information in the windshield during manual and autonomous driving?

    Automation is changing the world. As in aviation, car manufacturers are currently developing autonomous vehicles. However, vehicle autonomy is not yet complete, and the driver's actions are still needed at certain moments. How the transition between manual and autonomous driving is handled, and how this transition information is shown to the driver, is a challenge for ergonomics. New displays are being studied to facilitate these transitions. This study used a driving simulator to investigate whether augmented reality information can positively influence the user experience during manual and autonomous driving. We compared two ways of presenting information to the driver. The “AR concept” displayed all the information on the windshield to make it easier for the driver to access. The “IC concept” displayed the information as it appears in today's cars, using the instrument cluster and the e-HUD. Results indicate that user experience (UX) is influenced by the concepts, with the “AR concept” achieving better UX in all transition states. In terms of trust, the results also revealed higher scores for the “AR concept”. The type of concept influenced neither the takeover times nor the takeover behavior. In terms of situational awareness (SA), the “AR concept” left drivers more aware during function availability and activation. This study provides implications for automotive companies developing the next generation of car displays.

    Assessing the Development of Operator Trust in Automation

    Miscalibrated relationships between operator trust and automation can lead to accidents, some even fatal. If an operator either over- or under-trusts the system's capability, their overall assessment of the system's reliability can be inaccurate and potentially lead to poor decision making. As autonomous vehicles emerge, understanding the natural trust formation process as it occurs over time between drivers and these vehicles is crucial to increasing safety and reliability, as is identifying any factors that can affect this process. To fill this gap, an autonomous vehicle was observed as it operated on Texas A&M University's campus in mixed traffic during an 8-week demonstration. Throughout the deployment, the vehicle operated autonomously, with four safety operators drawn from the student population ready to take over shuttle operations as necessary. Research personnel collected daily and weekly surveys and hosted interviews to investigate how operators' trust developed and changed over time and to study the relationship between trust and operational factors. Preliminary findings established a potential relationship between trust and the number of vehicle errors. Interview data also suggested that trust was dependent on situational circumstances affected by the operator's emotional comfort and familiarity with the vehicle.

    Current State of Process Mining as a Method for Supporting Process Automation

    Process mining is a technology that supports companies in improving their processes through applications such as process discovery, conformance checking, and predictive process mining. Process automation is a widespread form of process improvement, as it promises a significant competitive advantage. Based on a literature analysis, this study examines how well suited process mining is for supporting process automation. The analysis uses a systematisation based on the BPM lifecycle and the Levels of Automation taxonomy. Process mining shows considerable potential for supporting automation, but it remains unclear to what extent this potential can be realised in practice. The strengths of process mining lie in the diagnostic area, whereas implementation is barely supported. The biggest hurdles here are the lack of delimitation of process mining's scope of application and the expert knowledge required to use it.