227 research outputs found

    Evaluating Effects of Cognitive Load, Takeover Request Lead Time, and Traffic Density on Drivers’ Takeover Performance in Conditionally Automated Driving

    The views expressed are those of the authors and do not reflect the official policy or position of State Farm®.
    In conditionally automated driving, drivers engaged in non-driving related tasks (NDRTs) have difficulty taking over control of the vehicle when requested. This study aimed to examine the relationships between takeover performance and drivers’ cognitive load, takeover request (TOR) lead time, and traffic density. We conducted a driving simulation experiment with 80 participants, each of whom experienced 8 takeover events. For each takeover event, we collected drivers’ subjective ratings of takeover readiness, objective measures of takeover timing and quality, and NDRT performance. Results showed that drivers had lower takeover readiness and worse performance under high cognitive load, short TOR lead time, and heavy oncoming traffic. Interestingly, when drivers had low cognitive load, they paid more attention to the driving environment and responded more quickly to takeover requests in high oncoming traffic conditions. The results have implications for the design of in-vehicle alert systems to help improve takeover performance.
    University of Michigan Mcity; National Science Foundation; Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/156045/1/Du et al. 2020.pd

    Psychophysiological responses to takeover requests in conditionally automated driving

    In SAE Level 3 automated driving, taking over control from automation raises significant safety concerns because drivers out of the vehicle control loop have difficulty negotiating takeover transitions. Existing studies on takeover transitions have focused on drivers' behavioral responses to takeover requests (TORs). As a complement, this exploratory study aimed to examine drivers' psychophysiological responses to TORs as a result of varying non-driving-related tasks (NDRTs), traffic density, and TOR lead time. A total of 102 drivers were recruited, and each experienced 8 takeover events in a high-fidelity fixed-base driving simulator. Drivers' gaze behaviors, heart rate (HR) activities, galvanic skin responses (GSRs), and facial expressions were recorded and analyzed during two stages. First, during the automated driving stage, we found that drivers had lower heart rate variability, narrower horizontal gaze dispersion, and shorter eyes-on-road time under a high level of cognitive load relative to a low level. Second, during the takeover transition stage, a 4 s lead time led to inhibited blink numbers and larger maximum and mean GSR phasic activation compared to a 7 s lead time, whilst heavy traffic density resulted in greater HR acceleration than light traffic density. Our results showed that psychophysiological measures can indicate specific internal states of drivers, including their workload, emotions, attention, and situation awareness, in a continuous, non-invasive, and real-time manner. The findings provide additional support for the value of using psychophysiological measures in automated driving and for future applications in driver monitoring systems and adaptive alert systems.
    University of Michigan Mcity; Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/162593/1/AAP_physiological_responses_HF_template.pdf
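    The heart rate variability result above (lower variability under high cognitive load) relies on standard time-domain indices. As an illustration only, and not the authors' pipeline, here is a minimal sketch computing two common indices, SDNN and RMSSD, from a synthetic RR-interval series:

```python
import numpy as np

def hrv_indices(rr_ms):
    """Two common time-domain HRV indices from RR intervals in milliseconds.

    SDNN: standard deviation of all RR intervals.
    RMSSD: root mean square of successive RR differences.
    Both tend to decrease under elevated cognitive load, consistent with the
    lower heart rate variability reported for the high-load condition.
    """
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

# Illustrative, synthetic RR series (not data from the study):
sdnn, rmssd = hrv_indices([810, 790, 805, 820, 800, 795, 815])
```

    A driver-monitoring system could compare such indices against a per-driver baseline to flag elevated load in real time.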

    Predicting Driver Takeover Performance in Conditionally Automated Driving

    http://deepblue.lib.umich.edu/bitstream/2027.42/156409/1/AAP_Predicting_takeover_performance.pdf

    Building Trust Profiles in Conditionally Automated Driving

    Trust is crucial for ensuring the safety, security, and widespread adoption of automated vehicles (AVs); if trust is lacking, drivers and the public may be unwilling to use them. This research investigates trust profiles in order to create personalized experiences for drivers in AVs, an approach that helps in understanding drivers' dynamic trust from a persona's perspective. The study was conducted in a driving simulator where participants were requested to take over control from automated driving in three conditions (a control condition, a false alarm condition, and a miss condition) with eight takeover requests (TORs) in different scenarios. Drivers' dispositional trust, initial learned trust, dynamic trust, personality, and emotions were measured. We identified three trust profiles (i.e., believers, oscillators, and disbelievers) using a K-means clustering model. To validate this model, we built a multinomial logistic regression model on the most important features selected by a SHAP explainer, which predicted the trust profiles with an F1-score of 0.90 and an accuracy of 0.89. We also discuss how different individual factors influenced trust profiles, which helps us better understand trust dynamics from a persona's perspective. Our findings have important implications for designing a personalized in-vehicle trust monitoring and calibration system that adjusts drivers' trust levels in order to improve safety and experience in automated driving.
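    The two-step modeling approach described above (K-means to discover three trust profiles, then a multinomial logistic regression to predict them) can be sketched as follows. This is a toy reconstruction on synthetic data with hypothetical features; the study's actual measures, SHAP-based feature selection, and reported scores are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Synthetic per-driver feature matrix (stand-ins for dispositional trust,
# learned trust, personality, and emotion measures used in the study):
X = rng.normal(size=(120, 5))

# Step 1: unsupervised discovery of three trust profiles
# (believers, oscillators, disbelievers) with K-means, k=3.
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: a multinomial logistic regression predicting the profile labels
# from individual factors (the study first selected the most important
# features with a SHAP explainer; that step is omitted in this sketch).
clf = LogisticRegression(max_iter=1000).fit(X, profiles)
f1 = f1_score(profiles, clf.predict(X), average="weighted")
```

    Because K-means cluster boundaries are piecewise linear, a multinomial logistic regression is a natural choice for reproducing the profile assignments from interpretable features.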

    Modeling driver distraction mechanism and its safety impact in automated vehicle environment.

    Automated Vehicle (AV) technology is expected to enhance driving safety by eliminating human errors. However, driver distraction still exists under automated driving. The Society of Automotive Engineers (SAE) has defined six levels of driving automation, from Level 0 to Level 5. Until Level 5 is achieved, human drivers are still needed, and Human-Vehicle Interaction (HVI) necessarily diverts a driver’s attention away from driving. Existing research has mainly focused on quantifying distraction in human-operated vehicles rather than in the AV environment, leaving a gap in knowledge on how AV distraction can be detected, quantified, and understood. Moreover, existing research on AV distraction has mainly pre-defined distraction as a binary outcome and investigated the patterns that contribute to it from multiple perspectives, so the magnitude of AV distraction has not been accurately quantified. Past studies quantifying distraction have also relied mainly on wearable sensors’ data; in reality, it is not realistic for drivers to wear these sensors whenever they drive. Hence, one motivation of this research is to develop a surrogate model that can replace wearable device-based data to predict AV distraction. From the safety perspective, a comprehensive understanding of how AV distraction impacts safety is lacking, and a solution is needed for safely offsetting the impact of distracted driving.
In this context, this research aims to (1) improve existing methods of quantifying Human-Vehicle Interaction-induced (HVI-induced) driver distraction under automated driving; (2) develop a surrogate driver distraction prediction model that does not use wearable sensor data; (3) quantitatively reveal the dynamic nature of the safety benefits and collision hazards of HVI-induced visual and cognitive distraction under automated driving by mathematically formulating the interrelationships among contributing factors; and (4) propose a conceptual prototype of an AI-driven, Ultra-advanced Collision Avoidance System (AUCAS-L3) targeting HVI-induced driver distraction under automated driving without eye-tracking or video-recording. Fixation and pupil dilation data from an eye-tracking device are used to model visual and cognitive distraction, respectively. To validate the proposed methods for measuring and modeling driver distraction, data were collected by inviting drivers to try out Level 3 automated driving on a simulator. Each driver went through a jaywalker scenario twice, receiving a takeover request under two types of HVI, namely “visual only” and “visual and audible”. Each driver wore an eye-tracker so that fixation and pupil dilation data could be collected while driving, and driving performance data were recorded by the simulator. In addition, drivers’ demographic information was collected in a pre-experiment survey. As a result, the magnitude of visual and cognitive distraction was quantified, and its dynamic changes over time were explored. Drivers were more concentrated and maintained a higher level of takeover readiness under the “visual and audible” warning than under the “visual only” warning. The change in visual distraction was mathematically formulated as a function of time.
In addition, the change in visual distraction magnitude over time is explained from a driving psychology perspective. Visual distraction was also measured by direction, and hotspots of visual distraction were identified with regard to driving safety. For cognitive distraction magnitude, the driver’s age was identified as a contributing factor, and HVI warning type contributed to a significant difference in the cognitive distraction acceleration rate. After drivers reach maximum visual distraction, cognitive distraction tends to increase continuously. This research also quantitatively reveals how visual and cognitive distraction, respectively, impact collision hazards. Moreover, it contributes to the literature by developing deep learning-based models that predict a driver’s visual and cognitive distraction intensity from demographics, HVI warning types, and driving performance. As a solution to safety issues caused by driver distraction, the AUCAS-L3 is proposed and validated with high accuracy in predicting (a) whether a driver is distracted and does not perform takeover actions and (b) whether a crash happens if the driver does take over. After predicting the presence of driver distraction or a crash, AUCAS-L3 automatically applies the brake pedal for drivers as an effective and efficient protection against driver distraction under automated driving. Finally, a conceptual prototype for predicting AV distraction and traffic conflict is proposed, which can predict collision hazards 0.82 seconds in advance on average.
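    As a rough illustration of the surrogate idea above (predicting distraction intensity without wearable-sensor data), the sketch below fits a small neural network to synthetic demographics, warning-type, and driving-performance features. The feature set, data, and network are assumptions for illustration, not the dissertation's actual deep learning models.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300
# Hypothetical surrogate inputs (no wearable-sensor data): age, HVI warning
# type (0 = "visual only", 1 = "visual and audible"), and a driving
# performance feature such as speed variability.
age = rng.uniform(18, 70, n)
warning = rng.integers(0, 2, n)
speed_var = rng.normal(2.0, 0.5, n)
X = np.column_stack([age, warning, speed_var])
# Synthetic distraction intensity: older drivers and "visual only" warnings
# drift higher (a toy relationship, not the study's findings).
y = 0.02 * age - 0.5 * warning + 0.3 * speed_var + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(scaler.transform(X_tr), y_tr)
r2 = model.score(scaler.transform(X_te), y_te)  # held-out fit quality
```

    A surrogate of this kind only needs signals the vehicle already has, which is the practical motivation the abstract gives for avoiding wearable devices.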

    An Examination of Drivers’ Responses to Take-over Requests with Different Warning Systems During Conditional Automated Driving

    Today, the autonomous vehicle industry is moving quickly toward Level 5 autonomous cars, based on the Society of Automotive Engineers (SAE) definition. SAE Level 3 automated cars are expected to reach the market soon, which has motivated a plethora of research in this sector; one strand is the design of takeover request warning systems, because failure to respond to a takeover request warning may lead to fatal accidents. The objective of this study is to examine the effects of different warning types on drivers’ takeover responses while they are engaged in different non-driving tasks during conditional automated driving. This is a simulator-based study with a mixed-subjects design in which participants interacted with a simulated Level 3 automation system under different conditions. A total of 24 participants were recruited. Each participant experienced two types of takeover request (TOR) warning systems (Auditory TOR and Multimodal TOR) under four types of non-driving task conditions with two levels of non-driving task duration. One baseline drive without any secondary task was also included for comparison with the non-driving task conditions. Three research questions are addressed in this thesis:
    • Will a Multimodal TOR lead to better driver responses in reaction to takeover requests than an Auditory TOR?
    • Will different types of non-driving tasks lead to different cognitive engagement of drivers, and therefore different reactions to takeover requests?
    • Will different durations of engagement in non-driving tasks impact drivers’ re-engagement in driving tasks?
    In this study, data was collected for both objective driver measures, through simulator run log files, and subjective driver measures, through questionnaires.
    For analysis, a Mixed-Effects Model was fitted to the response variables, followed by the Fisher LSD pairwise comparison test for significant factors with more than two levels; two-sample t-tests were used for subjective measures. Results showed that the Multimodal TOR leads to comparatively shorter brake time and steer touching time, and the difference in these dependent variables between the TORs is significant (p < 0.05). The findings suggest that the Multimodal TOR warning system elicits better driver reactions. Moreover, the type of non-driving task leads to different driver responses: drivers react significantly more slowly to the takeover request when engaged in visual-manual non-driving tasks than when engaged in other types of non-driving tasks (e.g., cognitive or visual tasks). However, no significant gender-based effects were observed for Brake Time and Steer Touch Time.
    Master of Science in Engineering; Industrial and Systems Engineering, College of Engineering & Computer Science; University of Michigan-Dearborn
    https://deepblue.lib.umich.edu/bitstream/2027.42/152430/1/Kanishk Bakshi Final Thesis.pdf
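    A mixed-effects analysis of a repeated-measures design like this one can be sketched as follows, with a random intercept per participant to account for each driver contributing several trials. The variable names, effect sizes, and data below are illustrative assumptions, not the thesis's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, reps = 24, 8
subj = np.repeat(np.arange(n_subj), reps)
tor = rng.integers(0, 2, n_subj * reps)  # 0 = Auditory TOR, 1 = Multimodal TOR

# Synthetic brake times: a per-participant random intercept plus a
# hypothetical -0.4 s benefit of the Multimodal TOR.
subj_effect = rng.normal(0, 0.2, n_subj)[subj]
brake_time = 2.5 - 0.4 * tor + subj_effect + rng.normal(0, 0.3, len(subj))
df = pd.DataFrame({"subj": subj, "tor": tor, "brake_time": brake_time})

# Random intercept per participant; fixed effect of warning type.
model = smf.mixedlm("brake_time ~ tor", df, groups=df["subj"]).fit()
tor_coef = model.params["tor"]  # negative => Multimodal shortens brake time
```

    The sign and magnitude of the fixed-effect coefficient here play the role of the brake-time difference the thesis reports, while the grouping term absorbs between-driver variability that a plain t-test would conflate with the warning effect.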

    Developing a Takeover Request Warning System to Improve Takeover Time and Post-takeover Performance in Level 3 Automated Driving

    The automotive industry is shifting towards conditionally (Level 3) or fully automated vehicles. An important research question in Level 3 automated driving is how quickly drivers can take over vehicle control in response to a critical event. This study develops an integrated takeover request (TOR) system which provides visual and auditory TOR warnings both in the vehicle interface and on a personal portable device (e.g., a tablet), and evaluates its effectiveness in reducing takeover time and improving post-takeover performance. For these purposes, 44 drivers participated in a driving simulator experiment in which they performed a secondary task (watching a video on a tablet) during automated driving and were requested to drive manually after either the integrated TOR or the conventional TOR (which provides visual and auditory TOR warnings in the vehicle interface only) was issued. Results from the statistical analysis suggest that the integrated TOR significantly reduced takeover time and improved post-takeover performance, as indicated by a longer minimum TTC, shorter lane change duration, lower standard deviation of steering wheel angle, and lower maximum acceleration during lane changing. The results also suggest that the integrated TOR reduces takeover time more effectively when headphones are used. As more people are likely to use headphones in automated driving for better sound quality, understanding the effect of headphone use is critical for improving the effectiveness of the integrated TOR in reducing takeover time. The results of a subjective questionnaire show that participants generally perceived higher subjective comfort and safety with the integrated TOR system. Therefore, the proposed integrated TOR system is recommended for safe transition from automated to manual driving.

    Relevant Physiological Indicators for Assessing Workload in Conditionally Automated Driving, Through Three-Class Classification and Regression

    In future conditionally automated driving, drivers may be asked to take over control of the car while it is driving autonomously. Performing a non-driving-related task could degrade their takeover performance, which could be detected by continuous assessment of drivers' mental load. To this end, three physiological signals from 80 subjects were collected during 1 h of conditionally automated driving in a simulator. Participants were asked to perform a non-driving cognitive task (N-back) for 90 s, 15 times during driving. The modality and difficulty of the task were experimentally manipulated. The experiment yielded a dataset of drivers' physiological indicators during the task sequences, which was used to predict drivers' workload. This was done by classifying task difficulty (three classes) and regressing participants' reported level of subjective workload after each task (on a 0–20 scale). Classification of task modality was also studied. For each task, the effects of sensor fusion and task performance were studied. The implemented pipeline consisted of a repeated cross-validation approach with grid search applied to three machine learning algorithms. The results showed that three different levels of mental load could be classified with an F1-score of 0.713 using the skin conductance and respiration signals as inputs to a random forest classifier. The best regression model predicted the subjective level of workload with a mean absolute error of 3.195 using the three signals. The accuracy of the model increased with participants' task performance. However, classification of task modality (visual or auditory) was not successful. Some physiological indicators, such as estimates of respiratory sinus arrhythmia, respiratory amplitude, and temporal indices of heart rate variability, were found to be relevant measures of mental workload. Their use should be preferred for ongoing assessment of driver workload in automated driving.
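    The pipeline described above (repeated cross-validation with grid search over a random forest classifying three load levels) can be sketched on synthetic stand-ins for the skin conductance and respiration features. The data and hyperparameter grid are illustrative assumptions; the reported F1-score of 0.713 is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold

rng = np.random.default_rng(3)
n = 240
# Class = N-back difficulty (0, 1, 2); features are synthetic stand-ins for
# skin conductance and respiration indicators. Higher difficulty shifts the
# feature means, a toy version of the load effect the study measured.
y = rng.integers(0, 3, n)
X = rng.normal(size=(n, 6)) + 0.8 * y[:, None]

# Repeated stratified CV with a small grid search, scored by macro F1 to
# weight the three load classes equally.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 200], "max_depth": [None, 5]},
    scoring="f1_macro",
    cv=cv,
).fit(X, y)
best_f1 = search.best_score_  # mean macro F1 across the repeated folds
```

    Repeating the folds stabilizes the score estimate, which matters when comparing several algorithms and sensor combinations as the study did.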