
    A Fuzzy-Logic Approach to Dynamic Bayesian Severity Level Classification of Driver Distraction Using Image Recognition

    Detecting and classifying driver distractions is crucial in the prevention of road accidents. These distractions impact both driver behavior and vehicle dynamics. Knowing the degree of driver distraction can aid accident prevention techniques, including transitioning control to a Level 4 semi-autonomous vehicle when a high distraction severity level is reached. Enhancement of Advanced Driving Assistance Systems (ADAS) is therefore a critical component in the safety of vehicle drivers and other road users. In this paper, a new methodology is introduced that uses an expert knowledge rule system to predict the severity of distraction in a contiguous set of video frames from the Naturalistic Driving American University in Cairo (AUC) Distraction Dataset. A multi-class distraction system comprising face orientation, driver activities, hand positions and previous driver distraction is combined with a severity classification model developed as a discrete dynamic Bayesian (DDB) network. Furthermore, a Mamdani-based fuzzy system was implemented to map the detected multi-class distractions into a severity level of safe, careless or dangerous driving; if a high level of severity is reached, the semi-autonomous vehicle takes control. The results further show that some instances of driver distraction may quickly transition from careless to dangerous driving in a multi-class distraction context.
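    As a rough illustration of the Mamdani-style severity mapping described in this abstract, the sketch below fuzzifies a single aggregated distraction score and defuzzifies it into the safe / careless / dangerous labels. The single-input simplification, membership breakpoints, and rule set are illustrative assumptions, not the paper's actual system, which draws on the multi-class DDB outputs (face orientation, activity, hands, previous distraction).

```python
# Minimal sketch of a Mamdani-style fuzzy severity classifier, assuming a single
# aggregated distraction score in [0, 1]; breakpoints and rules are hypothetical.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def classify_severity(distraction_score: float) -> str:
    """Map a distraction score to safe / careless / dangerous via Mamdani inference."""
    universe = np.linspace(0.0, 1.0, 201)          # output universe: severity in [0, 1]

    # Fuzzify the input (assumed breakpoints).
    low    = tri(distraction_score, 0.0, 0.0, 0.4)
    medium = tri(distraction_score, 0.2, 0.5, 0.8)
    high   = tri(distraction_score, 0.6, 1.0, 1.0)

    # Output membership functions for each severity class.
    safe      = tri(universe, 0.0, 0.0, 0.4)
    careless  = tri(universe, 0.2, 0.5, 0.8)
    dangerous = tri(universe, 0.6, 1.0, 1.0)

    # Rules: low -> safe, medium -> careless, high -> dangerous
    # (min implication, max aggregation).
    aggregated = np.maximum.reduce([np.minimum(low, safe),
                                    np.minimum(medium, careless),
                                    np.minimum(high, dangerous)])

    # Centroid defuzzification, then crisp label.
    crisp = np.sum(universe * aggregated) / (np.sum(aggregated) + 1e-9)
    return "safe" if crisp < 0.35 else "careless" if crisp < 0.7 else "dangerous"

# Example: a high distraction score would trigger handover to the Level 4 vehicle.
print(classify_severity(0.85))   # -> "dangerous"
```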

    Smart driving aids and their effects on driving performance and driver distraction

    In-vehicle information systems have been shown to increase driver workload and cause distraction, both of which are causal factors for accidents. This simulator study evaluates the impact that two designs for a smart driving aid and scenario complexity have on workload, distraction and driving performance. Results showed that real-time delivery of smart driving information did not increase driver workload or adversely affect driver distraction, while having the effect of decreasing mean driving speed in both the simple and complex driving scenarios. Subjective workload was shown to increase with task difficulty, and important differences were revealed between the two interface designs.

    Smart driving assistance systems : designing and evaluating ecological and conventional displays

    In-vehicle information systems have been shown to increase driver workload and cause distraction; both are causal factors for accidents. This simulator study evaluates the impact that two designs for a smart driving aid and scenario complexity have on workload, distraction and driving performance. Results showed that real-time delivery of smart driving information did not increase driver workload or adversely affect driver distraction, while having the effect of decreasing mean driving speed in both the simple and complex driving scenarios. Important differences were also highlighted between conventional and ecologically designed smart driving interfaces with respect to subjective workload and peripheral detection.

    Towards Operationalizing Driver Distraction

    Driver distraction has been the subject of much research interest and scientific inquiry. Operationalizing driver distraction is a complex task, and one that is necessary for advancing both science and public policy in this domain. While many operational definitions can be gathered from the literature, gaps are common. In order to fill such gaps, 21 experts reviewed 55 driver distraction definitions from the literature. Aided by the results of a pre-workshop questionnaire, the experts narrowed these definitions, and the Regan et al. (2011) definition of driver distraction was agreed upon at a workshop. Subsidiary terms related to this definition were defined to improve its clarity and applicability. It is hoped that a consistent, agreed definition of driver distraction and its associated terms will advance scientific progress in understanding and measuring driver distraction.

    Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction

    This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction for infotainment systems in cars. Car crashes and near-crash events are most commonly caused by driver distraction. Mid-air interaction is a way of reducing driver distraction by reducing the visual demand of the infotainment system. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures. The effects of different feedback modalities on eye-gaze behaviour and on the driving and gesturing tasks are considered. We found that feedback modality influenced gesturing behaviour; however, drivers corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can reduce visual distraction significantly.

    Modeling driver distraction mechanism and its safety impact in automated vehicle environment.

    Automated Vehicle (AV) technology is expected to enhance driving safety by eliminating human errors. However, driver distraction still exists under automated driving. The Society of Automotive Engineers (SAE) has defined six levels of driving automation, from Level 0 to Level 5; until Level 5 is achieved, human drivers are still needed, and Human-Vehicle Interaction (HVI) therefore necessarily diverts a driver's attention away from driving. Existing research has mainly focused on quantifying distraction in human-operated vehicles rather than in the AV environment, leaving a gap in knowledge about how AV distraction can be detected, quantified, and understood. Existing research exploring AV distraction has also mainly pre-defined distraction as a binary outcome and investigated the patterns that contribute to distraction from multiple perspectives, so the magnitude of AV distraction has not been accurately quantified. In addition, past studies quantifying distraction have relied mainly on wearable sensor data; in reality, it is not realistic for drivers to wear these sensors whenever they drive. Hence, one research motivation is to develop a surrogate model that can replace wearable-device-based data for predicting AV distraction. From the safety perspective, a comprehensive understanding of how AV distraction impacts safety is lacking, and a solution is needed for safely offsetting the impact of distracted driving. In this context, this research aims to (1) improve existing methods for quantifying Human-Vehicle Interaction-induced (HVI-induced) driver distraction under automated driving; (2) develop a surrogate driver distraction prediction model that does not use wearable sensor data; (3) quantitatively reveal the dynamic nature of the safety benefits and collision hazards of HVI-induced visual and cognitive distraction under automated driving by mathematically formulating the interrelationships among contributing factors; and (4) propose a conceptual prototype of an AI-driven, Ultra-advanced Collision Avoidance System (AUCAS-L3) targeting HVI-induced driver distraction under automated driving without eye-tracking or video-recording. Fixation and pupil dilation data from an eye-tracking device are used to model visual and cognitive distraction, respectively. To validate the proposed methods for measuring and modeling driver distraction, data were collected by inviting drivers to try out automated driving under Level 3 automation on a simulator. Each driver went through a jaywalker scenario twice, receiving a takeover request under two types of HVI warning, namely "visual only" and "visual and audible". Each driver was required to wear an eye-tracker so that fixation and pupil dilation data could be collected while driving, along with driving performance data recorded by the simulator; drivers' demographic information was collected through a pre-experiment survey. As a result, the magnitude of visual and cognitive distraction was quantified, exploring its dynamic changes over time. Drivers are more concentrated and maintain a higher level of takeover readiness under the "visual and audible" warning than under the "visual only" warning. The change in visual distraction was mathematically formulated as a function of time, and the change in visual distraction magnitude over time is explained from a driving-psychology perspective.
    Visual distraction was also measured by direction, and hotspots of visual distraction were identified with regard to driving safety. For cognitive distraction magnitude, the driver's age was identified as a contributing factor, and the HVI warning type contributes to a significant difference in the cognitive distraction acceleration rate. After drivers reach maximum visual distraction, cognitive distraction tends to increase continuously. This research also quantitatively reveals how visual and cognitive distraction each impact collision hazards, and it contributes to the literature by developing deep-learning-based models that predict a driver's visual and cognitive distraction intensity from demographics, HVI warning type, and driving performance. As a solution to the safety issues caused by driver distraction, the AUCAS-L3 is proposed and validated with high accuracy in predicting (a) whether a driver is distracted and fails to perform takeover actions and (b) whether a crash happens if the driver does take over. After predicting the presence of driver distraction or a crash, AUCAS-L3 automatically applies the brake pedal for the driver as an effective and efficient protection against driver distraction under automated driving. Finally, a conceptual prototype for predicting AV distraction and traffic conflict is proposed, which can predict collision hazards 0.82 seconds in advance on average.
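    As a hedged sketch of the surrogate-modeling idea described in this abstract (predicting visual and cognitive distraction intensity from demographics, HVI warning type, and driving-performance features instead of eye-tracker data), the snippet below shows a small two-output regressor in PyTorch. The architecture, feature count, and stand-in data are placeholders rather than the dissertation's actual model.

```python
# Hypothetical surrogate distraction-intensity regressor, assuming tabular inputs
# (driver demographics, HVI warning type, driving-performance measures) and two
# continuous targets (visual and cognitive distraction magnitude).
import torch
import torch.nn as nn

class DistractionSurrogate(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),            # outputs: visual and cognitive intensity
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Toy training loop on random stand-in data (replace with real simulator features,
# e.g. age, one-hot warning type, speed variance, lane deviation).
model = DistractionSurrogate(n_features=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(256, 10)                 # placeholder feature matrix
y = torch.rand(256, 2)                   # placeholder eye-tracker-derived targets
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```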