
    Detecting Distracted Driving with Deep Learning

    Driver distraction is the leading factor in most car crashes and near-crashes. This paper discusses the types, causes, and impacts of distracted driving. A deep learning approach is then presented for detecting such driving behaviors from images of the driver, in which an enhancement is made to a standard convolutional neural network (CNN). Experimental results on the Kaggle challenge dataset confirm the capability of a CNN in this complicated computer vision task and illustrate the contribution of the enhancement to better pattern recognition accuracy.
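
    The abstract does not specify the CNN enhancement, but as a rough sketch of the baseline such work builds on, the following is a minimal PyTorch classifier over driver images. The 10 posture classes of the Kaggle State Farm distracted-driver challenge, and the whole architecture, are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class DriverCNN(nn.Module):
    """Small CNN over driver-camera images; num_classes=10 assumes the
    Kaggle State Farm posture labels (safe driving, texting, etc.)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = DriverCNN()
logits = model(torch.randn(8, 3, 224, 224))  # a batch of driver images
predictions = logits.argmax(dim=1)           # predicted posture class per image
```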

    Multimodal Polynomial Fusion for Detecting Driver Distraction

    Distracted driving is deadly, claiming 3,477 lives in the U.S. in 2015 alone. Although there has been a considerable amount of research on modeling the distracted behavior of drivers under various conditions, accurate automatic detection using multiple modalities, and especially the contribution of the speech modality to improved accuracy, has received little attention. This paper introduces a new multimodal dataset for distracted driving behavior and discusses automatic distraction detection using features from three modalities: facial expression, speech, and car signals. Detailed multimodal feature analysis shows that adding more modalities monotonically increases the predictive accuracy of the model. Finally, a simple and effective multimodal fusion technique using a polynomial fusion layer shows superior distraction detection results compared to baseline SVM and neural network models.
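
    The exact form of the paper's polynomial fusion layer is not given in the abstract; below is a hedged PyTorch sketch of one plausible second-degree variant that adds pairwise multiplicative interactions between the three projected modality embeddings. The feature dimensions, hidden size, and tanh projection are assumptions.

```python
import torch
import torch.nn as nn

class PolynomialFusion(nn.Module):
    """Fuse modality features with first-order terms plus pairwise products."""

    def __init__(self, dims: dict, hidden: int = 64):
        super().__init__()
        # Project each modality into a shared space so products are well-defined.
        self.proj = nn.ModuleDict({m: nn.Linear(d, hidden) for m, d in dims.items()})
        self.head = nn.Linear(hidden, 1)  # binary distracted / not-distracted logit

    def forward(self, feats: dict) -> torch.Tensor:
        zs = [torch.tanh(self.proj[m](x)) for m, x in feats.items()]
        fused = sum(zs)                           # first-order (linear) terms
        for i in range(len(zs)):
            for j in range(i + 1, len(zs)):
                fused = fused + zs[i] * zs[j]     # second-order interactions
        return self.head(fused)

layer = PolynomialFusion({"face": 128, "speech": 40, "car": 10})
logit = layer({"face": torch.randn(4, 128),
               "speech": torch.randn(4, 40),
               "car": torch.randn(4, 10)})
```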

    Driver Distraction Identification with an Ensemble of Convolutional Neural Networks

    The World Health Organization (WHO) reported 1.25 million deaths yearly due to road traffic accidents worldwide, and the number has been continuously increasing over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted driver detection is concerned with a small set of distractions (mostly cell phone usage), and unreliable ad hoc methods are often used. In this paper, we present the first publicly available dataset for driver distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep learning-based solution that achieves 90% accuracy. The system consists of a genetically weighted ensemble of convolutional neural networks: we show that weighting an ensemble of classifiers with a genetic algorithm yields better classification confidence. We also study the effect of different visual elements in distraction detection by means of face and hand localization and skin segmentation. Finally, we present a thinned version of our ensemble that achieves 84.64% classification accuracy and can operate in a real-time environment.
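
    As an illustration of the genetically weighted ensemble idea, here is a small NumPy sketch that searches blending weights over per-model validation probabilities with a toy genetic loop. The population size, operators, and fitness function are assumptions for the sketch, not the paper's settings.

```python
import numpy as np

def ga_ensemble_weights(probs, labels, pop=50, gens=100, rng=None):
    """Search ensemble blending weights with a toy genetic algorithm.

    probs: (n_models, n_samples, n_classes) validation probabilities.
    labels: (n_samples,) ground-truth class indices.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n_models = probs.shape[0]

    def fitness(w):
        w = np.abs(w) / (np.abs(w).sum() + 1e-12)     # normalize to a convex blend
        blended = np.tensordot(w, probs, axes=1)      # weighted-average probabilities
        return (blended.argmax(-1) == labels).mean()  # validation accuracy

    population = rng.random((pop, n_models))
    for _ in range(gens):
        scores = np.array([fitness(w) for w in population])
        parents = population[np.argsort(scores)[-pop // 2:]]         # selection
        pairs = parents[rng.integers(0, len(parents), (pop - len(parents), 2))]
        children = pairs.mean(axis=1)                                # crossover
        children += rng.normal(0.0, 0.1, children.shape)             # mutation
        population = np.vstack([parents, children])
    best = max(population, key=fitness)
    return np.abs(best) / np.abs(best).sum()
```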

    Human-Centric Detection and Mitigation Approach for Various Levels of Cell Phone-Based Driver Distractions

    Driving a vehicle is a complex task that typically requires several physical interactions and mental tasks. Inattentive driving takes a driver's attention away from the primary task of driving, which can endanger the safety of the driver, passenger(s), and pedestrians. According to several traffic safety administration organizations, distracted and inattentive driving are the primary causes of vehicle crashes or near-crashes. In this research, a novel approach to detect and mitigate various levels of driving distraction is proposed. It consists of two main phases: (i) a system to detect various levels of driver distraction (low, medium, and high) using machine learning techniques, and (ii) mitigation of the effects of driver distraction through integration of the distraction detection algorithm with existing vehicle safety systems. In phase 1, vehicle data were collected from an advanced driving simulator and a vision-based sensor (webcam) for face monitoring. The data were processed using a machine learning algorithm and a head-pose analysis package in MATLAB, and the model was trained and validated to detect different levels of operator distraction. In phase 2, the detected level of distraction, time to collision (TTC), lane position (LP), and steering entropy (SE) were fed into a vehicle safety controller that takes appropriate action to maintain and/or mitigate vehicle safety status. The integrated detection algorithm and safety controller were prototyped in MATLAB/SIMULINK for validation: a complete vehicle powertrain model including the driver's interaction was replicated, and the outcome of the detection algorithm was fed into the safety controller. The results show that the controller reacted and mitigated the vehicle safety status in a closed-loop, real-time fashion. The simulation results show that the proposed approach is efficient, accurate, and adaptable to dynamic changes arising from the driver as well as the vehicle system. The approach was applied to mitigate the impact of visual and cognitive distraction on driver performance.
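
    A minimal sketch of the phase-2 mitigation logic described above, written in Python rather than MATLAB/SIMULINK: the detected distraction level gates how conservatively the controller reacts to TTC, lane position, and steering entropy. All thresholds and action names are illustrative assumptions, not the dissertation's calibrated values.

```python
from enum import Enum

class Distraction(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def safety_action(level: Distraction, ttc_s: float, lane_offset_m: float,
                  steering_entropy: float) -> str:
    # More distraction -> demand a larger time-to-collision safety margin
    # before handing braking over to the vehicle (thresholds are illustrative).
    ttc_margin = {Distraction.LOW: 1.5, Distraction.MEDIUM: 2.5,
                  Distraction.HIGH: 4.0}[level]
    if ttc_s < ttc_margin:
        return "autonomous_brake"
    if abs(lane_offset_m) > 0.5 or steering_entropy > 0.8:
        return "lane_keep_assist_and_warn"
    if level is Distraction.HIGH:
        return "audio_visual_warning"
    return "monitor"

print(safety_action(Distraction.HIGH, ttc_s=3.2, lane_offset_m=0.2,
                    steering_entropy=0.4))  # -> autonomous_brake (3.2 s < 4.0 s margin)
```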

    Real-Time Detection System of Driver Distraction Using Machine Learning


    Modeling Driver Distraction Mechanism and Its Safety Impact in Automated Vehicle Environment

    Automated Vehicle (AV) technology is expected to enhance driving safety by eliminating human error. However, driver distraction still exists under automated driving. The Society of Automotive Engineers (SAE) has defined six levels of driving automation, Level 0 through Level 5, and until Level 5 is achieved, human drivers are still needed. Human-Vehicle Interaction (HVI) therefore necessarily diverts a driver's attention away from driving. Existing research has mainly focused on quantifying distraction in human-operated vehicles rather than in the AV environment, leaving a gap in how AV distraction can be detected, quantified, and understood. Moreover, existing work on AV distraction has mostly pre-defined distraction as a binary outcome and investigated the patterns that contribute to it from multiple perspectives, so the magnitude of AV distraction has not been accurately quantified. Past studies quantifying distraction have also relied mainly on data from wearable sensors, and in reality it is not realistic for drivers to wear these sensors whenever they drive; this motivates a surrogate model that can replace wearable-device data in predicting AV distraction. From the safety perspective, a comprehensive understanding of how AV distraction impacts safety is lacking, and a solution is needed for safely offsetting the impact of distracted driving. In this context, this research aims to (1) improve existing methods of quantifying Human-Vehicle Interaction-induced (HVI-induced) driver distraction under automated driving; (2) develop a surrogate driver distraction prediction model that does not use wearable sensor data; (3) quantitatively reveal the dynamic nature of the safety benefits and collision hazards of HVI-induced visual and cognitive distraction under automated driving by mathematically formulating the interrelationships among contributing factors; and (4) propose a conceptual prototype of an AI-driven, Ultra-advanced Collision Avoidance System (AUCAS-L3) targeting HVI-induced driver distraction under automated driving without eye tracking or video recording. Fixation and pupil dilation data from an eye-tracking device are used to model visual and cognitive distraction, respectively. To validate the proposed methods, a data collection was conducted in which drivers tried out automated driving under Level 3 automation on a simulator. Each driver went through a jaywalker scenario twice, receiving a takeover request under two types of HVI, "visual only" and "visual and audible". Each driver wore an eye tracker so that fixation and pupil dilation data could be collected while driving, with driving performance data recorded by the simulator, and drivers' demographic information was collected by a pre-experiment survey. As a result, the magnitude of visual and cognitive distraction was quantified, exploring its dynamic changes over time. Drivers are more concentrated and maintain a higher level of takeover readiness under the "visual and audible" warning than under the "visual only" warning. The change in visual distraction was mathematically formulated as a function of time, and the change in its magnitude over time is explained from a driving-psychology perspective.
    Visual distraction was also measured by direction, and hotspots of visual distraction were identified with regard to driving safety. For cognitive distraction magnitude, the driver's age was identified as a contributing factor, and the HVI warning type contributes to a significant difference in the cognitive distraction acceleration rate. After drivers reach maximum visual distraction, cognitive distraction tends to keep increasing. This research also quantitatively reveals how visual and cognitive distraction each impact collision hazards, and it contributes to the literature by developing deep learning-based models that predict a driver's visual and cognitive distraction intensity from demographics, HVI warning type, and driving performance. As a solution to the safety issues caused by driver distraction, the AUCAS-L3 is proposed and validated with high accuracy in predicting (a) whether a driver is distracted and does not perform takeover actions and (b) whether a crash happens if the driver takes over. After predicting the presence of driver distraction or a crash, AUCAS-L3 automatically applies the brake for the driver, providing effective and efficient protection against driver distraction under automated driving. Finally, a conceptual prototype for predicting AV distraction and traffic conflict is proposed, which can predict collision hazards 0.82 seconds in advance on average.
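
    As one hedged reading of "visual distraction magnitude as a function of time", the sketch below computes a simple sliding-window index from eye-tracker fixations: the share of recent fixation samples that fall off a road-ahead area of interest. The window length, the index itself, and the function name are assumptions; the dissertation's actual formulation is not reproduced in the abstract.

```python
import numpy as np

def visual_distraction(ts, on_road, window_s=5.0):
    """ts: fixation timestamps in seconds; on_road: bool, fixation on road AOI."""
    ts = np.asarray(ts, dtype=float)
    on_road = np.asarray(on_road, dtype=bool)
    magnitude = np.empty(len(ts))
    for i, t in enumerate(ts):
        recent = (ts > t - window_s) & (ts <= t)      # sliding window of fixations
        magnitude[i] = 1.0 - on_road[recent].mean()   # share of off-road fixations
    return magnitude

# Toy trace: the driver glances away from the road between t = 4 s and t = 7 s.
t = np.arange(0.0, 10.0, 0.5)
print(visual_distraction(t, on_road=(t < 4) | (t > 7)).round(2))
```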

    A Fuzzy-Logic Approach to Dynamic Bayesian Severity Level Classification of Driver Distraction Using Image Recognition

    Detecting and classifying driver distraction is crucial in the prevention of road accidents. Distraction impacts both driver behavior and vehicle dynamics, and knowing the degree of driver distraction can aid accident prevention techniques, including transitioning control to a Level 4 semi-autonomous vehicle when a high distraction severity level is reached. Enhancement of Advanced Driving Assistance Systems (ADAS) is thus a critical component in the safety of vehicle drivers and other road users. In this paper, a new methodology is introduced that uses an expert knowledge rule system to predict the severity of distraction over a contiguous set of video frames, using the naturalistic driving American University in Cairo (AUC) Distraction Dataset. From a multi-class distraction system comprising face orientation, driver activities, hands, and previous driver distraction, a severity classification model is developed as a discrete dynamic Bayesian (DDB) model. Furthermore, a Mamdani-based fuzzy system was implemented to classify the multi-class distractions into a severity level of safe, careless, or dangerous driving; if a high severity level is reached, the semi-autonomous vehicle will take control. The results further show that some instances of driver distraction may quickly transition from careless to dangerous driving in a multi-class distraction context.
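
    A minimal Mamdani-style sketch of the final severity stage: a scalar distraction score in [0, 1] is mapped to safe/careless/dangerous via triangular memberships, min/max inference, and centroid defuzzification. The membership shapes, the reuse of the same sets for input and output, and the scalar score itself are assumptions for illustration, not the authors' tuned system.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

LEVELS = {"safe": (-0.4, 0.0, 0.4),
          "careless": (0.2, 0.5, 0.8),
          "dangerous": (0.6, 1.0, 1.4)}

def severity(distraction: float) -> str:
    x = np.linspace(0.0, 1.0, 201)  # output universe of discourse
    # Mamdani inference: clip each output set at its rule's firing strength,
    # aggregate with max, then defuzzify by centroid.
    agg = np.maximum.reduce([np.minimum(tri(distraction, *p), tri(x, *p))
                             for p in LEVELS.values()])
    centroid = float((x * agg).sum() / (agg.sum() + 1e-12))
    return list(LEVELS)[int(np.digitize(centroid, [1 / 3, 2 / 3]))]

print(severity(0.85))  # -> "dangerous" under these assumed memberships
```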