
    A Fuzzy-Logic Approach to Dynamic Bayesian Severity Level Classification of Driver Distraction Using Image Recognition

    Detecting and classifying driver distractions is crucial in the prevention of road accidents. These distractions impact both driver behavior and vehicle dynamics. Knowing the degree of driver distraction can aid accident prevention techniques, including transitioning control to a Level 4 semi-autonomous vehicle when a high distraction severity level is reached. Enhancement of Advanced Driving Assistance Systems (ADAS) is therefore a critical component in the safety of vehicle drivers and other road users. In this paper, a new methodology is introduced that uses an expert-knowledge rule system to predict the severity of distraction in a contiguous set of video frames, using the Naturalistic Driving American University of Cairo (AUC) Distraction Dataset. From a multi-class distraction system comprising face orientation, driver activity, hand position, and previous driver distraction, a severity classification model is developed as a discrete dynamic Bayesian network (DDBN). Furthermore, a Mamdani-based fuzzy system was implemented to map the multi-class distractions into a severity level of safe, careless, or dangerous driving, so that the semi-autonomous vehicle takes control when a high severity level is reached. The results further show that some instances of driver distraction may quickly transition from careless to dangerous driving in a multi-class distraction context.
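
    A minimal sketch of the Mamdani idea described above, in pure Python: a crisp distraction score (imagined here as a posterior from the dynamic Bayesian stage) is fuzzified with triangular membership functions, the rules clip the output sets, and centroid defuzzification yields a safe/careless/dangerous severity. The partitions and rules are illustrative assumptions, not the paper's published rule base.

```python
# Minimal Mamdani-style fuzzy inference sketch: map a distraction score
# (0-1, e.g. a dynamic-Bayesian posterior) to a severity level.
# Membership ranges and rules are illustrative assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def severity(distraction: float) -> tuple[float, str]:
    """Return a crisp severity in [0, 1] and its label."""
    # Fuzzify the input (assumed partitions: low / medium / high).
    low  = tri(distraction, -0.4, 0.0, 0.5)
    med  = tri(distraction,  0.1, 0.5, 0.9)
    high = tri(distraction,  0.5, 1.0, 1.4)

    # Rule firing strengths clip the output sets (Mamdani implication).
    y = np.linspace(0.0, 1.0, 201)
    safe      = np.minimum(low,  tri(y, -0.4, 0.0, 0.5))
    careless  = np.minimum(med,  tri(y,  0.1, 0.5, 0.9))
    dangerous = np.minimum(high, tri(y,  0.5, 1.0, 1.4))

    # Aggregate by max and defuzzify by centroid.
    agg = np.maximum.reduce([safe, careless, dangerous])
    crisp = float((y * agg).sum() / (agg.sum() + 1e-9))
    label = ["safe", "careless", "dangerous"][int(np.clip(crisp, 0, 0.999) * 3)]
    return crisp, label

# A high score crosses into the "dangerous" set: hand control to the vehicle.
print(severity(0.8))
```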

    Owl and Lizard: Patterns of Head Pose and Eye Pose in Driver Gaze Classification

    Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new interesting questions. First, how much better can we classify driver gaze using head and eye pose versus just using head pose? Second, are there individual-specific gaze strategies that strongly correlate with how much gaze classification improves with the addition of eye pose information? We answer these questions by evaluating data drawn from an on-road study of 40 drivers. The main insight of the paper is conveyed through the analogy of an "owl" and "lizard", which describes the degree to which the eyes and the head move when shifting gaze. When the head moves a lot ("owl"), not much classification improvement is attained by estimating eye pose on top of head pose. On the other hand, when the head stays still and only the eyes move ("lizard"), classification accuracy increases significantly from adding in eye pose. We characterize how that accuracy varies between people, gaze strategies, and gaze regions.
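
    A rough sketch of the paper's central comparison, using scikit-learn on synthetic placeholder features (the study itself uses annotated on-road video of 40 drivers): train the same classifier on head pose alone and on head plus eye pose, then compare accuracies.

```python
# Compare gaze-region classification with head pose only vs. head + eye pose.
# The feature layout and random data are placeholders, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
head = rng.normal(size=(n, 3))        # head pose: yaw, pitch, roll
eye = rng.normal(size=(n, 2))         # eye pose: horizontal, vertical angle
gaze_region = rng.integers(0, 6, n)   # e.g. road, mirrors, instrument cluster

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc_head = cross_val_score(clf, head, gaze_region, cv=5).mean()
acc_both = cross_val_score(clf, np.hstack([head, eye]), gaze_region, cv=5).mean()

# For an "owl" driver the two scores should be close; for a "lizard"
# driver acc_both should be noticeably higher than acc_head.
print(f"head only: {acc_head:.3f}, head + eye: {acc_both:.3f}")
```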

    Integration of an adaptive infotainment system in a vehicle and validation in real driving scenarios

    More services, functionalities, and interfaces are increasingly being incorporated into current vehicles and may overload the driver's capacity to perform the primary driving tasks adequately. For this reason, a strategy for easing driver interaction with the infotainment system must be defined, and a good balance between road safety and driver experience must also be achieved. An adaptive Human Machine Interface (HMI) that manages the presentation of information and restricts drivers' interaction in accordance with driving complexity was designed and evaluated. For this purpose, the driving complexity value employed as a reference was computed by a predictive model, and the adaptive interface was designed following a set of proposed HMI principles. The system was validated through acceptance and usability tests in real driving scenarios. Results showed that the system performs well in real driving scenarios, and positive feedback was received from participants, endorsing the benefits of integrating this kind of system with regard to driving experience and road safety.
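
    An illustrative sketch of the adaptive gating idea: a stand-in complexity model scores the driving situation, and demanding infotainment functions are progressively locked out as complexity rises. The thresholds, weights, and feature names are assumptions, not the paper's predictive model.

```python
# Adaptive HMI sketch: restrict infotainment interaction by driving complexity.
from dataclasses import dataclass

@dataclass
class DrivingContext:
    speed_kmh: float
    steering_entropy: float   # proxy for manoeuvring demand, 0..1
    traffic_density: float    # 0 (empty road) .. 1 (congested)

def complexity(ctx: DrivingContext) -> float:
    """Toy stand-in for the paper's predictive complexity model."""
    return min(1.0, 0.4 * ctx.traffic_density
                    + 0.4 * ctx.steering_entropy
                    + 0.2 * ctx.speed_kmh / 130.0)

def allowed_interactions(ctx: DrivingContext) -> list[str]:
    """Progressively lock out demanding HMI functions as complexity rises."""
    c = complexity(ctx)
    if c < 0.3:
        return ["navigation input", "media browsing", "voice control"]
    if c < 0.7:
        return ["voice control"]   # hide visually demanding menus
    return []                      # high complexity: interaction blocked

print(allowed_interactions(DrivingContext(110, 0.8, 0.9)))  # -> []
```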

    An Ontological Approach to Inform HMI Designs for Minimizing Driver Distractions with ADAS

    ADAS (Advanced Driver Assistance Systems) are in-vehicle systems designed to enhance driving safety and efficiency, as well as comfort for drivers, in the driving process. Recent studies have noted that when the Human Machine Interface (HMI) is not designed properly, an ADAS can cause distraction, which affects its usage and can even lead to safety issues. Current understanding of these issues is limited, owing to the context-dependent nature of such systems. This paper reports the development of a holistic conceptualisation of how drivers interact with ADAS and how such interaction could lead to potential distraction. This is done by taking an ontological approach to contextualise the potential distraction, driving tasks, and user interactions centred on the use of ADAS. Example scenarios are also given to demonstrate how the developed ontology can be used to deduce rules for identifying distraction from ADAS and informing future designs.
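
    A toy illustration of rule deduction over an ontology-style model of driver-ADAS interaction, written as plain Python over subject-predicate-object triples. The classes, properties, and rule are invented for illustration; the paper's ontology is considerably richer.

```python
# Deduce potential ADAS-induced distraction from ontology-style triples.
# All class and property names here are hypothetical.
triples = {
    ("LaneKeepAlert", "rdf:type",       "VisualWarning"),
    ("LaneKeepAlert", "requiresGlance", "CentreDisplay"),
    ("Overtaking",    "rdf:type",       "HighDemandTask"),
    ("Driver1",       "performs",       "Overtaking"),
    ("Driver1",       "receives",       "LaneKeepAlert"),
}

def potentially_distracted(driver: str) -> bool:
    """Rule: a driver performing a high-demand task who receives a warning
    requiring an off-road glance is potentially distracted."""
    tasks = {o for s, p, o in triples if s == driver and p == "performs"}
    high_demand = any((t, "rdf:type", "HighDemandTask") in triples for t in tasks)
    warns = {o for s, p, o in triples if s == driver and p == "receives"}
    off_road = any((w, "requiresGlance", "CentreDisplay") in triples for w in warns)
    return high_demand and off_road

print(potentially_distracted("Driver1"))  # True -> flag this HMI design
```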

    Crash/Near-Crash: Impact of Secondary Tasks and Real-Time Detection of Distracted Driving

    The main goal of this dissertation is to investigate the problem of distracted driving from two different perspectives. The first is the identification of possible sources of distraction and their associated crash/near-crash risk, which can assist government officials in making more informed decisions and allow for optimized allocation of available resources to reduce roadway crashes and improve traffic safety. The second is actively counteracting the distracted driving phenomenon through quantitative evaluation of eye glance patterns. This dissertation research consists of two parts. The first part provides an in-depth analysis of the increased crash/near-crash risk associated with different secondary task activities using the largest real-world naturalistic driving dataset (the SHRP2 Naturalistic Driving Study). Several statistical and data mining techniques are developed to analyze distracted driving and crash risk. More specifically, two models were employed to quantify the increased risk associated with each secondary task: a baseline-category logit model and an association rule mining model. The baseline-category logit model identified the increased risk in terms of odds ratios, while the Apriori association algorithm detected the associated risks in terms of rules, each of which was then evaluated based on its lift index. The two models succeeded in efficiently ranking all the secondary task activities according to their associated crash/near-crash risk. To actively counteract the distracted driving phenomenon, a new approach was developed to analyze eye glance patterns and quantify distracted driving behavior under Safety Critical Events (SCEs) and non-safety-critical events. This approach was then applied to the Naturalistic Engagement in Secondary Tasks (NEST) dataset to investigate how drivers allocate their attention while driving, especially while distracted. The analysis revealed that distracted driving behavior can be well characterized using two new distraction risk indicators, and additional statistical analyses showed that the two indicators increase significantly for SCEs compared to normal driving events. Consequently, an artificial neural network (ANN) model was developed to test the predictability of SCEs when accounting for the two new indicators; the model predicted SCEs with an overall accuracy of 96.1%. This outcome can help build reliable algorithms for in-vehicle driving assistance systems that alert drivers before SCEs occur.
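
    A small sketch of the association rule step, using the mlxtend implementation of Apriori as a stand-in for the dissertation's tooling: mine rules that link secondary tasks to crash/near-crash outcomes and rank them by lift. The one-hot event records below are made up; the dissertation mines the SHRP2 data.

```python
# Rank secondary tasks by their association with near-crash outcomes (lift).
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

events = pd.DataFrame(
    [[1, 0, 1], [1, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [1, 0, 1]],
    columns=["texting", "eating", "near_crash"],
).astype(bool)

itemsets = apriori(events, min_support=0.2, use_colnames=True)
rules = association_rules(itemsets, metric="lift", min_threshold=1.0)

# Keep only rules predicting the near-crash outcome, ranked by lift.
to_crash = rules[rules["consequents"].apply(lambda c: c == {"near_crash"})]
print(to_crash.sort_values("lift", ascending=False)[
    ["antecedents", "support", "confidence", "lift"]])
```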

    Real-Time Detection System of Driver Distraction Using Machine Learning


    Driver activity recognition for intelligent vehicles: a deep learning approach

    Driver decisions and behaviors are essential factors that can affect driving safety. To understand driver behaviors, a driver activity recognition system is designed based on deep convolutional neural networks (CNNs) in this study. Specifically, seven common driving activities are identified: normal driving, right mirror checking, rear mirror checking, left mirror checking, using an in-vehicle radio device, texting, and answering a mobile phone. Among these activities, the first four are regarded as normal driving tasks, while the remaining three are classified into the distraction group. The experimental images are collected using a low-cost camera, and ten drivers are involved in the naturalistic data collection. The raw images are segmented using a Gaussian mixture model (GMM) to extract the driver's body from the background before training the behavior recognition CNN model. To reduce the training cost, transfer learning is applied to fine-tune pre-trained CNN models. Three pre-trained CNN models, namely AlexNet, GoogLeNet, and ResNet50, are adopted and evaluated. The detection results for the seven tasks achieved an average accuracy of 81.6% using AlexNet, and 78.6% and 74.9% using GoogLeNet and ResNet50, respectively. The CNN models are then trained for the binary classification task of identifying whether the driver is distracted or not; the binary detection rate achieved 91.4% accuracy, which shows the advantages of the proposed deep learning approach. Finally, real-world applications are analysed and discussed.
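
    A minimal sketch of the transfer learning step with one of the three evaluated backbones (ResNet50, via torchvision here): replace the classifier head for the seven activities and fine-tune only the new layer. Data loading and the GMM foreground segmentation are omitted, and the training loop is reduced to a single illustrative step.

```python
# Fine-tune a pre-trained ResNet50 head for 7 driving-activity classes.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():                   # freeze the backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 7)  # new 7-class head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (N, 3, 224, 224).
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 7, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```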

    Modeling driver distraction mechanism and its safety impact in automated vehicle environment.

    Automated Vehicle (AV) technology is expected to enhance driving safety by eliminating human errors. However, driver distraction still exists under automated driving. The Society of Automotive Engineers (SAE) has defined six levels of driving automation, from Level 0 to Level 5, and until Level 5 is achieved, human drivers are still needed. Human-Vehicle Interaction (HVI) therefore necessarily diverts a driver's attention away from driving. Existing research has mainly focused on quantifying distraction in human-operated vehicles rather than in the AV environment, leaving a gap in knowledge about how AV distraction can be detected, quantified, and understood. Moreover, existing research exploring AV distraction has mainly pre-defined distraction as a binary outcome and investigated the patterns that contribute to it from multiple perspectives, so the magnitude of AV distraction has not been accurately quantified. In addition, past studies quantifying distraction have mainly used data from wearable sensors; in practice, it is not realistic for drivers to wear such sensors whenever they drive. Hence, one motivation of this research is to develop a surrogate model that can replace wearable-device data in predicting AV distraction. From the safety perspective, a comprehensive understanding of how AV distraction impacts safety is lacking, and a solution is needed for safely offsetting the impact of distracted driving. In this context, this research aims to (1) improve existing methods of quantifying Human-Vehicle Interaction-induced (HVI-induced) driver distraction under automated driving; (2) develop a surrogate driver distraction prediction model that does not rely on wearable sensor data; (3) quantitatively reveal the dynamic nature of the safety benefits and collision hazards of HVI-induced visual and cognitive distraction under automated driving by mathematically formulating the interrelationships among contributing factors; and (4) propose a conceptual prototype of an AI-driven, Ultra-advanced Collision Avoidance System (AUCAS-L3) targeting HVI-induced driver distraction under automated driving without eye-tracking or video-recording. Fixation and pupil dilation data from an eye-tracking device are used to model visual and cognitive distraction, respectively. To validate the proposed methods for measuring and modeling driver distraction, a data collection was conducted in which drivers experienced Level 3 automated driving on a simulator. Each driver went through a jaywalker scenario twice, receiving a takeover request under two types of HVI, namely "visual only" and "visual and audible". Each driver wore an eye-tracker so that fixation and pupil dilation data could be collected while driving, along with driving performance data recorded by the simulator; drivers' demographic information was collected by a pre-experiment survey. As a result, the magnitude of visual and cognitive distraction was quantified, exploring its dynamic changes over time. Drivers are more concentrated and maintain a higher level of takeover readiness under the "visual and audible" warning than under the "visual only" warning. The change in visual distraction was mathematically formulated as a function of time, and the change in its magnitude over time is explained from the driving psychology perspective.
    Moreover, visual distraction was also measured by direction, and hotspots of visual distraction were identified with regard to driving safety. For cognitive distraction magnitude, the driver's age was identified as a contributing factor, and the HVI warning type contributes to a significant difference in the cognitive distraction acceleration rate. After drivers reach maximum visual distraction, cognitive distraction tends to increase continuously. This research also quantitatively reveals how visual and cognitive distraction each impact collision hazards, and it contributes to the literature by developing deep learning-based models for predicting a driver's visual and cognitive distraction intensity from demographics, HVI warning types, and driving performance. As a solution to the safety issues caused by driver distraction, the AUCAS-L3 is proposed and validated with high accuracy in predicting (a) whether a driver is distracted and does not perform takeover actions and (b) whether a crash happens if the driver takes over. After predicting the presence of driver distraction or a crash, the AUCAS-L3 automatically applies the brake for the driver as effective and efficient protection against distraction under automated driving. Finally, a conceptual prototype for predicting AV distraction and traffic conflicts was proposed, which can predict collision hazards 0.82 seconds in advance on average.
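
    A hedged sketch of the surrogate modelling idea: a small PyTorch network maps signals the vehicle already has (demographics, HVI warning type, driving performance) to the visual and cognitive distraction intensities that would otherwise require an eye tracker. The feature layout, network size, and dummy data are illustrative assumptions.

```python
# Surrogate distraction model: predict eye-tracker-derived intensities
# from non-wearable signals. Features and data are hypothetical.
import torch
import torch.nn as nn

# Assumed inputs: [age, warning_type (0=visual, 1=visual+audible),
#                  lane deviation, steering reversal rate, takeover time]
surrogate = nn.Sequential(
    nn.Linear(5, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),     # outputs: visual and cognitive intensity
)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(64, 5)    # dummy batch standing in for simulator data
y = torch.rand(64, 2)     # eye-tracker-derived ground-truth intensities
for _ in range(10):       # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(surrogate(x), y)
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.3f}")
```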

    Investigating the feasibility of vehicle telemetry data as a means of predicting driver workload

    Driving is a safety-critical task that demands a high level of attention and workload from the driver. Despite this, people often also perform secondary tasks such as eating or using a mobile phone, which increase workload levels and divert cognitive and physical attention from the primary task of driving. If a vehicle is aware that the driver is currently under high workload, the vehicle functionality can be changed in order to minimize any further demand. Traditionally, workload measurements have been performed using intrusive means such as physiological sensors. Another approach is to use vehicle telemetry data as a performance measure for workload. In this paper, we present the Warwick-JLR Driver Monitoring Dataset (DMD) and analyse it to investigate the feasibility of using vehicle telemetry data for determining the driver's workload. We perform a statistical analysis of subjective ratings, physiological data, and vehicle telemetry data collected during a track study. A data mining methodology is then presented to build predictive models on this data for the driver workload monitoring problem.
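
    A brief sketch of the telemetry-to-workload idea, with scikit-learn and placeholder features standing in for the DMD telemetry signals and subjective workload ratings:

```python
# Predict subjective workload from vehicle telemetry (placeholder data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1500
telemetry = np.column_stack([
    rng.normal(90, 20, n),   # speed (km/h)
    rng.normal(0, 1, n),     # steering-angle variance
    rng.normal(0, 1, n),     # brake pressure
    rng.normal(0, 1, n),     # throttle position
])
workload = rng.uniform(1, 10, n)  # stands in for subjective ratings

model = GradientBoostingRegressor(random_state=0)
r2 = cross_val_score(model, telemetry, workload, cv=5, scoring="r2").mean()
# Near zero on random data; real telemetry should carry genuine signal.
print(f"cross-validated R^2: {r2:.3f}")
```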