51 research outputs found

    A sophisticated Drowsiness Detection System via Deep Transfer Learning for real time scenarios

    Driver drowsiness is one of the leading causes of road accidents, resulting in serious physical injuries, fatalities, and substantial economic losses. A sophisticated Driver Drowsiness Detection (DDD) system can alert the driver in case of abnormal behavior and avoid catastrophes. Several studies have already addressed driver drowsiness through behavioral measures and facial features. In this paper, we propose a hybrid real-time DDD system based on the Eyes Closure Ratio and Mouth Opening Ratio, using a simple camera and deep learning techniques. This system seeks to model the driver's behavior in order to alert him/her in case of drowsiness states and avoid potential accidents. The main contribution of the proposed approach is a reliable system able to reject falsely detected drowsiness situations and to alert only on the real ones. To this end, our research procedure is divided into two processes. The offline process performs a classification module using pretrained Convolutional Neural Networks (CNNs) to detect the drowsiness of the driver. In the online process, we calculate the percentage of eye closure and the yawning frequency of the driver from real-time video, using the Chebyshev distance instead of the classic Euclidean distance. The accurate drowsiness state of the driver is evaluated with the aid of the pretrained CNNs based on an ensemble learning paradigm. In order to improve the models' performances, we applied data augmentation techniques to the generated dataset. The accuracies achieved are 97% for the VGG16 model, 96% for the VGG19 model, and 98% for the ResNet50 model. This system can assess the driver's dynamics with a precision rate of 98%.
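    The eye-closure measure described above can be illustrated with a short sketch. The six-landmark eye layout, the openness formula, and the sample coordinates below are assumptions for illustration, not the paper's exact implementation; the sketch only shows how swapping the Chebyshev distance in for the Euclidean distance changes the landmark-distance computation.

```python
import numpy as np

def eye_closure_ratio(landmarks, metric="chebyshev"):
    """Eye openness from six eye landmarks (p1..p6), ordered as in the
    common 68-point facial-landmark layout: p1/p4 are the horizontal eye
    corners, (p2, p6) and (p3, p5) are the vertical eyelid pairs.
    Returns mean vertical opening divided by horizontal width; small
    values indicate a (nearly) closed eye."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, dtype=float) for p in landmarks)

    def dist(a, b):
        d = np.abs(a - b)
        # Chebyshev: largest coordinate difference; Euclidean: L2 norm.
        return d.max() if metric == "chebyshev" else np.linalg.norm(d)

    vertical = (dist(p2, p6) + dist(p3, p5)) / 2.0
    horizontal = dist(p1, p4)
    return vertical / horizontal

# Hypothetical landmark coordinates for an open and a nearly closed eye:
open_eye = [(0, 0), (2, -2), (4, -2), (6, 0), (4, 2), (2, 2)]
closed_eye = [(0, 0), (2, -0.5), (4, -0.5), (6, 0), (4, 0.5), (2, 0.5)]
print(eye_closure_ratio(open_eye))    # larger ratio -> eye open
print(eye_closure_ratio(closed_eye))  # smaller ratio -> eye closing
```

    In practice the ratio would be thresholded per frame and accumulated over time (e.g. as a percentage of eye closure) before raising an alert.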

    Multimodal Human Eye Blink Recognition Using Z-score Based Thresholding and Weighted Features

    A novel real-time multimodal eye blink detection method is proposed, using an amalgam of five unique weighted features extracted from the circle boundary formed from the eye landmarks. The five features, namely Vertical Head Positioning, Orientation Factor, Proportional Ratio, Area of Intersection, and Upper Eyelid Radius, provide the essential information (via a z-score threshold) for accurately predicting the eye status and thus the blinking status. An accurate and precise algorithm employing the five weighted features is proposed to predict the eye status (open/closed). One state-of-the-art dataset, ZJU (eye-blink), is used to measure the performance of the method. Precision, recall, F1-score, and the ROC curve measure the proposed method's performance qualitatively and quantitatively. Increased accuracy (around 97.2%) and precision (97.4%) are obtained compared to other existing unimodal approaches. The efficiency of the proposed method is shown to outperform the state-of-the-art methods.
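    A z-score threshold over a per-frame feature signal can be sketched as follows. The combined feature values, the threshold of -2, and the whole-signal normalization are illustrative assumptions; the paper's actual five weighted features and threshold calibration are not reproduced here.

```python
import numpy as np

def blink_states(feature, z_thresh=-2.0):
    """Classify per-frame eye status from a 1-D feature signal (e.g. a
    weighted combination of geometric eye features) by z-scoring each
    frame against the signal's own mean and standard deviation. Frames
    whose z-score drops below z_thresh are labeled closed (a blink);
    the threshold value here is illustrative, not from the paper."""
    feature = np.asarray(feature, dtype=float)
    z = (feature - feature.mean()) / feature.std()
    return z < z_thresh  # True = eye closed in that frame

# Hypothetical feature signal: one sharp dip (frame 4) marks a blink.
signal = [1.0, 1.0, 1.02, 0.98, 0.2, 1.0, 1.01, 0.99, 1.0, 1.0]
print(blink_states(signal).astype(int))
```

    Because the z-score normalizes against the signal's own statistics, the same threshold can work across subjects with different baseline eye openness.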

    Computing driver tiredness and fatigue in automobile via eye tracking and body movements

    The aim of this paper is to classify driver tiredness and fatigue in automobiles via eye tracking and body movements using a deep learning based Convolutional Neural Network (CNN) algorithm. Vehicle driver face localization serves as one of the most widely used real-world applications in fields like toll control, traffic accident scene analysis, and suspected vehicle tracking. The research proposes a CNN classifier for simultaneously localizing the region of the human face and the eye positions. The classifier gives bounding quadrilaterals rather than bounding rectangles, which provides a more precise indication for vehicle driver face localization. The adjusted regions are preprocessed to remove noise and passed to the CNN classifier for real-time processing. The preprocessing of the face features extracts connected components, filters them by size, and groups them into face expressions. The employed CNN is a well-known technology for human face recognition. Once we extract the facial landmarks from the frames, we leverage classification models and deep learning based convolutional neural networks to predict the state of the driver as 'Alert' or 'Drowsy' for each extracted frame. The CNN model can predict the output state labels (Alert/Drowsy) for each frame, but sequential image frames must also be taken into account, as temporal context is extremely important when predicting the state of an individual. The process completes when all regions have a sufficiently high score or a fixed number of retries is exhausted. The output consists of the detected human face type and the list of regions, including the extracted mouth and eyes, with recognition reliability through the CNN, achieving an accuracy of 98.57% with 100 epochs of training and testing.
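    One common way to account for sequential frames, as the abstract emphasizes, is to smooth the per-frame classifier outputs over a sliding window. The majority-vote scheme and window size below are assumptions for illustration, not the paper's actual temporal model.

```python
from collections import Counter, deque

def smooth_labels(frame_labels, window=5):
    """Temporal smoothing of per-frame classifier outputs: each frame's
    reported state is the majority vote over a sliding window of the
    most recent frames, so a single misclassified frame cannot flip the
    reported driver state. The window size is an assumption."""
    buf = deque(maxlen=window)
    smoothed = []
    for label in frame_labels:
        buf.append(label)
        smoothed.append(Counter(buf).most_common(1)[0][0])
    return smoothed

# Hypothetical raw per-frame CNN outputs with one spurious 'Drowsy':
raw = ["Alert", "Alert", "Drowsy", "Alert", "Drowsy", "Drowsy", "Drowsy"]
print(smooth_labels(raw))
```

    The isolated 'Drowsy' at frame 3 is voted away, while the sustained run at the end still flips the reported state, which is the behavior one wants before triggering an alarm.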

    Context-Based Rider Assistant System for Two Wheeled Self-Balancing Vehicles

    Personal mobility devices have become increasingly popular in recent years. Gyroscooters, two-wheeled self-balancing vehicles, wheelchairs, bikes, and scooters help people solve the first- and last-mile problems in big cities. To help people with navigation and to increase their safety, intelligent rider assistant systems can be utilized that use the rider's personal smartphone to form the context and provide the rider with recommendations. We understand the context as any information that characterizes the current situation; thus, the context represents a model of the current situation. We assume that the rider mounts a personal smartphone that tracks the rider's face using the front-facing camera. Modern smartphones can track the current situation using sensors such as GPS/GLONASS, accelerometer, gyroscope, magnetometer, microphone, and video cameras. The proposed rider assistant system uses these sensors to capture context information about the rider and the vehicle and generates context-oriented recommendations. The system is aimed at detecting dangerous situations for the rider; we consider two dangerous situations: drowsiness and distraction. Using computer vision methods, we determine parameters of the rider's face (eyes, nose, mouth, head pitch and rotation angles) and, based on analysis of these parameters, detect the dangerous situations. The paper presents a comprehensive related-work analysis on intelligent driver assistant systems and recommendation generation, proposes an approach to dangerous situation detection and recommendation generation, and evaluates the determination of the distraction dangerous state for personal mobility device riders.

    On driver behavior recognition for increased safety: A roadmap

    Advanced Driver-Assistance Systems (ADASs) are used to increase safety in the automotive domain, yet current ADASs notably operate without taking drivers' states into account, e.g., whether she/he is emotionally apt to drive. In this paper, we first review the state of the art of emotional and cognitive analysis for ADAS: we consider psychological models, the sensors needed for capturing physiological signals, and the typical algorithms used for human emotion classification. Our investigation highlights a lack of advanced Driver Monitoring Systems (DMSs) for ADASs, which could increase driving quality and security for both drivers and passengers. We then provide our view on a novel perception architecture for driver monitoring, built around the concept of the Driver Complex State (DCS). The DCS relies on multiple non-obtrusive sensors and Artificial Intelligence (AI) for uncovering the driver state and uses it to implement innovative Human–Machine Interface (HMI) functionalities. This concept will be implemented and validated in the recently funded EU NextPerception project, which is briefly introduced.

    Drowsiness Detection for Driver Assistance

    This thesis presents a noninvasive approach to detecting driver drowsiness using behavioral and vehicle-based measuring techniques. The system accepts a stream of the driver's images from a camera and steering wheel movement from a Logitech G27 racing wheel system. It first describes a standalone implementation of the behavioral drowsiness detection method. The method accepts the input images and analyzes the facial expressions of the driver through a set of processing stages. In order to improve the reliability of the system, we also propose a comprehensive approach that combines the facial expression analysis with steering wheel data analysis, with integration at both the decision level and the feature level. We also present a new approach to modeling the temporal information of facial expressions of drowsiness using a Hidden Markov Model (HMM). Each proposed approach has been implemented in a simulated driving setup. The detection performance of each method was evaluated through experiments, and its parameter settings were optimized. Finally, we present a case study that discusses the practicality of our system in a small-scale intelligent transportation system, where it switches the driving mechanism between manual and autonomous control depending on the state of the driver. (Electrical Engineering)

    A Comparative Emotions-detection Review for Non-intrusive Vision-Based Facial Expression Recognition

    Affective computing advocates for the development of systems and devices that can recognize, interpret, process, and simulate human emotion. In computing, the field seeks to enhance the user experience by finding less intrusive automated solutions. However, initiatives in this area focus on solitary emotions that limit the scalability of the approaches. Further reviews conducted in this area have also focused on solitary emotions, presenting challenges to future researchers when adopting these recommendations. This review aims at highlighting gaps in the application areas of Facial Expression Recognition Techniques by conducting a comparative analysis of various emotion detection datasets, algorithms, and results provided in existing studies. The systematic review adopted the PRISMA model and analyzed eighty-three publications. Findings from the review show that different emotions call for different Facial Expression Recognition techniques, which should be analyzed when conducting Facial Expression Recognition. Keywords: Facial Expression Recognition, Emotion Detection, Image Processing, Computer Vision

    A framework for context-aware driver status assessment systems

    The automotive industry is actively supporting research and innovation to meet manufacturers' requirements related to safety issues, performance, and the environment. The Green ITS project is among the efforts in that regard. Safety is a major customer and manufacturer concern; therefore, much effort has been directed to developing cutting-edge technologies able to assess driver status in terms of alertness and suitability. In that regard, we aim to create with this thesis a framework for a context-aware driver status assessment system. Context-aware means that the machine uses background information about the driver and environmental conditions to better ascertain and understand driver status. The system also relies on multiple sensors, mainly video and audio. Using context and multi-sensor data, we need to perform multi-modal analysis and data fusion in order to infer as much knowledge as possible about the driver. Lastly, the project is to be continued by other students, so the system should be modular and well-documented. With this in mind, a driving simulator integrating multiple sensors was built. This simulator is a starting point for experimentation related to driver status assessment, and a prototype of software for real-time driver status assessment is integrated into the platform. To make the system context-aware, we designed a driver identification module based on audio-visual data fusion. Thus, at the beginning of driving sessions, users are identified and background knowledge about them is loaded to better understand and analyze their behavior. A driver status assessment system was then constructed from two different modules. The first is for driver fatigue detection, based on an infrared camera. Fatigue is inferred via the percentage of eye closure, which is the best indicator of fatigue for vision systems. The second is a driver distraction recognition system, based on a Kinect sensor. Using body, head, and facial expressions, a fusion strategy is employed to deduce the type of distraction the driver is subject to. Of course, fatigue and distraction are only a fraction of all possible driver states, but these two aspects have been studied here primarily because of their dramatic impact on traffic safety. Through experimental results, we show that our system is efficient for driver identification and driver inattention detection tasks. Nevertheless, it is also very modular and could be further complemented by additional driver status analyses, context, or additional sensor acquisition.
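    The percentage-of-eye-closure indicator mentioned above (commonly called PERCLOS) can be sketched in a few lines. The per-frame openness values and the 0.2 closed-eye threshold below are illustrative assumptions, not the thesis's calibrated parameters.

```python
def perclos(openness_values, closed_thresh=0.2):
    """PERCLOS: the fraction of frames in which the eyes are (almost)
    closed over an observation window, a standard vision-based fatigue
    indicator. openness_values holds per-frame eye openness measures;
    frames below closed_thresh count as closed. Threshold illustrative."""
    closed = sum(1 for v in openness_values if v < closed_thresh)
    return closed / len(openness_values)

# Hypothetical per-frame eye openness over a 10-frame window:
values = [0.35, 0.30, 0.10, 0.05, 0.32, 0.08, 0.33, 0.31, 0.30, 0.29]
print(perclos(values))  # 3 of 10 frames below 0.2 -> 0.3
```

    A fatigue alarm would typically fire when this fraction stays above some level (often cited around 0.15-0.4 depending on calibration) over a sufficiently long window.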