278 research outputs found

    Driver Drowsiness Detection Using Gray Wolf Optimizer Based on Voice Recognition

    Drowsiness detection helps prevent road accidents worldwide. Tiredness can be measured from blood biochemistry, brain signals, and similar physiological indicators, but such approaches are difficult to deploy because they are uncomfortable for the user. This article describes a voice-based drowsiness detection system and shows how driver fatigue can be detected before it impairs driving. A neural network combined with the Gray Wolf Optimizer (GWOANN) classifies sleepiness automatically. The proposed approach is evaluated on a real driver-tiredness voice dataset recorded in both alert and sleep-deprived states. Speech features are extracted as mel-frequency cepstral coefficients (MFCCs) and linear prediction coefficients (LPCs). An SVM baseline has the lowest accuracy (71.8%) compared with the neural network models. With hidden layers of 13-9-7-5 and 30-20-13-7 neurons, the GWOANN achieves 86.96% and 90.05% accuracy, respectively, whereas the plain ANN achieves 82.50% and 85.27%, respectively.
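    A minimal sketch of the pipeline this abstract describes, assuming librosa for MFCC/LPC extraction and a scikit-learn MLP with the stated 13-9-7-5 hidden layers; the synthetic utterances are placeholders, and the paper's Gray Wolf Optimizer training step is only indicated in a comment (a standard backprop fit stands in for it).

```python
# Voice-based drowsiness classification sketch: MFCC + LPC features feeding a
# neural network with 13-9-7-5 hidden layers (sizes taken from the abstract).
# NOTE: the paper tunes the network weights with a Gray Wolf Optimizer; here a
# standard backprop MLP stands in for that step.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def voice_features(y, sr=16000, n_mfcc=13, lpc_order=12):
    """Mean MFCCs plus LPC coefficients for one utterance (1-D float array)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
    lpc = librosa.lpc(y, order=lpc_order)        # linear prediction coefficients
    return np.concatenate([mfcc, lpc])

# Stand-in utterances (in practice: recordings of alert vs. sleep-deprived drivers).
rng = np.random.default_rng(0)
utterances = [rng.standard_normal(16000).astype(np.float32) for _ in range(8)]
labels = np.array([0, 1] * 4)                    # 0 = alert, 1 = drowsy

X = np.array([voice_features(u) for u in utterances])
clf = MLPClassifier(hidden_layer_sizes=(13, 9, 7, 5), max_iter=2000)
clf.fit(X, labels)                               # the paper refines these weights with GWO
print(clf.score(X, labels))
```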

    Modern drowsiness detection techniques: a review

    According to recent statistics, drowsiness, rather than alcohol, is now responsible for one-quarter of all automobile accidents. As a result, many monitoring systems have been created to reduce and prevent such accidents. However, despite the large number of state-of-the-art drowsiness detection systems, it is not clear which one is the most appropriate. This paper first reviews the existing supervised detection techniques currently in use, grouped into four categories (behavioral, physiological, automobile, and hybrid); it then describes the supervised machine learning classifiers used for drowsiness detection, discusses the advantages and disadvantages of each evaluated technique, and finally recommends a new strategy for detecting drowsiness.
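    As a rough illustration of how the surveyed supervised classifiers are typically compared on a drowsiness dataset, here is a small scikit-learn sketch; the synthetic feature matrix, labels, and the particular models (SVM, random forest, k-NN) are assumptions for illustration, not taken from the review.

```python
# Compare a few supervised classifiers commonly used for drowsiness detection
# with cross-validation on a synthetic stand-in feature set.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # e.g. behavioural/physiological features
y = rng.integers(0, 2, size=200)        # 0 = alert, 1 = drowsy (synthetic labels)

models = {
    "SVM": SVC(kernel="rbf"),
    "RandomForest": RandomForestClassifier(n_estimators=100),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```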

    Vision-based Driver State Monitoring Using Deep Learning

    Road accidents cause thousands of injuries and deaths every year, ranking among the leading causes of death by lifetime odds. More than 90% of traffic accidents are caused by human error [1], including sight obstruction, failure to spot danger through inattention, speeding, expectation errors, and other factors. In recent years, driver monitoring systems (DMS) have been rapidly studied and developed for use in commercial vehicles to prevent crashes caused by human error. A DMS is a vehicle safety system that monitors the driver's attention and issues warnings when necessary. Such a system may contain multiple modules that detect the human factors most associated with accidents, such as drowsiness and distraction. Typical DMS approaches seek driver distraction cues from vehicle acceleration and steering (vehicle-based approach), driver physiological signals (physiological approach), or driver behaviours (behavioural approach). Behavioural driver state monitoring has numerous advantages over its vehicle-based and physiological counterparts, including fast responsiveness and non-intrusiveness. In addition, recent breakthroughs in deep learning enable high-level action and face recognition, expanding driver monitoring coverage and improving model performance.

    This thesis presents CareDMS, a behavioural driver monitoring system built on deep learning methods. CareDMS consists of driver anomaly detection and classification, gaze estimation, and emotion recognition. Each component is developed with state-of-the-art deep learning solutions to address the shortcomings of current DMS functionalities. Combined with a classic drowsiness detection method, CareDMS thoroughly covers three major types of distraction: physical (hands off the steering wheel), visual (eyes off the road ahead), and cognitive (mind off driving).

    Behavioural driver state monitoring poses numerous challenges. Current driver distraction detection methods either lack detailed distraction classification or fail to generalize to unknown driver anomalies. This thesis introduces a novel two-phase proposal and classification network architecture that can flag all forms of distracted driving and recognize driver actions simultaneously, providing the downstream DMS with important information for warning-level customization. Next, gaze estimation for driver monitoring is difficult because drivers tend to make large head movements while driving. This thesis proposes a video-based neural network that jointly learns head pose and gaze dynamics; the design significantly reduces the variance of gaze estimation performance across head poses compared to benchmarks. Furthermore, emotional driving such as road rage and sadness can seriously impair driving performance, yet emotional expression varies widely between individuals, which makes vision-based emotion recognition challenging. This work proposes an efficient and versatile multimodal fusion module that effectively fuses facial expression and human voice for emotion recognition, with clear advantages demonstrated over using a single modality. Finally, CareDMS converts the output of each functionality into a specific driver status measurement and integrates these measurements into an overall level of driver alertness.
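    As one concrete example of the components described above, the audio-visual fusion module for emotion recognition could look roughly like the following PyTorch sketch; the encoder sizes, the concatenation-based fusion, and the seven emotion classes are illustrative assumptions, not the thesis's actual architecture.

```python
# Minimal sketch of a multimodal fusion module: a face-feature encoder and a
# voice-feature encoder whose outputs are fused for emotion classification.
import torch
import torch.nn as nn

class FusionEmotionNet(nn.Module):
    def __init__(self, face_dim=512, voice_dim=128, hidden=256, n_emotions=7):
        super().__init__()
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        self.voice_enc = nn.Sequential(nn.Linear(voice_dim, hidden), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_emotions),
        )

    def forward(self, face_feat, voice_feat):
        # Concatenate the two modality embeddings and classify the fused vector.
        fused = torch.cat([self.face_enc(face_feat),
                           self.voice_enc(voice_feat)], dim=-1)
        return self.classifier(fused)      # emotion logits

# Hypothetical usage with pre-extracted face and voice feature vectors.
model = FusionEmotionNet()
logits = model(torch.randn(4, 512), torch.randn(4, 128))
print(logits.shape)                        # torch.Size([4, 7])
```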

    Human Attention Assessment Using A Machine Learning Approach with GAN-based Data Augmentation Technique Trained Using a Custom Dataset

    Human–robot interaction requires the system to be able to determine whether the user is paying attention. However, training such systems requires massive amounts of data. In this study, we addressed the issue of data scarcity by constructing a large dataset (~120,000 photographs) for the attention detection task. Using this dataset, we then established a strong baseline system. In addition, we extended the proposed system by adding an auxiliary face detection module and introducing a unique GAN-based data augmentation technique. Experimental results show that the proposed system outperforms the baseline models and achieves an accuracy of 88% on the test set. Finally, we created a web application for testing the proposed model in real time.
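    A hedged sketch of how GAN-based augmentation of an attention dataset might be wired up: a pretrained generator samples synthetic face images that are mixed with the real labelled images before classifier training. The generator interface (a latent_dim attribute) and the image shapes are assumptions, not the paper's implementation.

```python
# GAN-based data augmentation sketch: mix synthetic images from a (pretrained,
# hypothetical) generator with real labelled images before classifier training.
import torch

def augment_with_gan(real_images, real_labels, generator, n_synthetic, synthetic_label):
    """Return a dataset enlarged with generator samples for one class."""
    z = torch.randn(n_synthetic, generator.latent_dim)        # latent noise
    with torch.no_grad():
        fake_images = generator(z)                             # (n, C, H, W) images
    images = torch.cat([real_images, fake_images], dim=0)
    labels = torch.cat([real_labels,
                        torch.full((n_synthetic,), synthetic_label, dtype=torch.long)])
    perm = torch.randperm(len(images))                         # shuffle real + synthetic
    return images[perm], labels[perm]

# Toy stand-in generator purely to exercise the function end to end.
class _ToyGenerator(torch.nn.Module):
    latent_dim = 64
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(self.latent_dim, 3 * 32 * 32)
    def forward(self, z):
        return self.fc(z).reshape(-1, 3, 32, 32)

imgs, lbls = augment_with_gan(torch.rand(8, 3, 32, 32),
                              torch.zeros(8, dtype=torch.long),
                              _ToyGenerator(), n_synthetic=4, synthetic_label=1)
print(imgs.shape, lbls.shape)   # torch.Size([12, 3, 32, 32]) torch.Size([12])
```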

    A framework for context-aware driver status assessment systems

    The automotive industry is actively supporting research and innovation to meet manufacturers' requirements related to safety, performance, and the environment. The Green ITS project is among the efforts in that regard. Safety is a major concern for both customers and manufacturers, so much effort has been directed toward developing cutting-edge technologies able to assess driver status in terms of alertness and suitability to drive. With this thesis we aim to create a framework for a context-aware driver status assessment system. Context-aware means that the machine uses background information about the driver and environmental conditions to better ascertain and understand driver status. The system also relies on multiple sensors, mainly video and audio. Using context and multi-sensor data, we need to perform multi-modal analysis and data fusion in order to infer as much knowledge as possible about the driver. Finally, the project is to be continued by other students, so the system should be modular and well documented. With this in mind, a driving simulator integrating multiple sensors was built. This simulator is a starting point for experimentation related to driver status assessment, and a prototype of real-time driver status assessment software is integrated into the platform. To make the system context-aware, we designed a driver identification module based on audio-visual data fusion: at the beginning of each driving session, users are identified and background knowledge about them is loaded to better understand and analyze their behavior. A driver status assessment system was then constructed from two modules. The first performs driver fatigue detection using an infrared camera; fatigue is inferred from the percentage of eye closure, which is the best indicator of fatigue for vision systems. The second is a driver distraction recognition system based on a Kinect sensor; using body, head, and facial expression cues, a fusion strategy is employed to deduce the type of distraction the driver is subject to. Of course, fatigue and distraction are only a fraction of all possible driver states, but these two aspects were studied here primarily because of their dramatic impact on traffic safety. Experimental results show that our system is effective for driver identification and driver inattention detection. It is also highly modular and could be further extended with additional driver status analyses, context sources, or sensor acquisition.
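    The eye-closure measure mentioned above, commonly reported as PERCLOS (the fraction of time the eyes are closed over a recent window), can be computed from per-frame eye-openness scores roughly as in this sketch; the 0.2 openness threshold and 60-second window are illustrative assumptions.

```python
# Sketch: percentage of eye closure over a sliding window of video frames.
# eye_openness: per-frame values in [0, 1] from an IR camera + eye detector
# (1.0 = fully open); frames below the threshold count as "closed".
from collections import deque

def perclos(eye_openness, fps=30, window_s=60, closed_threshold=0.2):
    """Yield the fraction of closed-eye frames over the last `window_s` seconds."""
    window = deque(maxlen=fps * window_s)
    for openness in eye_openness:
        window.append(openness < closed_threshold)
        yield sum(window) / len(window)

# Hypothetical stream: alert for 100 frames, then eyes mostly closed.
stream = [0.9] * 100 + [0.05] * 50
print(list(perclos(stream))[-1])   # a high value indicates likely fatigue
```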