15 research outputs found

    Learning multimodal representations for drowsiness detection

    Visual analysis of fatigue in Industry 4.0

    The performance of manufacturing operations relies heavily on the operators’ performance. When operators begin to exhibit signs of fatigue, both their individual performance and the overall performance of the manufacturing plant tend to decline. This research presents a methodology for analyzing fatigue in assembly operations, considering indicators such as the Eye Aspect Ratio (EAR), operator pose, and elapsed operating time. To facilitate the analysis, a dataset of assembly operations was generated and recorded from three different perspectives: frontal, lateral, and top views. The top view enables the analysis of the operator’s face and posture to identify hand positions. By labeling the actions in our dataset, we train a deep learning system to recognize the sequence of operator actions required to complete the operation. Additionally, we propose a model for determining the level of fatigue by processing multimodal information acquired from various sources, including eye blink rate, operator pose, and task duration during assembly operations. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. The authors thank the “A way of making Europe” European Regional Development Fund (ERDF) and MCIN/AEI/10.13039/501100011033 for supporting this work under the MoDeaAS project (grant PID2019-104818RB-I00).
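    The EAR referenced above is a standard ratio of vertical to horizontal eye-landmark distances. As a rough illustration (a minimal sketch using the common six-point eye landmark ordering, not the authors' exact pipeline), it can be computed as follows:

        import numpy as np

        def eye_aspect_ratio(eye):
            # eye: array of six (x, y) landmarks in the usual ordering
            # (outer corner, two upper-lid points, inner corner, two lower-lid points)
            a = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
            b = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
            c = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
            return (a + b) / (2.0 * c)

        # Illustrative use: an EAR staying below roughly 0.2 for several
        # consecutive frames is commonly treated as a blink or eye closure.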

    Efficient and Robust Driver Fatigue Detection Framework Based on the Visual Analysis of Eye States

    Fatigue detection based on vision is widely employed in vehicles due to its real-time and reliable detection results. With the coronavirus disease (COVID-19) outbreak, many detection systems based on facial characteristics have become unreliable because the face is covered by a mask. In this paper, we propose a vision-based fatigue detection system for monitoring drivers that is robust to mask coverings, changing illumination, and driver head movement. Our system has three main modules: face key point alignment, fatigue feature extraction, and fatigue measurement based on fused features. The innovative core techniques are as follows: (1) a robust key point alignment algorithm that fuses global face information and regional eye information, (2) dynamic threshold methods to extract fatigue characteristics, and (3) a stable fatigue measurement based on fusing the percentage of eyelid closure (PERCLOS) and the proportion of long closure duration blinks (PLCDB). The excellent performance of the proposed algorithm and methods is verified in experiments. The experimental results show that our key point alignment algorithm is robust to different scenes and that our proposed fatigue measurement is more reliable due to the fusion of PERCLOS and PLCDB.
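    PERCLOS and PLCDB as used above are window statistics over per-frame eye states. The sketch below shows one plausible way to compute them from a sequence of eye-closed flags; the window handling and the 0.5 s "long closure" threshold are illustrative assumptions, not the paper's exact definitions:

        import numpy as np

        def perclos(closed_flags):
            # Fraction of frames in the window during which the eyes are closed.
            return float(np.mean(closed_flags))

        def plcdb(closed_flags, fps, long_closure_s=0.5):
            # Proportion of blinks whose closure duration exceeds the "long" threshold.
            durations, run = [], 0
            for closed in list(closed_flags) + [0]:   # trailing 0 flushes the last run
                if closed:
                    run += 1
                elif run:
                    durations.append(run / fps)
                    run = 0
            if not durations:
                return 0.0
            return sum(d > long_closure_s for d in durations) / len(durations)

        closed = [0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0]   # per-frame eye-closed flags
        print(perclos(closed), plcdb(closed, fps=10))   # ~0.583, 0.5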

    Research on a Real-Time Driver Fatigue Detection Algorithm Based on Facial Video Sequences

    Research on driver fatigue detection is of great significance for improving driving safety. This paper proposes a real-time comprehensive driver fatigue detection algorithm based on facial landmarks to improve detection accuracy; it detects the driver’s fatigue status from facial video sequences without requiring the driver to wear any additional devices. A tasks-constrained deep convolutional network is constructed to detect the face region based on 68 key points, which solves the optimization problem caused by the different convergence speeds of each task. From the real-time facial video images, the eye aspect ratio (EAR), mouth aspect ratio (MAR), and percentage of eye closure time (PERCLOS) are calculated based on the facial landmarks. A comprehensive driver fatigue assessment model is established to assess the fatigue status of drivers through eye/mouth feature selection. A series of comparative experiments shows that the proposed algorithm achieves good performance in both accuracy and speed for driver fatigue detection.
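    The MAR feature mentioned above is the mouth analogue of the EAR: a ratio of lip-opening height to mouth width over the inner-lip landmarks. A brief sketch, assuming the common 68-point indexing where the inner lips are points 60-67 (not necessarily the paper's exact definition):

        import numpy as np

        def mouth_aspect_ratio(mouth):
            # mouth: the eight inner-lip landmarks (points 60-67) as (x, y) rows
            vertical = np.mean([np.linalg.norm(mouth[i] - mouth[j])
                                for i, j in [(1, 7), (2, 6), (3, 5)]])
            horizontal = np.linalg.norm(mouth[0] - mouth[4])   # left to right corner
            return vertical / horizontal

        # Illustrative use: a MAR held above roughly 0.6 for a second or more
        # is often treated as a yawn.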

    IoT-Based Vision Techniques in Autonomous Driving

    As more people drive vehicles, there is a corresponding increase in the number of deaths and injuries caused by road traffic accidents. Various solutions have therefore been proposed to reduce the impact of accidents. One of the most popular is autonomous driving, which involves a series of embedded systems. These embedded systems assist drivers by providing crucial information on the traffic environment, by acting to protect the vehicle occupants in particular situations, or by aiding driving. Autonomous driving has the capacity to improve transportation services dramatically. Given the successful use of visual technologies and the implementation of driver assistance systems in recent decades, vehicles are poised to eliminate accidents, congestion, collisions, and pollution. In addition, the IoT is a state-of-the-art development that will usher in a new age of the Internet by allowing different physical objects to connect without the need for human interaction. The accuracy with which the vehicle's environment is detected from static images or videos, as well as the IoT connections and data management, is critical to the success of autonomous driving. The main aim of this review article is to encapsulate the latest advances in vision strategies and IoT technologies for autonomous driving by analysing numerous publications from well-known databases.

    A sophisticated Drowsiness Detection System via Deep Transfer Learning for real time scenarios

    Driver drowsiness is one of the leading causes of road accidents, resulting in serious physical injuries, fatalities, and substantial economic losses. A sophisticated Driver Drowsiness Detection (DDD) system can alert the driver in case of abnormal behavior and avoid catastrophes. Several studies have already addressed driver drowsiness through behavioral measures and facial features. In this paper, we propose a hybrid real-time DDD system based on the Eyes Closure Ratio and Mouth Opening Ratio, using a simple camera and deep learning techniques. The system models the driver's behavior in order to alert the driver in drowsy states and avoid potential accidents. The main contribution of the proposed approach is a reliable system able to avoid falsely detected drowsiness situations and to alert only on real ones. To this end, our procedure is divided into two processes. The offline process builds a classification module using pretrained Convolutional Neural Networks (CNNs) to detect driver drowsiness. In the online process, we calculate the percentage of eye closure and the yawning frequency of the driver from real-time video, using the Chebyshev distance instead of the classic Euclidean distance. The driver's drowsiness state is then evaluated with the aid of the pretrained CNNs in an ensemble learning paradigm. To improve the models' performance, we applied data augmentation techniques to the generated dataset. The accuracies achieved are 97% for the VGG16 model, 96% for the VGG19 model, and 98% for the ResNet50 model. The system can assess the driver's dynamics with a precision rate of 98%.
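    The swap of the classic Euclidean distance for the Chebyshev distance mentioned above concerns how the separation between facial landmarks (eyelids, lips) is measured. A small illustration of the two metrics on a pair of landmark points (the coordinates are made up for the example):

        import numpy as np

        def euclidean(p, q):
            return float(np.linalg.norm(p - q))

        def chebyshev(p, q):
            # Chebyshev (L-infinity) distance: the largest coordinate-wise difference.
            return float(np.max(np.abs(p - q)))

        upper_lid = np.array([120.0, 85.0])
        lower_lid = np.array([121.0, 97.0])
        print(euclidean(upper_lid, lower_lid))   # ~12.04
        print(chebyshev(upper_lid, lower_lid))   # 12.0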

    Real-Time Driver Fatigue Detection with Artificial Intelligence

    Non-thesis Master's graduation project. Available statistics show that traffic accidents have many causes: wrong behaviour, carelessness, negligence, and similar factors combine to produce accidents, whose consequences can be as serious as loss of life and property. Today, one of the leading causes of traffic accidents is drivers being tired and sleep-deprived. Monitoring drivers' momentary state and detecting fatigue would therefore lead to a significant reduction in the number of accidents, and this requires a real-time, high-accuracy system. In the vehicle's safety system, the driver's face is recognized; when the driver's eyes blink and the driver appears sleepy, warnings are issued and an alarm is sounded to keep the driver safe. Drowsiness is one of the main causes of today's traffic accidents, so a drowsiness warning system and a vehicle safety system are developed here to prevent such accidents, built with Artificial Intelligence (AI) technology. First, the driver's image is captured and identified using face recognition techniques. Once the driver is in the vehicle and begins driving, an alert/alarm is generated if, for example, the driver feels sleepy, so that the driver can stay awake, take a break, and continue driving afterwards.
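    The alert logic described (warn and sound an alarm when the driver looks sleepy) usually reduces to counting consecutive frames with closed eyes. A minimal sketch of that rule; the frame threshold and the upstream eye-state detector are assumptions, not the project's actual implementation:

        CLOSED_FRAMES_LIMIT = 20   # assumed: roughly 0.7 s at 30 fps
        closed_run = 0

        def should_alarm(eyes_closed: bool) -> bool:
            """Return True when an alarm should be raised for the current frame."""
            global closed_run
            closed_run = closed_run + 1 if eyes_closed else 0
            return closed_run >= CLOSED_FRAMES_LIMIT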

    Analysis of algorithms for monitoring user fatigue and concentration using computer vision

    This work investigates three different algorithms for detecting fatigue using computer vision. The freely available OpenCV and Dlib libraries were used to capture video and the faces in it, and a fatigue detection application was developed in Python. The application was tested on a self-collected dataset; testing established the approximate accuracy of the algorithms and the factors that can have a significant impact on it.
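    The capture pipeline described (OpenCV for video, Dlib for face detection) is typically wired up along these lines; a minimal sketch, not the application built in the work above:

        import cv2
        import dlib

        detector = dlib.get_frontal_face_detector()
        cap = cv2.VideoCapture(0)                  # default webcam

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for face in detector(gray):            # dlib rectangles
                cv2.rectangle(frame, (face.left(), face.top()),
                              (face.right(), face.bottom()), (0, 255, 0), 2)
            cv2.imshow("faces", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

        cap.release()
        cv2.destroyAllWindows()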

    Vision-based Driver State Monitoring Using Deep Learning

    Road accidents cause thousands of injuries and losses of lives every year, ranking among the leading causes of death by lifetime odds. More than 90% of traffic accidents are caused by human error [1], including sight obstruction, failure to spot danger through inattention, speeding, expectation errors, and other reasons. In recent years, driver monitoring systems (DMS) have been rapidly studied and developed for use in commercial vehicles to prevent car crashes caused by human error. A DMS is a vehicle safety system that monitors the driver’s attention and issues warnings when necessary. Such a system may contain multiple modules that detect the most accident-related human factors, such as drowsiness and distraction. Typical DMS approaches seek driver distraction cues from vehicle acceleration and steering (vehicle-based approach), driver physiological signals (physiological approach), or driver behaviours (behavioural approach). Behavioural driver state monitoring has numerous advantages over its vehicle-based and physiological counterparts, including fast responsiveness and non-intrusiveness. In addition, the recent breakthrough in deep learning enables high-level action and face recognition, expanding driver monitoring coverage and improving model performance. This thesis presents CareDMS, a behavioural driver monitoring system using deep learning methods. CareDMS consists of driver anomaly detection and classification, gaze estimation, and emotion recognition. Each module is developed with state-of-the-art deep learning solutions to address the shortcomings of current DMS functionalities. Combined with a classic drowsiness detection method, CareDMS thoroughly covers three major types of distraction: physical (hands off the steering wheel), visual (eyes off the road ahead), and cognitive (mind off driving). There are numerous challenges in behavioural driver state monitoring. Current driver distraction detection methods either lack detailed distraction classification or fail to generalize to unknown driver anomalies. This thesis introduces a novel two-phase proposal and classification network architecture that can flag all forms of distracted driving and recognize driver actions simultaneously, providing the downstream DMS with important information for warning-level customization. Next, gaze estimation for driver monitoring is difficult because drivers tend to make large head movements while driving. This thesis proposes a video-based neural network that jointly learns head pose and gaze dynamics; the design significantly reduces the per-head-pose variance of gaze estimation performance compared to benchmarks. Furthermore, emotional driving such as road rage and sadness can seriously impact driving performance, yet individuals express emotions in varied ways, which makes vision-based emotion recognition a challenging task. This work proposes an efficient and versatile multimodal fusion module that effectively fuses facial expression and human voice for emotion recognition, demonstrating clear advantages over using a single modality. Finally, the driver state monitoring system, CareDMS, converts the output of each functionality into a specific driver-status measurement and integrates these measurements into an overall level of driver alertness.
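    The multimodal fusion module mentioned above (facial expression plus voice for emotion recognition) can be pictured as a late-fusion network over two embeddings. The PyTorch-style sketch below is only an outline under assumed embedding sizes and emotion classes, not the CareDMS implementation:

        import torch
        import torch.nn as nn

        class AudioVisualFusion(nn.Module):
            # Late fusion of a face-expression embedding and a voice embedding.
            def __init__(self, face_dim=512, voice_dim=256, hidden=256, n_emotions=7):
                super().__init__()
                self.head = nn.Sequential(
                    nn.Linear(face_dim + voice_dim, hidden),
                    nn.ReLU(),
                    nn.Dropout(0.3),
                    nn.Linear(hidden, n_emotions),
                )

            def forward(self, face_emb, voice_emb):
                return self.head(torch.cat([face_emb, voice_emb], dim=-1))

        logits = AudioVisualFusion()(torch.randn(4, 512), torch.randn(4, 256))  # shape (4, 7)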

    Detecting fatigue in car drivers and aircraft pilots by using eye-motion metrics

    Fatigue is widely recognised as a risk to the safety of aviation and ground transportation. To enhance transport safety, fatigue detection systems based on psychophysiological measures have been under development for many years; however, a reliable and robust fatigue detection system is still missing. This thesis starts with a literature review of fatigue concepts in the transportation field and of current psychophysiological measures of fatigue, and then narrows its focus to improving fatigue detection systems using eye-motion measures. A research gap was identified: current fatigue detection systems focus only on part of the sleepiness symptoms, whereas a comprehensive system that also covers mental fatigue is needed. To address this gap, four studies were conducted to reshape the understanding of fatigue in transportation and to explore effective eye-motion metrics for indicating fatigue under different causal factors. Studies 1 and 2 investigated the influence of two types of task-related fatigue on eye movement. Twenty participants completed a vigilance task before and after a 1-h simulator-based drive with a secondary task. Forty participants, divided equally into two groups, completed the same task before and after 1-h and 1.5-h monotonous driving tasks. The results demonstrated that the two types of task-related fatigue, caused by cognitive overload and prolonged underload, induced different physiological responses in the eye-motion metrics, and that increased mental fatigue decreased driver vigilance. Studies 3 and 4 simulated two hazardous fatigue scenarios for pilots. Study 3 explored the relationship between eye-motion metrics and pilot fatigue in an underload flight condition with sleep deprivation (low workload and sleep pressure). Study 4 explored which eye-motion metrics effectively estimate pilots' cognitive fatigue imposed by time on task and high workload. The results suggested different eye-motion metrics for indicating sleepiness and mental fatigue. In addition, based on the sleepiness and mental fatigue indicators from Studies 3 and 4, several classifiers were built and evaluated to accurately detect sleepiness and mental fatigue. These findings show that considering causal factors such as sleep pressure, time on task and workload when using eye-motion metrics to detect fatigue can improve the accuracy and face validity of current fatigue detection systems.
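    The sleepiness and mental-fatigue classifiers mentioned above can be prototyped directly from a table of eye-motion metrics. The sketch below uses synthetic placeholder features (blink rate, blink duration, fixation duration, and saccade velocity are assumed column meanings), not the thesis's data or models:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))             # rows: trials, columns: eye-motion metrics
        y = rng.integers(0, 2, size=200)          # 0 = alert, 1 = fatigued (placeholder labels)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())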