
    A study on tiredness assessment by using eye blink detection

    In this paper, the loss of attention of automotive drivers is studied using eye blink detection. Facial landmark detection is used to locate the eyes, and blinks are then detected with the Eye Aspect Ratio (EAR). The driver's tiredness is decided by comparing the duration of eye closure against a fixed period; drowsiness is also detected by counting the total number of eye blinks per minute and comparing it with a known standard value. If either condition is fulfilled, the system decides the driver is unconscious. A total of 120 samples were taken with the light source placed in front of, behind, and beside the driver, with 40 samples per position. The maximum error rate, 15%, occurred with the light source behind the driver; the best case, a 7.5% error rate, occurred with the light source in front. Across light-source positions, blink detection gave an average error of 11.67%. Another 120 samples were taken at different times of day to measure blinks per minute: the rate was highest in the morning, averaging 5.78 blinks per minute, and lowest at midnight, at 3.33 blinks per minute. The system performed satisfactorily and recognized the eye blink pattern with 92.7% accuracy.
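    The EAR-with-threshold scheme described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the six-point eye landmark layout follows the common convention, and the threshold and minimum-closure values are illustrative assumptions.

    ```python
    # Sketch of Eye Aspect Ratio (EAR) blink counting, as described above.
    # Landmarks p1..p6 follow the usual 6-point eye layout; the threshold
    # (0.21) and minimum closed-frame count (2) are assumed values, not
    # figures from the paper.
    from math import dist

    def eye_aspect_ratio(eye):
        """eye: six (x, y) landmarks ordered p1..p6 around one eye."""
        p1, p2, p3, p4, p5, p6 = eye
        vertical = dist(p2, p6) + dist(p3, p5)   # two vertical distances
        horizontal = dist(p1, p4)                # one horizontal distance
        return vertical / (2.0 * horizontal)

    def count_blinks(ear_series, threshold=0.21, min_closed_frames=2):
        """Count blinks as runs of consecutive frames with EAR below threshold."""
        blinks, closed = 0, 0
        for ear in ear_series:
            if ear < threshold:
                closed += 1
            else:
                if closed >= min_closed_frames:
                    blinks += 1
                closed = 0
        if closed >= min_closed_frames:   # run ending at the last frame
            blinks += 1
        return blinks
    ```

    Counting blinks over one minute of frames and comparing the result with a reference rate, or timing how long EAR stays below the threshold, reproduces the two decision rules the abstract combines.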

    Driver attention analysis and drowsiness detection using mobile devices

    Drowsiness and lack of attention are among the most fatal and underrated causes of driving accidents. In this thesis, a non-intrusive classifier based on features extracted from drivers' facial movements is developed, focusing on detection strategies that can be deployed on low-complexity devices such as smartphones. Different classification architectures are proposed and studied to determine which implementation performs best in terms of detection accuracy.

    SleepyWheels: An Ensemble Model for Drowsiness Detection leading to Accident Prevention

    Around 40 percent of highway driving accidents in India occur because the driver falls asleep at the wheel. Much research into driver drowsiness detection is ongoing, but existing approaches suffer from the complexity and cost of their models. In this paper, SleepyWheels, a method that combines a lightweight neural network with facial landmark identification, is proposed to detect driver fatigue in real time. SleepyWheels is robust across a wide range of test scenarios, including the occlusion of facial features when the eyes or mouth are covered, varying driver skin tones, camera placements, and viewing angles, and it performs well when deployed in real-time systems. SleepyWheels uses EfficientNetV2 together with a facial landmark detector to identify drowsiness. Trained on a purpose-built driver sleepiness dataset, the model achieves an accuracy of 97 percent. Because the model is lightweight, it can be deployed as a mobile application on various platforms.

    Driver Drowsiness Detection by Applying Deep Learning Techniques to Sequences of Images

    This work presents the development of an ADAS (advanced driver assistance system) focused on driver drowsiness detection, whose objective is to alert drivers of their drowsy state to avoid road traffic accidents. In a driving environment, fatigue detection must be performed non-intrusively, without bothering the driver with alarms when he or she is not drowsy. Our approach to this open problem uses 60-second image sequences recorded so that the subject's face is visible. To detect whether the driver shows symptoms of drowsiness, two alternative solutions are developed, both focused on minimizing false positives. The first uses a recurrent and convolutional neural network, while the second uses deep learning techniques to extract numeric features from the images, which are then fed into a fuzzy logic-based system. The accuracy obtained by both systems is similar: around 65% on training data and 60% on test data. However, the fuzzy logic-based system stands out because it avoids raising false alarms, reaching a specificity (the proportion of videos in which the driver is not drowsy that are correctly classified) of 93%. Although the obtained rates are not yet very satisfactory, the proposals presented in this work are promising and can be considered a solid baseline for future work. This work was supported by the Spanish Government under projects PID2019-104793RB-C31, TRA2016-78886-C3-1-R, RTI2018-096036-B-C22, PEAVAUTO-CM-UC3M, and by the Region of Madrid's Excellence Program (EPUC3M17).
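    The second pipeline described in this abstract, numeric features fed into a fuzzy logic-based decision stage, can be sketched as follows. This is a hedged illustration only: the feature names (`blinks_per_min`, `mean_closure_s`), the triangular membership shapes, and all numeric parameters are assumptions, not the paper's fitted system.

    ```python
    # Minimal fuzzy-style sketch of mapping numeric drowsiness features to a
    # score in [0, 1]. Membership functions and parameters are illustrative
    # assumptions, not values from the paper.
    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def drowsiness_score(blinks_per_min, mean_closure_s):
        """Combine two fuzzy memberships into a single drowsiness score."""
        slow_blinking = tri(blinks_per_min, 0, 4, 10)      # drowsy blink rates
        long_closure = tri(mean_closure_s, 0.2, 1.0, 3.0)  # prolonged eye closures
        # Conservative AND (min): both symptoms must be present before the
        # score rises, which keeps false alarms rare, mirroring the paper's
        # emphasis on minimizing false positives.
        return min(slow_blinking, long_closure)
    ```

    Using `min` as the conjunction is the standard Mamdani-style fuzzy AND; it encodes the same design goal the abstract highlights, trading some sensitivity for a very low false-positive rate.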

    EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays

    While gaze holds a lot of promise for hands-free interaction with public displays, remote eye trackers with their confined tracking box restrict users to a single stationary position in front of the display. We present EyeScout, an active eye tracking system that combines an eye tracker mounted on a rail system with a computational method that automatically detects the user's lateral movement and aligns the tracker with it. EyeScout addresses key limitations of current gaze-enabled large public displays by offering two novel gaze-interaction modes for a single user: in "Walk then Interact", the user can walk up to an arbitrary position in front of the display and interact, while in "Walk and Interact", the user can interact even while on the move. We report on a user study showing that EyeScout is well perceived by users, extends a public display's sweet spot into a sweet line, and reduces gaze-interaction kick-off time to 3.5 seconds, a 62% improvement over state-of-the-art solutions. We discuss sample applications that demonstrate how EyeScout can enable position- and movement-independent gaze interaction with large public displays.

    Estimating Level of Engagement from Ocular Landmarks

    E-learning offers many advantages, such as being economical, flexible, and customizable, but it also has challenging aspects, such as a lack of social interaction, which results in contemplation and a sense of remoteness. To overcome these and sustain learners' motivation, various stimuli can be incorporated; such adjustments, however, first require an assessment of engagement level. We therefore propose estimating engagement level from facial landmarks, exploiting the facts that (i) blinking promotes perceptual decoupling during mentally demanding tasks; (ii) eye strain increases blinking rate, which also scales with task disengagement; (iii) eye aspect ratio is closely connected with attentional state; and (iv) users' head position is correlated with their level of involvement. Building empirical models of these actions, we devise a probabilistic estimation framework. Our results indicate that high and low levels of engagement are identified with considerable accuracy, whereas medium levels are inherently more challenging, which is also confirmed by the inter-rater agreement of expert coders.
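    One simple way to combine the ocular cues listed in this abstract into a probabilistic estimate is a naive-Bayes scheme over empirical per-class feature models. The sketch below assumes Gaussian likelihoods and a uniform prior; the feature names and every numeric parameter are illustrative assumptions, not the paper's fitted models.

    ```python
    # Hedged sketch of probabilistic engagement estimation from ocular cues.
    # Per-class Gaussian parameters (mu, sigma) below are invented for
    # illustration; a real system would fit them from labeled data.
    import math

    def gauss(x, mu, sigma):
        """Gaussian likelihood of x under N(mu, sigma^2)."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    MODELS = {
        "high":   {"blink_rate": (8, 3),  "ear": (0.30, 0.04), "head_pitch": (0, 5)},
        "medium": {"blink_rate": (14, 4), "ear": (0.27, 0.05), "head_pitch": (5, 8)},
        "low":    {"blink_rate": (22, 5), "ear": (0.22, 0.05), "head_pitch": (15, 10)},
    }

    def engagement_posterior(features):
        """Naive-Bayes combination of the cues, with a uniform prior over levels."""
        scores = {}
        for level, params in MODELS.items():
            likelihood = 1.0
            for name, value in features.items():
                mu, sigma = params[name]
                likelihood *= gauss(value, mu, sigma)
            scores[level] = likelihood
        total = sum(scores.values())
        return {level: s / total for level, s in scores.items()}
    ```

    Because the medium-engagement model overlaps both neighbors in every feature, its posterior mass is split even for typical medium-level inputs, which echoes the abstract's finding that medium levels are inherently harder to identify.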