366 research outputs found

    Project Awakesure: Intelligent Drowsiness Detection Using Eye Tracking

    Drowsiness is the state of being sleepy: a drowsy person may feel exhausted or lethargic, struggle to stay awake, tend to be less attentive, and may even nod off, though they can still be awakened. An increasing number of occupations nowadays call for sustained focus. Drivers must keep a watchful eye on the road so that they can respond quickly to unexpected incidents, and many road incidents are directly caused by tired drivers. To drastically lower the frequency of fatigue-related auto accidents, it is crucial to develop technologies that can identify a driver's poor psychophysical state and alert them to it. However, there are many challenges in developing systems that can quickly and accurately recognize a driver's signs of fatigue. Vision-based technology is one option for implementing driver fatigue monitoring systems. This article describes the available driver drowsiness detection systems. Here, we assess the driver's level of sleepiness using their visual system. The automated system for preventing accidents and monitoring sleepy drivers developed for this study is based on detecting variations in the duration of eye blinks. Our technique makes use of the eyes' postulated horizontal symmetry to identify visual changes in eye positions, and positions a standard webcam in front of the driver's seat to detect eye blinks. The system identifies the eyes based on a specific EAR (Eye Aspect Ratio).
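    A minimal sketch of how such an EAR-based blink check could look, assuming the common six-landmark eye model (two vertical eyelid distances over one horizontal eye-corner distance); the landmark coordinates and the 0.25 blink threshold below are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: Eye Aspect Ratio (EAR) from six eye landmarks.
# Landmark ordering follows the common 6-point eye model
# (p1..p6: one horizontal pair, two vertical pairs); the 0.25
# blink threshold is an illustrative assumption, not from the paper.
import math

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    vertical = euclidean(p2, p6) + euclidean(p3, p5)
    horizontal = euclidean(p1, p4)
    return vertical / (2.0 * horizontal)

def is_blinking(ear, threshold=0.25):
    # EAR drops sharply when the eyelids close.
    return ear < threshold

if __name__ == "__main__":
    open_eye = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]
    ear = eye_aspect_ratio(*open_eye)
    print(round(ear, 3), is_blinking(ear))  # ~0.667, False
```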

    Video surveillance for monitoring driver's fatigue and distraction

    Fatigue and distraction in drivers represent a great risk for road safety. For both types of driver behavior problem, image analysis of the eyes, mouth, and head movements gives valuable information. In this paper we present a system for monitoring fatigue and distraction in drivers by evaluating their performance using image processing. We extract visual features related to nods, yawns, eye closure and opening, and mouth movements to detect fatigue as well as to identify diversion of attention from the road. We achieve averages of 98.3% sensitivity and 98.8% specificity for detection of the driver's fatigue, and 97.3% and 99.2% for detection of the driver's distraction, when evaluating four video sequences with different drivers.
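    The results above are reported as sensitivity and specificity; as a reminder of how those two rates are obtained, here is a minimal sketch computing them from per-frame confusion-matrix counts. The counts are invented for illustration, not the paper's data.

```python
# Minimal sketch: sensitivity and specificity from confusion-matrix counts.
# The counts below are illustrative only, not the paper's evaluation data.
def sensitivity(tp, fn):
    # True positive rate: fraction of fatigued/distracted frames correctly flagged.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True negative rate: fraction of normal-driving frames correctly passed.
    return tn / (tn + fp)

if __name__ == "__main__":
    tp, fn, tn, fp = 590, 10, 988, 12   # hypothetical per-frame counts
    print(f"sensitivity = {sensitivity(tp, fn):.3f}")
    print(f"specificity = {specificity(tn, fp):.3f}")
```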

    Intelligent and secure real-time auto-stop car system using deep-learning models

    In this study, we introduce an innovative auto-stop car system empowered by deep learning, specifically employing two Convolutional Neural Networks (CNNs) for face recognition and driver drowsiness detection. Implemented on a Raspberry Pi 4, our system is designed to cater exclusively to certified drivers, ensuring enhanced safety through intelligent features. The face recognition CNN accurately identifies authorized drivers, employing deep learning techniques to verify their identity before granting access to vehicle functions. This first model demonstrates a remarkable accuracy of 99.1%, surpassing existing solutions in secure driver authentication. Simultaneously, our second CNN focuses on real-time detection of driver drowsiness, monitoring eye movements and utilizing a touch sensor on the steering wheel. Upon detecting signs of drowsiness, the system issues an immediate alert through a speaker, initiates an emergency park, and sends a distress message via the Global Positioning System (GPS). The successful implementation of our proposed system on the Raspberry Pi 4, integrated with a real-time monitoring camera, attains an impressive accuracy of 99.1% for both deep learning models. This performance surpasses current industry benchmarks, showcasing the efficacy and reliability of our solution. Our auto-stop car system advances user convenience and establishes unparalleled safety standards, marking a significant stride in autonomous vehicle technology.
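    The abstract does not specify the CNN architecture, so the following is only a generic sketch of a small binary eye-state classifier of the kind that could back a drowsiness alert on a Raspberry Pi 4; the layer sizes, the 64x64 grayscale input, and the class labels are assumptions, not the paper's models.

```python
# Minimal sketch: a small Keras CNN for open/closed-eye classification.
# Layer sizes, input resolution, and classes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_eye_state_cnn(input_shape=(64, 64, 1)):
    model = tf.keras.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # 1 = eyes closed (drowsiness cue)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_eye_state_cnn().summary()
```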

    Physiological-based Driver Monitoring Systems: A Scoping Review

    A physiological-based driver monitoring system (DMS) has attracted research interest and has great potential for providing more accurate and reliable monitoring of the driver's state during a driving experience. Many driver monitoring systems are driver-behavior-based or vehicle-based. When these non-physiological DMS are coupled with physiological data analysis from electroencephalography (EEG), electrooculography (EOG), electrocardiography (ECG), and electromyography (EMG), the physical and emotional state of the driver may also be assessed. Drivers' wellness can also be monitored, and hence traffic collisions can be avoided. This paper highlights work published in the past five years related to physiological-based DMS. Specifically, we focus on the physiological indicators applied in DMS design and development. Work utilizing key physiological indicators related to driver identification, driver alertness, driver drowsiness, driver fatigue, and drunk driving is identified and described based on the PRISMA Extension for Scoping Reviews (PRISMA-ScR) framework. The relationship between selected papers is visualized using keyword co-occurrence. Findings are presented using a narrative review approach based on classifications of DMS. Finally, the challenges of physiological-based DMS are highlighted in the conclusion. DOI: 10.28991/CEJ-2022-08-12-020
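    The review maps relationships between papers via keyword co-occurrence; a minimal sketch of how such co-occurrence counts can be built from per-paper keyword lists is shown below. The keyword sets are invented for illustration, not drawn from the review.

```python
# Minimal sketch: keyword co-occurrence counts across a set of papers.
# The keyword lists are invented for illustration only.
from collections import Counter
from itertools import combinations

papers = [
    {"EEG", "drowsiness", "driver monitoring"},
    {"ECG", "fatigue", "driver monitoring"},
    {"EEG", "fatigue", "drowsiness"},
]

cooccurrence = Counter()
for keywords in papers:
    for pair in combinations(sorted(keywords), 2):
        cooccurrence[pair] += 1  # count each keyword pair appearing in the same paper

for (a, b), count in cooccurrence.most_common(3):
    print(f"{a} <-> {b}: {count}")
```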

    An Intelligent Safety System for Human-Centered Semi-Autonomous Vehicles

    Nowadays, automobile manufacturers strive to develop ways to make cars fully safe. Monitoring the driver's actions with computer vision techniques to detect driving mistakes in real time, and then planning autonomous driving maneuvers to avoid collisions, is one of the most important issues investigated in machine vision and Intelligent Transportation Systems (ITS). The main goal of this study is to prevent accidents caused by fatigue, drowsiness, and driver distraction. To avoid these incidents, this paper proposes an integrated safety system that continuously monitors the driver's attention and the vehicle's surroundings, and finally decides whether the actual steering control status is safe or not. For this purpose, we equipped an ordinary car, called FARAZ, with a vision system consisting of four mounted cameras, along with a universal car tool for communicating with the surrounding factory-installed sensors and other car systems and for sending commands to actuators. The proposed system leverages a scene understanding pipeline using deep convolutional encoder-decoder networks and a driver state detection pipeline. We have been identifying and assessing domestic capabilities for developing these technologies, specifically for ordinary vehicles, in order to manufacture smart cars and also to provide an intelligent system that increases safety and assists the driver in various conditions and situations.
    Comment: 15 pages and 5 figures. Submitted to the International Conference on Contemporary Issues in Data Science (CiDaS 2019). Learn more about this project at https://iasbs.ac.ir/~ansari/fara
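    The core decision the integrated system makes is whether the current steering control status is safe, given the driver's state and the scene; the sketch below shows one hypothetical way such a fusion rule could look. The state categories, the risk score, and the thresholds are assumptions for illustration, not FARAZ's actual decision logic.

```python
# Minimal sketch: fusing a driver-state estimate with a scene-risk estimate
# into a safe/unsafe steering decision. Enum values, thresholds, and the
# rule itself are illustrative assumptions.
from enum import Enum

class DriverState(Enum):
    ATTENTIVE = 0
    DISTRACTED = 1
    DROWSY = 2

def steering_is_safe(driver_state: DriverState, scene_risk: float) -> bool:
    """scene_risk in [0, 1], e.g. derived from a scene-understanding pipeline."""
    if driver_state is DriverState.ATTENTIVE:
        return scene_risk < 0.8   # an attentive driver tolerates more scene risk
    if driver_state is DriverState.DISTRACTED:
        return scene_risk < 0.4
    return False                  # drowsy driver: hand control to the safety system

if __name__ == "__main__":
    print(steering_is_safe(DriverState.DISTRACTED, 0.3))  # True
    print(steering_is_safe(DriverState.DROWSY, 0.1))      # False
```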

    A novel Big Data analytics and intelligent technique to predict driver's intent

    The modern age offers great potential for automatically predicting the driver's intent, thanks to the increasing miniaturization of computing technologies, rapid advancements in communication technologies, and continuous connectivity of heterogeneous smart objects. Inside the cabin and engine of modern cars, dedicated computer systems need to be able to exploit the wealth of information generated by heterogeneous data sources with different contextual and conceptual representations. Processing and utilizing this diverse and voluminous data involves many challenges concerning the design of the computational technique used to perform this task. In this paper, we investigate the various data sources available in the car and the surrounding environment which can be utilized as inputs to predict the driver's intent and behavior. As part of investigating these potential data sources, we conducted experiments on the e-calendars of a large number of employees and reviewed a number of available geo-referencing systems. Through the results of a statistical analysis, and by computing location recognition accuracy, we explored in detail the potential of calendar location data for detecting the driver's intentions. In order to exploit the numerous diverse data inputs available in modern vehicles, we investigate the suitability of different Computational Intelligence (CI) techniques and propose a novel fuzzy computational modelling methodology. Finally, we outline the impact of applying advanced CI and Big Data analytics techniques in modern vehicles on the driver and society in general, and discuss ethical and legal issues arising from the deployment of intelligent self-learning cars.
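    To illustrate the general flavor of a fuzzy model for driver intent, here is a minimal Mamdani-style sketch in plain Python; the input variables (time to a calendar event, distance to its venue), the membership functions, and the single rule are illustrative assumptions rather than the paper's methodology.

```python
# Minimal sketch: a tiny Mamdani-style fuzzy rule evaluation of the general
# kind used for driver-intent prediction. Inputs, membership functions, and
# the rule are illustrative assumptions, not the paper's model.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def intent_to_drive_to_meeting(minutes_to_event, km_to_venue):
    # Fuzzify the two crisp inputs.
    event_soon = tri(minutes_to_event, 0, 30, 90)   # "the calendar event is soon"
    venue_near = tri(km_to_venue, 0, 2, 10)         # "the venue is near"
    # Rule: IF event is soon AND venue is near THEN intent is high (min as AND).
    return min(event_soon, venue_near)

if __name__ == "__main__":
    print(round(intent_to_drive_to_meeting(25, 3.0), 2))  # ~0.83
```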

    An EEG-based perceptual function integration network for application to drowsy driving

    Drowsy driving is among the most critical causes of fatal crashes. Thus, the development of an effective algorithm for detecting a driver's cognitive state demands immediate attention. For decades, electroencephalography studies have observed clear evidence that the brain's rhythmic activities fluctuate between alertness and drowsiness. Recognizing this physiological signal is the major consideration in neural engineering for designing a feasible countermeasure. This study proposed a perceptual function integration system which uses spectral features from multiple independent brain sources to recognize the driver's vigilance state. The analysis of brain spectral dynamics provided physiological evidence that the activities of the multiple cortical sources were highly related to changes in the vigilance state. The system showed robust and improved performance, with an accuracy as high as 88%, higher than any result obtained by a single-source approach.
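    As an illustration of the kind of spectral feature such a system relies on, here is a minimal sketch that computes theta and alpha band power for a single EEG channel with NumPy; the sampling rate, band edges, and synthetic test signal are assumptions, not the study's processing pipeline.

```python
# Minimal sketch: theta/alpha band power from one EEG channel via a
# periodogram, the sort of spectral feature used for vigilance estimation.
# Sampling rate, band edges, and the synthetic test signal are assumptions.
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()

if __name__ == "__main__":
    fs = 250.0                                  # Hz
    t = np.arange(0, 4.0, 1.0 / fs)
    eeg = np.sin(2 * np.pi * 6 * t) + 0.3 * np.random.randn(t.size)  # theta-dominant toy signal
    theta = band_power(eeg, fs, 4, 8)
    alpha = band_power(eeg, fs, 8, 13)
    print(f"theta/alpha power ratio: {theta / alpha:.2f}")
```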