26 research outputs found

    Driver Drowsiness Detection System

    In recent years, driver fatigue has been one of the major causes of vehicle accidents in the world. A direct way of measuring driver fatigue is to measure the state of the driver, i.e. drowsiness, so detecting driver drowsiness is very important for saving life and property. This project aims to develop a prototype drowsiness detection system. The system works in real time: it captures images continuously, measures the state of the eye according to the specified algorithm, and gives a warning if required. Although there are several methods for measuring drowsiness, this approach is completely non-intrusive and does not affect the driver in any way, hence giving the exact condition of the driver. For the detection of drowsiness, the percentage of eye closure is considered: when the closure of the eye exceeds a certain amount, the driver is identified as sleepy. The system is implemented using several OpenCV libraries, including Haar cascade classifiers, and runs entirely on a Raspberry Pi.
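    The eye-closure decision described above can be sketched as a small sliding-window monitor. This is an illustrative assumption, not the project's actual code: the per-frame eye state (True = closed) would come from a Haar-cascade eye detector in OpenCV, and the window size and 0.8 closure threshold are hypothetical values chosen for the sketch.

    ```python
    from collections import deque

    class PerclosMonitor:
        """Tracks the fraction of recent frames in which the eyes were closed."""

        def __init__(self, window_frames: int = 30, threshold: float = 0.8):
            self.states = deque(maxlen=window_frames)  # sliding window of eye states
            self.threshold = threshold

        def update(self, eyes_closed: bool) -> bool:
            """Record one frame's eye state; return True if a warning should fire."""
            self.states.append(eyes_closed)
            closed_ratio = sum(self.states) / len(self.states)
            return closed_ratio >= self.threshold

    # Feed a stream of per-frame states; the alert fires once the window
    # is dominated by closed-eye frames.
    monitor = PerclosMonitor(window_frames=10, threshold=0.8)
    alerts = [monitor.update(state) for state in [True] * 8 + [False] * 2]
    ```

    A windowed ratio rather than a single-frame test is what makes the warning robust to ordinary blinks, which close the eyes for only a few frames.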

    A Comparative Emotions-detection Review for Non-intrusive Vision-Based Facial Expression Recognition

    Affective computing advocates for the development of systems and devices that can recognize, interpret, process, and simulate human emotion. In computing, the field seeks to enhance the user experience by finding less intrusive automated solutions. However, initiatives in this area focus on solitary emotions, which limits the scalability of the approaches. Previous reviews in this area have also focused on solitary emotions, presenting challenges to future researchers when adopting their recommendations. This review aims to highlight gaps in the application areas of Facial Expression Recognition techniques by conducting a comparative analysis of the emotion detection datasets, algorithms, and results reported in existing studies. The systematic review adopted the PRISMA model and analyzed eighty-three publications. Findings from the review show that different emotions call for different Facial Expression Recognition techniques, which should be analyzed when conducting Facial Expression Recognition. Keywords: Facial Expression Recognition, Emotion Detection, Image Processing, Computer Vision

    Development and Evaluation of Facial Gesture Recognition and Head Tracking for Assistive Technologies

    Globally, the World Health Organisation estimates that there are about 1 billion people living with disabilities, and the UK has about 10 million people with neurological disabilities in particular. In extreme cases, individuals with disabilities such as Motor Neuron Disease (MND), Cerebral Palsy (CP) and Multiple Sclerosis (MS) may only be able to perform limited head movement, move their eyes or make facial gestures. The aim of this research is to investigate low-cost and reliable assistive devices using automatic gesture recognition systems that will enable the most severely disabled users to access electronic assistive technologies and communication devices, thus enabling them to communicate with friends and relatives. The research presented in this thesis is concerned with the detection of head movements, eye movements and facial gestures through the analysis of video and depth images. The proposed system, using web cameras or an RGB-D sensor coupled with computer vision and pattern recognition techniques, has to be able to detect the movement of the user and calibrate it to facilitate communication. The system also lets the user choose the sensor to be used, i.e. the web camera or the RGB-D sensor, and the interaction or switching mechanism, i.e. eye blink or eyebrow movement. This ability of the system to let users select according to their needs makes it easier on them, as they do not have to learn to operate a different system as their condition changes. This research aims in particular to explore the use of depth data for head-movement-based assistive devices and the usability of different gesture modalities as switching mechanisms. The proposed framework consists of a facial feature detection module, a head tracking module and a gesture recognition module.
Techniques such as Haar cascades and skin detection were used to detect facial features such as the face, eyes and nose. The depth data from the RGB-D sensor was used to segment the area nearest to the sensor. Both the head tracking module and the gesture recognition module rely on the facial feature module, as it provides data such as the location of the facial features. The head tracking module uses the facial feature data to calculate the centroid of the face, the distance to the sensor, and the location of the eyes and nose in order to detect head motion and translate it into pointer movement. The gesture detection module uses features such as the location of the eyes, the location and size of the pupil, and the interocular distance for the detection of blink or eyebrow movement to perform a click action. The research resulted in the creation of four assistive devices based on the combinations of sensors (web camera and RGB-D sensor) and facial gestures (blink and eyebrow movement): Webcam-Blink, Webcam-Eyebrows, Kinect-Blink and Kinect-Eyebrows. Other outcomes of this research are an evaluation framework based on Fitts' Law, with a modified multi-directional task including a central location, and a dataset consisting of both colour images and depth data of people performing head movements in different directions and gestures such as eye blinks, eyebrow movements and mouth movements. The devices were tested with healthy participants. From the observed data, it was found that both Kinect-based devices have lower Movement Time and higher Index of Performance and Effective Throughput than the web-camera-based devices, showing that the introduction of depth data had a positive impact on the head tracking algorithm.
The usability assessment survey suggests that there is a significant difference in the eye fatigue experienced by the participants: the blink gesture was less tiring to the eyes than the eyebrow movement gesture. The analysis of the gestures also showed that the Index of Difficulty has a large effect on the error rates of gesture detection, with smaller Indices of Difficulty producing higher error rates.
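    The blink-as-click mechanism described in this entry can be sketched as follows. This is a hedged illustration, not the thesis code: eye openness is normalised by the interocular distance so the measure does not change with the user's distance from the camera, and a blink (click) is registered when the eye reopens after a short closed run. The 0.15 openness threshold and 2-frame minimum are assumed values; in the thesis, these features come from the facial feature detection module.

    ```python
    def normalised_openness(eyelid_gap_px: float, interocular_px: float) -> float:
        """Eye openness expressed as a fraction of the distance between the eyes."""
        return eyelid_gap_px / interocular_px

    class BlinkDetector:
        def __init__(self, open_threshold: float = 0.15, min_closed_frames: int = 2):
            self.open_threshold = open_threshold
            self.min_closed_frames = min_closed_frames
            self.closed_run = 0  # consecutive frames with the eye closed

        def update(self, openness: float) -> bool:
            """Feed one frame's openness; return True when a blink completes."""
            if openness < self.open_threshold:
                self.closed_run += 1
                return False
            # Eye is open again: a click fires only if it was closed long enough.
            blink = self.closed_run >= self.min_closed_frames
            self.closed_run = 0
            return blink
    ```

    Requiring a minimum closed run before firing, and firing only on reopening, distinguishes a deliberate blink switch from single-frame detection noise.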

    Detection of Driver Drowsiness and Distraction Using Computer Vision and Machine Learning Approaches

    Drowsiness and distracted driving are leading factors in most car crashes and near-crashes. This research study explores and investigates the application of both conventional computer vision and deep learning approaches to the detection of drowsiness and distraction in drivers. In the first part of this MPhil research study, conventional computer vision approaches were studied to develop a robust drowsiness and distraction detection system based on yawning detection, head pose detection and eye blink detection. These algorithms were implemented using existing hand-crafted features. Experiments on detection and classification with small image datasets were performed to evaluate and measure the performance of the system. It was observed that the use of hand-crafted features together with a robust classifier such as an SVM gives better performance than previous approaches. Although the results were satisfactory, there are many drawbacks and challenges associated with conventional computer vision approaches, such as the definition and extraction of hand-crafted features, which make these conventional algorithms subjective in nature and less adaptive in practice. In contrast, deep learning approaches automate the feature selection process and can be trained to learn the most discriminative features without any human input. In the second half of this research study, the use of deep learning approaches for the detection of distracted driving was investigated. One advantage of the applied methodology is the contribution of the CNN to better pattern recognition accuracy and its ability to learn features from various regions of the human body simultaneously.
The performance of four convolutional deep-net architectures (AlexNet, ResNet, MobileNet and NASNet) was compared, triplet training was investigated, and the impact of combining a support vector classifier (SVC) with a trained deep net was explored. The images used in the experiments with the deep nets are from the State Farm Distracted Driver Detection dataset hosted on Kaggle, each of which captures the entire body of a driver. The best results were obtained with NASNet trained using a triplet loss and combined with an SVC. One of the advantages of deep learning approaches is their ability to learn discriminative features from various regions of the human body simultaneously; this ability has enabled deep learning approaches to reach human-level accuracy.
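The triplet objective mentioned above can be written down compactly: the network is trained so that an anchor embedding lies closer to a positive (same class) than to a negative (different class) by at least a margin. The toy vectors and the 0.2 margin below are illustrative; in the study this loss is applied to deep-net embeddings of driver images, not to hand-written coordinates.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin: float = 0.2) -> float:
    """max(d(a, p) - d(a, n) + margin, 0): zero once the margin is satisfied."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# A well-separated triplet contributes no loss; a violated margin contributes
# the amount by which the negative is too close relative to the positive.
ok = triplet_loss([0.0, 0.0], [0.0, 1.0], [0.0, 5.0])
violated = triplet_loss([0.0, 0.0], [0.0, 3.0], [0.0, 1.0])
```

Because the loss is zero for already-separated triplets, training effort concentrates on the hard cases, which is one reason triplet training can sharpen the embeddings later fed to the SVC.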

    A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms

    In this paper a review is presented of the research on eye gaze estimation techniques and applications that has progressed in diverse ways over the past two decades. Several generic eye gaze use-cases are identified: desktop, TV, head-mounted, automotive and handheld devices. Analysis of the literature leads to the identification of several platform-specific factors that influence gaze tracking accuracy. A key outcome from this review is the realization of a need to develop standardized methodologies for the performance evaluation of gaze tracking systems and to achieve consistency in their specification and comparative evaluation. To address this need, the concept of a methodological framework for the practical evaluation of different gaze tracking systems is proposed. Comment: 25 pages, 13 figures. Accepted for publication in IEEE Access in July 201

    Development of a robust active infrared-based eye tracking system

    Eye tracking has a number of useful applications, ranging from monitoring a vehicle driver for possible signs of fatigue and providing an interface to enable severely disabled people to communicate with others, to a number of medical applications. Most eye tracking applications require a non-intrusive way of tracking the eyes, making a camera-based approach a natural choice. However, although significant progress has been made in recent years, modern eye tracking systems still have not overcome a number of challenges, including eye occlusions, variable ambient lighting conditions and inter-subject variability. This thesis describes the complete design and implementation of a real-time camera-based eye tracker, which was developed mainly for indoor applications. The developed eye tracker relies on the so-called bright/dark pupil effect for both the eye detection and eye tracking phases. The bright/dark pupil effect was realised through the development of specialised hardware and near-infrared illumination, which were interfaced with a machine vision camera. For the eye detection phase, the performance of three different types of classifiers, namely neural networks, SVMs and AdaBoost, was directly compared on a dataset consisting of 17 individual subjects from different ethnic backgrounds. For the actual tracking of the eyes, a Kalman filter was combined with the mean-shift tracking algorithm. A PC application with a graphical user interface (GUI) was also developed to integrate the various aspects of the eye tracking system, allowing the user to easily configure and use the system. Experimental results have shown the eye detection phase to be very robust, while the eye tracking phase was also able to accurately track the eyes from frame to frame in real time, given a few constraints.
    Dissertation (MEng), University of Pretoria, 2011. Electrical, Electronic and Computer Engineering.
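    The Kalman-filtering idea used in the tracking phase can be sketched in a reduced form: the filter smooths the noisy per-frame eye position reported by the mean-shift tracker and carries an estimate forward between frames. This scalar, one-dimensional version is a deliberate simplification for illustration only; the process noise q and measurement noise r below are assumed values, not those used in the dissertation, which tracks 2-D image coordinates.

    ```python
    class KalmanSmoother1D:
        """Scalar Kalman filter: blends a running estimate with noisy measurements."""

        def __init__(self, x0: float, q: float = 1e-3, r: float = 0.5):
            self.x = x0   # position estimate
            self.p = 1.0  # estimate variance
            self.q = q    # process noise variance (assumed value)
            self.r = r    # measurement noise variance (assumed value)

        def predict(self) -> float:
            """Time update: uncertainty grows between frames."""
            self.p += self.q
            return self.x

        def correct(self, z: float) -> float:
            """Measurement update: blend the prediction with the new observation."""
            k = self.p / (self.p + self.r)  # Kalman gain in [0, 1]
            self.x += k * (z - self.x)
            self.p *= (1.0 - k)
            return self.x
    ```

    Because each measurement pulls the estimate only part of the way toward it, an isolated mean-shift glitch is damped rather than followed, while a sustained change in eye position is tracked within a few frames.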