154 research outputs found

    Facial Video based Detection of Physical Fatigue for Maximal Muscle Activity


    Driver monitoring system based on eye tracking

    Integrated master's dissertation in Industrial Electronics and Computers Engineering. Recent statistics indicate that driver drowsiness is one of the major causes of road accidents and deaths behind the wheel. This reveals the need for reliable systems capable of predicting when drivers are in this state and warning them, in order to avoid crashes with other vehicles or stationary objects. The purpose of this dissertation is therefore to develop a driver monitoring system based on eye tracking that can detect the driver's drowsiness level and act accordingly. The alert to the driver may vary from a message on the instrument cluster to a vibration in the seat. The proposed algorithm for estimating the driver's state requires only one variable: eyelid opening. From this variable the algorithm computes several eye parameters used to decide whether the driver is drowsy, namely PERCLOS, blink frequency and blink duration. Eyelid opening is obtained through a software and hardware platform called SmartEye Pro. This eye tracking system uses infrared cameras and computer vision software to gather eye-state information. Additionally, since this dissertation is part of the "INNOVATIVE CAR HMI" project, a partnership between Bosch and the University of Minho, the driver monitoring system will be integrated into the Bosch DSM (Driver Simulator Mockup)
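    The eyelid-opening metrics named above (PERCLOS, blink frequency, blink duration) can be sketched as follows. The 80% closure threshold and the normalization are illustrative assumptions, not the dissertation's actual settings:

```python
# Sketch: drowsiness indicators from an eyelid-opening signal.
# The 80% closure threshold (the common PERCLOS P80 convention) is an
# illustrative assumption; the dissertation's settings may differ.

def eye_metrics(opening, fps, closure_thresh=0.2):
    """opening: per-frame eyelid opening normalized to [0, 1] (1 = fully
    open). Returns (perclos, blinks_per_min, mean_blink_duration_s)."""
    closed = [o <= closure_thresh for o in opening]
    perclos = sum(closed) / len(closed)  # fraction of frames eye >= 80% closed

    # Group consecutive closed frames into individual blinks.
    blinks, run = [], 0
    for c in closed:
        if c:
            run += 1
        elif run:
            blinks.append(run)
            run = 0
    if run:
        blinks.append(run)

    duration_s = len(opening) / fps
    blinks_per_min = 60.0 * len(blinks) / duration_s
    mean_blink_s = (sum(blinks) / len(blinks) / fps) if blinks else 0.0
    return perclos, blinks_per_min, mean_blink_s
```

    A drowsiness decision could then threshold these three values, e.g. flagging the driver when PERCLOS exceeds some calibrated fraction.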

    Modern drowsiness detection techniques: a review

    According to recent statistics, drowsiness, rather than alcohol, is now responsible for one quarter of all automobile accidents. As a result, many monitoring systems have been created to reduce and prevent such accidents. However, despite the large number of state-of-the-art drowsiness detection systems, it is not clear which is the most appropriate. This paper discusses the following points: first, the existing supervised detection techniques currently in use, grouped into four categories (behavioral, physiological, automobile and hybrid); second, the supervised machine learning classifiers used for drowsiness detection, followed by a discussion of the advantages and disadvantages of each evaluated technique; and lastly, the recommendation of a new strategy for detecting drowsiness

    A framework for context-aware driver status assessment systems

    The automotive industry is actively supporting research and innovation to meet manufacturers' requirements related to safety, performance and the environment. The Green ITS project is among the efforts in that regard. Safety is a major customer and manufacturer concern, so much effort has been directed toward developing cutting-edge technologies able to assess driver status in terms of alertness and suitability to drive. With this thesis we aim to create a framework for a context-aware driver status assessment system. Context-aware means that the machine uses background information about the driver and environmental conditions to better ascertain and understand driver status. The system also relies on multiple sensors, mainly video and audio. Using context and multi-sensor data, we need to perform multi-modal analysis and data fusion in order to infer as much knowledge as possible about the driver. Finally, since the project is to be continued by other students, the system should be modular and well documented. With this in mind, a driving simulator integrating multiple sensors was built. This simulator is a starting point for experimentation related to driver status assessment, and a prototype of real-time driver status assessment software is integrated into the platform. To make the system context-aware, we designed a driver identification module based on audio-visual data fusion. Thus, at the beginning of a driving session, the user is identified and background knowledge about them is loaded to better understand and analyze their behavior. A driver status assessment system was then constructed from two modules. The first is for driver fatigue detection, based on an infrared camera; fatigue is inferred via percentage of eye closure, which is the best fatigue indicator for vision systems. The second is a driver distraction recognition system, based on a Kinect sensor. 
    Using body, head, and facial expressions, a fusion strategy is employed to deduce the type of distraction the driver is subject to. Of course, fatigue and distraction are only a fraction of all possible driver states, but these two aspects were studied here primarily because of their dramatic impact on traffic safety. Through experimental results, we show that our system is efficient for driver identification and driver inattention detection tasks. Nevertheless, it is also very modular and could be further complemented with driver status analysis, context or additional sensor acquisition
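    A decision-level combination of the two modules described above could look like the following sketch. The weights and the alert threshold are illustrative assumptions, not the thesis's actual fusion rule:

```python
# Sketch of decision-level fusion of two driver-state detectors
# (fatigue from the infrared-camera module, distraction from the Kinect
# module). Weights and threshold are illustrative assumptions.

def fuse_alerts(fatigue_score, distraction_score,
                w_fatigue=0.6, w_distraction=0.4, threshold=0.5):
    """Each score is in [0, 1]. Returns (fused_score, raise_alert)."""
    fused = w_fatigue * fatigue_score + w_distraction * distraction_score
    return fused, fused >= threshold
```

    Feature-level fusion would instead concatenate the raw measurements from both sensors before classification, which is the other integration strategy the thesis family of works considers.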

    Iris Region and Bayes Classifier for Robust Open or Closed Eye Detection

    This paper presents a robust method to detect the sequence of open or closed eye states in low-resolution images, which can lead to efficient eye blink detection for practical use. Eye states and eye blink detection play an important role in human-computer interaction (HCI) systems. Eye blinks can be used as a communication method for people with severe disabilities, providing an alternate input modality to control a computer, or as a detection method for driver drowsiness. The proposed approach is based on an analysis of eye and skin in the eye region image. Evidently, the iris and sclera regions grow as a person opens an eye and shrink as the eye closes. In particular, the distributions of these eye components during each eye state form a bell-like shape. Using color tone differences, the iris and sclera regions can be extracted from the skin. A naive Bayes classifier then effectively classifies the eye states. A further study also shows that the iris region as a feature gives a better detection rate than the sclera region. The approach works online with low-resolution images and in typical lighting conditions. It was successfully tested on  image sequences (  frames) and achieved high accuracy of over  for open eyes and over  for closed eyes compared to the ground truth. In particular, it improves open-eye state detection by almost  compared to a recent commonly used approach, the template matching algorithm
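    The classification step can be sketched as a one-feature Gaussian naive Bayes rule over the iris-region area. The color-tone segmentation is assumed to have already produced a scalar iris pixel count per frame; class names and all numbers in the example are illustrative, not the paper's data:

```python
# Sketch of the paper's classifier idea: eye state from iris-region
# area via Gaussian naive Bayes. Feature extraction (color-tone
# segmentation of iris vs. skin) is assumed done; values illustrative.
import math

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

class EyeStateBayes:
    """One-feature Gaussian naive Bayes over states {'open', 'closed'}."""

    def fit(self, samples):
        # samples: dict mapping state -> list of iris-area measurements
        self.params = {}
        for state, xs in samples.items():
            mean = sum(xs) / len(xs)
            var = sum((x - mean) ** 2 for x in xs) / len(xs) or 1e-6
            self.params[state] = (mean, var, len(xs))
        total = sum(n for _, _, n in self.params.values())
        self.priors = {s: n / total for s, (_, _, n) in self.params.items()}
        return self

    def predict(self, x):
        # MAP decision: argmax over states of prior * likelihood
        return max(self.params,
                   key=lambda s: self.priors[s]
                   * gaussian_pdf(x, *self.params[s][:2]))
```

    The bell-like distribution of iris area per state mentioned in the abstract is exactly what justifies the Gaussian class-conditional assumption here.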

    The potential of the BCI for accessible and smart e-learning

    The brain-computer interface (BCI) should be the accessibility solution “par excellence” for interactive and e-learning systems. There is a substantial tradition of research on the human electroencephalogram (EEG) and on BCI systems that are based, inter alia, on EEG measurement. Yet we have not seen a viable BCI for e-learning. For some users, such as those with major psychomotor or cognitive impairments, a BCI-based interface is their first choice for good-quality interaction. There are many more for whom the BCI would be an attractive option given an acceptable learning overhead, including people with less severe disabilities and those in safety-critical conditions where cognitive overload or limited responses are likely. Recent progress has been modest, as there are many technical and accessibility problems to overcome. We present these issues and report a survey of fifty papers to capture the state of the art in BCI and its implications for e-learning

    Head motion tracking in 3D space for drivers

    This work presents a computer vision module capable of tracking head motion in 3D space for drivers. The module was designed as part of an integrated system for analyzing driver behaviour, replacing costly equipment and accessories that track the driver's head but are often cumbersome for the user. The vision module operates in five stages: image acquisition, head detection, facial feature extraction, facial feature detection, and 3D reconstruction of the tracked facial features. First, in the image acquisition stage, two synchronized monochromatic cameras are used to set up a stereoscopic system that later makes the 3D reconstruction of the head simpler. Second, the driver's head is detected to reduce the size of the search space for finding facial features. Third, after obtaining a pair of images from the two cameras, the facial feature extraction stage combines image processing algorithms and epipolar geometry to track the chosen features, which in our case are the two eyes and the tip of the nose. Fourth, in a detection stage, the 2D tracking results are consolidated by combining a neural network algorithm with the geometry of the human face to discriminate erroneous results. Finally, the 3D model of the head is reconstructed from the 2D tracking results (i.e. tracking performed in each image independently) and the calibration of the stereo pair. 
    In addition, 3D measurements along the six axes of motion known as the head's degrees of freedom (longitudinal, vertical, lateral, roll, pitch and yaw) are obtained. The results are validated by running our algorithms on pre-recorded video sequences of drivers using a driving simulator, obtaining 3D measurements that are then compared with the 3D measurements provided by a motion tracking device installed on the driver's head
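    For a calibrated, rectified stereo pair, the 3D reconstruction stage reduces to depth-from-disparity. The following sketch assumes rectified images and illustrative calibration values (focal length, baseline, principal point), not the thesis's actual calibration:

```python
# Sketch of stereo triangulation for a rectified camera pair:
# depth Z = f * B / d, where d is the horizontal disparity.
# Calibration values used in tests are illustrative assumptions.

def triangulate(xl, xr, y, f, baseline, cx, cy):
    """Pixel coords (xl, y) in the left image and (xr, y) in the right
    image of the same facial feature -> (X, Y, Z) in the left camera
    frame. f is in pixels, baseline in meters, (cx, cy) is the
    principal point."""
    d = xl - xr                # disparity in pixels
    if d <= 0:
        raise ValueError("feature must have positive disparity")
    Z = f * baseline / d       # depth in meters
    X = (xl - cx) * Z / f      # back-project through the pinhole model
    Y = (y - cy) * Z / f
    return X, Y, Z
```

    Triangulating the two eyes and the nose tip per frame yields three 3D points, from which the six head pose parameters (translation plus roll, pitch, yaw) can be estimated.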

    Drowsiness Detection for Driver Assistance

    This thesis presents a noninvasive approach to detecting driver drowsiness using behavioral and vehicle-based measuring techniques. The system accepts a stream of driver images from a camera and steering wheel movement from a Logitech G27 racing wheel system. It first describes a standalone implementation of the behavioral drowsiness detection method, which accepts the input images and analyzes the driver's facial expressions through a set of processing stages. To improve the reliability of the system, we also propose a comprehensive approach that combines facial expression analysis with steering wheel data analysis, with integration at both the decision level and the feature level. We also present a new approach to modeling the temporal information of drowsy facial expressions using a hidden Markov model (HMM). Each proposed approach has been implemented in a simulated driving setup; the detection performance of each method was evaluated through experiments and its parameter settings optimized. Finally, we present a case study discussing the practicality of our system in a small-scale intelligent transportation system, where it switches the driving mechanism between manual and autonomous control depending on the state of the driver
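    The HMM-based temporal modeling mentioned above can be sketched with a two-state forward pass. The states, transition matrix and emission probabilities below are illustrative assumptions, not the thesis's trained parameters:

```python
# Sketch: filtering per-frame facial-cue observations with a two-state
# HMM forward pass. All probabilities are illustrative, not trained.

STATES = ("awake", "drowsy")
INIT = {"awake": 0.9, "drowsy": 0.1}
TRANS = {"awake": {"awake": 0.95, "drowsy": 0.05},
         "drowsy": {"awake": 0.10, "drowsy": 0.90}}
# P(observation | state); observations are coarse per-frame labels.
EMIT = {"awake": {"eyes_open": 0.8, "eyes_closed": 0.2},
        "drowsy": {"eyes_open": 0.3, "eyes_closed": 0.7}}

def filter_drowsiness(observations):
    """Forward algorithm: returns P(drowsy | observations so far)."""
    alpha = {s: INIT[s] * EMIT[s][observations[0]] for s in STATES}
    for obs in observations[1:]:
        alpha = {s: EMIT[s][obs] * sum(alpha[p] * TRANS[p][s] for p in STATES)
                 for s in STATES}
    total = sum(alpha.values())
    return alpha["drowsy"] / total
```

    The point of the temporal model is that isolated closed-eye frames (ordinary blinks) barely move the posterior, while a sustained run of drowsy cues drives it toward 1.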