62 research outputs found

    Human Activity Recognition in a Car with Embedded Devices

    Detection and prediction of drowsiness is key for the implementation of intelligent vehicles aimed at preventing highway crashes. There are several approaches to such a solution. In this paper the computer vision approach is analysed, where embedded devices (e.g. video cameras) are used along with machine learning and pattern recognition techniques to implement suitable solutions for detecting driver fatigue. Most of the research on computer vision systems has focused on the analysis of blinks; this is a notable solution when it is combined with additional patterns such as yawning or head motion for the recognition of drowsiness. The first step in this approach is face recognition, where the AdaBoost algorithm shows accurate results for feature extraction, whereas for the detection of drowsiness data-driven classifiers such as the Support Vector Machine (SVM) yield remarkable results. One underlying component for implementing a computer vision technology for the detection of drowsiness is a database of spontaneous images coded with the Facial Action Coding System (FACS), on which the classifier can be trained. This paper introduces a straightforward prototype for the detection of drowsiness, where the Viola-Jones method is used for face recognition and a cascade classifier is used to detect a contiguous sequence of closed eyes, which is considered an indicator of drowsiness.
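    The detection loop described above (Viola-Jones face detection, then a cascade classifier over the eye region, with a contiguous run of closed-eye frames flagged as drowsiness) can be sketched with OpenCV's bundled Haar cascades. This is a minimal illustration, not the paper's prototype: the cascade files, the camera index, and the frame threshold are assumptions, and the standard eye cascade mostly fires on open eyes, so a missing eye detection is used here as a crude proxy for eye closure.

    import cv2

    # Viola-Jones-style detectors shipped with OpenCV (assumed stand-ins for the
    # prototype's trained cascades).
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    CLOSED_FRAMES_THRESHOLD = 15  # illustrative value; tune to the camera frame rate
    closed_run = 0

    cap = cv2.VideoCapture(0)  # embedded camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        eyes_visible = False
        for (x, y, w, h) in faces:
            roi = gray[y:y + h // 2, x:x + w]  # eyes lie in the upper half of the face
            if len(eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)) > 0:
                eyes_visible = True
        # Count consecutive frames without visible eyes; a long run is treated as drowsiness.
        closed_run = 0 if eyes_visible else closed_run + 1
        if closed_run >= CLOSED_FRAMES_THRESHOLD:
            print("Drowsiness suspected: no open eyes detected in a contiguous sequence of frames")
            closed_run = 0
    cap.release()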

    Video based detection of driver fatigue

    This thesis addresses the problem of drowsy-driver detection using computer vision techniques applied to the human face. Specifically, we explore the possibility of discriminating drowsy from alert video segments using facial expressions automatically extracted from video. Several approaches have previously been proposed for the detection and prediction of drowsiness. There has recently been increasing interest in computer vision approaches, which are promising because of their non-invasive nature. Previous vision-based studies detect driver drowsiness primarily by making prior assumptions about the relevant behavior, focusing on blink rate, eye closure, and yawning. Here we employ machine learning to explore, understand, and exploit actual human behavior during drowsiness episodes. We collected two datasets including facial and head movement measures. Head motion is collected through an accelerometer for the first dataset (UYAN-1) and an automatic video-based head pose detector for the second dataset (UYAN-2). We use the outputs of automatic classifiers of the Facial Action Coding System (FACS) for detecting drowsiness. These facial actions include blinking and yawn motions, as well as a number of other facial movements. These measures are passed to a learning-based classifier based on multinomial logistic regression (MLR). In UYAN-1 the system is able to predict sleep and crash episodes during a driving computer game with an area of 0.98 under the receiver operating characteristic curve in across-subject tests. This is the highest prediction rate reported to date for detecting real drowsiness. Moreover, the analysis reveals new information about human facial behavior during drowsy driving. In UYAN-2, finer discrimination of drowsy states is explored on a separate dataset. The degree to which individual facial action units can predict the difference between moderately and acutely drowsy states is studied. Signal processing techniques and machine learning methods are employed to build a person-independent acute drowsiness detection system. Temporal dynamics are captured using a bank of temporal filters. The predictive power of individual action units is explored with an MLR-based classifier. The five best-performing action units were determined for a person-independent system. The system obtains an area of 0.96 under the receiver operating characteristic curve on this more challenging dataset using the combined features of the five best-performing action units. Moreover, the analysis reveals new markers for different levels of drowsiness.
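    As an illustration of the general recipe in this abstract (per-frame action unit outputs passed through a bank of temporal filters and into a multinomial logistic regression classifier, evaluated across subjects with the area under the ROC curve), the sketch below uses synthetic AU intensities, arbitrary filter widths, and scikit-learn's LogisticRegression as a stand-in for the MLR classifier; none of these specific choices come from the thesis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(0)

    # Placeholder data: 200 driving segments x 300 frames x 5 action units,
    # with subject ids and binary labels (0 = alert, 1 = drowsy). Real inputs
    # would come from an automatic FACS action unit detector.
    n_segments, n_frames, n_aus = 200, 300, 5
    au_series = rng.normal(size=(n_segments, n_frames, n_aus))
    subjects = rng.integers(0, 10, size=n_segments)
    labels = rng.integers(0, 2, size=n_segments)

    def temporal_filter_bank(series, widths=(5, 15, 45)):
        """Smooth each AU channel at several time scales and pool over the segment."""
        feats = []
        for w in widths:
            kernel = np.ones(w) / w
            smoothed = np.apply_along_axis(
                lambda x: np.convolve(x, kernel, mode="valid"), axis=1, arr=series)
            feats.append(smoothed.mean(axis=1))  # mean filtered response per AU
            feats.append(smoothed.std(axis=1))   # variability of the response per AU
        return np.concatenate(feats, axis=1)

    X = temporal_filter_bank(au_series)

    # Person-independent (across-subject) evaluation: hold out one subject at a time.
    aucs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, labels, groups=subjects):
        clf = LogisticRegression(max_iter=1000)  # binary case of multinomial logistic regression
        clf.fit(X[train_idx], labels[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]
        if len(np.unique(labels[test_idx])) == 2:  # AUC needs both classes in the held-out set
            aucs.append(roc_auc_score(labels[test_idx], scores))
    print("mean across-subject ROC AUC:", np.mean(aucs))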

    Ubiquitous Technologies for Emotion Recognition

    Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance for understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and recognition of human emotions.