
    Facial Component Detection in Thermal Imagery

    This paper studies the problem of detecting facial components (specifically eyes, nostrils and mouth) in thermal imagery. One immediate goal is to enable the automatic registration of facial thermal images. Eyes and nostrils are detected using Haar features and the GentleBoost algorithm, which are shown to provide superior detection rates. Mouth detection builds on the eye and nostril detections and uses measures of entropy and self-similarity. The results show that reliable facial component detection is feasible with this methodology, achieving a correct detection rate of 80% for both eyes and nostrils. Correct detection of the eyes and nostrils in turn enables correct detection of the mouth in 65% of closed-mouth test images and in 73% of open-mouth test images.
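    The Haar-feature stage described above can be illustrated with a minimal sketch: an integral image makes any rectangle sum cost four lookups, and a two-rectangle Haar-like feature is the difference of two such sums. The toy image and the specific feature layout here are illustrative assumptions, not the paper's implementation.

```python
def integral_image(img):
    """Cumulative sums so any rectangle sum costs four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect_vertical(ii, x, y, w, h):
    """Two-rectangle feature: bright lower half minus dark upper half."""
    top = rect_sum(ii, x, y, w, h // 2)
    bottom = rect_sum(ii, x, y + h // 2, w, h // 2)
    return bottom - top

# Toy 4x4 image: dark top half, bright bottom half (assumed values).
img = [[1, 1, 1, 1],
       [1, 1, 1, 1],
       [9, 9, 9, 9],
       [9, 9, 9, 9]]
ii = integral_image(img)
print(haar_two_rect_vertical(ii, 0, 0, 4, 4))  # 72 - 8 = 64
```

    A booster such as GentleBoost would threshold many such feature responses and combine them into a strong detector.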

    Automated drowsiness detection for improved driving safety

    Several approaches have been proposed for the detection and prediction of drowsiness. They can be categorized as estimating fitness for duty, modeling sleep-wake rhythms, measuring vehicle-based performance, and online operator monitoring. The computer-vision-based online operator monitoring approach has become prominent due to its predictive ability in detecting drowsiness. Previous studies with this approach detect driver drowsiness primarily by making prior assumptions about the relevant behavior, focusing on blink rate, eye closure, and yawning. Here we employ machine learning to data-mine actual human behavior during drowsiness episodes. Automatic classifiers for 30 facial actions from the Facial Action Coding System were developed using machine learning on a separate database of spontaneous expressions. These facial actions include blinking and yawn motions, as well as a number of other facial movements. In addition, head motion was collected through automatic eye tracking and an accelerometer. These measures were passed to learning-based classifiers such as AdaBoost and multinomial ridge regression. The system was able to predict sleep and crash episodes during a driving computer game with 96% accuracy within subjects and above 90% accuracy across subjects. This is the highest prediction rate reported to date for detecting real drowsiness. Moreover, the analysis revealed new information about human behavior during drowsy driving.
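    The AdaBoost stage mentioned above can be sketched with a minimal boosted classifier: decision stumps over one scalar feature, with sample reweighting between rounds. The toy "drowsiness feature" values and labels are assumptions for illustration, not the study's facial-action data.

```python
import math

def train_stumps(xs, ys, rounds=5):
    """AdaBoost: reweight samples, pick the best threshold stump each round."""
    n = len(xs)
    w = [1.0 / n] * n
    stumps = []  # (threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for t in sorted(set(xs)):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if (1 if pol * (xi - t) > 0 else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = max(err, 1e-10)  # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        stumps.append((t, pol, alpha))
        # Upweight misclassified samples, then renormalise.
        w = [wi * math.exp(-alpha * yi * (1 if pol * (xi - t) > 0 else -1))
             for xi, yi, wi in zip(xs, ys, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return stumps

def predict(stumps, x):
    score = sum(a * (1 if pol * (x - t) > 0 else -1) for t, pol, a in stumps)
    return 1 if score > 0 else -1

# Toy data: high feature values (e.g. long eye closures) labelled +1 (drowsy).
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
model = train_stumps(xs, ys)
print([predict(model, x) for x in (0.15, 0.85)])  # [-1, 1]
```

    A real system would feed many facial-action and head-motion channels into the booster rather than a single scalar.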

    Finding Faces in Cluttered Scenes using Random Labeled Graph Matching

    An algorithm for locating quasi-frontal views of human faces in cluttered scenes is presented. The algorithm works by coupling a set of local feature detectors with a statistical model of the mutual distances between facial features. It is invariant with respect to translation, rotation (in the plane), and scale, and can handle partial occlusions of the face. On a challenging database with complicated and varied backgrounds, the algorithm achieved a correct localization rate of 95% in images where the face appeared quasi-frontally.
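    The invariance claim can be illustrated with a small sketch: ratios of pairwise distances between feature candidates are unchanged by translation, in-plane rotation, and scale, so a candidate constellation can be scored against a reference model of those ratios. The reference layout and scoring are illustrative assumptions, not the paper's statistical model.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def ratio_signature(pts):
    """All pairwise distances divided by the first one, so scale cancels."""
    pairs = [dist(pts[i], pts[j])
             for i in range(len(pts)) for j in range(i + 1, len(pts))]
    base = pairs[0]
    return [d / base for d in pairs]

def constellation_score(candidate, reference):
    """Sum of squared deviations between ratio signatures (lower is better)."""
    cs, rs = ratio_signature(candidate), ratio_signature(reference)
    return sum((c - r) ** 2 for c, r in zip(cs, rs))

# Assumed reference: left eye, right eye, mouth of a frontal face model.
ref = [(0.0, 0.0), (4.0, 0.0), (2.0, 5.0)]
# Same face translated and doubled in scale: ratios are identical.
good = [(10.0, 10.0), (18.0, 10.0), (14.0, 20.0)]
bad = [(0.0, 0.0), (1.0, 0.0), (9.0, 9.0)]
print(constellation_score(good, ref) < constellation_score(bad, ref))  # True
```

    Coupling such a shape score with per-feature detector responses lets the matcher tolerate a missing (occluded) feature by scoring the remaining pairs.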

    Face recognition with variation in pose angle using face graphs

    Automatic recognition of human faces is an important and growing field. Several real-world applications have started to rely on the accuracy of computer-based face recognition systems for their own performance in terms of efficiency, safety and reliability. Many algorithms have already been established for frontal face recognition, where the person to be recognized is looking directly at the camera. More recently, methods for non-frontal face recognition have been proposed. These include work on 3D rigid face models, component-based 3D morphable models, eigenfaces, and elastic bunch graph matching (EBGM). This thesis extends a recognition algorithm based on EBGM to achieve better face recognition across pose variation. Facial features are localized using active shape models, and face recognition is based on elastic bunch graph matching. Recognition is performed by comparing feature descriptors based on Gabor wavelets at various orientations and scales, called jets. Two novel recognition schemes, feature weighting and jet-mapping, are proposed to improve on the base scheme, and a combination of the two is considered as a further enhancement. The improvements have been evaluated by measuring recognition rates on an existing database and comparing the results with the base recognition scheme. Improvements of up to 20% have been observed for pose variations as large as 45°.
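    The jet comparison at the heart of EBGM can be sketched as a normalised dot product of Gabor magnitude vectors: each jet collects filter magnitudes over several orientations and scales at one landmark. The jet values below are made-up assumptions for illustration.

```python
import math

def jet_similarity(j1, j2):
    """Cosine similarity of Gabor magnitude jets; in [0, 1] for magnitudes."""
    dot = sum(a * b for a, b in zip(j1, j2))
    n1 = math.sqrt(sum(a * a for a in j1))
    n2 = math.sqrt(sum(b * b for b in j2))
    return dot / (n1 * n2)

# Illustrative 8-value jets (e.g. 4 orientations x 2 scales, assumed numbers).
probe    = [0.9, 0.1, 0.4, 0.2, 0.8, 0.1, 0.3, 0.2]
gallery  = [0.8, 0.2, 0.5, 0.2, 0.7, 0.1, 0.4, 0.1]
impostor = [0.1, 0.9, 0.2, 0.8, 0.1, 0.7, 0.2, 0.9]
print(jet_similarity(probe, gallery) > jet_similarity(probe, impostor))  # True
```

    The thesis's feature-weighting scheme would amount to weighting each landmark's similarity before summing over the face graph; the weights here are left out for brevity.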

    Comparing 3D descriptors for local search of craniofacial landmarks

    This paper presents a comparison of local descriptors for a set of 26 craniofacial landmarks annotated on 144 scans acquired in the context of clinical research. We focus on the accuracy of the different descriptors on a per-landmark basis when constrained to a local search. For most descriptors, we find that the curves of expected error against search radius have a plateau that can be used to characterize their performance, both in terms of accuracy and of the maximum usable range for the local search. Six histogram-based descriptors were evaluated: three describing distances and three describing orientations. No descriptor dominated the rest, and the best per-landmark accuracy was spread among 3 of the 6 algorithms evaluated. Ordering the descriptors by average error (over all landmarks) did not coincide with ordering them by how often they were selected as best, indicating that a comparison of descriptors based on their global behavior can be misleading when targeting facial landmarks.
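    The constrained local search being evaluated can be sketched as follows: starting from an initial landmark estimate, only candidates within a search radius are examined, and the one whose descriptor best matches the target descriptor wins. The candidate points, descriptors, and radius below are illustrative assumptions.

```python
import math

def local_search(init, radius, candidates, target_desc):
    """Return the in-radius candidate minimising descriptor distance."""
    best, best_d = None, float("inf")
    for pt, desc in candidates:
        if math.dist(pt, init) > radius:
            continue  # outside the local search range
        d = sum((a - b) ** 2 for a, b in zip(desc, target_desc))
        if d < best_d:
            best, best_d = pt, d
    return best

# Candidates as (position, descriptor); the far point has the best descriptor
# but lies outside the radius, so the search cannot reach it.
cands = [((1.0, 1.0), [0.2, 0.8]),
         ((2.0, 0.0), [0.5, 0.5]),
         ((9.0, 9.0), [0.1, 0.9])]
print(local_search((0.0, 0.0), 3.0, cands, [0.1, 0.9]))  # (1.0, 1.0)
```

    The paper's plateau observation corresponds to the range of radii over which enlarging the search does not change which candidate wins.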

    An Efficient Boosted Classifier Tree-Based Feature Point Tracking System for Facial Expression Analysis

    The study of facial movement and expression has been a prominent area of research since the early work of Charles Darwin. The Facial Action Coding System (FACS), developed by Paul Ekman, introduced the first universal method of coding and measuring facial movement. Human-Computer Interaction seeks to make human interaction with computer systems more effective, easier, safer, and more seamless. Facial expression recognition can be broken down into three distinct subsections: facial feature localization, facial action recognition, and facial expression classification. The first and most important stage in any facial expression analysis system is the localization of key facial features. Localization must be accurate and efficient to ensure reliable tracking and leave time for computation and comparisons to learned facial models while maintaining real-time performance. Two possible methods for localizing facial features are discussed in this dissertation. The Active Appearance Model is a statistical model describing an object's parameters through the use of both shape and texture models, resulting in appearance. Statistical model-based training for object recognition takes multiple instances of the object class of interest, or positive samples, and multiple negative samples, i.e., images that do not contain objects of interest. Viola and Jones presented a highly robust real-time face detection system: a statistically boosted attentional detection cascade composed of many weak feature detectors. A basic algorithm for eliminating unnecessary sub-frames during Viola-Jones face detection is presented to further reduce image search time. A real-time emotion detection system is presented which is capable of identifying seven affective states (agreeing, concentrating, disagreeing, interested, thinking, unsure, and angry) from a near-infrared video stream.
The Active Appearance Model is used to place 23 landmark points around key areas of the eyes, brows, and mouth. A prioritized binary decision tree then detects, based on the actions of these key points, whether one of the seven emotional states occurs as frames pass. The completed system runs accurately and achieves a real-time frame rate of approximately 36 frames per second. A novel facial feature localization technique utilizing a nested cascade classifier tree is proposed. A coarse-to-fine search is performed in which the regions of interest are defined by the responses of the Haar-like features comprising the cascade classifiers. The individual responses of the Haar-like features are also used to activate finer-level searches. A specially cropped training set derived from the Cohn-Kanade AU-Coded database is also developed and tested. Extensions of this research include further testing to verify the proposed facial feature localization technique on a full 26-point face model, and implementation of a real-time, intensity-sensitive automated Facial Action Coding System.
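    The attentional-cascade idea underpinning this work can be sketched in a few lines: each stage sums weighted weak responses and rejects the sub-window early if the sum falls below the stage threshold, so most background windows exit after the cheap first stage. The stage weights, thresholds, and feature values are illustrative assumptions, not the dissertation's trained cascade.

```python
def cascade_accepts(feature_values, stages):
    """stages: list of (weights, threshold); reject at the first failed stage."""
    for weights, threshold in stages:
        score = sum(w * feature_values[i] for i, w in weights.items())
        if score < threshold:
            return False  # early rejection: no later stage is evaluated
    return True

# Two assumed stages over a vector of precomputed Haar-like responses.
stages = [({0: 1.0, 1: 0.5}, 1.0),          # cheap coarse stage, two features
          ({0: 0.5, 2: 1.0, 3: 1.0}, 2.0)]  # finer stage, more features
face_like = [1.2, 0.8, 1.0, 0.9]
background = [0.1, 0.2, 0.1, 0.1]
print(cascade_accepts(face_like, stages),
      cascade_accepts(background, stages))  # True False
```

    Nesting such cascades, as the proposed classifier tree does, repeats this coarse-to-fine rejection at successively smaller regions of interest.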

    Analysis of yawning behaviour in spontaneous expressions of drowsy drivers

    Driver fatigue is one of the main causes of road accidents. It is essential to develop a reliable driver drowsiness detection system that can alert drivers without disturbing them and is robust to environmental changes. This paper explores yawning behaviour as a sign of drowsiness in spontaneous expressions of drowsy drivers in simulated driving scenarios. We analyse a labelled dataset of videos of sleep-deprived versus alert drivers and demonstrate the correlation between hand-over-face touches, face occlusions and yawning. We propose that face touches can be used as a novel cue in automated drowsiness detection alongside yawning and eye behaviour. Moreover, we present an automatic approach to detect yawning based on extracting geometric and appearance features of both the mouth and eye regions. Our approach successfully detects both hand-covered and uncovered yawns with an accuracy of 95%. Ultimately, our goal is to use these results in designing a hybrid drowsiness-detection system.
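    A minimal sketch of the geometric side of such a detector: a mouth aspect ratio (vertical opening over width) from a few landmarks, thresholded to flag a wide-open mouth. The landmark layout and threshold are illustrative assumptions, not the paper's actual feature set.

```python
def mouth_aspect_ratio(top, bottom, left, right):
    """Vertical mouth opening divided by mouth width."""
    height = abs(bottom[1] - top[1])
    width = abs(right[0] - left[0])
    return height / width

def is_yawning(landmarks, threshold=0.6):
    """Assumed threshold: a wide-open mouth has a tall, narrow aspect."""
    return mouth_aspect_ratio(*landmarks) > threshold

# Landmarks as (top lip, bottom lip, left corner, right corner), assumed.
closed = [(5.0, 10.0), (5.0, 11.0), (2.0, 10.5), (8.0, 10.5)]
yawn   = [(5.0, 10.0), (5.0, 15.0), (3.0, 12.5), (7.0, 12.5)]
print(is_yawning(closed), is_yawning(yawn))  # False True
```

    Handling hand-covered yawns, as the paper does, would additionally require appearance features and an occlusion cue rather than mouth landmarks alone.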
