9,778 research outputs found

    An In-Vehicle Vision-Based Driver's Drowsiness Detection System

    Many traffic accidents have been attributed to driver drowsiness and fatigue. Drowsiness degrades driving performance through declining vigilance, situational awareness, and decision-making capability. In this study, a vision-based drowsiness detection and warning system is presented that attempts to bring a driver's potential drowsiness to his/her own attention. The information provided by the system can also be utilized by adaptive systems to manage noncritical operations, such as starting a ventilator, spreading fragrance, turning on a radio, or offering entertainment options. In highly drowsy situations, the system may initiate navigation aids and alert others to the driver's drowsiness. The system estimates the fatigue level of a driver from facial images acquired by a video camera mounted in the front of the vehicle.

    There are five major steps in the system process: preprocessing, facial feature extraction, face tracking, parameter estimation, and reasoning. In the preprocessing step, the input image is sub-sampled to reduce the image size and, in turn, the processing time. A lighting compensation process is then applied to the reduced image to remove the influence of ambient illumination variations. Afterwards, a number of chrominance values are calculated for each image pixel, to be used in the next step for detecting facial features. The feature extraction step consists of four sub-steps: skin detection, face localization, eye and mouth detection, and feature confirmation. First, skin areas are located in the image based on the pixels' chrominance values and a predefined skin model. The face region is then sought within the largest skin area. However, the detected face is typically imperfect, and facial feature detection within an imperfect face region is unreliable; we therefore look for facial features throughout the entire image and use the face region only to confirm the detected features.

    Once facial features are located, they are tracked over the video sequence until detection fails in a video frame, at which point the facial feature detection process is invoked again. Although facial feature detection is time consuming, facial feature tracking is fast and reliable. During tracking, parameters of facial expression are estimated, including the percentage of eye closure over time, eye blinking frequency, the durations of eye closure, gaze, and mouth opening, as well as head orientation. The estimated parameters are then used in the reasoning step to determine the driver's drowsiness level: a fuzzy integral technique integrates the various parameter values to arrive at a decision. A number of video sequences of different drivers and illumination conditions have been tested. The results show that the system works reasonably well in the daytime. In future work, the system may be extended to nighttime operation, which would require infrared sensors.
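
    The abstract names a fuzzy integral for combining the estimated parameters but does not give its form. The sketch below shows one common variant, the discrete Choquet integral, applied to hypothetical per-cue drowsiness scores; the cue names (perclos, blink_rate, head_nod) and the toy fuzzy measure g are illustrative assumptions, not the paper's actual design.

        import numpy as np

        def perclos(closed: np.ndarray) -> float:
            """PERCLOS: fraction of frames in a window in which the eyes are closed."""
            return float(np.mean(closed))

        def choquet_integral(scores: dict, measure) -> float:
            """Discrete Choquet integral of per-cue scores in [0, 1].

            measure(subset) must return the fuzzy measure (importance) of a
            frozenset of cue names, with measure(all cues) == 1.
            """
            items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
            total, prev = 0.0, 0.0
            for i, (_, value) in enumerate(items):
                remaining = frozenset(name for name, _ in items[i:])  # cues scoring >= value
                total += (value - prev) * measure(remaining)
                prev = value
            return total

        # Toy usage: three cues and a simple monotone measure.
        scores = {"perclos": 0.8, "blink_rate": 0.5, "head_nod": 0.3}
        g = lambda subset: min(1.0, 0.5 * len(subset))
        drowsiness_level = choquet_integral(scores, g)  # 0.65 for this toy measure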

    A graphical model based solution to the facial feature point tracking problem

    In this paper, a facial feature point tracker motivated by applications such as human-computer interfaces and facial expression analysis systems is proposed. The proposed tracker is based on a graphical model framework. The facial features are tracked through video streams by incorporating statistical relations in time as well as spatial relations between feature points. By exploiting the spatial relationships between feature points, the proposed method provides robustness in real-world conditions such as arbitrary head movements and occlusions. A Gabor feature-based occlusion detector is developed and used to handle occlusions. The performance of the proposed tracker has been evaluated on real video data under various conditions, including occluded facial gestures and head movements. It is also compared to two popular methods: one based on Kalman filtering, which exploits temporal relations, and the other based on active appearance models (AAM). Improvements provided by the proposed approach are demonstrated through both visual displays and quantitative analysis.
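
    The Gabor feature-based occlusion detector is only named in the abstract. As a rough illustration of the idea, the sketch below compares a bank of Gabor responses at a tracked point against a stored template and flags occlusion when the similarity drops; the filter parameters, the four orientations, and the 0.7 threshold are assumptions, and the patch is assumed square and grayscale.

        import numpy as np

        def gabor_kernel(size, sigma, theta, lambd, psi=0.0, gamma=0.5):
            """Real part of a Gabor filter on a size x size grid."""
            r = (size - 1) / 2.0
            y, x = np.mgrid[-r:r + 1, -r:r + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
                    * np.cos(2 * np.pi * xr / lambd + psi))

        def gabor_descriptor(patch, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
            """L2-normalised responses of a small Gabor bank over a square patch."""
            v = np.array([np.sum(patch * gabor_kernel(patch.shape[0], 4.0, t, 8.0))
                          for t in thetas])
            return v / (np.linalg.norm(v) + 1e-8)

        def is_occluded(patch, template_descriptor, threshold=0.7):
            """Flag occlusion when the patch stops resembling the stored template."""
            return float(gabor_descriptor(patch) @ template_descriptor) < threshold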

    Graphical model based facial feature point tracking in a vehicle environment

    Facial feature point tracking is a research area with applications in human-computer interaction (HCI), facial expression analysis, fatigue detection, and so on. In this paper, a statistical method for facial feature point tracking is proposed. Feature point tracking is challenging when the data are uncertain because of noise and/or occlusions. With this motivation, a graphical model is built that incorporates not only temporal information about feature point movements but also information about the spatial relationships between such points. Based on this model, an algorithm that achieves feature point tracking through a video observation sequence is implemented. The proposed method is applied to 2D grayscale real video sequences taken in a vehicle environment, and its superiority over existing techniques is demonstrated.
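
    The structure of the graphical model is not reproduced in the abstract. The following minimal sketch conveys the underlying intuition only: each point's noisy per-frame measurement (the temporal cue) is fused with the position implied by the other points plus rest-shape offsets (the spatial cue). The rest shape, the uniform averaging, and the 0.7 weight are assumptions, not the paper's learned potentials.

        import numpy as np

        def fuse_step(measured, rest, w_meas=0.7):
            """Fuse each point's own measurement with a spatial prediction.

            measured : (n, 2) noisy landmark detections for the current frame
            rest     : (n, 2) reference (rest-shape) landmark configuration
            """
            n = len(measured)
            fused = np.empty((n, 2))
            for i in range(n):
                # Spatial prediction: where point i should sit according to
                # every other point j plus the rest-shape offset rest[i] - rest[j].
                spatial = np.mean([measured[j] + (rest[i] - rest[j])
                                   for j in range(n) if j != i], axis=0)
                fused[i] = w_meas * measured[i] + (1 - w_meas) * spatial
            return fused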

    Automatic Detection of Pain from Spontaneous Facial Expressions

    This paper presents a new approach for detecting pain in sequences of spontaneous facial expressions. The motivation for this work is to accompany mobile-based self-management of chronic pain as a virtual sensor for tracking patients' expressions in real-world settings. Operating under such constraints requires a resource-efficient approach for processing non-posed facial expressions from unprocessed temporal data. In this work, the facial action units of pain are modeled as sets of distances among related facial landmarks. Using standardized measurements of pain versus no-pain that are specific to each user, changes in the extracted features in relation to pain are detected. The activated features in each frame are combined using an adapted form of the Prkachin and Solomon Pain Intensity (PSPI) scale to detect the presence of pain per frame. Painful features must be activated in N consecutive frames (a time window) to indicate the presence of pain in a session. The method was tested on 171 video sessions of 19 subjects from the McMaster pain dataset of spontaneous facial expressions. The results show higher precision than recall in detecting sequences of pain: the algorithm achieves 94% precision (F-score = 0.82) against human-observed labels, 74% precision (F-score = 0.62) against automatically generated pain intensities, and 100% precision (F-score = 0.67) against self-reported pain intensities.
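
    A minimal sketch of the three ingredients described above: landmark-distance features, per-user standardization against a no-pain baseline, and the N-consecutive-frame rule. The landmark index pairs are hypothetical placeholders; the actual pairs depend on the landmark scheme and on the pain-related action units being modeled.

        import numpy as np

        # Hypothetical index pairs standing in for pain-related action units
        # (e.g., brow lowering, orbit tightening, lip-corner movement).
        PAIN_PAIRS = [(21, 39), (22, 42), (48, 54)]

        def distance_features(landmarks, pairs=PAIN_PAIRS):
            """Distances among related facial landmarks for one frame."""
            return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                             for i, j in pairs])

        def standardize(features, baseline_mean, baseline_std):
            """Express features relative to the user's own no-pain baseline."""
            return (features - baseline_mean) / (baseline_std + 1e-8)

        def pain_in_session(frame_scores, threshold, n_consecutive):
            """Report pain only if the per-frame score exceeds the threshold
            in at least N consecutive frames (the time-window rule)."""
            run = 0
            for score in frame_scores:
                run = run + 1 if score > threshold else 0
                if run >= n_consecutive:
                    return True
            return False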

    Machine Analysis of Facial Expressions

    No abstract

    3D face tracking and multi-scale, spatio-temporal analysis of linguistically significant facial expressions and head positions in ASL

    Essential grammatical information is conveyed in signed languages by clusters of events involving facial expressions and movements of the head and upper body. This poses a significant challenge for computer-based sign language recognition. Here, we present new methods for the recognition of nonmanual grammatical markers in American Sign Language (ASL) based on: (1) new 3D tracking methods for the estimation of 3D head pose and facial expressions to determine the relevant low-level features; (2) methods for higher-level analysis of the component events (raised/lowered eyebrows, periodic head nods, and head shakes) used in grammatical markings, with differentiation of temporal phases (onset, core, and offset, where appropriate), analysis of their characteristic properties, and extraction of corresponding features; and (3) a two-level learning framework to combine low- and high-level features of differing spatio-temporal scales. This new approach achieves significantly better tracking and recognition results than our previous methods.
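
    The abstract does not spell out the learners inside the two-level framework. Purely as an illustration of the combination pattern, the sketch below stacks a first-level classifier over fine-scale per-frame features and feeds its scores, together with coarser event-level features, to a second-level classifier; logistic regression is an arbitrary stand-in for the paper's actual models.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def fit_two_level(X_frame, X_event, y):
            """Level 1 scores low-level (per-frame) features; level 2 combines
            those scores with high-level event features for the final label."""
            level1 = LogisticRegression(max_iter=1000).fit(X_frame, y)
            stacked = np.hstack([level1.predict_proba(X_frame), X_event])
            level2 = LogisticRegression(max_iter=1000).fit(stacked, y)
            return level1, level2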

    On using gait to enhance frontal face extraction

    Visual surveillance finds increasing deployment for monitoring urban environments. Operators need to be able to determine identity from surveillance images and often use face recognition for this purpose. In surveillance environments, it is necessary to handle pose variation of the human head, low frame rates, and low-resolution input images. We describe the first use of gait to enable face acquisition and recognition, by analysis of 3D head motion and gait trajectory, with super-resolution analysis. We use region- and distance-based refinement of head pose estimation and develop a direct mapping that relates the 2D image to a 3D model. In the gait trajectory analysis, we model the looming effect so as to obtain the correct face region. Based on head position and the gait trajectory, we can reconstruct high-quality frontal face images which are demonstrated to be suitable for face recognition. The contributions of this research include the construction of a 3D model for pose estimation from planar imagery and the first use of gait information to enhance the face extraction process, allowing for deployment in surveillance scenarios.
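
    The super-resolution step is only named in the abstract. Below is a textbook shift-and-add sketch of the general technique, assuming the sub-pixel shifts (dy, dx) of each registered low-resolution face crop have already been estimated, e.g. from the head position and gait trajectory; it is not the paper's pipeline.

        import numpy as np

        def shift_and_add(frames, shifts, scale=2):
            """Naive shift-and-add super-resolution: up-sample each registered
            low-resolution crop, undo its estimated shift on the fine grid,
            and average the aligned results."""
            acc = None
            for frame, (dy, dx) in zip(frames, shifts):
                up = np.kron(frame, np.ones((scale, scale)))  # nearest-neighbour up-sampling
                up = np.roll(up, (-int(round(dy * scale)), -int(round(dx * scale))),
                             axis=(0, 1))
                acc = up if acc is None else acc + up
            return acc / len(frames)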