
    Recognising facial expressions in video sequences

    We introduce a system that processes a sequence of images of a front-facing human face and recognises a set of facial expressions. We use an efficient appearance-based face tracker to locate the face in the image sequence and estimate the deformation of its non-rigid components. The tracker works in real time. It is robust to strong illumination changes and factors out changes in appearance caused by illumination from changes due to face deformation. We adopt a model-based approach for facial expression recognition. In our model, an image of a face is represented by a point in a deformation space. The variability of the classes of images associated with facial expressions is represented by a set of samples which model a low-dimensional manifold in the space of deformations. We introduce a probabilistic procedure based on a nearest-neighbour approach to combine the information provided by the incoming image sequence with the prior information stored in the expression manifold in order to compute a posterior probability associated with a facial expression. In the experiments conducted, we show that this system is able to work in an unconstrained environment with strong changes in illumination and face location. It achieves an 89% recognition rate on a set of 333 sequences from the Cohn-Kanade database.
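
    A minimal sketch of the kind of nearest-neighbour posterior computation described above, assuming the tracker yields a fixed-length deformation vector per frame and each expression class is represented by a set of stored manifold samples; the Gaussian-kernel weighting, bandwidth parameter, and function names are illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch: posterior over expression classes from a deformation
# vector, using the distance to the nearest stored sample of each class.
import numpy as np

def expression_posterior(deformation, manifold_samples, bandwidth=1.0):
    """deformation      : (d,) deformation estimate from the face tracker
    manifold_samples : dict mapping expression label -> (n_i, d) sample array
    bandwidth        : assumed kernel width controlling how evidence decays"""
    scores = {}
    for label, samples in manifold_samples.items():
        # distance to the closest stored sample of this expression class
        d_min = np.min(np.linalg.norm(samples - deformation, axis=1))
        # Gaussian kernel turns the distance into an unnormalised likelihood
        scores[label] = np.exp(-0.5 * (d_min / bandwidth) ** 2)
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

# toy usage with random data standing in for real deformation samples
rng = np.random.default_rng(0)
manifold = {e: rng.normal(size=(50, 8)) for e in ("neutral", "happy", "surprise")}
print(expression_posterior(rng.normal(size=8), manifold))
```

    In a sequential setting, the per-frame posteriors could be accumulated over the incoming image sequence before a decision is made, which is loosely the role the paper's probabilistic procedure plays.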

    Video-based driver identification using local appearance face recognition

    In this paper, we present a person identification system for vehicular environments. The proposed system uses face images of the driver and utilizes local appearance-based face recognition over the video sequence. To perform local appearance-based face recognition, the input face image is decomposed into non-overlapping blocks and the discrete cosine transform is applied to each local block to extract local features. The extracted local features are then combined to construct the overall feature vector. This process is repeated for each video frame. The distribution of the feature vectors over the video is modelled using a Gaussian distribution function at the training stage. During testing, the feature vector extracted from each frame is compared to each person's distribution, and individual likelihood scores are generated. Finally, the person is identified as the one with the maximum joint-likelihood score. To assess the performance of the developed system, extensive experiments are conducted on different identification scenarios, such as closed-set identification, open-set identification, and verification. For the experiments, a subset of the CIAIR-HCC database, an in-vehicle data corpus collected at Nagoya University, Japan, is used. We show that, despite the varying environmental and illumination conditions that commonly exist in vehicular environments, it is possible to identify individuals robustly from their face images. Index Terms: local appearance face recognition, vehicle environment, discrete cosine transform, fusion.
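
    A minimal sketch of block-wise DCT feature extraction and per-person Gaussian scoring in the spirit of the local appearance approach described above; the 8x8 block size, the row-major truncation of coefficients (the usual choice is a zig-zag scan), the diagonal-covariance Gaussian, and the summed per-frame log-likelihoods are all assumptions made for illustration.

```python
# Hypothetical sketch: local appearance features via block DCT, scored against
# per-person Gaussian models; frame scores are summed into a joint likelihood.
import numpy as np
from scipy.fft import dctn

def block_dct_features(face, block=8, coeffs=5):
    """Decompose a grayscale face image into non-overlapping blocks, apply a
    2-D DCT to each block, and keep the first few coefficients per block."""
    h, w = face.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = dctn(face[y:y + block, x:x + block], norm="ortho")
            feats.append(c.flatten()[:coeffs])   # low-frequency coefficients
    return np.concatenate(feats)

def log_likelihood(feat, mean, var):
    # log-density of a diagonal-covariance Gaussian (an assumed model form)
    return -0.5 * np.sum((feat - mean) ** 2 / var + np.log(2 * np.pi * var))

def identify(frames, models):
    """models: dict person_id -> (mean, var) arrays learned at training time.
    Returns the identity with the maximum joint (summed) log-likelihood."""
    scores = {pid: sum(log_likelihood(block_dct_features(f), m, v) for f in frames)
              for pid, (m, v) in models.items()}
    return max(scores, key=scores.get)
```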

    Low Resolution Face Recognition in Surveillance Systems

    In surveillance systems, the captured facial images are often very small and different from the low-resolution images down-sampled from high-resolution facial images. They generally lead to low performance in face recognition. In this paper, we study specific scenarios of face recognition with surveillance cameras. Three important factors that influence face recognition performance are investigated: the type of camera, the distance between the object and the camera, and the resolution of the captured face images. Each factor is numerically investigated and analyzed in this paper. Based on these observations, a new approach is proposed for face recognition in real surveillance environments. For a raw video sequence captured by a surveillance camera, image pre-processing techniques are employed to remove illumination variations and enhance image quality. The face images are further improved through a novel face image super-resolution method. The proposed approach is shown to significantly improve face recognition performance, as demonstrated by experiments.
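
    A minimal pre-processing sketch under assumed tools: contrast-limited adaptive histogram equalisation (CLAHE) stands in for the illumination-removal step, and plain bicubic upscaling stands in for the paper's novel super-resolution method, which is not reproduced here.

```python
# Hypothetical sketch: enhance a low-resolution surveillance face crop before
# recognition. CLAHE and bicubic interpolation are simple placeholder choices.
import cv2

def enhance_face(face_bgr, scale=4):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    # reduce illumination variation with local histogram equalisation
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    norm = clahe.apply(gray)
    # placeholder for the paper's super-resolution step: bicubic upscaling
    h, w = norm.shape
    return cv2.resize(norm, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
```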

    A dynamic texture based approach to recognition of facial actions and their temporal models

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images (MHIs) and a novel method based on Non-rigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domains. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed approach achieved an average event recognition accuracy of 89.2 percent with the MHI method and 94.3 percent with the FFD method. The generalization performance of the FFD method has been tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
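
    A minimal sketch of the Motion History Image representation mentioned above, with an assumed frame-difference threshold and linear decay; the FFD-based alternative and the downstream orientation-histogram, GentleBoost, and HMM stages are not shown.

```python
# Hypothetical sketch: build a Motion History Image (MHI) from grayscale frames.
# Recently moving pixels are bright; older motion fades linearly towards zero.
import numpy as np

def motion_history(frames, tau=15, threshold=25):
    """frames: list of equally sized grayscale images (uint8 numpy arrays)."""
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    for prev, curr in zip(frames, frames[1:]):
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        moving = diff > threshold                     # assumed motion threshold
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi / tau   # normalised to [0, 1] for use as a descriptor input
```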

    A Comparative Analysis of the Face Recognition Methods in Video Surveillance Scenarios

    Facial recognition is fundamental for a wide variety of security systems operating in real-time applications. In video-surveillance-based face recognition, face images are typically captured over multiple frames under uncontrolled conditions, where head pose, illumination, shadowing, motion blur, and focus change over the sequence. Facial recognition tasks generally involve three fundamental operations: face detection, face alignment, and face recognition. This study presents comparative benchmark tables for state-of-the-art face recognition methods by testing them with the same backbone architecture, so that the comparison focuses on the face recognition solution rather than the network architecture. For this purpose, we constructed a video surveillance dataset of face IDs with high age variance and intra-class variance (face make-up, beard, etc.), built from native surveillance facial imagery, for evaluation. In addition, this work identifies the best-performing recognition methods for different conditions, such as non-masked faces, masked faces, and faces with glasses.
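
    A minimal sketch of how a backbone-controlled comparison might be scored, assuming each method under test produces fixed-length face embeddings from the shared backbone; rank-1 identification by cosine similarity is an illustrative choice, not necessarily the metric used in the study.

```python
# Hypothetical sketch: rank-1 identification accuracy from gallery/probe
# embeddings, so different recognition heads can be compared on equal footing.
import numpy as np

def rank1_accuracy(gallery, gallery_ids, probes, probe_ids):
    """gallery, probes: (n, d) embedding matrices; *_ids: arrays of labels."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    nearest = np.argmax(p @ g.T, axis=1)      # closest gallery match per probe
    return float(np.mean(gallery_ids[nearest] == probe_ids))
```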