
    Facial Point Detection using Boosted Regression and Graph Models

    Finding fiducial facial points in any frame of a video showing rich naturalistic facial behaviour is an unsolved problem. Yet this is a crucial step for geometric-feature-based facial expression analysis, and for methods that use appearance-based features extracted at fiducial facial point locations. In this paper we present a method based on a combination of Support Vector Regression and Markov Random Fields that drastically reduces the time needed to search for a point's location and increases the accuracy and robustness of the algorithm. Using Markov Random Fields allows us to constrain the search space by exploiting the constellations that facial points can form. The regressors, on the other hand, learn a mapping between the appearance of the area surrounding a point and the position of that point, which makes detection of the points very fast and robust to variations in appearance due to facial expression and moderate changes in head pose. The proposed point detection algorithm was tested on 1855 images, and the results show that it outperforms current state-of-the-art point detectors.
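    To make the combination concrete, here is a minimal Python sketch (not the authors' code; the features, dimensions, and energies are invented for illustration) of the two ingredients the abstract pairs: a Support Vector Regressor that scores candidate locations from local appearance, and a pairwise Markov-Random-Field-style energy that penalises implausible point constellations.

        # Sketch of SVR appearance term + MRF-style pairwise shape term.
        # All data is synthetic; real use would extract patch descriptors
        # from the image and learn inter-point statistics from training shapes.
        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)

        # 1. Train an SVR mapping patch appearance -> offset to the true point.
        n_train, feat_dim = 500, 64
        X = rng.normal(size=(n_train, feat_dim))            # fake patch descriptors
        true_w = rng.normal(size=feat_dim)
        dx = X @ true_w * 0.01 + rng.normal(scale=0.1, size=n_train)  # fake offsets
        svr_x = SVR(kernel="rbf", C=1.0).fit(X, dx)

        # 2. MRF-style pairwise prior: penalise deviation from a learned
        #    inter-point distance (a single pair here for brevity).
        def pairwise_energy(p, q, mean_dist, sigma=2.0):
            d = np.linalg.norm(p - q)
            return ((d - mean_dist) / sigma) ** 2

        candidates = rng.uniform(0, 100, size=(20, 2))  # candidates for point A
        neighbour = np.array([55.0, 40.0])              # current estimate of point B
        mean_dist = 20.0                                # learned from training shapes

        def total_energy(p):
            patch_feat = rng.normal(size=feat_dim)      # would be extracted at p
            unary = abs(svr_x.predict(patch_feat[None])[0])  # small offset = good fit
            return unary + pairwise_energy(p, neighbour, mean_dist)

        best = min(candidates, key=total_energy)
        print("selected location for point A:", best)

    In the full method the pairwise term would couple all point pairs in the constellation and be optimised jointly; the point of the sketch is only how the regression output and the shape prior combine into one energy.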

    Real-time View-based Face Alignment using Active Wavelet Networks

    The Active Wavelet Network (AWN) [9] approach was recently proposed for automatic face alignment, showing advantages over Active Appearance Models (AAM) such as greater robustness against partial occlusions and illumination changes. In this paper, we (1) extend the AWN method to a view-based approach, (2) verify the robustness of our algorithm with respect to unseen views on a large dataset, and (3) show that using only nine wavelets, our method yields performance similar to state-of-the-art face alignment systems, with a significant enhancement in speed. After optimization, our system requires only 3 ms per iteration on a 1.6 GHz Pentium IV. We show applications in face alignment for recognition and in real-time facial feature tracking under large pose variations.
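    As a rough illustration of the wavelet-network idea (a sketch under invented parameters, not the paper's learned model), the Python snippet below builds nine Gabor wavelets, fits their weights to an image by least squares, and measures the reconstruction residual that an alignment loop would then minimise over pose parameters.

        # Wavelet-network sketch: a face template as a weighted sum of nine
        # Gabor wavelets; alignment would warp the image and minimise the residual.
        import numpy as np

        def gabor(shape, cx, cy, theta, freq, sigma):
            """One 2-D Gabor wavelet centred at (cx, cy)."""
            y, x = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
            xr = (x - cx) * np.cos(theta) + (y - cy) * np.sin(theta)
            yr = -(x - cx) * np.sin(theta) + (y - cy) * np.cos(theta)
            return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

        shape = (64, 64)
        rng = np.random.default_rng(1)
        # Nine wavelets, matching the count reported in the abstract; the
        # parameters here are arbitrary placeholders, not the learned ones.
        params = [(rng.uniform(16, 48), rng.uniform(16, 48),
                   rng.uniform(0, np.pi), 0.1, 6.0) for _ in range(9)]
        basis = np.stack([gabor(shape, *p).ravel() for p in params])  # (9, 4096)

        face = rng.normal(size=shape).ravel()           # stand-in face image
        # Least-squares wavelet weights: the "network" coefficients.
        weights, *_ = np.linalg.lstsq(basis.T, face, rcond=None)
        recon = weights @ basis
        print("reconstruction residual:", np.linalg.norm(face - recon))
        # Alignment loop (not shown): warp the image by the current pose
        # estimate, recompute the residual, update the pose to reduce it.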

    Facial feature point tracking based on a graphical model framework

    In this thesis, a facial feature point tracker is proposed that can be used in applications such as human-computer interfaces, facial expression analysis systems, and driver fatigue detection systems. The proposed tracker is based on a graphical model framework. The positions of the facial features are tracked through video streams by incorporating statistical relations in time and spatial relations between feature points. In many application areas, including those mentioned above, tracking is a key intermediate step that has a significant effect on overall system performance. For this reason, a good practical tracking algorithm should take into account real-world phenomena such as arbitrary head movements and occlusions. Many existing algorithms track each feature point independently and do not properly handle occlusions, which causes drift in the case of arbitrary head movements and occlusions. By exploiting the spatial relationships between feature points, the proposed method provides robustness in a number of scenarios, including various head movements. To prevent drift caused by occlusions, a Gabor-feature-based occlusion detector is developed and used in the proposed method. The performance of the proposed tracker has been evaluated on real video data under various conditions, including occluded facial gestures, low video resolution, illumination changes in the scene, in-plane head motion, and out-of-plane head motion. The proposed method has also been tested on videos recorded in a vehicle environment, in order to evaluate its performance in a practical setting. Given these results, it can be concluded that the proposed method provides a general, promising framework for facial feature tracking. It is a robust tracker for facial expression sequences that contain occlusions and arbitrary head movements. The results in the vehicle environment suggest that the proposed method has the potential to be useful for tasks such as driver behavior analysis or driver fatigue detection.
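    The occlusion test described above might look roughly like the following Python sketch (an assumption about the mechanism, not the thesis code; the filter-bank parameters and similarity threshold are placeholders): a Gabor jet is extracted at the tracked point and compared against a stored template jet, and a low normalised similarity flags the measurement as occluded, so the graphical model would fall back on its spatial and temporal priors for that frame.

        # Gabor-jet occlusion check: low similarity to the template jet
        # suggests the point is occluded and its measurement unreliable.
        import numpy as np

        def gabor_jet(patch, n_orient=4, n_scale=3):
            """Responses of a small Gabor filter bank at the patch centre."""
            h, w = patch.shape
            yy, xx = np.mgrid[0:h, 0:w]
            yy = yy - h / 2.0
            xx = xx - w / 2.0
            jet = []
            for s in range(n_scale):
                freq = 0.25 / (2 ** s)
                sigma = 2.0 * (2 ** s)
                env = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
                for o in range(n_orient):
                    th = o * np.pi / n_orient
                    xr = xx * np.cos(th) + yy * np.sin(th)
                    jet.append(np.sum(patch * env * np.cos(2 * np.pi * freq * xr)))
            return np.asarray(jet)

        def is_occluded(template_jet, current_jet, thresh=0.6):
            sim = template_jet @ current_jet / (
                np.linalg.norm(template_jet) * np.linalg.norm(current_jet) + 1e-9)
            return sim < thresh

        rng = np.random.default_rng(2)
        template = rng.normal(size=(16, 16))
        t_jet = gabor_jet(template)
        print(is_occluded(t_jet, gabor_jet(template)))                   # False: same patch
        print(is_occluded(t_jet, gabor_jet(rng.normal(size=(16, 16)))))  # likely True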

    3D face structure extraction from images at arbitrary poses and under arbitrary illumination conditions

    In the wake of 9/11, face detection and recognition has become an important tool for securing homeland safety against potential terrorist attacks, by tracking and identifying suspects who might be trying to engage in such activities. It is also a technology that has proven its usefulness for law enforcement agencies, by helping to identify or narrow down a possible suspect from surveillance tape of a crime scene, or by quickly finding a suspect based on descriptions from witnesses.

    In this thesis we introduce several improvements to morphable-model-based algorithms and make use of the 3D face structures extracted from multiple images to conduct illumination analysis and face recognition experiments. We present an enhanced Active Appearance Model (AAM), which possesses several sub-models that are independently updated to introduce more model flexibility and achieve better feature localization. Most appearance-based models suffer from the unpredictability of the facial background, which can result in a bad boundary extraction. To overcome this problem we propose a local projection model that accurately locates face boundary landmarks. We also introduce a novel and unbiased cost function that casts face alignment as an optimization problem, where shape constraints obtained from direct motion estimation are incorporated to achieve a much higher convergence rate and more accurate alignment. Viewing angles are roughly categorized into four different poses, and customized view-based AAMs align face images in the different pose categories. We also obtain individual 3D face structures by morphing a 3D generic face model to fit the individual faces. The face contour is dynamically generated so that the morphed face looks realistic. To overcome the correspondence problem between facial feature points on the generic and the individual face, we use an approach based on distance maps. With the extracted 3D face structure we study the illumination effects on appearance using spherical harmonic illumination analysis. By normalizing the illumination conditions on different facial images, we extract a global illumination-invariant texture map, which, jointly with the extracted 3D face structure in the form of cubic morphing parameters, completely encodes an individual face and allows for the generation of images at arbitrary pose and under arbitrary illumination.

    Face recognition is conducted based on the face shape matching error, the texture error, and the illumination-normalized texture error. Experiments show that a higher face recognition rate is achieved by compensating for illumination effects. Furthermore, it is observed that the fusion of shape and texture information results in better performance than using either shape or texture information individually.

    Ph.D., Electrical Engineering -- Drexel University, 200
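    As a concrete illustration of the spherical-harmonic normalization step (a toy sketch, not the thesis implementation; the normals, albedo, and lighting are synthetic), the Python snippet below builds the first nine spherical harmonic basis images from per-pixel surface normals, solves for the lighting coefficients by least squares, and divides out the estimated shading to approximate an illumination-invariant texture map.

        # Spherical-harmonic illumination normalization on synthetic data:
        # image = albedo * shading, shading spanned by the first 9 SH bases.
        import numpy as np

        def sh_basis(normals):
            """First 9 real spherical harmonics at unit normals, shape (N, 3)."""
            x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
            return np.stack([
                np.ones_like(x),      # Y00
                y, z, x,              # Y1-1, Y10, Y11
                x * y, y * z,         # Y2-2, Y2-1
                3 * z**2 - 1,         # Y20
                x * z, x**2 - y**2,   # Y21, Y22
            ], axis=1)                # constant factors omitted for brevity

        rng = np.random.default_rng(3)
        n = rng.normal(size=(1000, 3))
        n /= np.linalg.norm(n, axis=1, keepdims=True)   # unit normals from 3D model
        albedo = rng.uniform(0.2, 1.0, size=1000)       # ground-truth texture (toy)

        B = sh_basis(n)
        light = rng.normal(size=9)                      # unknown lighting
        shading = np.clip(B @ light, 0.05, None)
        image = albedo * shading                        # observed intensities

        coef, *_ = np.linalg.lstsq(B, image, rcond=None)     # estimate lighting
        texture = image / np.clip(B @ coef, 0.05, None)      # normalized texture
        print("approx. albedo recovery error:", np.abs(texture - albedo).mean())

    The recovered texture is only approximately the albedo, since the least-squares lighting estimate absorbs the average albedo; in practice the normalization would be regularized and computed over the registered 3D face mesh rather than random normals.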