15 research outputs found

    Nonparametric Facial Feature Localization Using Segment-Based Eigenfeatures

    We present a nonparametric facial feature localization method that uses relative directional information between regularly sampled image segments and facial feature points. Instead of relying on an iterative parameter optimization technique or a search algorithm, our method locates facial feature points via a weighted concentration of directional vectors that originate at the image segments and point toward the expected feature positions. Each directional vector is computed as a linear combination of eigendirectional vectors, which are obtained by a principal component analysis of training facial segments in the feature space of histograms of oriented gradients (HOG). The method finds facial feature points quickly and accurately, since it draws statistical evidence from all of the training data and requires neither local pattern extraction at the estimated feature positions, nor iterative parameter optimization, nor a search algorithm. In addition, the storage size of the trained model can be reduced by controlling the energy-preserving level of the HOG pattern space.
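    A minimal sketch of the segment-based directional voting this abstract describes: each regularly sampled segment is described by HOG, projected onto learned principal axes, and the resulting coefficients weight per-axis offset vectors whose votes concentrate at the feature point. The function name and the trained-model arrays (hog_mean, hog_axes, dir_mean, eigen_dirs) are illustrative assumptions, not the authors' implementation.

    ```python
    # Hedged sketch of segment-based eigenfeature voting (illustrative only).
    import numpy as np
    from skimage.feature import hog

    def localize_feature(image, hog_mean, hog_axes, dir_mean, eigen_dirs,
                         seg=16, stride=16):
        """Estimate one facial feature point by directional voting.

        image:      2-D grayscale array.
        hog_mean:   (d,)   mean HOG descriptor of the training segments.
        hog_axes:   (k, d) principal axes of the training HOG space.
        dir_mean:   (2,)   mean offset from a segment center to the feature.
        eigen_dirs: (k, 2) eigendirectional vectors paired with those axes.
        """
        h, w = image.shape
        votes = np.zeros((h, w))
        for y in range(0, h - seg + 1, stride):
            for x in range(0, w - seg + 1, stride):
                desc = hog(image[y:y + seg, x:x + seg],
                           pixels_per_cell=(8, 8), cells_per_block=(1, 1))
                coeffs = hog_axes @ (desc - hog_mean)    # project into HOG eigenspace
                offset = dir_mean + coeffs @ eigen_dirs  # linear combination of eigendirections
                ty = int(round(y + seg / 2 + offset[1]))
                tx = int(round(x + seg / 2 + offset[0]))
                if 0 <= ty < h and 0 <= tx < w:
                    votes[ty, tx] += 1.0                 # concentrate directional votes
        # The feature location is the peak of the vote map; no iterative
        # optimization or search is involved.
        return np.unravel_index(votes.argmax(), votes.shape)
    ```

    Keeping only the top k principal axes shrinks both hog_axes and eigen_dirs, which matches the storage/accuracy trade-off the abstract mentions.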

    Reconstructing dynamic morphable models of the human face

    No full text
    In this thesis we develop new techniques to detect, reconstruct, and track human faces from pure image data. It is divided into two parts: the first part considers static faces only, while the second deals with dynamic facial movements.

    For static faces we introduce a new facial feature localization method that determines the position of facial features relative to segments uniformly distributed over an input image. We introduce and train a compact codebook that forms the foundation of a voting scheme: based on the appearance of an image segment, the codebook provides offset vectors originating from the segment's center and pointing toward possible feature locations. Compared to state-of-the-art methods, we show that this compact codebook has advantages in computation time and memory consumption without losing accuracy. Leaving the two-dimensional image space, we then introduce and compare two new 3D reconstruction approaches that extract the 3D shape of a human face from multiple images taken synchronously by a calibrated camera rig.

    With the aim of generating a large database of 3D facial movements, in the second part of this thesis we extend both systems to reconstruct and track human faces in 3D from videos captured by our camera rig. Both systems are purely image based and do not require facial markers of any kind. By carefully taking all requirements and characteristics into account and discussing each step of the pipeline, we propose a facial reconstruction system that efficiently and robustly deforms a generic 3D mesh template to track a human face over time. Our tracking system preserves temporal and spatial correspondences between reconstructed faces, so the resulting database of facial movements, showing different facial expressions of a fairly large number of subjects, can be used for further statistical analysis and to compute a generic movement model for facial actions. This movement model is independent of individual facial physiognomies.

    In the last chapter we introduce a new markerless 3D face tracking approach for 2D video streams captured by a single consumer-grade camera. Our approach tracks 2D facial features and uses them to drive the evolution of our generic motion model. Our major contribution here lies in the formulation of a smooth deformation prior derived from the generic motion model. We show that the derived motions can be mapped back onto the individual facial shape, which reconstructs the facial performance seen in the video sequence. Additionally, we show that the motion can be mapped to another facial shape to drive the facial performance of a different (virtual) character. We demonstrate the effectiveness of our technique on a number of examples.
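    As a rough, hedged illustration of the retargeting idea in the last chapter: if the generic movement model is parameterized as a linear basis of physiognomy-independent motion modes (an assumption on our part; retarget, motion_basis, and coeffs_per_frame are illustrative names, not the thesis' API), then mapping a tracked performance onto a different character reduces to applying each frame's combination of modes to that character's neutral geometry.

    ```python
    # Hedged sketch: drive a new facial shape with a generic linear movement
    # model (illustrative parameterization, not the thesis' implementation).
    import numpy as np

    def retarget(neutral_vertices, motion_basis, coeffs_per_frame):
        """Apply a tracked facial performance to a different neutral face.

        neutral_vertices: (n, 3)    target character's neutral face mesh.
        motion_basis:     (k, n, 3) generic, physiognomy-independent motion modes.
        coeffs_per_frame: (f, k)    mode coefficients tracked from the video.
        Returns (f, n, 3) animated vertex positions for all f frames.
        """
        # Each frame's deformation is a linear combination of motion modes;
        # the target's own (neutral) physiognomy is left untouched.
        deform = np.tensordot(coeffs_per_frame, motion_basis, axes=(1, 0))
        return neutral_vertices[None, :, :] + deform
    ```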

    Fast interactive region of interest selection for volume visualization

    No full text

    High Speed Circles

    No full text
