59 research outputs found

    Estimating Coloured 3D Face Models from Single Images: An Example Based Approach

    Abstract. In this paper we present a method to derive 3D shape and surface texture of a human face from a single image. The method draws on a general flexible 3D face model which is “learned” from examples of individual 3D-face data (Cyberware-scans). In an analysis-by-synthesis loop, the flexible model is matched to the novel face image. From the coloured 3D model obtained by this procedure, we can generate new images of the face across changes in viewpoint and illumination. Moreover, nonrigid transformations which are represented within the flexible model can be applied, for example changes in facial expression. The key problem for generating a flexible face model is the computation of dense correspondence between all given 3D example faces. A new correspondence algorithm is described which is a generalization of common algorithms for optic flow computation to 3D-face data.
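
    The core idea of a flexible model built from registered examples can be sketched in a few lines: once dense correspondence is established, every scan becomes a fixed-length vector of vertex coordinates, and new faces are linear combinations of the examples. The sketch below is a toy illustration under that assumption, not the paper's actual pipeline; the random "scans" stand in for registered Cyberware data.

```python
import numpy as np

# Toy example: three "scans", each 4 vertices already in dense
# correspondence, flattened to shape vectors of length 12 (x, y, z each).
rng = np.random.default_rng(0)
scans = rng.normal(size=(3, 12))

def morph(weights, examples):
    """Linear combination of example shape vectors.

    Weights that sum to 1 keep the result inside the convex hull of the
    examples, i.e. a plausible in-between face. This only works because
    dense correspondence aligns vertex i across all examples.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # normalise so the weights sum to 1
    return w @ examples      # (n,) @ (n, d) -> (d,)

blend = morph([0.5, 0.3, 0.2], scans)
assert blend.shape == (12,)
```

    The same combination applies to texture vectors; correspondence is what makes component-wise blending meaningful.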

    A Modular Computer Vision Sonification Model For The Visually Impaired

    Presented at the 18th International Conference on Auditory Display (ICAD2012) on June 18-21, 2012 in Atlanta, Georgia. Reprinted by permission of the International Community for Auditory Display, http://www.icad.org. This paper presents a Modular Computer Vision Sonification Model, a general framework for the acquisition, exploration and sonification of visual information to support visually impaired people. The model exploits techniques from Computer Vision and aims to convey as much information as possible about the image to the user, including color, edges and what we refer to as Orientation maps and Micro-Textures. We deliberately focus on low-level features to provide a very general image analysis tool. Our sonification approach relies on MIDI using "real-world" rather than synthetic instruments. The goal is to provide direct perceptual access to images or environments actively and in real time. Our system is already in use, at an experimental stage, at a local residential school, helping congenitally blind children develop various cognitive abilities such as geometric understanding and spatial sense, as well as offering an intuitive approach to colors and textures.
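
    A sonification of color via MIDI could look like the following sketch. The mapping (hue to pitch, brightness to velocity) is a hypothetical choice for illustration; the abstract does not specify the model's actual feature-to-sound assignment.

```python
import colorsys

def pixel_to_midi(r, g, b):
    """Hypothetical mapping of one RGB pixel (0-255 channels) to a MIDI note.

    Hue selects the pitch within a two-octave range and brightness
    (HSV value) controls the velocity. This is an assumed mapping,
    not the one used in the paper.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    pitch = 48 + int(round(h * 24))    # C3 .. C5 depending on hue
    velocity = int(round(v * 127))     # louder for brighter pixels
    return pitch, velocity

print(pixel_to_midi(255, 0, 0))   # pure red: hue 0 -> (48, 127)
```

    In a real system these (pitch, velocity) pairs would be sent to a MIDI synthesizer as the user explores image regions.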

    Face Recognition Based on Fitting a 3D Morphable Model

    Abstract—This paper presents a method for face recognition across variations in pose, ranging from frontal to profile views, and across a wide range of illuminations, including cast shadows and specular reflections. To account for these variations, the algorithm simulates the process of image formation in 3D space, using computer graphics, and it estimates 3D shape and texture of faces from single images. The estimate is achieved by fitting a statistical, morphable model of 3D faces to images. The model is learned from a set of textured 3D scans of heads. We describe the construction of the morphable model, an algorithm to fit the model to images, and a framework for face identification. In this framework, faces are represented by model parameters for 3D shape and texture. We present results obtained with 4,488 images from the publicly available CMU-PIE database and 1,940 images from the FERET database. Index Terms—Face recognition, shape estimation, deformable model, 3D faces, pose invariance, illumination invariance.
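
    The analysis-by-synthesis fitting described here can be illustrated with a toy linear "renderer": synthesize an image from model coefficients, compare it to the observed image, and descend the gradient of the error plus a Gaussian prior on the coefficients. The matrix renderer, step size and prior weight below are all assumed for the sketch; the actual method renders with full 3D graphics and a more elaborate optimizer.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(50, 5))       # toy linear "renderer": pixels = B @ alpha
alpha_true = np.array([1.0, -0.5, 0.2, 0.0, 0.8])
image = B @ alpha_true             # observed image

alpha = np.zeros(5)                # model coefficients, start at the mean face
lam, lr = 1e-3, 0.005              # prior weight and step size (assumed values)
for _ in range(1000):
    residual = B @ alpha - image           # synthesis error
    grad = B.T @ residual + lam * alpha    # data term + Gaussian prior pull
    alpha -= lr * grad                     # gradient step on the coefficients
```

    After fitting, the recovered coefficients themselves serve as the face representation for identification, which is the point of the framework.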

    A Morphable Model For The Synthesis Of 3D Faces

    In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces, avoiding faces with an "unlikely" appearance. Starting fro…
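
    The "naturalness" constraint can be read as keeping a face's model coefficients probable under a Gaussian prior estimated from the example scans. A minimal sketch of that idea, assuming PCA has already supplied per-component variances (the numbers here are made up):

```python
import numpy as np

# Assumed setup: PCA of the example faces gives per-component variances
# (eigenvalues of the example covariance matrix).
variances = np.array([9.0, 4.0, 1.0])

def plausible(coeffs, variances, max_dist=3.0):
    """Return True if a coefficient vector looks 'natural', i.e. its
    Mahalanobis distance from the mean face is below a threshold."""
    d = np.sqrt(np.sum(np.asarray(coeffs) ** 2 / variances))
    return bool(d <= max_dist)

print(plausible([1.0, 1.0, 0.5], variances))   # near the mean -> True
print(plausible([9.0, 6.0, 3.0], variances))   # far from the mean -> False
```

    A modeling interface could use such a test to warn about, or softly penalize, edits that drift into "unlikely" faces.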

    Component-based face recognition with 3D morphable models

    Abstract. We present a novel approach to pose and illumination invariant face recognition that combines two recent advances in the computer vision field: component-based recognition and 3D morphable models. First, a 3D morphable model is used to generate 3D face models from three input images from each person in the training database. The 3D models are rendered under varying pose and illumination conditions to build a large set of synthetic images. These images are then used to train a component-based face recognition system. The resulting system achieved 90% accuracy on a database of 1200 real images of six people and significantly outperformed a comparable global face recognition system. The results show the potential of the combination of morphable models and component-based recognition towards pose and illumination invariant face recognition based on only three training images of each subject.
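
    The component-based side of the system can be caricatured as follows: each subject is described by per-component feature vectors (eyes, mouth, etc.) aggregated over the synthetic renderings, and a probe is matched by combining component distances. The nearest-neighbour matcher below is a stand-in for the trained classifiers in the paper, with random vectors in place of real component features.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy gallery: for each of 3 subjects, features for 2 facial components
# (e.g. eyes, mouth), nominally averaged over many synthetic renderings.
gallery = rng.normal(size=(3, 2, 8))    # (subject, component, feature_dim)

def identify(probe, gallery):
    """Nearest-neighbour match on concatenated component features.

    Each component contributes its own feature vector; the combined
    distance over all components decides the identity.
    """
    flat_gallery = gallery.reshape(len(gallery), -1)
    dists = np.linalg.norm(flat_gallery - probe.reshape(-1), axis=1)
    return int(np.argmin(dists))

probe = gallery[1] + 0.05 * rng.normal(size=(2, 8))  # noisy view of subject 1
print(identify(probe, gallery))
```

    Training on synthetic renderings is what lets such a matcher tolerate pose and illumination changes despite only three real images per subject.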

    Face identification by fitting a 3D morphable model using linear shape and texture error functions

    Abstract. This paper presents a novel algorithm aiming at analysis and identification of faces viewed from different poses and illumination conditions. Face analysis from a single image is performed by recovering the shape and texture parameters of a 3D Morphable Model in an analysis-by-synthesis fashion. The shape parameters are computed from a shape error estimated by optical flow and the texture parameters are obtained from a texture error. The algorithm uses linear equations to recover the shape and texture parameters irrespective of pose and lighting conditions of the face image. Identification experiments are reported on more than 5000 images from the publicly available CMU-PIE database which includes faces viewed from 13 different poses and under 22 different illuminations. Extensive identification results are available on our web page for future comparison with novel algorithms.
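
    The practical payoff of a linear error function is that the parameters fall out of a single least-squares solve instead of an iterative search. A toy sketch under that assumption, with a random matrix standing in for the model's texture basis:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 4))          # toy stand-in for the texture basis
beta_true = np.array([0.7, -1.2, 0.3, 0.5])
texture_error = A @ beta_true          # observed texture error

# Because the error is linear in the parameters, one least-squares
# solve recovers them -- no iterative optimisation is needed.
beta, *_ = np.linalg.lstsq(A, texture_error, rcond=None)
```

    The same structure applies to the shape parameters, with the optical-flow shape error on the left-hand side.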

    Holistic processing of shape cues in face identification: evidence from face inversion, composite faces and acquired prosopagnosia

    Face recognition is based on two main sources of information: three-dimensional (3-D) shape and two-dimensional surface reflectance (colour and texture). The respective contributions of these two sources of information in face identity matching tasks are usually equal, suggesting that there is no functional dissociation. However, there is recent evidence from electrophysiology and neuroimaging that the contributions of shape and surface reflectance can be dissociated in time and in neural localization. To understand the nature of a potential functional dissociation between shape and surface information during face individualization, we used a 3-D morphable model (Blanz & Vetter 1999) to generate pairs of face stimuli that differed selectively in shape, reflectance, or both. In three experiments, we provide evidence that the processing of shape and surface reflectance can be functionally dissociated. First, participants performed a delayed face matching task, in which discrimination between the sample and distractor faces with the same orientation (either upright or inverted) was possible based on shape information alone, reflectance information alone, or both. Inversion decreased performance in all conditions, but the effect was significantly larger when discrimination was based on shape information alone. Second, we found that the composite face effect, a marker of holistic processing, was caused primarily by the presence of interfering shape cues, with little interference from surface reflectance cues. Finally, contrary to normal observers, a well-known patient with acquired prosopagnosia suffering from impaired holistic face perception performed significantly better when discriminating faces based on reflectance than on shape cues. Altogether, these observations support the view that the diagnosticity of shape information for individualizing faces depends relatively more on holistic face processing than that of surface reflectance cues.