
    Face recognition under varying pose: The role of texture and shape

    Although remarkably robust, face recognition is not perfectly invariant to pose and viewpoint changes. It has long been known that both the profile and the full-face view yield worse recognition performance than a view from within that range. However, little data exists that investigates this phenomenon in detail. This work provides such data using a high angular resolution and a large range of poses. Since the literature is inconsistent on these issues, we emphasize the distinct roles of the learning view and the testing view in the recognition experiment, and the role of the information contained in the texture and in the shape of a face. Our stimuli were generated from laser-scanned head models and contained either the natural texture or only Lambertian shading and no texture. The results of our same/different face recognition experiments are: 1. Only the learning view, not the testing view, affects recognition performance. 2. For textured faces, the optimal learning view is closer to the full-face view than for shaded faces. 3. For shaded faces, we find significantly better recognition performance for the symmetric view. The results can be interpreted in terms of different strategies for recovering invariants from texture and from shading.

    Video-based online face recognition using identity surfaces

    Recognising faces across multiple views is more challenging than recognition from a fixed view because of the severe non-linearity caused by rotation in depth, self-occlusion, self-shading, and changes of illumination. The problem is related to that of modelling the spatiotemporal dynamics of moving faces from video input for unconstrained live face recognition. Both problems remain largely under-developed. To address them, a novel approach is presented in this paper. A multi-view dynamic face model is designed to extract the shape-and-pose-free texture patterns of faces. The model provides a precise correspondence for the recognition task, since the 3D shape information is used to warp the multi-view faces onto the model's mean shape in the frontal view. The identity surface of each subject is constructed in a discriminant feature space from a sparse set of face texture patterns or, more practically, from one or more learning sequences containing the face of the subject. Instead of matching templates or estimating multi-modal density functions, face recognition can be performed by computing the pattern distances to the identity surfaces or the trajectory distances between the object and model trajectories. Experimental results demonstrate that this approach yields an accurate recognition rate, while using trajectory distances achieves more robust performance, since the trajectories encode spatio-temporal information and contain accumulated evidence about the moving faces in a video input.
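    The matching step can be made concrete with a short sketch. The nearest-sampled-point approximation of an identity surface and all names below are illustrative assumptions rather than the paper's implementation, which constructs continuous identity surfaces and matches model trajectories against the object trajectory:

        import numpy as np

        def pattern_distance(x, surface_points):
            # Distance from one shape-and-pose-free texture pattern x (already
            # projected into the discriminant feature space) to an identity
            # surface, approximated here by its nearest sampled point.
            return np.linalg.norm(surface_points - x, axis=1).min()

        def trajectory_distance(object_trajectory, surface_points):
            # Accumulate per-frame pattern distances along a video trajectory,
            # so evidence about the moving face builds up over time.
            return sum(pattern_distance(x, surface_points) for x in object_trajectory)

        def recognise(object_trajectory, gallery):
            # Pick the subject whose identity surface the trajectory stays
            # closest to; gallery maps subject id -> sampled surface points.
            return min(gallery, key=lambda s: trajectory_distance(object_trajectory, gallery[s]))

    Accumulating distances over frames is what makes the trajectory variant more robust than matching any single pattern in isolation.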

    Effects of lighting on the perception of facial surfaces

    The problem of variable illumination for object constancy has been largely neglected by "edge-based" theories of object recognition. However, there is evidence that edge-based schemes may not be sufficient for face processing and that shading information may be necessary (Bruce, 1988). Changes in lighting affect the pattern of shading on any three-dimensional object, and the aim of this thesis was to investigate the effects of lighting on tasks involving face perception.

    Effects of lighting are first reported on the perception of the hollow face illusion (Gregory, 1973). The impression of a convex face was found to be stronger when light appeared to come from above, consistent with the importance of shape-from-shading, which is thought to incorporate a light-from-above assumption. There was an independent main effect of orientation, with the illusion stronger when the face was upright. This confirmed that object knowledge was important in generating the illusion, a conclusion supported by comparison with a "hollow potato" illusion. There was an effect of light on the inverted face, suggesting that the direction of light may generally affect the interpretation of surfaces as convex or concave. It was also argued that there appears to be a general preference for convex interpretations of patterns of shading. The illusion was also found to be stronger when viewed monocularly, and this effect was independent of orientation. This was consistent with the processing of shape information by independent modules, with object knowledge acting as a further constraint on the final interpretation.

    Effects of lighting were next reported on the recognition of shaded representations of facial surfaces, with top lighting facilitating processing. The adverse effects of bottom lighting on the interpretation of facial shape appear to affect within-category as well as between-category discriminations. Photographic negation was also found to affect recognition performance, and it was suggested that its effects may be complementary to those of bottom lighting in some respects. These effects were reported to be dependent on view.

    The last set of experiments investigated the effects of lighting and view on a simultaneous face matching task using the same surface representations, which required subjects to decide whether two images were of the same or different people. Subjects were found to be as much affected by a change in lighting as by a change in view, which seems inconsistent with edge-based accounts. Top lighting was also found to facilitate matches across changes in view. When the stimuli were inverted, matches across changes in both view and light were poorer, although the image differences were the same. In other experiments subjects were found to match better across changes between two directions of top lighting than between directions of bottom lighting, although the extent of the changes was the same, suggesting the importance of top lighting for lighting as well as view invariance. Inverting the stimuli, which also inverts the lighting relative to the observer, disrupted matching across directions of top lighting but facilitated matching between levels of bottom lighting, consistent with the use of shading information. Changes in size were not found to affect matching, showing that the effect of lighting was not simply due to changes in image properties. The effect of lighting was also found to transfer to digitised photographs, showing that it was not an artifact of the materials. Lastly, effects of lighting were reported when images were presented sequentially, showing that the effect was not an artifact of simultaneous presentation.

    In the final section the effects reported were considered within the framework of theories of object recognition and argued to be inconsistent with invariant-feature, edge-based, or alignment approaches. An alternative scheme employing surface-based primitives derived from shape-from-shading was developed to account for the pattern of effects and contrasted with an image-based account.

    Recovering facial shape using a statistical model of surface normal direction

    In this paper, we show how a statistical model of facial shape can be embedded within a shape-from-shading algorithm. We describe how facial shape can be captured using a statistical model of variations in surface normal direction. To construct this model, we make use of the azimuthal equidistant projection to map the distribution of surface normals from the polar representation on a unit sphere to Cartesian points on a local tangent plane. The distribution of surface normal directions is captured using the covariance matrix for the projected point positions. The eigenvectors of the covariance matrix define the modes of shape variation in the fields of transformed surface normals. We show how this model can be trained using surface normal data acquired from range images, and how to fit the model to intensity images of faces using constraints on the surface normal direction provided by Lambert's law. We demonstrate that the combination of a global statistical constraint and a local irradiance constraint yields an efficient and accurate approach to facial shape recovery that is capable of recovering fine local surface details. We assess the accuracy of the technique on a variety of images with ground truth, as well as on real-world images.
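    A minimal sketch of the model-building step, assuming for brevity that all normals are projected on the tangent plane at a single pole (the paper works on local tangent planes, and the subsequent fitting to intensity images under Lambertian constraints is omitted here):

        import numpy as np

        def azimuthal_equidistant(normals):
            # Map unit normals of shape (..., 3) to 2-D tangent-plane points;
            # the projection preserves angular distance from the pole as
            # radial distance, which is why it suits directional statistics.
            nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
            theta = np.arccos(np.clip(nz, -1.0, 1.0))  # angle from the pole
            phi = np.arctan2(ny, nx)                   # azimuth on the sphere
            return np.stack([theta * np.cos(phi), theta * np.sin(phi)], axis=-1)

        def shape_modes(normal_fields, n_modes=10):
            # Project every training field of normals (e.g. from range images),
            # then take the leading eigenvectors of the covariance matrix of
            # the projected point positions as the modes of shape variation.
            X = np.stack([azimuthal_equidistant(f).ravel() for f in normal_fields])
            mean = X.mean(axis=0)
            evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
            order = np.argsort(evals)[::-1][:n_modes]
            return mean, evecs[:, order]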

    Neural Face Editing with Intrinsic Image Disentangling

    Traditional face editing methods often require a number of sophisticated and task-specific algorithms to be applied one after the other, a process that is tedious, fragile, and computationally intensive. In this paper, we propose an end-to-end generative adversarial network that infers a face-specific disentangled representation of intrinsic face properties, including shape (i.e. normals), albedo, and lighting, and an alpha matte. We show that this network can be trained on "in-the-wild" images by incorporating an in-network physically-based image formation module and appropriate loss functions. Our disentangled latent representation allows for semantically relevant edits, where one aspect of facial appearance can be manipulated while keeping orthogonal properties fixed, and we demonstrate its use for a number of facial editing applications. Comment: CVPR 2017 oral.
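    The forward model at the heart of such an in-network formation module can be sketched in a few lines. The sketch below assumes Lambertian shading under order-1 spherical-harmonic lighting (four coefficients, normalization constants folded into the light vector); the paper's module and loss functions are more elaborate:

        import numpy as np

        def sh_shading(normals, light):
            # Lambertian shading from an order-1 spherical-harmonic lighting
            # vector; normals has shape (H, W, 3), light has shape (4,).
            nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
            basis = np.stack([np.ones_like(nx), nx, ny, nz], axis=-1)
            return basis @ light                          # (H, W)

        def form_image(albedo, normals, light, matte, background):
            # Shade the normals, modulate by the albedo, then composite the
            # face over the background using the alpha matte.
            shading = sh_shading(normals, light)[..., None]
            face = albedo * shading                       # (H, W, 3)
            alpha = matte[..., None]
            return alpha * face + (1.0 - alpha) * background

    Because every step is differentiable, reconstruction losses on the output image can push gradients back into the disentangled shape, albedo, lighting, and matte estimates.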

    3D Face Reconstruction by Learning from Synthetic Data

    Fast and robust three-dimensional reconstruction of facial geometric structure from a single image is a challenging task with numerous applications. Here, we introduce a learning-based approach for reconstructing a three-dimensional face from a single image. Recent face recovery methods rely on accurate localization of key characteristic points. In contrast, the proposed approach is based on a Convolutional Neural Network (CNN) which extracts the face geometry directly from its image. Although such deep architectures outperform other models in complex computer vision problems, training them properly requires a large dataset of annotated examples. In the case of three-dimensional faces there are currently no large-scale datasets, and acquiring such big data is a tedious task. As an alternative, we propose to generate random, yet nearly photo-realistic, facial images for which the geometric form is known. The suggested model successfully recovers facial shapes from real images, even for faces with extreme expressions and under various lighting conditions. Comment: The first two authors contributed equally to this work.
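    The data-generation idea reduces to: sample random coefficients of a parametric face model, so the geometry is known exactly, then render an image from it. The toy stand-in below only illustrates that loop; a real pipeline would use a 3D morphable face model and a photo-realistic renderer with random pose, lighting, and texture, none of which the abstract specifies:

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in linear shape model: mean geometry plus a few deformation
        # modes. All sizes and values here are placeholders.
        N_VERTS, N_MODES = 500, 10
        mean_shape = rng.normal(size=(N_VERTS, 3))
        modes = 0.1 * rng.normal(size=(N_MODES, N_VERTS, 3))

        def sample_training_pair():
            # Random coefficients -> exactly known geometry; its depth values
            # come for free as the regression target for the CNN.
            coeffs = rng.normal(size=N_MODES)
            shape = mean_shape + np.tensordot(coeffs, modes, axes=1)
            depth = shape[:, 2]                  # ground-truth geometry label
            image = depth + rng.normal(scale=0.01, size=depth.shape)  # crude "render"
            return image, depth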

    Shape-from-shading using the heat equation

    This paper offers two new directions to shape-from-shading, namely the use of the heat equation to smooth the field of surface normals and the recovery of surface height using a low-dimensional embedding. Turning to the first of these contributions, we pose the problem of surface normal recovery as that of solving the steady-state heat equation subject to the hard constraint that Lambert's law is satisfied. We perform our analysis on a plane perpendicular to the light source direction, where the z component of the surface normal is equal to the normalized image brightness. The x-y, or azimuthal, component of the surface normal is found by computing the gradient of a scalar field that evolves with time subject to the heat equation. We solve the heat equation for the scalar potential and, hence, recover the azimuthal component of the surface normal from the average image brightness, making use of a simple finite difference method. The second contribution is to pose the problem of recovering the surface height function as that of embedding the field of surface normals on a manifold so as to preserve the pattern of surface height differences and the lattice footprint of the surface normals. We experiment with the resulting method on a variety of real-world image data, where it produces qualitatively good reconstructed surfaces.
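    The normal-recovery step admits a compact sketch. The version below simplifies the paper's scheme: it runs plain explicit diffusion on a scalar potential initialized from the brightness, rather than enforcing Lambert's law as a hard constraint throughout the evolution, and it omits the height-from-embedding step entirely:

        import numpy as np

        def normals_from_heat_equation(I, n_iters=500, dt=0.2):
            # In a frame perpendicular to the light, Lambert's law fixes the
            # z component of the normal as the normalized image brightness.
            nz = np.clip(I / (I.max() + 1e-8), 0.0, 1.0)
            u = I.astype(float).copy()               # scalar potential
            for _ in range(n_iters):
                lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
                u += dt * lap                        # explicit diffusion step
            # The azimuthal (x, y) component follows the gradient of u, with
            # its magnitude pinned by nz so the normal stays unit length.
            gy, gx = np.gradient(u)
            mag = np.hypot(gx, gy) + 1e-8
            r = np.sqrt(np.clip(1.0 - nz ** 2, 0.0, 1.0))
            return np.stack([r * gx / mag, r * gy / mag, nz], axis=-1)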

    Photometric Depth Super-Resolution

    This study explores the use of photometric techniques (shape-from-shading and uncalibrated photometric stereo) for upsampling the low-resolution depth map from an RGB-D sensor to the higher resolution of the companion RGB image. A single-shot variational approach is first put forward, which is effective as long as the target's reflectance is piecewise-constant. It is then shown that this dependency upon a specific reflectance model can be relaxed by focusing on a specific class of objects (e.g., faces) and delegating reflectance estimation to a deep neural network. A multi-shot strategy based on randomly varying lighting conditions is eventually discussed. It requires no training or prior on the reflectance, yet this comes at the price of a dedicated acquisition setup. Both quantitative and qualitative evaluations illustrate the effectiveness of the proposed methods on synthetic and real-world scenarios. Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2019. The first three authors contributed equally.
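    The multi-shot strategy builds on photometric stereo. As a point of reference, the classical calibrated Lambertian version below recovers per-pixel normals and albedo by least squares from m images under known varying lighting; the paper's variant is uncalibrated, so it must estimate the lighting as well:

        import numpy as np

        def photometric_stereo(images, lights):
            # Solve I = L @ (rho * n) per pixel by least squares, then split
            # the albedo rho from the unit normal n. The high-resolution
            # normals can then guide upsampling of the coarse depth map.
            H, W = images[0].shape
            I = np.stack([im.ravel() for im in images])   # (m, H*W) intensities
            L = np.asarray(lights, dtype=float)           # (m, 3) light directions
            G, *_ = np.linalg.lstsq(L, I, rcond=None)     # G = rho * n, (3, H*W)
            rho = np.linalg.norm(G, axis=0) + 1e-8        # per-pixel albedo
            normals = (G / rho).T.reshape(H, W, 3)
            return normals, rho.reshape(H, W)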