
    Cognitive representation of facial asymmetry

    The human face displays mild asymmetry, with measurements of facial structure differing from left to right of the meridian by an average of three percent. Presently this source of variation is of theoretical interest primarily to researchers studying the perception of beauty, and very little research has addressed how this variation contributes to the cognitive processes underlying face recognition. This is surprising given that measurement of facial asymmetry can reliably distinguish between even the most similar of faces. Furthermore, brain regions responsible for symmetry detection support face-processing regions, and detection of symmetry is superior in upright faces relative to inverted and contrast-reversed face stimuli. In addition, facial asymmetry provides a useful biometric for automatic face recognition systems, and understanding the contribution of facial asymmetry to human face recognition may therefore inform the development of these systems. This thesis quantifies the extent to which facial asymmetry is implicated in the process of recognition by human participants. By measuring the effect of left-right reversal on various tasks of face processing, the degree to which facial asymmetry is represented in memory is investigated. Marginal sensitivity to mirror reversal is demonstrated in a number of instances, and it is therefore concluded that cognitive representations of faces specify structural asymmetry. Reversal effects are typically slight, however, and on a number of occasions no reliable effect of this stimulus manipulation is detected. It is likely that a general tendency to treat mirror reversals as equivalent stimuli, together with an inability to recall the lateral orientation of objects from memory, somewhat obscures the effect of reversal. The findings are discussed in the context of existing literature examining the way in which faces are cognitively represented.
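    The asymmetry measurements and left-right reversal referred to above can be illustrated with a simple landmark-based calculation: reflecting a face's landmark configuration about the vertical midline yields the mirror-reversed stimulus, and comparing each landmark with the reflection of its bilateral partner gives a per-face asymmetry score. The following sketch is illustrative only; the landmark set, midline definition and normalisation are assumptions, not the procedure used in the thesis.

    import numpy as np

    def asymmetry_score(landmarks, pairs, midline_x=None):
        """Illustrative asymmetry measure for 2D facial landmarks.

        landmarks : (N, 2) array of (x, y) positions.
        pairs     : list of (left_idx, right_idx) bilateral landmark pairs.
        midline_x : x-coordinate of the facial midline; if None, the mean
                    x of all landmarks is used as a crude estimate.
        """
        pts = np.asarray(landmarks, dtype=float)
        if midline_x is None:
            midline_x = pts[:, 0].mean()

        # Mirror every landmark about the vertical midline (left-right reversal).
        mirrored = pts.copy()
        mirrored[:, 0] = 2.0 * midline_x - pts[:, 0]

        # Compare each landmark with the mirror image of its bilateral partner;
        # a perfectly symmetric face gives zero for every pair.
        diffs = [np.linalg.norm(pts[l] - mirrored[r]) for l, r in pairs]

        # Normalise by face width so the score is size-invariant.
        scale = pts[:, 0].max() - pts[:, 0].min()
        return float(np.mean(diffs) / scale)

    # Example: outer eye corners and mouth corners of a slightly asymmetric face.
    face = [(30.0, 40.0), (72.0, 41.0), (38.0, 80.0), (66.0, 79.0)]
    print(asymmetry_score(face, pairs=[(0, 1), (2, 3)]))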

    Statistical Modelling of Craniofacial Shape

    With prior knowledge and experience, people can easily observe rich shape and texture variation for a certain type of object, such as human faces, cats or chairs, in both 2D and 3D images. This ability helps us recognise the same person, distinguish different kinds of creatures and sketch unseen samples of the same object class. The process of capturing this prior knowledge is mathematically interpreted as statistical modelling. The outcome is a morphable model, a vector space representation of objects that captures the variation of shape and texture. This thesis presents research aimed at constructing 3D Morphable Models (3DMMs) of craniofacial shape and texture using new algorithms and processing pipelines that offer enhanced modelling abilities over existing techniques. In particular, we present several fully automatic modelling approaches and apply them to a large dataset of 3D images of the human head, the Headspace dataset, thus generating the first public shape-and-texture 3DMM of the full human head. We call this the Liverpool-York Head Model, reflecting the data collection and statistical modelling respectively. We also explore craniofacial symmetry and asymmetry in template morphing and statistical modelling. We propose a Symmetry-aware Coherent Point Drift (SA-CPD) algorithm, which mitigates the tangential sliding problem seen in competing morphing algorithms. Based on the symmetry-constrained correspondence output of SA-CPD, we present a symmetry-factored statistical modelling method for craniofacial shape. We also propose an iterative refinement process, employing data augmentation, for a 3DMM of the human ear, and then merge this ear model with the full head model. As craniofacial clinicians often examine head profiles, we propose a new pipeline to build a 2D morphable model of the craniofacial sagittal profile and augment it with profile models from frontal and top-down views. Our models and data are made publicly available online for research purposes.
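    The "vector space representation" underlying a morphable model is, in its simplest form, a principal component analysis of registered shapes: each mesh is flattened into a long vector, and new shapes are expressed as the mean plus a weighted sum of the principal modes of variation. The sketch below shows that basic idea with NumPy; it assumes the meshes are already in dense correspondence and is not the SA-CPD or Liverpool-York Head Model pipeline described above.

    import numpy as np

    def build_shape_model(meshes, n_modes=10):
        """Minimal PCA shape model from registered meshes.

        meshes : (M, V, 3) array of M meshes with V corresponding vertices each.
        Returns the mean shape, the principal modes and their standard deviations.
        """
        M, V, _ = meshes.shape
        X = meshes.reshape(M, V * 3)              # one row vector per mesh
        mean = X.mean(axis=0)
        # SVD of the centred data gives the principal modes of shape variation.
        U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        modes = Vt[:n_modes]                      # (n_modes, 3V)
        stdevs = S[:n_modes] / np.sqrt(max(M - 1, 1))
        return mean, modes, stdevs

    def synthesise(mean, modes, stdevs, coeffs):
        """Generate a new shape from model coefficients (in standard deviations)."""
        shape = mean + (np.asarray(coeffs) * stdevs) @ modes
        return shape.reshape(-1, 3)

    # Usage with random stand-in data: 50 "heads" of 1000 vertices each.
    rng = np.random.default_rng(0)
    meshes = rng.normal(size=(50, 1000, 3))
    mean, modes, stdevs = build_shape_model(meshes, n_modes=5)
    new_head = synthesise(mean, modes, stdevs, coeffs=[1.5, -0.5, 0.0, 0.0, 0.0])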

    Fine-Scaled 3D Geometry Recovery from Single RGB Images

    3D geometry recovery from single RGB images is a highly ill-posed and inherently ambiguous problem, which has been a challenging research topic in computer vision for several decades. When fine-scaled 3D geometry is required, the problem becomes even more difficult. 3D geometry recovery from single images has the objective of recovering geometric information from a single photograph of an object or a scene with multiple objects. The geometric information to be retrieved can take different representations, such as surface meshes, voxels, depth maps or 3D primitives. In this thesis, we investigate fine-scaled 3D geometry recovery from single RGB images for three categories: facial wrinkles, indoor scenes and man-made objects. Since each category has its own particular features, styles and variations in representation, we propose a different strategy for each 3D geometry estimate. We present a lightweight non-parametric method to generate wrinkles from monocular Kinect RGB images. The key lightweight feature of the method is that it can generate plausible wrinkles using exemplars from one high-quality 3D face model with textures. The local geometric patches from the source can be copied to synthesize different wrinkles on the blendshapes of specific users in an offline stage. During online tracking, facial animations with high-quality wrinkle details can be recovered in real time as a linear combination of these personalized wrinkled blendshapes. We propose a fast-to-train two-streamed CNN with multiple scales, which predicts both a dense depth map and depth gradients for single indoor scene images. The depth and depth gradients are then fused into a more accurate and detailed depth map. We introduce a novel set loss over multiple related images: by regularizing the estimation across a common set of images, the network is less prone to overfitting and achieves better accuracy than competing methods. A fine-scaled 3D point cloud can then be produced by re-projection to 3D using the known camera parameters. To handle highly structured man-made objects, we introduce a novel neural network architecture for 3D shape recovery from a single image. We develop a convolutional encoder to map a given image to a compact code, and an associated recursive decoder maps this code back to a full hierarchy, resulting in a set of bounding boxes that represent the estimated shape. Finally, we train a second network to predict the fine-scaled geometry in each bounding box at voxel level. The per-box volumes are then embedded into a global one, from which we reconstruct the final meshed model. Experiments on a variety of datasets show that our approaches can successfully estimate fine-scaled geometry from single RGB images for each category, and surpass state-of-the-art performance in recovering faithful 3D local details with high-resolution mesh surfaces or point clouds.
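    The re-projection step mentioned above, turning a predicted depth map into a 3D point cloud using known camera parameters, follows directly from the pinhole camera model: a pixel (u, v) with depth d back-projects to ((u - cx) d / fx, (v - cy) d / fy, d). A minimal sketch is given below, assuming standard pinhole intrinsics; the function name and the intrinsic values are illustrative placeholders, not details from the thesis.

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """Back-project a dense depth map to a 3D point cloud (pinhole model).

        depth : (H, W) array of metric depth values.
        fx, fy, cx, cy : camera intrinsics (focal lengths and principal point).
        Returns an (H*W, 3) array of points in the camera coordinate frame.
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx          # pinhole back-projection, per pixel
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Usage with a toy 4x4 depth map and made-up intrinsics.
    depth = np.full((4, 4), 2.0)
    cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
    print(cloud.shape)   # (16, 3)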

    Content based image pose manipulation

    This thesis proposes the application of space-frequency transformations to the domain of pose estimation in images. This idea is explored using the Wavelet Transform, with illustrative applications in pose estimation for face images and images of planar scenes. The approach is based on examining the spatial frequency components in an image to allow the inherent scene symmetry balance to be recovered. For face images with restricted pose variation (looking left or right), an algorithm is proposed to maximise this symmetry in order to transform the image into a fronto-parallel pose. This scheme is further employed to identify the optimal frontal facial pose from a video sequence in order to automate facial capture processes. These features are an important prerequisite in facial recognition and expression classification systems. The underlying principles of this spatial-frequency approach are examined with respect to images of planar scenes. Using the Continuous Wavelet Transform, full perspective planar transformations are estimated within a featureless framework. Restoring central symmetry to the wavelet-transformed images in an iterative optimisation scheme removes this perspective pose. This advances upon existing spatial approaches that require segmentation and feature matching, and upon frequency-only techniques that are limited to affine transformation recovery. To evaluate the proposed techniques, the pose of a database of subjects portraying varying yaw orientations is estimated and the accuracy is measured against the captured ground-truth information. Additionally, full perspective homographies for synthesised and imaged textured planes are estimated. Experimental results are presented for both situations that compare favourably with existing techniques in the literature.
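    The "symmetry balance" idea can be illustrated by comparing wavelet subband energy on either side of an image's vertical midline: a fronto-parallel view of a bilaterally symmetric scene has roughly balanced left/right energy, and maximising that balance over candidate poses drives the estimation. The sketch below uses a single-level 2D DWT from the PyWavelets package and a crude left/right energy ratio; it illustrates the principle only and is not the Continuous Wavelet Transform optimisation developed in the thesis.

    import numpy as np
    import pywt

    def symmetry_balance(image, wavelet="haar"):
        """Crude left/right symmetry balance from 2D DWT detail energies.

        Returns a value in [0, 1]; 1 means the detail energy is perfectly
        balanced about the vertical midline.
        """
        # Single-level 2D DWT: approximation plus horizontal/vertical/diagonal details.
        _, (cH, cV, cD) = pywt.dwt2(np.asarray(image, dtype=float), wavelet)
        detail = cH ** 2 + cV ** 2 + cD ** 2
        mid = detail.shape[1] // 2
        left = detail[:, :mid].sum()
        right = detail[:, mid:].sum()
        return float(min(left, right) / max(left, right, 1e-12))

    # Usage: a left-right symmetric ramp pattern scores higher than a one-sided one.
    x = np.linspace(-1.0, 1.0, 64)
    symmetric = np.abs(np.tile(x, (64, 1)))
    skewed = np.tile(np.linspace(0.0, 1.0, 64), (64, 1)) ** 3
    print(symmetry_balance(symmetric), symmetry_balance(skewed))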

    Automatic Landmarking for Non-cooperative 3D Face Recognition

    This thesis describes a new framework for 3D surface landmarking and evaluates its performance for feature localisation on human faces. The framework has two main parts that can be designed and optimised independently. The first is a keypoint detection system that returns positions of interest for a given mesh surface using a learnt dictionary of local shapes. The second is a labelling system, using model fitting approaches that establish a one-to-one correspondence between the set of unlabelled input points and a learnt representation of the class of object to detect. Our keypoint detection system returns local maxima over score maps that are generated from an arbitrarily large set of local shape descriptors. The distributions of these descriptors (scalars or histograms) are learnt for known landmark positions on a training dataset in order to generate a model. The similarity between the input descriptor value for a given vertex and a model shape is used as a descriptor-related score. Our labelling system can make use of both hypergraph matching techniques and rigid registration techniques to reduce the ambiguity attached to unlabelled input keypoints for which a list of model landmark candidates has been seeded. The soft matching techniques use multi-attributed hyperedges to reduce ambiguity, while the registration techniques use scale-adapted rigid transformations computed from three or more points in order to obtain one-to-one correspondences. Our final system achieves results that are better than or comparable to the state of the art, depending on the metric, while being more generic. It does not require pre-processing such as cropping, spike removal and hole filling, and is more robust to occlusion of salient local regions, such as those near the nose tip and inner eye corners. It is also fully pose invariant and can be used with object classes other than faces, provided that labelled training data is available.
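    The scale-adapted rigid transformation computed from three or more points is essentially a similarity alignment between two small point sets, which can be obtained in closed form with an SVD-based (Umeyama/Kabsch-style) fit. A minimal sketch follows; it covers only a generic transform estimation under that assumption, not the candidate seeding or hypergraph matching described above.

    import numpy as np

    def similarity_transform(src, dst):
        """Least-squares scale + rotation + translation mapping src onto dst.

        src, dst : (N, 3) arrays of corresponding points, N >= 3.
        Returns (s, R, t) such that dst_i is approximately s * R @ src_i + t.
        """
        src = np.asarray(src, float)
        dst = np.asarray(dst, float)
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        src_c, dst_c = src - mu_s, dst - mu_d

        # Cross-covariance and its SVD give the optimal rotation.
        H = src_c.T @ dst_c
        U, S, Vt = np.linalg.svd(H)
        D = np.eye(3)
        D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ D @ U.T

        # Scale from the ratio of matched variance to source variance.
        var_s = (src_c ** 2).sum()
        s = (S * np.diag(D)).sum() / var_s
        t = mu_d - s * R @ mu_s
        return s, R, t

    # Usage: recover a known similarity transform from 4 noiseless points.
    rng = np.random.default_rng(1)
    src = rng.normal(size=(4, 3))
    R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(R_true) < 0:
        R_true[:, 0] *= -1
    dst = 1.7 * src @ R_true.T + np.array([0.5, -2.0, 3.0])
    s, R, t = similarity_transform(src, dst)
    print(round(s, 3), np.allclose(R, R_true), np.allclose(s * src @ R.T + t, dst))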

    3D Face Recognition under Expressions, Occlusions, and Pose Variations
