8 research outputs found

    3D Morphable Face Models -- Past, Present and Future

    In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation, and image analysis, are still active research topics, and we review the state of the art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research and highlighting the broad range of current and future applications.
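The core idea behind a morphable face model is a face shape expressed as a mean shape plus a linear combination of principal components learned from example scans. A minimal sketch of that generative step, using invented toy sizes and random stand-ins for a real learned basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_components = 100, 5  # toy sizes; real models use thousands of vertices

# Hypothetical model data: a mean shape, an orthonormal PCA basis (columns),
# and per-component standard deviations, all stand-ins for a learned model.
mean_shape = rng.normal(size=3 * n_vertices)
basis = np.linalg.qr(rng.normal(size=(3 * n_vertices, n_components)))[0]
stddevs = np.array([5.0, 3.0, 2.0, 1.0, 0.5])

def generate_face(coefficients):
    """Shape = mean + sum_i c_i * sigma_i * b_i, reshaped to (vertices, 3)."""
    offsets = basis @ (coefficients * stddevs)
    return (mean_shape + offsets).reshape(n_vertices, 3)

face = generate_face(np.zeros(n_components))          # the mean face
novel = generate_face(rng.normal(size=n_components))  # a random plausible face
```

Fitting such a model to an image then amounts to searching for the coefficients (plus pose and illumination parameters) that best explain the observation.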

    3D statistical shape analysis of the face in Apert syndrome

    Timely diagnosis of craniofacial syndromes, as well as adequate timing and choice of surgical technique, is essential for proper care management. Statistical shape models and machine learning approaches play an increasing role in medicine and have proven their usefulness, and frameworks that automate these processes have become more popular. The use of 2D photographs for automated syndrome identification has shown its potential with the Face2Gene application, yet using 3D shape information without texture has not been studied in such depth. Moreover, the use of these models to understand shape change during growth, and their applicability to surgical outcome measurement, have not been analysed at length. This thesis presents a framework using state-of-the-art machine learning and computer vision algorithms to explore possibilities for automated syndrome identification based on shape information only. The purpose was to enhance understanding of the natural development of the Apert syndromic face and its abnormality compared to a normative group. An additional method was used to objectify changes resulting from facial bipartition distraction, a common surgical correction technique, providing information on its success and its shortcomings in terms of facial normalisation. Growth curves were constructed to further quantify facial abnormalities in Apert syndrome over time, along with 3D shape models for intuitive visualisation of the shape variations. Post-operative models were built and compared with age-matched normative data to understand where normalisation falls short. The findings in this thesis provide markers for future translational research and may accelerate the adoption of next-generation diagnostics and surgical planning tools, further supplementing the clinical decision-making process and ultimately improving patients' quality of life.
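One way to see how "syndrome identification based on shape information only" can work is to project aligned landmark sets onto a few principal shape modes and classify in that low-dimensional coefficient space. This toy sketch is not the thesis's actual pipeline; the alignment is a crude stand-in for Procrustes registration and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def align(shapes):
    """Crude Procrustes-style alignment: centre and scale each landmark set."""
    centred = shapes - shapes.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(centred, axis=(1, 2), keepdims=True)
    return centred / norms

# Synthetic data: 40 "normative" and 40 "syndromic" faces of 10 3D landmarks,
# the latter displaced along a fixed direction (a stand-in for real scans).
n, k = 40, 10
offset = rng.normal(size=(1, k, 3))
normative = rng.normal(scale=0.1, size=(n, k, 3))
syndromic = rng.normal(scale=0.1, size=(n, k, 3)) + 0.8 * offset

X = align(np.concatenate([normative, syndromic])).reshape(2 * n, -1)
y = np.array([0] * n + [1] * n)

# PCA via SVD on the centred data; keep a few modes as the shape descriptor.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
coeffs = Xc @ vt[:5].T

# Nearest-centroid classification in shape-coefficient space.
centroids = np.stack([coeffs[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(coeffs[:, None] - centroids, axis=2), axis=1)
accuracy = (pred == y).mean()
```

The same coefficient space also supports the other uses the abstract mentions: growth curves (coefficients regressed against age) and surgical outcome measures (post-operative coefficients compared with age-matched normative ones).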

    A machine learning approach to statistical shape models with applications to medical image analysis

    Statistical shape models have become an indispensable tool for image analysis. The use of shape models is especially popular in computer vision and medical image analysis, where they have been incorporated as a prior into a wide range of different algorithms. Despite their great success, the study of statistical shape models has not received much attention in recent years. Shape models are often seen as an isolated technique which merely consists of applying Principal Component Analysis to a set of example data sets. In this thesis we revisit statistical shape models and discuss their construction and applications from the perspective of machine learning and kernel methods. The shapes that belong to an object class are modeled as a Gaussian process whose parameters are estimated from example data. This formulation puts statistical shape models in a much wider context and makes the powerful inference tools from learning theory applicable to shape modeling. Furthermore, the formulation is continuous and thus helps to avoid the discretization issues that often arise with discrete models. An important step in building statistical shape models is to establish surface correspondence. We discuss an approach based on kernel methods, which allows us to integrate the statistical shape model as an additional prior and thus unifies the methods of registration and shape-model fitting. Using Gaussian process regression we can integrate shape constraints into our model; these constraints can be used to enforce landmark matching in the fitting or correspondence problem. The same technique also leads directly to a new solution for shape reconstruction from partial data. In addition to experiments on synthetic 2D data sets, we show the applicability of our methods on real 3D medical data of the human head. In particular, we build a 3D model of the human skull and present its applications for the planning of cranio-facial surgeries.
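The "shape reconstruction from partial data" mentioned above follows directly from standard Gaussian process regression: condition the GP on the observed landmark displacements and read off the posterior mean everywhere else. A minimal one-dimensional sketch with an invented contour parameterisation and invented observations:

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.3):
    """Squared-exponential covariance between 1D surface parameters."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# A 1D "contour" deformation modeled as a zero-mean GP over parameter t.
t_all = np.linspace(0.0, 1.0, 50)
t_obs = np.array([0.1, 0.4, 0.9])   # parameters where offsets are observed
y_obs = np.array([0.5, -0.2, 0.3])  # e.g. known landmark displacements

# Standard GP regression posterior mean: K(*,X) (K(X,X) + s^2 I)^-1 y.
noise = 1e-4
K = rbf_kernel(t_obs, t_obs) + noise * np.eye(len(t_obs))
K_star = rbf_kernel(t_all, t_obs)
posterior_mean = K_star @ np.linalg.solve(K, y_obs)
```

In the thesis's setting the GP is over surfaces rather than a 1D parameter and its covariance is learned from example shapes, but the conditioning step is the same; the posterior mean completes the missing part of the shape consistently with the observed landmarks.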

    Generative shape and image analysis by combining Gaussian processes and MCMC sampling

    Fully automatic analysis of faces is important for automatic access control, human-computer interaction, or the automatic evaluation of surveillance videos. For humans it is easy to look at and interpret faces; assigning attributes, moods or even intentions to the depicted person seems to happen without any difficulty. In contrast, computers struggle even with simple questions and still fail to answer more demanding ones like: "Are these two persons looking at each other?" The interpretation of an image depicting a face is facilitated by a generative model for faces. Modeling the variability between persons, illumination, view angle or occlusions leads to a rich abstract representation. The model state encodes comprehensive information, reducing the effort needed to solve a wide variety of tasks. However, to use a generative model, first the model needs to be built and second the model has to be adapted to a particular image. There exist many highly tuned algorithms for either of these steps. Most algorithms require more or less user input, and they often lack robustness, full automation or wide applicability to different objects or data modalities. Our main contribution in this PhD thesis is the presentation of a general, probabilistic framework to build and adapt generative models. Using the framework, we exploit information probabilistically in the domain it originates from, independent of the problem domain. The framework combines Gaussian processes and data-driven MCMC sampling. The generative models are built using the Gaussian process formulation. To adapt a model we use the Metropolis-Hastings algorithm based on a propose-and-verify strategy. The framework consists of well-separated parts: model building is separated from adaptation, and adaptation is further separated into update proposals and a verification layer. This allows individual parts to be adapted, exchanged, removed or integrated without changes to the others. The framework is presented in the context of facial data analysis. We introduce a new kernel exploiting the symmetry of faces and augment a learned generative model with additional flexibility. We show how a generative model is rigidly aligned, non-rigidly registered or adapted to 2D images with the same basic algorithm. We exploit information from 2D images to constrain 3D registration. We integrate directed proposals into sampling, shifting the algorithm towards stochastic optimization. We show how to handle missing data by adapting the likelihood model, and we integrate a discriminative appearance model into the image likelihood model to handle occlusions. We demonstrate the wide applicability of our framework by also solving medical image analysis problems, reusing the parts introduced for faces.
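The propose-and-verify strategy is the Metropolis-Hastings loop itself: draw a perturbation of the current model state, then accept or reject it by comparing likelihoods. A minimal sketch with a one-parameter toy likelihood standing in for the image likelihood (all names and values invented):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_likelihood(theta, data):
    """Toy image-likelihood stand-in: Gaussian fit of a single parameter."""
    return -0.5 * np.sum((data - theta) ** 2)

def metropolis_hastings(data, n_steps=5000, step=0.5):
    """Propose-and-verify: perturb the state, accept with MH probability."""
    theta = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = theta + rng.normal(scale=step)   # symmetric random proposal
        log_alpha = log_likelihood(proposal, data) - log_likelihood(theta, data)
        if np.log(rng.uniform()) < log_alpha:       # verify: accept or reject
            theta = proposal
        samples.append(theta)
    return np.array(samples)

data = rng.normal(loc=1.5, scale=1.0, size=100)
samples = metropolis_hastings(data)
estimate = samples[1000:].mean()   # posterior mean after discarding burn-in
```

The separation the abstract describes lives in this loop: the proposal line can be swapped for a directed, data-driven proposal without touching the verification step, and the likelihood can be exchanged (e.g. for an occlusion-aware one) without touching the proposals.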

    Self-Supervised Shape and Appearance Modeling via Neural Differentiable Graphics

    Inferring 3D shape and appearance from natural images is a fundamental challenge in computer vision. Despite recent progress using deep learning methods, a key limitation is the availability of annotated training data, as acquisition is often very challenging and expensive, especially at a large scale. This thesis proposes to incorporate physical priors into neural networks that allow for self-supervised learning. As a result, easy-to-access unlabeled data can be used for model training. In particular, novel algorithms in the context of 3D reconstruction and texture/material synthesis are introduced, where only image data is available as a supervisory signal. First, a method is proposed that learns to reason about 3D shape and appearance solely from unstructured 2D images, achieved via differentiable rendering in an adversarial fashion. As shown next, learning from videos significantly improves 3D reconstruction quality. To this end, a novel ray-conditioned warp embedding is proposed that aggregates pixel-wise features from multiple source images. Addressing the challenging task of disentangling shape and appearance, a method is first presented that enables 3D texture synthesis independent of shape or resolution. For this purpose, 3D noise fields of different scales are transformed into stationary textures. The method is able to produce 3D textures despite requiring only 2D textures for training. Lastly, the surface characteristics of textures under different illumination conditions are modeled in the form of material parameters. To this end, a self-supervised approach is proposed that has access only to flash images, not to material parameters. Similar to the previous method, random noise fields are reshaped into material parameters, which are conditioned to replicate the visual appearance of the input under matching light.
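The idea of transforming multi-scale 3D noise into a stationary, resolution-independent texture can be illustrated loosely: because the texture is a function of 3D position rather than a stored image, it can be sampled at any point and any resolution. This sketch is not the thesis's method; it uses random sinusoidal features as a crude stand-in for learned noise transformations, with all weights invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random sinusoidal features at several scales stand in for the multi-scale
# 3D noise fields; a tiny fixed random MLP maps them to an RGB value.
freqs = np.concatenate([rng.normal(scale=s, size=(8, 3)) for s in (1, 4, 16)])
phases = rng.uniform(0, 2 * np.pi, size=len(freqs))
W1 = rng.normal(scale=0.5, size=(len(freqs), 16))
W2 = rng.normal(scale=0.5, size=(16, 3))

def texture(points):
    """Evaluate the procedural texture at arbitrary 3D points."""
    noise = np.sin(points @ freqs.T + phases)   # stationary in space
    hidden = np.tanh(noise @ W1)
    return 0.5 * (np.tanh(hidden @ W2) + 1.0)   # RGB values in [0, 1]

# Resolution independence: sample any set of points, coarse or fine.
grid = rng.uniform(size=(64, 3))
colors = texture(grid)
```

In the thesis the mapping from noise to texture (or to material parameters) is learned so that rendered results match real 2D textures or flash images, rather than being fixed at random as here.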

    Automatic Landmarking for Non-cooperative 3D Face Recognition

    This thesis describes a new framework for 3D surface landmarking and evaluates its performance for feature localisation on human faces. This framework has two main parts that can be designed and optimised independently. The first is a keypoint detection system that returns positions of interest for a given mesh surface by using a learnt dictionary of local shapes. The second is a labelling system, using model fitting approaches that establish a one-to-one correspondence between the set of unlabelled input points and a learnt representation of the class of object to detect. Our keypoint detection system returns local maxima over score maps that are generated from an arbitrarily large set of local shape descriptors. The distributions of these descriptors (scalars or histograms) are learnt for known landmark positions on a training dataset in order to generate a model. The similarity between the input descriptor value for a given vertex and a model shape is used as a descriptor-related score. Our labelling system can make use of both hypergraph matching techniques and rigid registration techniques to reduce the ambiguity attached to unlabelled input keypoints for which a list of model landmark candidates has been seeded. The soft matching techniques use multi-attributed hyperedges to reduce ambiguity, while the registration techniques use scale-adapted rigid transformations computed from three or more points in order to obtain one-to-one correspondences. Our final system achieves results that are better than or comparable to the state-of-the-art (depending on the metric) while being more generic. It does not require pre-processing such as cropping, spike removal and hole filling, and is more robust to occlusion of salient local regions, such as those near the nose tip and inner eye corners. It is also fully pose invariant and can be used with objects other than faces, provided that labelled training data is available.
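The score-map stage described above can be sketched as: score every vertex by how well its local descriptor matches the model's learnt descriptor, then keep vertices that beat all their mesh neighbours. This toy version uses random descriptors and a ring adjacency in place of a real mesh (all data invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical per-vertex descriptors (e.g. curvature histograms) and a
# model descriptor learnt at a known landmark position on training data.
n_vertices, d = 200, 8
descriptors = rng.uniform(size=(n_vertices, d))
model_descriptor = rng.uniform(size=d)

# Score map: similarity between each vertex descriptor and the model shape
# (here, negative Euclidean distance; higher score means a better match).
scores = -np.linalg.norm(descriptors - model_descriptor, axis=1)

def local_maxima(scores, neighbors):
    """Keep vertices whose score exceeds that of all their mesh neighbours."""
    return [v for v in range(len(scores))
            if all(scores[v] > scores[u] for u in neighbors[v])]

# Toy 1D ring adjacency as a stand-in for real mesh connectivity.
neighbors = {v: [(v - 1) % n_vertices, (v + 1) % n_vertices]
             for v in range(n_vertices)}
keypoints = local_maxima(scores, neighbors)
```

The resulting unlabelled keypoints are then handed to the second stage, where hypergraph matching or rigid registration resolves which keypoint corresponds to which model landmark.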

    The role of facial cues to body size on attractiveness and perceived leadership ability

    Facial appearance has a strong effect on leadership selection. Ratings of perceived leadership ability from facial images have a pronounced influence on leadership selection in politics, from low-level municipal elections to the federal elections of the most powerful countries in the world. Furthermore, ratings of leadership ability from facial images of business leaders correlate with leadership performance as measured by profits earned. Two elements of facial appearance that have reliable effects on perceived leadership ability are perceived dominance and attractiveness. These cues have been predictive of leadership choices, both experimentally and in the real world. Chapters 1 and 2 review research on face components that affect perceived dominance and attractiveness. Chapter 3 discusses how perceived dominance and attractiveness influence the perception of leadership ability. Two characteristics that affect both perceived dominance and attractiveness are height and weight. Chapters 4-9 present empirical studies on two recently discovered facial parameters: perceived height (how tall someone appears from their face) and facial adiposity (a reliable proxy of body mass index that influences perceived weight). Chapters 4 and 5 demonstrate that these facial parameters alter facial attractiveness. Chapters 6, 7, and 8 examine how perceived height and facial adiposity influence perceived leadership ability. Chapter 9 examines how perceived height alters leadership perception in war and peace contexts. Chapter 10 summarises the empirical research reported in the thesis, draws conclusions from the findings, and lists proposals for future research that could further enhance our knowledge of how facial cues to perceived body size influence democratic leadership selection.