
    Recovering facial shape using a statistical model of surface normal direction

    In this paper, we show how a statistical model of facial shape can be embedded within a shape-from-shading algorithm. We describe how facial shape can be captured using a statistical model of variations in surface normal direction. To construct this model, we use the azimuthal equidistant projection to map the distribution of surface normals from the polar representation on a unit sphere to Cartesian points on a local tangent plane. The distribution of surface normal directions is captured using the covariance matrix of the projected point positions. The eigenvectors of the covariance matrix define the modes of shape variation in the fields of transformed surface normals. We show how this model can be trained using surface normal data acquired from range images, and how to fit the model to intensity images of faces using constraints on surface normal direction provided by Lambert's law. We demonstrate that the combination of a global statistical constraint and a local irradiance constraint yields an efficient and accurate approach to facial shape recovery that is capable of recovering fine local surface detail. We assess the accuracy of the technique on a variety of images with ground truth, as well as on real-world images.
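
    The core of the statistical model described above can be sketched in a few lines: project unit normals onto the tangent plane at a reference direction with the azimuthal equidistant projection, then take the eigenvectors of the covariance of the projected points as the modes of variation. This is a minimal sketch, not the paper's implementation; the function and variable names (`aep_project`, `n0`) are mine, and synthetic normals stand in for range-image data.

```python
import numpy as np

def aep_project(normals, n0):
    """Azimuthal equidistant projection of unit normals onto the tangent
    plane at the reference direction n0 (names are mine, not the paper's)."""
    n0 = n0 / np.linalg.norm(n0)
    # Orthonormal basis (e1, e2) spanning the tangent plane at n0.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(n0 @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(n0, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n0, e1)
    pts = []
    for n in normals:
        n = n / np.linalg.norm(n)
        cos_a = np.clip(n @ n0, -1.0, 1.0)
        alpha = np.arccos(cos_a)      # great-circle distance (preserved by AEP)
        t = n - cos_a * n0            # tangential component gives the direction
        t_norm = np.linalg.norm(t)
        if t_norm < 1e-12:            # n coincides with n0: maps to the origin
            pts.append((0.0, 0.0))
            continue
        u = t / t_norm
        pts.append((alpha * (u @ e1), alpha * (u @ e2)))
    return np.array(pts)

# Synthetic normals clustered around +z stand in for range-image data.
rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 3)) + np.array([0.0, 0.0, 5.0])
normals = raw / np.linalg.norm(raw, axis=1, keepdims=True)
mean_dir = normals.mean(axis=0)

# Covariance of the projected points; its eigenvectors are the modes of
# variation in the transformed surface-normal field.
X = aep_project(normals, mean_dir)
cov = np.cov(X.T)
eigvals, eigvecs = np.linalg.eigh(cov)
```

    Fitting the model to an intensity image would then constrain these modes with Lambert's law, which the sketch does not attempt.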

    Deep Directional Statistics: Pose Estimation with Uncertainty Quantification

    Modern deep learning systems successfully solve many perception tasks, such as object pose estimation, when the input image is of high quality. However, in challenging imaging conditions, such as low-resolution images or images corrupted by imaging artifacts, current systems degrade considerably in accuracy. While a loss in performance is unavoidable, we would like our models to quantify their uncertainty in order to achieve robustness against images of varying quality. Probabilistic deep learning models combine the expressive power of deep learning with uncertainty quantification. In this paper, we propose a novel probabilistic deep learning model for the task of angular regression. Our model uses von Mises distributions to predict a distribution over the object pose angle. Whereas a single von Mises distribution makes strong assumptions about the shape of the distribution, we extend the basic model to predict a mixture of von Mises distributions. We show how to learn a mixture model with both a finite and an infinite number of mixture components. Our model allows for likelihood-based training and efficient inference at test time. We demonstrate on a number of challenging pose estimation datasets that our model produces calibrated probability predictions and competitive or superior point estimates compared to the current state of the art.
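
    As a concrete illustration of the model family, a mixture of von Mises densities and its negative log-likelihood (the objective for likelihood-based training) can be written as follows. This is a sketch under my own parameterisation, not the paper's network; `np.i0` is NumPy's order-zero modified Bessel function of the first kind, which appears in the von Mises normalising constant.

```python
import numpy as np

def von_mises_mixture_pdf(theta, weights, mus, kappas):
    """Density of a mixture of von Mises distributions at angle theta."""
    weights, mus, kappas = map(np.asarray, (weights, mus, kappas))
    comps = np.exp(kappas * np.cos(theta - mus)) / (2.0 * np.pi * np.i0(kappas))
    return float(weights @ comps)

def negative_log_likelihood(thetas, weights, mus, kappas):
    """Training objective for likelihood-based learning of the mixture."""
    return -sum(np.log(von_mises_mixture_pdf(t, weights, mus, kappas))
                for t in thetas)

# Two-component mixture with made-up parameters.
w, mu, kp = [0.7, 0.3], [0.0, 2.0], [4.0, 1.0]

# Sanity check: the density should integrate to ~1 over the circle.
grid = np.linspace(-np.pi, np.pi, 10000)
mass = sum(von_mises_mixture_pdf(t, w, mu, kp) for t in grid) * (grid[1] - grid[0])
```

    In the paper a network predicts the mixture parameters per image; here they are fixed constants for illustration.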

    3D Face Reconstruction from Light Field Images: A Model-free Approach

    Reconstructing 3D facial geometry from a single RGB image has recently attracted wide research interest. However, it is still an ill-posed problem, and most methods rely on prior models, which undermines the accuracy of the recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPI) obtained from light field cameras and learn CNN models that recover horizontal and vertical 3D facial curves from the respective horizontal and vertical EPIs. Our 3D face reconstruction network (FaceLFnet) comprises a densely connected architecture to learn accurate 3D facial curves from low-resolution EPIs. To train the proposed FaceLFnets from scratch, we synthesize photo-realistic light field images from 3D facial scans. The curve-by-curve 3D face estimation approach allows the networks to learn from only 14K images of 80 identities, which still comprise over 11 million EPIs/curves. The estimated facial curves are merged into a single point cloud to which a surface is fitted to obtain the final 3D face. Our method is model-free, requires only a few training samples to learn FaceLFnet, and can reconstruct 3D faces with high accuracy from single light field images under varying poses, expressions, and lighting conditions. Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces reconstruction errors by over 20% compared to the recent state of the art.

    Classification of Humans into Ayurvedic Prakruti Types using Computer Vision

    Ayurveda, a 5,000-year-old Indian medical science, holds that the universe, and hence humans, are made up of five elements: ether, fire, water, earth, and air. The three Doshas (Tridosha), Vata, Pitta, and Kapha, originated from combinations of these elements. Every person has a unique combination of Tridosha elements contributing to that person's 'Prakruti'. Prakruti governs the physiological and psychological tendencies of all living beings, as well as the way they interact with the environment. This balance influences physiological features such as the texture and colour of skin, hair and eyes, the length of the fingers, the shape of the palm, body frame, strength of digestion and many more, as well as psychological features such as a person's nature (introverted, extroverted, calm, excitable, intense, laidback) and their reaction to stress and diseases. All these features are coded in the constituents at the time of a person's creation and do not change throughout their lifetime. Ayurvedic doctors analyze the Prakruti of a person either by assessing the physical features manually and/or by examining the nature of their heartbeat (pulse). Based on this analysis, they diagnose, prevent and cure diseases in patients by prescribing precision medicine. This project focuses on identifying the Prakruti of a person by analysing facial features such as hair, eyes, nose, lips and skin colour using facial recognition techniques in computer vision. This is the first research of its kind in this problem area that attempts to bring image processing into the domain of Ayurveda.

    Cerebral differences in explicit and implicit emotional processing - An fMRI study

    The processing of emotional facial expressions is a major part of social communication and understanding. In addition to explicit processing, facial expressions are also processed rapidly and automatically in the absence of explicit awareness. We investigated 12 healthy subjects by presenting them with an implicit and an explicit emotional paradigm. The subjects reacted significantly faster in implicit than in explicit trials but did not differ in their error ratio. For the implicit condition, increased signals were observed in particular in the thalami, the hippocampi, the inferior frontal gyri and the right middle temporal region. The analysis of the explicit condition showed increased blood-oxygen-level-dependent signals especially in the caudate nucleus, the cingulum and the right prefrontal cortex. The direct comparison of these two processes revealed increased activity for explicit trials in the inferior, superior and middle frontal gyri, the middle cingulum and left parietal regions. Additional signal increases were detected in occipital regions, the cerebellum, and the right angular and lingual gyri. Our data partially confirm the hypothesis of different neural substrates for the processing of implicit and explicit emotional stimuli. Copyright (c) 2007 S. Karger AG, Basel

    Automatic Face Recognition System Based on Local Fourier-Bessel Features

    We present an automatic face verification system inspired by known properties of biological systems. In the proposed algorithm, the whole image is converted from the spatial to the polar frequency domain by a Fourier-Bessel Transform (FBT). Using the whole image is compared to the case where only local face regions (local analysis) are considered. The resulting representations are embedded in a dissimilarity space, where each image is represented by its distance to all the other images, and a Pseudo-Fisher discriminator is built. Verification tests on the FERET database showed that the local-based algorithm outperforms the global-FBT version. The local-FBT algorithm performed on par with state-of-the-art methods under different testing conditions, indicating that the proposed system is highly robust to expression, age, and illumination variations. We also evaluated the performance of the proposed system under strong occlusion and found that it remains highly robust for up to 50% face occlusion. Finally, we fully automated the verification system by implementing face and eye detection algorithms. Under this condition, the local approach was only slightly superior to the global approach.
    Comment: 2005, Brazilian Symposium on Computer Graphics and Image Processing, 18 (SIBGRAPI
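
    The dissimilarity-space embedding mentioned above is simple to sketch: each image becomes the vector of its distances to every other image in the set. Euclidean distance on toy 2-D features stands in here for whatever FBT-domain dissimilarity the system actually uses; all names and data below are mine.

```python
import numpy as np

def dissimilarity_embedding(features):
    """Embed each image as the vector of its distances to every image in
    the set; row i is image i's dissimilarity-space representation."""
    n = len(features)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.linalg.norm(features[i] - features[j])
    return D

# Toy 2-D "features"; a real system would use local FBT coefficients.
feats = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
D = dissimilarity_embedding(feats)
# A classifier such as a Pseudo-Fisher discriminant is then trained on rows of D.
```

    The appeal of the construction is that the classifier operates on relative distances rather than raw features, so any dissimilarity measure can be plugged in.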

    Car that Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models

    Advanced Driver Assistance Systems (ADAS) have made driving safer over the last decade. They prepare vehicles for unsafe road conditions and alert drivers if they perform a dangerous maneuver. However, many accidents are unavoidable because, by the time drivers are alerted, it is already too late. Anticipating maneuvers beforehand can alert drivers before they perform the maneuver and also give ADAS more time to avoid or prepare for the danger. In this work we anticipate driving maneuvers a few seconds before they occur. For this purpose we equip a car with cameras and a computing device to capture the driving context from both inside and outside the car. We propose an Autoregressive Input-Output HMM to model the contextual information along with the maneuvers. We evaluate our approach on a diverse data set with 1180 miles of natural freeway and city driving and show that we can anticipate maneuvers 3.5 seconds before they occur with over 80% F1-score in real time.
    Comment: ICCV 2015, http://brain4cars.co
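
    The anticipation idea can be illustrated with a plain discrete HMM forward filter: maintain a belief over latent maneuver states and update it every frame from the observed driving context, alerting once one maneuver's probability crosses a threshold. This is not the paper's model, which is a richer autoregressive input-output HMM; the states, observations, and probabilities below are entirely hypothetical.

```python
import numpy as np

# Plain discrete HMM forward filter as a stand-in for maneuver anticipation.
states = ["straight", "left_turn"]
A = np.array([[0.95, 0.05],      # state transition probabilities
              [0.10, 0.90]])
B = np.array([[0.8, 0.2],        # P(observation | state):
              [0.3, 0.7]])       # obs 0 = eyes ahead, obs 1 = glance left
belief = np.array([0.99, 0.01])  # start almost surely driving straight

observations = [0, 0, 1, 1, 1]   # the driver starts glancing left
for obs in observations:
    belief = B[:, obs] * (A.T @ belief)  # predict, then weight by evidence
    belief /= belief.sum()
# belief[1] is the anticipated probability of an upcoming left turn; an ADAS
# could alert once it crosses a threshold, before the maneuver happens.
```

    Repeated glances to the left shift the belief toward the left-turn state well before the turn itself, which is the essence of anticipation as filtering.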

    Spatially dense 3D facial heritability and modules of co-heritability in a father-offspring design

    Introduction: The human face is a complex trait displaying a strong genetic component, as illustrated by various studies on facial heritability. Most of these start from sparse descriptions of facial shape using a limited set of landmarks. Subsequently, facial features are preselected as univariate measurements or principal components and the heritability is estimated for each of these features separately. However, none of these studies investigated multivariate facial features, nor the co-heritability between different facial features. Here we report a spatially dense multivariate analysis of facial heritability and co-heritability starting from data from fathers and their children available within ALSPAC. Additionally, we provide an elaborate overview of related craniofacial heritability studies. Methods: In total, 3D facial images of 762 father-offspring pairs were retained after quality control. An anthropometric mask was applied to these images to establish spatially dense quasi-landmark configurations. Partial least squares regression was performed and the (co-)heritability for all quasi-landmarks (∼7160) was computed as twice the regression coefficient. Subsequently, these were used as input to a hierarchical facial segmentation, resulting in the definition of facial modules that are internally integrated through the biological mechanisms of inheritance. Finally, multivariate heritability estimates were obtained for each of the resulting modules. Results: Nearly all modular estimates reached statistical significance under 1,000,000 permutations and after multiple testing correction (p ≤ 1.3889 × 10⁻³), displaying low to high heritability scores. Particular facial areas showing the greatest heritability were similar for both sons and daughters. However, higher estimates were obtained in the former.
These areas included the global face, upper facial part (encompassing the nasion, zygomas and forehead) and nose, with values reaching 82% in boys and 72% in girls. The lower parts of the face only showed low to moderate levels of heritability. Conclusion: In this work, we refrain from reducing facial variation to a series of individual measurements and analyze the heritability and co-heritability from spatially dense landmark configurations at multiple levels of organization. Finally, a multivariate estimation of heritability for global-to-local facial segments is reported. Knowledge of the genetic determination of facial shape is useful in the identification of genetic variants that underlie normal-range facial variation.
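
    The "heritability as twice the regression coefficient" estimator can be sketched on a single synthetic trait: regress offspring values on father values and double the slope, since a father shares half his genes with each child. The real study applies partial least squares across ~7160 quasi-landmarks; everything below (trait, sample size, true h² of 0.6) is invented for illustration.

```python
import numpy as np

# Synthetic single-trait father-offspring illustration.
rng = np.random.default_rng(42)
n = 5000
h2_true = 0.6

father = rng.normal(size=n)
# A father transmits half his genes, so the expected offspring-on-father
# regression slope is h2 / 2 for a standardized trait.
offspring = 0.5 * h2_true * father + rng.normal(
    scale=np.sqrt(1.0 - (0.5 * h2_true) ** 2), size=n)

slope = np.cov(father, offspring)[0, 1] / np.var(father, ddof=1)
h2_hat = 2.0 * slope  # should recover roughly h2_true
```

    The multivariate version replaces this univariate slope with PLS regression coefficients computed jointly over all quasi-landmarks, which is what makes co-heritability between facial regions estimable.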