18,367 research outputs found

    Tex2Shape: Detailed Full Human Body Geometry From a Single Image

    We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model can infer full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region, obtained from off-the-shelf methods. From this partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely with synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
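    As a concrete illustration of the displacement-map idea, the sketch below (our illustration, not the authors' code) applies a predicted vector displacement map to a smooth body model through its UV parameterization; the names `verts`, `uv`, and `disp_map` and all shapes are assumptions.

```python
# Hedged sketch: offset each body-model vertex by the displacement vector
# stored at its UV coordinate. A real pipeline would use the body-model
# topology and bilinear interpolation; this is a minimal stand-in.
import numpy as np

def apply_displacement(verts, uv, disp_map):
    """verts: (N, 3) smooth body vertices; uv: (N, 2) in [0, 1];
    disp_map: (H, W, 3) vector displacement map predicted by the network."""
    h, w, _ = disp_map.shape
    # Nearest-neighbour texture lookup (v axis flipped, as in image coords).
    px = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip(((1.0 - uv[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return verts + disp_map[py, px]

# Toy usage with random stand-in data.
verts = np.random.rand(100, 3)
uv = np.random.rand(100, 2)
disp_map = 0.01 * np.random.randn(64, 64, 3)
detailed_verts = apply_displacement(verts, uv, disp_map)
```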

    Morphable Face Models - An Open Framework

    In this paper, we present a novel open-source pipeline for face registration based on Gaussian processes, as well as an application to face image analysis. Non-rigid registration of faces is significant for many applications in computer vision, such as the construction of 3D Morphable face models (3DMMs). Gaussian Process Morphable Models (GPMMs) unify a variety of non-rigid deformation models, with B-splines and PCA models as examples. GPMMs separate problem-specific requirements from the registration algorithm by incorporating domain-specific adaptations as a prior model. The novelties of this paper are the following: (i) We present a strategy and modeling technique for face registration that considers symmetry, multi-scale and spatially-varying details; the registration is applied to neutral faces and facial expressions. (ii) We release an open-source software framework for registration and model building, demonstrated on the publicly available BU-3DFE database. The released pipeline also contains an implementation of Analysis-by-Synthesis model adaptation to 2D face images, tested on the Multi-PIE and LFW databases. This enables the community to reproduce, evaluate, and compare the individual steps, from registration and model building to 3D/2D model fitting. (iii) Along with the framework release, we publish a new version of the Basel Face Model (BFM-2017) with an improved age distribution and an additional facial expression model.
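    To make the prior-model idea concrete, here is a minimal sketch of a Gaussian-process deformation prior with a squared-exponential kernel (our illustration, not the released framework; point counts, length scales, and the number of retained modes are arbitrary assumptions).

```python
# Hedged sketch of a GP deformation prior in the spirit of GPMMs: smooth
# displacement fields over a template are sampled from a low-rank
# (Karhunen-Loeve) expansion of a squared-exponential kernel.
import numpy as np

def se_kernel(X, Y, scale=1.0, length=30.0):
    # Squared-exponential kernel on 3D template points.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return scale * np.exp(-d2 / (2.0 * length ** 2))

rng = np.random.default_rng(0)
template = rng.uniform(0, 100, size=(200, 3))   # hypothetical template points

K = se_kernel(template, template)
# Keep the leading eigenmodes of the kernel as a low-rank basis.
vals, vecs = np.linalg.eigh(K)
idx = np.argsort(vals)[::-1][:20]
basis = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Sample a smooth deformation field (independent x/y/z components)
# and apply it to the template, as a registration prior would.
alpha = rng.standard_normal((20, 3))
deformation = basis @ alpha                     # (200, 3) displacements
deformed = template + deformation
```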

    3D Face Recognition: Feature Extraction Based on Directional Signatures from Range Data and Disparity Maps

    In this paper, the author presents work on profiling (i) range data and (ii) disparity maps from a stereo-vision system, which are used as signatures for 3D face recognition. The signatures capture the intensity variation along a line through sample points on a face in a given direction. The directional signatures and some of their combinations are compared to study the variability in recognition performance. Two 3D face image datasets, namely a local student database captured with a stereo vision system and the FRGC v1 range dataset, are used for performance evaluation.
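    A minimal sketch of one such directional signature, assuming a range image and a chosen sample point (both stand-ins here), could look as follows.

```python
# Hedged sketch: sample depth values along a line through a facial sample
# point at angle `theta` on a range (or disparity) image, and use the
# variation along the line as a signature.
import numpy as np

def directional_signature(range_img, center, theta, half_len=20, step=1.0):
    """range_img: (H, W) range/disparity image; center: (row, col) sample
    point, e.g. the nose tip; theta: profile direction in radians."""
    h, w = range_img.shape
    offsets = np.arange(-half_len, half_len + 1) * step
    rows = np.clip((center[0] + offsets * np.sin(theta)).round().astype(int), 0, h - 1)
    cols = np.clip((center[1] + offsets * np.cos(theta)).round().astype(int), 0, w - 1)
    profile = range_img[rows, cols]
    # First differences capture the intensity variation along the line.
    return np.diff(profile)

# Toy usage: horizontal and vertical signatures around the image centre.
img = np.random.rand(128, 128)
sig_h = directional_signature(img, (64, 64), 0.0)
sig_v = directional_signature(img, (64, 64), np.pi / 2)
```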

    Robust face alignment based on the active appearance model

    In building a face recognition system for real-life scenarios, one usually faces the problem of selecting a feature space and preprocessing methods such as alignment under varying illumination conditions and poses. In this study, we developed a robust face alignment approach based on the Active Appearance Model (AAM) by inserting an illumination normalization module into the standard AAM search procedure and inserting different poses of the same identity into the training set. The modified AAM search can handle both illumination and pose variations in the same epoch, and hence provides better convergence in both the point-to-point and point-to-curve senses. We also investigate how face recognition performance is affected by the choice of feature space as well as by the proposed alignment method. The experimental results show that the combined pose alignment and illumination normalization methods increase the recognition rates considerably for all feature spaces.

    In this paper, we focus on the problems induced by varying illumination and pose. Our primary aim is to eliminate their negative effect on face recognition performance through illumination- and pose-invariant face alignment based on the Active Appearance Model. Pose normalization is required before recognition in order to reach acceptable recognition rates. We developed an AAM-based pose normalization method that uses only one AAM. There are two important contributions over previous studies: with the proposed method, one can synthetically generate appearances at different poses when only a frontal face image is available, and one can generate the frontal appearance of a face when only a non-frontal image is available.

    The same variation in pose imposes a similar effect on the face appearance of all individuals. Deformation mostly occurs in the shape, whereas the texture is almost constant. Since the number of landmarks in the AAM is constant, the wireframe triangles are translated or scaled as the pose changes; only the wireframe triangles undergo an affine transformation, while the gray-level distribution within these triangles remains the same. One can therefore easily generate the frontal face appearance if the AAM is correctly fitted to any given non-frontal face of the same individual, provided that there is no self-occlusion on the face. Self-occlusion is usually not a problem for angles of less than ±45°.

    For 2D pose generation, we first compute how each landmark point translates and scales with respect to the corresponding frontal landmark for 8 different poses, and obtain a ratio vector for each pose. We use the ratio vector to create the same pose variation on the shape of another individual. Appearances are also obtained through the AAM using the synthetically generated landmarks. It is important to note that the generated faces contain no information about the individual used in building the ratio matrix. An AAM trained using only frontal faces fits frontal faces well but fails to fit non-frontal faces; our purpose here is to enrich the training database by inserting synthetically generated faces at different poses, so that an AAM trained on frontal faces can converge to images at any pose. A minimal sketch of this ratio-based pose synthesis is given below.
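    The sketch below is our reading of the ratio-vector idea, not the authors' code: per-landmark ratios measured on one reference identity are reused to move another identity's frontal shape toward a target pose. The multiplicative form and all arrays are assumptions.

```python
# Hedged sketch: learn per-landmark pose ratios from a reference identity
# and apply them to the frontal shape of a different individual.
import numpy as np

def pose_ratios(ref_frontal, ref_posed):
    # Element-wise ratio of posed to frontal landmark coordinates for the
    # reference individual; one (N, 2) ratio matrix per pose.
    return ref_posed / ref_frontal

def synthesize_pose(frontal_shape, ratios):
    # Apply the learned per-landmark ratios to a new frontal shape.
    return frontal_shape * ratios

rng = np.random.default_rng(1)
ref_frontal = rng.uniform(50, 200, size=(68, 2))          # reference, frontal
ref_posed = ref_frontal * rng.uniform(0.9, 1.1, (68, 2))  # reference, one pose

ratios = pose_ratios(ref_frontal, ref_posed)
new_frontal = rng.uniform(50, 200, size=(68, 2))          # another individual
new_posed = synthesize_pose(new_frontal, ratios)          # synthetic landmarks
```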
    In summary, we developed an AAM-based face alignment method that handles both illumination and pose variations. The classical AAM fails to model the appearances of the same identity under different illuminations and poses. We solved this problem by inserting histogram-fitting-based normalization into the search mechanism and by inserting different poses of the same identity into the training set. The experimental results show that the proposed face restoration scheme for the AAM provides higher face alignment accuracy in the point-to-point error sense. Recognition results in the PCA and LDA feature spaces show that the proposed illumination and pose normalization outperforms the standard AAM. Keywords: face alignment, active appearance models, illumination-invariant face recognition.

    Differences in facial appearance caused by variations in shape and texture make face recognition a rather difficult problem. Although appearance differences between individuals are large, there are also variations that change each individual's own facial appearance. Illumination and pose changes in particular are among the main difficulties affecting the performance of face recognition systems. In this study, a new method for automatic face alignment that is robust to illumination and pose variations is introduced. By adding face-specific illumination normalization to the classical Active Appearance Model (AAM) structure, a new method is proposed that improves the search and convergence performance of the AAM under different illumination conditions. In AAM-based face segmentation, a model robust to illumination changes is obtained by applying the face-specific illumination normalization immediately after the AAM warping step in every iteration. Given an input face image under arbitrary illumination and at an arbitrary pose, the method tries both to restore and to align it. In addition, a method is introduced that synthesizes images of a person at different poses from a single frontal face image; the AAM shape space is reinforced with these synthetically generated pose data, yielding a method that is robust to pose variations. The proposed method requires neither images of the same individual under different illuminations and poses for model training nor complex illumination models to achieve robustness to illumination changes. As the experimental studies show, the proposed method gives considerably better results than the classical AAM, even under different illuminations and poses. Keywords: face alignment, active appearance models, illumination-invariant face recognition.

    End-to-end 3D face reconstruction with deep neural networks

    Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNNs), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different from recent works that reconstruct and refine the 3D face in an iterative manner using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, and thus the complicated 3D rendering process can be avoided. Moreover, we integrate two components into the DNN architecture, namely a multi-task loss function and a fusion convolutional neural network (CNN), to improve facial expression reconstruction. With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific, so higher-layer features are useful; the expressive 3D facial shape, in comparison, favors lower or intermediate layer features. With the fusion CNN, features from different intermediate layers are fused and transformed for predicting the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction. Comment: Accepted to CVPR17.
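    As a rough PyTorch sketch of the described design (our illustration, not the UH-E2FAR model; all layer sizes and parameter counts are invented), the neutral shape can be regressed from the deepest features while a fusion of pooled intermediate features predicts the expressive part.

```python
# Hedged sketch: two-headed CNN in the spirit of the multi-task design.
import torch
import torch.nn as nn

class E2FARSketch(nn.Module):
    def __init__(self, n_shape=100, n_exp=29):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.block3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Neutral (class-specific) shape from the deepest features only.
        self.neutral_head = nn.Linear(128, n_shape)
        # Fusion head: pooled intermediate features are concatenated.
        self.exp_head = nn.Linear(32 + 64 + 128, n_exp)

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        g1 = self.pool(f1).flatten(1)
        g2 = self.pool(f2).flatten(1)
        g3 = self.pool(f3).flatten(1)
        neutral = self.neutral_head(g3)
        expressive = self.exp_head(torch.cat([g1, g2, g3], dim=1))
        return neutral, expressive

# A multi-task loss would weight separate neutral and expression terms.
model = E2FARSketch()
neutral, expressive = model(torch.randn(2, 3, 128, 128))
```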

    2D Face Recognition System Based on Selected Gabor Filters and Linear Discriminant Analysis (LDA)

    We present a new approach to face recognition. The method is based on 2D face image features, using a subset of non-correlated, orthogonal Gabor filters instead of the whole Gabor filter bank, and then compressing the output feature vector with Linear Discriminant Analysis (LDA). The face image is first enhanced using a multi-stage image processing technique to normalize it and compensate for illumination variation. Experimental results show that the proposed system achieves both dimension reduction and good recognition performance compared to the complete Gabor filter bank. The system has been tested on the CASIA, ORL, and cropped YaleB 2D face image databases and achieved an average recognition rate of 98.9%.
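    A hedged sketch of such a pipeline follows; the filter parameters are illustrative only, and random stand-in data replaces the CASIA/ORL/YaleB images.

```python
# Hedged sketch: a small, hand-picked subset of Gabor filters followed by
# LDA compression of the pooled responses.
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2), lambd=10.0):
    """Filter with a selected subset of Gabor kernels and pool the responses."""
    feats = []
    for theta in thetas:
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=lambd, gamma=0.5, psi=0)
        resp = cv2.filter2D(img, cv2.CV_32F, kernel)
        # Coarse pooling keeps the feature vector short before LDA.
        feats.append(cv2.resize(np.abs(resp), (8, 8)).ravel())
    return np.concatenate(feats)

# Toy usage: 4 'subjects' with 10 random images each.
rng = np.random.default_rng(2)
X = np.stack([gabor_features(rng.random((64, 64), dtype=np.float32))
              for _ in range(40)])
y = np.repeat(np.arange(4), 10)
lda = LinearDiscriminantAnalysis(n_components=3).fit(X, y)
compact = lda.transform(X)    # compressed discriminant feature space
```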