34,125 research outputs found

    MODELING OF LIGHT ILLUMINATION FIELD ON MICRO-EXPRESSION FOR FACE RECOGNITION APPLICATION

    the feature points and thereby degrades the recognition rate. Although past techniques in this area often require manual setting of threshold parameters, the technique presented in this study addresses variation in light illumination by modeling the light field on facial micro-expressions, improving face recognition performance. The method is compared with other techniques on two benchmark datasets: CMU PIE and Multi-PIE.
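The abstract does not detail how the illumination field is modeled. As a hedged illustration of the general idea only (a low-order baseline, far simpler than the paper's light-field model; all names are invented), the sketch below fits a planar illumination field to an image by least squares and divides it out:

```python
import numpy as np

def remove_illumination_plane(image, eps=1e-6):
    """Fit a planar illumination field a*x + b*y + c by least squares and
    divide it out, leaving an (approximately) illumination-free image.
    A generic baseline, not the paper's actual light-field model."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    field = (A @ coeffs).reshape(h, w)
    corrected = image / np.maximum(field, eps)
    corrected -= corrected.min()
    return corrected / (corrected.max() + eps)

# Synthetic 64x64 texture under a strong left-to-right lighting gradient.
rng = np.random.default_rng(0)
texture = rng.random((64, 64))
observed = texture * np.linspace(0.2, 1.0, 64)[None, :]
corrected = remove_illumination_plane(observed)
```

The output is rescaled to [0, 1] so downstream feature extractors see a consistent dynamic range regardless of the original lighting gradient.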

    2D Face Database Diversification Based on 3D Face Modeling

    Pose and illumination are identified as major problems in 2D face recognition (FR). It has been theoretically shown that the more diversified the instances in the training phase, the more accurate and adaptable the FR system becomes. Based on this common awareness, researchers have developed a large number of photographic face databases to meet the demand for training data. In this paper, we propose a novel scheme for 2D face database diversification based on 3D face modeling and computer graphics techniques, which supplies augmented variations of pose and illumination. Based on existing samples of identical individuals from the database, a synthesized 3D face model is employed to create composited 2D scenarios with extra light and pose variations. The new model is based on a 3D Morphable Model (3DMM) and a genetic-type optimization algorithm. The experimental results show that the complemented instances clearly increase the diversification of the existing database.
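The abstract does not spell out the rendering pipeline, but the core pose-diversification idea can be sketched: rotate a 3D face representation and project it back to 2D to synthesize new views. The toy below assumes an orthographic camera and a five-landmark "face" rather than a full 3DMM; all names are illustrative:

```python
import numpy as np

def rotation_y(theta):
    """Rotation matrix about the vertical (yaw) axis, angle in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def project(points_3d, yaw=0.0):
    """Orthographic projection of a 3D point set after a yaw rotation,
    yielding one synthesized 2D pose."""
    rotated = points_3d @ rotation_y(yaw).T
    return rotated[:, :2]          # drop depth for orthographic projection

# Toy "3D face": five landmarks (eyes, nose tip, mouth corners).
face_3d = np.array([[-1.0,  1.0, 0.0],   # left eye
                    [ 1.0,  1.0, 0.0],   # right eye
                    [ 0.0,  0.0, 1.0],   # nose tip (protrudes toward camera)
                    [-0.7, -1.0, 0.2],   # left mouth corner
                    [ 0.7, -1.0, 0.2]])  # right mouth corner

# Synthesize 2D landmark layouts across a range of yaw angles.
augmented = [project(face_3d, yaw=np.deg2rad(a)) for a in (-30, 0, 30)]
```

A real pipeline would render textured 3DMM meshes under varied lighting as well, but the same rotate-then-project geometry underlies the pose half of the augmentation.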

    ClusterFace: Joint Clustering and Classification for Set-Based Face Recognition

    Deep learning technology has enabled successful modeling of complex facial features when high quality images are available. Nonetheless, accurate modeling and recognition of human faces in real world scenarios 'in the wild' or under adverse conditions remains an open problem. When unconstrained faces are mapped into deep features, variations such as illumination, pose, occlusion, etc., can create inconsistencies in the resultant feature space. Hence, deriving conclusions based on direct associations could lead to degraded performance. This raises the need for a basic feature space analysis prior to face recognition. This paper devises a joint clustering and classification scheme which learns deep face associations in an easy-to-hard way. Our method is based on hierarchical clustering, where the early iterations tend to preserve high reliability. The rationale of our method is that a reliable clustering result can provide insights on the distribution of the feature space, which can guide the classification that follows. Experimental evaluations on three tasks, face verification, face identification and rank-order search, demonstrate better or competitive performance compared to the state-of-the-art on all three experiments.
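The easy-to-hard rationale can be made concrete with a toy two-stage labeler: an "easy" pass links only samples whose feature distance falls under a tight threshold (a stand-in for the early, high-reliability iterations of hierarchical clustering), and a "hard" pass classifies every sample against the resulting cluster centroids. This is a simplified illustration under assumed names, not the paper's ClusterFace algorithm:

```python
import numpy as np

def easy_to_hard_labels(features, tight=1.0):
    """Two-stage set labeling: (1) an 'easy' pass greedily links samples
    whose pairwise distance is below a tight threshold, forming
    high-reliability clusters; (2) a 'hard' pass assigns every sample to
    its nearest cluster centroid."""
    n = len(features)
    labels = -np.ones(n, dtype=int)
    next_label = 0
    # Easy pass: greedy linkage under the tight threshold.
    for i in range(n):
        if labels[i] == -1:
            labels[i] = next_label
            next_label += 1
        for j in range(i + 1, n):
            if labels[j] == -1 and np.linalg.norm(features[i] - features[j]) < tight:
                labels[j] = labels[i]
    # Hard pass: refine by nearest-centroid classification.
    centroids = np.array([features[labels == k].mean(axis=0)
                          for k in range(next_label)])
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Two well-separated "identity" clusters in a toy 2D feature space.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal([0, 0], 0.1, size=(5, 2)),
                   rng.normal([5, 5], 0.1, size=(5, 2))])
labels = easy_to_hard_labels(feats, tight=1.0)
```

The design choice mirrors the abstract: decisions made under high confidence first, so the later, harder assignments are guided by a reliable picture of the feature-space distribution.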

    Unconstrained Face Recognition

    Although face recognition has been actively studied over the past decade, state-of-the-art recognition systems yield satisfactory performance only under controlled scenarios, and recognition accuracy degrades significantly when confronted with unconstrained situations due to variations such as illumination, pose, etc. In this dissertation, we propose novel approaches that are able to recognize human faces under unconstrained situations. Part I presents algorithms for face recognition under illumination/pose variations. For face recognition across illuminations, we present a generalized photometric stereo approach by modeling all face appearances belonging to all humans under all lighting conditions. Using a linear generalization, we achieve a factorization of the observation matrix consisting of face appearances of different individuals, each under a different illumination. We resolve ambiguities in the factorization using surface integrability and symmetry constraints. In addition, an illumination-invariant identity descriptor is provided to perform face recognition across illuminations. We further extend the generalized photometric stereo approach to an illuminating light field approach, which is able to recognize faces under pose and illumination variations. Face appearance lies on a high-dimensional nonlinear manifold. In Part II, we introduce machine learning approaches based on reproducing kernel Hilbert space (RKHS) to capture higher-order statistical characteristics of the nonlinear appearance manifold. In particular, we analyze principal components of the RKHS in a probabilistic manner and compute distances such as the Chernoff distance and the Kullback-Leibler divergence between two Gaussian densities in RKHS. Part III is on face tracking and recognition from video. We first present an enhanced tracking algorithm that models online appearance changes in a video sequence using a mixture model and produces good tracking results in various challenging scenarios. For video-based face recognition, while conventional approaches treat tracking and recognition separately, we present a simultaneous tracking-and-recognition approach. This simultaneous approach, solved using the sequential importance sampling algorithm, improves accuracy in both tracking and recognition. Finally, we propose a unifying framework called probabilistic identity characterization that is able to perform face recognition under registration/illumination/pose variations and from a still image, a group of still images, or a video sequence.
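The factorization step in Part I can be illustrated concretely. Under a Lambertian model, an observation matrix of stacked appearances is (noiselessly) rank 3, so truncated SVD recovers the two factors up to a 3x3 linear ambiguity, which the dissertation resolves with integrability and symmetry constraints (not shown here). The sketch below uses synthetic data and invented names:

```python
import numpy as np

def factorize_observations(M, rank=3):
    """Rank-3 factorization of an observation matrix M (pixels x images),
    M ~ B @ L, via truncated SVD. B absorbs albedo-scaled surface normals
    and L the per-image light directions, up to a 3x3 linear ambiguity."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    B = U[:, :rank] * np.sqrt(s[:rank])
    L = np.sqrt(s[:rank])[:, None] * Vt[:rank]
    return B, L

# Synthetic Lambertian data: 100 pixels observed under 8 lighting conditions.
rng = np.random.default_rng(2)
normals = rng.normal(size=(100, 3))   # albedo-scaled surface normals
lights = rng.normal(size=(3, 8))      # per-image light directions
M = normals @ lights                  # noiseless rank-3 observations

B, L = factorize_observations(M)
residual = np.linalg.norm(M - B @ L) / np.linalg.norm(M)
```

Because the noiseless observation matrix has exact rank 3, the truncated SVD reconstructs it to numerical precision; with real images the residual absorbs noise and non-Lambertian effects.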

    Learning Geometry-free Face Re-lighting

    The accurate modeling of the variability of illumination in a class of images is a fundamental problem that occurs in many areas of computer vision and graphics. For instance, in computer vision there is the problem of facial recognition: simply put, one would hope to be able to identify a known face under any illumination. On the other hand, in graphics one could imagine a system that, given an image, identifies the illumination model and then uses it to create new images. In this thesis we describe a method for learning the illumination model for a class of images. Once the model is learnt, it is used to render new images of the same class under new illumination. Results are shown for both synthetic and real images. The key contribution of this work is that images of known objects can be re-illuminated using small patches of image data and relatively simple kernel regression models. Additionally, our approach does not require any knowledge of the geometry of the class of objects under consideration, making it relatively straightforward to implement. As part of this work we examine existing geometric and image-based re-lighting techniques; give a detailed description of our geometry-free face re-lighting process; present non-linear regression and basis selection with respect to image synthesis; discuss system limitations; and look at possible extensions and future work.
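The "small patches plus simple kernel regression" recipe can be sketched with a minimal kernel ridge regressor mapping patch vectors under a source illumination to the same patches under a target illumination. Everything below (class and function names, the RBF kernel choice, the linear-map toy target) is an assumption for illustration, not the thesis's actual setup:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=5.0):
    """Gaussian RBF kernel matrix between row-sample sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

class KernelRelighter:
    """Kernel ridge regression from source-illumination patch vectors to
    target-illumination patch vectors: a geometry-free relighting sketch."""
    def __init__(self, gamma=5.0, ridge=1e-6):
        self.gamma, self.ridge = gamma, ridge

    def fit(self, X_src, Y_tgt):
        self.X_train = X_src
        K = rbf_kernel(X_src, X_src, self.gamma)
        # Solve (K + ridge*I) alpha = Y for the dual coefficients.
        self.alpha = np.linalg.solve(K + self.ridge * np.eye(len(X_src)), Y_tgt)
        return self

    def relight(self, X_new):
        return rbf_kernel(X_new, self.X_train, self.gamma) @ self.alpha

# Toy data: "relighting" is an unknown fixed linear map on 4-pixel patches.
rng = np.random.default_rng(3)
X_src = rng.random((50, 4))           # patches under the source light
Y_tgt = X_src @ rng.random((4, 4))    # same patches under the target light
model = KernelRelighter().fit(X_src, Y_tgt)
err = np.abs(model.relight(X_src[:5]) - Y_tgt[:5]).max()
```

Note that nothing in the model refers to surface geometry; the regression operates purely on patch intensities, which is the geometry-free property the thesis emphasizes.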