
    Reference face graph for face recognition

    Face recognition has been studied extensively; however, real-world face recognition remains a challenging task. The demand for unconstrained practical face recognition is rising with the explosion of online multimedia, such as social networks and video surveillance footage, where face analysis is of significant importance. In this paper, we approach face recognition in the context of graph theory. We recognize an unknown face using an external reference face graph (RFG). An RFG is generated, and recognition of a given face is achieved by comparing it to the faces in the constructed RFG. Centrality measures are utilized to identify distinctive faces in the reference face graph. The proposed RFG-based face recognition algorithm is robust to changes in pose and is also alignment-free. The RFG recognition is used in conjunction with DCT locality-sensitive hashing for efficient retrieval to ensure scalability. Experiments are conducted on several publicly available databases, and the results show that the proposed approach outperforms state-of-the-art methods without any preprocessing requirements such as face alignment. Due to the richness of the reference set construction, the proposed method can also handle illumination and expression variation.
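    A minimal sketch of the reference-graph idea, assuming precomputed feature vectors: reference faces become graph nodes, similar faces are connected, and degree centrality stands in for the paper's centrality measures when weighting distinctive references. The feature extraction, similarity threshold, and DCT locality-sensitive hashing stage are placeholders, not the authors' implementation.

# Hedged sketch of a reference face graph over precomputed feature vectors.
import numpy as np
import networkx as nx

def build_reference_face_graph(features, threshold=0.7):
    """Connect reference faces whose cosine similarity exceeds a threshold."""
    n = len(features)
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    graph = nx.Graph()
    graph.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] > threshold:
                graph.add_edge(i, j, weight=float(sim[i, j]))
    return graph, normed

def recognize(probe, graph, normed_refs, labels):
    """Score each identity by centrality-weighted similarity to the probe."""
    centrality = nx.degree_centrality(graph)   # stand-in for the paper's centrality measures
    probe = probe / np.linalg.norm(probe)
    scores = {}
    for idx, ref in enumerate(normed_refs):
        s = float(probe @ ref) * (1.0 + centrality[idx])
        scores[labels[idx]] = max(scores.get(labels[idx], -np.inf), s)
    return max(scores, key=scores.get)

# Toy usage with random vectors standing in for face features.
rng = np.random.default_rng(0)
refs = rng.normal(size=(6, 128))
labels = ["a", "a", "b", "b", "c", "c"]
graph, normed = build_reference_face_graph(refs)
print(recognize(refs[2] + 0.01 * rng.normal(size=128), graph, normed, labels))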

    Creating invariance to "nuisance parameters" in face recognition

    A major goal for face recognition is to identify faces where the pose of the probe differs from that of the stored face. Typical feature vectors vary more with pose than with identity, leading to very poor recognition performance. We propose a non-linear many-to-one mapping from a conventional feature space to a new space constructed so that each individual has a unique feature vector regardless of pose. Training data is used to implicitly parameterize the position of the multi-dimensional face manifold by pose. We introduce a coordinate transform that depends on the position on the manifold. This transform is chosen so that different poses of the same face are mapped to the same feature vector. The same approach is applied to illumination changes. We investigate different methods for creating features that are invariant to both pose and illumination. We provide a metric to assess the discriminability of the resulting features. Our technique increases the discriminability of faces under unknown pose and lighting compared to contemporary methods.
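    A toy illustration of mapping pose-varied features to a pose-independent vector, assuming synthetic features and a linear pose effect; the paper's mapping is non-linear and manifold-dependent, so the ridge regression below only conveys the many-to-one idea, not the proposed transform.

# Hedged sketch: regress (feature, pose) pairs onto a canonical per-identity feature.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_ids, dim = 20, 32
frontal = rng.normal(size=(n_ids, dim))        # canonical per-identity features (synthetic)
poses = rng.uniform(-60, 60, size=(n_ids, 5))  # yaw angles per training sample

# Synthesize pose-perturbed features: identity vector plus a toy pose-driven shift.
X, Y = [], []
for i in range(n_ids):
    for yaw in poses[i]:
        shift = 0.05 * yaw * np.ones(dim)
        X.append(np.concatenate([frontal[i] + shift, [yaw]]))
        Y.append(frontal[i])
X, Y = np.asarray(X), np.asarray(Y)

mapper = Ridge(alpha=1.0).fit(X, Y)            # many-to-one map: feature + pose -> canonical feature

# Different poses of the same identity should now map close together.
probe_a = np.concatenate([frontal[0] + 0.05 * 40 * np.ones(dim), [40.0]])
probe_b = np.concatenate([frontal[0] + 0.05 * -30 * np.ones(dim), [-30.0]])
print(np.linalg.norm(mapper.predict([probe_a]) - mapper.predict([probe_b])))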

    Empirical mode decomposition-based facial pose estimation inside video sequences

    We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images in order to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations so that, when the input facial image is described by the selected IMF components, these negative effects are minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
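    A sketch of the mutual-information matching half of the method, assuming pose-labelled reference images; the EMD decomposition into IMF components (available in packages such as PyEMD) is omitted here, so raw intensities are compared directly rather than the selected IMF representation.

# Hedged sketch: estimate pose as the label of the reference with maximal mutual information.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two grayscale images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def estimate_pose(probe, references):
    """references: list of (pose_label, image) pairs; returns the best-matching pose label."""
    return max(references, key=lambda ref: mutual_information(probe, ref[1]))[0]

# Toy usage with random images standing in for pose-specific templates.
rng = np.random.default_rng(2)
refs = [(yaw, rng.integers(0, 255, size=(64, 64))) for yaw in (-45, 0, 45)]
probe = refs[1][1] + rng.integers(0, 10, size=(64, 64))
print(estimate_pose(probe, refs))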

    3D Face Recognition: Feature Extraction Based on Directional Signatures from Range Data and Disparity Maps

    In this paper, the author presents work on profiling of i) range data and ii) disparity maps from a stereo-vision system, which are used as signatures for 3D face recognition. The signatures capture the intensity variations along a line at sample points on a face in any particular direction. The directional signatures and some of their combinations are compared to study the variability in recognition performance. Two 3D face image datasets, namely a local student database captured with a stereo-vision system and the FRGC v1 range dataset, are used for performance evaluation.
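    A sketch of one plausible reading of a directional signature, assuming a range image and a handful of landmark sample points; the direction set, sampling length, and combination rule are illustrative choices rather than the configuration used with the student or FRGC v1 data.

# Hedged sketch: sample depth values along a fixed direction from each sample point.
import numpy as np

def directional_signature(range_img, points, direction, length=16):
    """Sample `length` depth values from each point along `direction` (dy, dx)."""
    h, w = range_img.shape
    dy, dx = direction
    sig = []
    for (y, x) in points:
        for t in range(length):
            yy = min(max(int(round(y + t * dy)), 0), h - 1)
            xx = min(max(int(round(x + t * dx)), 0), w - 1)
            sig.append(range_img[yy, xx])
    return np.asarray(sig, dtype=float)

# Toy usage: horizontal and vertical signatures on a synthetic range image.
rng = np.random.default_rng(3)
face_range = rng.uniform(0.4, 0.6, size=(128, 128))
pts = [(40, 64), (64, 64), (88, 64)]                   # hypothetical landmark sample points
horiz = directional_signature(face_range, pts, (0, 1))
vert = directional_signature(face_range, pts, (1, 0))
combined = np.concatenate([horiz, vert])               # a combined-direction signature
print(combined.shape)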

    Tex2Shape: Detailed Full Human Body Geometry From a Single Image

    We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model can infer full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region obtained from off-the-shelf methods. From a partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely with synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
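    A minimal sketch of the aligned image-to-image formulation, assuming a small encoder-decoder in PyTorch that maps a partial UV texture to normal and vector-displacement maps; the layer sizes, resolution, and training setup are placeholders, not the Tex2Shape architecture.

# Hedged sketch: partial texture in, aligned normal and displacement maps out.
import torch
import torch.nn as nn

class TextureToShape(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 6, 4, stride=2, padding=1),  # 3 normal + 3 displacement channels
        )

    def forward(self, partial_texture):
        out = self.decoder(self.encoder(partial_texture))
        normals, displacements = out[:, :3], out[:, 3:]
        return normals, displacements

# Toy usage: one 256x256 partial texture in, two aligned maps out.
model = TextureToShape()
tex = torch.randn(1, 3, 256, 256)
n_map, d_map = model(tex)
print(n_map.shape, d_map.shape)   # both (1, 3, 256, 256)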