20 research outputs found

    AN EXPERIMENTAL STUDY OF FACE RECOGNITION METHOD

    The increased use of face recognition techniques has led to the development of improved methods with higher accuracy and efficiency. Currently, there are various face recognition techniques based on different algorithms. In this study, a new method of face recognition is proposed based on the idea of wavelet operators for constructing the spectral graph wavelet transform (SGWT). The proposed idea relies on the spectral graph wavelet kernel procedure. In this method, feature extraction is performed by transforming the face image from the spatial domain into the SGWT domain. For recognition, the feature vectors of selected training samples are used to perform the classification. The decomposition of the face image is done using the SGWT, and the system identifies the test image by calculating the Euclidean distance. Finally, the study conducted an experiment using the ORL face database. The results show that the recognition accuracy of the proposed system is higher and can be further improved by increasing the number of training images. Overall, the results show that the proposed method performs well in terms of face recognition accuracy.
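The recognition step the abstract describes reduces to nearest-neighbour matching in Euclidean distance. A minimal sketch with hypothetical toy data (random vectors stand in for the SGWT feature coefficients, which are not reproduced here):

```python
import numpy as np

def identify(train_features, train_labels, test_feature):
    """Return the label of the training sample closest to the test sample
    in Euclidean distance, as in the recognition step described above."""
    dists = np.linalg.norm(train_features - test_feature, axis=1)
    return train_labels[int(np.argmin(dists))]

# Toy data: random vectors stand in for SGWT feature coefficients.
rng = np.random.default_rng(0)
train = rng.normal(size=(4, 8))
labels = ["subj1", "subj2", "subj3", "subj4"]
probe = train[2] + 0.01 * rng.normal(size=8)  # slightly perturbed copy of subj3
```

With the perturbation far smaller than the typical distance between random feature vectors, the probe is identified as "subj3".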

    Face Recognition based on CNN 2D-3D Reconstruction using Shape and Texture Vectors Combining

    This study proposes a face recognition model using a combination of shape and texture vectors to produce new face images from 2D-3D reconstruction. The reconstruction process that produces 3D face images is carried out using a convolutional neural network (CNN) on 2D face images. Merging the shape and texture vectors produces correlation points on new face images that are similar to the initial image. Principal Component Analysis (PCA) is used as the feature extraction method, and the Mahalanobis distance is used for classification. The tests produce a better recognition rate than face recognition testing using 2D images.
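The PCA-plus-Mahalanobis pipeline the abstract names can be sketched in a few lines; the data below are a hypothetical toy stand-in for vectorised face images, not the CNN-reconstructed faces of the paper:

```python
import numpy as np

# Hypothetical toy data standing in for vectorised face images.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 6))

# PCA feature extraction: project mean-centred data onto the top-k
# principal directions obtained from an SVD.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 3
Z = (X - mu) @ Vt[:k].T

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance used for classification in the PCA subspace."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

cov_inv = np.linalg.inv(np.cov(Z, rowvar=False))
center = Z.mean(axis=0)
```

In a classifier, each probe would be projected the same way and assigned to the class whose centre is nearest in Mahalanobis distance.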

    Compensating inaccurate annotations to train 3D facial landmark localisation models

    In this paper we investigate the impact of inconsistency in manual annotations when they are used to train automatic models for 3D facial landmark localisation. We start by showing that it is possible to objectively measure the consistency of annotations in a database, provided that it contains replicates (i.e. repeated scans from the same person). Applying such a measure to the widely used FRGC database, we find that the manual annotations currently available are suboptimal and can strongly impair the accuracy of automatic models learnt from them. To address this issue, we present a simple algorithm to automatically correct a set of annotations and show that it can significantly improve the accuracy of the models in terms of landmark localisation errors. This improvement is observed even when errors are measured with respect to the original (uncorrected) annotations. However, we also show that if errors are computed against an alternative set of manual annotations with higher consistency, the accuracy of models built using the corrections from the presented algorithm tends to converge to that achieved by building the models on the alternative, more consistent set.
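The key idea of measuring consistency from replicates can be illustrated with a simple proxy: the spread of each landmark's annotations around its per-landmark centroid across repeated scans of the same person. This is a sketch under assumptions (no rigid alignment between scans, synthetic jitter), not the paper's actual measure:

```python
import numpy as np

def annotation_consistency(replicates):
    """Mean distance of each annotation from the per-landmark centroid,
    computed over repeated scans of the same person. Lower values mean
    more consistent annotations. replicates: (n_scans, n_landmarks, 3)."""
    centroid = replicates.mean(axis=0)
    return float(np.mean(np.linalg.norm(replicates - centroid, axis=2)))

# Hypothetical annotators: one careful (0.1 jitter), one sloppy (1.0 jitter).
rng = np.random.default_rng(2)
truth = rng.normal(size=(5, 3))
careful = truth + 0.1 * rng.normal(size=(4, 5, 3))
sloppy = truth + 1.0 * rng.normal(size=(4, 5, 3))
```

The measure separates the two annotators: the careful one scores a much lower (better) consistency value.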

    3D facial landmark localization using combinatorial search and shape regression

    This paper presents a method for the automatic detection of facial landmarks. The algorithm receives a set of 3D candidate points for each landmark (e.g. from a feature detector) and performs a combinatorial search constrained by a deformable shape model. A key assumption of our approach is that for some landmarks there might not be an accurate candidate in the input set. This is tackled by detecting partial subsets of landmarks and inferring those that are missing so that the probability of the deformable model is maximized. The ability of the model to work with incomplete information makes it possible to limit the number of candidates that need to be retained, substantially reducing the number of possible combinations to be tested with respect to the alternative of trying to always detect the complete set of landmarks. We demonstrate the accuracy of the proposed method on a set of 144 facial scans acquired with a hand-held laser scanner in the context of clinical craniofacial dysmorphology research. Using spin images to describe the geometry and targeting 11 facial landmarks, we obtain an average error below 3 mm, which compares favorably with other state-of-the-art approaches based on geometric descriptors.
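A toy version of the search conveys the mechanism: each landmark may take any of its candidates or be skipped, and skipped landmarks are inferred from the shape model. Here the "model probability" is replaced by squared deviation from a mean shape plus a per-skip penalty, a deliberate simplification of the paper's deformable model:

```python
import itertools
import numpy as np

def fit_landmarks(candidates, mean_shape, miss_penalty=1.0):
    """Brute-force combinatorial search over per-landmark candidate points.

    Each landmark may take any of its candidates or be skipped (None), in
    which case it is inferred from the mean shape. A combination is scored
    by squared deviation from the mean shape plus a penalty per skipped
    landmark; this stands in for maximising the deformable shape model's
    probability described above."""
    best, best_cost = None, np.inf
    options = [list(c) + [None] for c in candidates]
    for combo in itertools.product(*options):
        pts = np.array([mean_shape[i] if p is None else p
                        for i, p in enumerate(combo)])
        cost = np.sum((pts - mean_shape) ** 2)
        cost += miss_penalty * sum(p is None for p in combo)
        if cost < best_cost:
            best, best_cost = pts.copy(), cost
    return best

mean = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
cands = [
    [mean[0] + 0.05],                    # good candidate
    [mean[1] - 0.05],                    # good candidate
    [mean[2] + np.array([5., 5., 5.])],  # only a bad candidate: skipped
]
result = fit_landmarks(cands, mean)
```

The third landmark's only candidate is far from plausible, so the search skips it and infers its position from the model, exactly the incomplete-information behaviour the abstract highlights.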

    Active nonrigid ICP algorithm

    © 2015 IEEE. The problem of fitting a 3D facial model to a 3D mesh has received a lot of attention over the past 15-20 years. The majority of techniques fit a general model consisting of a simple parameterisable surface or a mean 3D facial shape. The drawback of this approach is that it is rather difficult to describe the non-rigid aspects of the face using just a single facial model. One way to capture 3D facial deformations is by means of a statistical 3D model of the face or its parts. This is particularly evident when we want to capture the deformations of the mouth region. Even though statistical models of the face are generally applied to model facial intensity, there are few approaches that fit a statistical model of 3D faces. In this paper, in order to capture and describe the non-rigid nature of facial surfaces, we build a part-based statistical model of the 3D facial surface and combine it with non-rigid iterative closest point (ICP) algorithms. We show that the proposed algorithm largely outperforms state-of-the-art algorithms for 3D face fitting and alignment, especially in its description of the mouth region.
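The combination of closest-point matching with a statistical shape prior can be sketched as a single, greatly simplified iteration: match vertices, then project the matched shape onto a PCA subspace. This is a toy stand-in for the paper's part-based model and full non-rigid ICP:

```python
import numpy as np

def model_icp_step(source, target, mean_shape, basis):
    """One (greatly simplified) iteration of statistical-model ICP:
    1) match every source vertex to its closest target vertex;
    2) project the matched shape onto a PCA shape subspace
       (mean_shape + basis @ coeff), which regularises the deformation
       much as a statistical model of the face does.
    source/target: (n, 3); mean_shape: (n*3,); basis: (n*3, k) orthonormal."""
    dists = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[np.argmin(dists, axis=1)].reshape(-1)
    coeff = basis.T @ (matched - mean_shape)
    return (mean_shape + basis @ coeff).reshape(-1, 3)

# Toy shapes: 4 well-separated vertices, a single deformation mode.
mean = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
mode = np.ones(12) / np.sqrt(12.0)
target = (mean.reshape(-1) + 0.3 * mode).reshape(4, 3)
source = target + 0.01  # slightly offset copy of the target
fitted = model_icp_step(source, target, mean.reshape(-1), mode[:, None])
```

Because the target lies exactly in the model's subspace, the projected fit reproduces it; on real data the projection is what keeps the deformation plausible.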

    Fully Automatic Expression-Invariant Face Correspondence

    We consider the problem of computing accurate point-to-point correspondences among a set of human face scans with varying expressions. Our fully automatic approach does not require any manually placed markers on the scans. Instead, the approach learns the locations of a set of landmarks present in a database and uses this knowledge to automatically predict the locations of these landmarks on a newly available scan. The predicted landmarks are then used to compute point-to-point correspondences between a template model and the new scan. To accurately fit the expression of the template to the expression of the scan, we use a blendshape model as the template. Our algorithm was tested on a database of human faces from different ethnic groups with strongly varying expressions. Experimental results show that the obtained point-to-point correspondence is both highly accurate and consistent for most of the tested 3D face models.
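Fitting the expression of a blendshape template to a scan amounts to solving for blendshape weights; a minimal least-squares sketch, with hypothetical toy shapes rather than a real face template:

```python
import numpy as np

def fit_blendshape_weights(neutral, blendshapes, scan):
    """Least-squares fit of blendshape weights (simplified sketch).

    Models the scan as neutral + B @ w, where the columns of B are the
    blendshape displacement vectors; solving for w adapts the template's
    expression to the scan's. All shapes are flattened (n*3,) vectors."""
    B = np.stack([b - neutral for b in blendshapes], axis=1)
    w, *_ = np.linalg.lstsq(B, scan - neutral, rcond=None)
    return w

# Hypothetical toy template with two expression blendshapes.
rng = np.random.default_rng(3)
neutral = rng.normal(size=12)
d1, d2 = rng.normal(size=12), rng.normal(size=12)
scan = neutral + 0.7 * d1 + 0.2 * d2
w = fit_blendshape_weights(neutral, [neutral + d1, neutral + d2], scan)
```

Because the toy scan is an exact blend, the recovered weights are (0.7, 0.2); in practice the fit is a least-squares approximation.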

    Dense 3D Face Correspondence

    We present an algorithm that automatically establishes dense correspondences between a large number of 3D faces. Starting from automatically detected sparse correspondences on the outer boundary of 3D faces, the algorithm triangulates existing correspondences and expands them iteratively by matching points of distinctive surface curvature along the triangle edges. After exhausting keypoint matches, further correspondences are established by generating evenly distributed points within triangles by evolving level-set geodesic curves from the centroids of large triangles. A deformable model (K3DM) is constructed from the densely corresponded faces, and an algorithm is proposed for morphing the K3DM to fit unseen faces. This algorithm iterates between rigid alignment of an unseen face and regularized morphing of the deformable model. We have extensively evaluated the proposed algorithms on synthetic data and real 3D faces from the FRGCv2, Bosphorus, BU3DFE and UND Ear databases using quantitative and qualitative benchmarks. Our algorithm achieved dense correspondences with a mean localisation error of 1.28 mm on synthetic faces and detected 14 anthropometric landmarks on unseen real faces from the FRGCv2 database with 3 mm precision. Furthermore, our deformable model fitting algorithm achieved 98.5% face recognition accuracy on the FRGCv2 database and 98.6% on the Bosphorus database. Our dense model is also able to generalize to unseen datasets.
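The rigid-alignment half of the fitting loop is classically solved with orthogonal Procrustes (Kabsch). A self-contained sketch, assuming 1:1 correspondences between the two point sets and no scale:

```python
import numpy as np

def rigid_align(source, target):
    """Rigid (rotation + translation) alignment of corresponded point sets
    via orthogonal Procrustes / Kabsch; a sketch of the rigid-alignment
    step of the fitting loop described above."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return (source - mu_s) @ R.T + mu_t

# Recover a known rotation + translation on toy points.
rng = np.random.default_rng(4)
src = rng.normal(size=(10, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta), np.cos(theta), 0.],
               [0., 0., 1.]])
tgt = src @ Rz.T + np.array([1., 2., 3.])
aligned = rigid_align(src, tgt)
```

The aligned source coincides with the target up to numerical precision, confirming the closed-form solution.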

    Facial Landmark Detection and Estimation under Various Expressions and Occlusions

    Landmark localization is one of the fundamental approaches to facial expression recognition, occlusion detection and face alignment. It plays a vital role in many applications in image processing and computer vision. Acquisition conditions such as expression, occlusion and background complexity affect landmark localization performance, which can subsequently lead to system failure. In this paper, the authors review the challenges of various landmark detection techniques, the numbers of landmark points and the dataset types employed in the existing literature. An advanced technique for facial landmark detection under various expressions and occlusions is then presented. This is carried out using a Point Distribution Model (PDM) to estimate the occluded part of the facial regions and detect the face. The proposed method was evaluated using the University of Milano Bicocca Database (UMB) and gave more promising results than several previous works. In conclusion, the technique detected faces despite a variety of occlusions and expressions, and can further be applied to images with different poses and illumination variations.