6 research outputs found

    Modified Local Ternary Pattern Based Face Recognition Using SVM

    Face recognition (FR) has drawn considerable interest and attention in the area of pattern recognition. FR is still a challenging task in real-time applications, even though a number of face recognition algorithms are available that work in various constrained environments. This paper proposes an FR algorithm using a Modified Local Ternary Pattern (MLTP) with a multi-class Support Vector Machine (SVM) classifier. The MLTP features of the face images are classified by an Error-Correcting Output Code (ECOC) multiclass model with SVM. The proposed method is tested on six standard face databases. The experimental results demonstrate that MLTP with SVM achieves higher recognition accuracy than conventional methods.
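    The abstract does not describe the MLTP operator itself, but the general ternary-pattern-plus-ECOC-SVM pipeline it builds on can be sketched with a plain (unmodified) local ternary pattern and scikit-learn's ECOC wrapper. The threshold t, the histogram binning, and the X_imgs / y variables below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OutputCodeClassifier

def ltp_histogram(img, t=5):
    """Basic local ternary pattern histogram (upper/lower split); not the paper's MLTP."""
    img = img.astype(np.int32)
    center = img[1:-1, 1:-1]
    upper = np.zeros_like(center)
    lower = np.zeros_like(center)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        # Ternary code split into two binary codes: "upper" (>= center + t) and "lower" (<= center - t)
        upper |= (neigh >= center + t).astype(np.int32) << bit
        lower |= (neigh <= center - t).astype(np.int32) << bit
    hist_u, _ = np.histogram(upper, bins=256, range=(0, 256))
    hist_l, _ = np.histogram(lower, bins=256, range=(0, 256))
    feat = np.concatenate([hist_u, hist_l]).astype(np.float64)
    return feat / (feat.sum() + 1e-8)

# X_imgs: list of grayscale face crops, y: subject labels (assumed to exist)
# X = np.stack([ltp_histogram(im) for im in X_imgs])
# clf = OutputCodeClassifier(SVC(kernel="linear"), code_size=2, random_state=0)  # ECOC over SVMs
# clf.fit(X, y)
```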

    A Multi-Biometric System Based on Feature and Score Level Fusions

    In general, the information of multiple biometric modalities is fused at a single level, for example, the score level or the feature level. The recognition accuracy of a multimodal biometric system may not be improved by carrying out fusion at a single level, since one matcher may provide a performance lower than that provided by the other matchers. In view of this, we propose a new fusion scheme, referred to as the matcher performance-based (MPb) fusion scheme, in which fusion is carried out at two levels, the feature level and the score level, to improve the overall recognition accuracy. First, we consider the performance of the individual matchers in order to determine which modalities should be used for fusion at the feature level. Then, the selected modalities are fused at this level using their encoded features. Next, we fuse the score obtained from the feature-level fusion with that of the modality whose performance is the highest. To carry out this fusion, a new normalization technique, referred to as the overlap extrema-variation-based anchored min-max (OEVBAMM) normalization technique, is also proposed. Considering three modalities, namely fingerprint, palmprint, and earprint, the performance of the proposed fusion scheme, as well as that of the single-level fusion scheme, is evaluated with various normalization and weighting techniques in terms of a number of metrics. It is shown that the multi-biometric system based on the proposed fusion scheme provides the best performance when it employs the new normalization technique and the confidence-based weighting (CBW) method.
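    The OEVBAMM normalization and the CBW weighting are specific to this work and are not reproduced here. As a rough sketch of the score-level half of such a scheme, the snippet below uses plain min-max normalization and weights derived from each matcher's stand-alone accuracy; all scores and weights are made-up illustrative values.

```python
import numpy as np

def min_max_normalize(scores):
    """Plain min-max normalization; the paper's OEVBAMM variant anchors the range
    using overlap-region extrema, which is not reproduced here."""
    s = np.asarray(scores, dtype=np.float64)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def weighted_score_fusion(score_lists, weights):
    """Weighted-sum fusion of normalized scores from several matchers."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    normed = np.stack([min_max_normalize(s) for s in score_lists])
    return weights @ normed

# Example: fuse fingerprint, palmprint, and earprint match scores for one probe
# against a gallery of 5 identities, weighting matchers by their stand-alone accuracy.
fingerprint = [0.62, 0.10, 0.33, 0.05, 0.21]
palmprint   = [0.70, 0.25, 0.28, 0.11, 0.30]
earprint    = [0.55, 0.40, 0.20, 0.15, 0.35]
fused = weighted_score_fusion([fingerprint, palmprint, earprint],
                              weights=[0.95, 0.90, 0.85])  # illustrative accuracies
print(fused.argmax())  # index of the best-matching gallery identity
```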

    3D FACE RECOGNITION USING LOCAL FEATURE BASED METHODS

    Face recognition has attracted more attention from researchers than other biometrics due to its non-intrusive and friendly nature. Although several methods for 2D face recognition have been proposed so far, there are still challenges related to 2D faces, including illumination, pose variation, and facial expression. In the last few decades, the 3D face research area has become more interesting, since shape and geometry information can be used to handle the challenges of 2D faces. Existing algorithms for face recognition are divided into three categories: holistic feature-based, local feature-based, and hybrid methods. According to the literature, local features have shown better performance than holistic features under expression and occlusion challenges. In this dissertation, local feature-based methods for 3D face recognition have been studied and surveyed. In the survey, local methods are classified into three broad categories: keypoint-based, curve-based, and local surface-based methods. Inspired by keypoint-based methods, which are effective for handling partial occlusion, a structural context descriptor on pyramidal shape maps and the texture image is proposed in a multimodal scheme. Score-level fusion is used to combine the keypoint matching scores of the texture and shape modalities. The survey shows that local surface-based methods are efficient at handling facial expression. Accordingly, a local derivative pattern is introduced in this work to extract distinct features from the depth map. In addition, the local derivative pattern is applied to surface normals. Most 3D face recognition algorithms rely on the depth information to detect and extract features. Compared to depth maps, the surface normal at each point determines the facial surface orientation, which provides an efficient facial surface representation from which distinct features can be extracted for the recognition task. An Extreme Learning Machine (ELM)-based auto-encoder is used to make the feature space more discriminative. Expression- and occlusion-robust analysis using the information from the normal maps is investigated by dividing the facial region into patches. A novel hybrid classifier is proposed that combines a Sparse Representation Classifier (SRC) and an ELM classifier in a weighted scheme. The proposed algorithms have been evaluated on four widely used 3D face databases: FRGC, Bosphorus, BU-3DFE, and 3D-TEC. The experimental results illustrate the effectiveness of the proposed approaches. The main contribution of this work lies in the identification and analysis of effective local features and a classification method for improving 3D face recognition performance.
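    As a small illustration of the surface-normal representation that the dissertation's descriptors operate on, the sketch below estimates per-pixel unit normals from a depth map by finite differences. The gradient-based estimate and the hypothetical face_depth.npy input are assumptions, not the dissertation's exact preprocessing.

```python
import numpy as np

def depth_to_normals(depth):
    """Estimate per-pixel unit surface normals from a depth map via finite differences."""
    dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
    # The normal of the surface z = f(x, y) is proportional to (-dz/dx, -dz/dy, 1).
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth, dtype=np.float64)])
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / np.maximum(norm, 1e-12)

# depth = np.load("face_depth.npy")                 # hypothetical preprocessed depth map
# nx, ny, nz = np.moveaxis(depth_to_normals(depth), 2, 0)
# Descriptors such as local derivative patterns can then be computed on the
# nx, ny, nz component maps instead of on the raw depth values.
```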

    Reconnaissance Biométrique par Fusion Multimodale de Visages (Biometric Recognition by Multimodal Face Fusion)

    Biometric systems are considered to be one of the most effective methods of protecting and securing private or public life against all types of theft. Facial recognition is one of the most widely used methods, not because it is the most efficient and reliable, but rather because it is natural, non-intrusive, and relatively well accepted compared to other biometrics such as fingerprint and iris. The development of biometric applications such as facial recognition has recently become important in smart cities. Over the past decades, many techniques, with applications including videoconferencing systems, facial reconstruction, security, etc., have been proposed to recognize a face in a 2D or 3D image. Generally, changes in lighting and variations in pose and facial expression make 2D facial recognition unreliable. 3D models may be able to overcome these constraints, except that most 3D facial recognition methods still treat the human face as a rigid object, which means they are not able to handle facial expressions. In this thesis, we propose a new approach for automatic face verification that encodes the local information of 2D and 3D facial images as a high-order tensor. First, the histograms of two local multiscale descriptors (LPQ and BSIF) are used to characterize both 2D and 3D facial images. Next, a tensor-based facial representation is designed to combine all the features extracted from the 2D and 3D faces. Moreover, to improve the discrimination of the proposed tensor face representation, we use two multilinear subspace methods (MWPCA, and MDA combined with WCCN). The WCCN technique is applied to the face tensors to reduce the effect of intra-class directions through a normalization transform, as well as to improve the discriminating power of MDA. Our experiments were carried out on three of the largest databases, FRGC v2.0, Bosphorus, and CASIA 3D, under different facial expressions, pose variations, and occlusions. The experimental results show the superiority of the proposed approach in terms of verification rate compared to recent state-of-the-art methods.
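    The multilinear projection itself (MWPCA/MDA) is specific to the thesis, but the WCCN step it is combined with is a standard transform: whiten the average within-class covariance of the projected features so that intra-class directions are down-weighted before scoring. The sketch below assumes the features have already been flattened to one vector per sample after the multilinear projection; the regularization constant and the X / y names are illustrative.

```python
import numpy as np

def wccn_projection(X, y):
    """Within-class covariance normalization (WCCN): return a matrix B such that
    projecting features with B whitens the average within-class covariance."""
    classes = np.unique(y)
    d = X.shape[1]
    W = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        W += np.cov(Xc, rowvar=False, bias=True)   # per-class scatter
    W /= len(classes)                              # average within-class covariance
    W += 1e-6 * np.eye(d)                          # regularize for numerical stability
    return np.linalg.cholesky(np.linalg.inv(W))    # B with B @ B.T = inv(W)

# X: projected tensor-face features (one row per sample), y: subject labels
# B = wccn_projection(X, y)
# X_wccn = X @ B        # verification scoring (e.g. cosine similarity) is then done in this space
```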