
    Face Analysis Using Row and Correlation Based Local Directional Pattern

    Face analysis, which includes face recognition and facial expression recognition, has been attempted by many researchers, and effective solutions have been proposed. The problem remains active and challenging due to an increase in the complexity of the task caused by poor lighting, face occlusion, low-resolution images, etc. Local pattern descriptor methods were introduced to overcome these critical issues and improve the recognition rate. These methods extract discriminant information from the local features of the face image for recognition. In this paper, two local-descriptor-based methods, namely the row-based local directional pattern and the correlation-based local directional pattern, are proposed by extending an existing descriptor, the local directional pattern (LDP). Further, the two feature vectors obtained by these methods are concatenated to form a hybrid descriptor. Experimentation has been carried out on benchmark databases, and the results show that the proposed hybrid descriptor outperforms the other descriptors in face analysis.
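
    As a point of reference, the base LDP encoding that both proposed variants extend can be sketched as below. This is a minimal illustration, assuming the standard eight Kirsch compass masks and k = 3 prominent directions (a common choice); it does not reproduce the row-based or correlation-based extensions themselves.

```python
import numpy as np
from scipy.ndimage import convolve

# The eight Kirsch compass masks used by the base LDP descriptor.
KIRSCH = [np.array(m) for m in (
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # east
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # north-east
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # north
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # north-west
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # west
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # south-west
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # south
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # south-east
)]

def ldp_code(gray: np.ndarray, k: int = 3) -> np.ndarray:
    """Base LDP: set one bit for each of the k strongest directional responses."""
    responses = np.stack([np.abs(convolve(gray.astype(float), m)) for m in KIRSCH])
    order = np.argsort(responses, axis=0)        # ascending rank of the 8 directions
    code = np.zeros(gray.shape, dtype=np.uint8)
    for rank in range(1, k + 1):                 # take the k most prominent directions
        code |= (1 << order[-rank]).astype(np.uint8)
    return code                                  # histograms of codes form the feature vector
```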

    3D FACE RECOGNITION USING LOCAL FEATURE BASED METHODS

    Face recognition has attracted many researchers' attention compared to other biometrics due to its non-intrusive and friendly nature. Although several methods for 2D face recognition have been proposed so far, there are still some challenges related to the 2D face, including illumination, pose variation, and facial expression. In the last few decades, the 3D face research area has become more interesting since shape and geometry information are used to handle the challenges of 2D faces. Existing algorithms for face recognition are divided into three categories: holistic feature-based, local feature-based, and hybrid methods. According to the literature, local features have shown better performance than holistic feature-based methods under expression and occlusion challenges. In this dissertation, local feature-based methods for 3D face recognition have been studied and surveyed. In the survey, local methods are classified into three broad categories: keypoint-based, curve-based, and local surface-based methods. Inspired by keypoint-based methods, which are effective at handling partial occlusion, a structural context descriptor on pyramidal shape maps and the texture image is proposed in a multimodal scheme. Score-level fusion is used to combine the keypoints' matching scores in both the texture and shape modalities. The survey shows that local surface-based methods are efficient at handling facial expression. Accordingly, a local derivative pattern is introduced in this work to extract distinct features from the depth map. In addition, the local derivative pattern is applied to surface normals. Most 3D face recognition algorithms focus on utilizing the depth information to detect and extract features. Compared to depth maps, the surface normal of each point determines the facial surface orientation, which provides an efficient facial surface representation for extracting distinct features for the recognition task. An Extreme Learning Machine (ELM)-based auto-encoder is used to make the feature space more discriminative. Expression- and occlusion-robust analysis using the information from the normal maps is investigated by dividing the facial region into patches. A novel hybrid classifier is proposed to combine a Sparse Representation Classifier (SRC) and an ELM classifier in a weighted scheme. The proposed algorithms have been evaluated on four widely used 3D face databases: FRGC, Bosphorus, BU-3DFE, and 3D-TEC. The experimental results illustrate the effectiveness of the proposed approaches. The main contribution of this work lies in the identification and analysis of effective local features and a classification method for improving 3D face recognition performance.
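
    As a concrete illustration of the score-level fusion step, the sketch below combines shape and texture matching scores with a weighted sum after min-max normalization; the normalization choice and the equal default weight are assumptions, not details taken from the dissertation.

```python
import numpy as np

def min_max_norm(scores: np.ndarray) -> np.ndarray:
    """Map one matcher's raw scores onto [0, 1] so the two modalities are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def fuse_scores(shape_scores, texture_scores, w_shape: float = 0.5) -> np.ndarray:
    """Weighted-sum score-level fusion of the shape and texture matchers."""
    s = min_max_norm(np.asarray(shape_scores, dtype=float))
    t = min_max_norm(np.asarray(texture_scores, dtype=float))
    return w_shape * s + (1.0 - w_shape) * t     # higher fused score = better match
```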

    Hybrid Approach for Face Recognition Using DWT and LBP

    Authentication of individuals plays a vital role in checking for intrusions in any online digital system. The most commonly and securely used techniques are biometric fingerprint readers and face recognition. Face recognition is the process of identifying individuals by their facial images, as two faces rarely match exactly. A face recognition technique compares a test image with a number of trained images stored in a database and then concludes whether the test image matches any of the trained images. In this paper we discuss a hybrid of two techniques, the local binary pattern (LBP) and the Discrete Wavelet Transform (DWT), to extract features from face images, which are fused by applying principal component analysis and stored in a database; the same process is applied to the test images. A K-nearest neighbor (KNN) classifier is then used to classify the images and measure the accuracy. Our proposed model achieved 95% accuracy. The aim of this paper is to develop a robust method for face recognition and the classification of individuals that improves the recognition rate and the efficiency of the system at lower complexity.
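
    A minimal sketch of the described pipeline follows, assuming a single-level Haar DWT, a uniform 8-neighbour LBP, and library defaults elsewhere; pywt, scikit-image, and scikit-learn stand in for whatever implementations the authors used.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def dwt_lbp_features(gray: np.ndarray) -> np.ndarray:
    """Concatenate DWT approximation coefficients with a uniform-LBP histogram."""
    cA, _ = pywt.dwt2(gray.astype(float), "haar")            # low-frequency sub-band
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([cA.ravel(), hist])

def train(train_images, labels, n_components=50, k=1):
    """PCA fuses/compresses the hybrid features; KNN performs the classification."""
    X = np.stack([dwt_lbp_features(im) for im in train_images])
    pca = PCA(n_components=n_components).fit(X)
    knn = KNeighborsClassifier(n_neighbors=k).fit(pca.transform(X), labels)
    return pca, knn   # at test time: knn.predict(pca.transform(features))
```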

    Face recognition using multiple features in different color spaces

    Face recognition, as a particular problem of pattern recognition, has been attracting substantial attention from researchers in computer vision, pattern recognition, and machine learning. The recent Face Recognition Grand Challenge (FRGC) program reveals that uncontrolled illumination conditions pose grand challenges to face recognition performance. Most of the existing face recognition methods use gray-scale face images, which have been shown insufficient to tackle these challenges. To overcome this challenging problem in face recognition, this dissertation applies multiple features derived from color images instead of intensity images only. First, this dissertation presents two face recognition methods, which operate in different color spaces, using frequency features by means of the Discrete Fourier Transform (DFT) and spatial features by means of Local Binary Patterns (LBP), respectively. The DFT frequency domain consists of the real part, the imaginary part, the magnitude, and the phase components, which provide different interpretations of the input face images. The advantage of LBP in face recognition is attributed to its robustness to monotonic intensity-level transformations, as well as its operation in image spaces of various scales. By fusing the frequency components or the multi-resolution LBP histograms, complementary feature sets can be generated to enhance the capability of facial texture description. This dissertation thus uses the fused DFT and LBP features in two hybrid color spaces, the RIQ and the VIQ color spaces, respectively, to improve face recognition performance. Second, a method that extracts multiple features in the CID color space is presented for face recognition. As the different color-component images in the CID color space display different characteristics, three different image encoding methods, namely the patch-based Gabor image representation, the multi-resolution LBP feature fusion, and the DCT-based multiple face encodings, are presented to effectively extract features from the component images to enhance pattern recognition performance. To further improve classification performance, the similarity scores from the three color-component images are fused for the final decision making. Finally, a novel image representation is also discussed in this dissertation. Unlike a traditional intensity image, which is directly derived from a linear combination of the R, G, and B color components, the novel image representation, adapted to class separability, is generated through a PCA plus FLD learning framework from the hybrid color space instead of the RGB color space. Based upon the novel image representation, a multiple feature fusion method is proposed to address the problem of face recognition under severe illumination conditions. The aforementioned methods have been evaluated on two large-scale databases, namely the Face Recognition Grand Challenge (FRGC) version 2 database and the FERET face database. Experimental results have shown that the proposed methods improve face recognition performance upon the traditional methods using intensity images by large margins and outperform some state-of-the-art methods.
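
    For the frequency features, extracting the four DFT components from one color-component image can be sketched as below; retaining only a central low-frequency block (keep = 16 here) is an illustrative assumption, and the hybrid color-space conversions themselves are not reproduced.

```python
import numpy as np

def dft_features(channel: np.ndarray, keep: int = 16) -> np.ndarray:
    """Real part, imaginary part, magnitude, and phase of the low-frequency DFT block."""
    F = np.fft.fftshift(np.fft.fft2(channel.astype(float)))   # DC term moved to the center
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    half = keep // 2
    block = F[cy - half:cy + half, cx - half:cx + half]        # central low frequencies
    return np.concatenate([block.real.ravel(), block.imag.ravel(),
                           np.abs(block).ravel(), np.angle(block).ravel()])
```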

    Technique for recognizing faces using a hybrid of moments and a local binary pattern histogram

    The face recognition process is widely studied, and researchers have made great achievements, but there are still many challenges facing the application of face detection and recognition systems. This research contributes to overcoming some of those challenges and closing the gaps in previous systems for identifying and recognizing the faces of individuals in images. The research deals with increasing the precision of recognition using a hybrid method of moments and local binary patterns (LBP). The moment technique computes several critical parameters, which are used as descriptors and classifiers to recognize faces in images. The LBP technique has three phases: representation of a face, feature extraction, and classification. The face in the image is subdivided into variable-size blocks to compute their histograms and discover their features. Fidelity criteria were used to estimate and evaluate the findings. The proposed technique used the standard Olivetti Research Laboratory dataset in the training and recognition phases of the proposed system. The research experiments showed that adopting a hybrid technique (moments and LBP) recognizes the faces in images and provides a suitable representation for identifying those faces. The proposed technique increases accuracy, robustness, and efficiency. The results show an enhancement in recognition precision of 3%, reaching 98.78%.
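
    Because the abstract does not name the specific moment parameters, the sketch below uses the seven Hu moment invariants (via OpenCV) as a stand-in, concatenated with block-wise uniform-LBP histograms over a fixed 4x4 grid rather than the paper's variable-size blocks.

```python
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

def hybrid_descriptor(gray: np.ndarray, grid=(4, 4)) -> np.ndarray:
    """Hu moment invariants concatenated with block-wise LBP histograms."""
    hu = cv2.HuMoments(cv2.moments(gray)).ravel()            # 7 moment invariants
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hists = []
    for band in np.array_split(lbp, grid[0], axis=0):        # split into grid blocks
        for block in np.array_split(band, grid[1], axis=1):
            h, _ = np.histogram(block, bins=10, range=(0, 10), density=True)
            hists.append(h)
    return np.concatenate([hu] + hists)                      # the hybrid feature vector
```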

    Hybrid 2D and 3D face verification

    Face verification is a challenging pattern recognition problem. The face is a biometric that we, as humans, know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data. But the underlying structure of the face, or three-dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years, methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance or angular-based similarity measures. However, this does not provide the most complete description of the features being compared, as the distances describe at best the covariance of the data, or the second-order statistics (for instance, Mahalanobis-based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable, and so this research examines two methods for overcoming this data limitation: 1. the use of holistic difference vectors of the face, and 2. dividing the 3D face into Free-Parts. The permutations of the holistic difference vectors are formed so that more observations are obtained from a set of holistic features. On the other hand, by dividing the face into parts and considering each part separately, many observations are obtained from each face image; this approach is referred to as the Free-Parts approach. The extra observations from both these techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling, respectively. It is shown that the feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques, classifier score fusion is then examined. This thesis also examines methods for performing classifier score fusion. Classifier score fusion attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms (multi-algorithm fusion) to represent the same face data, for instance the 2D face data, or by capturing the face data with different sensors (multi-modal fusion), for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached as combining verification systems that use holistic features and local features (Free-Parts), and multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the data being fused is collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms, a consistent framework for fusion was developed.
The consistent fusion framework, developed from the multi-algorithm and multi-modal experiments, is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
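
    Feature distribution modelling of the Free-Parts observations can be sketched with one Gaussian mixture model per enrolled client, as below; the mixture order, the diagonal covariances, and the plain threshold test are illustrative assumptions rather than the thesis's exact formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_client_model(part_features: np.ndarray, n_components: int = 8) -> GaussianMixture:
    """Fit a GMM to the many per-part feature observations of one enrolled face."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag").fit(part_features)

def verify(model: GaussianMixture, probe_parts: np.ndarray, threshold: float) -> bool:
    """Accept the claimed identity if the mean per-part log-likelihood is high enough."""
    return model.score(probe_parts) > threshold
```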

    Neighborhood Defined Feature Selection Strategy for Improved Face Recognition in Different Sensor Modalities

    A novel feature selection strategy for improved face recognition in images with variations due to illumination conditions, facial expressions, and partial occlusions is presented in this dissertation. A hybrid face recognition system that uses feature maps of phase congruency and modular kernel spaces is developed. Phase congruency provides a measure that is independent of the overall magnitude of a signal, making it invariant to variations in image illumination and contrast. A novel modular kernel spaces approach is developed and implemented on the phase congruency feature maps. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher-dimensional spaces using kernel methods. The unique modularization procedure developed in this research takes into consideration that facial variations in a real-world scenario are confined to local regions. The additional pixel dependencies that are considered based on their importance help in providing additional information for classification. This procedure also helps in robust localization of the variations, further improving classification accuracy. The effectiveness of the new feature selection strategy has been demonstrated by employing it in two specific applications: face authentication with low-resolution cameras and face recognition using multiple sensors (visible and infrared). The face authentication system uses low-quality images captured by a web camera. The optical sensor of the web camera is very sensitive to environmental illumination variations. It is observed that the feature selection policy overcomes the facial and environmental variations. A methodology based on multiple training images and clustering is also incorporated to overcome the additional challenges of computational efficiency and the subject's non-involvement. A multi-sensor image fusion based face recognition methodology that uses the proposed feature selection technique is presented in this dissertation. Research studies have indicated that complementary information from different sensors helps in improving the recognition accuracy compared to individual modalities. A decision-level fusion methodology is also developed, which provides better performance than the individual as well as data-level fusion modalities. The new decision-level fusion technique is also robust to registration discrepancies, which is a very important factor in operational scenarios. Research work is progressing to use the new face recognition technique on multi-view images by employing independent systems for separate views and integrating the results with an appropriate voting procedure.
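
    A minimal sketch of decision-level fusion across the visible and infrared sensors is given below, assuming weighted voting over per-sensor identity decisions; the weights are illustrative, as the abstract does not specify the exact fusion rule.

```python
from collections import Counter

def decision_level_fusion(visible_ids, infrared_ids, weights=(1.0, 1.0)):
    """Fuse per-sensor identity decisions by weighted voting, one probe at a time."""
    fused = []
    for vis, ir in zip(visible_ids, infrared_ids):
        votes = Counter()
        votes[vis] += weights[0]      # vote from the visible-spectrum classifier
        votes[ir] += weights[1]       # vote from the infrared classifier
        fused.append(votes.most_common(1)[0][0])
    return fused
```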

    Face Recognition with Multi-stage Matching Algorithms

    For every face recognition method, the primary goal is to achieve higher recognition accuracy at lower computational cost. However, as the gallery size increases, especially when one probe image corresponds to only one training image, face recognition becomes more and more challenging. First, a larger gallery requires more computation and memory. Meanwhile, the degradation of recognition accuracy caused by large gallery sizes becomes an even more significant problem to be solved. A coarse-grained parallel algorithm that evenly divides the training images and probe images among multiple processors is proposed to deal with the large computational costs and huge memory usage of the feature-based Non-Graph Matching (NGM) method. First, each processor completes its own training workload and stores the extracted feature information. Then, the processors simultaneously carry out the matching process for their own probe images by exchanging their stored feature information with one another. Finally, one processor collects the recognition results from the others. Due to the well-balanced workload, the speedup increases with the number of processors, and efficiency is therefore maintained. Moreover, the memory usage on each processor also decreases markedly as the number of processors increases. In sum, the parallel algorithm reduces both the running time and the memory usage per processor. To solve the recognition degradation problem, a set of multi-stage matching algorithms that determine the recognition result step by step is proposed. Each step picks a small proportion of the most similar candidates for the next step and removes the others. This picking and removing repeats until the number of remaining candidates is small enough to produce the final recognition result. Three multi-stage matching algorithms, namely n-ary elimination, divide and conquer, and two-stage hybrid, are introduced into the matching process of traditional face recognition methods, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Non-Graph Matching (NGM). N-ary elimination accomplishes the multi-stage matching from the global perspective by ranking the similarities and picking the best candidates. Divide and conquer implements the multi-stage matching from the local perspective by dividing the candidates into groups and selecting the best one from each group. The two-stage hybrid uses a holistic method to choose a small number of candidates and then a feature-based method to determine the final recognition result among them. From the experimental results, three conclusions can be drawn. First, higher recognition accuracy can be achieved with the multi-stage matching algorithms. Second, the larger the gallery, the greater the accuracy improvement brought by the multi-stage matching algorithms. Finally, the multi-stage matching algorithms incur little extra computational cost.
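
    The step-by-step elimination shared by the three algorithms can be sketched as below; the per-stage scoring functions and the fraction of candidates kept at each stage are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def multi_stage_match(probe, gallery, stage_scorers, keep_frac=0.2):
    """Each stage re-scores the surviving candidates with a (typically more
    expensive) matcher and keeps only the best-scoring fraction; the last
    stage returns the index of the final match in the gallery."""
    candidates = np.arange(len(gallery))
    for i, score in enumerate(stage_scorers):
        sims = np.array([score(probe, gallery[j]) for j in candidates])
        order = np.argsort(sims)[::-1]                     # descending similarity
        if i < len(stage_scorers) - 1:
            keep = max(1, int(len(candidates) * keep_frac))
            candidates = candidates[order[:keep]]          # shortlist for the next stage
        else:
            return candidates[order[0]]                    # final recognition result
```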

    A facial depression recognition method based on hybrid multi-head cross attention network

    Introduction: Deep learning methods based on convolutional neural networks (CNNs) have demonstrated impressive performance in depression analysis. Nevertheless, some critical challenges remain for these methods: (1) because of their spatial locality, it is still difficult for CNNs to learn long-range inductive biases in the low-level feature extraction of different facial regions; (2) it is difficult for a model with only a single attention head to concentrate on various parts of the face simultaneously, making it less sensitive to other important facial regions associated with depression. In facial depression recognition, many of the clues come from a few areas of the face simultaneously, e.g., the mouth and eyes.
    Methods: To address these issues, we present an end-to-end integrated framework called the Hybrid Multi-head Cross Attention Network (HMHN), which comprises two stages. The first stage consists of the Grid-Wise Attention block (GWA) and the Deep Feature Fusion block (DFF) for low-level visual depression feature learning. In the second stage, we obtain the global representation by encoding high-order interactions among local features with the Multi-head Cross Attention block (MAB) and the Attention Fusion block (AFB).
    Results: We experimented on the AVEC2013 and AVEC2014 depression datasets. The results on AVEC 2013 (RMSE = 7.38, MAE = 6.05) and AVEC 2014 (RMSE = 7.60, MAE = 6.01) demonstrate the efficacy of our method, which outperforms most state-of-the-art video-based depression recognition approaches.
    Discussion: We proposed a deep learning hybrid model for depression recognition that captures the higher-order interactions among the depression features of multiple facial regions, which can effectively reduce the recognition error and shows great potential for clinical experiments.
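
    The multi-head cross-attention at the core of the second stage can be sketched in PyTorch as below; the feature dimension, the single global query token, and the residual-plus-norm layout are illustrative assumptions, not the published MAB architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """A global query token attends to the local facial-region features."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, global_token, local_feats):
        # global_token: (B, 1, dim); local_feats: (B, N, dim) for N facial regions
        out, _ = self.attn(query=global_token, key=local_feats, value=local_feats)
        return self.norm(global_token + out)   # residual connection + layer norm
```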