
    Human face recognition under degraded conditions

    Comparative studies of state-of-the-art feature extraction and classification techniques for human face recognition under the low-resolution problem are presented in this work, and the effect of applying resolution enhancement using interpolation techniques is evaluated. A gradient-based, illumination-insensitive preprocessing technique is proposed that uses the ratio between the gradient magnitude and the current intensity level of the image, which is robust to severe lighting effects. A combination of multi-scale Weber analysis and enhanced DD-DT-CWT is also demonstrated to have noticeable stability under illumination variation. Moreover, applying illumination-insensitive image descriptors to the preprocessed image yields further robustness against lighting effects. The proposed block-based face analysis decreases the effect of occlusion by assigning different weights to the image subblocks, according to their discrimination power, in score- or decision-level fusion. In addition, a hierarchical structure of global and block-based techniques is proposed to improve recognition accuracy when several image degradations occur together. The complementary performance of the global and local techniques leads to a considerable improvement in face recognition accuracy. The effectiveness of the proposed algorithms is evaluated on the Extended Yale B, AR, CMU Multi-PIE, LFW, FERET and FRGC databases, which contain large numbers of images under different degradation conditions. The experimental results show improved performance under poor illumination, varying facial expression and occlusion.
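    The gradient-magnitude-to-intensity ratio described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation; the epsilon guard against division by zero is our own choice. The key property is that a (locally) multiplicative illumination change scales numerator and denominator alike, so the descriptor barely moves.

```python
import numpy as np

def illumination_insensitive(image, eps=1e-6):
    """Gradient-magnitude-to-intensity ratio map (sketch).

    Scaling the image by an illumination factor scales both the
    gradient magnitude and the intensity, so the ratio is (nearly)
    unchanged. `eps` avoids division by zero (our assumption).
    """
    image = image.astype(np.float64)
    gy, gx = np.gradient(image)           # intensity gradients
    magnitude = np.hypot(gx, gy)          # gradient magnitude
    return magnitude / (image + eps)      # illumination-insensitive map

# Doubling the lighting leaves the descriptor (nearly) unchanged:
face = np.random.rand(8, 8) + 0.5
d1 = illumination_insensitive(face)
d2 = illumination_insensitive(2.0 * face)
print(np.allclose(d1, d2, atol=1e-5))     # True
```

    The same invariance fails for the raw image, of course: `face` and `2.0 * face` differ everywhere, which is exactly why the paper works on the ratio map instead.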

    FRDF: face recognition using fusion of DTCWT and FFT features

    Face recognition is a physiological biometric trait widely used for personal authentication. In this paper we propose a technique for face recognition using fusion of Dual Tree Complex Wavelet Transform (DTCWT) and Fast Fourier Transform (FFT) features. The five-level DTCWT and FFT are applied to the preprocessed face image of size 128×512. The five-level DTCWT features are arranged in a single column vector of size 384 × 1. The absolute values of the FFT features are computed and arranged in a column vector of size 65,536 × 1. The DTCWT features are fused with the dominant absolute FFT values using arithmetic addition to generate the final feature set. The test image features are compared with the database features using Euclidean distance to identify a person. Face recognition is performed on databases such as ORL, JAFFE, L-SPACEK and CMU-PIE, which have different illumination and pose conditions. It is observed that the performance parameters False Acceptance Rate (FAR), False Rejection Rate (FRR) and True Success Rate (TSR) of the proposed FRDF method (Face Recognition using Fusion of DTCWT and FFT) are better than those of existing state-of-the-art methods.
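    The fusion-and-matching pipeline can be sketched as follows. This is a toy stand-in, not the paper's system: the images are small random arrays rather than 128×512 faces, and random vectors stand in for the 384 × 1 five-level DTCWT coefficients (a real implementation would use a DTCWT library). What it does show faithfully is the abstract's recipe: dominant absolute FFT values added element-wise to the wavelet features, then nearest-neighbour matching by Euclidean distance.

```python
import numpy as np

rng = np.random.default_rng(0)

def fft_features(image):
    """Absolute values of the 2-D FFT, flattened into a vector."""
    return np.abs(np.fft.fft2(image)).ravel()

def fuse(wavelet_features, fft_vec):
    """Arithmetic addition of the wavelet features with the dominant
    (largest) absolute FFT values, one per wavelet coefficient."""
    k = wavelet_features.size
    dominant = np.sort(fft_vec)[::-1][:k]   # k largest FFT magnitudes
    return wavelet_features + dominant

def match(probe, gallery):
    """Index of the gallery feature vector closest in Euclidean distance."""
    dists = np.linalg.norm(gallery - probe, axis=1)
    return int(np.argmin(dists))

# Toy gallery of three "faces" (hypothetical data).
images = rng.random((3, 32, 32))
wavelets = rng.random((3, 384))             # stand-in for 384x1 DTCWT vectors
gallery = np.stack([fuse(w, fft_features(im))
                    for w, im in zip(wavelets, images)])
probe = fuse(wavelets[1], fft_features(images[1]))
print(match(probe, gallery))                # 1 — the probe matches entry 1
```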

    Face Recognition based on Oriented Complex Wavelets and FFT

    The face is a physiological biometric trait used in biometric systems. In this paper, face recognition using oriented complex wavelets and the Fast Fourier Transform (FROCF) is proposed. The five-level Dual Tree Complex Wavelet Transform (DTCWT) is applied to face images to obtain shift-invariant, directional features along the ±15°, ±45° and ±75° angular directions. The different pose, illumination and expression variations of the face images are represented in the frequency domain using the Fast Fourier Transform (FFT), resulting in FFT features. The DTCWT and FFT features are fused by arithmetic addition to obtain the final features. A Euclidean distance classifier is applied to the features to distinguish genuine from impostor faces. The performance of the proposed method is tested on ORL, JAFFE, L-SPACEK and CMU-PIE, which have different illumination and pose conditions. The results show that the recognition rate of the proposed FROCF is better than that of existing recognition methods.
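    Distinguishing genuine from impostor faces with a distance classifier comes down to a threshold on the Euclidean distance, and the error rates mentioned across these papers (FAR, FRR) follow directly from that threshold. A minimal sketch with hypothetical score sets:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """Error rates for a distance threshold: a pair is accepted
    (declared the same person) when its distance is below the threshold.

    FAR: fraction of impostor pairs wrongly accepted.
    FRR: fraction of genuine pairs wrongly rejected.
    """
    far = float(np.mean(impostor < threshold))
    frr = float(np.mean(genuine >= threshold))
    return far, frr

# Hypothetical distances: genuine pairs close, impostor pairs far apart.
genuine = np.array([0.2, 0.3, 0.25, 0.4])
impostor = np.array([0.8, 0.9, 0.7, 0.6])
far, frr = far_frr(genuine, impostor, threshold=0.5)
print(far, frr)   # 0.0 0.0 at this well-separated threshold
```

    Sweeping the threshold trades FAR against FRR; raising it to 0.65 here would start accepting the closest impostor pair.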

    OPTIMIZED BIOMETRIC SYSTEM BASED ON COMBINATION OF FACE IMAGES AND LOG TRANSFORMATION

    Biometrics are used to identify a person effectively. In this paper, we propose an optimized face recognition system based on log transformation and the combination of face image feature vectors. The face images are preprocessed using a Gaussian filter to enhance image quality. The log transformation is applied to the enhanced image to generate features. The feature vectors of several images of a single person are combined into a single vector using arithmetic averaging. The Euclidean distance (ED) is used to compare the test image feature vector with the database feature vectors to identify a person. Experiments show that the performance of the proposed algorithm is better than that of existing algorithms.
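    The enrolment pipeline above — Gaussian smoothing, log transform, then averaging several samples into one template — can be sketched as below. The filter width and the use of log1p (log(1 + x), which tolerates zero intensities) are our assumptions; the abstract only names a Gaussian filter and a log transformation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def log_features(image, sigma=1.0):
    """Gaussian smoothing followed by a log transform (sketch)."""
    smoothed = gaussian_filter(image.astype(np.float64), sigma=sigma)
    return np.log1p(smoothed).ravel()

def enrol(images):
    """Average the feature vectors of several images of one person
    into a single template, as the abstract describes."""
    return np.mean([log_features(im) for im in images], axis=0)

rng = np.random.default_rng(1)
person_a = rng.random((5, 16, 16))          # five samples of subject "a"
person_b = rng.random((5, 16, 16)) + 0.5    # a brighter subject "b"
templates = {"a": enrol(person_a), "b": enrol(person_b)}

# Identify a fresh sample of subject "a" by Euclidean distance:
probe = log_features(person_a[2])
best = min(templates, key=lambda k: np.linalg.norm(probe - templates[k]))
print(best)   # a
```

    Averaging several enrolment images suppresses per-image noise, so a single template per person suffices at match time.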

    Design and Analysis of A New Illumination Invariant Human Face Recognition System

    In this dissertation we propose the design and analysis of a new illumination-invariant face recognition system. We show that multiscale analysis of the facial structure and features of face images leads to superior recognition rates for images under varying illumination. We assume that an image I(x, y) is a black box consisting of a combination of illumination and reflectance. A new approximation is proposed to enhance the illumination-removal phase. As illumination resides in the low-frequency part of images, a high-performance multiresolution transformation is employed to accurately separate the frequency contents of the input images. The procedure is followed by a fine-tuning process. After extracting a mask, a feature vector is formed; principal component analysis (PCA) is used for dimensionality reduction, followed by an extreme learning machine (ELM) as the classifier. We then analyze the effect of the frequency selectivity of the subbands of the transformation on the performance of the proposed face recognition system. In fact, we first propose a method to tune the characteristics of a multiresolution transformation, and then analyze how these specifications affect the recognition rate. In addition, we show that the proposed face recognition system can be further improved in terms of computational time and accuracy. The motivation for this improvement is that although illumination mostly lies in the low-frequency part of images, these low-frequency components may have a low- or high-resonance nature. Therefore, for the first time, we introduce a resonance-based analysis of face images rather than the traditional frequency-domain approaches. We found that energy selectivity of the subbands of the resonance-based decomposition can lead to superior results with less computational complexity. The method is free of any prior information about the face shape; it is systematic and can be applied separately to each image.
Several experiments are performed on the well-known Yale B, Extended Yale B, CMU-PIE, FERET, AT&T and LFW databases. Illustrative examples are given, and the results confirm the effectiveness of the method compared to current results in the literature.
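    The PCA-then-ELM back end is standard enough to sketch in numpy. This is not the dissertation's system (the multiresolution/resonance front end is omitted and the data are toy Gaussian blobs); it shows only the classification stage: project onto principal components, then train an ELM, whose defining trick is random fixed hidden weights with the output weights solved in closed form via the pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_fit(X, n_components):
    """PCA via SVD on mean-centred data; returns (mean, components)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def elm_train(Z, Y, n_hidden):
    """Extreme Learning Machine: random input weights, analytic
    output weights via the Moore-Penrose pseudoinverse."""
    W = rng.normal(size=(Z.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(Z @ W + b)            # random hidden layer
    beta = np.linalg.pinv(H) @ Y      # closed-form output weights
    return W, b, beta

def elm_predict(Z, W, b, beta):
    return np.tanh(Z @ W + b) @ beta

# Toy two-class problem standing in for face feature vectors.
X = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(3, 1, (20, 50))])
y = np.array([0] * 20 + [1] * 20)
Y = np.eye(2)[y]                      # one-hot targets

mean, comps = pca_fit(X, n_components=10)
Z = (X - mean) @ comps.T              # reduced features
W, b, beta = elm_train(Z, Y, n_hidden=40)
pred = elm_predict(Z, W, b, beta).argmax(axis=1)
print((pred == y).mean())             # training accuracy on the toy data
```

    Because the output weights come from a single pseudoinverse rather than iterative backpropagation, ELM training is fast, which is part of why the dissertation pairs it with PCA for the classification stage.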

    3D FACE RECOGNITION USING LOCAL FEATURE BASED METHODS

    Face recognition has attracted more researchers' attention than other biometrics due to its non-intrusive and friendly nature. Although several methods for 2D face recognition have been proposed, challenges remain, including illumination, pose variation and facial expression. In the last few decades the 3D face research area has become more attractive, since shape and geometry information can be used to handle the challenges of 2D faces. Existing face recognition algorithms fall into three categories: holistic feature-based, local feature-based and hybrid methods. According to the literature, local features perform better than holistic features under expression and occlusion challenges. In this dissertation, local feature-based methods for 3D face recognition are studied and surveyed. In the survey, local methods are classified into three broad categories: keypoint-based, curve-based and local surface-based methods. Inspired by keypoint-based methods, which handle partial occlusion effectively, a structural context descriptor on pyramidal shape maps and the texture image is proposed in a multimodal scheme. Score-level fusion is used to combine the keypoint matching scores of the texture and shape modalities. The survey shows that local surface-based methods handle facial expression efficiently. Accordingly, a local derivative pattern is introduced in this work to extract distinct features from the depth map; the local derivative pattern is also applied to surface normals. Most 3D face recognition algorithms focus on the depth information to detect and extract features. Compared to depth maps, the surface normal at each point determines the facial surface orientation, which provides an efficient surface representation from which to extract distinct features for the recognition task.
An Extreme Learning Machine (ELM)-based auto-encoder is used to make the feature space more discriminative. Expression- and occlusion-robust analysis using the information from the normal maps is investigated by dividing the facial region into patches. A novel hybrid classifier is proposed that combines a Sparse Representation Classifier (SRC) and an ELM classifier in a weighted scheme. The proposed algorithms have been evaluated on four widely used 3D face databases: FRGC, Bosphorus, BU-3DFE and 3D-TEC. The experimental results illustrate the effectiveness of the proposed approaches. The main contribution of this work lies in the identification and analysis of effective local features and a classification method for improving 3D face recognition performance.
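    The step from a depth map to a surface-normal map is straightforward to illustrate. For a surface z = f(x, y), an unnormalised normal is (-∂z/∂x, -∂z/∂y, 1); normalising gives the per-pixel orientation field the abstract describes as a richer representation than raw depth. The sketch below (our own minimal version, using finite differences) verifies this on a tilted plane, whose normal is constant.

```python
import numpy as np

def surface_normals(depth):
    """Per-pixel unit surface normals from a depth map (sketch).

    Uses finite-difference gradients; for z = f(x, y) the normal
    direction is (-dz/dx, -dz/dy, 1), then normalised to unit length.
    """
    dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
    normals = np.dstack([-dz_dx, -dz_dy,
                         np.ones_like(depth, dtype=np.float64)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals

# A plane z = 0.5 * x has constant normals along (-0.5, 0, 1):
x = np.arange(8, dtype=np.float64)
depth = np.tile(0.5 * x, (8, 1))
n = surface_normals(depth)
print(np.round(n[4, 4], 3))   # approx (-0.447, 0, 0.894)
```

    Unlike the depth value itself, the normal is invariant to a constant depth offset, which is one reason orientation maps make robust local features.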

    Feature Extraction and Selection in Automatic Sleep Stage Classification

    Sleep stage classification is vital for diagnosing many sleep-related disorders, and polysomnography (PSG) is an important tool in this regard. The visual process of sleep stage classification is time-consuming, subjective and costly. To improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: preprocessing, feature extraction and classification. In this research work we focus on the feature extraction and selection steps. The main goal of this thesis was to identify a robust and reliable feature set that can lead to efficient classification of sleep stages. To achieve this goal, three types of contributions were introduced: in feature selection, in feature extraction and in feature vector quality enhancement. Several feature ranking and rank aggregation methods were evaluated and compared to find the best feature set. The evaluation results indicate that the choice of feature selection method depends on the system design requirements, such as low computational complexity, high stability or high classification accuracy. In addition to conventional feature ranking methods, novel methods such as the Stacked Sparse AutoEncoder (SSAE) were used in this thesis for dimensionality reduction. In the feature extraction area, new and effective features such as distance-based features were utilized for the first time in sleep stage classification; the results show that these features contribute positively to classification performance. For signal quality enhancement, a lossless EEG artefact removal algorithm was proposed. The proposed adaptive algorithm led to a significant enhancement in the overall classification accuracy.
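    Rank aggregation, one of the ingredients this thesis compares, can be illustrated with a Borda count — one common aggregation scheme, chosen here for simplicity (the thesis evaluates several methods, not necessarily this one). Each ranker orders the features from best to worst; a feature earns points inversely proportional to its position, and the totals define the aggregate ranking.

```python
import numpy as np

def borda_aggregate(rankings):
    """Borda-count rank aggregation (sketch).

    Each ranking lists feature indices from best to worst; a feature
    at position p in a ranking of n features earns n - p points.
    Features are re-ranked by total points, best first.
    """
    n = len(rankings[0])
    scores = np.zeros(n)
    for ranking in rankings:
        for pos, feat in enumerate(ranking):
            scores[feat] += n - pos
    return np.argsort(-scores).tolist()   # best feature first

# Three hypothetical rankers ordering five candidate features:
r1 = [0, 1, 2, 3, 4]
r2 = [1, 0, 2, 4, 3]
r3 = [0, 2, 1, 3, 4]
print(borda_aggregate([r1, r2, r3]))      # [0, 1, 2, 3, 4]
```

    Feature 0 wins two of the three first places and so tops the aggregate list; aggregation smooths out the disagreement between individual rankers, which is exactly why the thesis compares such schemes for building a stable feature set.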