
    Robust Face Representation and Recognition Under Low Resolution and Difficult Lighting Conditions

    This dissertation focuses on different aspects of face image analysis for accurate face recognition under low resolution and poor lighting conditions. A novel resolution enhancement technique is proposed for enhancing a low-resolution face image into a high-resolution image for better visualization and improved feature extraction, especially in a video surveillance environment. This method performs kernel regression and component feature learning in a local neighborhood of the face images. It uses a directional Fourier phase feature component to adaptively learn the regression kernel based on local covariance to estimate the high-resolution image. For each patch in the neighborhood, four directional variances are estimated to adapt the interpolated pixels. A Modified Local Binary Pattern (MLBP) methodology for feature extraction is proposed to obtain robust face recognition under varying lighting conditions. The original LBP operator compares the pixels in a local neighborhood with the center pixel and converts the resultant binary string to an 8-bit integer value, so it is less effective under difficult lighting conditions where the variation between pixels is negligible. The proposed MLBP uses a two-stage encoding procedure that is more robust at detecting this variation in a local patch. A novel dimensionality reduction technique called Marginality Preserving Embedding (MPE) is also proposed for enhancing face recognition accuracy. Unlike Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which project data in a global sense, MPE seeks a local structure in the manifold. This is similar to other subspace learning techniques, but the difference from other manifold learning methods is that MPE preserves marginality in local reconstruction. Hence it provides a better representation in low-dimensional space and achieves lower error rates in face recognition. Two new concepts for robust face recognition are also presented in this dissertation.
In the first approach, a neural network is used to train the system, where input vectors are created by measuring the distance from each input to its class mean. In the second approach, half-face symmetry is used, exploiting the fact that face images may contain varying expressions (open/closed eyes, open/closed mouth, etc.): the top half and bottom half are classified separately and the two results are finally fused. Experiments on several standard face datasets showed improved results for all of the proposed methodologies. Research is progressing toward a unified approach to extracting features suitable for accurate face recognition in long-range video sequences in complex environments.
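For reference, the original LBP encoding that MLBP extends can be sketched in a few lines. This is a minimal illustration of the standard 3x3 operator only, not the dissertation's two-stage MLBP; the function name and neighbour ordering are arbitrary choices:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: compare each pixel's 8 neighbours with the
    centre pixel and pack the resulting bits into an 8-bit code."""
    img = np.asarray(img, dtype=np.int32)
    # neighbour offsets in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes
```

As the abstract notes, on a nearly uniform patch every comparison gives the same bit, which is exactly the failure mode the two-stage MLBP encoding is designed to address.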

    3D Face Recognition Using Local Feature-Based Methods

    Face recognition has attracted many researchers' attention compared to other biometrics due to its non-intrusive and friendly nature. Although several methods for 2D face recognition have been proposed so far, there are still some challenges related to 2D faces, including illumination, pose variation, and facial expression. In the last few decades, the 3D face research area has become more interesting, since shape and geometry information can be used to handle the challenges of 2D faces. Existing algorithms for face recognition are divided into three categories: holistic feature-based, local feature-based, and hybrid methods. According to the literature, local features have shown better performance than holistic feature-based methods under expression and occlusion challenges. In this dissertation, local feature-based methods for 3D face recognition have been studied and surveyed. In the survey, local methods are classified into three broad categories: keypoint-based, curve-based, and local surface-based methods. Inspired by keypoint-based methods, which are effective at handling partial occlusion, a structural context descriptor on pyramidal shape maps and texture images is proposed in a multimodal scheme. Score-level fusion is used to combine the keypoints' matching scores in both the texture and shape modalities. The survey shows that local surface-based methods are efficient at handling facial expression. Accordingly, a local derivative pattern is introduced in this work to extract distinct features from the depth map. In addition, the local derivative pattern is applied to surface normals. Most 3D face recognition algorithms focus on utilizing depth information to detect and extract features. Compared to depth maps, the surface normal at each point determines the facial surface orientation, which provides an efficient facial surface representation for extracting distinct features for the recognition task.
An Extreme Learning Machine (ELM)-based auto-encoder is used to make the feature space more discriminative. Expression- and occlusion-robust analysis using the information from the normal maps is investigated by dividing the facial region into patches. A novel hybrid classifier is proposed to combine a Sparse Representation Classifier (SRC) and an ELM classifier in a weighted scheme. The proposed algorithms have been evaluated on four widely used 3D face databases: FRGC, Bosphorus, BU-3DFE, and 3D-TEC. The experimental results illustrate the effectiveness of the proposed approaches. The main contribution of this work lies in the identification and analysis of effective local features and a classification method for improving 3D face recognition performance.
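Since the surface-normal representation is central to this approach, a generic way to obtain normals from a depth map is a finite-difference gradient scheme. The sketch below is an illustration under simplifying assumptions (central differences, unit pixel spacing and depth scale), not the authors' pipeline:

```python
import numpy as np

def depth_to_normals(depth):
    """Estimate per-pixel surface normals from a depth map using
    central-difference gradients; returns unit normals (nx, ny, nz)."""
    depth = np.asarray(depth, dtype=np.float64)
    dz_dy, dz_dx = np.gradient(depth)          # partial derivatives of z
    # the (unnormalised) normal of the surface z = f(x, y)
    # is (-dz/dx, -dz/dy, 1)
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n
```

A flat depth map yields normals pointing straight at the camera, while an inclined plane tilts them, which is the orientation cue the abstract argues is lost when only raw depth values are used.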

    Hybrid Approach for Face Recognition Using DWT and LBP

    Authentication of individuals plays a vital role in preventing intrusions into any online digital system. The most commonly and securely used techniques are biometric fingerprint readers and face recognition. Face recognition is the process of identifying individuals by their facial images, since no two faces match exactly. A face recognition technique compares a test image with a number of training images stored in a database and then concludes whether the test image matches any of them. In this paper we discuss a hybrid technique that applies two methods, the Local Binary Pattern (LBP) and the Discrete Wavelet Transform (DWT), to extract features from face images, fuses the features by applying Principal Component Analysis (PCA), and stores them in a database; the same process is applied to the test images. A K-nearest neighbor (KNN) classifier is then used to classify the images and measure the accuracy. Our proposed model achieved 95% accuracy. The aim of this paper is to develop a robust method for face recognition and classification of individuals that improves the recognition rate and the efficiency of the system while reducing complexity.
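A heavily simplified version of such a DWT + LBP + KNN pipeline can be sketched as follows. This sketch uses a single-level Haar approximation band, plain feature concatenation instead of PCA-based fusion, and a nearest-neighbour vote; all function names are hypothetical and the details differ from the paper's actual method:

```python
import numpy as np

def haar_ll(img):
    """One-level Haar DWT approximation (LL) band: averages of 2x2 blocks."""
    a = np.asarray(img, dtype=np.float64)
    h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
    a = a[:h, :w]
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0

def lbp_hist(img):
    """Normalised 256-bin histogram of basic 3x3 LBP codes."""
    a = np.asarray(img, dtype=np.int32)
    h, w = a.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = a[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= a[1:-1, 1:-1]).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

def fused_features(img):
    # simple feature-level fusion: concatenate the DWT LL band and LBP histogram
    return np.concatenate([haar_ll(img).ravel(), lbp_hist(img)])

def knn_predict(train_feats, train_labels, feat, k=1):
    """Plain k-nearest-neighbour vote under Euclidean distance."""
    d = np.linalg.norm(np.asarray(train_feats) - feat, axis=1)
    nearest = np.asarray(train_labels)[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```

In the paper's pipeline, PCA would replace the naive concatenation step to fuse and compress the two feature sets before KNN classification.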

    An evaluation of super-resolution for face recognition

    We evaluate the performance of face recognition algorithms on images at various resolutions. Then we show to what extent super-resolution (SR) methods can improve recognition performance when comparing low-resolution (LR) to high-resolution (HR) facial images. Our experiments use both synthetic data (from the FRGC v1.0 database) and surveillance images (from the SCface database). Three face recognition methods are used, namely Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Patterns (LBP). Two SR methods are evaluated. The first method learns the mapping between LR images and the corresponding HR images using a regression model. As a result, the reconstructed SR images are close to the HR images that belong to the same subject and far away from the others. The second method compares LR and HR facial images without explicitly constructing SR images. It finds a coherent feature space where the correlation between LR and HR is maximal, and then computes the mapping from LR to HR in this feature space. The performance of the two SR methods is compared to that delivered by standard face recognition without SR. The results show that LDA is the most robust to resolution changes, while LBP is not suitable for the recognition of LR images. SR methods improve recognition accuracy when downsampled images are used, and the first method provides better results than the second one. However, the improvement for realistic LR surveillance images remains limited.
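The first SR method's idea of learning an LR-to-HR mapping with a regression model can be illustrated with ordinary ridge regression over vectorized patches. This is a generic stand-in with a hypothetical function name, not the paper's actual model:

```python
import numpy as np

def fit_lr_to_hr(lr_vecs, hr_vecs, lam=1e-3):
    """Learn a linear map W with HR ~= LR @ W by ridge regression.

    lr_vecs: (n_samples, lr_dim) vectorized LR patches
    hr_vecs: (n_samples, hr_dim) corresponding vectorized HR patches
    lam:     ridge penalty to keep the normal equations well-conditioned
    """
    X = np.asarray(lr_vecs, dtype=np.float64)
    Y = np.asarray(hr_vecs, dtype=np.float64)
    # closed-form ridge solution: (X^T X + lam I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return W
```

At test time a new LR patch vector is simply multiplied by `W` to predict its HR counterpart; the paper's method additionally constrains the regression so that reconstructions stay close to same-subject HR images.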

    Homologous multi-points warping: an algorithm for automatic 3D facial landmark

    For over a decade, scientists have been researching whether face recognition is performed holistically or through local feature analysis, which has led to the proposition of various advanced methods in face recognition, especially those using facial landmarks. The current 3D facial landmark methods are mathematically complex, contain insufficient landmarks, lack homology, and are full of localization error due to manual annotation. This paper proposes an Automatic Homologous Multi-Points Warping (AHMW) method for 3D facial landmarking, experimented on three datasets using 500 landmarks (16 anatomical fixed points and 484 sliding semi-landmarks), building a template mesh as a reference object and then applying the template to each of the targets in the three datasets. The results show that the method is robust, with minimal localization error (Stirling/ESRC: 0.077; Bosphorus: 0.088; FRGC v2: 0.083).

    Learning Local Features Using Boosted Trees for Face Recognition

    Face recognition is fundamental to a number of significant applications that include, but are not limited to, video surveillance and content-based image retrieval. Some of the challenges which make this task difficult are variations in faces due to changes in pose, illumination, and deformation. This dissertation proposes a face recognition system to overcome these difficulties. We propose methods for different stages of face recognition which make the system more robust to these variations. We propose a novel method for skin segmentation which is fast and performs well under different illumination conditions. We also propose a method to transform face images from any given lighting condition to a reference lighting condition using color constancy. Finally, we propose methods to extract local features and train classifiers using these features. We developed two algorithms using these local features: modular PCA (Principal Component Analysis) and boosted trees. We present experimental results which show that local features improve recognition accuracy compared to methods which use global features. The boosted tree algorithm recursively learns a tree of strong classifiers by splitting the training data into smaller sets. We apply this method to learn features in the intra-personal and extra-personal feature space. Once trained, each node of the boosted tree is a strong classifier. We used this method with Gabor features to perform experiments on benchmark face databases. The results clearly show that the proposed method has better face recognition and verification accuracy than the traditional AdaBoost strong classifier.
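The strong classifiers at the nodes of such a boosted tree are AdaBoost ensembles of weak learners. As background, a minimal AdaBoost with axis-aligned decision stumps might look like the sketch below; the stump weak learner and raw feature columns are generic choices for illustration, whereas the dissertation boosts over Gabor features:

```python
import numpy as np

def train_adaboost(X, y, rounds=5):
    """AdaBoost over axis-aligned decision stumps; labels y in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha) weak learners."""
    X = np.asarray(X, dtype=np.float64)
    y = np.asarray(y)
    n, d = X.shape
    w = np.full(n, 1.0 / n)                       # sample weights
    learners = []
    for _ in range(rounds):
        best = None
        for f in range(d):                        # exhaustive stump search
            for t in np.unique(X[:, f]):
                for p in (1, -1):
                    pred = np.where(p * (X[:, f] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, p, pred)
        err, f, t, p, pred = best
        alpha = 0.5 * np.log((1.0 - err + 1e-12) / (err + 1e-12))
        w = w * np.exp(-alpha * y * pred)         # up-weight misclassified samples
        w = w / w.sum()
        learners.append((f, t, p, alpha))
    return learners

def predict_adaboost(learners, X):
    X = np.asarray(X, dtype=np.float64)
    score = sum(a * np.where(p * (X[:, f] - t) >= 0, 1, -1)
                for f, t, p, a in learners)
    return np.where(score >= 0, 1, -1)
```

The boosted tree extends this by splitting the training set according to each trained node's outputs and fitting further strong classifiers on the subsets.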

    Out-of-plane action unit recognition using recurrent neural networks

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2015. The face is a fundamental tool for interpersonal communication and interaction between people. Humans use facial expressions to consciously or subconsciously express their emotional states, such as anger or surprise. As humans, we can easily identify changes in facial expressions even in complicated scenarios, but the task of facial expression recognition and analysis is complex and challenging for a computer. The automatic analysis of facial expressions by computers has applications in several scientific fields such as psychology, neurology, pain assessment, lie detection, intelligent environments, psychiatry, and emotion and paralinguistic communication. We look at methods of facial expression recognition and, in particular, the recognition of the Facial Action Coding System's (FACS) Action Units (AUs). FACS encodes movements of individual facial muscles from slight, instantaneous changes in facial appearance; contractions of specific facial muscles are related to a set of units called AUs. We make use of Speeded Up Robust Features (SURF) to extract keypoints from the face and use the SURF descriptors to create feature vectors. SURF provides smaller feature vectors than other commonly used feature extraction techniques, is comparable to or outperforms other methods with respect to distinctiveness, robustness, and repeatability, and is much faster than other feature detectors and descriptors. The SURF descriptor is scale- and rotation-invariant and is unaffected by small viewpoint or illumination changes. We use the SURF feature vectors to train a recurrent neural network (RNN) to recognize AUs from the Cohn-Kanade database.
An RNN is able to handle temporal data received from image sequences in which an AU or a combination of AUs is shown to develop from a neutral face. We recognize AUs because they provide a fine-grained means of measurement that is independent of age, ethnicity, gender, and differences in expression appearance. In addition to recognizing FACS AUs from the Cohn-Kanade database, we use our trained RNNs to recognize the development of pain in human subjects, using the UNBC-McMaster pain database, which contains image sequences of people experiencing pain. In some cases, the pain results in the face moving out-of-plane or with some degree of in-plane movement. The temporal processing ability of RNNs can assist in classifying AUs where the face is occluded or not facing frontally for some part of the sequence. Results are promising when tested on the Cohn-Kanade database. We see higher overall recognition rates for upper-face AUs than for lower-face AUs. Since keypoints are extracted globally from the face in our system, local feature extraction could provide improved recognition results in future work. We also see satisfactory recognition results when tested on samples with out-of-plane head movement, demonstrating the temporal processing ability of RNNs.
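The temporal processing this abstract relies on can be illustrated with a minimal Elman-style RNN forward pass over a sequence of per-frame feature vectors. This is a generic sketch with hypothetical parameter names, not the trained network from the dissertation:

```python
import numpy as np

def rnn_forward(frames, Wx, Wh, Wo, bh, bo):
    """Forward pass of a minimal Elman RNN over a sequence of per-frame
    feature vectors; returns the output scores after the final frame.

    Wx: (hidden, input) input weights     Wh: (hidden, hidden) recurrent weights
    Wo: (classes, hidden) output weights  bh, bo: bias vectors
    """
    h = np.zeros(Wh.shape[0])
    for x in frames:                        # one SURF-style feature vector per frame
        h = np.tanh(Wx @ x + Wh @ h + bh)   # hidden state carries temporal context
    return Wo @ h + bo
```

The recurrent term `Wh @ h` is what lets the classifier carry context across frames where the face is momentarily occluded or turned away.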

    Facial Expression Recognition Utilizing Local Direction-Based Robust Features and Deep Belief Network

    Emotional health plays a vital role in improving people's quality of life, especially for the elderly. Negative emotional states can lead to social or mental health problems. To cope with emotional health problems caused by negative emotions in daily life, we propose an efficient facial expression recognition system as a contribution to emotional healthcare. Facial expressions play a key role in our daily communication, and recent years have witnessed a great amount of research work on reliable facial expression recognition (FER) systems. However, facial expression evaluation or analysis from video information is very challenging, and its accuracy depends on the extraction of robust features. In this paper, a unique feature extraction method is presented to extract distinguishing features from the human face. For person-independent expression recognition, depth video data is used as input to the system, where in each frame the pixel intensities are distributed based on the distances to the camera. A novel robust feature extraction process named the local directional position pattern (LDPP) is applied in this work. In LDPP, after extracting local directional strengths for each pixel, as in the typical local directional pattern (LDP), the top directional strength positions are encoded in binary along with their strength sign bits. Considering the top directional strength positions together with the strength signs allows LDPP to differentiate edge pixels that have bright and dark regions on their opposite sides by generating different patterns. Typical LDP, in contrast, only considers the directions with the top strengths, irrespective of their signs and position orders (i.e., directions with top strengths are set to 1 and the rest to 0), which can sometimes generate the same patterns. Hence, LDP fails to distinguish edge pixels with opposite bright and dark regions in some cases, which is overcome by LDPP.
Moreover, the LDPP capabilities are extended through principal component analysis (PCA) and generalized discriminant analysis (GDA) for a better characterization of the face in expressions. The proposed features are finally applied to a deep belief network (DBN) for expression training and recognition.
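As background for LDPP, the typical LDP that it extends computes eight directional edge responses (commonly with Kirsch compass masks) and sets the bits of the k strongest directions. A minimal sketch of that baseline operator follows; the mask ordering and k=3 are conventional choices, not taken from this paper:

```python
import numpy as np

# Kirsch compass masks for the eight principal directions (East first,
# rotating counter-clockwise), a common choice of edge operator for LDP.
KIRSCH = [np.array(m) for m in (
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # E
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # NE
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # N
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # NW
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # W
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # SW
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # S
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # SE
)]

def ldp_code(patch, k=3):
    """LDP code of the centre pixel of a 3x3 patch: set the bits of the
    k directions with the largest absolute edge response."""
    patch = np.asarray(patch, dtype=np.float64)
    resp = np.array([(m * patch).sum() for m in KIRSCH])
    top = np.argsort(np.abs(resp))[-k:]          # indices of the k strongest
    code = 0
    for i in top:
        code |= 1 << int(i)
    return code
```

LDPP departs from this baseline by also recording which positions the top strengths occupy and the sign of each response, which is what disambiguates edges whose bright and dark sides are swapped.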