
    An efficient multiscale scheme using local Zernike moments for face recognition

    In this study, we propose a face recognition scheme using local Zernike moments (LZM), which can be used for both identification and verification. In this scheme, local patches around the landmarks are extracted from the complex components obtained by the LZM transformation. Then, phase-magnitude histograms are constructed within these patches to create descriptors for face images. An image pyramid is utilized to extract features at multiple scales, and the descriptors are constructed for each image in this pyramid. We used three different public datasets to examine the performance of the proposed method: Face Recognition Technology (FERET), Labeled Faces in the Wild (LFW), and Surveillance Cameras Face (SCface). The results revealed that the proposed method is robust against variations such as illumination, facial expression, and pose. Aside from this, it can be used for low-resolution face images acquired in uncontrolled environments or in the infrared spectrum. Experimental results show that our method outperforms state-of-the-art methods on the FERET and SCface datasets. (WOS:000437326800174; Scopus Affiliation ID: 60105072; Science Citation Index Expanded, Q2–Q3; article, May 2018.)
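The multiscale descriptor idea above can be sketched roughly as follows. This is an illustration, not the authors' exact LZM pipeline: the complex input stands in for an LZM-transformed component, the landmark coordinates are hypothetical, and a plain 2x decimation replaces a proper Gaussian pyramid.

```python
import numpy as np

def phase_magnitude_hist(resp, bins=8):
    """Histogram of phases in a complex response patch, weighted by magnitude."""
    phase = np.angle(resp)  # values in [-pi, pi]
    mag = np.abs(resp)
    hist, _ = np.histogram(phase, bins=bins, range=(-np.pi, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist

def pyramid(img, levels=3):
    """Toy image pyramid by 2x decimation (stand-in for a Gaussian pyramid)."""
    out = [img]
    for _ in range(levels - 1):
        out.append(out[-1][::2, ::2])
    return out

def descriptor(complex_img, landmarks, patch=8, levels=3, bins=8):
    """Concatenate one phase-magnitude histogram per landmark per pyramid level."""
    feats = []
    for lvl, im in enumerate(pyramid(complex_img, levels)):
        for (y, x) in landmarks:
            y, x = y >> lvl, x >> lvl  # rescale landmark to this pyramid level
            p = im[max(y - patch, 0):y + patch, max(x - patch, 0):x + patch]
            feats.append(phase_magnitude_hist(p, bins))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
desc = descriptor(img, [(20, 20), (20, 44), (44, 32)])
print(desc.shape)  # 3 levels x 3 landmarks x 8 bins -> (72,)
```

Each patch histogram is L1-normalized, so descriptors from different pyramid levels contribute comparably when concatenated.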

    Local binary pattern network: a deep learning approach for face recognition

    Deep learning is well known as a method to extract hierarchical representations of data. This method has been widely implemented in many fields, including image classification, speech recognition, natural language processing, etc. Over the past decade, deep learning has made great progress in solving face recognition problems due to its effectiveness. In this thesis a novel deep learning multilayer hierarchy based methodology, named Local Binary Pattern Network (LBPNet), is proposed. Unlike the shallow LBP method, LBPNet performs multi-scale analysis and gains high-level representations from low-level overlapped features in a systematic manner. The LBPNet deep learning network is generated by retaining the topology of a Convolutional Neural Network (CNN) and replacing its trainable kernels with an off-the-shelf computer vision descriptor, the LBP descriptor. This enables LBPNet to achieve high recognition accuracy without requiring a costly model learning approach on massive data. LBPNet progressively extracts features from test and training images through multiple processing layers, measures the pairwise similarity of extracted features at the regional level, and then performs classification based on the aggregated similarity values. Through extensive numerical experiments on popular benchmarks (i.e., FERET, LFW, and YTF), LBPNet has shown promising results. Its results outperform (on FERET) or are comparable to (on LFW and YTF) other methods in the same categories, namely single-descriptor-based unsupervised learning methods on FERET and LFW, and single-descriptor-based supervised learning methods with image-restricted, no-outside-data settings on LFW and YTF, respectively. The original print copy of this thesis may be available here: http://wizard.unbc.ca/record=b214095
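A minimal sketch of the base LBP operator that LBPNet stacks in place of learned kernels is shown below. This is illustrative only; the thesis's network adds multi-scale analysis and regional similarity aggregation on top of this building block.

```python
import numpy as np

def lbp8(img):
    """8-neighbour local binary pattern codes for the interior pixels of a 2-D array."""
    c = img[1:-1, 1:-1]
    # neighbour offsets in a fixed clockwise order around the centre pixel
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        # shifted view of the image aligned with the centre pixels
        nb = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

img = np.arange(25, dtype=np.float64).reshape(5, 5)
codes = lbp8(img)
print(codes.shape)  # (3, 3): one 8-bit code per interior pixel
```

Because the comparison pattern is fixed rather than trained, the same operator can be applied at multiple scales and layers with no learning cost, which is the point the abstract makes about avoiding model training on massive data.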

    Uniscale and multiscale gait recognition in realistic scenario

    The performance of a gait recognition method is affected by numerous challenging factors that degrade its reliability as a behavioural biometric for subject identification in realistic scenarios. Thus, for effective visual surveillance, this thesis presents five gait recognition methods that address various challenging factors to reliably identify a subject in realistic scenarios with low computational complexity. It presents a gait recognition method that analyses the spatio-temporal motion of a subject with statistical and physical parameters using Procrustes shape analysis and elliptic Fourier descriptors (EFD). It introduces a part-based EFD analysis to achieve invariance to carrying conditions, and the use of physical parameters enables it to achieve invariance to across-day gait variation. Although the spatio-temporal deformation of a subject's shape in gait sequences provides better discriminative power than its kinematics, the inclusion of dynamical motion characteristics improves the identification rate. Therefore, the thesis presents a gait recognition method that combines spatio-temporal shape and dynamic motion characteristics of a subject to achieve robustness against the maximum number of challenging factors compared to related state-of-the-art methods. A region-based gait recognition method that analyses a subject's shape in image and feature spaces is presented to achieve invariance to clothing variation and carrying conditions. To take into account arbitrary moving directions of a subject in realistic scenarios, a gait recognition method must be robust against variation in view. Hence, the thesis presents a robust view-invariant multiscale gait recognition method. Finally, the thesis proposes a gait recognition method based on low spatial and low temporal resolution video sequences captured by CCTV. The computational complexity of each method is analysed. Experimental analyses on public datasets demonstrate the efficacy of the proposed methods.
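The Procrustes shape comparison that the first method relies on can be illustrated with SciPy's implementation (this is a toy example, not the thesis code): two contours that differ only by rotation, scale, and translation are aligned, and the residual disparity measures the remaining shape difference.

```python
import numpy as np
from scipy.spatial import procrustes

# two samplings of the same closed contour (an ellipse), one similarity-transformed
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
shape_a = np.column_stack([2 * np.cos(t), np.sin(t)])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
shape_b = 3.0 * shape_a @ rot.T + np.array([5.0, -2.0])  # rotate, scale, translate

# Procrustes analysis removes translation, scale, and rotation before comparing
mtx1, mtx2, disparity = procrustes(shape_a, shape_b)
print(disparity)  # ~0: the contours are identical up to a similarity transform
```

A genuinely different silhouette shape would leave a large disparity after alignment, which is what makes the aligned residual usable as a shape feature.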

    Generative Adversarial Networks for Improving Face Classification

    Master's thesis, Information and Communication Technology IKT590, University of Agder, 2017. Facial recognition can be applied in a wide variety of cases, including entertainment purposes and biometric security. In this thesis we look at improving the results of an existing facial recognition approach by utilizing generative adversarial networks to improve the existing dataset. The training data was taken from the LFW dataset [4] and was preprocessed using OpenCV [2] for face detection. The faces in the dataset were cropped and resized so that every image is the same size and can easily be passed to a convolutional neural network. To the best of our knowledge, no generative adversarial network approach has been applied to facial recognition by generating training data for classification with convolutional neural networks. The proposed approach to improving face classification accuracy does not improve the classification algorithm itself but rather improves the dataset by generating more data. We achieve an accuracy of 99.42% with 3 classes, an improvement of 1.74% compared to not generating any new data.
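The augmentation bookkeeping is independent of the GAN itself and can be sketched as below. The generator here is a stand-in producing random arrays; in the thesis's setting it would be a trained (per-class) GAN generator, and the sizes and class counts here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def fake_generator(n, size=32):
    """Stand-in for a trained GAN generator: returns n random 'face' images."""
    return rng.random((n, size, size))

# original labelled training set: 3 classes, 10 images each, all resized alike
real_x = rng.random((30, 32, 32))
real_y = np.repeat(np.arange(3), 10)

# generate extra samples per class and append them under that class label
aug_x, aug_y = [real_x], [real_y]
for cls in range(3):
    synth = fake_generator(5)  # e.g. sampled from a class-conditional generator
    aug_x.append(synth)
    aug_y.append(np.full(5, cls))
x = np.concatenate(aug_x)
y = np.concatenate(aug_y)

perm = rng.permutation(len(y))  # shuffle real and synthetic before CNN training
x, y = x[perm], y[perm]
print(x.shape, y.shape)  # (45, 32, 32) (45,)
```

The classifier is untouched; only the training distribution changes, which is exactly the claim the abstract makes about improving the dataset rather than the algorithm.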

    Robust face recognition using convolutional neural networks combined with Krawtchouk moments

    Face recognition is a challenging task due to the complexity of pose variations, occlusion, and the variety of facial expressions performed by distinct subjects. Many features have therefore been proposed, but each feature has its own drawbacks. In this paper, we propose a robust model called Krawtchouk moments convolutional neural networks (KMCNN) for face recognition. Our model is divided into two main steps. First, we use 2-D discrete orthogonal Krawtchouk moments to represent features. Then, we feed them into a convolutional neural network (CNN) for classification. The main goal of the proposed approach is to improve the classification accuracy of noisy grayscale face images; indeed, Krawtchouk moments are less sensitive to noise, and they can extract pertinent features from an image using only low orders. To investigate the robustness of the proposed approach, two types of noise (salt-and-pepper and speckle) are added to three datasets (Extended Yale B, Our Database of Faces (ORL), and a subset of Labeled Faces in the Wild (LFW)). Experimental results show that KMCNN is flexible and performs significantly better than using the CNN alone, or than combining the CNN with other discrete moments such as Tchebichef, Hahn, and Racah moments, at most noise densities.
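The salt-and-pepper corruption used in the robustness experiments is a standard noise model; a minimal injector (not the authors' code, and with an arbitrary density for illustration) looks like this:

```python
import numpy as np

def salt_and_pepper(img, density, rng=None):
    """Corrupt a grayscale image in [0, 1]: roughly `density` fraction of
    pixels is replaced, half with pepper (0.0) and half with salt (1.0).
    (Speckle noise, the other model tested, is multiplicative instead.)"""
    rng = rng or np.random.default_rng()
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < density / 2] = 0.0                        # pepper
    out[(mask >= density / 2) & (mask < density)] = 1.0  # salt
    return out

rng = np.random.default_rng(1)
img = np.full((100, 100), 0.5)
noisy = salt_and_pepper(img, density=0.1, rng=rng)
frac = np.mean(noisy != 0.5)
print(frac)  # close to 0.1: about 10% of pixels corrupted
```

Sweeping `density` over a range and measuring accuracy at each level is how robustness curves like those described above are typically produced.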

    A comprehensive survey on Pose-Invariant Face Recognition

    © 2016 ACM. The capacity to recognize faces under varied poses is a fundamental human ability that presents a unique challenge for computer vision systems. Compared to frontal face recognition, which has been intensively studied and has gradually matured over the past few decades, Pose-Invariant Face Recognition (PIFR) remains a largely unsolved problem. However, PIFR is crucial to realizing the full potential of face recognition in real-world applications, since face recognition is intrinsically a passive biometric technology for recognizing uncooperative subjects. In this article, we discuss the inherent difficulties in PIFR and present a comprehensive review of established techniques. Existing PIFR methods can be grouped into four categories, that is, pose-robust feature extraction approaches, multiview subspace learning approaches, face synthesis approaches, and hybrid approaches. The motivations, strategies, pros/cons, and performance of representative approaches are described and compared. Moreover, promising directions for future research are discussed.

    Handbook of Vascular Biometrics
