15 research outputs found

    Optimization of deep learning features for age-invariant face recognition

    This paper presents a methodology for Age-Invariant Face Recognition (AIFR) based on the optimization of deep learning features. The proposed method extracts deep learning features from unprocessed face images using transfer learning. To optimize the extracted features, a Genetic Algorithm (GA) procedure is designed to select the features most relevant to identifying a person from facial images taken at different ages. For classification, K-Nearest Neighbor (KNN) classifiers with different distance metrics are investigated: Correlation, Euclidean, Cosine, and Manhattan. In experiments, the KNN classifier with the Manhattan distance achieves the best Rank-1 recognition rates, 86.2% and 96% on the standard FGNET and MORPH datasets, respectively. Unlike state-of-the-art methods, the proposed method needs no preprocessing stages, and the experiments show its advantage over other related methods.
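The classification stage described above can be sketched as a nearest-neighbor search with a swappable distance metric. The metric definitions below are standard; the toy gallery, labels, and feature vectors are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Minimal 1-NN (Rank-1) identification sketch with swappable distance metrics.
def manhattan(a, b):
    return np.sum(np.abs(a - b))

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def cosine(a, b):
    # Cosine distance: 1 - cosine similarity.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def knn_predict(x, gallery, labels, metric=manhattan):
    # Rank-1 identification: return the label of the closest gallery feature.
    dists = [metric(x, g) for g in gallery]
    return labels[int(np.argmin(dists))]

# Hypothetical two-identity gallery of 2-D feature vectors.
gallery = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = ["person_a", "person_b"]
print(knn_predict(np.array([1.0, 0.5]), gallery, labels))  # person_a
```

Swapping `metric=euclidean` or `metric=cosine` changes only the ranking function, which is how the paper compares distance metrics under one classifier.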

    Development of an Illumination Invariant Face Recognition System

    Face recognition systems have gained much attention for applications in surveillance, access control, forensics, and border control. These systems encounter challenges due to variations in illumination, pose, expression, occlusion and, most importantly, aging. The difference in light intensity between a probe image and its gallery image significantly affects recognition performance. In this study, an illumination-invariant face recognition system is developed using a 4-layered Convolutional Neural Network (CNN). The proposed system was able to recognize face images under different degrees of illumination, making it an illumination-invariant face recognition system. The variation caused by illumination was modelled as a form of light-varying noise, and the system was validated by computing its error statistics and comparing its performance with existing models in the literature. The results showed that an adaptive, robust, illumination-invariant face recognition system can be achieved with a CNN. The recognition accuracy achieved was 99.22% with five (5) epochs and 85 iterations.
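The idea of modelling illumination variation as light-varying noise can be sketched as a simple gain/bias transform on pixel intensities. The gain and bias values below are assumptions for illustration; the paper does not specify its noise parameters.

```python
import numpy as np

# Illustrative sketch: illumination variation as a multiplicative gain and
# additive bias on a grayscale image in [0, 1].
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(8, 8))  # stand-in for a face image

def vary_illumination(img, gain, bias):
    # Scale intensities (gain), shift them (bias), then clip back to [0, 1].
    return np.clip(gain * img + bias, 0.0, 1.0)

dark = vary_illumination(image, gain=0.5, bias=0.0)    # under-exposed probe
bright = vary_illumination(image, gain=1.0, bias=0.3)  # over-exposed probe
print(dark.mean() < image.mean() < bright.mean())  # True
```

Training a CNN on such variants of each gallery image is one way to make the learned features less sensitive to lighting, which is the property the study targets.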

    An improved age invariant face recognition using data augmentation

    In spite of significant advances in face recognition technology, accurately recognizing the face of the same individual across different ages remains an open research question. Face aging causes intra-subject variations (such as geometric changes during childhood and adolescence, and wrinkles and saggy skin in old age) that negatively affect the accuracy of face recognition systems. Over the years, researchers have devised different techniques to improve the accuracy of age-invariant face recognition (AIFR) systems. In this paper, the face and gesture recognition network (FG-NET) aging dataset was adopted to enable benchmarking of experimental results. The FG-NET dataset was augmented by adding four different types of noise at the preprocessing phase, improving the extraction of age-related facial features and the training of the classification model, thereby addressing the scarcity of aging face datasets available for training. The developed model adapts a pre-trained convolutional neural network architecture (Inception-ResNet-v2), which is very robust to noise. On testing, the proposed model achieved a 99.94% recognition accuracy, a mean square error of 0.0158, and a mean absolute error of 0.0637. The results obtained are significant improvements over related works.
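Noise-based augmentation of the kind described above can be sketched as applying several noise models to each training image. The four noise types and their parameters below are common choices and are assumptions here; the paper does not enumerate them in this abstract.

```python
import numpy as np

# Sketch: expand one training image into four noisy variants.
rng = np.random.default_rng(42)

def gaussian_noise(img, sigma=0.05):
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def salt_pepper(img, p=0.05):
    out = img.copy()
    mask = rng.uniform(size=img.shape)
    out[mask < p / 2] = 0.0          # pepper: random black pixels
    out[mask > 1 - p / 2] = 1.0      # salt: random white pixels
    return out

def speckle(img, sigma=0.05):
    # Multiplicative noise proportional to pixel intensity.
    return np.clip(img * (1.0 + rng.normal(0.0, sigma, img.shape)), 0.0, 1.0)

def poisson_noise(img, scale=255.0):
    # Shot noise: sample counts, then rescale to [0, 1].
    return np.clip(rng.poisson(img * scale) / scale, 0.0, 1.0)

image = rng.uniform(0.0, 1.0, size=(16, 16))  # stand-in for an FG-NET image
augmented = [f(image) for f in (gaussian_noise, salt_pepper, speckle, poisson_noise)]
print(len(augmented))  # 4
```

Each original image thus yields four extra training samples, which is one way to counter a small dataset like FG-NET.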

    Beyond Disentangled Representations: An Attentive Angular Distillation Approach to Large-scale Lightweight Age-Invariant Face Recognition

    Disentangled representations have been commonly adopted for Age-invariant Face Recognition (AiFR) tasks. However, these methods face several limitations: (1) they require large-scale face recognition (FR) training data with age labels, which is limited in practice; (2) they rely on heavy deep network architectures for high performance; and (3) their evaluations usually take place on age-related face databases while neglecting the standard large-scale FR databases needed to guarantee robustness. This work presents a novel Attentive Angular Distillation (AAD) approach to large-scale lightweight AiFR that overcomes these limitations. Given two high-performance heavy networks as teachers with different specialized knowledge, AAD introduces a learning paradigm that efficiently distills age-invariant attentive and angular knowledge from those teachers to a lightweight student network, making it more powerful, with higher FR accuracy and robustness against the age factor. Consequently, the AAD approach can exploit FR datasets both with and without age labels to train an AiFR model. Unlike prior distillation methods, which mainly focus on accuracy and compression ratios in closed-set problems, AAD aims to solve the open-set problem, i.e., large-scale face recognition. Evaluations on LFW, IJB-B and IJB-C Janus, AgeDB, and MegaFace-FGNet with one million distractors demonstrate the efficiency of the proposed approach. This work also presents a new longitudinal face aging (LogiFace) database for further studies of age-related facial problems.
    Comment: arXiv admin note: substantial text overlap with arXiv:1905.1062
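The angular part of the distillation can be sketched as pulling a student embedding toward the direction of a teacher embedding. The loss below is a generic cosine-based distillation term, an assumption standing in for the paper's full Attentive Angular Distillation objective (which also includes attentive knowledge).

```python
import numpy as np

# Sketch of angular distillation: penalize the angle between normalized
# student and teacher embeddings.
def l2_normalize(v):
    return v / np.linalg.norm(v)

def angular_distillation_loss(student, teacher):
    # 1 - cos(theta) between unit vectors; 0 when the directions match.
    return 1.0 - float(np.dot(l2_normalize(student), l2_normalize(teacher)))

# Hypothetical 3-D embeddings for illustration.
teacher = np.array([1.0, 2.0, 3.0])
aligned = np.array([2.0, 4.0, 6.0])      # same direction as the teacher
misaligned = np.array([3.0, -1.0, 0.5])  # points elsewhere

print(angular_distillation_loss(aligned, teacher))  # 0.0 (directions match)
```

Because only directions matter, a small student can match a large teacher's angular structure without matching its magnitudes or architecture, which is what makes this style of distillation suitable for lightweight FR models.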

    Invariance Measures for Neural Networks

    Invariances in neural networks are useful and necessary for many tasks. However, the invariance of the internal representations of most neural network models has not been characterized. We propose measures that quantify the invariance of neural networks in terms of their internal representation. The measures are efficient and interpretable, can be applied to any neural network model, and are more sensitive to invariance than previously defined measures. We validate the measures and their properties, including stability and interpretability, in the domain of affine transformations on the CIFAR10 and MNIST datasets. Using the measures, we perform a first analysis of CNN models and show that their internal invariance is remarkably stable under random weight initializations, but not under changes of dataset or transformation. We believe the measures will enable new avenues of research in invariance representation.
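An activation-based invariance measure of this kind can be sketched by comparing a unit's variance across transformed versions of the same input with its variance overall. The normalized-variance ratio below is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

# Sketch: invariance of a single unit from its activations over
# n_samples inputs, each seen under n_transforms transformations.
def invariance_ratio(activations):
    # activations: shape (n_samples, n_transforms) for one unit.
    within = activations.var(axis=1).mean()  # variance across transforms
    total = activations.var()                # variance over everything
    return 1.0 - within / total              # 1.0 => fully invariant

rng = np.random.default_rng(1)
base = rng.normal(size=(50, 1))
invariant_unit = np.repeat(base, 8, axis=1)  # identical output per transform
sensitive_unit = rng.normal(size=(50, 8))    # output varies freely

print(invariance_ratio(invariant_unit))  # 1.0
```

A fully invariant unit scores 1.0 because its within-transform variance is zero, while a unit whose output changes freely under transformation scores near zero; applying the measure layer by layer gives the kind of internal-representation analysis the paper performs.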