    Gravitational Deep Convoluted Stacked Kernel Extreme Learning Based Classification for Face Recognition

    In recent years, researchers have designed numerous deep learning (DL) algorithms, and face recognition (FR) in particular has benefited from them. Deep face recognition systems exploit the hierarchical structure of DL algorithms to learn discriminative face representations. However, the performance of current methods degrades appreciably when handling severely occluded faces. Several existing works treat affinity as a pivotal recognition feature, but this affinity changes when the face image to be recognized is subject to illumination variation, occlusion, and changes in the subject's age. Motivated by these issues, this work proposes a novel method, Gravitational Deep Convoluted Stacked Kernel Extreme Learning-based (GDC-SKEL) classification, for frontal-view human face recognition under varying age, illumination, and occlusion. First, given the face images as input, a Gravitational Center Loss-based Face Alignment model is proposed to minimize intra-class differences and thereby overcome the influence of occlusion. Second, Deep Convoluted Tikhonov Regularization-based Facial Region Feature extraction is applied to the occlusion-removed face images; the Convoluted Tikhonov Regularization function extracts salient features with an age-invariant representation. Finally, a Stacked Kernel Extreme Learning-based Classifier is designed: the extracted features are fed to it, and the stacked kernel is used to identify test samples. The performance of GDC-SKEL is evaluated on the Cross-Age Celebrity Dataset. Experimental results are compared with other state-of-the-art classifiers in terms of face recognition accuracy, face recognition time, PSNR, and false positive rate, demonstrating the effectiveness of the proposed GDC-SKEL classifier.
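    The kernel extreme learning stage can be pictured with a standard kernel extreme learning machine (KELM), which trains in closed form on precomputed feature vectors. The sketch below is a minimal single-layer KELM with an RBF kernel, not the authors' full GDC-SKEL pipeline; the class name, the kernel choice, and the regularization parameter C are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF (Gaussian) kernel between the rows of A and B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

class KernelELM:
    """Kernel extreme learning machine classifier with closed-form training."""
    def __init__(self, C=1.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X_train = X
        T = np.eye(int(y.max()) + 1)[y]          # one-hot class targets
        K = rbf_kernel(X, X, self.gamma)         # N x N training kernel matrix
        # Solve (I/C + K) beta = T, i.e. regularized least squares in kernel space.
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, X):
        K = rbf_kernel(X, self.X_train, self.gamma)
        return np.argmax(K @ self.beta, axis=1)
```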

    Facial Landmark Feature Fusion in Transfer Learning of Child Facial Expressions

    Automatic classification of child facial expressions is challenging due to the scarcity of annotated image samples. Transfer learning of deep convolutional neural networks (CNNs) pretrained on adult facial expressions can be effectively fine-tuned for child facial expression classification using limited facial images of children. Recent work inspired by facial age estimation and age-invariant face recognition proposes fusing facial landmark features with deep representation learning to improve facial expression classification performance. We hypothesize that deep transfer learning of child facial expressions may also benefit from fusing facial landmark features. Our proposed model architecture integrates two input branches: a CNN branch for image feature extraction and a fully connected branch for processing landmark-based features. The features derived from these two branches are concatenated into a latent feature vector for downstream expression classification. The architecture is trained on an adult facial expression classification task, and the trained model is then fine-tuned to perform child facial expression classification. The combined feature fusion and transfer learning approach is compared against multiple models: training on adult expressions only (adult baseline), training on child expressions only (child baseline), and transfer learning from adult to child data. We also evaluate the effect of feature fusion without transfer learning on classification performance. Training on child data, we find that feature fusion improves the 10-fold cross-validation mean accuracy from 80.32% to 83.72% with similar variance. The proposed fine-tuning with landmark feature fusion on child expressions yields the best mean accuracy of 85.14%, more than a 30% improvement over the adult baseline and nearly a 5% improvement over the child baseline.
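    The two-branch fusion architecture can be illustrated with a small PyTorch model: one branch extracts image features with a CNN, the other processes flattened landmark coordinates through fully connected layers, and the two are concatenated before the expression classifier. This is a minimal sketch rather than the paper's exact network; the layer sizes, the 68-landmark assumption, and the seven-class output are illustrative.

```python
import torch
import torch.nn as nn

class LandmarkFusionNet(nn.Module):
    """Two-branch model: CNN image features fused with facial landmark
    coordinates before expression classification."""
    def __init__(self, n_landmarks=68, n_classes=7):
        super().__init__()
        # Image branch: a small CNN standing in for a pretrained backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Landmark branch: fully connected layers over (x, y) coordinates.
        self.landmark_fc = nn.Sequential(
            nn.Linear(n_landmarks * 2, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.classifier = nn.Linear(64 + 64, n_classes)

    def forward(self, image, landmarks):
        # Concatenate the two latent vectors, then classify the expression.
        fused = torch.cat([self.cnn(image), self.landmark_fc(landmarks)], dim=1)
        return self.classifier(fused)
```

    In this sketch, transfer learning amounts to first training the model on adult expression data and then fine-tuning the same weights on the child data, typically with a lower learning rate.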

    Gender and Age Classification of Human Faces for Automatic Detection of Anomalous Human Behaviour

    In this paper, we introduce an approach to classifying gender and age from images of human faces, which is an essential part of our method for autonomous detection of anomalous human behaviour. Human behaviour is often uncertain, and sometimes it is affected by emotion or environment. Automatic detection can help to recognise human behaviour, which can later assist in investigating suspicious events. Central to our proposed approach is transfer learning, which builds on deep learning and has been successfully applied to image classification. This paper continues our previous research on heterogeneous data, in which we use images as supporting evidence. We present a method for image classification based on a pre-trained deep model for feature extraction and representation, followed by a Support Vector Machine classifier. Because very few face image datasets labelled with gender and age exist, we built a dataset named GAFace and applied our proposed method to it, achieving excellent results and robustness (gender classification: 90.33% accuracy; age classification: 80.17% accuracy), approaching human performance.
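    The pretrained-features-plus-SVM pipeline can be sketched with scikit-learn, assuming the deep descriptors from the pre-trained model have already been exported to disk. The file names and RBF kernel settings below are hypothetical, not taken from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical exports: N x D deep descriptors from a pretrained CNN
# (e.g. a penultimate-layer embedding) and the matching gender labels.
deep_features = np.load("gaface_features.npy")
gender_labels = np.load("gaface_gender_labels.npy")

# Standardize the deep features, then train an RBF-kernel SVM on top of them.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(deep_features, gender_labels)
print("training accuracy:", clf.score(deep_features, gender_labels))
```

    The same pipeline, trained on age labels instead of gender labels, would serve as the age classifier.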