
    Facial Landmark Feature Fusion in Transfer Learning of Child Facial Expressions

    Automatic classification of child facial expressions is challenging due to the scarcity of annotated image samples. Deep convolutional neural networks (CNNs) pretrained on adult facial expressions can be effectively fine-tuned for child facial expression classification using limited facial images of children. Recent work inspired by facial age estimation and age-invariant face recognition proposes fusing facial landmark features with deep representation learning to improve facial expression classification performance. We hypothesize that deep transfer learning of child facial expressions may also benefit from fusing facial landmark features. Our proposed model architecture integrates two input branches: a CNN branch for image feature extraction and a fully connected branch for processing landmark-based features. The features from these two branches are concatenated into a latent feature vector for downstream expression classification. The architecture is trained on an adult facial expression classification task and then fine-tuned to perform child facial expression classification. The combined feature fusion and transfer learning approach is compared against multiple models: training on adult expressions only (adult baseline), training on child expressions only (child baseline), and transfer learning from adult to child data. We also evaluate the effect of feature fusion without transfer learning on classification performance. Training on child data, we find that feature fusion improves the 10-fold cross-validation mean accuracy from 80.32% to 83.72% with similar variance. The proposed fine-tuning with landmark feature fusion on child expressions yields the best mean accuracy of 85.14%, a more than 30% improvement over the adult baseline and nearly a 5% improvement over the child baseline.
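    A minimal sketch of the described two-branch design: a CNN branch for image features and a fully connected branch for landmark features, concatenated into one latent vector for classification. This is not the authors' code; the layer sizes, the 48x48 grayscale input, and the 68-point (136-value) landmark dimensionality are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LandmarkFusionNet(nn.Module):
    def __init__(self, num_classes=7, landmark_dim=136):
        super().__init__()
        # CNN branch: extracts image features from a 1x48x48 face crop
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 256), nn.ReLU(),
        )
        # Fully connected branch: processes flattened landmark coordinates
        self.landmark_fc = nn.Sequential(nn.Linear(landmark_dim, 64), nn.ReLU())
        # Classifier over the concatenated latent feature vector
        self.classifier = nn.Linear(256 + 64, num_classes)

    def forward(self, image, landmarks):
        fused = torch.cat([self.cnn(image), self.landmark_fc(landmarks)], dim=1)
        return self.classifier(fused)

# Transfer learning as described in the abstract: first train on adult
# expression data, then continue training (fine-tune) the same weights on the
# smaller child expression dataset.
model = LandmarkFusionNet()
```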

    Facial expression recognition via a jointly-learned dual-branch network

    Human emotion recognition depends on facial expressions, and essentially on the extraction of relevant features. Accurate feature extraction is generally difficult due to external interference factors and the mislabelling of some datasets, such as the Fer2013 dataset. Deep learning approaches permit automatic and intelligent feature extraction based on the input database, but when the database distribution is poor or its samples lack diversity, the extracted features are negatively affected. Furthermore, one of the main challenges for efficient facial feature extraction and accurate facial expression recognition is the limited size of facial expression datasets, which are usually considerably small compared to other image datasets. To address these problems, this paper proposes a new approach based on a dual-branch convolutional neural network for facial expression recognition, composed of three modules: the first two implement the feature-engineering stage as two branches, and the third performs feature fusion and classification. In the first branch, an improved convolutional part of the VGG network is used to benefit from its known robustness; in the second branch, transfer learning with the EfficientNet network is applied to compensate for the limited training samples in the datasets. Finally, to improve recognition performance, the classification decision is based on the fusion of both branches' feature maps. Experimental results on the Fer2013 and CK+ datasets show that the proposed approach outperforms several state-of-the-art results as well as either model used alone. These results are very competitive, especially for the CK+ dataset, on which the proposed dual-branch model reaches an accuracy of 99.32%, while on the FER-2013 dataset the VGG-inspired CNN obtains an accuracy of 67.70%, which is considered acceptable given the difficulty of the images in this dataset.
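    A minimal sketch of a dual-branch network of this kind: one VGG-style convolutional branch, one ImageNet-pretrained EfficientNet branch used for transfer learning, and a fusion module over their pooled features. The torchvision backbones and feature dimensions stand in for the paper's modified versions; this is an assumption, not the paper's implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class DualBranchFER(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        # Branch 1: VGG16 convolutional part (trained on the expression data)
        self.vgg_features = models.vgg16(weights=None).features
        # Branch 2: EfficientNet-B0 pretrained on ImageNet (transfer learning)
        self.eff_features = models.efficientnet_b0(weights="IMAGENET1K_V1").features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fusion module: concatenate pooled features from both branches
        self.classifier = nn.Sequential(
            nn.Linear(512 + 1280, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        f1 = self.pool(self.vgg_features(x)).flatten(1)   # 512-d VGG features
        f2 = self.pool(self.eff_features(x)).flatten(1)   # 1280-d EfficientNet features
        return self.classifier(torch.cat([f1, f2], dim=1))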

    Multi-Network Feature Fusion Facial Emotion Recognition using Nonparametric Method with Augmentation

    Facial expression emotion identification and prediction is one of the most difficult problems in computer science. Pre-processing and feature extraction are crucial components of the more conventional methods. For emotion identification and prediction from 2D facial expressions, this study targets the Face Expression Recognition dataset and presents the implementation and assessment of several CNN-based learning algorithms. Due to its vast potential in areas such as artificial intelligence, emotion detection from facial expressions has become an essential requirement, and much effort has been devoted to it as a challenging and fascinating problem in computer vision. The focus of this study is on using a convolutional neural network supplemented with augmented data to build a facial emotion recognition system. The method uses face images to identify seven fundamental emotions: anger, contempt, fear, happiness, neutrality, sadness, and surprise. Besides improving upon the validation accuracy of current models, a convolutional neural network that makes use of data augmentation, feature fusion, and the NCA feature selection approach can help address some of their drawbacks. Researchers in this area focus on improving computer predictions by developing methods to read and codify facial expressions, and with deep learning's striking success, many architectures are being used to further the method's efficacy. We highlight the contributions addressed and the architectures and databases used, and demonstrate progress by contrasting the surveyed approaches and their results. This study also aims to aid and direct future researchers in the subject by reviewing relevant recent studies and offering suggestions on how to advance the field. An innovative feature-based transfer learning technique is developed using the pre-trained networks MobileNetV2 and DenseNet-201. The proposed system's recognition rate is 75.31%, a significant improvement over the results of the prior feature fusion study.
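    A minimal sketch of a feature-based transfer learning pipeline along these lines: deep features are extracted with pretrained MobileNetV2 and DenseNet-201, concatenated (feature fusion), transformed with scikit-learn's NeighborhoodComponentsAnalysis, and classified with a nonparametric k-NN classifier. The NCA component count, the k-NN choice, and the pooling step are assumptions standing in for the paper's exact feature selection and classification settings.

```python
import torch
from torchvision import models
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Pretrained backbones used purely as frozen feature extractors
mobilenet = models.mobilenet_v2(weights="IMAGENET1K_V1").eval()
densenet = models.densenet201(weights="IMAGENET1K_V1").eval()

@torch.no_grad()
def fused_features(batch):                              # batch: (N, 3, 224, 224)
    f1 = mobilenet.features(batch).mean(dim=(2, 3))     # (N, 1280)
    f2 = densenet.features(batch).mean(dim=(2, 3))      # (N, 1920)
    return torch.cat([f1, f2], dim=1).numpy()           # fused (N, 3200)

# NCA + k-NN over the fused features (hypothetical X_train/y_train of
# augmented face images and emotion labels would be supplied by the caller):
# clf = make_pipeline(NeighborhoodComponentsAnalysis(n_components=128),
#                     KNeighborsClassifier(n_neighbors=5))
# clf.fit(fused_features(X_train), y_train)
```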

    Improving Facial Emotion Recognition with Image processing and Deep Learning

    Humans often use facial expressions along with words in order to communicate effectively. There has been extensive study of how facial emotion can be classified with computer vision methodologies. These efforts have had varying levels of success given the challenges and limitations of the available databases, such as static data or facial capture in non-real-world environments. Given this, we believe that new preprocessing techniques are required to improve the accuracy of facial detection models. In this paper, we propose a new yet simple method for facial expression recognition that enhances accuracy. We conducted our experiments on the FER-2013 dataset, which contains static facial images. We utilized unsharp masking and histogram equalization to emphasize the texture and details of the images. We implemented convolutional neural networks (CNNs) to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. We also employed pre-trained models such as ResNet-50, SENet-50, VGG16, and FaceNet, and applied transfer learning to achieve an accuracy of 76.01% with an ensemble of seven models.
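    A minimal sketch of the described preprocessing: unsharp masking to emphasize texture and detail, followed by histogram equalization, applied to grayscale FER-2013 face crops before they are fed to the CNN. The blur sigma and weighting are assumed values, not the paper's exact settings.

```python
import cv2
import numpy as np

def preprocess(gray: np.ndarray) -> np.ndarray:
    """gray: 48x48 uint8 FER-2013 face image."""
    # Unsharp mask: subtract a Gaussian-blurred copy to sharpen edges
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=2.0)
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)
    # Histogram equalization: spread intensity values to boost contrast
    return cv2.equalizeHist(sharpened)

# Hypothetical usage: load a face crop, preprocess, scale to [0, 1] for the CNN
# img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
# model_input = preprocess(img)[None, ..., None] / 255.0
```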

    Using Photorealistic Face Synthesis and Domain Adaptation to Improve Facial Expression Analysis

    Synthesizing realistic faces across domains to train deep models has attracted increasing attention for facial expression analysis, as it helps improve expression recognition accuracy despite a small number of real training images. However, learning from synthetic face images can be problematic due to the distribution discrepancy between low-quality synthetic images and real face images, and may not achieve the desired performance when the learned model is applied to real-world scenarios. To this end, we propose a new attribute-guided face image synthesis method that performs translation between multiple image domains using a single model. In addition, we adopt the proposed model to learn from synthetic faces by matching the feature distributions between different domains while preserving each domain's characteristics. We evaluate the effectiveness of the proposed approach at generating realistic face images on several face datasets. We demonstrate that expression recognition performance can be enhanced by our face synthesis model. Moreover, we conduct experiments on a near-infrared dataset containing facial expression videos of drivers to assess performance on in-the-wild data for driver emotion recognition. Comment: 8 pages, 8 figures, 5 tables, accepted by FG 2019. arXiv admin note: substantial text overlap with arXiv:1905.0028
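    One common way to match feature distributions between a synthetic and a real domain is a maximum mean discrepancy (MMD) penalty added to the task loss; the sketch below uses MMD purely as an illustrative stand-in, since the abstract does not specify the paper's exact matching mechanism.

```python
import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Maximum mean discrepancy between two feature batches with an RBF kernel."""
    def kernel(a, b):
        dist_sq = torch.cdist(a, b) ** 2
        return torch.exp(-dist_sq / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Hypothetical usage: feats_real and feats_syn are features extracted from real
# and synthetic face images by the same backbone; the MMD term is weighted into
# the total objective alongside the expression classification loss.
# total_loss = cls_loss + lambda_mmd * rbf_mmd(feats_real, feats_syn)
```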

    Learn to synthesize and synthesize to learn

    Attribute-guided face image synthesis aims to manipulate attributes of a face image. Most existing methods for image-to-image translation can either perform a fixed translation between any two image domains using a single attribute or require training data with the attributes of interest for each subject. Therefore, these methods can only train one specific model for each pair of image domains, which limits their ability to deal with more than two domains. Another disadvantage is that they often suffer from mode collapse, which degrades the quality of the generated images. To overcome these shortcomings, we propose an attribute-guided face image generation method that uses a single model capable of synthesizing multiple photo-realistic face images conditioned on the attributes of interest. In addition, we adopt the proposed model to increase the realism of simulated face images while preserving the face characteristics. Compared to existing models, the synthetic face images generated by our method show good photorealistic quality on several face datasets. Finally, we demonstrate that the generated facial images can be used for synthetic data augmentation and improve the performance of a facial expression recognition classifier. Comment: Accepted to Computer Vision and Image Understanding (CVIU).
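    A minimal sketch of the synthetic-data-augmentation step described at the end of the abstract: expression-labelled images produced by a generator are simply pooled with the real training set before the expression classifier is trained. The tensor shapes and dataset sizes are placeholders, not values from the paper.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Placeholder real face crops and labels (in practice, the real dataset)
real_images = torch.randn(1000, 3, 128, 128)
real_labels = torch.randint(0, 7, (1000,))
# Placeholder generator outputs with their conditioned expression labels
synth_images = torch.randn(2000, 3, 128, 128)
synth_labels = torch.randint(0, 7, (2000,))

# Pool real and synthetic samples into a single augmented training set
train_set = ConcatDataset([TensorDataset(real_images, real_labels),
                           TensorDataset(synth_images, synth_labels)])
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
```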