
    Cross-Cultural Emotion Classification based on Incremental Learning and LBP-Features

    A number of studies have shown that facial expression representations are culture dependent rather than universal. Most facial expression recognition (FER) systems are trained and tested on one or two datasets and show good results, but their performance degrades drastically when datasets from different cultures are presented. To maintain high accuracy over time and across cultures, a FER system should learn incrementally. We propose a FER system that offers incremental learning capability. Local Binary Pattern (LBP) features are used for region of interest (ROI) extraction and classification. We use static images of facial expressions from different cultures for training and testing. Experiments on five different datasets using the incremental learning classification demonstrate promising results.
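
    As a rough illustration of the feature pipeline, the sketch below computes grid-wise uniform-LBP histograms with scikit-image; the grid size and LBP parameters are illustrative assumptions, not necessarily the paper's exact settings.

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram_features(gray_face, grid=(4, 4), P=8, R=1):
        """Split a grayscale face into a grid of ROIs and concatenate
        one uniform-LBP histogram per ROI (grid/P/R are assumptions)."""
        lbp = local_binary_pattern(gray_face, P, R, method="uniform")
        n_bins = P + 2          # P+1 uniform patterns + one non-uniform bin
        h, w = lbp.shape
        feats = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                roi = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                          j * w // grid[1]:(j + 1) * w // grid[1]]
                hist, _ = np.histogram(roi, bins=n_bins,
                                       range=(0, n_bins), density=True)
                feats.append(hist)
        return np.concatenate(feats)

    Features like these can then be fed to any incrementally trainable classifier, for example one exposing scikit-learn's partial_fit interface.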

    A modified adaptive differential evolution algorithm for color image segmentation

    Image segmentation is an important low-level vision task: a perceptual grouping of pixels based on some similarity criteria. In this paper, a new differential evolution (DE) algorithm, modified adaptive differential evolution, is proposed for color image segmentation. The DE/current-to-pbest mutation strategy with an optional external archive and opposition-based learning are used to diversify the search space and expedite convergence. Control parameters are updated automatically to appropriate values, avoiding manual parameter tuning. To find an optimal number of clusters (the number of regions or segments), the average ratio of fuzzy overlap to fuzzy separation is used as a cluster validity index. The results demonstrate that the proposed technique outperforms state-of-the-art methods.
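
    For reference, the DE/current-to-pbest/1 mutation with an external archive (the strategy popularized by JADE-style adaptive DE) can be sketched as follows; population shapes and parameter values here are assumptions for illustration.

    import numpy as np

    def current_to_pbest_mutation(pop, fitness, archive, F=0.5, p=0.1, rng=None):
        """pop: (NP, D) population; fitness: (NP,) values to minimize;
        archive: (A, D) array of replaced parents (may be empty)."""
        rng = rng if rng is not None else np.random.default_rng()
        NP, _ = pop.shape
        n_pbest = max(1, int(p * NP))
        pbest_idx = np.argsort(fitness)[:n_pbest]        # top-p fraction
        union = np.vstack([pop, archive]) if len(archive) else pop
        mutants = np.empty_like(pop)
        for i in range(NP):
            xp = pop[rng.choice(pbest_idx)]              # random pbest member
            r1 = rng.integers(NP)
            while r1 == i:
                r1 = rng.integers(NP)
            r2 = rng.integers(len(union))                # r2 may index the archive
            while r2 == i or r2 == r1:
                r2 = rng.integers(len(union))
            # v_i = x_i + F * (x_pbest - x_i) + F * (x_r1 - x_r2)
            mutants[i] = pop[i] + F * (xp - pop[i]) + F * (pop[r1] - union[r2])
        return mutants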

    Multi-Modal CNN Features Fusion for Emotion Recognition: A Modified Xception Model

    Facial expression recognition (FER) is advancing human-computer interaction, especially today, when facial masks are commonly worn due to the COVID-19 pandemic. Traditional unimodal techniques for facial expression recognition may be ineffective under these circumstances. To address this challenge, multimodal approaches that incorporate data from other modalities, such as vocal expressions, have emerged as a promising solution. This paper proposes a novel deep-learning-based multimodal methodology to recognize facial expressions effectively under masked conditions. The approach utilizes two standard datasets, M-LFW-F and CREMA-D, to capture facial and vocal emotional expressions. A multimodal neural network is then trained using fusion techniques, outperforming conventional unimodal methods in facial expression recognition. Experimental evaluations demonstrate that the proposed approach achieves an accuracy of 79.81%, a significant improvement over the 68.81% accuracy attained by the unimodal technique. These results highlight the superior performance of the proposed approach in facial expression recognition under masked conditions.
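
    A minimal sketch of feature-level fusion is shown below, assuming PyTorch; the branch dimensions and layer sizes are placeholders rather than the paper's modified Xception architecture.

    import torch
    import torch.nn as nn

    class FusionEmotionNet(nn.Module):
        """Fuse precomputed visual and audio feature vectors by
        concatenating per-branch embeddings before classification."""
        def __init__(self, img_dim=2048, aud_dim=128, n_classes=7):
            super().__init__()
            # Stand-ins for the two unimodal feature extractors.
            self.img_branch = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU())
            self.aud_branch = nn.Sequential(nn.Linear(aud_dim, 64), nn.ReLU())
            # Fused head: concatenate both embeddings, then classify.
            self.head = nn.Sequential(
                nn.Linear(256 + 64, 128), nn.ReLU(),
                nn.Dropout(0.5), nn.Linear(128, n_classes))

        def forward(self, img_feats, aud_feats):
            fused = torch.cat([self.img_branch(img_feats),
                               self.aud_branch(aud_feats)], dim=1)
            return self.head(fused)

    # Usage with dummy batches of image and audio features:
    # logits = FusionEmotionNet()(torch.randn(8, 2048), torch.randn(8, 128))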

    Hybrid Facial Emotion Recognition Using CNN-Based Features

    In computer vision, the convolutional neural network (CNN) is a very popular model for emotion recognition, and it has been successfully applied to detect various objects in digital images with remarkable accuracy. In this paper, we extract learned features from a pre-trained CNN and evaluate different machine learning (ML) algorithms for classification. Our research examines the impact of replacing the standard softmax classifier with other ML algorithms applied to the FC6, FC7, and FC8 layers of deep convolutional neural networks (DCNNs). Experiments were conducted on two well-known CNN architectures, AlexNet and VGG-16, using a dataset of masked facial expressions (the MLF-W-FER dataset). The results demonstrate that Support Vector Machine (SVM) and ensemble classifiers outperform the softmax classifier on both architectures, improving accuracy by between 7% and 9% at each layer. This suggests that replacing the classifier at each layer of a DCNN with an SVM or ensemble classifier can be an efficient way to enhance image classification performance. Overall, our research demonstrates the potential of combining the strengths of CNNs with other ML algorithms to achieve better results in emotion recognition tasks. By extracting learned features from pre-trained CNNs and applying a variety of classifiers, we provide a framework for investigating alternative methods to improve the accuracy of image classification.
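
    To illustrate the general recipe, the sketch below extracts FC7 activations from a torchvision AlexNet and hands them to a scikit-learn SVM; dataset loading and preprocessing are assumed and omitted.

    import torch
    import torch.nn as nn
    from torchvision import models
    from sklearn.svm import SVC

    alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    alexnet.eval()

    # Keep everything up to and including FC7 (classifier index 4),
    # dropping the final FC8 layer and softmax stage.
    fc7_extractor = nn.Sequential(
        alexnet.features, alexnet.avgpool, nn.Flatten(),
        alexnet.classifier[:5])

    def extract_fc7(batch):        # batch: (N, 3, 224, 224), ImageNet-normalized
        with torch.no_grad():
            return fc7_extractor(batch).numpy()

    # X_train (preprocessed images) and y_train (emotion labels) are assumed:
    # clf = SVC(kernel="linear").fit(extract_fc7(X_train), y_train)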