3 research outputs found

    Facial Expression Recognition Using Uniform Local Binary Pattern with Improved Firefly Feature Selection

    Facial expressions are essential communication tools in our daily lives. In this paper, the uniform local binary pattern is employed to extract features from the face. However, this feature representation is very high-dimensional, and the high dimensionality not only affects recognition accuracy but also imposes computational constraints. Hence, to reduce the dimensionality of the feature vector, the firefly algorithm is used to select an optimal subset that leads to better classification accuracy. However, the standard firefly algorithm risks becoming trapped in local optima after a certain number of generations. This limitation is addressed by proposing an improved version of the firefly algorithm into which the great deluge algorithm (GDA) is integrated. The great deluge is a local search algorithm that enhances the exploitation ability of the firefly algorithm, thus preventing it from being trapped in local optima. The improved firefly algorithm has been employed in a facial expression recognition system. Experimental results on the Japanese Female Facial Expression (JAFFE) database show that the proposed approach yields good classification accuracy compared to state-of-the-art methods. The best classification accuracy obtained by the proposed method is 96.7% with 1230 selected features, whereas the Gabor-SRC method achieves 97.6% with 2560 features.
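    To make the pipeline described in this abstract concrete, the following is a minimal sketch of uniform LBP feature extraction combined with a binary firefly search that uses a great-deluge-style acceptance step. It assumes grayscale face images of equal size; the grid size, fitness function (1-NN cross-validation accuracy), flip probabilities, and water-level schedule are illustrative choices, not the paper's exact configuration.

```python
# Illustrative sketch: uniform LBP features + binary firefly selection with a
# great-deluge-style local acceptance rule (parameters are assumptions).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def ulbp_histogram(image, P=8, R=1, grid=(8, 8)):
    """Uniform LBP histograms computed over a grid of face blocks."""
    lbp = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2                      # P+1 uniform codes plus one "non-uniform" bin
    h, w = lbp.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

def fitness(mask, X, y):
    """Cross-validated accuracy of the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=1)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def firefly_gda_select(X, y, n_fireflies=10, n_iter=20, rng=None):
    """Binary firefly search with a great-deluge-style acceptance step."""
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    pop = rng.random((n_fireflies, d)) < 0.5    # random binary feature masks
    scores = np.array([fitness(m, X, y) for m in pop])
    level = scores.max()                        # great-deluge "water level"
    for _ in range(n_iter):
        for i in range(n_fireflies):
            j = scores.argmax()
            if scores[j] > scores[i]:           # move i toward the brighter firefly j
                flip = rng.random(d) < 0.1
                cand = np.where(flip, pop[j], pop[i])
            else:                               # randomly perturb the current best
                cand = pop[i] ^ (rng.random(d) < 0.02)
            s = fitness(cand, X, y)
            # great-deluge acceptance: keep candidates at or above the rising level
            if s >= level or s >= scores[i]:
                pop[i], scores[i] = cand, s
        level += 0.001                          # slowly raise the water level
    best = scores.argmax()
    return pop[best], scores[best]
```

    The great-deluge step here only changes the acceptance rule of each move; the firefly attraction step itself is unchanged, which is the sense in which it strengthens exploitation without replacing the global search.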

    Expression Recognition with Deep Features Extracted from Holistic and Part-based Models

    Facial expression recognition aims to accurately interpret facial muscle movements in affective states (emotions). Previous studies have proposed holistic analysis of the face, as well as the extraction of features pertaining only to specific facial regions, for expression recognition. While the latter has classically shown better performance, we explore this question here in the context of deep learning. In particular, this work provides a performance comparison of holistic and part-based deep learning models for expression recognition. In addition, we showcase the effectiveness of skip connections, which allow a network to infer from both low- and high-level feature maps. Our results suggest that holistic models outperform part-based models in the absence of skip connections. Finally, based on our findings, we propose a data augmentation scheme, which we incorporate into a part-based model. The proposed multi-face multi-part (MFMP) model leverages the rich information provided by part-based data augmentation, in which we train the network using facial parts extracted from different face samples of the same expression class. Extensive experiments on publicly available datasets show a significant improvement in facial expression classification with the proposed MFMP framework.
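    As a rough illustration of the ideas in this abstract, the sketch below shows a part-based network whose branches fuse low- and high-level feature maps through a skip connection, together with an MFMP-style augmentation step that reassembles training faces from parts of different samples sharing the same expression label. The layer sizes, number of parts, and mixing rule are assumptions for illustration, not the authors' architecture or training pipeline.

```python
# Illustrative PyTorch sketch: per-part branches with a skip connection and a
# simplified multi-face multi-part (MFMP) batch mixing step (all sizes assumed).
import torch
import torch.nn as nn

class PartBranch(nn.Module):
    """Small CNN branch that fuses a low-level map with a high-level map."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.low = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.high = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1))
        self.low_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32 + 64, out_dim)   # fuse low- and high-level features

    def forward(self, x):
        low = self.low(x)
        high = self.high(low)
        skip = self.low_pool(low)               # skip connection from the low-level map
        fused = torch.cat([skip.flatten(1), high.flatten(1)], dim=1)
        return self.fc(fused)

class PartBasedExpressionNet(nn.Module):
    """One branch per facial part, concatenated for expression classification."""
    def __init__(self, n_parts=3, n_classes=7):
        super().__init__()
        self.branches = nn.ModuleList(PartBranch() for _ in range(n_parts))
        self.classifier = nn.Linear(128 * n_parts, n_classes)

    def forward(self, parts):                   # parts: list of (B, 3, H, W) tensors
        feats = [b(p) for b, p in zip(self.branches, parts)]
        return self.classifier(torch.cat(feats, dim=1))

def mfmp_mix(parts, labels, rng=None):
    """MFMP-style augmentation: within each expression class, permute which
    sample contributes each facial part, so a training face is assembled from
    parts of different faces that share the same label."""
    rng = rng or torch.Generator().manual_seed(0)
    mixed = [p.clone() for p in parts]
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        for k in range(1, len(parts)):          # keep part 0 fixed, permute the rest
            perm = idx[torch.randperm(len(idx), generator=rng)]
            mixed[k][idx] = parts[k][perm]
    return mixed, labels
```

    In this reading, the augmentation only recombines parts within a class, so the label of each assembled face stays valid while the network sees many more part combinations than the dataset contains.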

    Robust facial expression classification using shape and appearance features

    No full text