
    A Comprehensive Study on State-Of-Art Learning Algorithms in Emotion Recognition

    Emotion recognition has become a prominent research subject owing to its potential uses in domains such as human-robot interaction, marketing, emotional gaming, and human-computer interfaces. A better understanding of emotions enables the development of technologies that can accurately interpret and respond to human emotions, leading to better user experiences. This paper presents a thorough analysis of developments in emotion recognition techniques, with an emphasis on the use of multiple sensors and computational algorithms. Our results show that combining more than one modality improves emotion recognition performance across a variety of metrics and computational techniques. This paper adds to the body of knowledge by thoroughly examining and contrasting several state-of-the-art computational techniques and measurements for emotion recognition. The study emphasizes the importance of combining a variety of modalities with cutting-edge machine learning algorithms to attain more precise and trustworthy emotion assessment. Additionally, we identify prospective avenues for further investigation and development, including the incorporation of multimodal data and the exploration of novel features and fusion methodologies. By offering practical guidance to practitioners and academics in the field of emotion recognition, this study contributes to the development of technology that can better comprehend and react to human emotions.
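    The abstract's central finding is that combining modalities improves recognition performance. A minimal sketch of one common way to do this is late fusion, where per-modality classifier outputs are averaged into a single prediction. The probability values, emotion labels, and modality names below are illustrative assumptions, not figures from the paper:

    ```python
    import numpy as np

    # Hypothetical per-modality emotion probabilities over the classes
    # [anger, happiness, sadness]; values are made up for illustration.
    face_probs = np.array([0.2, 0.7, 0.1])    # facial-expression model
    speech_probs = np.array([0.3, 0.5, 0.2])  # speech/audio model
    physio_probs = np.array([0.1, 0.6, 0.3])  # physiological-signal model

    def late_fusion(prob_vectors, weights=None):
        """Weighted average of per-modality probability vectors."""
        probs = np.stack(prob_vectors)
        if weights is None:
            weights = np.ones(len(prob_vectors)) / len(prob_vectors)
        fused = np.average(probs, axis=0, weights=weights)
        return fused / fused.sum()  # renormalize to a valid distribution

    emotions = ["anger", "happiness", "sadness"]
    fused = late_fusion([face_probs, speech_probs, physio_probs])
    predicted = emotions[int(np.argmax(fused))]
    print(predicted)  # happiness
    ```

    The per-modality weights could be tuned on validation data, which is one simple way a multimodal system can outperform any single modality.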

    Differentiating Between Spontaneous and Posed Facial Expression using Inception V4

    Master's thesis, Information and Communication Technology IKT590 - University of Agder, 2018. This thesis proposes a way to simplify and make solutions for spontaneous and posed facial expression analysis more efficient. Traditional approaches have used hand-crafted features and two image frames to differentiate between spontaneous and posed facial expressions. The proposed solution aims to be as flexible as possible and introduces two models to differentiate between posed and spontaneous facial expressions. We introduce Inception V4 as an algorithm to solve this task. The results indicate that Inception V4 may be too deep and unable to differentiate between spontaneous and posed facial expressions accurately. A shallow CNN model is also introduced; it performs better than the Inception V4 model, but neither comes close to state-of-the-art results. This may indicate that, to differentiate between spontaneous and posed facial expressions, the difference between the onset and apex frames of an expression is needed as input. This thesis also suggests an alternative algorithm based on our findings. For further work, an algorithm that is not as deep as Inception V4 is needed; however, by using parts of the Inception V4 architecture, we may be able to capture facial features better. The task of differentiating between spontaneous and posed emotion has also been investigated; however, the results do not show great promise, and the task has no state-of-the-art results against which to compare our approach. Our models, although lacking in performance, do seem able to capture relevant facial features from the dataset.
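    The abstract suggests that the difference between an expression's onset (neutral) and apex (peak) frames may be the input a network needs. A minimal sketch of that preprocessing step follows; the frame size, the synthetic frames, and the normalization scheme are assumptions for illustration, not the thesis's actual pipeline:

    ```python
    import numpy as np

    def onset_apex_difference(onset, apex):
        """Build a difference image from onset and apex frames.

        The difference highlights the facial regions that changed as the
        expression developed, which is the motion cue a CNN could use to
        separate posed from spontaneous expressions.
        """
        diff = apex.astype(np.float32) - onset.astype(np.float32)
        # Zero-mean, unit-variance normalization before feeding a network.
        return (diff - diff.mean()) / (diff.std() + 1e-8)

    # Synthetic 64x64 grayscale stand-ins for real video frames.
    rng = np.random.default_rng(0)
    onset_frame = rng.random((64, 64))
    apex_frame = onset_frame.copy()
    apex_frame[20:40, 20:40] += 0.5  # simulated change around mouth/eyes

    x = onset_apex_difference(onset_frame, apex_frame)
    print(x.shape)  # (64, 64)
    ```

    The normalized difference image would then be the single-channel input to a shallow CNN classifier, rather than feeding the two raw frames separately.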