Machine Learning Based Method to Design a Facial Emotion Detection and Chat Bot System
Recognizing emotions in images is one of the active areas of research, and this project aims to identify facial emotions. Our emotion recognition flow follows the established research pipeline: image acquisition, image pre-processing, face detection, feature extraction, and classification, with the machine applied after the emotions have been classified. Our framework relies on already-existing still images. This project aims to improve automated facial emotion recognition and to build interaction between the system and the user (bot).
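The abstract names the standard pipeline stages without detailing them, so the following is a minimal sketch of that flow, assuming OpenCV's Haar cascade for the face detection stage and a hypothetical pre-trained 48x48 grayscale Keras CNN (`model`) for the final classification stage; the emotion label set and input size are illustrative assumptions, not taken from the paper.

```python
import cv2
import numpy as np

# Illustrative label set; the paper does not specify its emotion classes.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def detect_faces(gray):
    """Face detection stage: locate face bounding boxes in a grayscale image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def preprocess(gray, box, size=48):
    """Pre-processing stage: crop, resize, and normalise a detected face."""
    x, y, w, h = box
    face = cv2.resize(gray[y:y + h, x:x + w], (size, size))
    return face.astype("float32") / 255.0

def classify(model, face):
    """Classification stage: map the face crop to an emotion label.
    `model` is an assumed pre-trained Keras CNN taking (48, 48, 1) input."""
    probs = model.predict(face[np.newaxis, ..., np.newaxis], verbose=0)[0]
    return EMOTIONS[int(np.argmax(probs))]

def recognise(image_path, model):
    """Full pipeline: acquisition -> pre-processing -> detection -> classification."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)  # acquisition
    return [classify(model, preprocess(gray, box)) for box in detect_faces(gray)]
```

A chat bot front end, as the title describes, would then condition its replies on the label returned by `recognise`.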
Multimodal Emotion Classification
Most NLP and Computer Vision tasks are limited by the scarcity of labelled data. In social media emotion classification and other related tasks, hashtags have been used as indicators to label data. With the rapid increase in emoji usage on social media, emojis are used as an additional feature for major social NLP tasks. However, this is less explored in the case of multimedia posts on social media, where posts are composed of both image and text. At the same time, we have seen a surge in interest in incorporating domain knowledge to improve machine understanding of text. In this paper, we investigate whether domain knowledge for emoji can improve the accuracy of the emotion classification task. We exploit the importance of different modalities from social media posts for the emotion classification task using state-of-the-art deep learning architectures. Our experiments demonstrate that the three modalities (text, emoji and images) encode different information to express emotion and can therefore complement each other. Our results also demonstrate that emoji sense depends on the textual context, and that emoji combined with text encodes better information than either considered separately. The highest accuracy of 71.98% is achieved with training data of 550k posts.

Comment: Accepted at the 2nd Emoji Workshop, co-located with The Web Conference 2019
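The paper's exact architecture is not reproduced in the abstract, so the following is a minimal late-fusion sketch in PyTorch of how the three modalities can complement each other: each modality is encoded separately and the embeddings are concatenated before a joint classifier. The encoder choices hinted at in the comments (a BERT-style text vector, an emoji embedding, pooled ResNet image features) and all dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultimodalEmotionClassifier(nn.Module):
    """Late-fusion sketch: project each modality's features to a shared
    size, concatenate, and classify jointly. Dimensions are illustrative."""

    def __init__(self, text_dim=768, emoji_dim=300, image_dim=2048,
                 hidden=512, num_emotions=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)    # e.g. BERT sentence vector
        self.emoji_proj = nn.Linear(emoji_dim, hidden)  # e.g. emoji embedding
        self.image_proj = nn.Linear(image_dim, hidden)  # e.g. ResNet pooled features
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(3 * hidden, num_emotions),
        )

    def forward(self, text_vec, emoji_vec, image_vec):
        # Concatenation lets the classifier weigh complementary signals
        # from text, emoji, and image, as the paper's findings suggest.
        fused = torch.cat([
            self.text_proj(text_vec),
            self.emoji_proj(emoji_vec),
            self.image_proj(image_vec),
        ], dim=-1)
        return self.classifier(fused)

# Usage with random stand-in features for a batch of 2 posts
model = MultimodalEmotionClassifier()
logits = model(torch.randn(2, 768), torch.randn(2, 300), torch.randn(2, 2048))
print(logits.shape)  # torch.Size([2, 4])
```

The paper's observation that emoji sense depends on textual context suggests a joint text-emoji encoder would likely outperform this independent-projection sketch; late fusion is shown here only as the simplest way to combine the three modalities.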