Recognizing Emotion in the Wild Using Multimodal Data

Abstract

In this work, we present our approach to recognizing human emotion and behavior in the wild using multimodal data. The study is divided into four tasks: group emotion recognition, driver gaze prediction, student engagement prediction, and emotion recognition from physiological signals. We explore multiple approaches, including classical machine learning tools such as random forests, state-of-the-art deep neural networks, and several fusion- and ensemble-based methods. We also show that similar approaches can be applied across tracks, as many of the features (e.g., facial features) generalize well to the different problems.