Multimodal Human Group Behavior Analysis
Human behaviors in a group setting involve a complex mixture of multiple modalities: audio, visual, linguistic, and human interactions. With the rapid progress of AI, automatic prediction and understanding of these behaviors are no longer a dream. In a negotiation, discovering human relationships and identifying the dominant person can be useful for decision making. In security settings, detecting nervous behaviors can help law enforcement agents spot suspicious people. In adversarial settings such as national elections and court defense, identifying persuasive speakers is a critical task. It is therefore beneficial to build accurate machine learning (ML) models to predict such human group behaviors. Two elements are needed for successful prediction of group behaviors. The first is to design domain-specific features for each modality. Social and psychological studies have uncovered various factors, including both individual cues and group interactions, which inspire us to extract relevant features computationally. In particular, the group interaction modality plays an important role, since human behaviors influence each other through interactions in a group. The second is effective multimodal ML models that align and integrate the different modalities for accurate predictions. However, most previous work ignores the group interaction modality and adopts only early fusion or late fusion to combine different modalities, which is not optimal. This thesis presents methods to train models on multimodal inputs from group interaction videos and to predict human group behaviors. First, we develop an ML algorithm to automatically predict human interactions from videos, which is the basis for extracting interaction features and modeling group behaviors. Second, we propose a multimodal method to identify dominant people in videos from multiple modalities.
Third, we study nervousness in human behavior by developing a hybrid method: group interaction feature engineering combined with individual facial embedding learning. Last, we introduce a multimodal fusion framework that enables us to predict how persuasive speakers are.
Overall, we develop one algorithm to extract group interactions and build three multimodal models to identify three kinds of human behavior in videos: dominance, nervousness, and persuasion. The experiments demonstrate the efficacy of the methods and analyze the contribution of each modality.
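The early-versus-late fusion distinction criticized above can be illustrated with a minimal sketch. All feature sizes, weights, and the sigmoid scorer here are invented for illustration; they are not the thesis's models.

```python
import math
import random

random.seed(0)

def randvec(n):
    """An illustrative random feature/weight vector (not real extracted features)."""
    return [random.gauss(0, 1) for _ in range(n)]

def sigmoid_score(x, w):
    # Sigmoid of a dot product: a stand-in for any learned per-modality model.
    return 1.0 / (1.0 + math.exp(-sum(xi * wi for xi, wi in zip(x, w))))

# Toy per-modality feature vectors for one speaker (illustrative sizes).
audio, visual, text = randvec(8), randvec(16), randvec(4)

# Early fusion: concatenate raw features, then apply a single model.
fused = audio + visual + text
p_early = sigmoid_score(fused, randvec(len(fused)))

# Late fusion: score each modality separately, then average the predictions.
scores = [sigmoid_score(m, randvec(len(m))) for m in (audio, visual, text)]
p_late = sum(scores) / len(scores)

print(f"early-fusion score: {p_early:.3f}  late-fusion score: {p_late:.3f}")
```

Early fusion lets the model learn cross-modal feature interactions but ignores per-modality structure; late fusion keeps modalities independent until the final combination. Neither captures interactions between people in a group, which is the gap the thesis targets.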
Deep Emotion Recognition in Textual Conversations: A Survey
While Emotion Recognition in Conversations (ERC) has seen a tremendous
advancement in the last few years, new applications and implementation
scenarios present novel challenges and opportunities. These range from
leveraging the conversational context and modelling speaker and emotion
dynamics, to interpreting common sense expressions, informal language, and
sarcasm; they also include real-time ERC, recognizing emotion causes,
reconciling different taxonomies across datasets, multilingual ERC, and
interpretability. This survey
starts by introducing ERC, elaborating on the challenges and opportunities
pertaining to this task. It proceeds with a description of the emotion
taxonomies and a variety of ERC benchmark datasets employing such taxonomies.
This is followed by descriptions of the most prominent works in ERC with
explanations of the Deep Learning architectures employed. Then, it provides
advisable ERC practices towards better frameworks, elaborating on methods to
deal with subjectivity in annotations and modelling, and methods to deal with
the typically unbalanced ERC datasets. Finally, it presents systematic review
tables comparing several works regarding the methods used and their
performance. The survey highlights the advantage of leveraging techniques to
address unbalanced data, the exploration of mixed emotions, and the benefits
of incorporating annotation subjectivity in the learning phase.
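A common technique for the unbalanced ERC datasets the survey discusses is inverse-frequency class weighting of the training loss. A minimal sketch follows; the label distribution is invented (though ERC corpora are typically dominated by a "neutral" class), and the normalization choice is one convention among several.

```python
from collections import Counter

# Invented toy label distribution: "neutral" dominates, rare emotions are scarce.
labels = ["neutral"] * 700 + ["joy"] * 150 + ["anger"] * 100 + ["fear"] * 50

counts = Counter(labels)
n, k = len(labels), len(counts)

# Inverse-frequency weights, normalized so a perfectly balanced dataset
# would give every class a weight of 1.0: weight(c) = n / (k * count(c)).
weights = {c: n / (k * counts[c]) for c in counts}

for c, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{c:8s} count={counts[c]:4d} weight={w:.2f}")
```

The resulting weights can be passed to a weighted cross-entropy loss so that misclassifying a rare emotion such as "fear" costs proportionally more than misclassifying "neutral".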