
    The MuSe 2021 Multimodal Sentiment Analysis Challenge: sentiment, emotion, physiological-emotion, and stress

    Multimodal Sentiment Analysis (MuSe) 2021 is a challenge focusing on the tasks of sentiment, emotion, physiological-emotion, and emotion-based stress recognition through a more comprehensive integration of the audio-visual, language, and biological signal modalities. The purpose of MuSe 2021 is to bring together communities from different disciplines, mainly the audio-visual emotion recognition community (signal-based), the sentiment analysis community (symbol-based), and the health informatics community. We present four distinct sub-challenges: MuSe-Wilder and MuSe-Stress, which focus on continuous emotion (valence and arousal) prediction; MuSe-Sent, in which participants recognise five classes each for valence and arousal; and MuSe-Physio, in which the novel aspect of 'physiological-emotion' is to be predicted. For this year's challenge, we utilise the MuSe-CaR dataset, which focuses on user-generated reviews, and introduce the Ulm-TSST dataset, which displays people in stressful dispositions. This paper also provides details on the state-of-the-art feature sets extracted from these datasets for utilisation by our baseline model, a Long Short-Term Memory Recurrent Neural Network. For each sub-challenge, a competitive baseline for participants is set; namely, on test, we report a Concordance Correlation Coefficient (CCC) of .4616 for MuSe-Wilder, .5088 for MuSe-Stress, and .4908 for MuSe-Physio. For MuSe-Sent, an F1 score of 32.82% is obtained.
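    The primary metric for the continuous sub-challenges, the Concordance Correlation Coefficient, can be sketched in a few lines of NumPy. This is a generic illustration of the metric, not the organisers' evaluation script:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance Correlation Coefficient (Lin, 1989):
    2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mx, my = y_true.mean(), y_pred.mean()
    vx, vy = y_true.var(), y_pred.var()  # population (biased) variances
    cov = ((y_true - mx) * (y_pred - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Perfect agreement yields 1.0; a constant prediction yields 0.0.
print(ccc([0.1, 0.4, 0.3, 0.8], [0.1, 0.4, 0.3, 0.8]))  # -> 1.0
```

    Unlike plain Pearson correlation, CCC also penalises shifts in mean and scale, which is why it is favoured for continuous valence/arousal traces.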

    The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional Reactions, and Stress

    The Multimodal Sentiment Analysis Challenge (MuSe) 2022 is dedicated to multimodal sentiment and emotion recognition. For this year's challenge, we feature three datasets: (i) the Passau Spontaneous Football Coach Humor (Passau-SFCH) dataset, which contains audio-visual recordings of German football coaches, labelled for the presence of humour; (ii) the Hume-Reaction dataset, in which reactions of individuals to emotional stimuli have been annotated with respect to seven emotional expression intensities; and (iii) the Ulm-Trier Social Stress Test (Ulm-TSST) dataset, comprising audio-visual data labelled with continuous emotion values (arousal and valence) of people in stressful dispositions. Using the introduced datasets, MuSe 2022 addresses three contemporary affective computing problems: in the Humor Detection Sub-Challenge (MuSe-Humor), spontaneous humour has to be recognised; in the Emotional Reactions Sub-Challenge (MuSe-Reaction), seven fine-grained 'in-the-wild' emotions have to be predicted; and in the Emotional Stress Sub-Challenge (MuSe-Stress), a continuous prediction of stressed emotion values is featured. The challenge is designed to attract different research communities, encouraging a fusion of their disciplines. Mainly, MuSe 2022 targets the communities of audio-visual emotion recognition, health informatics, and symbolic sentiment analysis. This baseline paper describes the datasets as well as the feature sets extracted from them. A recurrent neural network with LSTM cells is used to set competitive baseline results on the test partitions for each sub-challenge. We report an Area Under the Curve (AUC) of .8480 for MuSe-Humor; a mean (over seven classes) Pearson's Correlation Coefficient of .2801 for MuSe-Reaction; and Concordance Correlation Coefficient (CCC) values of .4931 and .4761 for valence and arousal, respectively, in MuSe-Stress.
    Comment: Preliminary baseline paper for the 3rd Multimodal Sentiment Analysis Challenge (MuSe) 2022, a full-day workshop at ACM Multimedia 202
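    The AUC reported for the binary humour detection task can be computed from pairwise rank comparisons (the Mann-Whitney formulation: the probability that a randomly chosen positive is scored above a randomly chosen negative). A minimal NumPy sketch, independent of the challenge code:

```python
import numpy as np

def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Count pairwise wins of positives over negatives; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

    A value of .5 corresponds to chance-level ranking, so the .8480 baseline indicates substantially better-than-chance humour detection.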

    Concept, Possibilities and Pilot-Testing of a New Smartphone Application for the Social and Life Sciences to Study Human Behavior Including Validation Data from Personality Psychology

    With the advent of the World Wide Web, the smartphone, and the Internet of Things, not only society but also the sciences are rapidly changing. In particular, the social sciences can profit from these digital developments, because scientists now have the power to study real-life human behavior on a large scale via smartphones and other devices connected to the Internet of Things. Although this sounds easy, scientists often face the problem that no practicable solution exists for participating in such a new scientific movement, owing to the lack of an interdisciplinary network. In that case, developing a new product, such as a smartphone application for gaining insights into human behavior, takes an enormous amount of time and resources. Given this problem, the present work presents an easy-to-use smartphone application that social scientists can apply to study a large range of scientific questions. The application provides measurements of variables by tracking smartphone-use patterns, such as call behavior, application use (e.g., social media), GPS, and many others. In addition, the presented Android-based smartphone application, called Insights, can also be used to administer self-report questionnaires for conducting experience sampling and to search for co-variations between smartphone usage/smartphone data and self-report data. Of importance, the present work gives a detailed overview of how to conduct a study using an application such as Insights, from designing the study and installing the application to analyzing the data. Server requirements and privacy issues are also discussed. Furthermore, first validation data from personality psychology are presented. Such validation data are important in establishing trust in the applied technology to track behavior.
    In sum, the aim of the present work is (i) to provide interested scientists with a short overview of how to conduct a study with smartphone app tracking technology, (ii) to present the features of the designed smartphone application, and (iii) to demonstrate its validity with a proof-of-concept study, i.e., by correlating smartphone usage with personality measures.
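    The proof-of-concept validation described above amounts to correlating a tracked usage variable with a self-report score. A minimal sketch with entirely hypothetical numbers (the variable names and values below are illustrative and are not data from the Insights study):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

# Hypothetical tracked variable vs. hypothetical self-report scale scores.
daily_app_minutes = [12, 45, 30, 80, 55, 20]
extraversion_score = [2.1, 3.8, 3.0, 4.5, 4.0, 2.6]
print(round(pearson_r(daily_app_minutes, extraversion_score), 3))
```

    In practice such correlations would of course be computed over many participants and interpreted together with significance tests and corrections for multiple comparisons.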

    MuSe 2021 challenge: multimodal emotion, sentiment, physiological-emotion, and stress detection

    The 2nd Multimodal Sentiment Analysis (MuSe) 2021 Challenge-based Workshop is held in conjunction with ACM Multimedia’21. Two datasets are provided as part of the challenge. Firstly, the MuSe-CaR dataset, which focuses on user-generated, emotional vehicle reviews from YouTube, and secondly, the novel Ulm-Trier Social Stress Test (Ulm-TSST) dataset, which shows people in stressful circumstances. Participants are faced with four sub-challenges: predicting arousal and valence in a time- and value-continuous manner on a) MuSe-CaR (MuSe-Wilder) and b) Ulm-TSST (MuSe-Stress); c) predicting emotion classes created in an unsupervised manner on MuSe-CaR (MuSe-Sent); and d) predicting a fusion of human-annotated arousal and measured galvanic skin response, also as a continuous target, on Ulm-TSST (MuSe-Physio). In this summary, we describe the motivation, the sub-challenges, the challenge conditions, the participation, and the most successful approaches.

    The MuSe 2023 Multimodal Sentiment Analysis Challenge: Mimicked Emotions, Cross-Cultural Humour, and Personalisation

    MuSe 2023 is a set of shared tasks addressing three different contemporary multimodal affect and sentiment analysis problems: In the Mimicked Emotions Sub-Challenge (MuSe-Mimic), participants predict three continuous emotion targets. This sub-challenge utilises the Hume-Vidmimic dataset, comprising user-generated videos. For the Cross-Cultural Humour Detection Sub-Challenge (MuSe-Humour), an extension of the Passau Spontaneous Football Coach Humour (Passau-SFCH) dataset is provided. Participants predict the presence of spontaneous humour in a cross-cultural setting. The Personalisation Sub-Challenge (MuSe-Personalisation) is based on the Ulm-Trier Social Stress Test (Ulm-TSST) dataset, featuring recordings of subjects in a stressed situation. Here, arousal and valence signals are to be predicted, while parts of the test labels are made available to facilitate personalisation. MuSe 2023 seeks to bring together a broad audience from different research communities such as audio-visual emotion recognition, natural language processing, signal processing, and health informatics. In this baseline paper, we introduce the datasets, sub-challenges, and provided feature sets. As a competitive baseline system, a Gated Recurrent Unit (GRU)-Recurrent Neural Network (RNN) is employed. On the respective sub-challenges' test datasets, it achieves a mean (across three continuous intensity targets) Pearson's Correlation Coefficient of .4727 for MuSe-Mimic, an Area Under the Curve (AUC) value of .8310 for MuSe-Humour, and Concordance Correlation Coefficient (CCC) values of .7482 for arousal and .7827 for valence in the MuSe-Personalisation sub-challenge.
    Comment: Baseline paper for the 4th Multimodal Sentiment Analysis Challenge (MuSe) 2023, a workshop at ACM Multimedia 202
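    The GRU cell at the core of such a baseline combines an update gate, a reset gate, and a candidate state. A minimal NumPy sketch of a single GRU step with random toy weights (an illustration of the recurrence only, not the organisers' baseline implementation):

```python
import numpy as np

def gru_step(x, h, params):
    """One GRU step (Cho et al., 2014): z = update gate, r = reset gate,
    n = candidate state; the new state interpolates between n and h."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    Wz, Uz, bz, Wr, Ur, br, Wn, Un, bn = params
    z = sig(Wz @ x + Uz @ h + bz)            # how much old state to keep
    r = sig(Wr @ x + Ur @ h + br)            # how much old state feeds n
    n = np.tanh(Wn @ x + Un @ (r * h) + bn)  # candidate state
    return (1 - z) * n + z * h

rng = np.random.default_rng(0)
d_in, d_h = 4, 3  # toy feature and hidden sizes

def gate_params():
    return (rng.standard_normal((d_h, d_in)) * 0.1,
            rng.standard_normal((d_h, d_h)) * 0.1,
            np.zeros(d_h))

params = gate_params() + gate_params() + gate_params()  # z, r, n gates

# Roll the cell over a sequence of 10 frame-level feature vectors.
h = np.zeros(d_h)
for x in rng.standard_normal((10, d_in)):
    h = gru_step(x, h, params)
print(h.shape)  # (3,)
```

    In a real baseline the final (or every) hidden state would feed a linear output layer, e.g. producing per-frame arousal and valence predictions scored with CCC.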
