Recognizing emotions in spoken dialogue with acoustic and lexical cues
Automatic emotion recognition has long been a focus of Affective Computing. It has
become increasingly apparent that awareness of human emotions in Human-Computer
Interaction (HCI) is crucial for advancing related technologies, such as dialogue
systems. However, the performance of current automatic emotion recognition systems
remains disappointing compared to human performance. Current research on emotion
recognition in spoken dialogue focuses on identifying better feature representations
and recognition models from a data-driven point of view. The goal of this thesis
is to explore how incorporating prior knowledge of human emotion recognition
in the automatic model can improve state-of-the-art performance of automatic
emotion recognition in spoken dialogue. Specifically, we study this by proposing
knowledge-inspired features representing occurrences of disfluency and non-verbal
vocalisation in speech, and by building a multimodal recognition model that combines
acoustic and lexical features in a knowledge-inspired hierarchical structure. In our
study, emotions are represented with the Arousal, Expectancy, Power, and Valence
emotion dimensions. We build unimodal and multimodal emotion recognition
models to study the proposed features and modelling approach, and perform emotion
recognition on both spontaneous and acted dialogue.
Psycholinguistic studies have suggested that DISfluencies and Non-verbal
Vocalisations (DIS-NVs) in dialogue are related to emotions. However, these affective
cues in spoken dialogue are overlooked by current automatic emotion recognition
research. Thus, we propose features for recognizing emotions in spoken dialogue
which describe five types of DIS-NV in utterances, namely filled pause, filler, stutter,
laughter, and audible breath. Our experiments show that this small set of features
is predictive of emotions. Our DIS-NV features achieve better performance than
benchmark acoustic and lexical features for recognizing all emotion dimensions in
spontaneous dialogue. Consistent with Psycholinguistic studies, the DIS-NV features
are especially predictive of the Expectancy dimension of emotion, which relates to
speaker uncertainty. Our study illustrates the relationship between DIS-NVs and
emotions in dialogue, and thus also contributes to Psycholinguistic understanding of these cues.
Note that our DIS-NV features are based on manual annotations, yet our
long-term goal is to apply our emotion recognition model to HCI systems. Thus, we
conduct preliminary experiments on automatic detection of DIS-NVs, and on using
automatically detected DIS-NV features for emotion recognition. Our results show
that DIS-NVs can be automatically detected from speech with stable accuracy, and
auto-detected DIS-NV features remain predictive of emotions in spontaneous dialogue.
This suggests that our emotion recognition model can be applied to a fully automatic
system in the future, and holds the potential to improve the quality of emotional
interaction in current HCI systems.
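To make the feature design concrete, here is a minimal sketch of per-utterance DIS-NV feature extraction. It assumes token-aligned DIS-NV annotations and a simple frequency normalisation; the tag names and the exact feature definitions are illustrative assumptions, not necessarily those used in the thesis.

```python
from collections import Counter

# The five DIS-NV types named above.
DIS_NV_TYPES = ("filled_pause", "filler", "stutter", "laughter", "audible_breath")

def dis_nv_features(tokens, tags):
    """Return one value per DIS-NV type for a single utterance.

    tokens -- the words of the utterance
    tags   -- DIS-NV annotations aligned with the tokens
              (None where a token carries no DIS-NV annotation)

    Each feature is the fraction of tokens carrying that annotation.
    This normalisation is an assumption; raw counts or durations are
    equally plausible definitions.
    """
    counts = Counter(t for t in tags if t is not None)
    n = max(len(tokens), 1)  # guard against empty utterances
    return [counts[t] / n for t in DIS_NV_TYPES]

# Example: "um I I think <laughter> yes"
tokens = ["um", "I", "I", "think", "<laughter>", "yes"]
tags = ["filled_pause", None, "stutter", None, "laughter", None]
print(dis_nv_features(tokens, tags))  # fractions of 6 tokens: [1/6, 0, 1/6, 1/6, 0]
```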
To study the robustness of the DIS-NV features, we conduct cross-corpora
experiments on both spontaneous and acted dialogue. We identify how dialogue
type influences the performance of DIS-NV features and emotion recognition models.
DIS-NVs contain additional information beyond acoustic characteristics or lexical
content. Thus, we study the benefit of modality fusion for emotion recognition with the
DIS-NV features. Previous work combines different feature sets by fusing modalities
at the same level using two types of fusion strategies: Feature-Level (FL) fusion,
which concatenates feature sets before recognition; and Decision-Level (DL) fusion,
which makes the final decision based on outputs of all unimodal models. However,
features from different modalities may describe data at different time scales or levels
of abstraction. Moreover, Cognitive Science research indicates that when perceiving
emotions, humans make use of information from different modalities at different
cognitive levels and time steps. Therefore, we propose a HierarchicaL (HL) fusion
strategy for multimodal emotion recognition, which places features that describe
data over longer time intervals, or that are more abstract, at higher levels of its
knowledge-inspired hierarchy. Compared to FL and DL fusion, HL fusion accounts for
both inter- and intra-modality differences. Our experiments show that HL fusion
consistently outperforms FL and DL fusion on multimodal emotion recognition in both
spontaneous and acted dialogue. The HL model combining our DIS-NV features with
benchmark acoustic and lexical features improves current performance of multimodal
emotion recognition in spoken dialogue.
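A minimal PyTorch sketch may clarify the contrast between the three fusion strategies. The ordering of modalities in the hierarchy (acoustic at the lowest level, then lexical, then DIS-NV), all layer sizes, and the assumption that the feature streams share a common time axis are illustrative choices, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

def fl_fusion(feature_sets):
    # Feature-Level fusion: concatenate all feature sets before recognition.
    return torch.cat(feature_sets, dim=-1)

def dl_fusion(unimodal_predictions):
    # Decision-Level fusion: combine the outputs of unimodal models
    # (a simple mean here; weighted combination is also common).
    return torch.stack(unimodal_predictions).mean(dim=0)

class HLFusion(nn.Module):
    """HierarchicaL fusion sketch: fine-grained acoustic features enter
    at the lowest level; more abstract lexical and DIS-NV features are
    injected at successively higher levels."""

    def __init__(self, d_acoustic, d_lexical, d_disnv, d_hidden=64):
        super().__init__()
        self.level1 = nn.LSTM(d_acoustic, d_hidden, batch_first=True)
        self.level2 = nn.LSTM(d_hidden + d_lexical, d_hidden, batch_first=True)
        self.level3 = nn.LSTM(d_hidden + d_disnv, d_hidden, batch_first=True)
        self.head = nn.Linear(d_hidden, 1)  # one emotion dimension

    def forward(self, acoustic, lexical, disnv):
        h1, _ = self.level1(acoustic)                      # acoustic only
        h2, _ = self.level2(torch.cat([h1, lexical], -1))  # + lexical
        h3, _ = self.level3(torch.cat([h2, disnv], -1))    # + DIS-NV
        return self.head(h3[:, -1])                        # last time step
```

Placing a feature stream higher in the stack delays its influence until the lower-level representation has been summarised, which is how HL fusion can encode differences in time scale and abstraction both within and across modalities.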
To study how other emotion-related tasks of spoken dialogue can benefit from the
proposed approaches, we apply the DIS-NV features and the HL fusion strategy to
recognize movie-induced emotions. Our experiments show that although designed
for recognizing emotions in spoken dialogue, DIS-NV features and HL fusion
remain effective for recognizing movie-induced emotions. This suggests that other
emotion-related tasks can also benefit from the proposed features and model structure.
Polarity and Intensity: the Two Aspects of Sentiment Analysis
Current multimodal sentiment analysis frames sentiment score prediction as a
general Machine Learning task. However, what the sentiment score actually
represents has often been overlooked. As a measurement of opinions and
affective states, a sentiment score generally consists of two aspects: polarity
and intensity. We decompose sentiment scores into these two aspects and study
how they are conveyed through individual modalities and combined multimodal
models in a naturalistic monologue setting. In particular, we build unimodal
and multimodal multi-task learning models with sentiment score prediction as
the main task and polarity and/or intensity classification as the auxiliary
tasks. Our experiments show that sentiment analysis benefits from multi-task
learning, and that individual modalities differ in how they convey the polarity and
intensity aspects of sentiment.
Comment: Published at the First Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML) of ACL 2018
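A minimal sketch of the multi-task setup described above, assuming a shared encoder with one regression head for the sentiment score and two auxiliary classification heads; the layer sizes, the number of intensity classes, and the loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSentiment(nn.Module):
    def __init__(self, d_in, d_hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.score = nn.Linear(d_hidden, 1)      # main task: sentiment score
        self.polarity = nn.Linear(d_hidden, 2)   # auxiliary: negative/positive
        self.intensity = nn.Linear(d_hidden, 3)  # auxiliary: weak/medium/strong

    def forward(self, x):
        h = self.encoder(x)
        return self.score(h).squeeze(-1), self.polarity(h), self.intensity(h)

def multitask_loss(pred, target, aux_weight=0.5):
    # Main regression loss plus down-weighted auxiliary classification losses.
    score, pol_logits, int_logits = pred
    y_score, y_pol, y_int = target
    return (F.mse_loss(score, y_score)
            + aux_weight * F.cross_entropy(pol_logits, y_pol)
            + aux_weight * F.cross_entropy(int_logits, y_int))
```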
Emotion Recognition in Spontaneous and Acted Dialogues
In this work, we compare emotion recognition on two types of speech: spontaneous and acted dialogues. Experiments were conducted on the AVEC2012 database of spontaneous dialogues and the IEMOCAP database of acted dialogues. We studied the performance of two types of acoustic features for emotion recognition: knowledge-inspired disfluency and non-verbal vocalisation (DIS-NV) features, and statistical Low-Level Descriptor (LLD) based features. Both Support Vector Machines (SVM) and Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) were built using each feature set on each emotional database. Our work aims to identify aspects of the data that constrain the effectiveness of models and features. Our results show that the performance of different types of features and models is influenced by the type of dialogue and the amount of training data. Because DIS-NVs are less frequent in acted dialogues than in spontaneous dialogues, the DIS-NV features perform better than the LLD features when recognizing emotions in spontaneous dialogues, but not in acted dialogues. The LSTM-RNN model gives better performance than the SVM model when there is enough training data, but the complex structure of an LSTM-RNN model may limit its performance when there is less training data available, and may also risk over-fitting. Additionally, we find that long-distance contexts may be more useful when performing emotion recognition at the word level than at the utterance level.
Keywords: emotion recognition, disfluency, LSTM, dialogue
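For the SVM side of this comparison, a minimal sketch with placeholder data (the LSTM-RNN side would mirror the recurrent models sketched earlier); the feature dimensionality and label set are stand-ins, not the AVEC2012 or IEMOCAP specifics.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for utterance-level feature vectors
# (e.g. the five DIS-NV features) and emotion labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))     # 200 utterances x 5 features
y = rng.integers(0, 4, size=200)  # 4 emotion classes (assumption)

svm = SVC(kernel="linear")
print(cross_val_score(svm, X, y, cv=5).mean())
```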
Crafting with a Robot Assistant: Use Social Cues to Inform Adaptive Handovers in Human-Robot Collaboration
We study human-robot handovers in a naturalistic collaboration scenario,
where a mobile manipulator robot assists a person during a crafting session by
providing and retrieving objects used for wooden piece assembly (functional
activities) and painting (creative activities). We collect quantitative and
qualitative data from 20 participants in a Wizard-of-Oz study, generating the
Functional And Creative Tasks Human-Robot Collaboration dataset (the FACT HRC
dataset), available to the research community. This work illustrates how social
cues and task context inform the temporal-spatial coordination in human-robot
handovers, and how human-robot collaboration is shaped by and in turn
influences people's functional and creative activities.
Comment: accepted at HRI 2023