Can a robot laugh with you?: Shared laughter generation for empathetic spoken dialogue
Developing a conversational robot that laughs together with people: toward conversational AI that empathizes and coexists with humans. Kyoto University press release, 2022-09-29.
Spoken dialogue systems must be able to express empathy to achieve natural interaction with human users. However, laughter generation requires a high level of dialogue understanding, so implementing laughter in existing systems, such as conversational robots, has been challenging. As a first step toward solving this problem, rather than generating laughter from user dialogue, we focus on “shared laughter,” where a user laughs first, with either a solo laugh or a speech-laugh (the initial laugh), and the system laughs in turn (the response laugh). The proposed system consists of three models: 1) initial laugh detection, 2) shared laughter prediction, and 3) laugh type selection. We trained each model on a human-robot speed-dating dialogue corpus. For the first model, a recurrent neural network was applied, and detection performance reached an F1 score of 82.6%. The second model uses the acoustic and prosodic features of the initial laugh and achieved prediction accuracy above random. The third model selects the type of the system’s response laugh, social or mirthful, based on the same features of the initial laugh. We then implemented the full shared laughter generation system in an attentive listening dialogue system and conducted a dialogue listening experiment. The proposed system improved impressions of the dialogue system, such as perceived empathy, compared to a naive baseline without laughter and a reactive system that always responded with only social laughs. We propose that our system can be used for situated robot interaction and also emphasize the need to integrate proper empathetic laughs into conversational robots and agents.
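The abstract describes a three-stage decision flow. The following Python sketch illustrates how such a pipeline could be wired together; the feature set, thresholds, and stub decision logic are illustrative assumptions, not the authors' trained models.

```python
# Illustrative sketch (not the authors' code) of the three-stage
# shared-laughter decision flow described above. The feature names,
# thresholds, and stub rules below are assumptions for illustration;
# the paper uses trained models (an RNN detector, etc.) at each stage.

from dataclasses import dataclass
from enum import Enum
from typing import Optional


class LaughType(Enum):
    SOCIAL = "social"      # polite acknowledgement laugh
    MIRTHFUL = "mirthful"  # amused, higher-arousal laugh


@dataclass
class LaughFeatures:
    """Acoustic/prosodic features of the user's initial laugh (assumed set)."""
    f0_mean: float   # mean pitch in Hz
    energy: float    # normalized loudness, 0..1
    duration: float  # seconds
    is_laugh: bool   # stage-1 detector output (an RNN in the paper)


def shared_laughter_response(feats: LaughFeatures) -> Optional[LaughType]:
    # Stage 1: only react when an initial laugh was detected
    # (a solo laugh or a speech-laugh).
    if not feats.is_laugh:
        return None
    # Stage 2: predict whether shared laughter is appropriate here.
    # A real system would use a trained classifier; a crude energy
    # threshold stands in for it.
    if feats.energy < 0.3:
        return None
    # Stage 3: select the response laugh type from the same features.
    # Here, longer and louder initial laughs map to a mirthful response.
    if feats.energy > 0.6 and feats.duration > 0.8:
        return LaughType.MIRTHFUL
    return LaughType.SOCIAL


if __name__ == "__main__":
    user_laugh = LaughFeatures(f0_mean=220.0, energy=0.7, duration=1.2, is_laugh=True)
    print(shared_laughter_response(user_laugh))  # LaughType.MIRTHFUL
```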
Speech-based recognition of self-reported and observed emotion in a dimensional space
The differences between self-reported and observed emotion have been only marginally investigated in the context of speech-based automatic emotion recognition. We address this issue by comparing self-reported emotion ratings to observed emotion ratings and examining how differences between these two types of ratings affect the development and performance of automatic emotion recognizers built on them. A dimensional approach to emotion modeling is adopted: the ratings are based on continuous arousal and valence scales. We describe the TNO-Gaming Corpus, which contains spontaneous vocal and facial expressions elicited via a multiplayer videogame and includes emotion annotations obtained both via self-report and via observation by outside observers. Comparisons show discrepancies between self-reported and observed emotion ratings, which are also reflected in the performance of the emotion recognizers developed on them. Using Support Vector Regression in combination with acoustic and textual features, we develop recognizers of arousal and valence that predict points in a two-dimensional arousal-valence space. The results show that self-reported emotion is much harder to recognize than observed emotion, and that averaging ratings from multiple observers improves performance.
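As a rough illustration of the approach this abstract describes, the scikit-learn sketch below trains one Support Vector Regression model per dimension to predict points in arousal-valence space. The synthetic features, ratings, and hyperparameters are placeholders, not the TNO-Gaming Corpus setup.

```python
# Minimal scikit-learn sketch of dimensional emotion regression as in the
# abstract: one SVR per axis, jointly predicting points in a 2-D
# arousal-valence space. All data below is synthetic.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stand-in for fused acoustic + textual feature vectors
# (e.g., prosody statistics plus lexical features): 200 samples, 40 dims.
X = rng.normal(size=(200, 40))
# Continuous arousal and valence ratings in [-1, 1], e.g., averaged over
# multiple observers (which the paper finds improves performance).
y_arousal = np.tanh(X[:, :5].sum(axis=1)) + 0.1 * rng.normal(size=200)
y_valence = np.tanh(X[:, 5:10].sum(axis=1)) + 0.1 * rng.normal(size=200)

# One regressor per dimension; together their outputs form a point
# in the arousal-valence plane.
arousal_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
valence_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
arousal_model.fit(X[:150], y_arousal[:150])
valence_model.fit(X[:150], y_valence[:150])

points = np.column_stack([arousal_model.predict(X[150:]),
                          valence_model.predict(X[150:])])
print(points[:3])  # predicted (arousal, valence) coordinates
```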
Impact of annotation modality on label quality and model performance in the automatic assessment of laughter in-the-wild
Laughter is considered one of the most overt signals of joy. It is well recognized as a multimodal phenomenon but is most commonly detected by sensing the sound of laughter. It is unclear how the perception and annotation of laughter differ when laughter is annotated from other modalities, such as video of the body movements it produces. In this paper we take a first step in this direction by asking if and how well laughter can be annotated when only audio, only video (containing full-body movement information), or both audiovisual modalities are available to annotators. We ask whether annotations of laughter are congruent across modalities and compare the effect that labeling modality has on machine learning model performance. We compare annotations and models for laughter detection, intensity estimation, and segmentation, three tasks common in previous studies of laughter. Our analysis of more than 4,000 annotations acquired from 48 annotators revealed evidence of incongruity in the perception of laughter and its intensity between modalities. Further analysis of annotations against consolidated audiovisual reference annotations revealed that recall was lower on average for video than for audio, but tended to increase with the intensity of the laughter samples. Our machine learning experiments compared the performance of state-of-the-art unimodal (audio-based, video-based, and acceleration-based) and multimodal models for different combinations of input modality, training label modality, and testing label modality. Models with video and acceleration inputs performed similarly regardless of training label modality, suggesting that it may be entirely appropriate to train laughter detection models from body movements using video-acquired labels, despite their lower inter-rater agreement.
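To make the recall comparison concrete, the small sketch below scores toy per-frame laughter annotations from single-modality conditions against a consolidated audiovisual reference. The label sequences are invented for illustration and are not the study's data.

```python
# Illustrative computation (not the authors' code) of the kind of
# cross-modality comparison the abstract describes: frame-level recall
# of each annotation condition against consolidated audiovisual
# reference labels. All label sequences below are made up.

import numpy as np


def frame_recall(reference: np.ndarray, annotation: np.ndarray) -> float:
    """Fraction of reference laughter frames the annotator also marked."""
    positives = reference == 1
    if not positives.any():
        return float("nan")
    return float((annotation[positives] == 1).mean())


# 1 = laughter, 0 = no laughter, one label per time frame.
reference_av = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0, 0])  # consolidated A/V
audio_only   = np.array([0, 1, 1, 1, 0, 0, 1, 0, 0, 0])  # heard the laugh
video_only   = np.array([0, 0, 1, 1, 0, 0, 0, 0, 0, 0])  # saw body movement

for name, ann in [("audio", audio_only), ("video", video_only)]:
    print(f"{name}: recall = {frame_recall(reference_av, ann):.2f}")
# The study found video recall lower on average than audio recall,
# but rising with laughter intensity.
```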
- …