241 research outputs found
Ubiquitous Technologies for Emotion Recognition
Emotions play a very important role in how we think and behave. The emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions change is thus highly relevant to understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.
Dimensional Modeling of Emotions in Text with Appraisal Theories: Corpus Creation, Annotation Reliability, and Prediction
The most prominent tasks in emotion analysis are to assign emotions to texts
and to understand how emotions manifest in language. An observation for NLP is
that emotions can be communicated implicitly by referring to events, appealing
to an empathetic, intersubjective understanding of events, even without
explicitly mentioning an emotion name. In psychology, the class of emotion
theories known as appraisal theories aims at explaining the link between events
and emotions. Appraisals can be formalized as variables that measure a
cognitive evaluation by people living through an event that they consider
relevant. They include assessments of whether an event is novel, whether the
person considers themselves responsible, whether it is in line with their own
goals, and many others. Such appraisals explain which emotions develop from
an event, e.g., that a novel situation can induce surprise or one with
uncertain consequences could evoke fear. We analyze the suitability of
appraisal theories for emotion analysis in text with the goal of understanding
if appraisal concepts can reliably be reconstructed by annotators, if they can
be predicted by text classifiers, and if appraisal concepts help to identify
emotion categories. To achieve that, we compile a corpus by asking people to
textually describe events that triggered particular emotions and to disclose
their appraisals. Then, we ask readers to reconstruct emotions and appraisals
from the text. This setup allows us to measure if emotions and appraisals can
be recovered purely from text and provides a human baseline. Our comparison of
text classification methods to human annotators shows that both can reliably
detect emotions and appraisals with similar performance. Therefore, appraisals
constitute an alternative computational emotion analysis paradigm and further
improve the categorization of emotions in text with joint models.
Comment: Computational Linguistics Journal, Issue No. 1, March 2023; 71 pages, 13 figures, 19 tables.
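The idea of mapping appraisal variables to emotion categories can be sketched as follows. This is an illustrative toy example, not the paper's method: the appraisal dimensions and prototype values are hypothetical, and the paper itself uses trained text classifiers rather than nearest-centroid matching.

```python
# Hypothetical sketch: predicting an emotion category from appraisal
# variables via nearest-prototype matching. Dimensions and values below
# are illustrative only (not taken from the corpus described above).

# Each emotion gets a prototype appraisal vector:
# (novelty, goal_conduciveness, own_responsibility, certainty), each in [0, 1].
PROTOTYPES = {
    "surprise": (0.9, 0.5, 0.2, 0.3),   # highly novel event
    "fear":     (0.7, 0.1, 0.1, 0.1),   # uncertain, goal-obstructive
    "joy":      (0.4, 0.9, 0.5, 0.8),   # goal-conducive, certain
    "guilt":    (0.3, 0.1, 0.9, 0.7),   # self-caused negative outcome
}

def predict_emotion(appraisals):
    """Return the emotion whose prototype is closest (squared Euclidean
    distance) to the given appraisal vector."""
    def dist(proto):
        return sum((a - p) ** 2 for a, p in zip(appraisals, proto))
    return min(PROTOTYPES, key=lambda e: dist(PROTOTYPES[e]))

# A novel, uncertain, goal-obstructive event maps to fear:
print(predict_emotion((0.8, 0.1, 0.1, 0.15)))  # -> fear
```

This mirrors the abstract's claim that appraisal concepts carry enough signal to identify emotion categories, e.g. high novelty with high uncertainty pointing toward fear rather than surprise.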
Modeling Emotional Valence Integration From Voice and Touch
In the context of designing multimodal social interactions for Human–Computer Interaction and Computer-Mediated Communication, we conducted an experimental study to investigate how participants combine voice expressions with tactile stimulation to evaluate emotional valence (EV). Audio and tactile stimuli were first presented separately and then presented together. Audio stimuli comprised positive and negative voice expressions, and tactile stimuli consisted of different levels of air-jet tactile stimulation applied to the participants' arms. Participants were asked to evaluate the communicated EV on a continuous scale. Information Integration Theory was used to model the multimodal valence perception process. Analyses showed that participants generally integrated both sources of information to evaluate EV. The main integration rule was an averaging rule, and the predominance of one modality over the other was specific to each individual.
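The averaging rule from Information Integration Theory can be sketched as a weighted mean of the per-modality valences. The function name, weights, and values below are hypothetical; per the study, the weights (modality dominance) would differ between individuals.

```python
# Illustrative sketch of an averaging integration rule (Information
# Integration Theory): combined valence is a weighted mean of the voice
# and touch valences. Weights are hypothetical, individual-specific values.

def integrate_valence(voice_valence, touch_valence, w_voice=0.5, w_touch=0.5):
    """Averaging rule: weighted mean of the two modality valences."""
    return (w_voice * voice_valence + w_touch * touch_valence) / (w_voice + w_touch)

# With equal weights, a positive voice (+0.8) combined with a mildly
# negative touch (-0.2) yields a mildly positive overall valence (~0.3):
print(round(integrate_valence(0.8, -0.2), 2))
```

An individual dominated by the tactile modality would simply carry a larger `w_touch`, pulling the integrated valence toward the touch stimulus.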