The ordinal nature of emotions
Representing computationally everyday emotional
states is a challenging task and, arguably, one of the most fundamental
for affective computing. Standard practice in emotion annotation
is to ask humans to assign an absolute value of intensity
to each emotional behavior they observe. Psychological theories
and evidence from multiple disciplines including neuroscience,
economics and artificial intelligence, however, suggest that the
task of assigning reference-based (relative) values to subjective
notions is better aligned with the underlying representations
than assigning absolute values. Evidence also shows that we
use reference points, or anchors, against which we evaluate
values such as the emotional state of a stimulus, suggesting
again that ordinal labels are a more suitable way to represent
emotions. This paper draws together the theoretical reasons to
favor relative over absolute labels for representing and annotating
emotion, reviewing the literature across several disciplines. We
go on to discuss good and bad practices of treating ordinal
and other forms of annotation data, and make the case for
preference learning methods as the appropriate approach for
treating ordinal labels. We finally discuss the advantages of
relative annotation with respect to both reliability and validity
through a number of case studies in affective computing, and
address common objections to the use of ordinal data. Overall,
the thesis that emotions are by nature relative is supported by
both theoretical arguments and evidence, and opens new horizons
for the way emotions are viewed, represented and analyzed
computationally.
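The preference-learning approach this abstract argues for can be made concrete with a small sketch. The pairwise comparisons and the Bradley-Terry model below are illustrative assumptions for exposition, not material from the paper itself:

```python
import math

# Hypothetical data: each pair (i, j) records that clip i was judged
# more emotionally intense than clip j by an annotator.
pairs = [(0, 1), (0, 2), (1, 2), (0, 1), (1, 2)]

def bradley_terry(pairs, n_items, iters=200, lr=0.1):
    """Fit latent intensity scores from pairwise preferences by
    gradient ascent on the Bradley-Terry log-likelihood."""
    s = [0.0] * n_items
    for _ in range(iters):
        grad = [0.0] * n_items
        for i, j in pairs:
            p = 1.0 / (1.0 + math.exp(s[j] - s[i]))  # P(i beats j)
            grad[i] += 1.0 - p
            grad[j] -= 1.0 - p
        s = [si + lr * g for si, g in zip(s, grad)]
        m = sum(s) / n_items              # centre scores for identifiability
        s = [si - m for si in s]
    return s

scores = bradley_terry(pairs, n_items=3)
# The recovered ordering agrees with the preferences: item 0 > 1 > 2.
```

The point of the sketch is that only the ordering of the recovered scores is meaningful, which is exactly the ordinal stance the paper defends.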
Crowdsourcing a Word-Emotion Association Lexicon
Even though considerable attention has been given to the polarity of words
(positive and negative) and the creation of large polarity lexicons, research
in emotion analysis has had to rely on limited and small emotion lexicons. In
this paper we show how the combined strength and wisdom of the crowds can be
used to generate a large, high-quality, word-emotion and word-polarity
association lexicon quickly and inexpensively. We enumerate the challenges in
emotion annotation in a crowdsourcing scenario and propose solutions to address
them. Most notably, in addition to questions about emotions associated with
terms, we show how the inclusion of a word choice question can discourage
malicious data entry, help identify instances where the annotator may not be
familiar with the target term (allowing us to reject such annotations), and
help obtain annotations at sense level (rather than at word level). We
conducted experiments on how to formulate the emotion-annotation questions, and
show that asking if a term is associated with an emotion leads to markedly
higher inter-annotator agreement than that obtained by asking if a term evokes
an emotion.
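The quality-control idea described above can be sketched as follows. The worker identifiers, field names, and votes are invented for illustration; the paper's actual pipeline is not specified at this level of detail:

```python
# Hypothetical sketch: discard a judgement when the annotator fails the
# word-choice (comprehension) check, then aggregate the surviving
# emotion judgements for a term by strict majority vote.
annotations = [
    {"worker": "w1", "passed_check": True,  "joy": 1},
    {"worker": "w2", "passed_check": True,  "joy": 1},
    {"worker": "w3", "passed_check": False, "joy": 0},  # rejected annotator
    {"worker": "w4", "passed_check": True,  "joy": 0},
]

def aggregate(annotations, emotion):
    """Majority vote over annotators who passed the word-choice question."""
    votes = [a[emotion] for a in annotations if a["passed_check"]]
    return int(sum(votes) * 2 > len(votes))  # strict majority wins

label = aggregate(annotations, "joy")  # 2 of 3 surviving votes say "joy"
```

Filtering before aggregation is what lets a crowdsourced lexicon stay high-quality despite malicious or unfamiliar annotators.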
Unsupervised Extractive Summarization of Emotion Triggers
Understanding what leads to emotions during large-scale crises is important
as it can provide groundings for expressed emotions and subsequently improve
the understanding of ongoing disasters. Recent approaches trained supervised
models to both detect emotions and explain emotion triggers (events and
appraisals) via abstractive summarization. However, obtaining timely and
qualitative abstractive summaries is expensive and extremely time-consuming,
requiring highly-trained expert annotators. In time-sensitive, high-stakes
contexts, this can block necessary responses. We instead pursue unsupervised
systems that extract triggers from text. First, we introduce CovidET-EXT,
augmenting (Zhan et al. 2022)'s abstractive dataset (in the context of the
COVID-19 crisis) with extractive triggers. Second, we develop new unsupervised
learning models that can jointly detect emotions and summarize their triggers.
Our best approach, entitled Emotion-Aware Pagerank, incorporates emotion
information from external sources combined with a language understanding
module, and outperforms strong baselines. We release our data and code at
https://github.com/tsosea2/CovidET-EXT.
Comment: ACL 2023 Camera-Ready.
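The abstract does not specify how Emotion-Aware Pagerank is built, so the following is only a generic sketch of the underlying idea: personalized PageRank over a sentence-similarity graph whose teleport distribution is biased toward sentences containing emotion words. The lexicon, sentences, and weights are all invented:

```python
# Assumed tiny emotion lexicon and toy sentences, for illustration only.
emotion_words = {"worried", "afraid", "relieved"}

sentences = [
    "Hospitals are overwhelmed and cases rose this week",
    "I am worried and afraid because cases rose",
    "The bus schedule was published this week",
]

def similarity(a, b):
    """Jaccard overlap between the word sets of two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def emotion_aware_pagerank(sents, d=0.85, iters=50):
    n = len(sents)
    sim = [[similarity(a, b) if a != b else 0.0 for b in sents] for a in sents]
    # Teleport mass grows with the sentence's emotion-word count (smoothed).
    e = [1.0 + sum(w in emotion_words for w in s.lower().split()) for s in sents]
    z = sum(e)
    tele = [x / z for x in e]
    r = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for j in range(n):
            inflow = sum(r[i] * sim[i][j] / max(1e-9, sum(sim[i]))
                         for i in range(n))
            new.append((1 - d) * tele[j] + d * inflow)
        r = new
    return r

ranks = emotion_aware_pagerank(sentences)
# The "worried and afraid" sentence outranks the neutral schedule sentence.
```

Extracting the top-ranked sentences per predicted emotion then yields an unsupervised trigger summary, which is the general shape of the approach the abstract describes.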
A Novel Markovian Framework for Integrating Absolute and Relative Ordinal Emotion Information
There is growing interest in affective computing for the representation and
prediction of emotions along ordinal scales. However, the term ordinal emotion
label has been used to refer to both absolute notions such as low or high
arousal, as well as relational notions such as arousal is higher at one instance
compared to another. In this paper, we introduce the terminology absolute and
relative ordinal labels to make this distinction clear and investigate both
with a view to integrate them and exploit their complementary nature. We
propose a Markovian framework referred to as Dynamic Ordinal Markov Model
(DOMM) that makes use of both absolute and relative ordinal information, to
improve speech based ordinal emotion prediction. Finally, the proposed
framework is validated on two speech corpora commonly used in affective
computing, the RECOLA and the IEMOCAP databases, across a range of system
configurations. The results consistently indicate that integrating relative
ordinal information improves absolute ordinal emotion prediction.
Comment: This work has been submitted to IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
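The abstract does not detail DOMM itself, but the integration it describes can be illustrated with a toy Viterbi-style decoder whose per-frame scores come from an absolute ordinal model and whose transition scores reward agreement with relative ("went up / down / stayed") predictions. All numbers and names below are invented for the sketch:

```python
# Ordinal arousal levels, lowest to highest.
LEVELS = ["low", "mid", "high"]

# Assumed per-frame log-scores from an absolute ordinal model.
absolute = [
    [-0.2, -1.5, -2.0],   # frame 0: probably "low"
    [-1.0, -0.4, -1.2],   # frame 1: probably "mid"
    [-2.0, -1.0, -0.3],   # frame 2: probably "high"
]
# Assumed relative predictions between consecutive frames:
# +1 = arousal went up, -1 = down, 0 = unchanged.
relative = [+1, +1]
REL_WEIGHT = 0.5          # penalty for disagreeing with the relative model

def sign(x):
    return (x > 0) - (x < 0)

def decode(absolute, relative, w=REL_WEIGHT):
    """Viterbi decoding over ordinal levels, combining both label types."""
    n, k = len(absolute), len(LEVELS)
    score, back = list(absolute[0]), []
    for t in range(1, n):
        prev, score, bp = score, [], []
        for j in range(k):
            cand = [prev[i] + absolute[t][j]
                    + (0.0 if sign(j - i) == relative[t - 1] else -w)
                    for i in range(k)]
            best = max(range(k), key=cand.__getitem__)
            score.append(cand[best])
            bp.append(best)
        back.append(bp)
    j = max(range(k), key=score.__getitem__)
    path = [j]
    for bp in reversed(back):
        j = bp[j]
        path.append(j)
    return [LEVELS[i] for i in reversed(path)]

print(decode(absolute, relative))  # → ['low', 'mid', 'high']
```

Even this toy version shows the complementarity the paper exploits: the relative predictions discourage decodings that contradict the observed direction of change.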
The challenges of viewpoint-taking when learning a sign language: Data from the 'frog story' in British Sign Language
Little is known about how hearing adults learn sign languages. Our objective in this study was to investigate how learners of British Sign Language (BSL) produce narratives, and we focused in particular on viewpoint-taking. Twenty-three intermediate-level learners of BSL and 10 deaf native/early signers produced a narrative in BSL using the wordless picture book Frog, where are you? (Mayer, 1969). We selected specific episodes from part of the book that provided rich opportunities for shifting between different characters and taking on different viewpoints. We coded for details of story content, the frequency with which different viewpoints were used and how long those viewpoints were used for, and the numbers of articulators that were used simultaneously. We found that even though learners’ and deaf signers’ narratives did not differ in overall duration, learners’ narratives had less content. Learners used character viewpoint less frequently than deaf signers. Although learners spent just as long as deaf signers in character viewpoint, they spent longer than deaf signers in observer viewpoint. Together, these findings suggest that character viewpoint was harder than observer viewpoint for learners. Furthermore, learners were less skilled than deaf signers in using multiple articulators simultaneously. We conclude that challenges for learners of sign include taking character viewpoint when narrating a story and encoding information across multiple articulators simultaneously.
Neural correlates of emotion word processing: the complex relation between emotional valence and arousal
Poster Session 1: no. 2. The Conference's website is located at http://events.unitn.it/en/psb2010.
Emotion is characterised by a two-dimensional structure: valence describes the extent to which an emotion is positive or negative, whereas arousal refers to the intensity of an emotion, how exciting or calming it is. Emotional content of verbal material influences cognitive processing during lexical decision, naming, emotional Stroop task and many others.
Converging findings showed that emotionally valenced words (positive or negative) are processed faster than neutral words, as shown by reaction time and ERP measures, suggesting a prioritisation of emotional …
Comparing the utility of different classification schemes for emotive language analysis
In this paper we investigated the utility of different classification schemes for emotive language analysis with the aim of providing experimental justification for the choice of scheme for classifying emotions in free text. We compared six schemes: (1) Ekman's six basic emotions, (2) Plutchik's wheel of emotion, (3) Watson and Tellegen's Circumplex theory of affect, (4) the Emotion Annotation Representation Language (EARL), (5) WordNet–Affect, and (6) free text. To measure their utility, we investigated their ease of use by human annotators as well as the performance of supervised machine learning. We assembled a corpus of 500 emotionally charged text documents. The corpus was annotated manually using an online crowdsourcing platform with five independent annotators per document. Assuming that classification schemes with a better balance between completeness and complexity are easier to interpret and use, we expect such schemes to be associated with higher inter–annotator agreement. We used Krippendorff's alpha coefficient to measure inter–annotator agreement, according to which the six classification schemes were ranked as follows: (1) six basic emotions (α = 0.483), (2) wheel of emotion (α = 0.410), (3) Circumplex (α = 0.312), (4) EARL (α = 0.286), (5) free text (α = 0.205), and (6) WordNet–Affect (α = 0.202). However, correspondence analysis of annotations across the schemes highlighted that basic emotions are oversimplified representations of complex phenomena and as such likely to lead to invalid interpretations, which are not necessarily reflected by high inter-annotator agreement. To complement the results of the quantitative analysis, we used semi–structured interviews to gain a qualitative insight into how annotators interacted with and interpreted the chosen schemes. The size of the classification scheme was highlighted as a significant factor affecting annotation.
In particular, the scheme of six basic emotions was perceived as having insufficient coverage of the emotion space, forcing annotators to often resort to inferior alternatives, e.g. using happiness as a surrogate for love. On the opposite end of the spectrum, large schemes such as WordNet–Affect were linked to choice fatigue, which incurred significant cognitive effort in choosing the best annotation. In the second part of the study, we used the annotated corpus to create six training datasets, one for each scheme. The training data were used in cross–validation experiments to evaluate classification performance in relation to different schemes. According to the F-measure, the classification schemes were ranked as follows: (1) six basic emotions (F = 0.410), (2) Circumplex (F = 0.341), (3) wheel of emotion (F = 0.293), (4) EARL (F = 0.254), (5) free text (F = 0.159) and (6) WordNet–Affect (F = 0.158). Not surprisingly, the smallest scheme was ranked the highest by both criteria. Therefore, out of the six schemes studied here, six basic emotions are best suited for emotive language analysis. However, both quantitative and qualitative analysis highlighted its major shortcoming: oversimplification of positive emotions, which are all conflated into happiness. Further investigation is needed into ways of better balancing positive and negative emotions.
Keywords: annotation, crowdsourcing, text classification, sentiment analysis, supervised machine learning
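The agreement statistic this study relies on can be computed directly. Krippendorff's alpha has several variants; the sketch below is the nominal-data version, and the toy labels are invented rather than taken from the study's corpus:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    units: one inner list per annotated item, containing one category
    label per annotator (inner lists may have different lengths)."""
    o = Counter()                         # coincidence matrix o[(c, k)]
    for labels in units:
        m = len(labels)
        if m < 2:
            continue                      # items with one label carry no info
        for c, k in permutations(labels, 2):
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()                       # marginal totals per category
    for (c, _k), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())
    d_o = sum(v for (c, k), v in o.items() if c != k)      # observed disagreement
    d_e = sum(n_c[c] * n_c[k]                              # expected disagreement
              for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_o / d_e

# Perfect agreement between two annotators on two items:
print(krippendorff_alpha_nominal([["joy", "joy"], ["fear", "fear"]]))  # → 1.0
```

Unlike raw percent agreement, alpha corrects for chance via the expected-disagreement term, which is why it is the usual choice for comparing annotation schemes of different sizes.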
Typography Design’s New Trajectory Towards Visual Literacy for Digital Mediums
Typographic elements have a huge impact on how designed mediums affect visual literacy. This paper reviews the principles, perspectives and approaches in the production and function of typographic design in visual media (print and digital), with the aim of understanding the relationship between typographic design and digital mediums, and of examining its influence on people’s ability to communicate ideas, meaning and messages effectively, while reflecting on its commercial implications for brands and marketers. Research via a survey and a focus group implied there is a positive association between literacy and the application of graphic design and typography on digital communication mediums. Findings revealed that type design complements textual word elements to enhance cognition and understanding of messages. The integration of visuals and text facilitates reading, and for digital mediums, both legible layout and engaging typefaces are equally crucial. Graphic typefaces for digital media, from smartphones to e-texts for learning, should apply visual hierarchy arrangement to achieve these objectives. Findings show typographic design is an essential aspect of social communication today, and digital designers play a fundamental role in enabling audiences to improve their economic and social participation and gain its full advantages.