Computing the Affective-Aesthetic Potential of Literary Texts
In this paper, we compute the affective-aesthetic potential (AAP) of literary texts using a simple sentiment analysis tool called SentiArt. In contrast to other established tools, SentiArt is based on publicly available vector space models (VSMs) and requires no emotional dictionary, making it applicable in any language for which VSMs are available (>150 so far) and avoiding issues of low coverage. In a first study, the AAP values of all words of a widely used lexical databank for German were computed, demonstrating the VSM's ability to represent concrete and more abstract semantic concepts. In a second study, SentiArt was used to predict ~2800 human word valence ratings and shown to have high predictive accuracy (R2 > 0.5, p < 0.0001). A third study tested the validity of SentiArt in predicting emotional states over (narrative) time using human liking ratings collected while reading a story. Again, the predictive accuracy was highly significant (adjusted R2 = 0.46, p < 0.0001), establishing SentiArt as a promising candidate for lexical sentiment analyses at both the micro- and macrolevels, i.e., of short and long literary materials. Possibilities and limitations of lexical VSM-based sentiment analyses of diverse complex literary texts are discussed in light of these results.
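The abstract does not spell out SentiArt's scoring formula, but the core idea of VSM-based, dictionary-free sentiment scoring can be sketched as follows: score a word by its vector-space similarity to a small set of positive label words minus its similarity to negative label words. The function name `aap_score`, the toy 3-d "embeddings", and the choice of label words are illustrative assumptions, not the published method; in practice the vectors would come from a pretrained VSM.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def aap_score(word_vec, pos_label_vecs, neg_label_vecs):
    """Score a word as mean similarity to positive label words
    minus mean similarity to negative label words."""
    pos = np.mean([cosine(word_vec, v) for v in pos_label_vecs])
    neg = np.mean([cosine(word_vec, v) for v in neg_label_vecs])
    return pos - neg

# Toy 3-d "embeddings"; a real VSM would supply hundreds of dimensions.
vsm = {
    "happy":    np.array([0.9, 0.1, 0.0]),
    "joyful":   np.array([0.8, 0.2, 0.1]),
    "sad":      np.array([0.1, 0.9, 0.0]),
    "gloomy":   np.array([0.2, 0.8, 0.1]),
    "sunshine": np.array([0.7, 0.1, 0.3]),
}
pos_labels = [vsm["happy"], vsm["joyful"]]
neg_labels = [vsm["sad"], vsm["gloomy"]]
score = aap_score(vsm["sunshine"], pos_labels, neg_labels)
print(round(score, 3))  # positive: "sunshine" lies nearer the positive labels
```

Because only label words and a VSM are needed, the same scoring applies unchanged to any language with a published VSM, which is the coverage advantage the abstract describes.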
Crowdsourcing a Word-Emotion Association Lexicon
Even though considerable attention has been given to the polarity of words
(positive and negative) and the creation of large polarity lexicons, research
in emotion analysis has had to rely on limited and small emotion lexicons. In
this paper we show how the combined strength and wisdom of the crowds can be
used to generate a large, high-quality, word-emotion and word-polarity
association lexicon quickly and inexpensively. We enumerate the challenges in
emotion annotation in a crowdsourcing scenario and propose solutions to address
them. Most notably, in addition to questions about emotions associated with
terms, we show how the inclusion of a word choice question can discourage
malicious data entry, help identify instances where the annotator may not be
familiar with the target term (allowing us to reject such annotations), and
help obtain annotations at sense level (rather than at word level). We
conducted experiments on how to formulate the emotion-annotation questions, and
show that asking if a term is associated with an emotion leads to markedly
higher inter-annotator agreement than that obtained by asking if a term evokes
an emotion.
An Empirical Analysis of the Role of Amplifiers, Downtoners, and Negations in Emotion Classification in Microblogs
The effect of amplifiers, downtoners, and negations has been studied in
general and particularly in the context of sentiment analysis. However, there
is only limited work which aims at transferring the results and methods to
discrete classes of emotions, e. g., joy, anger, fear, sadness, surprise, and
disgust. For instance, it is not straight-forward to interpret which emotion
the phrase "not happy" expresses. With this paper, we aim at obtaining a better
understanding of such modifiers in the context of emotion-bearing words and
their impact on document-level emotion classification, namely, microposts on
Twitter. We select an appropriate scope detection method for modifiers of
emotion words, incorporate it in a document-level emotion classification model
as additional bag of words and show that this approach improves the performance
of emotion classification. In addition, we build a term weighting approach
based on the different modifiers into a lexical model for the analysis of the
semantics of modifiers and their impact on emotion meaning. We show that
amplifiers separate emotions expressed with an emotion-bearing word more
clearly from other secondary connotations. Downtoners have the opposite effect.
In addition, we discuss the meaning of negations of emotion-bearing words. For
instance, we show empirically that "not happy" is closer to sadness than to
anger and that fear-expressing words in the scope of downtoners often express
surprise.
Comment: Accepted for publication at The 5th IEEE International Conference on Data Science and Advanced Analytics (DSAA), https://dsaa2018.isi.it
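The abstract's idea of feeding modifier scopes into a document-level classifier "as additional bag of words" can be sketched by rewriting tokens in a modifier's scope into distinct features (e.g. "NOT_happy"), so a bag-of-words model can weight them separately from the bare emotion word. The function name, the tiny modifier lists, and the one-token scope rule are illustrative assumptions; the paper selects a proper scope detection method.

```python
def mark_modifier_scope(tokens,
                        negations=frozenset({"not", "never", "no"}),
                        amplifiers=frozenset({"very", "really"}),
                        downtoners=frozenset({"slightly", "somewhat"})):
    """Prefix tokens in the scope of a modifier so a bag-of-words model
    sees e.g. 'NOT_happy' as a feature distinct from 'happy'.
    Scope here is simply the next token -- a deliberately crude stand-in
    for a real scope detection method. The modifier token itself is
    consumed into the prefix."""
    out = []
    prefix = None
    for tok in tokens:
        low = tok.lower()
        if low in negations:
            prefix = "NOT_"
        elif low in amplifiers:
            prefix = "AMP_"
        elif low in downtoners:
            prefix = "DOWN_"
        elif prefix:
            out.append(prefix + low)
            prefix = None
        else:
            out.append(low)
    return out

print(mark_modifier_scope("I am not happy today".split()))
# ['i', 'am', 'NOT_happy', 'today']
```

With features like `NOT_happy`, `AMP_happy`, and `DOWN_happy` kept apart, a downstream classifier or term-weighting scheme can learn, as the paper reports, that negated and downtoned emotion words shift toward other emotion classes rather than simply inverting.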
Generating Music from Literature
We present a system, TransProse, that automatically generates musical pieces
from text. TransProse uses known relations between elements of music such as
tempo and scale, and the emotions they evoke. Further, it uses a novel
mechanism to determine sequences of notes that capture the emotional activity
in the text. The work has applications in information visualization, in
creating audio-visual e-books, and in developing music apps.
Word Affect Intensities
Words often convey affect -- emotions, feelings, and attitudes. Lexicons of
word-affect association have applications in automatic emotion analysis and
natural language generation. However, existing lexicons indicate only coarse
categories of affect association. Here, for the first time, we create an affect
intensity lexicon with real-valued scores of association. We use a technique
called best-worst scaling that improves annotation consistency and obtains
reliable fine-grained scores. The lexicon includes both terms common in
general English and terms specific to social media communication. It has
close to 6,000 entries for four basic emotions. We will be adding entries
for other affect dimensions shortly.
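Best-worst scaling, the annotation technique named above, asks annotators to pick the most and least intense item from a small tuple of terms; real-valued scores then fall out of simple counting. A common scoring rule (an assumption here, as the abstract does not state the exact formula) is (#times best − #times worst) / #times the item appeared, which yields intensities in [-1, 1]. The function name and toy annotations below are illustrative.

```python
from collections import Counter

def best_worst_scores(annotations):
    """Each annotation is (tuple_of_items, best_item, worst_item).
    An item's score is (#best - #worst) / #appearances, in [-1, 1]."""
    best, worst, seen = Counter(), Counter(), Counter()
    for items, b, w in annotations:
        seen.update(items)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}

# Toy annotations for fear-intensity tuples (illustrative, not NRC data).
anns = [
    (("terrified", "scared", "uneasy", "calm"), "terrified", "calm"),
    (("terrified", "nervous", "uneasy", "calm"), "terrified", "calm"),
    (("scared", "nervous", "uneasy", "calm"), "scared", "calm"),
]
scores = best_worst_scores(anns)
# "terrified" ranks highest, "calm" lowest
```

Because every judgement is a forced comparison rather than an absolute rating, repeated annotations converge on consistent fine-grained rankings, which is the consistency benefit the abstract claims.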
Towards a Corpus of Historical German Plays with Emotion Annotations
In this paper, we present first work-in-progress annotation results of a project investigating computational methods of emotion analysis for historical German plays around 1800. We report on the development of an annotation scheme focussing on the annotation of emotions that are important from a literary studies perspective for this time span, as well as on the annotation process we have developed. We annotate emotions expressed or attributed by characters of the plays in the written texts. The scheme consists of 13 hierarchically structured emotion concepts as well as the source (who experiences or attributes the emotion) and target (who or what the emotion is directed towards). We have conducted the annotation of five example plays of our corpus with two annotators per play and report on annotation distributions and agreement statistics. We were able to collect over 6,500 emotion annotations and identified fair agreement for most concepts, around a κ-value of 0.4. We discuss how we plan to improve annotator consistency and continue our work. The results also have implications for similar projects in the context of Digital Humanities.