Modeling Empathy and Distress in Reaction to News Stories
Computational detection and understanding of empathy is an important factor
in advancing human-computer interaction. Yet to date, text-based empathy
prediction has the following major limitations: It underestimates the
psychological complexity of the phenomenon, adheres to a weak notion of ground
truth where empathic states are ascribed by third parties, and lacks a shared
corpus. In contrast, this contribution presents the first publicly available
gold standard for empathy prediction. It is constructed using a novel
annotation methodology which reliably captures empathy assessments by the
writer of a statement using multi-item scales. This is also the first
computational work distinguishing between two forms of empathy recognized
throughout psychology: empathic concern and personal distress. Finally,
we present experimental results for three different predictive models, of which
a CNN performs the best.
Comment: To appear at EMNLP 201
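As a rough illustration of the kind of CNN-based predictor the abstract mentions, here is a minimal sketch of a convolutional text regressor that maps a statement to a scalar empathy score. All names, dimensions, the toy vocabulary, and the random weights below are hypothetical; the paper's actual architecture and trained parameters are not given here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and random word embeddings (hypothetical; a real system
# would use pretrained embeddings over the full corpus vocabulary).
vocab = {"the": 0, "story": 1, "was": 2, "heartbreaking": 3, "boring": 4}
EMB_DIM, KERNEL, FILTERS = 8, 3, 4
embeddings = rng.normal(size=(len(vocab), EMB_DIM))

# Randomly initialized CNN parameters (would be learned from the gold
# standard by minimizing a regression loss against the multi-item scores).
conv_w = rng.normal(size=(FILTERS, KERNEL, EMB_DIM)) * 0.1
out_w = rng.normal(size=FILTERS) * 0.1

def predict_empathy(tokens):
    """1D convolution over token embeddings, max-over-time pooling,
    then a linear layer producing a single empathy score."""
    x = embeddings[[vocab[t] for t in tokens]]          # (seq_len, EMB_DIM)
    seq_len = x.shape[0]
    feats = []
    for f in range(FILTERS):
        acts = [np.sum(conv_w[f] * x[i:i + KERNEL])     # window dot product
                for i in range(seq_len - KERNEL + 1)]
        feats.append(max(np.tanh(a) for a in acts))     # max pooling
    return float(np.dot(out_w, feats))                  # scalar score

score = predict_empathy(["the", "story", "was", "heartbreaking"])
print(round(score, 4))
```

With trained weights, the same forward pass would be run separately for the empathic-concern and personal-distress scales, since the paper treats them as distinct targets.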
Mitigating Gender Bias in Machine Learning Data Sets
Artificial Intelligence has the capacity to amplify and perpetuate societal
biases and presents profound ethical implications for society. Gender bias has
been identified in the context of employment advertising and recruitment tools,
due to their reliance on underlying language processing and recommendation
algorithms. Attempts to address such issues have involved testing learned
associations, integrating concepts of fairness to machine learning and
performing more rigorous analysis of training data. Mitigating bias when
algorithms are trained on textual data is particularly challenging given the
complex way gender ideology is embedded in language. This paper proposes a
framework for the identification of gender bias in training data for machine
learning. The work draws upon gender theory and sociolinguistics to
systematically indicate levels of bias in textual training data and associated
neural word embedding models, thus highlighting pathways for both removing bias
from training data and critically assessing its impact.
Comment: 10 pages, 5 figures, 5 tables. Presented at the Bias2020 workshop (as
part of the ECIR Conference) - http://bias.disim.univaq.i
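The kind of embedding-level bias audit the abstract alludes to can be sketched by projecting word vectors onto a gender direction defined by anchor words. The vectors and word list below are toy assumptions for illustration, not the paper's data or method; a real audit would load trained embeddings such as word2vec or GloVe:

```python
import numpy as np

# Toy 3-dimensional embedding table (hypothetical vectors).
emb = {
    "he":       np.array([1.0, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.0]),
    "engineer": np.array([0.7, 0.5, 0.2]),
    "nurse":    np.array([-0.6, 0.5, 0.3]),
    "table":    np.array([0.0, 0.2, 0.9]),
}

def unit(v):
    return v / np.linalg.norm(v)

# Gender direction: difference between gendered anchor words.
gender_dir = unit(emb["he"] - emb["she"])

def gender_projection(word):
    """Scalar projection of a word vector onto the gender direction;
    the sign indicates the gendered pole, the magnitude the strength
    of the association."""
    return float(np.dot(unit(emb[word]), gender_dir))

for w in ("engineer", "nurse", "table"):
    print(w, round(gender_projection(w), 3))
```

Occupation terms with large projections of opposite sign (here, "engineer" vs. "nurse") flag the gendered associations such a framework would surface in training data, while a neutral word like "table" projects to roughly zero.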