Measuring Emotions in the COVID-19 Real World Worry Dataset
The COVID-19 pandemic is having a dramatic impact on societies and economies
around the world. With various measures of lockdowns and social distancing in
place, it becomes important to understand emotional responses on a large scale.
In this paper, we present the first ground truth dataset of emotional responses
to COVID-19. We asked participants to indicate their emotions and express these
in text. This resulted in the Real World Worry Dataset of 5,000 texts (2,500
short and 2,500 long). Our analyses suggest that emotional responses
correlate with linguistic measures. Topic modeling further revealed that
people in the UK worry about their family and the economic situation.
Tweet-sized texts functioned as a call for solidarity, while longer texts shed
light on worries and concerns. Using predictive modeling approaches, we were
able to approximate the emotional responses of participants from text within
14% of their actual value. We encourage others to use the dataset and improve
how we can use automated methods to learn about emotional responses and worries
about an urgent problem.
Comment: Accepted to the ACL 2020 COVID-19 workshop
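To make the predictive-modeling step concrete, here is a minimal sketch, not the authors' pipeline: it approximates a continuous self-reported emotion rating from text using TF-IDF features and ridge regression. The tiny corpus, the ratings, and the model choice are all hypothetical placeholders.

```python
# A minimal sketch (assumed setup, not the paper's method): predict a
# continuous worry rating from raw text with TF-IDF + ridge regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline

# Placeholder corpus and placeholder self-reported ratings (e.g., a 1-9 scale).
texts = [
    "I am so worried about my parents and my job.",
    "Stay home and look after each other, we will get through this.",
    "The economic situation keeps me awake at night.",
    "Feeling calm today, just following the guidance.",
]
worry = [8.0, 5.0, 9.0, 3.0]

# Fit a simple text-regression pipeline: word/bigram TF-IDF into ridge.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(texts, worry)

# In-sample error only; a real evaluation would use held-out data.
preds = model.predict(texts)
print(f"MAE: {mean_absolute_error(worry, preds):.2f}")
```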
Adversarial Infidelity Learning for Model Interpretation
Model interpretation is essential in data mining and knowledge discovery. It
can help understand the intrinsic model working mechanism and check if the
model has undesired characteristics. A popular way of performing model
interpretation is Instance-wise Feature Selection (IFS), which provides an
importance score of each feature representing the data samples to explain how
the model generates the specific output. In this paper, we propose a
Model-agnostic Effective Efficient Direct (MEED) IFS framework for model
interpretation, mitigating concerns about sanity, combinatorial shortcuts,
model identifiability, and information transmission. Also, we focus on the
following setting: using selected features to directly predict the output of
the given model, which serves as a primary evaluation metric for
model-interpretation methods. Beyond the features themselves, we feed the
given model's output to the explainer as an additional input, so the
explainer learns from more accurate information. To train the explainer,
besides fidelity, we propose an
Adversarial Infidelity Learning (AIL) mechanism to boost the explanation
learning by screening relatively unimportant features. Through theoretical and
experimental analysis, we show that our AIL mechanism can help learn the
desired conditional distribution between selected features and targets.
Moreover, we extend our framework by integrating efficient interpretation
methods as proper priors to provide a warm start. Comprehensive empirical
evaluation results are provided by quantitative metrics and human evaluation to
demonstrate the effectiveness and superiority of our proposed method. Our code
is publicly available online at https://github.com/langlrsw/MEED.
Comment: 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
(KDD '20), August 23–27, 2020, Virtual Event, US
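To illustrate the evaluation setting the abstract describes (selected features directly predicting the given model's output), here is a minimal sketch, not the MEED implementation: a toy instance-wise feature selection in which the top-k features per instance are kept and infidelity is measured by how poorly they alone reproduce the model's output. The linear "black box" and the gradient-times-input scorer are hypothetical stand-ins.

```python
# A minimal sketch (assumed setup, not MEED): instance-wise feature selection
# scored by how well the selected features alone reproduce the model output.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, k = 100, 10, 3

# Toy "black box" to be interpreted: a fixed linear model.
W = rng.normal(size=n_features)

def black_box(X):
    return X @ W

X = rng.normal(size=(n_samples, n_features))
y_full = black_box(X)

# Instance-wise importance scores; for a linear model, |w_j * x_ij|
# (gradient times input) is an exact attribution.
scores = np.abs(X * W)

# Keep the top-k features per instance, zero out the rest (a common IFS mask).
mask = np.zeros_like(X)
top_k = np.argsort(-scores, axis=1)[:, :k]
np.put_along_axis(mask, top_k, 1.0, axis=1)

# Fidelity check: re-query the model on the masked inputs and compare.
y_masked = black_box(X * mask)
infidelity = np.mean((y_full - y_masked) ** 2)
print(f"Mean squared infidelity with top-{k} features: {infidelity:.4f}")
```

In this toy setting the mask is computed in closed form; the paper's framework instead learns the explainer, using an adversarial objective to screen out relatively unimportant features.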