66 research outputs found
The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants
Reasoning is a crucial part of natural language argumentation. To comprehend
an argument, one must analyze its warrant, which explains why its claim follows
from its premises. As arguments are highly contextualized, warrants are usually
presupposed and left implicit. Thus, comprehension requires not only
language understanding and logical skills but also common sense. In
this paper we develop a methodology for reconstructing warrants systematically.
We operationalize it in a scalable crowdsourcing process, resulting in a freely
licensed dataset with warrants for 2k authentic arguments from news comments.
On this basis, we present a new challenging task, the argument reasoning
comprehension task. Given an argument with a claim and a premise, the goal is
to choose the correct implicit warrant from two options. Both warrants are
plausible and lexically close, but lead to contradicting claims. A solution to
this task will define a substantial step towards automatic warrant
reconstruction. However, experiments with several neural attention and language
models reveal that current approaches do not suffice.
Comment: Accepted as NAACL 2018 Long Paper; see details on the front page
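The task described above reduces to a binary choice between two candidate warrants for a given claim and premise. A minimal sketch of the instance format, together with a naive lexical-overlap baseline, is shown below; the field names are illustrative, not the official dataset schema, and the baseline is our own toy example, not one of the paper's models.

```python
from dataclasses import dataclass


@dataclass
class ArgumentInstance:
    claim: str
    premise: str
    warrant0: str
    warrant1: str
    label: int  # index (0 or 1) of the correct warrant


def overlap_baseline(inst: ArgumentInstance) -> int:
    """Pick the warrant that shares more words with the claim and premise.

    A deliberately weak baseline: both warrants in the task are lexically
    close, so surface overlap alone should not solve it.
    """
    context = set((inst.claim + " " + inst.premise).lower().split())
    score0 = len(context & set(inst.warrant0.lower().split()))
    score1 = len(context & set(inst.warrant1.lower().split()))
    return 0 if score0 >= score1 else 1
```

Because the two warrants are constructed to be plausible and lexically close, such shallow baselines are expected to perform near chance, which is the point of the task.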
No Word Embedding Model Is Perfect: Evaluating the Representation Accuracy for Social Bias in the Media
News articles both shape and reflect public opinion across the political
spectrum. Analyzing them for social bias can thus provide valuable insights,
such as prevailing stereotypes in society and the media, which are often
adopted by NLP models trained on respective data. Recent work has relied on
word embedding bias measures, such as WEAT. However, several representation
issues of embeddings can harm the measures' accuracy, including low-resource
settings and token frequency differences. In this work, we study what kind of
embedding algorithm serves best to accurately measure types of social bias
known to exist in US online news articles. To cover the whole spectrum of
political bias in the US, we collect 500k articles and review psychology
literature with respect to expected social bias. We then quantify social bias
using WEAT along with embedding algorithms that account for the aforementioned
issues. We compare how models trained with the algorithms on news articles
represent the expected social bias. Our results suggest that the standard way
to quantify bias does not align well with knowledge from psychology. While the
proposed algorithms reduce the gap, they still do not fully match the
literature.
Comment: Accepted to Findings of the Association for Computational
Linguistics: EMNLP 202
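The WEAT measure used above quantifies bias by comparing how strongly two target word sets associate with two attribute word sets in embedding space. A minimal sketch of the standard effect-size computation, operating on precomputed word vectors (the variable names are ours, and this omits the permutation significance test), could be:

```python
import numpy as np


def cos(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


def weat_effect_size(X, Y, A, B) -> float:
    """WEAT effect size for target vector sets X, Y and attribute sets A, B.

    s(w) measures how much closer w is to attribute set A than to B;
    the effect size is the standardized difference of mean s(w) over X vs. Y.
    """
    def s(w):
        return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)
```

A positive effect size indicates that the X targets lean toward attribute set A and the Y targets toward B; representation issues such as low token frequency distort the underlying vectors and hence this score, which is what the paper's alternative embedding algorithms try to mitigate.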
Mind the Gap: Automated Corpus Creation for Enthymeme Detection and Reconstruction in Learner Arguments
Writing strong arguments can be challenging for learners. It requires
selecting and arranging multiple argumentative discourse units (ADUs) in a
logical and coherent way, as well as deciding which ADUs to leave implicit,
so-called enthymemes. However, when important ADUs are missing, readers might
not be able
to follow the reasoning or understand the argument's main point. This paper
introduces two new tasks for learner arguments: to identify gaps in arguments
(enthymeme detection) and to fill such gaps (enthymeme reconstruction).
Approaches to both tasks may help learners improve their argument quality. We
study how corpora for these tasks can be created automatically by deleting ADUs
from an argumentative text that are central to the argument and its quality,
while maintaining the text's naturalness. Based on the ICLEv3 corpus of
argumentative learner essays, we create 40,089 argument instances for enthymeme
detection and reconstruction. Through manual studies, we provide evidence that
the proposed corpus creation process leads to the desired quality reduction,
and results in arguments that are similarly natural to those written by
learners. Finally, first baseline approaches to enthymeme detection and
reconstruction demonstrate the corpus' usefulness.
Comment: Accepted to Findings of EMNLP 202
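The corpus-creation idea above, deleting argumentatively central ADUs from a text to produce gapped instances, can be sketched as follows. This is a simplification of our own: the paper's actual process also scores argument quality and filters for naturalness, which this toy function ignores.

```python
from typing import List, Tuple


def build_gap_instances(
    adus: List[str], central: List[bool]
) -> List[Tuple[str, int, str]]:
    """For each ADU marked as central, create one enthymeme instance.

    Each instance pairs the text with that ADU deleted (input for enthymeme
    detection), the gap position (detection target), and the removed ADU
    itself (reconstruction target).
    """
    instances = []
    for i, adu in enumerate(adus):
        if central[i]:
            gapped = " ".join(adus[:i] + adus[i + 1:])
            instances.append((gapped, i, adu))
    return instances
```

Deleting a central ADU degrades the argument's quality by construction, while the surrounding ADUs remain intact, so the gapped text stays close to natural learner writing.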
Modeling Appropriate Language in Argumentation
Online discussion moderators must make ad hoc decisions about whether the
contributions of discussion participants are appropriate or should be removed
to maintain civility. Existing research on offensive language and the resulting
tools cover only one aspect among many involved in such decisions. The question
of what is considered appropriate in a controversial discussion has not yet
been systematically addressed. In this paper, we operationalize appropriate
language in argumentation for the first time. In particular, we model
appropriateness through the absence of flaws, grounded in research on argument
quality assessment, especially in aspects from rhetoric. From these, we derive
a new taxonomy of 14 dimensions that determine inappropriate language in online
discussions. Building on three argument quality corpora, we then create a
corpus of 2191 arguments annotated for the 14 dimensions. Empirical analyses
support that the taxonomy covers the concept of appropriateness
comprehensively, showing several plausible correlations with argument quality
dimensions. Moreover, results of baseline approaches to assessing
appropriateness suggest that all dimensions can be modeled computationally on
the corpus.
The Moral Debater: A Study on the Computational Generation of Morally Framed Arguments
An audience's prior beliefs and morals are strong indicators of how likely they are to be affected by a given argument. Utilizing such knowledge can help focus on shared values to bring disagreeing parties towards agreement. In argumentation technology, however, this is barely exploited so far. This paper studies the feasibility of automatically generating morally framed arguments as well as their effect on different audiences. Following moral foundations theory, we propose a system that effectively generates arguments focusing on different morals. In an in-depth user study, we ask liberals and conservatives to evaluate the impact of these arguments. Our results suggest that, particularly when prior beliefs are challenged, an audience becomes more affected by morally framed arguments.
Analyzing the Persuasive Effect of Style in News Editorial Argumentation
News editorials argue about political issues in order to challenge or reinforce the stance of readers with different ideologies. Previous research has investigated such persuasive effects for argumentative content. In contrast, this paper studies how important the style of news editorials is to achieve persuasion. To this end, we first compare content- and style-oriented classifiers on editorials from the liberal NYTimes with ideology-specific effect annotations. We find that conservative readers are resistant to NYTimes style, but for liberals, style has even more impact than content. Focusing on liberals, we then cluster the leads, bodies, and endings of editorials, in order to learn about the writing style patterns of effective argumentation.
What changed your mind: the roles of dynamic topics and discourse in argumentation process
In our world full of uncertainty, debates and argumentation contribute to the progress of science and society. Despite the increasing attention to characterizing human arguments, most progress made so far focuses on the debate outcome, largely ignoring the dynamic patterns in argumentation processes. This paper presents a study that automatically analyzes the key factors in argument persuasiveness, beyond simply predicting who will persuade whom. Specifically, we propose a novel neural model that dynamically tracks the changes of latent topics and discourse in argumentative conversations, allowing the investigation of their roles in influencing the outcome of persuasion. Extensive experiments have been conducted on argumentative conversations from both social media and the Supreme Court. The results show that our model outperforms state-of-the-art models in identifying persuasive arguments by explicitly exploring dynamic factors of topic and discourse. We further analyze the effects of topics and discourse on persuasiveness, and find that both are useful: topics provide concrete evidence, while superior discourse styles may bias participants, especially in social media arguments. In addition, we draw findings from our empirical results that will help people better engage in future persuasive conversations.