The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants
Reasoning is a crucial part of natural language argumentation. To comprehend
an argument, one must analyze its warrant, which explains why its claim follows
from its premises. As arguments are highly contextualized, warrants are usually
presupposed and left implicit. Thus, comprehension requires not only language
understanding and logical skills but also common sense. In
this paper we develop a methodology for reconstructing warrants systematically.
We operationalize it in a scalable crowdsourcing process, resulting in a freely
licensed dataset with warrants for 2k authentic arguments from news comments.
On this basis, we present a new challenging task, the argument reasoning
comprehension task. Given an argument with a claim and a premise, the goal is
to choose the correct implicit warrant from two options. Both warrants are
plausible and lexically close, but lead to contradictory claims. Solving this
task would mark a substantial step towards automatic warrant reconstruction.
However, experiments with several neural attention and language models reveal
that current approaches do not suffice.
Comment: Accepted as NAACL 2018 Long Paper; see details on the front page.
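The two-alternative format described above can be illustrated with a small instance. Everything below is invented for illustration; the field names and sentences are assumptions, not drawn from the released dataset. The sketch also shows why the task is hard: a shallow lexical-overlap cue cannot separate two lexically close warrants.

```python
import string

# Hypothetical instance of the argument reasoning comprehension task.
# Field names and sentences are invented for illustration only.
instance = {
    "claim":    "Comment sections should be moderated.",
    "reason":   "Unmoderated sections fill up with abusive posts.",
    "warrant0": "Abusive posts drive constructive commenters away.",
    "warrant1": "Abusive posts draw constructive commenters in.",
    "label": 0,  # index of the warrant under which the claim follows
}

def tokens(text):
    """Lowercase, punctuation-stripped token set."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def overlap(warrant, context):
    """Shallow cue: word overlap between a warrant and claim + reason."""
    return len(tokens(warrant) & tokens(context))

context = instance["claim"] + " " + instance["reason"]
scores = [overlap(instance["warrant0"], context),
          overlap(instance["warrant1"], context)]
# The two warrants are plausible and lexically close, so the shallow
# overlap cue ties and cannot identify the correct one.
print(scores)
```

Because both warrants share nearly all content words, any purely lexical heuristic scores them identically; choosing correctly requires the common-sense reasoning the abstract points to.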
SNU_IDS at SemEval-2018 Task 12: Sentence Encoder with Contextualized Vectors for Argument Reasoning Comprehension
We present a novel neural architecture for the Argument Reasoning
Comprehension task of SemEval 2018. It is a simple neural network consisting of
three parts, collectively judging whether the logic built on a set of given
sentences (a claim, reason, and warrant) is plausible or not. The model
utilizes contextualized word vectors pre-trained on large machine translation
(MT) datasets as a form of transfer learning, which can help to mitigate the
lack of training data. Quantitative analysis shows that simply leveraging LSTMs
trained on MT datasets outperforms several baselines and non-transferred
models, achieving accuracies of about 70% on the development set and about 60%
on the test set.
Comment: SemEval 2018.
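The three-part architecture described above can be sketched minimally. This is only an untrained stand-in: random word vectors replace the MT-pretrained contextualized embeddings, mean pooling replaces the LSTM encoder, and the feed-forward judge has arbitrary weights; none of this reproduces the actual SNU_IDS system.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
VOCAB = {}  # word -> random vector, a stand-in for MT-pretrained embeddings

def embed(word):
    if word not in VOCAB:
        VOCAB[word] = rng.standard_normal(DIM)
    return VOCAB[word]

def encode(sentence):
    """Mean of word vectors -- a stand-in for the LSTM sentence encoder."""
    return np.mean([embed(w) for w in sentence.lower().split()], axis=0)

# Untrained feed-forward judge over the concatenated
# [claim; reason; warrant] representation.
W = rng.standard_normal(3 * DIM) * 0.1

def plausibility(claim, reason, warrant):
    """Sigmoid score in (0, 1): is the logic built on the triple plausible?"""
    x = np.concatenate([encode(claim), encode(reason), encode(warrant)])
    return 1.0 / (1.0 + np.exp(-W @ x))

p = plausibility("Comment sections should be moderated.",
                 "Unmoderated sections fill up with abusive posts.",
                 "Abusive posts drive constructive commenters away.")
print(round(p, 3))
```

The design point the abstract makes is that the sentence encoder, not the judge, carries most of the weight: swapping the random embeddings for vectors pre-trained on large MT corpora is what mitigates the small training set.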
Implicit Argument Prediction as Reading Comprehension
Implicit arguments, which cannot be detected solely through syntactic cues,
make it harder to extract predicate-argument tuples. We present a new model for
implicit argument prediction that draws on reading comprehension, casting the
predicate-argument tuple with the missing argument as a query. We also draw on
pointer networks and multi-hop computation. Our model shows good performance on
an argument cloze task as well as on a nominal implicit argument prediction
task.
Comment: Accepted at AAAI 2018.
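The query construction described above can be sketched as follows. The tuple layout, role names, and blank token are assumptions made for illustration, not the paper's exact scheme.

```python
# Hedged sketch: cast a predicate-argument tuple with a missing argument
# as a cloze-style reading comprehension query.

def make_query(predicate, args, missing_role):
    """Render the tuple as a query string with the missing argument blanked."""
    parts = [f"{role}={val}" for role, val in args.items()]
    parts.append(f"{missing_role}=___")
    return f"{predicate}({', '.join(parts)})"

query = make_query("sell", {"arg0": "the company"}, "arg1")
print(query)  # sell(arg0=the company, arg1=___)
```

A model in this framing reads the surrounding document and fills the blank, which is what lets reading-comprehension machinery (pointer networks, multi-hop computation) apply to implicit argument prediction.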
A Retrospective Analysis of the Fake News Challenge Stance Detection Task
The 2017 Fake News Challenge Stage 1 (FNC-1) shared task addressed a stance
classification task as a crucial first step towards detecting fake news. To
date, there is no in-depth analysis paper to critically discuss FNC-1's
experimental setup, reproduce the results, and draw conclusions for
next-generation stance classification methods. In this paper, we provide such
an in-depth analysis for the three top-performing systems. We first find that
FNC-1's proposed evaluation metric favors the majority class, which can be
easily classified, and thus overestimates the true discriminative power of the
methods. Therefore, we propose a new F1-based metric yielding a changed system
ranking. Next, we compare the features and architectures used, which leads to a
novel feature-rich stacked LSTM model that performs on par with the best
systems, but is superior in predicting minority classes. To understand the
methods' ability to generalize, we derive a new dataset and perform both
in-domain and cross-domain experiments. Our qualitative and quantitative study
helps interpret the original FNC-1 scores and clarifies which features improve
performance and why. Our new dataset and all source code used during the
reproduction study are publicly available for future research.
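The majority-class effect described above can be sketched numerically. The 0.25/0.75 weighting follows the published FNC-1 scorer; the class proportions below are synthetic, chosen only to mimic the skew of the data (roughly three quarters unrelated), not taken from the actual corpus.

```python
RELATED = {"agree", "disagree", "discuss"}

def fnc1_score(gold, pred):
    """FNC-1 weighted score: 0.25 for the related/unrelated decision,
    plus 0.75 for the exact stance when the pair is related."""
    score = 0.0
    for g, p in zip(gold, pred):
        if (g in RELATED) == (p in RELATED):
            score += 0.25
        if g in RELATED and g == p:
            score += 0.75
    return score

def max_fnc1_score(gold):
    return sum(1.0 if g in RELATED else 0.25 for g in gold)

def macro_f1(gold, pred, labels):
    f1s = []
    for lab in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Synthetic labels mimicking the class skew; a majority-class baseline.
gold = ["unrelated"] * 73 + ["discuss"] * 18 + ["agree"] * 7 + ["disagree"] * 2
pred = ["unrelated"] * len(gold)

rel = fnc1_score(gold, pred) / max_fnc1_score(gold)
mf1 = macro_f1(gold, pred, ["agree", "disagree", "discuss", "unrelated"])
print(f"relative FNC-1 score: {rel:.3f}, macro-F1: {mf1:.3f}")
```

Predicting only the easy majority class already earns about 40% of the maximum weighted score, while macro-F1 stays near 0.21, which is the sense in which the weighted metric overestimates discriminative power.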