Parsing Argumentation Structures in Persuasive Essays
In this article, we present a novel approach for parsing argumentation
structures. We identify argument components using sequence labeling at the
token level and apply a new joint model for detecting argumentation structures.
The proposed model globally optimizes argument component types and
argumentative relations using integer linear programming. We show that our
model considerably improves the performance of base classifiers and
significantly outperforms challenging heuristic baselines. Moreover, we
introduce a novel corpus of persuasive essays annotated with argumentation
structures. We show that our annotation scheme and annotation guidelines
successfully guide human annotators to substantial agreement. This corpus and
the annotation guidelines are freely available for ensuring reproducibility and
to encourage future research in computational argumentation.
Comment: Under review in Computational Linguistics. First submission: 26 October 2015. Revised submission: 15 July 201
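The global optimization idea described in this abstract can be illustrated on a toy example. The following is only a minimal sketch, not the paper's actual ILP formulation: instead of integer linear programming it exhaustively enumerates type assignments for three hypothetical argument components, and all scores plus the single "each premise supports exactly one other component" constraint are invented for illustration.

```python
# Illustrative sketch of joint decoding over argument component types and
# argumentative relations. The paper uses integer linear programming; for a
# toy-sized problem, brute-force enumeration shows the same idea of picking
# the globally best consistent structure rather than greedy local decisions.
from itertools import product

# Hypothetical classifier scores (log-probabilities) for 3 components.
type_scores = {
    0: {"claim": -0.2, "premise": -1.8},
    1: {"claim": -1.5, "premise": -0.3},
    2: {"claim": -1.2, "premise": -0.4},
}
# Hypothetical scores for a directed "supports" relation (source, target).
rel_scores = {(1, 0): -0.2, (2, 0): -0.4, (2, 1): -1.0, (1, 2): -1.3}

def joint_decode(type_scores, rel_scores):
    comps = sorted(type_scores)
    best, best_score = None, float("-inf")
    for types in product(["claim", "premise"], repeat=len(comps)):
        # Simplified stand-in for the ILP constraints: each premise must
        # support exactly one other component via a scored relation.
        score = sum(type_scores[c][t] for c, t in zip(comps, types))
        relations, ok = [], True
        for c, t in zip(comps, types):
            if t != "premise":
                continue
            cands = [(rel_scores[(c, o)], o) for o in comps
                     if o != c and (c, o) in rel_scores]
            if not cands:
                ok = False
                break
            s, o = max(cands)
            score += s
            relations.append((c, o))
        if ok and score > best_score:
            best, best_score = (dict(zip(comps, types)), relations), score
    return best

types, relations = joint_decode(type_scores, rel_scores)
print(types)      # → {0: 'claim', 1: 'premise', 2: 'premise'}
print(relations)  # → [(1, 0), (2, 0)]
```

With these toy scores, the jointly best structure labels component 0 a claim supported by two premises, even though no single decision is forced locally; this is the benefit of global optimization over independent base classifiers.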
What's Hard in English RST Parsing? Predictive Models for Error Analysis
Despite recent advances in Natural Language Processing (NLP), hierarchical
discourse parsing in the framework of Rhetorical Structure Theory remains
challenging, and our understanding of the reasons for this is still limited.
In this paper, we examine and model some of the factors associated with parsing
difficulties in previous work: the existence of implicit discourse relations,
challenges in identifying long-distance relations, out-of-vocabulary items, and
more. In order to assess the relative importance of these variables, we also
release two annotated English test-sets with explicit correct and distracting
discourse markers associated with gold standard RST relations. Our results show
that as in shallow discourse parsing, the explicit/implicit distinction plays a
role, but that long-distance dependencies are the main challenge, while lack of
lexical overlap is less of a problem, at least for in-domain parsing. Our final
model is able to predict where errors will occur with an accuracy of 76.3% for
the bottom-up parser and 76.6% for the top-down parser.
Comment: SIGDIAL 2023 camera-ready; 12 pages
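The error-prediction idea in this abstract can be sketched as a simple logistic model over relation-level features. This is only an illustration, not the paper's model: the feature names and weights below are invented, chosen merely to mirror the reported finding that long-distance dependencies matter most, the explicit/implicit distinction matters somewhat, and lack of lexical overlap matters little.

```python
# Illustrative sketch: score the probability that an RST parser mislabels
# a relation, given binary difficulty features. Weights are hand-set for
# illustration; the paper fits its predictive models to parser output.
import math

WEIGHTS = {"long_distance": 2.0, "implicit": 0.8, "no_lexical_overlap": 0.2}
BIAS = -1.5  # hypothetical base log-odds of an error

def p_error(features):
    """Logistic probability of a parsing error for one relation."""
    z = BIAS + sum(WEIGHTS[f] for f in features)
    return 1 / (1 + math.exp(-z))

easy = p_error({"no_lexical_overlap"})         # short, explicit relation
hard = p_error({"long_distance", "implicit"})  # long, implicit relation
print(f"easy={easy:.2f} hard={hard:.2f}")      # → easy=0.21 hard=0.79
```

Under these assumed weights, a long-distance implicit relation is predicted to be far riskier than a short explicit one with low lexical overlap, matching the relative importance the abstract reports.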