103 research outputs found
Identifying Justifications in Written Dialogs By Classifying Text as Argumentative
In written dialog, discourse participants need to justify claims they make, to convince the reader the claim is true and/or relevant to the discourse. This paper presents a new task (with an associated corpus), namely detecting such justifications. We investigate the nature of such justifications, and observe that the justifications themselves often contain discourse structure. We therefore develop a method to detect the existence of certain types of discourse relations, which helps us classify whether a segment is a justification or not. Our task is novel, and our work is novel in that it uses a large set of connectives (which we call indicators), and in that it uses a large set of discourse relations, without choosing among them.
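The indicator-based approach the abstract describes can be illustrated with a minimal sketch: binary features over a list of discourse connectives. The connective list and feature names below are illustrative assumptions, not the paper's actual indicator inventory.

```python
# Hypothetical sketch of connective ("indicator") features for
# justification detection; the list is illustrative, not the
# paper's actual inventory of indicators.
CONNECTIVES = ["because", "since", "therefore", "for example", "so that"]

def indicator_features(segment: str) -> dict:
    """Map a text segment to one binary feature per connective."""
    text = segment.lower()
    return {f"has_{c.replace(' ', '_')}": c in text for c in CONNECTIVES}

features = indicator_features("I support this because the data shows a clear trend.")
```

Such a feature dictionary would typically be fed to an off-the-shelf classifier (e.g. logistic regression) alongside other features.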
Argumentation Mining in User-Generated Web Discourse
The goal of argumentation mining, an evolving research field in computational
linguistics, is to design methods capable of analyzing people's argumentation.
In this article, we go beyond the state of the art in several ways. (i) We deal
with actual Web data and take up the challenges given by the variety of
registers, multiple domains, and unrestricted noisy user-generated Web
discourse. (ii) We bridge the gap between normative argumentation theories and
argumentation phenomena encountered in actual data by adapting an argumentation
model tested in an extensive annotation study. (iii) We create a new gold
standard corpus (90k tokens in 340 documents) and experiment with several
machine learning methods to identify argument components. We offer the data,
source codes, and annotation guidelines to the community under free licenses.
Our findings show that argumentation mining in user-generated Web discourse is
a feasible but challenging task.
Comment: Cite as: Habernal, I. & Gurevych, I. (2017). Argumentation Mining in User-Generated Web Discourse. Computational Linguistics 43(1), pp. 125-17
Parsing Argumentation Structures in Persuasive Essays
In this article, we present a novel approach for parsing argumentation
structures. We identify argument components using sequence labeling at the
token level and apply a new joint model for detecting argumentation structures.
The proposed model globally optimizes argument component types and
argumentative relations using integer linear programming. We show that our
model considerably improves the performance of base classifiers and
significantly outperforms challenging heuristic baselines. Moreover, we
introduce a novel corpus of persuasive essays annotated with argumentation
structures. We show that our annotation scheme and annotation guidelines
successfully guide human annotators to substantial agreement. This corpus and
the annotation guidelines are freely available for ensuring reproducibility and
to encourage future research in computational argumentation.
Comment: Under review in Computational Linguistics. First submission: 26 October 2015. Revised submission: 15 July 201
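The token-level sequence labeling step described in this abstract can be sketched as BIO encoding of argument component spans. The tags, example sentence, and span boundaries below are illustrative assumptions, not the paper's actual annotation.

```python
# Hypothetical sketch of token-level BIO encoding for argument
# components, as in sequence-labeling approaches; the example span
# and the "Premise" label are illustrative.
def bio_encode(tokens, spans):
    """spans: list of (start, end, label) with end exclusive."""
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"          # first token of the component
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"          # continuation tokens
    return tags

tokens = "Schools should ban junk food because it harms health".split()
tags = bio_encode(tokens, [(5, 9, "Premise")])
```

In the joint model the abstract mentions, the predicted component types and the relations between them would then be globally reconciled with integer linear programming constraints.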
Argument Strength is in the Eye of the Beholder: Audience Effects in Persuasion
Americans spend about a third of their time online, with many participating
in online conversations on social and political issues. We hypothesize that
social media arguments on such issues may be more engaging and persuasive than
traditional media summaries, and that particular types of people may be more or
less convinced by particular styles of argument, e.g. emotional arguments may
resonate with some personalities while factual arguments resonate with others.
We report a set of experiments testing at large scale how audience variables
interact with argument style to affect the persuasiveness of an argument, an
under-researched topic within natural language processing. We show that belief
change is affected by personality factors, with conscientious, open and
agreeable people being more convinced by emotional arguments.
Comment: European Chapter of the Association for Computational Linguistics (EACL 2017)
Analytic frameworks for assessing dialogic argumentation in online learning environments
Over the last decade, researchers have developed sophisticated online learning environments to support students engaging in argumentation. This review first considers the range of functionalities incorporated within these online environments. The review then presents five categories of analytic frameworks focusing on (1) formal argumentation structure, (2) normative quality, (3) nature and function of contributions within the dialog, (4) epistemic nature of reasoning, and (5) patterns and trajectories of participant interaction. Example analytic frameworks from each category are presented in detail rich enough to illustrate their nature and structure. This rich detail is intended to facilitate researchers' identification of possible frameworks to draw upon in developing or adopting analytic methods for their own work. Each framework is applied to a shared segment of student dialog to facilitate this illustration and comparison process. Synthetic discussions of each category consider the frameworks in light of the underlying theoretical perspectives on argumentation, pedagogical goals, and online environmental structures. Ultimately the review underscores the diversity of perspectives represented in this research, the importance of clearly specifying theoretical and environmental commitments throughout the process of developing or adopting an analytic framework, and the role of analytic frameworks in the future development of online learning environments for argumentation.
Topic Independent Identification of Agreement and Disagreement in Social Media Dialogue
Research on the structure of dialogue has been hampered for years because
large dialogue corpora have not been available. This has impacted the dialogue
research community's ability to develop better theories, as well as good
off-the-shelf tools for dialogue processing. Happily, an increasing amount of
information and opinion exchange occurs in natural dialogue in online forums,
where people share their opinions about a vast range of topics. In particular,
we are interested in rejection in dialogue, also called disagreement and
denial, where the size of available dialogue corpora, for the first time,
offers an opportunity to empirically test theoretical accounts of the
expression and inference of rejection in dialogue. In this paper, we test
whether topic-independent features motivated by theoretical predictions can be
used to recognize rejection in online forums in a topic independent way. Our
results show that our theoretically motivated features achieve 66% accuracy, an
improvement over a unigram baseline of an absolute 6%.
Comment: @inproceedings{Misra2013TopicII, title={Topic Independent Identification of Agreement and Disagreement in Social Media Dialogue}, author={Amita Misra and Marilyn A. Walker}, booktitle={SIGDIAL Conference}, year={2013}}
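The topic-independent features this abstract describes can be illustrated with a minimal sketch that counts surface cues of rejection, such as negation and contrast markers. The cue list below is an illustrative assumption, not the paper's actual feature set.

```python
# Hypothetical sketch of topic-independent rejection cues for
# disagreement detection; the cue list is illustrative only.
REJECTION_CUES = {"no", "not", "never", "but", "actually", "wrong"}

def rejection_cue_count(utterance: str) -> int:
    """Count rejection-cue tokens in an utterance, ignoring punctuation."""
    words = utterance.lower().split()
    return sum(w.strip(".,!?") in REJECTION_CUES for w in words)
```

Because the cues carry no topical content, a classifier built on features like this can in principle transfer across discussion topics, unlike a unigram baseline that learns topic-specific vocabulary.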
Examples and Specifications that Prove a Point: Identifying Elaborative and Argumentative Discourse Relations
Examples and specifications occur frequently in text, but not much is known about how they function in discourse and how readers interpret them. Looking at how they're annotated in existing discourse corpora, we find that annotators often disagree on these types of relations; specifically, there is disagreement about whether these relations are elaborative (additive) or argumentative (pragmatic causal). To investigate how readers interpret examples and specifications, we conducted a crowdsourced discourse annotation study. The results show that these relations can indeed have two functions: they can be used to both illustrate/specify a situation and serve as an argument for a claim. These findings suggest that examples and specifications can have multiple simultaneous readings. We discuss the implications of these results for discourse annotation.