Recognizing cited facts and principles in legal judgements
In common law jurisdictions, legal professionals cite facts and legal principles from precedent cases to support their arguments before the court for their intended outcome in a current case. This practice stems from the doctrine of stare decisis, under which cases with similar facts should receive similar decisions with respect to the applicable principles. It is essential for legal professionals to identify such facts and principles in precedent cases, yet doing so is a highly time-intensive task. In this paper, we present studies demonstrating that human annotators can achieve reasonable agreement on which sentences in legal judgements contain cited facts and principles (κ=0.65 and κ=0.95 for inter- and intra-annotator agreement, respectively). We further demonstrate that it is feasible to automatically annotate sentences containing such legal facts and principles in a supervised machine learning framework based on linguistic features. Using a Bayesian classifier to label sentences in legal judgements as cited facts, principles, or neither, we report per-category precision and recall figures between 0.79 and 0.89, with an overall κ of 0.72 against the human-annotated gold standard.
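The supervised setup described in this abstract, a Bayesian classifier assigning sentences to the classes fact, principle, or neither, can be sketched as a multinomial Naive Bayes model over sentence tokens. This is a minimal illustration only: the token features, class names, and toy training sentences below are invented for the example and are not the authors' actual feature set.

```python
from collections import Counter, defaultdict
import math

def train_nb(docs):
    """Fit a multinomial Naive Bayes model.
    docs: list of (tokens, label) pairs."""
    class_counts = Counter()              # P(label) estimates
    word_counts = defaultdict(Counter)    # P(token | label) estimates
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def classify(tokens, class_counts, word_counts, vocab):
    """Return the label maximizing log P(label) + sum log P(token | label),
    with add-one (Laplace) smoothing for unseen tokens."""
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for t in tokens:
            lp += math.log((word_counts[label][t] + 1) / (total_words + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy training sentences (invented for illustration, not from the paper's corpus)
train = [
    (["the", "claimant", "slipped", "on", "ice"], "fact"),
    (["the", "defendant", "drove", "the", "lorry"], "fact"),
    (["negligence", "requires", "a", "duty", "of", "care"], "principle"),
    (["a", "duty", "of", "care", "was", "established"], "principle"),
]
model = train_nb(train)
classify(["the", "claimant", "slipped"], *model)  # → 'fact'
```

In practice the paper reports using linguistic features rather than raw tokens, so the feature extraction step would replace the plain token lists shown here.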
Argumentation Mining in User-Generated Web Discourse
The goal of argumentation mining, an evolving research field in computational
linguistics, is to design methods capable of analyzing people's argumentation.
In this article, we go beyond the state of the art in several ways. (i) We deal
with actual Web data and take up the challenges given by the variety of
registers, multiple domains, and unrestricted noisy user-generated Web
discourse. (ii) We bridge the gap between normative argumentation theories and
argumentation phenomena encountered in actual data by adapting an argumentation
model tested in an extensive annotation study. (iii) We create a new gold
standard corpus (90k tokens in 340 documents) and experiment with several
machine learning methods to identify argument components. We offer the data,
source codes, and annotation guidelines to the community under free licenses.
Our findings show that argumentation mining in user-generated Web discourse is
a feasible but challenging task.
Cite as: Habernal, I., & Gurevych, I. (2017). Argumentation Mining in User-Generated Web Discourse. Computational Linguistics, 43(1), pp. 125-17
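Both of the annotation studies above report chance-corrected agreement, conventionally Cohen's κ. A minimal computation sketch (the two annotators' label sequences below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' parallel label sequences."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items both annotators labeled identically
    po = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from each annotator's marginal label distribution
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

# Invented toy annotations: agreement on 3 of 4 items
cohens_kappa(["x", "x", "y", "y"], ["x", "x", "y", "x"])  # → 0.5
```

κ discounts agreement expected by chance, which is why the values of 0.65–0.95 cited above are informative even for skewed label distributions.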
An Overview of Computational Approaches for Interpretation Analysis
It is said that beauty is in the eye of the beholder. But how exactly can we
characterize such discrepancies in interpretation? For example, are there any
specific features of an image that makes person A regard an image as beautiful
while person B finds the same image displeasing? Such questions ultimately aim
at explaining our individual ways of interpretation, an intention that has been
of fundamental importance to the social sciences from the beginning. More
recently, advances in computer science brought up two related questions: First,
can computational tools be adopted for analyzing ways of interpretation?
Second, what if the "beholder" is a computer model, i.e., how can we explain a
computer model's point of view? Numerous efforts have been made regarding both
of these points, while many existing approaches focus on particular aspects and
are still rather separate. With this paper, in order to connect these
approaches, we introduce a theoretical framework for analyzing interpretation
that applies to interpretation by both human beings and computer models.
We give an overview of relevant computational approaches from various fields,
and discuss the most common and promising application areas. The focus of this
paper lies on interpretation of text and image data, while many of the
presented approaches are applicable to other types of data as well.
Preprint submitted to Digital Signal Processing