Recognizing cited facts and principles in legal judgements
In common law jurisdictions, legal professionals cite facts and legal principles from precedent cases to support their arguments before the court for their intended outcome in a current case. This practice stems from the doctrine of stare decisis, under which cases that have similar facts should receive similar decisions with respect to the applicable principles. It is essential for legal professionals to identify such facts and principles in precedent cases, though this is a highly time-intensive task. In this paper, we present studies demonstrating that human annotators can achieve reasonable agreement on which sentences in legal judgements contain cited facts and principles (κ = 0.65 and κ = 0.95 for inter- and intra-annotator agreement, respectively). We further demonstrate that it is feasible to automatically annotate sentences containing such legal facts and principles in a supervised machine learning framework based on linguistic features, reporting per-category precision and recall figures of between 0.79 and 0.89 for classifying sentences in legal judgements as cited facts, principles, or neither using a Bayesian classifier, with an overall κ of 0.72 against the human-annotated gold standard.
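The agreement figures in this abstract are Cohen's κ scores, which correct raw annotator agreement for agreement expected by chance. A minimal sketch of the computation (the label set and annotation sequences below are made-up illustrations, not data from the paper):

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa between two annotators' label sequences."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items both annotators label identically.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: probability of agreeing if each annotator
    # labeled independently according to their own label distribution.
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical sentence-level annotations: fact / principle / neither.
ann1 = ["fact", "fact", "principle", "neither", "principle", "neither"]
ann2 = ["fact", "principle", "principle", "neither", "principle", "neither"]
print(cohen_kappa(ann1, ann2))  # → 0.75
```

Raw agreement here is 5/6, but κ discounts the 1/3 agreement expected by chance, yielding 0.75.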
Argument mining: A machine learning perspective
Argument mining has recently become a hot topic, attracting the interest of several diverse research communities, ranging from artificial intelligence to computational linguistics, natural language processing, and the social and philosophical sciences. In this paper, we attempt to describe the problems and challenges of argument mining from a machine learning angle. In particular, we argue that machine learning techniques have so far been under-exploited, and that a more proper standardization of the problem, also with regard to the underlying argument model, could provide a crucial element for developing better systems.
Teaching Law and Digital Age Legal Practice with an AI and Law Seminar
This article provides a guide and examples for using a seminar on Artificial Intelligence (AI) and Law to teach lessons about legal reasoning and about legal practice in the digital age. Artificial Intelligence and Law is a subfield of AI/computer science research that focuses on computationally modeling legal reasoning. In at least a few law schools, the AI and Law seminar has regularly taught students fundamental issues about law and legal reasoning by focusing them on the problems these issues pose for scientists attempting to computationally model legal reasoning. AI and Law researchers have designed programs to reason with legal rules, apply legal precedents, predict case outcomes, argue like a legal advocate, and visualize legal arguments. The article illustrates some of the pedagogically important lessons that they have learned in the process.
As the technology of legal practice catches up with the aspirations of AI and Law researchers, the AI and Law seminar can play a new role in legal education. With advances in such areas as e-discovery, legal information retrieval (IR), and semantic processing of web-based information for electronic contracting, the chances are increasing that, in their legal practices, law students will use, and even depend on, systems that employ AI techniques. As explained in the article, an AI and Law seminar invites students to think about processes of legal reasoning and legal practice and about how those processes employ information. It teaches how the new digital document technologies work, what they can and cannot do, how to measure performance, how to evaluate claims about the technologies, and how to be savvy consumers and users of the technologies.
Artificial intelligence as law: Presidential address to the seventeenth international conference on artificial intelligence and law
Information technology is so ubiquitous and AI's progress so inspiring that legal professionals, too, experience its benefits and have high expectations. At the same time, the powers of AI have been rising so strongly that it is no longer obvious that AI applications (whether in the law or elsewhere) help promote a good society; in fact, they are sometimes harmful. Hence many argue that safeguards are needed for AI to be trustworthy, social, responsible, humane, and ethical. In short: AI should be good for us. But how can proper safeguards for AI be established? One strong answer readily available is: consider the problems and solutions studied in AI & Law. AI & Law has worked on the design of social, explainable, responsible AI aligned with human values for decades already; AI & Law addresses the hardest problems across the breadth of AI (in reasoning, knowledge, learning, and language); and AI & Law inspires new solutions (argumentation, schemes and norms, rules and cases, interpretation). It is argued that the study of AI as Law supports the development of an AI that is good for us, making AI & Law more relevant than ever.
Argumentation Mining in User-Generated Web Discourse
The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people's argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges posed by the variety of registers, multiple domains, and unrestricted noisy user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source code, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task.
Cite as: Habernal, I. & Gurevych, I. (2017). Argumentation Mining in User-Generated Web Discourse. Computational Linguistics 43(1), pp. 125-17
Law Search in the Age of the Algorithm
The process of searching for relevant legal materials is fundamental to legal reasoning. However, despite its enormous practical and theoretical importance, law search has not been given significant attention by scholars. In this Article, we define the problem of law search and examine the consequences of new technologies capable of automating this core lawyerly task. We introduce a theory of law search in which legal relevance is a sociological phenomenon that leads to convergence over a shared set of legal materials, and explore the normative stakes of law search. We examine ways in which law scholars can understand empirically the phenomenon of law search, argue that computational modeling is a valuable epistemic tool in this domain, and report the results from a multi-year, interdisciplinary effort to develop an advanced law search algorithm based on human-generated data. Finally, we explore how policymakers can manage the challenges posed by new machine-learning-based search technologies.
Science, Technology, Society, and Law
Law and regulation increasingly interact with science, technology, and medicine in contemporary society. Law and social science (LSS) analyses can therefore benefit from rigorous, nuanced social scientific accounts of the nature of scientific knowledge and practice. Over the past two decades, LSS scholars have increasingly turned for such accounts to the field known as science and technology studies (STS). This article reviews the LSS literature that draws on STS. Our discussion is divided into two primary sections. We first discuss LSS literature that draws on STS because it deals with issues in which law and science interact. We then discuss literature that draws on STS because it sees law as analogous to science as a knowledge-producing institution amenable to social science analysis. We suggest that through both of these avenues STS can encourage a newly critical view within LSS scholarship.
A Model of Critical Thinking in Higher Education
"Critical thinking in higher education" is a phrase that means many things to many people. It is a broad church. Does it mean a propensity for finding fault? Does it refer to an analytical method? Does it mean an ethical attitude or a disposition? Does it mean all of the above? Educating to develop critical intellectuals and the Marxist concept of critical consciousness are very different from the logician's toolkit of finding fallacies in passages of text, or the practice of identifying and distinguishing valid from invalid syllogisms. Critical thinking in higher education can also encompass debates about critical pedagogy, i.e., political critiques of the role and function of education in society, critical feminist approaches to curriculum, issues related to what has become known as critical citizenship, or any other education-related topic that uses the appellation "critical". Equally, it can, and usually does, refer to the importance and centrality of developing general skills in reasoning: skills that we hope all graduates possess. Yet, despite more than four decades of dedicated scholarly work, "critical thinking" remains as elusive as ever. As a concept, it is, as Raymond Williams has noted, a "most difficult one" (Williams, 1976, p. 74).
Toward Artificial Argumentation
The field of computational models of argument is emerging as an important aspect of artificial intelligence research. The reason for this is the recognition that if we are to develop robust intelligent systems, then it is imperative that they can handle incomplete and inconsistent information in a way that somehow emulates the way humans tackle such a complex task. And one of the key ways that humans do this is to use argumentation, either internally, by evaluating arguments and counterarguments, or externally, by, for instance, entering into a discussion or debate where arguments are exchanged. As we report in this review, recent developments in the field are leading to technology for artificial argumentation in the legal, medical, and e-government domains, and interesting tools for argument mining, for debating technologies, and for argumentation solvers are emerging.
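The argumentation solvers mentioned above typically compute sets of jointly acceptable arguments (extensions) over Dung-style abstract argumentation frameworks, where arguments are nodes and attacks are directed edges. A minimal sketch of the grounded semantics, the most skeptical such extension (the three-argument framework below is a made-up illustration, not from the review):

```python
def grounded_extension(args, attacks):
    """Grounded extension of an abstract argumentation framework:
    iteratively accept every argument whose attackers are all defeated,
    and defeat every argument attacked by an accepted one, until fixpoint."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted or a in defeated:
                continue
            if attackers[a] <= defeated:      # all attackers are out
                accepted.add(a)
                changed = True
            elif attackers[a] & accepted:     # attacked by an accepted argument
                defeated.add(a)
                changed = True
    return accepted

# a attacks b, b attacks c: a is unattacked, so b falls and c is reinstated.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))  # → {'a', 'c'}
```

A self-attacking argument is neither accepted nor defeated under this semantics, so the grounded extension of a framework containing only such an argument is empty.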