    An Examination of the Socio-Economic Determinants of Punishment Using Abductive Polynomial Networks

    The purpose of this research is to examine aspects of the relationship between socio-economic conditions and imprisonment in a particular historical setting. Previous research suggests that this relationship is problematic and situationally variable. The approach taken in this dissertation reflects a belief that earlier studies can be faulted for their failure to take account of the fiscal climate of the state as an influence on the size of prison populations. This analysis will employ the Marxist model, as developed by Rusche and Kirchheimer (1939) and widely applied (though with mixed results) in research conducted over the last half-century. This model will be modified according to the postulates of the model specified by O'Connor (1973), which delineates the relationship between state spending and the development of capitalist society. Although fiscal influences are mentioned by Rusche and Kirchheimer, they have not been integrated into a research model, either by these authors or by those who have followed them. One important object of this research will therefore be to evaluate the usefulness of the Marxist approach to the analysis of the labor supply/imprisonment nexus, as this approach is represented by a modified and, supposedly, improved version of a standard model. The project will at the same time attempt to determine the influence of fiscal factors on penal policy. Characteristics of prison populations addressed will include race, a characteristic important here mainly as an indicator of marginality. Findings in this area will, however, be of additional value in documenting the particular impact of penal policies on minorities.

    A Default-Logic Paradigm for Legal Reasoning and Factfinding

    Unlike research in linguistics and artificial intelligence, legal research has not used advances in logical theory very effectively. This article uses default logic to develop a paradigm for analyzing all aspects of legal reasoning, including factfinding. The article provides a formal model that integrates legal rules and policies with the evaluation of both expert and non-expert evidence – whether the reasoning occurs in courts or administrative agencies, and whether in domestic, foreign, or international legal systems. This paradigm can standardize the representation of legal reasoning, guide empirical research into the dynamics of such reasoning, and put the representations and research results to immediate use through artificial intelligence software. This new model therefore has the potential to transform legal practice and legal education, as well as legal theory.
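    The article's paradigm is presented formally rather than computationally, but the core default-logic pattern it builds on, a rule that licenses a conclusion unless an exception is established, can be sketched in a few lines of Python. Everything below (the rule structure, the fact labels, the contract example) is an illustrative assumption, not the article's own formalism.

```python
from dataclasses import dataclass

@dataclass
class DefaultRule:
    """A defeasible rule: if all antecedents are established and no
    listed defeater is established, tentatively draw the consequent."""
    antecedents: frozenset
    consequent: str
    defeaters: frozenset = frozenset()

    def fires(self, established: set) -> bool:
        # Default-logic pattern: premises present, exceptions absent.
        return (self.antecedents <= established
                and not (self.defeaters & established))

# Hypothetical contract-law rule: offer, acceptance, and consideration
# yield a valid contract by default, unless duress is established.
rule = DefaultRule(
    antecedents=frozenset({"offer", "acceptance", "consideration"}),
    consequent="valid_contract",
    defeaters=frozenset({"duress"}),
)

facts = {"offer", "acceptance", "consideration"}
print(rule.fires(facts))               # True: the default conclusion stands
print(rule.fires(facts | {"duress"}))  # False: the exception defeats the rule
```

    The defeasibility is the point: adding a fact can retract a conclusion, which is what distinguishes default logic from classical deduction and what makes it a natural fit for legal rules with exceptions.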

    Research in progress: report on the ICAIL 2017 doctoral consortium

    This paper arose out of the doctoral consortium at the 2017 International Conference on AI and Law (ICAIL). Five students presented their Ph.D. work, and each of them has contributed a section to this paper. The paper offers a view of the topics currently engaging students and shows the diversity of their interests and influences.

    In memoriam Douglas N. Walton: the influence of Doug Walton on AI and law

    Doug Walton, who died in January 2020, was a prolific author whose work in informal logic and argumentation had a profound influence on Artificial Intelligence, including Artificial Intelligence and Law. He was also very interested in interdisciplinary work, and a frequent and generous collaborator. In this paper, seven leading researchers in AI and Law, all past programme chairs of the International Conference on AI and Law who have worked with him, describe his influence on their work.

    Modelling causality in law = Modélisation de la causalité en droit

    The machine learning community's interest in causality has significantly increased in recent years. This trend has not yet become widespread in AI & Law, but it should: the current associative machine learning approach has limitations that causal analysis may overcome. This thesis aims to discover whether formal causal models can be used in AI & Law. We begin with a brief review of scholarship on reasoning and causality in science and in law. Traditionally, the normative frameworks for reasoning have been logic and rationality, but dual-process theory has shown that human decision-making depends on many factors that defy rationality; statistics and probability were therefore called on to improve the prediction of decisional outcomes. In law, causal frameworks have been defined by landmark decisions, but most AI & Law models today do not involve causal analysis. We provide a brief summary of these models and then apply Judea Pearl's structural language and the Halpern-Pearl definitions of actual causality to model a few Canadian legal decisions that involve causality. The results suggest that formal causal models are not only able to describe legal decisions but also useful, because a uniform schema eliminates ambiguity. Causal frameworks are, moreover, helpful in promoting accountability and minimizing bias.
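    As a rough, self-contained illustration of the kind of modelling the thesis describes, the Python sketch below encodes a two-variable structural model and a drastically simplified Halpern-Pearl-style test of actual causation. The tort scenario, the variable names, and the reduced AC1/AC2 check are assumptions made for illustration; they are not the thesis's Canadian case models or the full HP definitions.

```python
# Structural equation for a hypothetical tort scenario: harm occurs if
# the driver speeds OR the brakes fail (an overdetermined outcome).
def harm(speeding: bool, brake_failure: bool) -> bool:
    return speeding or brake_failure

def speeding_is_actual_cause(speeding: bool, brake_failure: bool) -> bool:
    """Simplified Halpern-Pearl-style test of 'speeding was an actual
    cause of the harm' (illustrative, not the full HP definition)."""
    # AC1: the candidate cause and the harm must both actually obtain.
    if not (speeding and harm(speeding, brake_failure)):
        return False
    # AC2 (simplified): under some contingency fixing brake_failure,
    # intervening to remove the speeding makes the harm disappear.
    return any(not harm(False, fixed) for fixed in (False, True))

# With both factors present, a plain but-for test fails (the harm would
# have occurred anyway), yet the HP contingency recovers causation.
print(speeding_is_actual_cause(True, True))   # True
print(speeding_is_actual_cause(False, True))  # False: the driver was not speeding
```

    The contingency step is what does the work in overdetermination cases: neither factor is a but-for cause on its own, but holding the brakes sound while removing the speeding exposes its causal contribution, which is the kind of uniform, unambiguous schema the abstract credits formal causal models with providing.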

    On relationships between the logic of law, legal positivism and semiotics of law

    The issue of reciprocal relationships between the logic of law, the positivistic theory of law, and legal semiotics is among the most important questions of modern theoretical jurisprudence. This paper does not attempt to provide a comprehensive account of modern jurisprudence (or of legal logic). Instead, the emphasis is laid on those aspects of positivist legal theories, logical studies of law, and legal semiotics that allow tracing the common points and the differences between these paradigms of legal research. One thesis of the present work is that, at the comparative methodological level, the limits of legal semiotics and its object of inquiry can only be defined in relation to legal positivism and logical studies of law. The paper also argues for a proper position for legal semiotics between legal positivism and legal logic. The differences between legal positivism, legal logic, and legal semiotics are best captured in the issue of the referent.

    Advancing Computational Models of Narrative

    Report of a Workshop held at the Wylie Center, Beverly, MA, October 8–10, 2009. Sponsored by the AFOSR under MIT-MURI contract #FA9550-05-1-032.

    Plausible Cause: Explanatory Standards in the Age of Powerful Machines

    Much scholarship in law and political science has long understood the U.S. Supreme Court to be the apex court in the federal judicial system, and so to relate hierarchically to lower federal courts. On that top-down view, exemplified by the work of Alexander Bickel and many subsequent scholars, the Court is the principal, and lower federal courts are its faithful agents. Other scholarship takes a bottom-up approach, viewing lower federal courts as faithless agents or analyzing the percolation of issues in those courts before the Court decides. This Article identifies circumstances in which the relationship between the Court and other federal courts is best viewed as neither top-down nor bottom-up, but side-by-side. When the Court intervenes in fierce political conflicts, it may proceed in stages, interacting with other federal courts in a way that is aimed at enhancing its public legitimacy. First, the Court renders a decision that is interpreted as encouraging, but not requiring, other federal courts to expand the scope of its initial ruling. Then, most federal courts do expand the scope of the ruling, relying upon the Court's initial decision as authority for doing so. Finally, the Court responds by invoking those district and circuit court decisions as authority for its own more definitive resolution. That dialectical process, which this Article calls reciprocal legitimation, was present along the path from Brown v. Board of Education to the unreasoned per curiams, from Baker v. Carr to Reynolds v. Sims, and from United States v. Windsor to Obergefell v. Hodges, as partially captured by Appendix A to the Court's opinion in Obergefell and the opinion's several references to it. This Article identifies the phenomenon of reciprocal legitimation, explains that it may initially be intentional or unintentional, and examines its implications for theories of constitutional change and for scholarship in federal courts and judicial politics. Although the Article's primary contribution is descriptive and analytical, it also normatively assesses reciprocal legitimation given the sacrifice of judicial candor that may accompany it. A Coda examines the likelihood and desirability of reciprocal legitimation in response to President Donald Trump's derision of the federal courts as political and so illegitimate.

    Plausible Cause: Explanatory Standards in the Age of Powerful Machines

    The Fourth Amendment's probable cause requirement is not about numbers or statistics. It is about requiring the police to account for their decisions. For a theory of wrongdoing to satisfy probable cause, and warrant a search or seizure, it must be plausible. The police must be able to explain why the observed facts invite an inference of wrongdoing, and judges must have an opportunity to scrutinize that explanation. Until recently, the explanatory aspect of Fourth Amendment suspicion, plausible cause, has been uncontroversial, and central to the Supreme Court's jurisprudence, for a simple reason: explanations have served, in practice, as a guarantor of statistical likelihood. In other words, forcing police to articulate theories of wrongdoing is the means by which courts have traditionally ensured that (roughly) the right persons, houses, papers, and effects are targeted for intrusion. Going forward, however, technological change promises to disrupt the harmony between explanatory standards and statistical accuracy. Powerful machines enable a previously impossible combination: accurate predictions unaccompanied by explanations. As that change takes hold, we will need to think carefully about why explanation-giving matters. When judges assess the sufficiency of explanations offered by police (and other officials), what are they doing? If the answer comes back to error-reduction, if the point of judicial oversight is simply to maximize the overall number of accurate decisions, machines could theoretically do the job as well as, if not better than, humans. But if the answer involves normative goals beyond error-reduction, automated tools, no matter their power, will remain, at best, partial substitutes for judicial scrutiny. This Article defends the latter view. I argue that statistical accuracy, though important, is not the crux of explanation-giving. Rather, explanatory standards, like probable cause, hold officials accountable to a plurality of sometimes-conflicting constitutional and rule-of-law values that, in our legal system, bound the scope of legitimate authority. Error-reduction is one such value. But there are many others, and sometimes the values work at cross purposes. When judges assess explanations, they navigate a space of value-pluralism: they identify which values are at stake in a given decisional environment and ask, where necessary, if those values have been properly balanced. Unexplained decisions render this process impossible and, in so doing, hobble the judicial role. Ultimately, that role has less to do with analytic power than practiced wisdom. A common argument against replacing judges, and other human experts, with intelligent machines is that machines are not (yet) intelligent enough to take up the mantle. In the age of powerful algorithms, however, this turns out to be a weak, and temporally limited, claim. The better argument, I suggest in closing, is that judging is not solely, or even primarily, about intelligence. It is about prudence.