4,908 research outputs found

    Narration in judiciary fact-finding : a probabilistic explication

    Legal probabilism is the view that juridical fact-finding should be modeled using Bayesian methods. One alternative to it is the narration view, according to which we should instead conceptualize the process in terms of competing narrations of what (allegedly) happened. The goal of this paper is to develop a reconciliatory account, on which the narration view is construed from the Bayesian perspective within the framework of formal Bayesian epistemology.
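    The reconciliation the abstract describes can be illustrated with a minimal sketch: treat each narration as a hypothesis and update on the evidence with Bayes' theorem. The function and all numbers below are illustrative assumptions of mine, not taken from the paper.

    ```python
    def posterior(prior_a, prior_b, lik_a, lik_b):
        """Posterior probability of narration A after observing the evidence.

        prior_a / prior_b: prior probabilities of the two competing narrations.
        lik_a / lik_b: likelihood of the observed evidence under each narration.
        """
        joint_a = prior_a * lik_a
        joint_b = prior_b * lik_b
        return joint_a / (joint_a + joint_b)

    # Hypothetical case: narration A (defendant was at the scene) explains
    # the evidence well (0.8); narration B (the alibi holds) explains it
    # poorly (0.2). With equal priors, the posterior for A is 0.8.
    p_a = posterior(prior_a=0.5, prior_b=0.5, lik_a=0.8, lik_b=0.2)
    ```

    On this construal, "the better narration" is simply the hypothesis with the higher posterior, which is how the narration view can be recovered inside a Bayesian framework.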

    Arguing about causes in law: a semi-formal framework for causal arguments

    In legal argumentation and liability attribution, disputes over causes play a central role. Legal discussions about causation often have difficulty with cause-in-fact in complex situations, e.g. overdetermination, preemption, and omission. We first assess three theories of causation. We then introduce a semi-formal framework for modeling causal arguments using both strict and defeasible rules, and apply it to the Althen vaccine injury case. To conclude, we motivate a causal argumentation framework and propose integrating current theories of causation.

    Research in progress: report on the ICAIL 2017 doctoral consortium

    This paper arose out of the doctoral consortium at the 2017 International Conference on AI and Law (ICAIL). Five students presented their Ph.D. work there, and each has contributed a section to this paper. The paper offers a view of what topics are currently engaging students, and shows the diversity of their interests and influences.

    Towards Safe Artificial General Intelligence

    The field of artificial intelligence has recently experienced a number of breakthroughs thanks to progress in deep learning and reinforcement learning. Computer algorithms now outperform humans at Go, Jeopardy, image classification, and lip reading, and are becoming very competent at driving cars and interpreting natural language. This rapid development has led many to conjecture that artificial intelligence with greater-than-human ability on a wide range of tasks may not be far off. This in turn raises concerns about whether we know how to control such systems, should we succeed in building them. Indeed, if humanity were to find itself in conflict with a system of much greater intelligence than itself, human society would likely lose. One way to avoid such a conflict is to ensure that any future AI system with potentially greater-than-human intelligence has goals that are aligned with the goals of the rest of humanity. For example, it should not wish to kill humans or steal their resources. The main focus of this thesis is therefore goal alignment, i.e. how to design artificially intelligent agents whose goals coincide with the goals of their designers. Focus is mainly directed towards variants of reinforcement learning, as reinforcement learning currently seems to be the most promising path towards powerful artificial intelligence. We identify and categorize goal misalignment problems in reinforcement learning agents as designed today, and give examples of how these agents may cause catastrophes in the future. We also suggest a number of reasonably modest modifications that can be used to avoid or mitigate each identified misalignment problem. Finally, we study various choices of decision algorithms, and conditions under which a powerful reinforcement learning system will permit us to shut it down. 
The central conclusion is that while reinforcement learning systems as designed today are inherently unsafe to scale to human levels of intelligence, there are ways to address many of these issues without straying too far from the currently successful reinforcement learning paradigm. Much work remains, however, in turning the high-level proposals of this thesis into practical algorithms.

    In memoriam Douglas N. Walton: the influence of Doug Walton on AI and law

    Doug Walton, who died in January 2020, was a prolific author whose work in informal logic and argumentation had a profound influence on Artificial Intelligence, including Artificial Intelligence and Law. He was also very interested in interdisciplinary work, and was a frequent and generous collaborator. In this paper, seven leading researchers in AI and Law, all past programme chairs of the International Conference on AI and Law who have worked with him, describe his influence on their work.

    The Primacy of Knowledge: A Critical Survey of Timothy Williamson's Views on Knowledge, Assertion and Scepticism

    The following thesis discusses a range of central aspects of Timothy Williamson's so-called "knowledge-first" epistemology. In particular, it addresses whether this kind of epistemological framework is apt to answer the challenges of scepticism.

    An approach to human-machine teaming in legal investigations using anchored narrative visualisation and machine learning

    During legal investigations, analysts typically create external representations of an investigated domain as a resource for cognitive offloading, reflection and collaboration. For investigations involving very large numbers of documents as evidence, creating such representations can be slow and costly, but essential. We believe that software tools, including interactive visualisation and machine learning, can be transformative in this arena, but that their design must be predicated on an understanding of how such tools might support and enhance investigator cognition and team-based collaboration. In this paper, we propose an approach to this problem by: (a) allowing users to visually externalise their evolving mental models of an investigation domain in the form of thematically organized Anchored Narratives; and (b) using such narratives as a (more or less) tacit interface to cooperative, mixed-initiative machine learning. We elaborate our approach through a discussion of representational forms significant to legal investigations, and discuss the idea of linking such representations to machine learning.