
    Editors' Review and Introduction: Models of Rational Proof in Criminal Law

    Decisions concerning proof of facts in criminal law must be rational because of what is at stake, but the decision-making process must also be feasible given human cognitive limitations, and it must obey the relevant legal-procedural constraints. In this topic, three approaches to rational reasoning about evidence in criminal law are compared in light of these demands: arguments, probabilities, and scenarios. This is done in six case studies in which different authors analyze a manslaughter case from different theoretical perspectives, plus four commentaries on these case studies. The aim of this topic is to obtain more insight into how the different approaches can be applied in a legal context. This will advance the discussion on rational reasoning about evidence in law and will contribute more widely to cognitive science on a number of topics, including the value of probabilistic accounts of cognition and the problem of dealing with cognitive biases in reasoning under uncertainty in practical contexts.
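
    As a concrete illustration of the probabilistic approach mentioned above, the following minimal sketch updates a prior probability of guilt with the likelihood ratio of a single piece of evidence. All numbers are invented for illustration and are not taken from the case studies.

    # Bayes' rule in odds form; every number here is hypothetical.
    def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
        """Posterior odds = prior odds * likelihood ratio."""
        return prior_odds * likelihood_ratio

    def odds_to_prob(odds: float) -> float:
        return odds / (1.0 + odds)

    prior = 0.01                       # P(guilt) before the evidence
    prior_odds = prior / (1 - prior)
    lr = 50.0                          # P(E | guilt) / P(E | innocence)

    post = odds_to_prob(posterior_odds(prior_odds, lr))
    print(f"P(guilt | evidence) = {post:.3f}")   # ~0.336

    Even a strong likelihood ratio leaves the posterior well below certainty when the prior is low, which is one reason the probabilistic approach is argued to discipline intuitions about evidential weight.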

    Probabilities, causation, and logic programming in conditional reasoning: reply to Stenning and van Lambalgen (2016)

    Oaksford and Chater (2014, Thinking and Reasoning, 20, 269–295) critiqued the logic programming (LP) approach to nonmonotonicity and proposed that a Bayesian probabilistic approach to conditional reasoning provided a more empirically adequate theory. The current paper is a reply to Stenning and van Lambalgen's rejoinder to that earlier paper, entitled 'Logic programming, probability, and two-system accounts of reasoning: a rejoinder to Oaksford and Chater' (2016, Thinking and Reasoning). It is argued that causation is basic in human cognition and that explaining how abnormality lists are created in LP requires causal models. Each specific rejoinder to the original critique is then addressed. While many areas of agreement are identified, with respect to the key differences it is concluded that the current evidence favours the Bayesian approach, at least for the moment.
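
    The contrast at issue can be sketched in a few lines: a logic-programming default defeated by an abnormality list, versus a Bayesian conditional whose probability drops under further conditioning. The rules and numbers below are invented illustrations, not the authors' models.

    # LP style: "if bird and not abnormal, then flies", where the
    # abnormality list enumerates known defeaters (hypothetical here).
    abnormalities = {"penguin", "injured"}

    def lp_flies(bird_properties: set[str]) -> bool:
        """Closed-world default: flies unless some abnormality is derivable."""
        return not (bird_properties & abnormalities)

    # Bayesian style: the conditional is a probability P(flies | bird),
    # and defeaters lower it by conditioning on more evidence.
    p_flies_given_bird = 0.9
    p_flies_given_bird_and_penguin = 0.01

    print(lp_flies({"bird"}))             # True: no abnormality derivable
    print(lp_flies({"bird", "penguin"}))  # False: the ab-list defeats the default
    print(p_flies_given_bird, p_flies_given_bird_and_penguin)

    The reply's point is that deciding what belongs on the abnormality list in the first place seems to call for a causal model of how defeaters work, which the probabilistic formulation can absorb as conditioning.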

    Confirmation based on analogical inference: Bayes meets Jeffrey

    Certain hypotheses cannot be directly confirmed for theoretical, practical, or moral reasons. For some of these hypotheses, however, there might be a workaround: confirmation based on analogical reasoning. In this paper we take up Dardashti, Hartmann, Thébault, and Winsberg's (2019) idea of analyzing confirmation based on analogical inference in Bayesian style. We identify three types of confirmation by analogy and show that Dardashti et al.'s approach can cover two of them. We then highlight possible problems with their model as a general approach to analogical inference and argue that these problems can be avoided by supplementing Bayesian updating with Jeffrey conditionalization.
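
    Jeffrey conditionalization generalises strict Bayesian conditioning to evidence that is merely made more probable rather than certain, which is exactly the situation analogical evidence tends to produce. A minimal sketch with invented numbers:

    # Jeffrey's rule: P'(H) = P(H|E) P'(E) + P(H|~E) P'(~E).
    p_h_given_e = 0.8      # P(H | E)
    p_h_given_not_e = 0.2  # P(H | not E)

    def jeffrey_update(p_h_e: float, p_h_not_e: float, new_p_e: float) -> float:
        return p_h_e * new_p_e + p_h_not_e * (1 - new_p_e)

    # Strict conditioning is the special case where E becomes certain:
    print(jeffrey_update(p_h_given_e, p_h_given_not_e, 1.0))   # 0.8
    # An analogy may only make E more probable, not certain:
    print(jeffrey_update(p_h_given_e, p_h_given_not_e, 0.7))   # 0.62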

    Modelling causality in law

    The machine learning community's interest in causality has significantly increased in recent years, but this trend has not yet reached AI & Law. It should: the current associative machine-learning approach has limitations that causal analysis may overcome. This thesis aims to discover whether formal causal frameworks can be used in AI & Law. We begin with a brief review of scholarship on reasoning and causality in science and in law. Traditionally, the normative frameworks for reasoning have been logic and rationality, but dual-process theory has shown that human decision-making depends on many factors that defy rationality; as such, statistics and probability were called on to improve the prediction of decisional outcomes. In law, causal frameworks have been defined by landmark decisions, but most AI & Law models today do not involve causal analysis. We provide a brief summary of these models and then apply Judea Pearl's structural language and the Halpern-Pearl definitions of actual causality to model a few Canadian legal decisions that involve causality. The results suggest that it is not only possible to use formal causal models to describe legal decisions, but also useful, because a uniform schema eliminates ambiguity. Causal frameworks are also helpful in promoting accountability and minimizing bias.
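
    To give a flavour of the structural language, the following toy model checks a simple but-for (counterfactual) condition. The scenario and equations are invented, and the full Halpern-Pearl definitions involve further clauses that this sketch omits.

    # Toy structural causal model; the crash scenario is hypothetical.
    def model(driver_speeding: bool, road_icy: bool) -> bool:
        """Structural equation: a crash occurs iff speeding on an icy road."""
        return driver_speeding and road_icy

    actual = {"driver_speeding": True, "road_icy": True}
    crash = model(**actual)                      # actual outcome: True

    def but_for(var: str) -> bool:
        """Intervene on one variable and see whether the outcome flips."""
        intervened = dict(actual, **{var: not actual[var]})
        return model(**intervened) != crash

    print(but_for("driver_speeding"))  # True: speeding is a but-for cause
    print(but_for("road_icy"))         # True: so is the icy road

    Writing the decision as explicit structural equations is what eliminates the ambiguity: every causal claim in the judgment corresponds to a checkable intervention on the model.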

    An incremental explanation of inference in Bayesian networks for increasing model trustworthiness and supporting clinical decision making

    Various AI models are increasingly being considered as part of clinical decision-support tools. However, the trustworthiness of such models is rarely considered. Clinicians are more likely to use a model if they can understand and trust its predictions, and key to this is whether its underlying reasoning can be explained. A Bayesian network (BN) model has the advantage that it is not a black box: its reasoning can be explained. In this paper, we propose an incremental explanation of inference that can be applied to 'hybrid' BNs, i.e. those that contain both discrete and continuous nodes. The key questions that we answer are: (1) which important evidence supports or contradicts the prediction, and (2) through which intermediate variables does the information flow. The explanation is illustrated using a real clinical case study, and a small evaluation study is also conducted.
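
    The first question can be illustrated by ranking each evidence item by how much removing it shifts the posterior of the target variable. The tiny discrete network and all probabilities below are invented, and the paper's method for hybrid BNs is considerably richer than this sketch.

    # Two hypothetical tests, each conditionally dependent on a disease.
    p_disease = 0.1
    p_test = {  # P(test positive | disease)
        "test1": {True: 0.9, False: 0.2},
        "test2": {True: 0.7, False: 0.1},
    }

    def posterior(evidence: dict[str, bool]) -> float:
        """P(disease | evidence) by brute-force enumeration."""
        num = den = 0.0
        for d in (True, False):
            p = p_disease if d else 1 - p_disease
            for test, positive in evidence.items():
                p_pos = p_test[test][d]
                p *= p_pos if positive else 1 - p_pos
            den += p
            num += p if d else 0.0
        return num / den

    evidence = {"test1": True, "test2": False}
    base = posterior(evidence)
    for item in evidence:   # impact of each finding on the target
        rest = {k: v for k, v in evidence.items() if k != item}
        print(item, f"impact = {base - posterior(rest):+.3f}")

    Here test1 supports the prediction (positive impact) while test2 contradicts it (negative impact), which is the kind of per-finding breakdown a clinician can sanity-check against their own reasoning.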

    A general approach to reasoning with probabilities

    We propose a general scheme for adding probabilistic reasoning capabilities to a wide variety of knowledge representation formalisms, and we study its properties. Syntactically, we consider adding probabilities to the formulas of a given base logic. Semantically, we define a probability distribution over the subsets of a knowledge base by taking the probabilities of the individual formulas into account. This gives rise to a probabilistic entailment relation that can be used for uncertain reasoning. Our approach generalises many concrete probabilistic enrichments of existing approaches, such as ProbLog (an approach to probabilistic logic programming) and the constellation approach to abstract argumentation. We analyse general properties of our approach and provide some insights into novel instantiations that have not yet been investigated.
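
    In the spirit of ProbLog's distribution semantics, the following sketch assigns each formula an independent probability and sums the weights of the knowledge-base subsets that entail a query; the Horn-rule base logic and all probabilities are invented for illustration.

    import itertools

    # (formula, probability): facts are (head, ()) and rules (head, body).
    kb = [
        (("rain", ()), 0.6),
        (("sprinkler", ()), 0.3),
        (("wet", ("rain",)), 0.9),        # rain -> wet
        (("wet", ("sprinkler",)), 0.8),   # sprinkler -> wet
    ]

    def closure(formulas):
        """Forward-chain a set of Horn clauses to its derivable atoms."""
        derived = set()
        changed = True
        while changed:
            changed = False
            for head, body in formulas:
                if head not in derived and all(b in derived for b in body):
                    derived.add(head)
                    changed = True
        return derived

    def p_entails(query: str) -> float:
        """Total weight of the subsets of kb whose closure contains query."""
        total = 0.0
        for bits in itertools.product([True, False], repeat=len(kb)):
            subset = [f for (f, _), keep in zip(kb, bits) if keep]
            weight = 1.0
            for (_, p), keep in zip(kb, bits):
                weight *= p if keep else 1 - p
            if query in closure(subset):
                total += weight
        return total

    print(f"P(wet) = {p_entails('wet'):.3f}")   # ~0.650

    Swapping the forward-chaining closure for any other monotone entailment check instantiates the same scheme for a different base logic, which is the sense in which the approach is general.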