
    A method for explaining Bayesian networks for legal evidence with scenarios

    In a criminal trial, a judge or jury needs to reason about what happened based on the available evidence, often including statistical evidence. While a probabilistic approach is suitable for analysing the statistical evidence, a judge or jury may be more inclined to use a narrative or argumentative approach when considering the case as a whole. In this paper we propose a combination of these two approaches, combining Bayesian networks with scenarios. Whereas a Bayesian network is a popular tool for analysing parts of a case, constructing and understanding a network for an entire case is not straightforward. We propose an explanation method for understanding a Bayesian network in terms of scenarios. This method builds on a previously proposed construction method, which we slightly adapt with the use of scenario schemes for the purpose of explanation. The resulting structure is explained in terms of scenarios, scenario quality and evidential support. A probabilistic interpretation of scenario quality is provided using the concept of scenario schemes. Finally, the method is evaluated by means of a case study.

    Calculating and understanding the value of any type of match evidence when there are potential testing errors

    It is well known that Bayes’ theorem (with likelihood ratios) can be used to calculate the impact of evidence, such as a ‘match’ of some feature of a person. Typically the feature of interest is the DNA profile, but the method applies in principle to any feature of a person or object, including not just DNA, fingerprints, or footprints, but also more basic features such as skin colour, height, hair colour or even name. Notwithstanding concerns about the extensiveness of databases of such features, a serious challenge to the use of Bayes in such legal contexts is that its standard formulaic representations are not readily understandable to non-statisticians. Attempts to get round this problem usually involve representations based around some variation of an event tree. While this approach works well in explaining the most trivial instance of Bayes’ theorem (involving a single hypothesis and a single piece of evidence) it does not scale up to realistic situations. In particular, even with a single piece of match evidence, if we wish to incorporate the possibility that there are potential errors (both false positives and false negatives) introduced at any stage in the investigative process, matters become very complex. As a result we have observed expert witnesses (in different areas of speciality) routinely ignore the possibility of errors when presenting their evidence. To counter this, we produce what we believe is the first full probabilistic solution of the simple case of generic match evidence incorporating both classes of testing errors. Unfortunately, the resultant event tree solution is too complex for intuitive comprehension. And, crucially, the event tree also fails to represent the causal information that underpins the argument. In contrast, we also present a simple-to-construct graphical Bayesian Network (BN) solution that automatically performs the calculations and may also be intuitively simpler to understand. Although there have been multiple previous applications of BNs for analysing forensic evidence (including very detailed models for the DNA matching problem), these models have not widely penetrated the expert witness community. Nor have they addressed the basic generic match problem incorporating the two types of testing error. Hence we believe our basic BN solution provides an important mechanism for convincing experts, and eventually the legal community, that it is possible to rigorously analyse and communicate the full impact of match evidence on a case, in the presence of possible error.
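
    To make the role of testing errors concrete, the following minimal Python sketch computes a likelihood ratio for reported match evidence under one common decomposition (random match probability plus false positive and false negative rates); the function names, example figures and the decomposition itself are illustrative assumptions rather than the paper's exact model.

```python
# Hypothetical sketch of the likelihood ratio for reported match evidence when the
# test can produce false positives and false negatives. Variable names, example
# figures and the decomposition are illustrative assumptions, not the paper's model.

def match_likelihood_ratio(random_match_prob, false_pos_rate, false_neg_rate):
    """LR = P(reported match | suspect is source) / P(reported match | suspect is not source)."""
    # If the suspect is the source, the true profiles match, so a match is
    # reported unless a false negative occurs.
    p_match_given_source = 1.0 - false_neg_rate
    # If the suspect is not the source, a match is reported either because an
    # unrelated profile truly matches (and no false negative occurs) or because
    # the test falsely reports a match on non-matching profiles.
    p_match_given_not_source = (random_match_prob * (1.0 - false_neg_rate)
                                + (1.0 - random_match_prob) * false_pos_rate)
    return p_match_given_source / p_match_given_not_source

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

if __name__ == "__main__":
    # With a random match probability of 1 in a million but a 0.1% false positive
    # rate, the error rates dominate the value of the evidence.
    lr = match_likelihood_ratio(1e-6, 0.001, 0.01)
    print(f"LR = {lr:,.0f}")                                # roughly 990, far below the naive 1,000,000
    print(f"posterior odds = {posterior_odds(1 / 1000, lr):.3f}")
```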

    A Taxonomy of Explainable Bayesian Networks

    Artificial Intelligence (AI), and in particular, the explainability thereof, has gained phenomenal attention over the last few years. Whilst we usually do not question the decision-making process of these systems in situations where only the outcome is of interest, we do however pay close attention when these systems are applied in areas where the decisions directly influence the lives of humans. Noisy and uncertain observations close to the decision boundary, in particular, result in predictions that cannot readily be explained and may foster mistrust among end-users. This drew attention to AI methods for which the outcomes can be explained. Bayesian networks are probabilistic graphical models that can be used as a tool to manage uncertainty. The probabilistic framework of a Bayesian network allows for explainability in the model, reasoning and evidence. The use of these methods is mostly ad hoc and not as well organised as explainability methods in the wider AI research field. As such, we introduce a taxonomy of explainability in Bayesian networks. We extend the existing categorisation of explainability in the model, reasoning or evidence to include explanation of decisions. The explanations obtained from the explainability methods are illustrated by means of a simple medical diagnostic scenario. The taxonomy introduced in this paper has the potential not only to encourage end-users to efficiently communicate outcomes obtained, but also to support their understanding of how and, more importantly, why certain predictions were made.
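
    The simple medical diagnostic scenario mentioned above can be pictured as a two-node Bayesian network; the sketch below uses hypothetical variables and made-up probabilities purely to show the kind of posterior query whose outcome an explainability method then has to account for.

```python
# Hypothetical two-node diagnostic Bayesian network: Disease -> Test.
# All probabilities are made-up illustration values, not taken from the paper.

P_DISEASE = 0.01                       # prior P(disease present)
P_TEST_POS = {True: 0.90,              # P(test positive | disease present) = sensitivity
              False: 0.05}             # P(test positive | disease absent)  = false positive rate

def posterior_disease_given_positive():
    """Exact inference by enumeration: P(disease | test positive) via Bayes' rule."""
    joint_pos_and_disease = P_DISEASE * P_TEST_POS[True]
    joint_pos_and_healthy = (1.0 - P_DISEASE) * P_TEST_POS[False]
    p_test_positive = joint_pos_and_disease + joint_pos_and_healthy
    return joint_pos_and_disease / p_test_positive

if __name__ == "__main__":
    # A positive test raises the probability of disease from 1% to roughly 15%;
    # an explanation method would report which evidence drove this change and why.
    print(f"P(disease | positive test) = {posterior_disease_given_positive():.3f}")
```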

    Editors' Review and Introduction: Models of Rational Proof in Criminal Law

    Decisions concerning proof of facts in criminal law must be rational because of what is at stake, but the decision-making process must also be cognitively feasible because of cognitive limitations, and it must obey the relevant legal-procedural constraints. In this topic, three approaches to rational reasoning about evidence in criminal law are compared in light of these demands: arguments, probabilities, and scenarios. This is done in six case studies in which different authors analyze a manslaughter case from different theoretical perspectives, plus four commentaries on these case studies. The aim of this topic is to obtain more insight into how the different approaches can be applied in a legal context. This will advance the discussion on rational reasoning about evidence in law and will contribute more widely to cognitive science on a number of topics, including the value of probabilistic accounts of cognition and the problem of dealing with cognitive biases in reasoning under uncertainty in practical contexts.

    Modeling crime scenarios in a Bayesian Network

    Legal cases involve reasoning with evidence; with the development of a software support tool in mind, a formal foundation for evidential reasoning is required. Three approaches to evidential reasoning have been prominent in the literature: argumentation, narrative and probabilistic reasoning. In this paper a combination of the latter two is proposed. In recent research on Bayesian networks applied to legal cases, a number of legal idioms have been developed as recurring structures in legal Bayesian networks. A Bayesian network quantifies how various variables in a case interact. In the narrative approach, scenarios provide a context for the evidence in a case. A method that integrates the quantitative, numerical techniques of Bayesian networks with the qualitative, holistic approach of scenarios is lacking. In this paper, a method is proposed for modeling several scenarios in a single Bayesian network. The method is tested in a case study. Two new idioms are introduced: the scenario idiom and the merged scenarios idiom. The resulting network is meant to assist a judge or jury, helping to maintain a good overview of the interactions between relevant variables in a case and preventing tunnel vision by comparing various scenarios.
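
    As a rough illustration of what comparing scenarios in a single probabilistic model involves, the sketch below scores two hypothetical, mutually exclusive scenarios against the same evidence; it is a generic Bayesian comparison with invented numbers, not a reconstruction of the scenario idiom or the merged scenarios idiom proposed in the paper.

```python
# Hypothetical comparison of two mutually exclusive crime scenarios given the same
# body of evidence. Generic Bayesian scoring for illustration only; it does not
# reproduce the scenario idiom or merged scenarios idiom from the paper.

# Illustrative priors and per-evidence likelihoods (all values are made up).
SCENARIOS = {
    "suspect_committed_crime": {"prior": 0.5, "likelihoods": [0.8, 0.7, 0.9]},
    "alternative_scenario":    {"prior": 0.5, "likelihoods": [0.3, 0.6, 0.1]},
}

def scenario_posteriors(scenarios):
    """P(scenario | evidence) is proportional to prior * product of P(evidence_i | scenario),
    assuming the evidence items are conditionally independent given the scenario."""
    unnormalised = {}
    for name, spec in scenarios.items():
        weight = spec["prior"]
        for likelihood in spec["likelihoods"]:
            weight *= likelihood
        unnormalised[name] = weight
    total = sum(unnormalised.values())
    return {name: weight / total for name, weight in unnormalised.items()}

if __name__ == "__main__":
    # Keeping both scenarios in one model and comparing their posteriors is one
    # way to guard against tunnel vision on a single account of the events.
    for name, posterior in scenario_posteriors(SCENARIOS).items():
        print(f"P({name} | evidence) = {posterior:.3f}")
```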

    An incremental explanation of inference in Bayesian networks for increasing model trustworthiness and supporting clinical decision making

    Various AI models are increasingly being considered as part of clinical decision-support tools. However, the trustworthiness of such models is rarely examined. Clinicians are more likely to use a model if they can understand and trust its predictions. Key to this is whether its underlying reasoning can be explained. A Bayesian network (BN) model has the advantage that it is not a black box and its reasoning can be explained. In this paper, we propose an incremental explanation of inference that can be applied to ‘hybrid’ BNs, i.e. those that contain both discrete and continuous nodes. The key questions that we answer are: (1) which important evidence supports or contradicts the prediction, and (2) through which intermediate variables does the information flow. The explanation is illustrated using a real clinical case study. A small evaluation study is also conducted.
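
    One generic way to answer the first question, namely which evidence supports or contradicts a prediction, is a leave-one-out impact score; the sketch below implements that idea with a toy stand-in for the model's posterior and should not be read as the specific explanation measure proposed in the paper.

```python
# Hypothetical leave-one-out measure of how much each evidence item supports or
# contradicts a prediction. Illustrates the general idea of ranking evidence by
# impact; it is not the explanation measure proposed in the paper.

from typing import Callable, Dict

def evidence_impacts(posterior: Callable[[Dict[str, bool]], float],
                     evidence: Dict[str, bool]) -> Dict[str, float]:
    """For each evidence item, report the change in the target posterior when that
    single item is removed: positive values mean the item supports the prediction,
    negative values mean it contradicts it."""
    full = posterior(evidence)
    impacts = {}
    for name in evidence:
        reduced = {k: v for k, v in evidence.items() if k != name}
        impacts[name] = full - posterior(reduced)
    return impacts

if __name__ == "__main__":
    # Stand-in for a BN inference call: a toy posterior that simply rewards or
    # penalises particular findings (a real model would run exact or approximate inference).
    def toy_posterior(ev: Dict[str, bool]) -> float:
        score = 0.2
        score += 0.4 if ev.get("biopsy_positive") else 0.0
        score -= 0.1 if ev.get("normal_blood_test") else 0.0
        return min(max(score, 0.0), 1.0)

    ranked = evidence_impacts(toy_posterior, {"biopsy_positive": True,
                                              "normal_blood_test": True})
    for name, impact in sorted(ranked.items(), key=lambda kv: -abs(kv[1])):
        print(f"{name}: {impact:+.2f}")
```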

    Narration in judiciary fact-finding: a probabilistic explication

    Legal probabilism is the view that juridical fact-finding should be modeled using Bayesian methods. One of the alternatives to it is the narration view, according to which we should instead conceptualize the process in terms of competing narrations of what (allegedly) happened. The goal of this paper is to develop a reconciliatory account, on which the narration view is construed from the Bayesian perspective within the framework of formal Bayesian epistemology.