    Letter to the Editor: Commentary on “Is it possible to predict the origin of epithelial cells? – A comparison of secondary transfer of skin epithelial cells versus vaginal mucous membrane cells by direct contact, M.M. Bouzga et al., Science & Justice, https://doi.org/10.1016/j.scijus.2020.02.003”

    This letter to the Editor comments on the paper “Is it possible to predict the origin of epithelial cells? – A comparison of secondary transfer of skin epithelial cells versus vaginal mucous membrane cells by direct contact” by M.M. Bouzga et al. (Science & Justice, https://doi.org/10.1016/j.scijus.2020.02.003)

    Subtleties in Bayesian decision-theoretic analysis for forensic findings: Notes on recent discussion of the role of validation study data in rational decision making

    This technical note extends a recent discussion in this journal of the role of validation study data in rational decision making. One argument that has been made in this context, using elements of Bayesian decision theory, is that further aggregation of validation study data into error rates involves a loss of information that compromises rational inference and decision making and should therefore be discouraged. This technical note seeks to explain that this argument can be developed at different levels of detail, depending on the definition of the propositions of interest, the forensic findings to be evaluated (and hence the form of the likelihood ratio), and the characterization of the relative desirability of decision consequences. The analyses proposed here reveal the cascade of abstractions and assumptions into which discussions about the use of validation study results in forensic science have fallen. This reinforces the conclusion that further aggregation of validation study data into error rates is problematic. It also suggests that even if a definition of error rate(s) could be agreed upon and defensibly quantified in a given application, we should rethink and possibly adjust our expectations about what exactly error rates can practically contribute to rational modes of reasoning and decision making in legal contexts
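
    To illustrate the kind of information loss at issue, the following sketch (not from the technical note; the score distributions, case score and threshold are all invented for illustration) contrasts a likelihood ratio computed from full validation score distributions with one computed from error rates aggregated at a fixed decision threshold:

```python
# Minimal sketch: two uses of hypothetical validation study data to evaluate a new finding.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical validation scores; higher scores favour the same-source proposition (H1).
scores_h1 = rng.normal(loc=2.0, scale=1.0, size=1000)   # same-source trials
scores_h2 = rng.normal(loc=0.0, scale=1.0, size=1000)   # different-source trials

new_score = 3.0   # score observed in the case at hand (invented)
threshold = 1.0   # decision threshold used to declare a "match" (invented)

# (i) Score-based likelihood ratio: densities of the observed score under each proposition,
# here approximated by normal distributions fitted to the validation scores.
lr_score = (norm.pdf(new_score, scores_h1.mean(), scores_h1.std())
            / norm.pdf(new_score, scores_h2.mean(), scores_h2.std()))

# (ii) Error-rate-based likelihood ratio: only the binary outcome "score above threshold"
# is retained, i.e. the true-positive rate over the false-positive rate.
tpr = (scores_h1 > threshold).mean()
fpr = (scores_h2 > threshold).mean()
lr_binary = tpr / fpr

print(f"LR from full score distributions: {lr_score:.1f}")
print(f"LR from aggregated error rates:   {lr_binary:.1f}")
```

    Every score above the threshold receives the same error-rate-based value, whereas the score-based likelihood ratio still distinguishes strong findings from marginal ones; this is one way of picturing the aggregation problem the note discusses at different levels of detail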

    Assessing AI output in legal decision-making with nearest neighbors

    Artificial intelligence (“AI”) systems are widely used to assist or automate decision-making. Although there are general metrics for the performance of AI systems, there is, as yet, no well-established gauge to assess the quality of particular AI recommendations or decisions. This presents a serious problem in the emerging use of AI in legal applications because the legal system aims for good performance not only in the aggregate but also in individual cases. This Article presents the concept of using nearest neighbors to assess individual AI output. This nearest neighbor analysis has the benefit of being easy to understand and apply for judges, lawyers, and juries. In addition, it is fundamentally compatible with existing AI methodologies. This Article explains how the concept could be applied for probing AI output in a number of use cases, including civil discovery, risk prediction, and forensic comparison, while also presenting its limitations
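
    As a rough sketch of the idea (not the Article’s implementation; the stand-in model, features and case outcomes below are invented for illustration), a nearest-neighbour check retrieves the most similar past cases and compares their known outcomes with the system’s recommendation for the new case:

```python
# Minimal sketch: probing an individual AI output with its nearest neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical historical cases: feature vectors with known outcomes (0/1).
X_hist = rng.normal(size=(200, 4))
y_hist = (X_hist[:, 0] + X_hist[:, 1] > 0).astype(int)

# Any AI system could stand in here; a logistic regression keeps the sketch self-contained.
model = LogisticRegression().fit(X_hist, y_hist)

# New case to be decided.
x_new = np.array([[0.4, 0.2, -1.0, 0.3]])
ai_recommendation = model.predict(x_new)[0]

# Nearest-neighbour check: how were the k most similar past cases actually resolved?
k = 5
nn = NearestNeighbors(n_neighbors=k).fit(X_hist)
distances, indices = nn.kneighbors(x_new)
neighbour_outcomes = y_hist[indices[0]]

print(f"AI recommendation for the new case: {ai_recommendation}")
print(f"Outcomes of the {k} most similar past cases: {neighbour_outcomes.tolist()}")
print(f"Distances to those cases: {np.round(distances[0], 2).tolist()}")
```

    Agreement between the recommendation and the neighbours’ outcomes is straightforward to explain to judges, lawyers, and juries; disagreement flags the individual output for closer scrutiny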

    Overview and Meaning of Identification/Individualization

    This article presents two common accounts of the question of inference of source in forensic science. The first, the classic view, leads to direct opinions about source propositions, either categorically or in terms of graded conclusions. The second account focuses on assessing the value of the findings with respect to competing source propositions. It is based on probability theory and represents a preliminary to the most recent, decision-theoretic conceptualization of individualization. This account allows one to critically expose and resolve shortcomings, limitations and scientific drawbacks of the classic view, and to clarify the distinct duties and roles of experts and fact finders

    Forensic science and the principle of excluded middle: “Inconclusive” decisions and the structure of error rate studies

    In a paper published recently in this journal, Dror and Scurich (2020) [20] critically discuss the notions of “inconclusive evidence” (i.e., test items for which it is difficult to render a categorical response) and “inconclusive decisions” (i.e., experts’ conclusions or responses) in the context of forensic science error rate studies. They expose several ways in which the understanding and use of “inconclusives” in current forensic science research and practice can adversely affect the outcomes of error rate studies. A main cause of distortion, according to Dror and Scurich, is what they call “erroneous inconclusive” decisions, in particular the lack of acknowledgment of this type of erroneous conclusion in the computation of error rates. To overcome this complication, Dror and Scurich call for a more explicit monitoring of “inconclusives” using a modified error rate study design. Whilst we agree with several well-argued points raised by the authors, we disagree with their framing of “inconclusive decisions” as potential errors. In this paper, we argue that referring to an “inconclusive decision” as an error is a contradiction in terms, runs counter to an analysis based on decision logic and, hence, is questionable as a concept. We also reiterate that the very term “inconclusive decision” disregards the procedural architecture of the criminal justice system across modern jurisdictions, especially the fact that forensic experts have no decisional rights in the criminal process. These positions do not ignore the possibility that “inconclusives” - if used excessively - do raise problems in forensic expert reporting, in particular limited assertiveness (or overcautiousness). However, these drawbacks derive from inherent limitations of experts rather than from the seemingly erroneous nature of “inconclusives” that needs to be fixed. More fundamentally, we argue that attempts to score “inconclusives” as errors amount to philosophical claims disguised as forensic methodology. Specifically, these attempts interfere with the metaphysical substrate underpinning empirical research. We point this out on the basis of the law of the excluded middle, i.e. the principle of “no third possibility being given” (tertium non datur)
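
    A small invented numerical example (not taken from either paper) makes the structural point concrete: the reported error rate for the same set of responses changes markedly depending on how “inconclusives” are treated:

```python
# Invented counts of examiner responses on ground-truth same-source test items.
correct_id = 90       # "identification" responses (correct)
erroneous_excl = 4    # "exclusion" responses (a clear error)
inconclusive = 26     # "inconclusive" responses

conclusive = correct_id + erroneous_excl
total = conclusive + inconclusive

# (a) Inconclusives excluded: error rate among conclusive responses only.
rate_excluded = erroneous_excl / conclusive

# (b) Inconclusives kept in the denominator but not scored as errors.
rate_in_denominator = erroneous_excl / total

# (c) Inconclusives scored as errors, the framing the authors argue is a contradiction in terms.
rate_as_errors = (erroneous_excl + inconclusive) / total

print(f"(a) inconclusives excluded:         {rate_excluded:.1%}")
print(f"(b) inconclusives in denominator:   {rate_in_denominator:.1%}")
print(f"(c) inconclusives scored as errors: {rate_as_errors:.1%}")
```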

    Bayesian Networks and Influence Diagrams

    Bayesian networks are graphical models that have been developed in the field of artificial intelligence as a framework to help researchers and practitioners apply probability theory to inference problems of substantive size as encountered in real-world applications. Influence diagrams (Bayesian decision networks) extend Bayesian networks to a modeling environment for coherent decision analysis under uncertainty. This article provides an overview of these methods and explains their contribution to the body of formal methods for the study, development and implementation of probabilistic procedures for assessing the probative value of scientific evidence and the coherent analysis of related questions of decision-making
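
    As a minimal illustration of what such models encode (a hypothetical two-node network with invented probabilities, extended with invented desirabilities to give the influence-diagram flavour, and evaluated here by direct enumeration rather than a dedicated library):

```python
# Hypothetical two-node Bayesian network: source proposition H -> findings E.
p_h = 0.01                 # prior probability of H ("the person of interest is the source")
p_e_given_h = 0.95         # probability of the findings if H is true
p_e_given_not_h = 0.001    # probability of the findings if H is false

# Posterior probability of H given E, by Bayes' theorem (the update the network encodes).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
likelihood_ratio = p_e_given_h / p_e_given_not_h

print(f"Likelihood ratio of the findings: {likelihood_ratio:.0f}")
print(f"Posterior probability of H:       {p_h_given_e:.3f}")

# Influence-diagram flavour: attach invented desirabilities to the consequences of two
# possible reporting decisions and pick the one with the higher expected desirability.
desirability = {("identify", True): 1.0, ("identify", False): -100.0,
                ("withhold", True): -1.0, ("withhold", False): 0.0}

for decision in ("identify", "withhold"):
    expected = (desirability[(decision, True)] * p_h_given_e
                + desirability[(decision, False)] * (1 - p_h_given_e))
    print(f"Expected desirability of deciding to {decision}: {expected:.2f}")
```

    Even a high posterior probability need not favour the decision with the heavier downside once expected desirabilities are compared, which is precisely the kind of trade-off influence diagrams are designed to make explicit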

    Inroads into the Ultimate Issue Rule? Structural Elements of Communication between Experts and Fact-Finders

    One of the most persistent questions in criminal evidence relates to the use of (unchallenged) expert evidence. What does it mean to accept or reject (unchallenged) expert evidence? To what extent can, and should, an expert enter jurisprudential territory? Is the traditional model of trial by jury viable in our complex world? In order to clarify these pressing questions, we will examine the evidential structure underpinning expert witness testimony. We will show that what we usually and, at the cost of oversimplification, call ‘evidence’, comprises three distinct questions: (i) What does the data show? (ii) What should we believe? (iii) What should we do? From this insight, a number of corollaries fall into place. First, although decisions have to be informed through reasoned inferential procedures, they cannot be reduced to scientific propositions. As a result, fact-finders do not need to cede their decision-making prerogative as some proponents of expert-driven decision-making suggest. Secondly, criminal liability is not a scientific conclusion. Rather, we argue, it is an individualistic normative construction that involves an inferential leap which is not warranted by any scientific (i.e. general) proposition. For the rectitude of the criminal verdict (or indeed any legal decision) does not map logically onto the possible treatment of scientific findings, that is, acceptance/rejection. Thirdly, our clarification of this evidential structure, which we call coherent decisionalism, provides a conceptual framework to understand and stabilise case law on expert witness testimony

    Letter to the editor: Commentary on “Strategic choice in linear sequential unmasking, Roger Koppl, Science & Justice, https://doi.org/10.1016/j.scijus.2018.10.010”

    This letter to the Editor comments on the paper ‘Strategic choice in linear sequential unmasking’ by Roger Koppl (Science & Justice, https://doi.org/10.1016/j.scijus.2018.10.010)