
    A response to “Likelihood ratio as weight of evidence: a closer look” by Lund and Iyer

    Recently, Lund and Iyer (L&I) raised an argument regarding the use of likelihood ratios in court. In our view, their argument is based on a lack of understanding of the paradigm. L&I argue that the decision maker should not accept the expert’s likelihood ratio without further consideration; all parties agree on this point. In normal practice, there is often considerable and proper exploration in court of the basis for any probabilistic statement. We conclude that L&I argue against a practice that does not exist and that no one advocates. Further, we conclude that the most informative summary of evidential weight is the likelihood ratio, and that this is the summary that should be presented to a court in every scientific assessment of evidential weight, together with supporting information about how it was constructed and on what it was based.
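
    For readers unfamiliar with the paradigm, the claim can be made concrete with the odds form of Bayes’ theorem (standard material, not spelled out in the abstract); the numbers in the example below are purely illustrative.

    \[
    \underbrace{\frac{\Pr(H_p \mid E)}{\Pr(H_d \mid E)}}_{\text{posterior odds}}
    \;=\;
    \underbrace{\frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}}_{\text{likelihood ratio}}
    \times
    \underbrace{\frac{\Pr(H_p)}{\Pr(H_d)}}_{\text{prior odds}}
    \]

    Here $H_p$ and $H_d$ are the prosecution and defence propositions and $E$ is the evidence. For example, a likelihood ratio of 1,000 applied to prior odds of 1:10,000 gives posterior odds of 1:10; the likelihood ratio summarises the scientist’s evidence, while the prior odds, and hence the conclusion, remain with the decision maker.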

    Calculating and understanding the value of any type of match evidence when there are potential testing errors

    It is well known that Bayes’ theorem (with likelihood ratios) can be used to calculate the impact of evidence, such as a ‘match’ of some feature of a person. Typically the feature of interest is the DNA profile, but the method applies in principle to any feature of a person or object, including not just DNA, fingerprints, or footprints, but also more basic features such as skin colour, height, hair colour or even name. Notwithstanding concerns about the extensiveness of databases of such features, a serious challenge to the use of Bayes in such legal contexts is that its standard formulaic representations are not readily understandable to non-statisticians. Attempts to get round this problem usually involve representations based around some variation of an event tree. While this approach works well in explaining the most trivial instance of Bayes’ theorem (involving a single hypothesis and a single piece of evidence), it does not scale up to realistic situations. In particular, even with a single piece of match evidence, if we wish to incorporate the possibility that there are potential errors (both false positives and false negatives) introduced at any stage in the investigative process, matters become very complex. As a result, we have observed expert witnesses (in different areas of speciality) routinely ignore the possibility of errors when presenting their evidence. To counter this, we produce what we believe is the first full probabilistic solution of the simple case of generic match evidence incorporating both classes of testing errors. Unfortunately, the resultant event tree solution is too complex for intuitive comprehension. And, crucially, the event tree also fails to represent the causal information that underpins the argument. In contrast, we also present a simple-to-construct graphical Bayesian Network (BN) solution that automatically performs the calculations and may also be intuitively simpler to understand. Although there have been multiple previous applications of BNs for analysing forensic evidence (including very detailed models for the DNA matching problem), these models have not widely penetrated the expert witness community. Nor have they addressed the basic generic match problem incorporating the two types of testing error. Hence we believe our basic BN solution provides an important mechanism for convincing experts, and eventually the legal community, that it is possible to rigorously analyse and communicate the full impact of match evidence on a case in the presence of possible error.
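
    As a rough illustration of the kind of calculation the paper’s Bayesian network automates, the sketch below applies Bayes’ theorem to a single reported match while allowing for both false-positive and false-negative testing errors; the function, its parameters and the example numbers are illustrative assumptions rather than the authors’ model.

    # Sketch: value of a reported 'match' when the test itself can err.
    # All probabilities below are illustrative assumptions.

    def posterior_source_given_reported_match(prior_source, match_prob_random,
                                              false_positive, false_negative):
        """P(suspect is the true source | the test reports a match).

        prior_source      : prior probability that the suspect is the true source
        match_prob_random : probability a random non-source person shares the feature
        false_positive    : P(test reports a match | the features do not truly match)
        false_negative    : P(test reports no match | the features truly match)
        """
        # If the suspect IS the source the features truly match, so a match is
        # reported unless a false negative occurs.
        p_report_if_source = 1.0 - false_negative
        # If the suspect is NOT the source, either they coincidentally share the
        # feature (and it is not missed) or they do not but a false positive occurs.
        p_report_if_not_source = (match_prob_random * (1.0 - false_negative)
                                  + (1.0 - match_prob_random) * false_positive)
        # Bayes' theorem in its standard form.
        numerator = p_report_if_source * prior_source
        denominator = numerator + p_report_if_not_source * (1.0 - prior_source)
        return numerator / denominator

    # Example: very rare feature, small but non-zero error rates.
    print(posterior_source_given_reported_match(
        prior_source=0.01, match_prob_random=1e-6,
        false_positive=0.001, false_negative=0.01))

    The likelihood ratio for the reported match is p_report_if_source / p_report_if_not_source; with the numbers above it is roughly 990 rather than the million suggested by the random-match probability alone, which is precisely why ignoring testing errors overstates the evidence.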

    Resolving the so-called "probabilistic paradoxes in legal reasoning" with Bayesian networks

    Examples of reasoning problems such as the twins problem and poison paradox have been proposed by legal scholars to demonstrate the limitations of probability theory in legal reasoning. Specifically, such problems are intended to show that use of probability theory results in legal paradoxes. As such, these problems have been a powerful deterrent to the use of probability theory – and particularly Bayes’ theorem – in the law. However, the examples only lead to ‘paradoxes’ under an artificially constrained view of probability theory and the use of the so-called likelihood ratio, in which multiple related hypotheses and pieces of evidence are squeezed into a single hypothesis variable and a single evidence variable. When the distinct relevant hypotheses and evidence are described properly in a causal model (a Bayesian network), the paradoxes vanish. In addition to the twins problem and poison paradox, we demonstrate this for the food tray example, the abuse paradox and the small town murder problem. Moreover, the resulting Bayesian networks provide a powerful framework for legal reasoning.
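
    The abstract does not reproduce the models themselves, so the following is only a minimal hand-rolled sketch of the underlying idea for the twins problem: keep the distinct source hypotheses separate rather than collapsing them into a single pair, and the DNA match remains highly informative. All priors and probabilities here are invented for illustration.

    # Sketch: enumerate distinct source hypotheses explicitly instead of squeezing
    # them into one hypothesis variable. All numbers are illustrative assumptions.

    priors = {                 # P(H): who left the crime-scene DNA?
        "defendant": 0.40,
        "identical_twin": 0.10,
        "unrelated_person": 0.50,
    }
    p_match_given = {          # P(reported match with the defendant's profile | H)
        "defendant": 0.99,         # true source, allowing a small false-negative rate
        "identical_twin": 0.99,    # shares the defendant's profile
        "unrelated_person": 1e-6,  # random-match probability
    }

    evidence = sum(priors[h] * p_match_given[h] for h in priors)
    posteriors = {h: priors[h] * p_match_given[h] / evidence for h in priors}
    for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
        print(f"P({h} | match) = {p:.6f}")

    The match leaves the defendant and the twin roughly in their prior ratio but all but eliminates the unrelated-person hypothesis, so the evidence is far from worthless; the apparent paradox only arises if the twin and the rest of the population are forced into a single alternative hypothesis.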

    Finding the way forward for forensic science in the US: a commentary on the PCAST report

    A recent report by the US President’s Council of Advisors on Science and Technology (PCAST) [1] has made a number of recommendations for the future development of forensic science. While we all agree that there is much need for change, we find that the PCAST report’s recommendations are founded on serious misunderstandings. We explain the traditional forensic paradigms of match and identification and the more recent foundation of the logical approach to evidence evaluation. This forms the groundwork for exposing many sources of confusion in the PCAST report. We explain how the notion of treating the scientist as a black box, with evidential weight assigned through error rates, is overly restrictive and misconceived. In our own view, inferential logic and the development of scientists’ calibrated knowledge and understanding are at the core of the profession’s advance.

    Managing Vulnerabilities of Tactical Wireless RF Network Systems: A Case Study

    Organisations and individuals benefit when wireless networks are protected. After assessing the risks associated with wireless technologies, organisations can reduce those risks by applying countermeasures that address specific threats and vulnerabilities. These countermeasures include management, operational and technical controls. While they will not prevent all penetrations and adverse events, they can be effective in reducing many of the common risks associated with wireless RF networks. Among engineers dealing with differently scaled and interconnected engineering systems, such as tactical wireless RF communication systems, there is a growing need for a means of analysing complex adaptive systems. We propose a methodology based on the systematic resolution of complex issues to manage the vulnerabilities of tactical wireless RF systems. There is a need to assemble and balance the results of any successful measure, showing how well each solution meets the system’s objectives. The uncertain arguments and other test results are combined and analysed within a mathematical framework. Systems engineering thinking supports design decisions and enables decision-makers to manage and assess the support for each solution. In these circumstances, complexity management arises from the many interacting and conflicting requirements of an increasing range of possible parameters. There may not be a single ‘right’ solution, only a satisfactory set of resolutions, which this approach helps to identify. Smart and innovative performance matrices are introduced, using a Bayesian network to manage, model, calculate and analyse all the potential vulnerability paths in wireless RF networks.
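
    The network itself is not included in the abstract; the following is only a back-of-the-envelope sketch of the kind of path calculation such a model supports, treating each vulnerability path as a chain of steps with assumed, independent defeat probabilities. The path names and numbers are invented for illustration.

    # Sketch: scoring vulnerability paths in a tactical RF network. Topology,
    # probabilities and the independence assumptions are illustrative only.

    paths = {                                   # P(attacker defeats each step)
        "rogue_access_point": [0.30, 0.20],
        "jamming_then_spoofing": [0.25, 0.10, 0.40],
        "credential_theft": [0.05, 0.50],
    }

    def path_success(step_probs):
        """P(attacker defeats every step on the path), steps assumed independent."""
        p = 1.0
        for q in step_probs:
            p *= q
        return p

    per_path = {name: path_success(steps) for name, steps in paths.items()}

    # P(at least one path succeeds), again assuming the paths are independent.
    p_none = 1.0
    for p in per_path.values():
        p_none *= (1.0 - p)

    for name, p in sorted(per_path.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {p:.3f}")
    print(f"P(compromise via some path): {1.0 - p_none:.3f}")

    A full Bayesian network would additionally express the dependencies between steps and the effect of each countermeasure, but even this crude ranking shows which paths dominate the overall exposure.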

    Selection effects in forensic science

    In this report we consider the following question: does a forensic expert need to know exactly how the evidential material was selected? We set up a few simple models of situations in which the way evidence is selected may influence its value in court. Although reality is far removed from any probabilistic model, and one should be very careful when applying theoretical results to real-life situations, we believe that the results in our models indicate how the selection of evidence affects its value. We conclude that selection effects in forensic science can be quite important and that, from a statistical point of view, improvements can be made to courtroom practice.
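
    The report’s models are not given in the abstract; as a generic illustration of one kind of selection effect, the simulation below compares the apparent strength of a similarity score for a suspect singled out in advance with that of the best-matching innocent candidate selected from a large database. The Gaussian score model, the database size and the number of trials are arbitrary assumptions.

    # Sketch: selecting the best-matching candidate from a database inflates the
    # apparent match strength relative to testing one pre-specified suspect.
    import random

    random.seed(0)
    N_DATABASE = 1000   # innocent candidates searched
    TRIALS = 2000

    def innocent_score():
        """Similarity score of a non-source individual (illustrative model)."""
        return random.gauss(0.0, 1.0)

    single = [innocent_score() for _ in range(TRIALS)]
    best_of_db = [max(innocent_score() for _ in range(N_DATABASE)) for _ in range(TRIALS)]

    print("mean score, pre-specified innocent suspect :", sum(single) / TRIALS)
    print("mean score, best innocent match in database:", sum(best_of_db) / TRIALS)

    An innocent individual who scores highest among a thousand candidates looks far more incriminating than an innocent suspect identified independently of the evidence, which is exactly why the expert needs to know how the material was selected before assigning it a value.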

    The problem of evaluating automated large-scale evidence aggregators

    In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how this evidence should be aggregated in the interests of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams’ recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the appropriate balance between explicitly coded algorithms and implicit reasoning involved, for instance, in the packaging of input evidence? In short: what is the optimal degree of ‘automation’? On the positive side, we propose the ability to perform an adequate robustness analysis as the focal criterion, primarily because it directs efforts to what is most important, namely, the structure of the algorithm and the appropriate extent of automation. Moreover, where there are resource constraints on the aggregation process, one must also consider what balance between volume of evidence and accuracy in the treatment of individual evidence best facilitates inference. There is no prerogative to aggregate the total evidence available if this would in fact reduce overall accuracy.