11 research outputs found

    Letter to the editor: Commentary on “Strategic choice in linear sequential unmasking” by Roger Koppl, Science & Justice, https://doi.org/10.1016/j.scijus.2018.10.010

    This letter to the Editor comments on the paper ‘Strategic choice in linear sequential unmasking’ by Roger Koppl (Science & Justice, https://doi.org/10.1016/j.scijus.2018.10.010).

    Modeling the forensic two-trace problem with Bayesian networks

    The forensic two-trace problem is a perplexing inference problem introduced by Evett (J Forensic Sci Soc 27:375-381, 1987). Different possible ways of wording the competing pair of propositions (i.e., one proposition advanced by the prosecution and one proposition advanced by the defence) led to different quantifications of the value of the evidence (Meester and Sjerps in Biometrics 59:727-732, 2003). Here, we re-examine this scenario with the aim of clarifying the interrelationships that exist between the different solutions and, in this way, producing a global vision of the problem. We propose to investigate the different expressions for evaluating the value of the evidence by using a graphical approach, i.e. Bayesian networks, to model the rationale behind each of the proposed solutions and the assumptions made about the unknown parameters in this problem.
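
    As a minimal Python sketch (not the Bayesian-network model proposed in the paper), one classical formulation of the two-trace problem can be evaluated by enumerating the donor assignments under each proposition; the profile frequencies below are hypothetical, and the calculation recovers the well-known likelihood ratio 1/(2γ1).

        # Evett's setting: two stains of types A1 and A2 left by two offenders;
        # the suspect's profile matches stain 1 (type A1).
        gamma1, gamma2 = 0.01, 0.05  # assumed population frequencies of A1, A2

        # Hp: the suspect is one of the two offenders, equally likely to have
        # left either stain; his profile is A1, so only stain 1 is possible.
        p_e_given_hp = 0.5 * 1.0 * gamma2 + 0.5 * 0.0
        # Hd: both stains were left by unknown, independent donors.
        p_e_given_hd = gamma1 * gamma2

        print(p_e_given_hp / p_e_given_hd)  # 50.0, i.e. 1 / (2 * gamma1)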

    Forensic interpretation framework for body and gait analysis: feature extraction, frequency and distinctiveness

    Surveillance is ubiquitous in modern society, allowing continuous monitoring of areas that results in capturing criminal (or suspicious) activity as footage. This type of trace is usually examined, assessed and evaluated by a forensic examiner to ultimately help the court make inferences about who was on the footage. The purpose of this study was to develop an analytical model that ensures applicability of morphometric (both anthropometric and morphological) techniques for photo-comparative analyses of body and gait of individuals in CCTV images, and then to assign a likelihood ratio. This is the first paper of a series: it covers feature extraction and single-observer repeatability procedures, in turn producing the frequency and distinctiveness of the feature set within the given population. To achieve this, an Australian population database of 383 subjects (stance) and 268 subjects (gait), of both sexes, all ages above 18 and a range of ancestries, was generated. Features were extracted, defined, and their rarity assessed within the developed database. Repeatability studies were completed in which stance and gait (static and dynamic) features contained low levels of repeatability error (0.2–1.5 TEM%). For morphological examination, finger flexion and feet placement were observed to have high observer performance.
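
    As a rough illustration of the repeatability figures quoted above, the relative technical error of measurement (TEM%) for paired repeat measurements by a single observer can be sketched in Python; the measurement values below are made up.

        import math

        def relative_tem(first, second):
            # TEM = sqrt(sum of squared differences / (2N)); TEM% scales it
            # by the grand mean of all measurements.
            n = len(first)
            tem = math.sqrt(sum((a - b) ** 2 for a, b in zip(first, second)) / (2 * n))
            grand_mean = (sum(first) + sum(second)) / (2 * n)
            return 100 * tem / grand_mean

        # hypothetical repeated stature measurements from CCTV stills (mm)
        first_session = [1720.0, 1651.0, 1688.0, 1702.0]
        second_session = [1722.0, 1650.0, 1691.0, 1700.0]
        print(f"TEM% = {relative_tem(first_session, second_session):.2f}")  # 0.09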

    A response to “Likelihood ratio as weight of evidence: a closer look” by Lund and Iyer

    Recently, Lund and Iyer (L&I) raised an argument regarding the use of likelihood ratios in court. In our view, their argument is based on a lack of understanding of the paradigm. L&I argue that the decision maker should not accept the expert’s likelihood ratio without further consideration. This is agreed by all parties: in normal practice, there is often considerable and proper exploration in court of the basis for any probabilistic statement. We conclude that L&I argue against a practice that does not exist and that no one advocates. Further, we conclude that the most informative summary of evidential weight is the likelihood ratio. We state that this is the summary that should be presented to a court in every scientific assessment of evidential weight, with supporting information about how it was constructed and on what it was based.
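
    For completeness, the relationship underlying this paradigm is the odds form of Bayes’ theorem: the expert assigns the likelihood ratio, while the prior odds are the province of the court.

        \[
        \underbrace{\frac{\Pr(H_p \mid E)}{\Pr(H_d \mid E)}}_{\text{posterior odds}}
        \;=\;
        \underbrace{\frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}}_{\text{likelihood ratio}}
        \times
        \underbrace{\frac{\Pr(H_p)}{\Pr(H_d)}}_{\text{prior odds}}
        \]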

    A comment on the PCAST report: skip the “match”/“non-match” stage

    This letter comments on the report “Forensic science in criminal courts: Ensuring scientific validity of feature-comparison methods” recently released by the President's Council of Advisors on Science and Technology (PCAST). The report advocates a two-stage procedure for the evaluation of forensic evidence, in which the first stage is a “match”/“non-match” decision and the second stage is an empirical assessment of sensitivity (correct acceptance) and false-alarm (false acceptance) rates. Almost always, quantitative data from feature-comparison methods are continuously valued and have within-source variability. We explain why a two-stage procedure is not appropriate for this type of data, and recommend the use of statistical procedures that are appropriate for such data.
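
    A minimal sketch, assuming Gaussian score distributions, of why the binary call is less informative than the continuously-valued score it is derived from: two scores on the same side of the threshold can carry very different evidential weight.

        from statistics import NormalDist

        same = NormalDist(mu=8.0, sigma=2.0)  # assumed same-source scores
        diff = NormalDist(mu=2.0, sigma=2.0)  # assumed different-source scores
        threshold = 5.0                       # hypothetical "match" threshold

        for score in (5.1, 9.5):
            is_match = score > threshold            # stage one: binary call
            lr = same.pdf(score) / diff.pdf(score)  # score-based likelihood ratio
            print(f"score={score}: match={is_match}, LR={lr:.1f}")

        # Both scores are a "match", yet their LRs (about 1.2 vs. 854) differ by
        # nearly three orders of magnitude; the binary call discards that information.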

    Expected net gain data of low-template DNA analyses

    Low-template DNA analyses are affected by stochastic effects which can produce a configuration of peaks in the electropherogram (EPG) that is different from the genotype of the DNA’s donor. A probabilistic and decision-theoretic model can quantify the expected net gain (ENG) of performing a DNA analysis as the difference between the expected value of information (EVOI) and the cost of performing the analysis. This article presents data on the ENG of performing DNA analyses of low-template DNA for a single amplification, for two replicate amplifications, and for a second replicate amplification given the result of a first analysis. The data were obtained using the AmpFlSTR Identifiler Plus and Promega PowerPlex 16 HS amplification kits, an ABI 3130xl Genetic Analyzer, and Applied Biosystems’ GeneMapper ID-X software. These data are supplementary to an original research article investigating, from a decision-theoretic point of view, whether a forensic DNA analyst should perform a single DNA analysis or two replicate analyses, entitled “Low-template DNA: a single DNA analysis or two replicates?” (Gittelson et al., 2016) [1].
    Keywords: Forensic science, LT-DNA, Replicate
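
    A minimal sketch of the decision rule described above, with made-up utilities: the expected net gain is the expected value of information minus the cost of the analysis, and the option with the larger positive ENG is preferred.

        def expected_net_gain(evoi, cost):
            # ENG = EVOI minus the cost of performing the analysis
            return evoi - cost

        # hypothetical values on a common utility scale
        eng_single = expected_net_gain(evoi=0.9, cost=0.4)      # one amplification
        eng_replicates = expected_net_gain(evoi=1.2, cost=0.8)  # two amplifications
        print(f"{eng_single:.2f} vs {eng_replicates:.2f}")      # 0.50 vs 0.40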

    Breaking the barriers between intelligence, investigation and evaluation: a continuous approach to define the contribution and scope of forensic science

    Forensic science has been evolving towards a separation into more and more specialised tasks, with forensic practitioners increasingly identifying themselves with only one sub-discipline or task of forensic science. Such divisions are viewed as a threat to the advancement of science because they tend to polarise researchers and tear apart scientific communities. The objective of this article is to highlight that a piece of information is not either intelligence or evidence, and that a forensic scientist is not either an investigator or an evaluator: these notions must all be applied in conjunction to successfully understand a criminal problem or solve a case. To capture the scope, strength and contribution of forensic science, this paper proposes a progressive but non-linear, continuous model that could serve as a guide for forensic reasoning and processes. In this approach, hypothetico-deductive reasoning, iterative thinking and the notion of entropy are used to frame the continuum and to situate forensic scientists’ operating contexts and decision points. Situations and examples drawn from experience and practice are used to illustrate the approach. The authors argue that forensic science, as a discipline, should not be defined according to the context it serves (i.e. an investigation, a court decision or an intelligence process), but as a general, scientific and holistic trace-focused practice that contributes to a broad range of goals in various contexts. Since forensic science does not work in isolation, the approach also provides a useful basis for how forensic scientists should contribute to collective and collaborative problem-solving to improve justice and security.
