
    Estimation of Parameters in DNA Mixture Analysis

    In Cowell et al. (2007), a Bayesian network for the analysis of mixed traces of DNA was presented, using gamma distributions to model peak sizes in the electropherogram. It was demonstrated that the analysis is sensitive to the choice of a variance factor, which should therefore be adapted to each new trace analysed. In the present paper we discuss how the variance parameter can be estimated by maximum likelihood to achieve this. The unknown proportions of DNA from each contributor can similarly be estimated by maximum likelihood, jointly with the variance parameter. Furthermore, we discuss how to incorporate prior knowledge about the parameters in a Bayesian analysis. The proposed estimation methods are illustrated through a few examples of applications for calculating evidential value in casework and for mixture deconvolution.
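
    The estimation described above can be illustrated with a deliberately simplified sketch. The Python code below is not the authors' implementation: it assumes a hypothetical two-contributor gamma model in which each peak height is gamma-distributed with shape proportional to the contributors' allele counts and proportions, and the data, the fixed mean height per allele copy and the parametrisation are all illustrative assumptions.

        # Minimal sketch (hypothetical model and data, not the method of Cowell et al. 2007):
        # joint maximum-likelihood estimation of a variance factor and a two-person mixture
        # proportion from observed peak heights, assuming
        #   H_a ~ Gamma(shape = (phi1*n1a + (1 - phi1)*n2a) / sigma2, scale = mu * sigma2),
        # where n_ia is the number of copies of allele a carried by contributor i.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import gamma

        heights = np.array([820.0, 310.0, 540.0, 260.0])   # hypothetical peak heights (rfu)
        n1 = np.array([2, 0, 1, 0])                        # allele copies from contributor 1
        n2 = np.array([0, 1, 1, 2])                        # allele copies from contributor 2
        mu = 200.0                                         # assumed mean height per allele copy

        def neg_log_lik(params):
            log_sigma2, logit_phi = params
            sigma2 = np.exp(log_sigma2)                    # keeps sigma2 > 0
            phi1 = 1.0 / (1.0 + np.exp(-logit_phi))        # keeps the proportion in (0, 1)
            shape = (phi1 * n1 + (1.0 - phi1) * n2) / sigma2
            return -np.sum(gamma.logpdf(heights, a=shape, scale=mu * sigma2))

        res = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
        sigma2_hat = np.exp(res.x[0])
        phi1_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
        print(f"variance factor: {sigma2_hat:.3f}, proportion of contributor 1: {phi1_hat:.3f}")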

    The low-template-DNA (stochastic) threshold - Its determination relative to risk analysis for national DNA databases

    Although the low-template or stochastic threshold is in widespread use and is typically set to a peak height of 150-200 rfu, there has been no consideration of its determination and meaning. In this paper we propose a definition based upon the specific risk of wrongful designation of a heterozygous genotype as a homozygote, which could lead to a false exclusion. Conversely, it is possible that a homozygote {a,a} could be designated as {a,F}, where 'F' is a 'wild card', and this could lead to an increased risk of false inclusion. To determine these risk levels, we analysed an experimental dataset that exhibited extreme drop-out using logistic regression. The derived probabilities are employed in a graphical model to determine the relative risks of wrongful designations that may cause false inclusions and exclusions. The methods described in this paper provide a preliminary solution for risk evaluation for any DNA process that employs a stochastic threshold.
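
    The use of logistic regression mentioned above can be sketched as follows; the data, the choice of modelling drop-out on log peak height, and the 150 rfu threshold queried at the end are hypothetical, and this is not the authors' experimental dataset or analysis.

        # Minimal sketch (hypothetical data): logistic regression of allele drop-out on the
        # height of the surviving partner peak at heterozygous loci, giving the drop-out
        # probability implied by a candidate stochastic threshold.
        import numpy as np
        import statsmodels.api as sm

        # Observed peak height (rfu) and whether the partner allele dropped out (1) or not (0).
        heights = np.array([60, 90, 120, 150, 180, 220, 260, 300, 400, 500], dtype=float)
        dropout = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])

        X = sm.add_constant(np.log(heights))        # model drop-out on log peak height
        fit = sm.Logit(dropout, X).fit(disp=False)

        threshold = 150.0                           # candidate stochastic threshold (rfu)
        x_new = sm.add_constant(np.log([threshold]), has_constant="add")
        p_dropout = fit.predict(x_new)[0]
        print(f"P(drop-out | partner peak at {threshold:.0f} rfu) = {p_dropout:.3f}")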

    Practical determination of the low template DNA threshold

    The low-template stochastic DNA threshold is used to infer the genotype from a single STR allelic peak. For example, within the context of the UK National DNA Database, the stochastic threshold is used to decide whether a DNA profile consisting of a peak in the position of allele a is uploaded as aF or as an aa homozygote. The F designation acts as a ‘fail-safe’ wild card that is designed to capture the possibility of allele drop-out; to do this, it will match any allele. If a profile is wrongly designated as an aa homozygote, the database search will be unnecessarily restricted and may fail to match a perpetrator reference sample on the database. If the stochastic threshold is too high, this increases the number of adventitious matches, which in turn compromises the utility of the national DNA database. There are many different methods used to process DNA profiles. Often, the same stochastic threshold (typically 150 rfu) is used for each process, but this means that more sensitive methods will have a threshold that is too low (and vice versa), and the risks of a wrongful designation are correspondingly greater. Recently, it was suggested that logistic regression could be used to relate the stochastic threshold to a defined probability of drop-out in order to properly evaluate the risks associated with a given stochastic threshold. In this article we introduce a new methodology to calculate the stochastic threshold that a practitioner could easily implement. The threshold depends on the sensitivity of the method employed, and is adjusted to be equivalent across all methods used to analyse DNA profiles. This ensures that the risks associated with misdesignation are equivalent across all methods. In effect, a uniformity of methods, underpinned by an analysis of the risks associated with misdesignation, can be achieved.
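
    One practical reading of this idea is sketched below: given a drop-out model of the form logit P(drop-out) = b0 + b1 log(height) fitted separately for each DNA process, the stochastic threshold is set to the peak height at which the drop-out probability equals a chosen target risk, so that the risk is equalised across methods. The coefficients and the target risk in the sketch are hypothetical, and this is not the specific methodology introduced in the article.

        # Minimal sketch (hypothetical coefficients): invert a fitted drop-out logistic
        # regression to obtain, for each DNA process, the peak height at which the drop-out
        # probability equals the same target risk.
        import numpy as np

        def threshold_for_risk(b0, b1, target_p):
            """Height T with P(drop-out | T) = target_p under logit(p) = b0 + b1 * log(T)."""
            return np.exp((np.log(target_p / (1.0 - target_p)) - b0) / b1)

        # Hypothetical fits for a standard process and a more sensitive process.
        fits = {"standard process": (8.0, -2.0), "sensitive process": (6.5, -2.0)}
        target_p = 0.05   # illustrative target drop-out risk; the acceptable level is a policy choice

        for name, (b0, b1) in fits.items():
            print(f"{name}: stochastic threshold of about {threshold_for_risk(b0, b1, target_p):.0f} rfu")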

    Investigation of the Reproducibility of Third-Level Characteristics

    The process of comparing a fingermark recovered from a crime scene with the fingerprint taken from a known individual involves the characterization and comparison of different ridge details on both the mark and the print. Fingerprint examiners commonly classify these characteristics into three groups, depending on their level of discriminating power. It is commonly considered that the general pattern of the ridge flow constitutes first-level detail, that specific ridge flow and minutiae (e.g. ridge endings, bifurcations) constitute second-level detail, and that fine ridge details (e.g. pore positions and shapes) are described as third-level detail. In this study, the reproducibility of a selection of third-level characteristics is investigated. The reproducibility of these features is examined on several recordings of the same finger, first acquired using only optical visualization techniques and second on impressions developed using common fingermark development techniques. Prior to the evaluation of the reproducibility of the considered characteristics, digital images of the fingerprints were recorded at two different resolutions (1000 and 2000 ppi). This allowed the study to also examine the influence of higher resolution on the considered characteristics. It was observed that the increase in resolution did not result in better feature detection or comparison between images. The examination of the reproducibility of a selection of third-level characteristics showed that the most reproducible features observed were minutiae shapes and pore positions along the ridges.

    Expressing evaluative opinions: a position statement

    The judgment of the Court of Appeal in R v T [1] raises several issues relating to the evaluation of scientific evidence that, we believe, require a response. We, the undersigned, oppose any response to the judgment that would result in a movement away from the use of logical methods for evidence evaluation. A paper in this issue of the Journal [2] re-iterates logical principles of evidence interpretation that are accepted by a broad range of those who have an interest in forensic reasoning. The divergence between those principles of interpretation and the apparent implications of the R v T ruling is epitomised by the following issues, which represent our collective position with regard to the evaluation of evidence within the context of a criminal trial.
    1) The interpretation of scientific evidence invokes reasoning in the face of uncertainty. Probability theory provides the only coherent logical foundation for such reasoning.
    2) To form an evaluative opinion from a set of observations, it is necessary for the forensic scientist to consider those observations in the light of propositions that represent the positions of the different participants in the legal process. In a criminal trial, the propositions will represent the positions of the prosecution and the defence, respectively.
    3) It is necessary for the scientist to consider the probability of the observations given each of the stated propositions. Not only is it inappropriate for the scientist to consider the probability of the proposition given the observations, but there is also a danger that in doing so the jury will be misled.
    4) The ratio of the probability of the observations given the prosecution proposition to the probability of the observations given the defence proposition, which is known as the likelihood ratio, provides the most appropriate foundation for assisting the court in establishing the weight that should be assigned to those observations.
    5) A verbal scale based on the notion of the likelihood ratio is the most appropriate basis for communicating an evaluative expert opinion to the court. It can be phrased in terms of support for one of a pair of clearly stated propositions.
    6) Not only are phrases such as “could have come from” or “is consistent with” ineffective for communicating the scientist's opinion with regard to the weight that should be assigned to a set of observations, but there is also a danger that they may be misleading.
    7) Probabilities should be informed by data, knowledge and experience. All data collections are imperfect and incomplete, and it necessarily follows that different experts might legitimately assign different probabilities to the same set of observations.
    8) The logical approach to evaluating evidence implicit in the foregoing points has come to be known as the “Bayesian approach”. The ideas behind this approach are not novel; indeed, they were first applied to resolving a serious miscarriage of justice in the Dreyfus case in 1908.
    9) It is regrettable that the judgment confuses the Bayesian approach with the use of Bayes' Theorem. The Bayesian approach does not necessarily involve the use of Bayes' Theorem.
    10) While we are fully in agreement with the principle of disclosure, candour and full disclosure in court can undermine comprehensibility when scientific evaluations involve technicalities. Pre-trial hearings should be used to explore the basis of expert opinions and to resolve, if possible, any differences between experts.
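
    As a purely illustrative sketch of points 4) and 5), the following code computes a likelihood ratio from two assumed probabilities of the observations and maps it to an example verbal scale; the probabilities and the scale boundaries are hypothetical and are not taken from the position statement.

        # Illustrative sketch: likelihood ratio and an example verbal scale (hypothetical values).
        def likelihood_ratio(p_obs_given_hp, p_obs_given_hd):
            """LR = P(observations | prosecution proposition) / P(observations | defence proposition)."""
            return p_obs_given_hp / p_obs_given_hd

        def verbal_equivalent(lr):
            """Map an LR to an example verbal expression of support (illustrative boundaries)."""
            if lr < 1:
                return "the observations support the defence proposition over the prosecution proposition"
            for bound, phrase in [(10, "weak"), (100, "moderate"),
                                  (1000, "moderately strong"), (10000, "strong")]:
                if lr < bound:
                    return f"{phrase} support for the prosecution proposition over the defence proposition"
            return "very strong support for the prosecution proposition over the defence proposition"

        lr = likelihood_ratio(0.9, 0.001)   # hypothetical probabilities of the observations
        print(f"LR = {lr:.0f}: {verbal_equivalent(lr)}")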