    Evaluation of low-template DNA profiles using peak heights

    In recent years, statistical models for the analysis of complex (low-template and/or mixed) DNA profiles have moved from using only presence/absence information about allelic peaks in an electropherogram to quantitative use of peak heights. This is challenging because peak heights are highly variable and affected by a number of factors. We present a new peak-height model with important novel features, including over- and double-stutter and a new approach to drop-in. Our model is implemented in the open-source R package likeLTD. We apply it to 108 laboratory-generated crime-scene profiles and demonstrate techniques of model validation that are novel in the field. We use the results to explore the benefits of modeling peak heights, finding that it is not always advantageous, and to assess the merits of pre-extraction replication. We also introduce an approximation that can reduce computational complexity when there are multiple low-level contributors who are not of interest to the investigation, and we present a simple approximate adjustment for linkage between loci, making it possible to accommodate linkage when evaluating complex DNA profiles.
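    As a rough illustration of why peak heights carry information beyond presence/absence, here is a minimal single-contributor sketch (not the likeLTD model itself; the per-copy mean `mu`, shape `k`, and example peaks are hypothetical values, and stutter and drop-in are omitted): peak heights are treated as Gamma-distributed around a per-copy mean, and candidate genotypes are compared by log-likelihood.

```python
import math

def gamma_logpdf(x, shape, scale):
    """Log-density of a Gamma(shape, scale) distribution."""
    return ((shape - 1) * math.log(x) - x / scale
            - math.lgamma(shape) - shape * math.log(scale))

def locus_loglik(peaks, genotype, mu=800.0, k=10.0):
    """Log-likelihood of observed peak heights (RFU) at one locus,
    assuming each allele copy contributes a Gamma-distributed height
    with per-copy mean mu and shape k (hypothetical values; no
    stutter or drop-in, unlike the full model described above)."""
    ll = 0.0
    for allele, height in peaks.items():
        copies = genotype.count(allele)
        if copies == 0:
            return float("-inf")  # unexplained peak: impossible without drop-in
        ll += gamma_logpdf(height, k, copies * mu / k)
    return ll

peaks = {"16": 820.0, "18": 790.0}   # two balanced peaks at one locus
het = locus_loglik(peaks, ("16", "18"))  # heterozygote explains both peaks
hom = locus_loglik(peaks, ("16", "16"))  # homozygote leaves one unexplained
```

    A presence/absence model would only record that both alleles are seen; the peak-height likelihood additionally rewards genotypes whose expected heights match the observed balance.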

    Assessing the forensic value of DNA evidence from Y chromosomes and mitogenomes

    Y-chromosomal and mitochondrial DNA profiles have been used as evidence in courts for decades, yet the problem of evaluating the weight of evidence has not been adequately resolved. Both are lineage markers (inherited from just one parent), which presents different interpretation challenges compared with standard autosomal DNA profiles (inherited from both parents), for which recombination increases profile diversity and weakens the effects of relatedness. We review approaches to the evaluation of lineage marker profiles for forensic identification, focussing on the key roles of profile mutation rate and relatedness. Higher mutation rates imply fewer individuals matching the profile of an alleged contributor, but those who do match will be more closely related. This makes it challenging to evaluate the possibility that one of these matching individuals could be the true source, because relatedness may make them more plausible alternative contributors than less-related individuals, and they may not be well mixed in the population. These issues reduce the usefulness of profile databases drawn from a broad population: the larger the population, the lower the profile relative frequency because of lower relatedness with the alleged contributor. Many evaluation methods do not adequately take account of relatedness, but its effects have become more pronounced with the latest generation of high-mutation-rate Y profiles.

    Diffusional Relaxation in Random Sequential Deposition

    The effect of diffusional relaxation on the random sequential deposition process is studied in the limit of fast deposition. Expressions for the coverage as a function of time are derived analytically for both the short-time and long-time regimes. These results are tested against and compared with numerical simulations.
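    The process can be illustrated with a toy Monte Carlo sketch (lattice size, step counts, and hop probability are all hypothetical choices, not the paper's setup): dimers deposit randomly on a 1D lattice, and deposited dimers also hop, which lets isolated vacancies meet and be filled, pushing the coverage past the irreversible jamming limit of roughly 0.865.

```python
import random

def dimer_rsd(L=200, steps=200000, hop_prob=0.5, seed=2):
    """Toy 1D random sequential deposition of dimers with diffusional
    relaxation. Each step either attempts a random deposition or lets
    a random deposited dimer hop one site; parameters illustrative."""
    random.seed(seed)
    occ = [False] * L
    dimers = []                       # left-site index of each dimer
    for _ in range(steps):
        if dimers and random.random() < hop_prob:
            j = random.randrange(len(dimers))
            i = dimers[j]
            d = random.choice((-1, 1))
            new = i - 1 if d < 0 else i + 2   # site the dimer moves onto
            if 0 <= new < L and not occ[new]:
                occ[i + 1 if d < 0 else i] = False  # vacate trailing site
                occ[new] = True
                dimers[j] = i + d
        else:
            i = random.randrange(L - 1)
            if not occ[i] and not occ[i + 1]:
                occ[i] = occ[i + 1] = True    # deposit dimer on (i, i+1)
                dimers.append(i)
    return sum(occ) / L

coverage = dimer_rsd()
```

    Without the hop moves this simulation would stall near the dimer jamming coverage; with relaxation the vacancies diffuse, pair up, and get filled.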

    Integrating dynamic mixed-effect modelling and penalized regression to explore genetic association with pharmacokinetics

    In previous work, we showed that penalized regression approaches allow many genetic variants to be incorporated into sophisticated pharmacokinetic (PK) models in a way that is both computationally and statistically efficient. The phenotypes were the individual model parameter estimates, obtained a posteriori from the model fit and known to be sensitive to the study design.

    Primary Production and Carbon Allocation in Creosotebush


    Model of Cluster Growth and Phase Separation: Exact Results in One Dimension

    We present exact results for a lattice model of cluster growth in 1D. The growth mechanism involves interface hopping and pairwise annihilation, supplemented by spontaneous creation of the stable-phase (+1) regions by overturning the unstable-phase (-1) spins with probability p. For cluster coarsening at phase coexistence, p=0, the conventional structure-factor scaling applies. In this limit our model falls in the class of diffusion-limited reactions A+A->inert. The +1 cluster size grows diffusively, as t^(1/2), and the two-point correlation function obeys scaling. However, for p>0, i.e., for the dynamics of formation of the stable phase from the unstable phase, we find that structure-factor scaling breaks down: the length scale associated with the size of the growing +1 clusters reflects only the short-distance properties of the two-point correlations.
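    A minimal Monte Carlo sketch of this kind of dynamics (the move set and parameters are illustrative, not the exact model of the paper): interfaces perform random-walk moves and annihilate pairwise, while unstable -1 spins convert to the stable +1 phase with probability p, so for p > 0 the system is driven toward the all-+1 state.

```python
import random

def cluster_growth(L=200, steps=100000, p=0.1, seed=3):
    """Toy 1D dynamics: interfaces hop and annihilate pairwise, and
    -1 (unstable-phase) spins flip to +1 (stable phase) with
    probability p. Returns the final magnetization per site."""
    random.seed(seed)
    s = [random.choice((-1, 1)) for _ in range(L)]
    for _ in range(steps):
        i = random.randrange(L)
        left, right = s[(i - 1) % L], s[(i + 1) % L]
        if left == right:
            s[i] = left               # walls annihilate (or nothing changes)
        else:
            s[i] = random.choice((left, right))   # interface hops one site
        if s[i] == -1 and random.random() < p:
            s[i] = 1                  # spontaneous creation of stable phase
    return sum(s) / L

m = cluster_growth()
```

    At p = 0 these moves reduce to pure diffusive coarsening of domain walls; the p-flip term breaks the symmetry and fills the lattice with the stable phase.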

    Optimizing genomic medicine in epilepsy through a gene-customized approach to missense variant interpretation

    Gene panel and exome sequencing have revealed a high rate of molecular diagnoses among diseases whose genetic architecture has proven suitable for sequencing approaches, with a large number of distinct and highly penetrant causal variants identified among a growing list of disease genes. The challenge is, given the DNA sequence of a new patient, to distinguish disease-causing from benign variants. Large samples of human standing variation data highlight regional variation in the tolerance to missense variation within the protein-coding sequence of genes. This information is not well captured by existing bioinformatic tools, but is effective in improving variant interpretation. To address this limitation in existing tools, we introduce the missense tolerance ratio (MTR), which summarizes available human standing variation data within genes to encapsulate population-level genetic variation. We find that patient-ascertained pathogenic variants preferentially cluster in low-MTR regions (P < 0.005) of well-informed genes. By evaluating 20 publicly available predictive tools across genes linked to epilepsy, we also highlight the importance of understanding the empirical null distribution of existing prediction tools, as these vary across genes. Subsequently integrating the MTR with the empirically selected bioinformatic tools in a gene-specific approach demonstrates a clear improvement in the ability to predict pathogenic missense variants from background missense variation in disease genes. Among an independent test sample of case and control missense variants, case variants (0.83 median score) consistently achieve higher pathogenicity prediction probabilities than control variants (0.02 median score; Mann-Whitney U test, P < 1 × 10^(-16)). We focus on the application to epilepsy genes; however, the framework is applicable to disease genes beyond epilepsy.
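    The idea behind a tolerance ratio can be sketched as a sliding-window statistic (a simplified reading of the construction; the window size, per-codon counts, and data layout below are hypothetical, not the published MTR pipeline): the observed fraction of missense variants among all observed variants in a window, divided by the expected fraction under neutrality, so that scores below 1 flag missense-depleted regions.

```python
def mtr(obs_mis, obs_syn, exp_mis, exp_syn, window=31):
    """Sliding-window missense tolerance sketch. Inputs are per-codon
    counts of observed/expected missense and synonymous variants
    (illustrative layout). Returns one score per codon: the observed
    missense fraction divided by the expected missense fraction."""
    n = len(obs_mis)
    half = window // 2
    scores = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        obs_m, obs_s = sum(obs_mis[lo:hi]), sum(obs_syn[lo:hi])
        exp_m, exp_s = sum(exp_mis[lo:hi]), sum(exp_syn[lo:hi])
        obs_frac = obs_m / (obs_m + obs_s) if obs_m + obs_s else 0.0
        exp_frac = exp_m / (exp_m + exp_s)
        scores.append(obs_frac / exp_frac)
    return scores

# Hypothetical 60-codon gene: missense variants absent in codons 10-20.
n = 60
exp_mis, exp_syn = [3] * n, [1] * n
obs_mis = [0 if 10 <= i <= 20 else 3 for i in range(n)]
obs_syn = [1] * n
scores = mtr(obs_mis, obs_syn, exp_mis, exp_syn)
```

    In this toy example the depleted region scores below 1 while the unconstrained tail of the gene scores exactly 1, mirroring how pathogenic variants would be expected to cluster in low-scoring windows.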

    Group testing with Random Pools: Phase Transitions and Optimal Strategy

    The problem of group testing is to identify defective items out of a set of objects by means of pool queries of the form "Does the pool contain at least one defective item?". The aim is of course to perform detection with the fewest possible queries, a problem with relevant practical applications in different fields, including molecular biology and computer science. Here we study group testing in the probabilistic setting, focusing on the regime of small defective probability and a large number of objects, p → 0 and N → ∞. We construct and analyze one-stage algorithms for which we establish the occurrence of a non-detection/detection phase transition, resulting in a sharp threshold, M̄, for the number of tests. By optimizing the pool design we construct algorithms whose detection threshold follows the optimal scaling M̄ ∝ Np|log p|. We then consider two-stage algorithms and analyze their performance for different choices of the first-stage pools. In particular, via a proper random choice of the pools, we construct algorithms that attain the optimal value (previously determined in Ref. [16]) for the mean number of tests required for complete detection. We finally discuss the optimal pool design in the case of finite p.
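    A one-stage scheme with random pools can be sketched as follows (the pool size, pool count, and decoding rule here are illustrative working choices, not the optimal design derived in the paper). Decoding uses the simple rule that any item appearing in at least one negative pool is cleared; everything else is declared defective, so defectives are never missed but some false positives may remain.

```python
import random

def one_stage_random_pools(N=500, p=0.02, M=120, pool_size=50, seed=4):
    """One-stage group testing with M random pools of fixed size.
    An item in any negative pool is cleared; the rest are declared
    defective. Returns (true defective set, declared set)."""
    random.seed(seed)
    defective = {i for i in range(N) if random.random() < p}
    declared = set(range(N))              # everything suspected at first
    for _ in range(M):
        pool = set(random.sample(range(N), pool_size))
        if not pool & defective:          # pool tests negative
            declared -= pool
    return defective, declared

defective, declared = one_stage_random_pools()
```

    With these illustrative numbers the declared set shrinks to the true defectives plus a handful of false positives; a second stage testing the declared items individually would resolve the remainder.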

    The Rise and Fall of BritainsDNA: A Tale of Misleading Claims, Media Manipulation and Threats to Academic Freedom

    Direct-to-consumer genetic ancestry testing is a new and growing industry that has gained widespread media coverage and public interest. Its scientific base is in the fields of population and evolutionary genetics and it has benefitted considerably from recent advances in rapid and cost-effective DNA typing technologies. There is a considerable body of scientific literature on the use of genetic data to make inferences about human population history, although publications on inferring the ancestry of specific individuals are rarer. Population geneticists have questioned the scientific validity of some population history inference approaches, particularly those of a more interpretative nature. These controversies have spilled over into commercial genetic ancestry testing, with some companies making sensational claims about their products. One such company—BritainsDNA—made a number of dubious claims both directly to its customers and in the media. Here we outline our scientific concerns, document the exchanges between us, BritainsDNA and the BBC, and discuss the issues raised about media promotion of commercial enterprises, academic freedom of expression, science and pseudoscience, and the genetic ancestry testing industry. We provide a detailed account of this case as a resource for historians and sociologists of science, and to shape public understanding, media reporting and scientific scrutiny of the commercial use of population and evolutionary genetics.

    A renormalization group study of a class of reaction-diffusion model, with particles input

    We study a class of reaction-diffusion models interpolating continuously between the pure coagulation-diffusion case (A+A → A) and the pure annihilation-diffusion one (A+A → ∅), with particle input (∅ → A) at a rate J. For dimension d ≤ 2 the dynamics strongly depends on fluctuations, while for d > 2 the behaviour is mean-field like. The models are mapped onto a field theory whose properties are studied in a renormalization group approach. Simple relations are found between the time-dependent correlation functions of the different models of the class. For the pure coagulation-diffusion model the time-dependent density is found to be of the form c(t,J,D) = (J/D)^(1/δ) F[(J/D)^Δ Dt], where D is the diffusion constant. The critical exponents δ and Δ are computed to all orders in ε = 2−d, where d is the dimension of the system, while the scaling function F is computed to second order in ε. For the one-dimensional case an exact analytical solution is provided, whose predictions are compared with the results of the renormalization group approach for ε = 1.
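    A crude lattice simulation illustrates the stationary density set by the balance between input and coagulation (the lattice size, sweep count, input rates, and update scheme are all illustrative, not tied to the field-theoretic calculation): particles hop on a ring, merge on contact (A+A → A), and are injected at rate J per site, and the steady-state density increases with J.

```python
import random

def coag_with_input(L=200, sweeps=2000, J=0.01, seed=5):
    """Toy 1D A+A -> A with input: each sweep, every site gains a
    particle with probability J (coagulating if already occupied),
    then each particle hops to a random neighbour, coagulating on
    contact. Returns the density averaged over the second half."""
    random.seed(seed)
    occ = [False] * L
    acc = samples = 0
    for t in range(sweeps):
        for i in range(L):
            if random.random() < J:
                occ[i] = True                 # input (A+A -> A if occupied)
        for i in random.sample(range(L), L):  # random update order
            if occ[i]:
                j = (i + random.choice((-1, 1))) % L
                occ[i] = False
                occ[j] = True                 # A+A -> A if j was occupied
        if t >= sweeps // 2:
            acc += sum(occ)
            samples += 1
    return acc / (samples * L)

low = coag_with_input(J=0.01, seed=5)
high = coag_with_input(J=0.08, seed=6)
```

    The monotone dependence of the stationary density on J is the qualitative content of the scaling form c(t,J,D) quoted in the abstract; extracting the exponent δ itself would need larger systems and careful averaging.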