
    Inference of Markovian Properties of Molecular Sequences from NGS Data and Applications to Comparative Genomics

    Next Generation Sequencing (NGS) technologies generate large amounts of short read data for many different organisms. Because NGS reads are generally short, assembling them to reconstruct the original genome sequence is challenging. For clustering genomes from such NGS data, word-count-based alignment-free sequence comparison is a promising approach, but it requires the underlying expected word counts. A plausible model for this underlying distribution of word counts is obtained by modelling the DNA sequence as a Markov chain (MC). For a single long sequence, efficient statistics are available to estimate the order of the MC and its transition probability matrix. Since NGS data do not provide a single long sequence, inference methods for Markovian properties that assume one cannot be applied directly to NGS short read data. Here we derive a normal approximation for such word counts. We also show that the traditional Chi-square statistic has an approximate gamma distribution, using the Lander-Waterman model for physical mapping. We propose several methods to estimate the order of the MC from NGS reads and evaluate them using simulations. We illustrate the applications of our results by clustering genomic sequences of several vertebrate and tree species from NGS reads using alignment-free sequence dissimilarity measures. We find that the estimated order of the MC has a considerable effect on the clustering results, and that clustering with an MC of the estimated order gives a plausible grouping of the species. Comment: accepted by RECOMB-SEQ 201
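    The order estimation above rests on word (k-mer) counts pooled across reads. As a minimal illustration of the idea only (not the estimators derived in the paper), a transition matrix of a given order can be estimated from (order+1)-mer counts, with each short read contributing just its own windows; the function names here are hypothetical:

```python
from collections import Counter, defaultdict

def transition_matrix(reads, order=1):
    """Estimate Markov chain transition probabilities of the given order
    from (order+1)-mer counts pooled across short reads."""
    counts = Counter()
    for r in reads:
        # each read contributes only its own (order+1)-letter windows;
        # no transitions are counted across read boundaries
        for i in range(len(r) - order):
            counts[r[i:i + order + 1]] += 1
    # normalise counts by the total for each length-`order` context
    totals = defaultdict(int)
    for word, c in counts.items():
        totals[word[:-1]] += c
    return {w: c / totals[w[:-1]] for w, c in counts.items()}

reads = ["ACGTACGT", "CGTACG"]
P = transition_matrix(reads, order=1)
# in these reads, C is always followed by G, so P["CG"] is 1.0
```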

    Likelihood-based methods for analysis of copy number variation using next generation sequencing data.

    A Copy Number Variation (CNV) detection problem is considered using Circular Binary Segmentation (CBS) procedures, including newly developed procedures based on likelihood ratio tests with the parametric bootstrap for models built on discrete count distributions (Poisson and negative binomial), as well as the widely used DNAcopy package. Results from the literature concerning maximum likelihood estimation for the negative binomial distribution are reviewed. The Newton-Raphson method is used to find the root of the derivative of the profile log-likelihood function when applicable, and it is proven that this method converges to the true Maximum Likelihood Estimate (MLE), provided the starting point for the Newton-Raphson iteration is selected appropriately and the MLE exists.
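    As a hedged sketch of the kind of computation described (not the dissertation's implementation), Newton-Raphson can be run on the derivative of the negative binomial profile log-likelihood in the size parameter r, with the mean profiled out at the sample mean; here the digamma function is approximated by a central difference of log-gamma, and the positivity guard stands in for the appropriate choice of starting point:

```python
import math

def nb_profile_score(r, xs):
    """Derivative in r of the NB profile log-likelihood, with the mean
    profiled out at the sample mean:
    sum_i psi(x_i + r) - n*psi(r) + n*log(r / (r + xbar))."""
    n, xbar = len(xs), sum(xs) / len(xs)
    h = 1e-6
    psi = lambda t: (math.lgamma(t + h) - math.lgamma(t - h)) / (2 * h)
    return sum(psi(x + r) for x in xs) - n * psi(r) + n * math.log(r / (r + xbar))

def nb_mle_size(xs, r0=1.0, tol=1e-8, max_iter=100):
    """Newton-Raphson root-finding on the profile score."""
    r = r0
    for _ in range(max_iter):
        s = nb_profile_score(r, xs)
        h = 1e-5
        ds = (nb_profile_score(r + h, xs) - nb_profile_score(r - h, xs)) / (2 * h)
        r_new = r - s / ds
        if r_new <= 0:        # keep the iterate inside the parameter space
            r_new = r / 2
        if abs(r_new - r) < tol:
            return r_new
        r = r_new
    return r

# an overdispersed sample (variance > mean), so the MLE exists
xs = [0, 1, 2, 5, 3, 1, 0, 2, 8, 1]
r_hat = nb_mle_size(xs)
```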

    On the Analysis of DNA Methylation

    Recent genome-wide studies lend support to the idea that patterns of DNA methylation are related, either causally or as a readout, to cell-type-specific protein binding. We lay the groundwork for a framework to test whether the pattern of DNA methylation levels in a cell, combined with protein binding models, is sufficient to completely describe the locations at which a given protein binds the genome in an assayed context. Only one method, whole-genome bisulfite sequencing (WGBS), is available to study DNA methylation genome-wide at such high resolution; however, its accuracy has not been determined on the scale of individual binding locations. We address this with a two-fold approach. First, we developed an alternative high-resolution, whole-genome assay, methylCRF, that combines an enrichment-based and a restriction-enzyme-based assay of methylation. While both assays are considered inferior to WGBS, using two distinct assays has the advantage that each in part cancels out the biases of the other. Additionally, this method is up to 15 times lower in cost than WGBS. By formulating the estimation of methylation from the two methods as a structured prediction problem using a conditional random field, this work also addresses the general problem of incorporating data of varying qualities, a common characteristic of biological data, for the purpose of prediction. We show that methylCRF is concordant with WGBS within the range of two WGBS methylomes. Due to the lower cost, we were able to analyze methylation at high resolution across more cell types than previously possible, and we estimate that 28% of CpGs, in regions comprising 11% of the genome, show variable methylation and are enriched in regulatory regions.
Second, we show that WGBS has inherent resolution limitations in a read-count-dependent manner and that the identification of unmethylated regions is highly affected by GC bias in the underlying protocol, suggesting that simple estimation procedures may not be sufficient for high-resolution analysis. To address this, we propose a novel approach to DNA methylation analysis that uses change-point detection instead of estimating methylation levels directly. However, we show that current change-point detection methods are not robust to methylation signal; we therefore explore how to extend current non-parametric methods to simultaneously find change-points as well as characteristic methylation levels. We believe this framework may have the power to examine the connection between changes in methylation and transcription factor binding in the context of cell-type-specific behaviors.
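    As a toy sketch of change-point detection on a methylation-level track (generic binary segmentation with a CUSUM-type statistic, not the non-parametric extension the thesis develops; the threshold and data are purely illustrative):

```python
def best_split(y):
    """Return (index, statistic) for the split maximizing the scaled
    difference in means between the two sides (a CUSUM-type statistic)."""
    n = len(y)
    best_i, best_t = None, 0.0
    total, left = sum(y), 0.0
    for i in range(1, n):
        left += y[i - 1]
        ml, mr = left / i, (total - left) / (n - i)
        t = abs(ml - mr) * (i * (n - i) / n) ** 0.5
        if t > best_t:
            best_i, best_t = i, t
    return best_i, best_t

def binary_segmentation(y, threshold, offset=0):
    """Recursively split the sequence while the statistic exceeds the threshold."""
    i, t = best_split(y)
    if i is None or t <= threshold:
        return []
    return (binary_segmentation(y[:i], threshold, offset)
            + [offset + i]
            + binary_segmentation(y[i:], threshold, offset + i))

# methylation fractions: an unmethylated region followed by a methylated one
y = [0.1, 0.05, 0.1, 0.08, 0.9, 0.85, 0.95, 0.9]
cps = binary_segmentation(y, threshold=0.5)
# cps recovers the single change-point at index 4
```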

    Quality value based models and methods for sequencing data

    First isolated by Friedrich Miescher in 1869 and then structurally identified by James Watson and Francis Crick in 1953, the double-stranded DeoxyriboNucleic Acid (DNA) molecule of Homo sapiens took fifty years to be completely reconstructed and finally placed at the disposal of researchers for deep study and analysis. The first technologies for DNA sequencing appeared around the mid-1970s; among them the most successful has been the chain-termination method, usually referred to as the Sanger method. It remained the de facto standard for sequencing until, at the beginning of the 2000s, Next Generation Sequencing (NGS) technologies started to be developed. These technologies are able to produce huge amounts of data at competitive costs in terms of dollars per base, and further advances are now emerging in the form of Single Molecule Real Time (SMRT) sequencers, like those of Pacific Biosciences, which promise to produce fragments of lengths never available before. However, none of the above technologies is able to read an entire DNA molecule; they can only produce short fragments (called reads) of the sample in a process referred to as sequencing. Although all these technologies have different characteristics, one recurrent trend in their evolution has been the steady growth of the fraction of errors injected into the final reads. While Sanger machines produce as little as 1 erroneous base in 1000, recent PacBio sequencers have an average error rate of 15%; NGS machines place themselves roughly in the middle, with an expected error rate around 1%. With such a heterogeneity of error profiles, and as more and more data is produced every day, algorithms able to cope with different sequencing technologies are becoming fundamental; at the same time, models of the sequencing process that include error profiling are gaining importance.
A key feature that can make these approaches really effective is the ability of sequencers to produce quality scores, which measure the probability of observing a sequencing error. In this thesis we present a stochastic model for the sequencing process and show its application to the problems of clustering and filtering of reads. The novel idea is to use quality scores to build a probabilistic framework that models the entire process of sequencing. Although relatively straightforward, developing such a model requires the proper definition of probability spaces and events on those spaces. To keep the model simple and tractable, several simplifying hypotheses need to be introduced; each of them, however, must be explicitly stated and extensively discussed. The final result is a model of the sequencing process that can be used to give a probabilistic interpretation of problems defined on sequencing data and to characterize the corresponding probabilistic answers (i.e., solutions). To experimentally validate the aforementioned model, we apply it to two different problems: read clustering and read filtering. The first set of experiments introduces a set of novel alignment-free measures D2 resulting from the extension of the well-known D2-type measures to incorporate quality values. More precisely, instead of adding a unit contribution to the k-mer count statistic (as for D2 statistics), each k-mer contributes an additive term corresponding to its probability of being correct as defined by our stochastic model. We show that these new measures are effective when applied to read clustering, by employing clusters produced with D2 as input to the problems of metagenomic binning and de-novo assembly. In the second set of experiments conducted to validate our stochastic model, we applied the same definition of a correct read to the problem of read filtering.
We first define rank filtering, a lossless filtering technique that sorts reads based on a given criterion; we then use the sorted list of reads as input to algorithms for read mapping and de-novo assembly. The idea is that, in the reordered set, reads ranking higher should have better quality than those at lower ranks. To test this conjecture, we use such filtering as a pre-processing step for read mapping and de-novo assembly; in both cases we observe improvements when our rank filtering approach is used.
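    The quality-weighted counting idea can be sketched as follows: a k-mer read at positions with Phred scores q_1..q_k is error-free with probability prod(1 - 10^(-q_i/10)), and that probability, rather than a unit, is added to its count; a D2-type statistic is then the inner product of the weighted count vectors. This is a simplified sketch with hypothetical function names, not the thesis code:

```python
def kmer_weights(seq, quals, k):
    """Quality-weighted k-mer counts: each occurrence contributes its
    probability of being error-free, prod(1 - 10**(-q/10)), instead of 1."""
    w = {}
    for i in range(len(seq) - k + 1):
        p = 1.0
        for q in quals[i:i + k]:
            p *= 1.0 - 10 ** (-q / 10.0)   # per-base Phred error model
        kmer = seq[i:i + k]
        w[kmer] = w.get(kmer, 0.0) + p
    return w

def d2_quality(seq1, quals1, seq2, quals2, k):
    """D2-type statistic: inner product of quality-weighted k-mer counts."""
    w1 = kmer_weights(seq1, quals1, k)
    w2 = kmer_weights(seq2, quals2, k)
    return sum(c * w2.get(kmer, 0.0) for kmer, c in w1.items())

# at Phred 40 each base is correct with probability 0.9999, so the two
# shared 2-mers (AC, CG) contribute just under 1 each
d2 = d2_quality("ACGT", [40] * 4, "ACGA", [40] * 4, 2)
```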

    Study designs and statistical methods for pharmacogenomics and drug interaction studies

    Indiana University-Purdue University Indianapolis (IUPUI)
    Adverse drug events (ADEs) are injuries resulting from drug-related medical interventions. ADEs can be induced either by a single drug or by a drug-drug interaction (DDI). In order to prevent unnecessary ADEs, many regulatory agencies in public health maintain pharmacovigilance databases for detecting novel drug-ADE associations. However, pharmacovigilance databases usually contain a significant portion of false associations due to their inherent structure (i.e., false drug-ADE associations caused by co-medications). Besides pharmacovigilance studies, the risks of ADEs can be minimized by understanding their mechanisms, which include abnormal pharmacokinetics/pharmacodynamics due to genetic factors and synergistic effects between drugs. During the past decade, pharmacogenomics studies have successfully identified several predictive markers to reduce ADE risks; however, such studies are usually limited by sample size and budget. In this dissertation, we develop statistical methods for pharmacovigilance and pharmacogenomics studies. First, we propose an empirical Bayes mixture model to identify significant drug-ADE associations. The proposed approach can be used for both signal generation and ranking, and the portion of false associations among the detected signals can be well controlled. Second, we propose a mixture dose-response model to investigate the functional relationship between the increased dimensionality of drug combinations and ADE risks. Moreover, this approach can be used to identify high-dimensional drug combinations associated with escalated ADE risks at a significantly low local false discovery rate. Finally, we propose a cost-efficient design for pharmacogenomics studies that, to pursue further cost-efficiency, combines DNA pooling with a two-stage design approach.
Compared to a traditional design, the cost under the proposed design is reduced dramatically with an acceptable compromise in statistical power. The proposed methods are examined by extensive simulation studies. Furthermore, the proposed methods for analyzing pharmacovigilance databases are applied to the FDA's Adverse Event Reporting System database and a local electronic medical record (EMR) database. For different scenarios of pharmacogenomics studies, optimized designs to detect a functioning rare allele are given as well.

    From condition-specific interactions towards the differential complexome of proteins

    While capturing the transcriptomic state of a cell is a comparatively simple task with modern sequencing techniques, mapping protein interactomes and complexomes in a sample-specific manner is currently not feasible on a large scale. To understand crucial biological processes, however, knowledge of the physical interplay between proteins can be more interesting than their mere expression. In this thesis, we present and demonstrate four software tools that unlock the cellular wiring in a condition-specific manner and promise a deeper understanding of what happens upon cell fate transitions. PPIXpress makes it possible to exploit the abundance of existing expression data to generate condition-specific interactomes; it can even account for alternative splicing events when protein isoforms can be related to the presence of causative protein domain interactions in an underlying model. As an addition to this work, we developed the convenient differential analysis tool PPICompare to determine rewiring events and their causes within the inferred interaction networks between grouped samples. Furthermore, we present a new implementation of the combinatorial protein complex prediction algorithm DACO that features a significantly reduced runtime. This improvement facilitates applying the method to a large number of samples, and the resulting sample-specific complexes can ultimately be assessed quantitatively with our novel differential protein complex analysis tool CompleXChange.

    A systems biology perspective on the consequences of aneuploidy in human cells


    Multidrug Resistant Clostridioides difficile: The Presence of Antimicrobial Resistance Determinants in Historical and Contemporaneous Isolates, and the Impact of Fluoroquinolone Resistance Development on PCR Ribotype 027 Fitness

    Clostridioides difficile is the major cause of infectious antibiotic-associated diarrhoea, imparting a substantial clinical and financial burden on healthcare facilities. Resistance development, particularly to fluoroquinolones, has been implicated in major international epidemics, predominantly associated with the hyper-virulent ribotype 027. The development of multiple antimicrobial resistance may contribute significantly to the considerable clinical challenges associated with this organism. In this study, optimised germination environments, antimicrobial susceptibility testing and next generation sequencing were utilised in the recovery and characterisation of an historical C. difficile collection (1980-86). Epidemiological comparisons of ribotype distribution and susceptibility patterns with modern surveillance data (2012-2016) sought to reveal antimicrobial resistance variance between the two distinct periods. By correlating phenotypic resistance and genetic determinants, the dissemination of resistance genes was evaluated. The contribution of bacterial mutability to resistance propagation was investigated in response to fluoroquinolone exposure amongst seven prevalent clinical ribotypes (n=44). Through in vitro batch and continuous co-culture modelling, the impact of resistance-conferring gyrA and gyrB mutations on the fitness of ribotype 027 (n=7) was assessed. The majority of test antimicrobials (n=8/9) were less active against modern than against historical isolates, potentially due to increased antimicrobial exposure and the subsequent selection/expansion of resistant strains. Moxifloxacin testing demonstrated the largest increase in resistant populations, reinforcing the notion of reduced susceptibility to modern fluoroquinolones as a potential contributory factor in disease. Phylogenetic analyses highlighted the complexity of molecular clock predictions, with 69% of historical genomes correlating with a 95% prediction interval.
Elevated mutability was observed amongst ribotype 027, suggesting a greater propensity for resistance evolution in this type. Common fluoroquinolone resistance-conferring substitutions revealed advantages to bacterial fitness. Continuous competitive co-culture modelling of a Thr82>Ile mutant 027 strain emphasised the fitness benefits of this polymorphism, which was retained in the absence of fluoroquinolone pressure. These findings indicate a potential contribution to the success of this ribotype.