
    Predicting the Physiological Role of Circadian Metabolic Regulation in the Green Alga Chlamydomonas reinhardtii

    Although the number of reconstructed metabolic networks is steadily growing, integrating experimental data into these networks is still challenging. Based on elementary flux mode analysis, we combine sequence information with metabolic pathway analysis and include, as a novel aspect, circadian regulation. While minimizing the need for assumptions, we are able to predict changes in the metabolic state and to hypothesise about the physiological role of circadian control in the nitrogen metabolism of the green alga Chlamydomonas reinhardtii.
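
    A minimal sketch of how circadian regulation can be layered onto elementary flux mode analysis: given precomputed elementary flux modes, modes that require a reaction repressed in a given circadian phase are discarded. The reaction names, data layout and tolerance are illustrative assumptions, not the model used in the paper.

# Minimal sketch (not the authors' code): filtering precomputed elementary
# flux modes (EFMs) by a circadian constraint. Each EFM is a vector of
# relative fluxes indexed by reaction name; reactions listed as repressed
# in a given phase are assumed to carry zero flux in that phase.

def feasible_efms(efms, repressed_reactions):
    """Return the EFMs that carry no flux through repressed reactions."""
    feasible = []
    for efm in efms:  # efm: dict {reaction: flux}
        if all(abs(efm.get(r, 0.0)) < 1e-9 for r in repressed_reactions):
            feasible.append(efm)
    return feasible

# Hypothetical toy example: nitrate uptake repressed at night.
efms = [
    {"NO3_uptake": 1.0, "GS_GOGAT": 1.0},          # nitrate assimilation mode
    {"starch_degradation": 1.0, "GS_GOGAT": 0.5},  # night-compatible mode
]
night_modes = feasible_efms(efms, repressed_reactions={"NO3_uptake"})
print(len(night_modes), "EFMs remain feasible at night")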

    Genome-wide inference of ancestral recombination graphs

    The complex correlation structure of a collection of orthologous DNA sequences is uniquely captured by the "ancestral recombination graph" (ARG), a complete record of coalescence and recombination events in the history of the sample. However, existing methods for ARG inference are computationally intensive, highly approximate, or limited to small numbers of sequences, and, as a consequence, explicit ARG inference is rarely used in applied population genomics. Here, we introduce a new algorithm for ARG inference that is efficient enough to apply to dozens of complete mammalian genomes. The key idea of our approach is to sample an ARG of n chromosomes conditional on an ARG of n-1 chromosomes, an operation we call "threading." Using techniques based on hidden Markov models, we can perform this threading operation exactly, up to the assumptions of the sequentially Markov coalescent and a discretization of time. An extension allows for threading of subtrees instead of individual sequences. Repeated application of these threading operations results in highly efficient Markov chain Monte Carlo samplers for ARGs. We have implemented these methods in a computer program called ARGweaver. Experiments with simulated data indicate that ARGweaver converges rapidly to the true posterior distribution and is effective in recovering various features of the ARG for dozens of sequences generated under realistic parameters for human populations. In applications of ARGweaver to 54 human genome sequences from Complete Genomics, we find clear signatures of natural selection, including regions of unusually ancient ancestry associated with balancing selection and reductions in allele age in sites under directional selection. Preliminary results also indicate that our methods can be used to gain insight into complex features of human population structure, even with a noninformative prior distribution. Comment: 88 pages, 7 main figures, 22 supplementary figures. This version contains a substantially expanded genomic data analysis.
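
    For intuition, the sketch below mimics the overall structure of the threading-based sampler described above: detach one sequence, re-sample how it threads through the ARG of the remaining n-1 sequences, repeat. The ARG is reduced to a toy representation (one discretized join time per sequence) and the uniform draw stands in for the exact HMM step under the sequentially Markov coalescent; none of this is ARGweaver's actual code or API.

import random

# Structural sketch of the threading-based MCMC sampler described above,
# NOT ARGweaver's implementation. The "ARG" is reduced to a toy
# representation (one join time per sequence on a discretized grid); the
# real sampler draws the threading exactly from an HMM under the
# sequentially Markov coalescent.

TIME_GRID = [10, 100, 1000, 10000]  # discretized times (toy values)

def remove_sequence(arg, i):
    """Return a copy of the ARG with sequence i detached."""
    reduced = dict(arg)
    reduced.pop(i, None)
    return reduced

def sample_threading(reduced_arg, i):
    """Stand-in for the HMM threading step: re-sample where (in time)
    sequence i joins the ARG of the remaining n-1 sequences."""
    reduced_arg[i] = random.choice(TIME_GRID)  # uniform draw as a placeholder
    return reduced_arg

def mcmc_threading_sampler(arg, n_iter=1000):
    """Gibbs-style loop: detach one sequence and re-thread it conditional on the rest."""
    samples = []
    for _ in range(n_iter):
        i = random.choice(list(arg))
        arg = sample_threading(remove_sequence(arg, i), i)
        samples.append(dict(arg))
    return samples

initial_arg = {i: random.choice(TIME_GRID) for i in range(5)}
posterior_samples = mcmc_threading_sampler(initial_arg)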

    Haplotype phasing with ASP from long reads: a flexible optimization approach

    Uncorrected version. A new version will be available by March 2023. Each chromosome of a di- or polyploid organism has several haplotypes, which are highly similar but diverge at a certain number of positions. However, most reference genomes provide only a single sequence for each chromosome and therefore do not reflect the biological reality. Yet it is crucial to have access to this information, which is useful in medicine, agronomy and population studies. The recent development of third-generation technologies, especially PacBio and Oxford Nanopore Technologies sequencers, has allowed the production of long reads that facilitate the reconstruction of haplotype sequences. Bioinformatics methods exist for this task, but they provide only a single solution. This thesis introduces an approach for haplotype phasing based on the search for connected components in a read similarity graph to identify haplotypes. The method uses Answer Set Programming to work on the set of optimal solutions. This phasing algorithm has been used to reconstruct the haplotypes of the diploid rotifer Adineta vaga.
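
    The following toy sketch illustrates only the connected-component idea: reads are encoded by their alleles at heterozygous positions, edges connect reads that agree more often than they disagree, and each component is a candidate haplotype group. The similarity score, threshold and read encoding are illustrative assumptions, and the Answer Set Programming enumeration of optimal solutions central to the thesis is not reproduced here.

from collections import defaultdict, deque

# Minimal sketch of the connected-component idea only (not the thesis method).

def similarity(r1, r2):
    """Agreements minus disagreements over positions covered by both reads."""
    shared = set(r1) & set(r2)
    return sum(1 if r1[p] == r2[p] else -1 for p in shared)

def read_similarity_graph(reads, min_score=1):
    graph = defaultdict(set)
    names = list(reads)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if similarity(reads[a], reads[b]) >= min_score:
                graph[a].add(b)
                graph[b].add(a)
    return graph

def connected_components(graph, nodes):
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(graph[node] - comp)
        seen |= comp
        components.append(comp)
    return components

# Toy reads: {heterozygous position: allele}.
reads = {
    "r1": {1: "A", 2: "T"}, "r2": {2: "T", 3: "G"},  # haplotype 1
    "r3": {1: "C", 2: "A"}, "r4": {2: "A", 3: "C"},  # haplotype 2
}
print(connected_components(read_similarity_graph(reads), reads))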

    Modeling complex cellular systems: from differential equations to constraint-based models

    In the beginning of the 20th century, scientists realized the necessity of purifying enzymes to unravel their mechanistic nature. A century and tremendous progress in the natural sciences later, molecular and systems biology became fundamental pillars of modern biology. Moreover, natural scientists developed an increasing interest in theoretical models. In the first part of my thesis, I present my contribution to the field of studying the dynamics of biological phenomena. I present fundamental issues that arise when substrate inhibition is neglected in kinetic modeling. Furthermore, I describe a model that incorporates experimental data to simulate the transition of normally proliferating cells into cellular senescence. Because large-scale models are more comprehensive, they commonly prohibit a mechanistic modeling approach; nevertheless, constraint-based methods have proved to be suitable tools for analyzing such models. In the second part of my thesis, I contribute three studies to constraint-based modeling. I describe the established concept of elementary flux modes, which represent non-decomposable, theoretically feasible pathways of metabolic networks. Subsequently, I present the analysis of the nitrogen metabolism network of Chlamydomonas reinhardtii with respect to circadian regulation, which gives rise to about three million elementary flux modes. In the last study, I present a comprehensive work on the metabolic costs of amino acid and protein production in Escherichia coli. These costs were calculated both manually and on the basis of a flux balance analysis of an E. coli genome-scale metabolic model. Both approaches, dynamic and constraint-based modeling, proved to be suitable strategies for describing biological processes at different levels. Whereas dynamic modeling allowed for a precise description of the temporal behavior of biological species, constraint-based modeling enabled studies where the complexity of the investigated phenomena prohibits kinetic modeling.
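
    As a small illustration of the constraint-based side, the snippet below runs a flux balance analysis on a made-up three-reaction network with scipy's linear programming solver; the stoichiometry, bounds and objective are toy assumptions, not the E. coli genome-scale model analyzed in the thesis.

import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA) on a made-up network:
# R1: -> A (uptake), R2: A -> B, R3: B -> (objective, e.g. biomass export).
S = np.array([
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
])
bounds = [(0, 10), (0, 1000), (0, 1000)]  # flux bounds per reaction
c = [0, 0, -1]                            # maximize v3 == minimize -v3

# Steady state S·v = 0, fluxes within bounds, objective maximized.
result = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print("optimal fluxes:", result.x)        # expected: [10, 10, 10]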

    Microsatellite markers in genetic improvement of livestock


    Focus: A Graph Approach for Data-Mining and Domain-Specific Assembly of Next Generation Sequencing Data

    Next Generation Sequencing (NGS) has emerged as a key technology leading to revolutionary breakthroughs in numerous biomedical research areas. These technologies produce millions to billions of short DNA reads that each represent a small fraction of the original target DNA sequence. These short reads contain little information individually but are produced at a high coverage of the original sequence such that many reads overlap. Overlap relationships allow the reads to be linearly ordered and merged by computational programs called assemblers into long stretches of contiguous sequence, called contigs, that can be used for research applications. Although the assembly of the reads produced by NGS remains a difficult task, it is the process of extracting useful knowledge from these relatively short sequences that has become one of the most exciting and challenging problems in bioinformatics. The assembly of short reads is an aggregative process in which critical information is lost as reads are merged into contigs. In addition, the assembly process is treated as a black box, with generic assembler tools that do not adapt to the characteristics of the input data set. Finally, as NGS data throughput continues to increase, there is an increasing need for smart parallel assembler implementations. In this dissertation, a new assembly approach called Focus is proposed. Unlike previous assemblers, Focus relies on a novel hybrid graph constructed from multiple graphs at different levels of granularity to represent the assembly problem, facilitating information capture and dynamic adjustment to the characteristics of the input data set. This work is composed of four specific aims: (1) the implementation of a robust assembly and analysis tool built on the hybrid graph platform; (2) the development and application of graph mining to extract biologically relevant features in NGS data sets; (3) the integration of domain-specific knowledge to improve the assembly and analysis process; and (4) the construction of smart parallel computing approaches, including the application of energy-aware computing for NGS assembly and knowledge integration to improve algorithm performance. In conclusion, this dissertation presents a complete parallel assembler called Focus that is capable of extracting biologically relevant features directly from its hybrid assembly graph.
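
    To make the overlap-and-merge idea in the first half of the abstract concrete, here is a naive greedy sketch: find the longest suffix-prefix overlap between two reads, merge them, and repeat until only contigs remain. It is for intuition only and bears no relation to Focus's hybrid multi-granularity graph; real assemblers must also handle sequencing errors, repeats and reverse complements.

# Naive greedy overlap-and-merge sketch (illustrative only, not Focus).

def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` matching a prefix of `b`."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads, min_len=3):
    reads = list(reads)
    while True:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b, min_len)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:  # no overlaps left: the remaining sequences are the contigs
            return reads
        merged = reads[i] + reads[j][k:]
        reads = [r for n, r in enumerate(reads) if n not in (i, j)] + [merged]

print(greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG"]))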

    Discovery of Unconventional Patterns for Sequence Analysis: Theory and Algorithms

    The biology community is collecting a large amount of raw data, such as the genome sequences of organisms, microarray data, and interaction data such as gene-protein and protein-protein interactions. This amount is rapidly increasing, and the process of understanding the data is lagging behind the process of acquiring it. An inevitable first step towards making sense of the data is to study their regularities, focusing on the non-random structures that appear surprisingly often in the input sequences: patterns. In this thesis we discuss three incarnations of the pattern discovery task, exploring three types of patterns that can model different regularities of the input dataset. Mask patterns are designed to model short repeated biological sequences showing high conservation of their content at some specific positions, while permutation patterns are designed to detect repeated patterns whose parts maintain their physical adjacency but not their ordering across the pattern occurrences. Transposons, instead, model mobile sequences in the input dataset, which can be discovered by comparing different copies of the same input string and detecting large insertions and deletions in their alignment.
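
    As a toy illustration of the mask-pattern idea (conserved positions plus don't-care positions), the sketch below counts masked k-mer variants of a sequence and reports those occurring at least a given number of times. The parameters and brute-force approach are illustrative assumptions and are unrelated to the algorithms developed in the thesis.

from collections import Counter
from itertools import combinations

# Toy mask-pattern counter: patterns with conserved positions and '.' wildcards.

def masked_variants(kmer, n_masked):
    """All ways of masking `n_masked` positions of a k-mer with '.'."""
    for positions in combinations(range(len(kmer)), n_masked):
        yield "".join("." if i in positions else c for i, c in enumerate(kmer))

def frequent_mask_patterns(sequence, k=5, n_masked=1, min_count=3):
    """Count masked variants of every k-mer window and keep the frequent ones."""
    counts = Counter()
    for i in range(len(sequence) - k + 1):
        for pattern in masked_variants(sequence[i:i + k], n_masked):
            counts[pattern] += 1
    return {p: c for p, c in counts.items() if c >= min_count}

print(frequent_mask_patterns("ACGTAACGTTACGTC", k=4, n_masked=1, min_count=2))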