
    Methods for Epigenetic Analyses from Long-Read Sequencing Data

    Epigenetics, particularly the study of DNA methylation, is a cornerstone of our understanding of human development and disease. DNA methylation has been included in the "hallmarks of cancer" due to its important function as a biomarker and its contribution to carcinogenesis and cancer cell plasticity. Long-read sequencing technologies, such as the Oxford Nanopore Technologies platform, have advanced the study of structural variation while at the same time allowing direct measurement of DNA methylation on the same reads. With this, new avenues of analysis have opened up, such as long-range allele-specific methylation analysis, methylation analysis on structural variations, or relating nearby epigenetic modalities on the same read to one another. Basecalling and methylation calling of Nanopore reads are computationally expensive tasks that require complex machine learning architectures. Read-level methylation calls require different approaches to data management and analysis than those developed for methylation frequencies measured from short-read technologies or array data. The two-dimensional nature of read- and genome-associated DNA methylation calls, together with methylation-caller uncertainties, makes them far more costly to store than one-dimensional methylation frequencies. Methods for the storage, retrieval, and analysis of such data therefore require careful consideration. Downstream analysis tasks, such as methylation segmentation or differential methylation calling, have the potential to benefit from read-level information and allow uncertainty propagation. These avenues had not been considered in existing tools. In my work, I explored the potential of long-read DNA methylation analysis and tackled some of the challenges of data management and downstream analysis using state-of-the-art software architecture and machine learning methods. I defined a storage standard for reference-anchored and read-assigned DNA methylation calls, including methylation-calling uncertainties and read annotations such as haplotype or sample information. This storage container is defined as a schema for the Hierarchical Data Format version 5 (HDF5), includes an index for rapid access to genomic coordinates, and is optimized for parallel computing with even load balancing. It further includes a Python API for creation, modification, and data access, including convenience functions for the extraction of important quality statistics via a command-line interface. Furthermore, I developed software solutions for the segmentation and differential methylation testing of DNA methylation calls from Nanopore sequencing. This implementation takes advantage of the performance benefits provided by my high-performance storage container. It includes a Bayesian methylome segmentation algorithm which allows consensus segmentation of multiple sample- and/or haplotype-assigned DNA methylation profiles while considering methylation-calling uncertainties. Based on this segmentation, the software can then perform differential methylation testing and provides a large number of options for statistical testing and multiple-testing correction. I benchmarked all tools on both simulated and publicly available real data, and show their performance benefits compared to previously existing and concurrently developed solutions. Next, I applied the methods in a cancer study on a chromothriptic tumor sample from a patient with Sonic Hedgehog medulloblastoma.
    Here I report regulatory genomic regions that are differentially methylated before and after treatment, allele-specific methylation in the tumor, and methylation on chromothriptic structures. Finally, I developed specialized methylation callers for the combined DNA methylation profiling of CpG, GpC, and context-free adenine methylation. These callers can be used to measure chromatin accessibility in a NOMe-seq-like setup, showing the potential of long-read sequencing for the profiling of transcription factor co-binding. In conclusion, this thesis presents and benchmarks new algorithmic and infrastructural solutions for the analysis of DNA methylation data from long-read sequencing.
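
    The abstract describes the storage container only at a high level; as a rough illustration of how a reference-anchored, read-assigned methylation call store with coordinate-based retrieval and a Python API might look in HDF5, here is a minimal h5py sketch. The group layout, dataset names, and fields are assumptions for illustration, not the actual schema defined in the thesis.

```python
# Illustrative sketch of a reference-anchored, read-assigned methylation call store
# in HDF5 (h5py). Group/dataset names and fields are assumptions for illustration;
# the thesis' actual schema may differ.
import numpy as np
import h5py

def write_calls(path, chrom, positions, read_ids, llrs, haplotypes):
    """Store methylation calls for one chromosome, sorted by genomic position."""
    order = np.argsort(positions)
    with h5py.File(path, "w") as f:
        g = f.create_group(f"methylation_calls/{chrom}")
        g.create_dataset("position", data=np.asarray(positions)[order], compression="gzip")
        g.create_dataset("read_id", data=np.asarray(read_ids, dtype="S32")[order], compression="gzip")
        # log-likelihood ratio of methylated vs. unmethylated keeps caller uncertainty
        g.create_dataset("llr", data=np.asarray(llrs, dtype="f4")[order], compression="gzip")
        g.create_dataset("haplotype", data=np.asarray(haplotypes, dtype="i1")[order], compression="gzip")

def query_range(path, chrom, start, end):
    """Retrieve all calls overlapping [start, end) via binary search on the sorted positions."""
    with h5py.File(path, "r") as f:
        g = f[f"methylation_calls/{chrom}"]
        pos = g["position"][:]
        lo, hi = np.searchsorted(pos, [start, end])
        return {name: g[name][lo:hi] for name in g}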

    EXACT ALGORITHMS FOR GENOME ASSEMBLY

    Ph.D. (Doctor of Philosophy)

    From genomes to metagenomes: Development of a rapid-aligner for genome assembly and application of macroecological models to microbiology

    Since the development of the modern computer, many scientific fields have undergone paradigm shifts due to an increasing facility in data collection and analysis. Microbiology has been impacted by computational advances, especially in DNA sequencing applications, and this has led to an interesting problem: there is too much raw data for any person to understand. It is important to have tools that are able to process and analyze these vast amounts of data, so that microbiologists can robustly test hypotheses and predict patterns. Long-read sequencers are capable of sequencing entire genomes with very few reads, but exhibit much higher error rates compared to short-read sequencing platforms. Most current genome assemblers were developed for highly accurate short-read data, so there is a need to build new tools that can handle these long, error-filled reads. Here, we developed an alignment algorithm in the C programming language for error-prone long reads, as part of a larger genome assembler. This alignment algorithm creates a profile of ordered k-mers representing all of the reads, then clusters these k-mers to generate a consensus sequence. We show that the alignment algorithm can handle long-read error rates and produce useful results. Using a low-coverage test data set, the algorithm was able to produce a consensus sequence with 85.3% identity to a reference sequence built with extremely high-coverage data. Future work will aim to improve this accuracy by error-correcting k-mers and identifying close repeats of k-mers. The field of metagenomics is entering a new state of maturation. Isolation of total community DNA, shotgun sequencing, and assembly of draft genomes for populations have become standard practice in many microbial ecology labs, and many pipelines for manipulating metagenomic sequence data exist. What is not as well understood, however, is how to analyze the growing databases of metagenomic datasets with statistical rigour. Examining the relationships and interactions of different groups of microorganisms across the planet requires strong statistical models that can be used to assess hypotheses. We borrowed occupancy modelling from the macroecological toolbox and adapted it to microbial metagenomic datasets. Occupancy models are designed to assess the occupancy states of sample sites while accounting for possible missed detections by re-sampling these sites. We emulate re-sampling by searching for multiple genes associated with functions of interest, where each gene is considered an independent sampling event. We use detection of these genes as a proxy for the presence of functional potential within environments, and can assess occurrence and, importantly, co-occurrence patterns. We applied this method to nearly 10,000 metagenomes to assess global occupancy patterns for methanogens and methanotrophs, key contributors to the methane cycle. To assess the occupancy patterns of methane cyclers, we looked for genes encoding the subunits of the methyl coenzyme M reductase complex (MCR) and the methane monooxygenases (MMO), biological markers of methanogenesis and methanotrophy, respectively. Our models predicted that occupancy probabilities for both functional groups changed with ecosystem type, latitude, and the date that the data were deposited in the database. The explanatory power of the models was relatively low, which is likely due to a lack of metadata that could be used to better inform the models.
    Occupancy models have the potential to be powerful tools, but microbial ecologists will need to embrace better standards for metadata collection and reporting for metagenomes. Such metadata could include pH, temperature, and other key environmental factors. Future work should focus on establishing and enforcing these metadata requirements to enable statistical assessment of functionally important groups across environments.
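
    As a rough illustration of the occupancy-model idea described above, the following sketch treats each metagenome as a "site" and each marker-gene search as one independent detection replicate, then fits occupancy and detection probabilities by brute force. The parameterization and the grid-search fit are simplifying assumptions; the actual models additionally use covariates such as ecosystem type and latitude.

```python
# Minimal single-season occupancy model likelihood. Sites = metagenomes; replicates =
# individual marker genes searched (e.g. MCR subunits). Illustrative, not the thesis' model.
import numpy as np

def occupancy_log_likelihood(psi, p, detection_histories):
    """psi: occupancy probability, p: per-replicate detection probability,
    detection_histories: (n_sites, n_replicates) array of 0/1 gene detections."""
    y = np.asarray(detection_histories)
    k = y.shape[1]
    d = y.sum(axis=1)
    detected = d > 0
    ll = np.where(
        detected,
        np.log(psi) + d * np.log(p) + (k - d) * np.log(1 - p),   # site surely occupied
        np.log(psi * (1 - p) ** k + (1 - psi)),                  # occupied-but-missed or absent
    )
    return ll.sum()

def fit_by_grid(detection_histories, grid=np.linspace(0.01, 0.99, 99)):
    """Crude maximum-likelihood fit over a psi x p grid."""
    best = max(
        (occupancy_log_likelihood(psi, p, detection_histories), psi, p)
        for psi in grid for p in grid
    )
    return {"log_lik": best[0], "psi": best[1], "p": best[2]}

# Example: 5 metagenomes screened for 3 MCR subunit genes (1 = gene detected).
histories = [[1, 1, 0], [0, 0, 0], [1, 0, 1], [0, 0, 0], [0, 1, 0]]
print(fit_by_grid(histories))
```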

    A large-scale computational framework for comparative analyses in population genetics and metagenomics

    Population genetics is the study of spatio-temporal genetic variation among individuals. Its purpose is to understand evolution: the change in the frequency of alleles over time. The effects of these alleles are expressed on different levels of biological organization, from molecular complexes to entire organisms. Eventually, they will affect traits that can influence the survival and reproduction of organisms. Fitness is the probability of transferring alleles to subsequent generations through successful survival and reproduction. Due to differential fitness, phenotypic properties that confer beneficial effects on survival and reproduction may become prevalent in a population. Random mutations introduce new alleles into a population. The underlying changes in DNA sequences can be caused by replication errors, failures in DNA repair processes, or the insertion and deletion of transposable elements. In sexual organisms, genetic recombination randomly shuffles alleles across chromosomes, yielding new combinations of alleles without changing allele frequencies. On the molecular level, mutations at a set of loci may cause a gain or loss of function resulting in entirely different phenotypes, thereby influencing the survival of an organism. Despite the dominance of neutral mutations, the accumulation of small changes over time may affect fitness and further contribute to evolution. The goal of this study is to provide a framework for comparative analyses of large-scale genomic datasets, especially of populations within a species, such as the 1001 Genomes Project of Arabidopsis thaliana, the 1000 Genomes Project of humans, or metagenomics datasets. Algorithms have been developed to provide the following features: 1) denoising and improving the effective coverage of raw genomic datasets (Trowel), 2) performing multiple whole genome alignments (WGAs) and detecting small variations in a population (Kairos), 3) identifying structural variants (SVs) (Apollo), and 4) classifying microorganisms in metagenomics datasets (Poseidon). The algorithms do not furnish any interpretation of raw genomic data but provide analyses as a basis for biological hypotheses. With the advances in distributed and parallel computing, many modern bioinformatics algorithms have come to utilize multi-core processing on CPUs or GPUs. Increased computational capacity allows us to solve bigger and more complex problems. However, such hardware advances do not by themselves improve the utilization of large datasets or bring insights to biological questions. Smart data structures and algorithms are required in order to exploit the enhanced computing power and to extract high-quality information. For population genetics, an efficient representation of a pan-genome and the relevant formulas are needed. On top of such a representation, sequence alignments play a pivotal role in solving biological problems: calculating allele frequencies, detecting rare variants, associating genotypes with phenotypes, and inferring the causality of certain diseases. To detect mutations in a population, the conventional alignment method is extended so that multiple genomes are aligned simultaneously. The number of complete genome sequences has steadily increased, but the analysis of large, complex datasets remains challenging.
    Next Generation Sequencing (NGS) technology is considered one of the great advances in modern biology, and has led to a dramatically more precise and detailed understanding of genomes and their activities. The contiguity and accuracy of sequencing reads have been improving, so that a complete genome sequence of a single cell may become obtainable from a sequencing library in the future. Though chemical and optical engineering are the main drivers of advances in sequencing technology, informatics and computer engineering have significantly influenced the quality of sequences. Genomic sequencing data contain errors in the form of substitutions, insertions, and deletions of nucleotides, and read lengths are far shorter than a given genome. These problems can be alleviated by means of error correction and genome assembly, leading to more accurate downstream analyses. Short-read aligners have been the key ingredient for measuring and observing genetic mutations using Illumina sequencing technology, the dominant technology of the last decade. As long reads from newer methods or assembled contigs become accessible, mapping schemes that capture long-range context rather than lingering on local matches should be devised. Parameters for short-read aligners, such as the number of mismatches and the gap-opening and gap-extension penalties, are not directly applicable to long-read alignments. At the other end of the spectrum, whole genome aligners (WGA) attempt to solve the alignment problem in a much longer context, providing essential data for comparative studies. However, available WGA algorithms are not yet optimized for practical use in population genetics due to their high computing demands. Moreover, too little attention has been paid to defining an ideal data format for applications in comparative genomics. To deal with datasets representing a large population of diverse individuals, multiple sequence alignment (MSA) algorithms should be combined with WGA methods, an approach known as multiple whole genome alignment (MWGA). Though several MWGA algorithms have been proposed, their accuracy has not been clearly measured. In fact, known quality assessment tools have yielded highly fluctuating results depending on the selection of organisms and sequencing profiles. Of even more serious concern, experiments to measure the performance of MWGA methods have been only ambiguously described, which in turn has made it difficult to interpret multiple alignment results. Using precise locations of variants known from simulations and standardized statistics, I present a far more comprehensive method to measure the accuracy of an MWGA algorithm. Metagenomics is the study of the genetic composition of a given community (often predominantly microbial). It overcomes the limitation of having to culture each organism for genome sequencing and also provides quantitative information on the composition of a community. Though an environmental sample provides more natural genetic material, the complexity of the analyses is greatly increased: the number of species can be very large and only small portions of a genome may be sampled. I provide an algorithm, Poseidon, which classifies sequencing reads to taxonomy identifiers at species resolution and helps to quantify their relative abundances in the samples. The interactions among individual bacteria in a certain population can result in both conflict and cooperation, and thus a mixture of diverse bacterial species shows a set of functional adaptations to a particular environment.
    The composition of species can be changed by distinct biotic or abiotic factors, which may in turn alter the susceptibility of a host to a certain disease. Basic concerns for a metagenomics study are therefore the accurate quantification of species and deciphering their functional role in a given environment. In summary, this work presents advanced bioinformatics methods: Trowel, Kairos, Apollo, and Poseidon. Trowel corrects sequencing errors in reads by utilizing high-quality k-mer information. Kairos aligns query sequences against multiple genomes in a population of a single species. Apollo characterizes genome-wide genetic variants, from point mutations to large structural variants, on top of the alignments of Kairos. Poseidon classifies metagenomics datasets to taxonomy identifiers. Though the work does not directly address any specific biological questions, it provides preliminary materials for further downstream analyses.
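
    As a rough illustration of the trusted-k-mer idea behind error-correction tools such as Trowel, the sketch below marks k-mers seen frequently enough as trusted and repairs bases whose covering k-mers are untrusted by trying single-base substitutions. The k-mer size, threshold, and single-substitution strategy are illustrative assumptions, not the published Trowel algorithm.

```python
# Illustrative trusted-k-mer error correction (not the published Trowel algorithm).
# A k-mer seen at least MIN_COUNT times is "trusted"; for each untrusted k-mer we
# try single-base substitutions that turn it into a trusted one.
from collections import Counter

K = 5
MIN_COUNT = 3

def kmers(seq, k=K):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def build_trusted(reads, k=K, min_count=MIN_COUNT):
    counts = Counter(km for r in reads for km in kmers(r, k))
    return {km for km, c in counts.items() if c >= min_count}

def correct_read(read, trusted, k=K):
    read = list(read)
    for i in range(len(read) - k + 1):
        window = "".join(read[i:i + k])
        if window in trusted:
            continue
        # try substituting each base in the window with an alternative nucleotide
        for j in range(k):
            for base in "ACGT":
                candidate = window[:j] + base + window[j + 1:]
                if candidate in trusted:
                    read[i + j] = base
                    break
            else:
                continue
            break
    return "".join(read)

reads = ["ACGTACGTAC"] * 5 + ["ACGTTCGTAC"]   # last read carries one substitution error
trusted = build_trusted(reads)
print(correct_read("ACGTTCGTAC", trusted))     # prints ACGTACGTAC
```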

    Novel computational techniques for mapping and classifying Next-Generation Sequencing data

    Since their emergence around 2006, Next-Generation Sequencing technologies have been revolutionizing biological and medical research. Quickly obtaining an extensive number of short or long DNA sequence reads from almost any biological sample enables detecting genomic variants, revealing the composition of species in a metagenome, deciphering cancer biology, decoding the evolution of living or extinct species, or understanding human migration patterns and human history in general. The pace at which the throughput of sequencing technologies is increasing surpasses the growth of storage and computing capacities, which creates new computational challenges in NGS data processing. In this thesis, we present novel computational techniques for read mapping and taxonomic classification. With more than a hundred published mappers, read mapping might be considered fully solved. However, the vast majority of mappers follow the same paradigm, and little attention has been paid to non-standard mapping approaches. Here, we propose so-called dynamic mapping, which we show to significantly improve the resulting alignments compared to traditional mapping approaches. Dynamic mapping is based on exploiting the information from previously computed alignments to improve the mapping of subsequent reads. We provide the first comprehensive overview of this method and demonstrate its qualities using Dynamic Mapping Simulator, a pipeline that compares various dynamic mapping scenarios to static mapping and iterative referencing. An important component of a dynamic mapper is an online consensus caller, i.e., a program collecting alignment statistics and guiding updates of the reference in an online fashion. We provide Ococo, the first online consensus caller, which maintains statistics for individual genomic positions using compact bit counters. Beyond its application to dynamic mapping, Ococo can be employed as an online SNP caller in various analysis pipelines, enabling SNP calling from a stream without saving the alignments on disk. Metagenomic classification of NGS reads is another major topic studied in this thesis. Given a database with thousands of reference genomes placed on a taxonomic tree, the task is to rapidly assign a huge number of NGS reads to tree nodes and, possibly, estimate the relative abundances of the species involved. In this thesis, we propose improved computational techniques for this task. In a series of experiments, we show that spaced seeds consistently improve classification accuracy. We provide Seed-Kraken, a spaced-seed extension of Kraken, the most popular classifier at present. Furthermore, we suggest ProPhyle, a new indexing strategy based on a BWT-index, obtaining a much smaller and more informative index compared to Kraken. We also provide a modified version of BWA that improves the BWT-index for quick k-mer look-up.
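
    A minimal sketch of the spaced-seed idea mentioned above: instead of contiguous k-mers, only the positions selected by a binary mask are extracted, so a mismatch falling on a "don't care" position leaves the extracted seed unchanged. The mask below is an arbitrary example, not one of the seeds evaluated for Seed-Kraken.

```python
# Minimal spaced-seed extraction as used to boost k-mer-based classification.
# The mask is an illustrative assumption: '1' = position compared, '0' = ignored.
MASK = "1101101"

def spaced_seeds(seq, mask=MASK):
    """Yield the subsequences selected by the mask at every offset of seq."""
    span = len(mask)
    care = [i for i, m in enumerate(mask) if m == "1"]
    for start in range(len(seq) - span + 1):
        window = seq[start:start + span]
        yield "".join(window[i] for i in care)

read = "ACGTTACGGA"
print(list(spaced_seeds(read)))
# A single mismatch under a '0' position leaves the extracted seed unchanged,
# which is why spaced seeds tolerate sequencing errors better than contiguous k-mers.
```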

    Alternative Splicing and Protein Structure Evolution

    In recent years, there has been a dramatic increase in available experimental data across a wide range of biological fields. For the first time, these data allow a detailed analysis of how cellular components such as genes and proteins function, of how they are connected in cellular networks, and of the history of their evolution. Bioinformatics in particular plays an important role here, both in preparing the data and in their biological interpretation. This thesis investigates two important areas of current bioinformatics research: the analysis of protein structure evolution and of similarities between protein structures, and the analysis of alternative splicing, an integral process in eukaryotic cells that contributes to functional diversity. In particular, this work introduces the idea of a combined analysis of the two mechanisms (structure evolution and splicing). We show that such a combined view yields new insights into how structure evolution and alternative splicing, as well as the coupling of both mechanisms, contribute to functional and structural complexity in higher organisms. The methods, hypotheses, and results presented in this thesis can contribute to our understanding of how structure evolution and alternative splicing operate in the emergence of complex organisms, allowing these two traditionally separate areas of bioinformatics to benefit from each other in the future.

    Evolutionary Inference from Admixed Genomes: Implications of Hybridization for Biodiversity Dynamics and Conservation

    Hybridization as a macroevolutionary mechanism has been historically underappreciated among vertebrate biologists. Yet the advent and subsequent proliferation of next-generation sequencing methods have increasingly shown hybridization to be a pervasive agent influencing evolution in many branches of the Tree of Life (including ancestral hominids). Despite this, the dynamics of hybridization with regard to speciation and extinction remain poorly understood. To this end, I here examine the role of hybridization in the context of the historical divergence and contemporary decline of several threatened and endangered North American taxa, with the goal of illuminating the implications of hybridization for promoting, or impeding, population persistence in a shifting adaptive landscape. Chapter I employed population genomic approaches to examine potential effects of habitat modification on species boundary stability in co-occurring endemic fishes of the Colorado River basin (Gila robusta and G. cypha). Results showed how one potential outcome of hybridization might drive species decline: via a breakdown in selection against interspecific heterozygotes and subsequent genetic erosion of the parental species. Chapter II explored long-term contributions of hybridization in an evolutionarily recent species complex (Gila) using a combination of phylogenomic and phylogeographic modelling approaches. Massively parallel computational methods were developed and deployed to categorize sources of phylogenetic discordance as drivers of systematic bias among a panel of species-tree inference algorithms. Contrary to past evidence, we found that hypotheses of hybrid origin (excluding one notable example) were instead explained by gene-tree discordance driven by a rapid radiation. Chapter III examined patterns of local ancestry in the genome of the endangered red wolf (Canis rufus), a taxon at the center of a long-standing debate about the origin of the species. Analyses show how pervasive autosomal introgression served to mask signatures of prior isolation, in turn misleading analyses and causing the species to be interpreted as of recent hybrid origin. Analyses also showed how recombination interacts with selection to create a non-random, structured genomic landscape of ancestries, with, in the case of the red wolf, the 'original' species tree being retained only in low-recombination 'refugia' of the X chromosome. The final three chapters present bioinformatic software that I developed during my dissertation research to facilitate the molecular approaches and analyses presented in Chapters I–III. Chapter IV details an in silico method for optimizing genomic protocols similar to those used herein (RADseq of reduced-representation libraries) for other non-model organisms. Chapter V describes a method for parsing genomic datasets for elements of interest, either as a filtering mechanism for downstream analysis or as a precursor to targeted-enrichment reduced-representation genomic sequencing. Chapter VI presents a rapid algorithm for defining a 'most parsimonious' set of recombinational breakpoints in genomic datasets, supporting local-ancestry analyses such as those used in Chapter III. My three case studies and accompanying software address three trajectories in modern hybridization research: How does hybridization impact short-term population persistence? How does hybridization drive macroevolutionary trends? And how do outcomes of hybridization vary across the genome?
    In so doing, my research promotes a deeper understanding of the role that hybridization has played, and will continue to play, in governing the evolutionary fates of lineages at both contemporary and historical timescales.
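
    As a small illustration of the kind of in silico optimization described for Chapter IV, the sketch below digests a sequence with two hypothetical restriction enzymes and counts the fragments that would survive a size-selection window. Enzyme names, recognition sites, and the size window are assumptions for illustration only, and the cut model ignores overhangs and other details of the actual method.

```python
# Illustrative in-silico double-digest sketch: cut a sequence at two (hypothetical)
# enzyme recognition sites and count fragments falling in a size-selection window.
import re

ENZYMES = {"EcoRI": "GAATTC", "MspI": "CCGG"}   # example recognition sites, assumed for illustration

def digest_fragments(seq, sites):
    """Return fragment lengths after cutting seq at every occurrence of any site."""
    cut_positions = sorted(
        {m.start() for site in sites for m in re.finditer(site, seq)} | {0, len(seq)}
    )
    return [b - a for a, b in zip(cut_positions, cut_positions[1:])]

def count_selected(seq, sites, min_len=300, max_len=500):
    """Number of fragments that would survive a min_len..max_len size selection."""
    return sum(min_len <= length <= max_len for length in digest_fragments(seq, sites))

# Usage with a toy sequence; a real run would use a reference genome or draft assembly.
genome = "ATGAATTC" + "ACGT" * 100 + "CCGG" + "TTAA" * 120 + "GAATTC" + "GGCC" * 50
print(count_selected(genome, ENZYMES.values(), min_len=100, max_len=600))
```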

    Compressão e análise de dados genómicos (Compression and analysis of genomic data)

    Doctoral degree in Informatics. Genomic sequences are large codified messages describing most of the structure of all known living organisms. Since the presentation of the first genomic sequence, a huge amount of genomics data has been generated, with diversified characteristics, rendering the data deluge phenomenon a serious problem in most genomics centers. As such, most of the data are discarded (when possible), while others are compressed using general-purpose algorithms, often attaining modest data reduction results. Several specific algorithms have been proposed for the compression of genomic data, but unfortunately only a few of them have been made available as usable and reliable compression tools, and of those, most have been developed for some specific purpose. In this thesis, we propose a compressor for genomic sequences of multiple natures, able to function in a reference or reference-free mode. Besides, it is very flexible and can cope with diverse hardware specifications. It uses a mixture of finite-context models (FCMs) and eXtended FCMs. The results show improvements over state-of-the-art compressors. Since the compressor can be seen as an unsupervised, alignment-free method to estimate the algorithmic complexity of genomic sequences, it is the ideal candidate to perform analysis of and between sequences. Accordingly, we define a way to approximate the Normalized Information Distance directly, aiming to identify evolutionary similarities within and between species. Moreover, we introduce a new concept, the Normalized Relative Compression, that is able to quantify and infer new characteristics of the data, previously undetected by other methods. We also investigate local measures, being able to locate specific events using complexity profiles. Furthermore, we present and explore a method based on complexity profiles to detect and visualize genomic rearrangements between sequences, yielding several insights into the genomic evolution of humans. Finally, we introduce the concept of relative uniqueness and apply it to the Ebolavirus, identifying three regions that appear in all the virus sequences of the outbreak but nowhere in the human genome. In fact, we show that these sequences are sufficient to classify different sub-species. Also, we identify regions in human chromosomes that are absent from the DNA of close primates, specifying novel traits of human uniqueness.
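
    A minimal sketch of the finite-context-model idea underlying these measures: approximate the bits needed to describe a target sequence using an order-k model trained only on a reference, and normalize to obtain a rough Normalized Relative Compression value. The model order, Laplace smoothing, and function names are illustrative assumptions; the thesis' compressor mixes several such models (FCMs and extended FCMs).

```python
# Rough sketch of an order-k finite-context model used as a relative compressor:
# C(x||y) is approximated by the code length of x under a model trained on y alone,
# and NRC(x||y) = C(x||y) / (|x| * log2(4)). Illustrative, not the thesis' FCM mixture.
import math
from collections import defaultdict

ALPHABET = "ACGT"

def train_fcm(reference, k=4):
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(reference)):
        counts[reference[i - k:i]][reference[i]] += 1
    return counts

def code_length_bits(target, counts, k=4, alpha=1.0):
    """Bits to encode target with the reference-trained model (Laplace smoothing alpha)."""
    bits = 0.0
    for i in range(k, len(target)):
        ctx, sym = target[i - k:i], target[i]
        ctx_counts = counts.get(ctx, {})
        total = sum(ctx_counts.values()) + alpha * len(ALPHABET)
        bits += -math.log2((ctx_counts.get(sym, 0) + alpha) / total)
    return bits

def nrc(target, reference, k=4):
    """Normalized Relative Compression: values near 1 mean target looks unrelated to reference."""
    return code_length_bits(target, train_fcm(reference, k), k) / (len(target) * math.log2(len(ALPHABET)))

print(round(nrc("ACGT" * 200, "ACGT" * 300), 3))   # highly similar sequences -> small value
```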

    Analysis and application of hash-based similarity estimation techniques for biological sequence analysis

    In bioinformatics, a large group of problems requires the computation or estimation of sequence similarity. However, the analysis of biological sequence data faces, among many others, three major challenges: the large amount of generated data, technology-specific errors (which can be mistaken for biological signals), and the frequent need to analyze data without access to a reference genome. Through the use of locality-sensitive hashing methods, both the efficient estimation of sequence similarity and tolerance against the errors specific to biological data can be achieved. We developed a variant of the winnowing algorithm for local minimizer computation, which is specifically geared to deal with repetitive regions within biological sequences. By compressing redundant information, we can both reduce the size of the hash tables required to store minimizer sketches and reduce the number of redundant low-quality alignment candidates. Analyzing the distribution of segment lengths generated by this approach, we can better judge the size of the required data structures and identify hash functions feasible for this technique. Our evaluation verified that simple and fast hash functions, even when using small hash value spaces (hash functions with a small codomain), are sufficient to compute compressed minimizers and perform comparably to uniformly randomly chosen hash values. We also outline an index for a taxonomic protein database using multiple compressed winnowings to identify alignment candidates. To store MinHash values, we present a cache-optimized implementation of a hash table that uses Hopscotch hashing to resolve collisions. As a biological application of similarity-based analysis, we describe the analysis of double-digest restriction-site-associated DNA sequencing (ddRADseq). We implemented simulation software able to model the biological and technological influences of this technology, to allow better development and testing of ddRADseq analysis software. Using datasets generated by our software, as well as data obtained from population-genetic experiments, we developed an analysis workflow for ddRADseq data based on the Stacks software. Since the quality of the results generated by Stacks strongly depends on how well the chosen parameters are adapted to the specific dataset, we developed a Snakemake workflow that automates preprocessing tasks while also allowing the automatic exploration of different parameter sets. As part of this workflow, we developed a PCR deduplication approach able to generate consensus reads incorporating the base quality values (as reported by the sequencing device) without performing an alignment first. As an outlook, we describe a MinHashing approach that can be used for faster and more robust clustering while addressing incomplete digestion and null alleles, two ddRADseq-specific effects that current analysis tools cannot reliably detect.
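
    As a rough illustration of the minimizer/winnowing technique discussed above, the sketch below slides a window of w consecutive k-mers over a sequence, keeps the smallest hashed k-mer per window, and collapses consecutive duplicates (a simple form of the "compressed" minimizers mentioned). The hash function and parameters are illustrative assumptions, not the thesis' repeat-aware variant; note that Python's built-in hash is salted per process, so a fixed hash function would be used for reproducible sketches.

```python
# Minimal winnowing/minimizer sketch: smallest hashed k-mer per window of w k-mers,
# with runs of the same minimizer collapsed. Parameters and hash are illustrative.
def minimizers(seq, k=5, w=4, hash_fn=hash):
    """Return (position, k-mer) minimizers with consecutive duplicates collapsed."""
    kmers = [(hash_fn(seq[i:i + k]), i, seq[i:i + k]) for i in range(len(seq) - k + 1)]
    picked = []
    for start in range(len(kmers) - w + 1):
        h, pos, km = min(kmers[start:start + w])
        if not picked or picked[-1][0] != pos:   # compress runs of the same minimizer
            picked.append((pos, km))
    return picked

print(minimizers("ACGTACGTTTACGGA"))
```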

    Learning to Behave: Internalising Knowledge
