293 research outputs found

    Extension of Lander-Waterman theory for sequencing filtered DNA libraries

    BACKGROUND: The degree to which conventional DNA sequencing techniques will be successful for highly repetitive genomes is unclear. Investigators are therefore considering various filtering methods to select against high-copy sequence in DNA clone libraries. The standard model for random sequencing, Lander-Waterman theory, does not account for two important issues in such libraries: discontinuities and position-based sampling biases (the so-called "edge effect"). We report an extension of the theory for analyzing such configurations. RESULTS: The edge effect cannot be neglected in most cases. Specifically, rates of coverage and gap reduction are appreciably lower than those predicted by standard theory for conventional libraries. Performance decreases as read length increases relative to island size. Although this is the opposite of what happens in a conventional library, the apparent paradox is readily explained in terms of the edge effect. The model agrees well with prototype gene-tagging experiments for Zea mays and Sorghum bicolor. Moreover, the associated density function suggests well-defined probabilistic milestones for the number of reads necessary to capture a given fraction of the gene space. Standard theory remains applicable, however, when sequence redundancy is less than about 1-fold; here, evolution of the random quantities is independent of library gaps and edge effects. This observation effectively validates the practice of using standard theory to estimate the genic enrichment of a library based on light shotgun sequencing. CONCLUSION: Coverage performance using a filtered library is significantly lower than that for an equivalent-sized conventional library, suggesting that directed methods may be more critical for the former. The proposed model should be useful for analyzing future projects.
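
    For orientation, the sketch below evaluates the classic Lander-Waterman expectations that this extension builds on: with N reads of length L over a target of size G, redundancy is c = NL/G, the expected covered fraction is 1 - e^(-c), and the expected number of islands is roughly N*e^(-c). This is only the standard (unfiltered) baseline, not the filtered-library model with edge effects described above; function names and example numbers are hypothetical.

        import math

        def lander_waterman(target_size, read_length, num_reads):
            """Classic Lander-Waterman expectations for an unfiltered library.

            Ignores the minimum detectable overlap for simplicity; returns
            redundancy, expected covered fraction, and expected island count.
            """
            c = num_reads * read_length / target_size      # redundancy c = NL/G
            covered = 1.0 - math.exp(-c)                   # E[fraction covered]
            islands = num_reads * math.exp(-c)             # E[number of islands]
            return c, covered, islands

        # Hypothetical example: 50,000 reads of 700 bp over a 30 Mb gene space.
        print(lander_waterman(30_000_000, 700, 50_000))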

    Coverage statistics for sequence census methods

    Background: We study the statistical properties of fragment coverage in genome sequencing experiments. In an extension of the classic Lander-Waterman model, we consider the effect of the length distribution of fragments. We also introduce the notion of the shape of a coverage function, which can be used to detect aberrations in coverage. The probability theory underlying these problems is essential for constructing models of current high-throughput sequencing experiments, where both sample preparation protocols and sequencing technology particulars can affect fragment length distributions. Results: We show that, regardless of fragment length distribution and under the mild assumption that fragment start sites are Poisson distributed, the fragments produced in a sequencing experiment can be viewed as resulting from a two-dimensional spatial Poisson process. We then study the jump skeleton of the coverage function and show that the induced trees are Galton-Watson trees whose parameters can be computed. Conclusions: Our results extend standard analyses of shotgun sequencing that focus on coverage statistics at individual sites, and provide a null model for detecting deviations from random coverage in high-throughput sequence census-based experiments. By focusing on fragments, we are also led to a new approach for visualizing sequencing data that should be of independent interest.
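
    The null model described here is straightforward to simulate. The sketch below, with hypothetical function names and parameter values, draws fragment start sites from a Poisson process, assigns lengths from an arbitrary distribution, and builds the resulting coverage function with a difference array; the Galton-Watson analysis of the jump skeleton is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_coverage(genome_length, start_rate, mean_frag, sd_frag):
            """Coverage under the null model: Poisson-distributed start sites and
            independent fragment lengths (any length distribution could be used)."""
            n = rng.poisson(start_rate * genome_length)
            starts = rng.integers(0, genome_length, n)
            lengths = np.maximum(1, rng.normal(mean_frag, sd_frag, n)).astype(int)
            ends = np.minimum(starts + lengths, genome_length)

            # Difference array: coverage jumps +1 at each start and -1 at each end.
            diff = np.zeros(genome_length + 1, dtype=int)
            np.add.at(diff, starts, 1)
            np.add.at(diff, ends, -1)
            return np.cumsum(diff[:-1])          # per-base coverage function

        cov = simulate_coverage(1_000_000, start_rate=0.002, mean_frag=350, sd_frag=50)
        print("mean depth:", cov.mean(), "fraction uncovered:", (cov == 0).mean())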

    A general coverage theory for shotgun DNA sequencing


    Occupancy Modeling, Maximum Contig Size Probabilities and Designing Metagenomics Experiments

    Mathematical aspects of coverage and gaps in genome assembly have received substantial attention by bioinformaticians. Typical problems under consideration suppose that reads can be experimentally obtained from a single genome and that the number of reads will be set to cover a large percentage of that genome at a desired depth. In metagenomics experiments genomes from multiple species are simultaneously analyzed and obtaining large numbers of reads per genome is unlikely. We propose the probability of obtaining at least one contig of a desired minimum size from each novel genome in the pool without restriction based on depth of coverage as a metric for metagenomic experimental design. We derive an approximation to the distribution of maximum contig size for single genome assemblies using relatively few reads. This approximation is verified in simulation studies and applied to a number of different metagenomic experimental design problems, ranging in difficulty from detecting a single novel genome in a pool of known species to detecting each of a random number of novel genomes collectively sized and with abundances corresponding to given distributions in a single pool
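
    The authors' analytical approximation is not reproduced here, but the quantity of interest is easy to estimate by Monte Carlo. The sketch below, with hypothetical names and parameter values, drops a small number of fixed-length reads uniformly on one genome, merges overlapping reads into contigs, and estimates the probability that the largest contig reaches a desired minimum size.

        import numpy as np

        rng = np.random.default_rng(1)

        def p_max_contig_at_least(genome_size, read_length, num_reads,
                                  min_contig, trials=1000):
            """Monte Carlo estimate of P(largest contig >= min_contig) when
            num_reads fixed-length reads land uniformly on a single genome."""
            hits = 0
            for _ in range(trials):
                starts = np.sort(rng.integers(0, genome_size - read_length, num_reads))
                ends = starts + read_length
                best, cur_start, cur_end = 0, starts[0], ends[0]
                for s, e in zip(starts[1:], ends[1:]):
                    if s <= cur_end:                  # overlap: extend current contig
                        cur_end = max(cur_end, e)
                    else:                             # gap: close contig, start anew
                        best = max(best, cur_end - cur_start)
                        cur_start, cur_end = s, e
                best = max(best, cur_end - cur_start)
                hits += best >= min_contig
            return hits / trials

        # Hypothetical design question: with 2,000 reads of 150 bp from a 2 Mb
        # novel genome, how likely is at least one contig of 500 bp?
        print(p_max_contig_at_least(2_000_000, 150, 2_000, 500))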

    Aspects of coverage in medical DNA sequencing

    Background: DNA sequencing is now emerging as an important component in biomedical studies of diseases like cancer. Short-read, highly parallel sequencing instruments are expected to be used heavily for such projects, but many design specifications have yet to be conclusively established. Perhaps the most fundamental of these is the redundancy required to detect sequence variations, which bears directly upon genomic coverage and the consequent resolving power for discerning somatic mutations. Results: We address the medical sequencing coverage problem via an extension of the standard mathematical theory of haploid coverage. The expected diploid multi-fold coverage, as well as its generalization for aneuploidy, is derived, and these expressions can be readily evaluated for any project. The resulting theory is used as a scaling law to calibrate performance to that of standard BAC sequencing at 8× to 10× redundancy, i.e. for expected coverages that exceed 99% of the unique sequence. A differential strategy is formalized for tumor/normal studies wherein tumor samples are sequenced more deeply than normal ones. In particular, both tumor alleles should be detected at least twice, while both normal alleles are detected at least once. Our theory predicts these requirements can be met for tumor and normal redundancies of approximately 26× and 21×, respectively. We explain why these values do not differ by a factor of 2, as might intuitively be expected. Future technology developments should prompt even deeper sequencing of tumors, but the 21× value for normal samples is essentially a constant. Conclusion: Given the assumptions of standard coverage theory, our model gives pragmatic estimates for required redundancy. The differential strategy should be an efficient means of identifying potential somatic mutations for further study.
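
    A much simplified per-site version of the allele-detection criterion can be written down directly: if total redundancy is r and each read samples either allele with probability 1/2, then each allele's depth at a heterozygous position is approximately Poisson with mean r/2, and the probability that both alleles are seen at least k times follows. The sketch below rests only on that idealized assumption; it will not reproduce the paper's genome-wide 26×/21× calibration, which requires the full theory. Function names are illustrative.

        from math import exp

        def poisson_cdf(k, lam):
            """P(X <= k) for X ~ Poisson(lam)."""
            term = total = exp(-lam)
            for i in range(1, k + 1):
                term *= lam / i
                total += term
            return total

        def p_both_alleles_seen(total_redundancy, min_reads_per_allele):
            """Per-site probability that both alleles at a heterozygous diploid
            position are each sampled at least min_reads_per_allele times,
            assuming each allele's depth is Poisson(total_redundancy / 2)."""
            p_one = 1.0 - poisson_cdf(min_reads_per_allele - 1, total_redundancy / 2.0)
            return p_one ** 2

        # Differential tumor/normal criterion from the abstract:
        # tumor alleles seen at least twice, normal alleles at least once.
        print("tumor  at 26x:", p_both_alleles_seen(26, 2))
        print("normal at 21x:", p_both_alleles_seen(21, 1))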

    Understanding host-microbe interactions in maize kernel and sweetpotato leaf metagenomic profiles.

    Functional and quantitative metagenomic profiling remains challenging and limits our understanding of host-microbe interactions. This body of work aims to mitigate these challenges by using a novel quantitative reduced representation sequencing strategy (OmeSeq-qRRS), developing fully automated software for quantitative metagenomic/microbiome profiling (Qmatey: quantitative metagenomic alignment and taxonomic identification using exact-matching), and implementing these tools to understand plant-microbe-pathogen interactions in maize and sweetpotato. The next generation sequencing-based OmeSeq-qRRS leverages the strengths of shotgun whole-genome sequencing at a cost lower than that of the more affordable amplicon sequencing method. The novel FASTQ data compression/indexing and enhanced multithreading of MegaBLAST in Qmatey allow for computational speeds several thousand-fold faster than typical runs. Regardless of sample number, the analytical pipeline can be completed within days for genome-wide sequence data and provides broad-spectrum taxonomic profiling (viruses to eukaryotes). As a proof of concept, these protocols and novel analytical pipelines were implemented to characterize the viruses within the leaf microbiome of a sweetpotato population that represents the global genetic diversity, and the kernel microbiomes of genetically modified (GMO) and non-GMO maize hybrids. The metagenome profiles and high-density SNP data were integrated to identify host genetic factors (disease resistance and intracellular transport candidate genes) that underpin sweetpotato-virus interactions. Additionally, microbial community dynamics were observed in the presence of pathogens, leading to the identification of multipartite interactions that modulate disease severity through co-infection and species competition. This study highlights a low-cost, quantitative and strain/species-level metagenomic profiling approach, new tools that complement the assay's novel features and provide fast computation, and the potential for advancing functional metagenomic studies.

    The theory of discovering rare variants via DNA sequencing

    Background: Rare population variants are known to have important biomedical implications, but their systematic discovery has only recently been enabled by advances in DNA sequencing. The design process of a discovery project remains formidable, being limited to ad hoc mixtures of extensive computer simulation and pilot sequencing. Here, the task is examined from a general mathematical perspective. Results: We pose and solve the population sequencing design problem and subsequently apply standard optimization techniques that maximize the discovery probability. Emphasis is placed on cases whose discovery thresholds place them within reach of current technologies. We find that parameter values characteristic of rare-variant projects lead to a general, yet remarkably simple set of optimization rules. Specifically, optimal processing occurs at constant values of the per-sample redundancy, refuting current notions that sample size should be selected outright. Optimal project-wide redundancy and sample size are then shown to be inversely proportional to the desired variant frequency. A second family of constants governs these relationships, permitting one to immediately establish the most efficient settings for a given set of discovery conditions. Our results largely concur with the empirical design of the Thousand Genomes Project, though they furnish some additional refinement. Conclusion: The optimization principles reported here dramatically simplify the design process and should be broadly useful as rare-variant projects become both more important and routine in the future.
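
    The trade-off between sample size and per-sample redundancy can be illustrated with a deliberately simplified model, not the paper's formulation: assume rare variants occur only as heterozygotes, so each sample is a carrier with probability about 2f; assume a carrier's variant-allele depth is Poisson with mean half the per-sample redundancy; and require at least two variant reads for detection. Scanning depths under a fixed project-wide redundancy budget then shows discovery probability peaking at a moderate, constant per-sample redundancy, in the spirit of the result described above. All names and parameter values are hypothetical.

        from math import exp

        def poisson_sf(k, lam):
            """P(X >= k + 1) for X ~ Poisson(lam), i.e. one minus the CDF at k."""
            term = cdf = exp(-lam)
            for i in range(1, k + 1):
                term *= lam / i
                cdf += term
            return 1.0 - cdf

        def discovery_prob(freq, n_samples, per_sample_redundancy, min_reads=2):
            """P(a variant at population frequency freq is detected in >= 1 sample),
            under the simplifying assumptions stated in the text above."""
            p_detect_in_carrier = poisson_sf(min_reads - 1, per_sample_redundancy / 2.0)
            p_per_sample = 2.0 * freq * p_detect_in_carrier
            return 1.0 - (1.0 - p_per_sample) ** n_samples

        # Fixed project-wide redundancy budget: trade sample size against depth.
        budget, freq = 6000.0, 0.001      # hypothetical budget and target frequency
        for r in (2, 3, 4, 6, 8, 12, 20):
            n = int(budget / r)
            print(f"{r:>2}x per sample, {n:>4} samples -> "
                  f"P(discover) = {discovery_prob(freq, n, r):.3f}")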

    Lessons learned from the initial sequencing of the pig genome: comparative analysis of an 8 Mb region of pig chromosome 17

    The sequencing, annotation and comparative analysis of an 8 Mb region of pig chromosome 17 allows the coverage and quality of the pig genome sequencing project to be assessed.