Digital PCR methods improve detection sensitivity and measurement precision of low abundance mtDNA deletions
Mitochondrial DNA (mtDNA) mutations are a common cause of primary mitochondrial disorders and have also been implicated in a broad collection of conditions, including aging, neurodegeneration, and cancer. Prevalent among these pathogenic variants are mtDNA deletions, which show a strong bias for the loss of sequence in the major arc between, but not including, the heavy- and light-strand origins of replication. Because individual mtDNA deletions can accumulate focally, occur with multiple mixed breakpoints, and coexist with normal mtDNA sequences, methods that detect broad-spectrum mutations with enhanced sensitivity and limited cost have both research and clinical applications. In this study, we evaluated semi-quantitative and digital PCR-based methods of mtDNA deletion detection using double-stranded reference templates or biological samples. Our aim was to describe key experimental assay parameters that will enable the analysis of low levels of, or small differences in, mtDNA deletion load during disease progression, with limited false-positive detection. We determined that the digital PCR method significantly improved mtDNA deletion detection sensitivity through absolute quantitation, improved precision, and reduced assay standard error.
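The absolute quantitation that digital PCR provides comes from Poisson statistics on partition counts: the mean number of copies per partition is recovered from the fraction of negative partitions. A minimal sketch of that calculation follows; the partition volume and the two-assay (deletion vs. wild-type) design are illustrative assumptions, not this study's protocol.

```python
import math

def dpcr_concentration(positive, total, partition_volume_ul):
    """Estimate target copies per microliter from digital PCR counts.

    Poisson correction: lambda = -ln(fraction of negative partitions)
    gives the mean number of template copies per partition.
    """
    negative = total - positive
    lam = -math.log(negative / total)   # mean copies per partition
    return lam / partition_volume_ul    # copies per microliter

def deletion_load(del_positive, wt_positive, total, vol_ul=0.00085):
    """Fraction of mtDNA molecules carrying the deletion (heteroplasmy).

    Assumes two assays run on the same partitioned sample; the default
    0.85 nL partition volume is a typical droplet size, chosen here
    only for illustration.
    """
    c_del = dpcr_concentration(del_positive, total, vol_ul)
    c_wt = dpcr_concentration(wt_positive, total, vol_ul)
    return c_del / (c_del + c_wt)
```

Note that the deletion load itself is volume-independent, since the partition volume cancels in the ratio.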
Diminishing Return for Increased Mappability with Longer Sequencing Reads: Implications of the k-mer Distributions in the Human Genome
The amount of non-unique sequence (non-singletons) in a genome directly
affects the difficulty of read alignment to a reference assembly for
high-throughput sequencing data. Although a greater read length increases the
chance of reads being uniquely mapped to the reference genome, a quantitative
analysis of
the influence of read lengths on mappability has been lacking. To address this
question, we evaluate the k-mer distribution of the human reference genome. The
k-mer frequency is determined for k ranging from 20 to 1000 basepairs. We use
the proportion of non-singleton k-mers to evaluate the mappability of reads for
a corresponding read length. We observe that the proportion of non-singletons
decreases slowly with increasing k, and can be fitted by piecewise power-law
functions with different exponents at different k ranges. A faster decay at
smaller values for k indicates more limited gains for read lengths > 200
basepairs. The frequency distributions of k-mers exhibit long tails in a
power-law-like trend, and rank frequency plots exhibit a concave Zipf's curve.
The most frequent 1000-mers are concentrated in 172 kilobase-scale regions,
including four large stretches on chromosomes 1 and X that contain genes with
biomedical implications. Even a read length of 1,000 basepairs would be
insufficient to map these regions reliably.
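The abstract's central quantity, the proportion of non-singleton k-mers, can be computed directly for a toy sequence. This is only a sketch: a real genome requires a disk-backed k-mer counter, and reverse complements are ignored here.

```python
from collections import Counter

def nonsingleton_fraction(genome, k):
    """Fraction of k-mer positions whose k-mer occurs more than once.

    A read of length k drawn from a non-singleton position cannot be
    uniquely placed in the genome by sequence alone, so this fraction
    bounds the mappability of length-k reads.
    """
    total = len(genome) - k + 1
    counts = Counter(genome[i:i + k] for i in range(total))
    multi = sum(c for c in counts.values() if c > 1)
    return multi / total
```

On a repetitive toy sequence the fraction is high at small k and drops as k grows past the repeat length, mirroring the diminishing-return curve the paper fits with piecewise power laws.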
Methods for Joint Normalization and Comparison of Hi-C data
The development of chromatin conformation capture technology has opened new avenues of study into the 3D structure and function of the genome. Chromatin structure is known to influence gene regulation, and differences in structure are now emerging as a regulatory mechanism that distinguishes, for example, differentiated cell types and disease versus normal states. Hi-C sequencing technology now provides a way to study the 3D interactions of chromatin over the whole genome. However, like all sequencing technologies, Hi-C suffers from several forms of bias stemming from both the technology and the DNA sequence itself. Several methods have been developed for normalizing individual Hi-C datasets, but little work has been done on joint normalization methods for comparing two or more Hi-C datasets. To make full use of Hi-C data, joint normalization and statistical comparison techniques are needed to identify regions where chromatin structure differs between conditions.
We develop methods for the joint normalization and comparison of two Hi-C datasets, which we then extend to more complex experimental designs. Our normalization method is novel in that it makes use of the distance-dependent nature of chromatin interactions. Our modification of the Minus vs. Average (MA) plot to the Minus vs. Distance (MD) plot allows for a nonparametric, data-driven normalization technique using loess smoothing. Additionally, we present a simple statistical method using Z-scores for detecting differentially interacting regions between two datasets. Our initial method was published as the Bioconductor R package HiCcompare [http://bioconductor.org/packages/HiCcompare/](http://bioconductor.org/packages/HiCcompare/).
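The MD-plot idea can be sketched in a few lines. HiCcompare fits a loess curve of M = log2(IF2/IF1) against genomic distance D and removes it; to keep the example dependency-free, the stand-in below replaces the loess fit with a per-distance mean, splitting the correction evenly between the two datasets.

```python
import math
from collections import defaultdict

def md_normalize(if1, if2):
    """Joint normalization of two Hi-C datasets on the MD plot (sketch).

    if1, if2: dicts mapping (bin_i, bin_j) -> interaction frequency.
    For each distance d = |j - i|, the mean of M = log2(IF2/IF1) is
    computed and half of it is applied to each dataset, centring M on
    zero at every distance. (HiCcompare uses loess here, not a mean.)
    """
    pairs = sorted(set(if1) & set(if2))
    m = {p: math.log2(if2[p] / if1[p]) for p in pairs}
    by_dist = defaultdict(list)
    for (i, j) in pairs:
        by_dist[abs(j - i)].append(m[(i, j)])
    trend = {d: sum(v) / len(v) for d, v in by_dist.items()}
    out1, out2 = {}, {}
    for (i, j) in pairs:
        f = trend[abs(j - i)] / 2.0
        out1[(i, j)] = if1[(i, j)] * 2 ** f    # scale IF1 up by half the bias
        out2[(i, j)] = if2[(i, j)] * 2 ** -f   # scale IF2 down by half
    return out1, out2
```

After normalization, Z-scores of the residual M values within each distance stratum give the simple difference test the abstract describes.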
We then extended our normalization and comparison methods for use in complex Hi-C experiments with more than two datasets and optional covariates. The normalization method was extended to jointly normalize any number of Hi-C datasets using a cyclic loess procedure on the MD plot; this technique removes between-dataset biases efficiently and effectively even when several datasets are analyzed at once. Our comparison method implements a generalized linear model-based approach for comparing complex Hi-C experiments, which may have more than two groups and additional covariates. The extended methods are also available as a Bioconductor R package [http://bioconductor.org/packages/multiHiCcompare/](http://bioconductor.org/packages/multiHiCcompare/). Finally, we demonstrate the use of HiCcompare and multiHiCcompare in several test cases on real data and compare them to other similar methods (https://doi.org/10.1002/cpbi.76).
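The cyclic idea generalizes the pairwise normalization: each dataset is repeatedly shifted toward the per-distance consensus of all datasets. The self-contained sketch below again substitutes a per-distance mean for the loess fit, so it illustrates the cycling strategy rather than multiHiCcompare's actual smoother.

```python
import math
from collections import defaultdict

def cyclic_md_normalize(datasets, rounds=2):
    """Sketch of cyclic joint normalization for several Hi-C datasets.

    datasets: list of dicts mapping (bin_i, bin_j) -> frequency, shar-
    ing a common set of pairs. On each pass, every dataset's log2
    frequencies are shifted toward the per-distance mean over all
    datasets (the role the loess fit on the MD plot plays in
    multiHiCcompare). Iterating converges the per-distance trends.
    """
    pairs = sorted(set.intersection(*(set(d) for d in datasets)))
    logs = [{p: math.log2(d[p]) for p in pairs} for d in datasets]
    for _ in range(rounds):
        for lg in logs:                       # cycle over datasets
            by_dist = defaultdict(list)
            for (i, j) in pairs:
                mean_all = sum(l[(i, j)] for l in logs) / len(logs)
                by_dist[abs(j - i)].append(lg[(i, j)] - mean_all)
            trend = {d: sum(v) / len(v) for d, v in by_dist.items()}
            for (i, j) in pairs:
                lg[(i, j)] -= trend[abs(j - i)]
    return [{p: 2 ** lg[p] for p in pairs} for lg in logs]
```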
Accurate estimation of homologue-specific DNA concentration-ratios in cancer samples allows long-range haplotyping
Interpretation of allelic copy measurements at polymorphic markers in cancer samples presents distinctive challenges and opportunities. Due to the gross chromosomal alterations that frequently occur in cancer (aneuploidy), many genomic regions are present at homologous-allele imbalance. Within such regions, the unequal contribution of alleles at heterozygous markers allows for direct phasing of the haplotype derived from each individual parent. In addition, genome-wide estimates of homologue-specific copy-ratios (HSCRs) are important for interpretation of the cancer genome in terms of fixed integral copy-numbers. We describe HAPSEG, a probabilistic method to interpret bi-allelic marker data in cancer samples. HAPSEG operates by partitioning the genome into segments of distinct copy number and modeling the four distinct genotypes in each segment. We describe general methods for fitting these models to data which are suitable for both SNP microarrays and massively parallel sequencing data. In addition, we demonstrate a specially tailored error model for interpretation of systematic variations arising in microarray platforms. The ability to directly determine haplotypes from cancer samples represents an opportunity to expand reference panels of phased chromosomes, which may be of general interest in various population-genetic applications. In addition, this property may be exploited to interrogate the relationship between germline risk and cancer phenotype with greater sensitivity than is possible using unphased genotypes. Finally, we exploit the statistical dependency of phased genotypes to enable the fitting of more elaborate sample-level error-model parameters, allowing more accurate estimation of HSCRs in cancer samples.
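The four-genotype segment model can be illustrated with expected B-allele fractions. The function below is a simplified sketch, not HAPSEG's actual likelihood: given tumour homologue copy numbers, a tumour purity, and a diploid heterozygous normal admixture, it returns the expected allelic fraction for each of the four genotypes at a marker in the segment.

```python
def expected_allele_fractions(major_cn, minor_cn, purity):
    """Expected B-allele fractions for the four genotypes in a segment.

    major_cn, minor_cn: tumour copy numbers of the two homologues.
    purity: fraction of tumour cells in the sample; the remaining
    (1 - purity) is assumed to be diploid normal tissue (copy 1 + 1).
    Genotype labels: AA/BB homozygous, AB = B allele on the minor
    homologue, BA = B allele on the major homologue.
    """
    total = purity * (major_cn + minor_cn) + (1 - purity) * 2

    def baf(b_tumour, b_normal):
        return (purity * b_tumour + (1 - purity) * b_normal) / total

    return {
        "AA": baf(0, 0),
        "AB": baf(minor_cn, 1),
        "BA": baf(major_cn, 1),
        "BB": baf(major_cn + minor_cn, 2),
    }
```

The separation between the AB and BA bands is what makes direct phasing possible in regions of homologous-allele imbalance: the homologue carrying each heterozygous allele is identified by which band the marker falls into.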
Cell-type-specific resolution epigenetics without the need for cell sorting or single-cell biology.
High costs and technical limitations of cell sorting and single-cell techniques currently restrict the collection of large-scale, cell-type-specific DNA methylation data. This, in turn, impedes our ability to tackle key biological questions that pertain to variation within a population, such as identification of disease-associated genes at a cell-type-specific resolution. Here, we show mathematically and empirically that the cell-type-specific methylation levels of an individual can be learned from that individual's tissue-level bulk data, conceptually emulating the case where the individual has been profiled at single-cell resolution and the signals were then aggregated in each cell population separately. Provided with this unprecedented way to perform powerful large-scale epigenetic studies with cell-type-specific resolution, we revisit previous studies with tissue-level bulk methylation and reveal novel associations with leukocyte composition in blood and with rheumatoid arthritis. For the latter, we further show consistency with validation data collected from sorted leukocyte subtypes.
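The identity underlying such methods is that bulk methylation is a proportion-weighted mixture of cell-type-specific levels. The sketch below recovers population-level cell-type means at a single CpG by least squares across individuals; this is a much simpler setting than the paper's individual-level estimates, and the two-cell-type normal equations are hard-coded for brevity.

```python
def celltype_levels_from_bulk(bulk, props):
    """Recover two cell-type-specific methylation levels at one CpG.

    bulk[i]  : tissue-level methylation of individual i (0..1)
    props[i] : (proportion of cell type 1, proportion of cell type 2)
    Solves bulk_i ~= p1 * m1 + p2 * m2 in least squares via the
    2x2 normal equations.
    """
    a11 = sum(p1 * p1 for p1, _ in props)
    a12 = sum(p1 * p2 for p1, p2 in props)
    a22 = sum(p2 * p2 for _, p2 in props)
    b1 = sum(p1 * y for (p1, _), y in zip(props, bulk))
    b2 = sum(p2 * y for (_, p2), y in zip(props, bulk))
    det = a11 * a22 - a12 * a12
    m1 = (a22 * b1 - a12 * b2) / det
    m2 = (a11 * b2 - a12 * b1) / det
    return m1, m2
```

Variation in cell-type proportions across individuals is what makes the system identifiable; if every sample had the same composition, the design matrix would be rank-deficient.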
Computational Tools for Immune Repertoire Characterization and Primer Set Design
The enormous decrease in the cost of genomic sequencing over the past two decades has enabled researchers to revisit previously unaddressable questions in sequence analysis. However, this boom of genomic information has introduced new sets of problems that often demand computationally efficient methods. In this work, we describe computational tools for two such settings involving large-scale genomic data: 1) estimating copy number and allelic variation in two highly complex gene families, and 2) selectively sequencing a target genome in a complex DNA sample.

We first describe a method that takes short reads from high-throughput sequencing and characterizes both copy number and allelic variation in the IGHV and TRBV loci. These two loci can vary extensively between individuals in copy number and contain genes that are highly similar, making their analysis technically challenging. Additionally, we have conducted the first study of these two loci in a globally diverse sample of hundreds of individuals from over a hundred populations. In addition to providing insight into the different evolutionary paths of the IGHV and TRBV loci, our results are also important to the adaptive immune repertoire sequencing community, where the lack of frequencies for common alleles and copy number variants is hampering existing analytical pipelines.

In our second problem setting, we describe SOAPswga, an optimized and parallelized pipeline for primer design in the context of selective amplification. Unlike previous heuristic-based methods, SOAPswga uses machine learning to evaluate both individual primers and primer sets. Additionally, rather than searching for primer sets by brute force, as predecessor methods do, SOAPswga uses branch-and-bound principles to pursue only the most promising sets. These optimizations, including the parallelization of each step, reduce runtime from the order of weeks to minutes.
We also discuss the results of our pipeline applied to the selective amplification of Mycobacterium tuberculosis in a sample of human blood. Lastly, we expand on the importance of this work and its potential usefulness in any setting involving targeted sequencing.
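The branch-and-bound strategy the abstract describes can be sketched on a toy objective. Here a primer set is scored only by how many target sites it covers; the scoring is illustrative and is not SOAPswga's machine-learned evaluation.

```python
def best_primer_set(primers, max_size):
    """Branch-and-bound search for a high-coverage primer set (sketch).

    primers: dict mapping primer name -> set of covered target sites.
    Returns (coverage, names) for the best set of at most max_size
    primers, pruning any branch whose optimistic bound (current
    coverage plus the full coverage of the best remaining primers)
    cannot beat the incumbent.
    """
    names = sorted(primers, key=lambda n: -len(primers[n]))
    best = (0, [])

    def search(idx, chosen, covered):
        nonlocal best
        if len(covered) > best[0]:
            best = (len(covered), list(chosen))
        if idx == len(names) or len(chosen) == max_size:
            return
        # Optimistic bound: remaining picks add their full coverage.
        slots = max_size - len(chosen)
        bound = len(covered) + sum(len(primers[n]) for n in names[idx:idx + slots])
        if bound <= best[0]:
            return  # prune: even the best case cannot beat the incumbent
        search(idx + 1, chosen + [names[idx]], covered | primers[names[idx]])
        search(idx + 1, chosen, covered)

    search(0, [], set())
    return best
```

Because the candidates are sorted by coverage, the bound is valid and tightens quickly, which is what lets branch-and-bound avoid the exponential brute-force enumeration of sets.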