Exploring single-sample SNP and INDEL calling with whole-genome de novo assembly
Motivation: Eugene Myers in his string graph paper (Myers, 2005) suggested
that in a string graph or equivalently a unitig graph, any path spells a valid
assembly. As a string/unitig graph also encodes every valid assembly of the reads,
such a graph, provided that it can be constructed correctly, is in fact a
lossless representation of the reads. In principle, every analysis based on
whole-genome shotgun sequencing (WGS) data, such as SNP and insertion/deletion
(INDEL) calling, can also be achieved with unitigs.
Results: To explore the feasibility of using de novo assembly in the context
of resequencing, we developed a de novo assembler, fermi, that assembles
Illumina short reads into unitigs while preserving most of the information in the
input reads. SNPs and INDELs can be called by mapping the unitigs against a
reference genome. By applying the method to 35-fold human resequencing data, we
showed that in comparison to the standard pipeline, our approach yields similar
accuracy for SNP calling and better results for INDEL calling. It has higher
sensitivity than other de novo assembly based methods for variant calling. Our
work suggests that variant calling with de novo assembly may be a beneficial
complement to the standard variant calling pipeline for whole-genome
resequencing. On the methodological side, we propose the FMD-index for
forward-backward extension of DNA sequences, a fast algorithm for finding all
super-maximal exact matches, and a one-pass construction of unitigs from the
FMD-index.
Availability: http://github.com/lh3/fermi
Contact: [email protected]
Comment: Rev2: submitted version with minor improvements; 7 pages
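To make the idea of assembly-based variant calling concrete, the toy sketch below (not fermi's implementation; the gapped alignment, coordinates and function name are illustrative assumptions) derives SNP and INDEL candidates from a pairwise alignment of a unitig against a reference segment: mismatched columns become SNP candidates, gap columns become INDEL candidates.

```python
# Toy sketch: derive SNP/INDEL candidates from a gapped alignment of a
# unitig against a reference segment. Illustration only, not fermi's code.

def call_variants(ref_aln, utg_aln, ref_start):
    """ref_aln/utg_aln: equal-length aligned strings with '-' for gaps;
    ref_start: 0-based reference coordinate of the first aligned column."""
    assert len(ref_aln) == len(utg_aln)
    variants = []                    # (position, REF, ALT, type)
    pos, i = ref_start, 0
    while i < len(ref_aln):
        r, q = ref_aln[i], utg_aln[i]
        if r != '-' and q != '-' and r != q:        # substitution column
            variants.append((pos, r, q, "SNP"))
            pos += 1
            i += 1
        elif q == '-':                              # gap in unitig: deletion
            j = i
            while j < len(utg_aln) and utg_aln[j] == '-':
                j += 1
            variants.append((pos, ref_aln[i:j], "", "DEL"))
            pos += j - i
            i = j
        elif r == '-':                              # gap in reference: insertion
            j = i
            while j < len(ref_aln) and ref_aln[j] == '-':
                j += 1
            variants.append((pos, "", utg_aln[i:j], "INS"))
            i = j
        else:                                       # matching column
            pos += 1
            i += 1
    return variants

# Example: one SNP and one 2-bp deletion relative to the reference.
print(call_variants("ACGTTACGGA", "ACATT--GGA", ref_start=100))
# -> [(102, 'G', 'A', 'SNP'), (105, 'AC', '', 'DEL')]
```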
Cerulean: A hybrid assembly using high throughput short and long reads
Genome assembly from high-throughput short-read data arguably remains an
unresolved task for repetitive genomes: when the length of a repeat exceeds the
read length, it becomes difficult to unambiguously connect the flanking
regions. The emergence of third-generation sequencing (Pacific Biosciences)
with long reads offers the opportunity to resolve complicated repeats that
cannot be resolved with short-read data. However, these long reads have a high
error rate, and it is an uphill task to assemble the genome without additional
high-quality short reads. Recently, Koren et al. (2012) proposed an approach
that uses high-quality short-read data to correct these long reads and thus
make assembly from long reads possible. However, due to the large size of both
datasets (short and long reads), error correction of the long reads requires
excessively high computational resources, even for small bacterial genomes. In
this work, instead of error-correcting the long reads, we first assemble the
short reads and then map the long reads onto the assembly graph to resolve
repeats.
Contribution: We present a hybrid assembly approach that is both
computationally efficient and produces high-quality assemblies. Our algorithm
first operates on a simplified version of the assembly graph consisting only of
long contigs and gradually improves the assembly by adding smaller contigs in
each iteration. In contrast to the state-of-the-art long-read error correction
technique, which requires substantial computational resources and long running
times on a supercomputer even for bacterial genome datasets, our software can
produce comparable assemblies on a standard desktop in a short running time.
Comment: Peer-reviewed and presented as part of the 13th Workshop on Algorithms
in Bioinformatics (WABI 2013)
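The core idea of mapping long reads onto a contig graph to resolve repeats can be illustrated with a minimal sketch (a simplification for illustration, not Cerulean's actual algorithm): each long read that traverses a repeat contig records which contig it entered from and which it exited to, and well-supported entry-exit pairs let the repeat be duplicated and its flanks joined unambiguously.

```python
# Minimal sketch (illustrative assumption, not Cerulean's algorithm): use
# long-read traversals of a repeat contig to pair repeat entries with exits.

from collections import defaultdict

def pair_through_repeat(long_read_paths, repeat, min_support=2):
    """long_read_paths: lists of contig names in the order a long read visits
    them. Returns (entry, exit) contig pairs around `repeat` supported by at
    least `min_support` long reads."""
    support = defaultdict(int)
    for path in long_read_paths:
        for i, node in enumerate(path):
            if node == repeat and 0 < i < len(path) - 1:
                support[(path[i - 1], path[i + 1])] += 1
    return {pair for pair, n in support.items() if n >= min_support}

# Repeat contig R can be entered from A or B and left toward C or D; the
# long-read alignments say the true traversals are A->R->C and B->R->D.
reads = [["A", "R", "C"], ["A", "R", "C"], ["B", "R", "D"], ["B", "R", "D"]]
print(sorted(pair_through_repeat(reads, "R")))   # [('A', 'C'), ('B', 'D')]
```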
Linking de novo assembly results with long DNA reads by dnaasm-link application
Currently, third-generation sequencing techniques, which allow much longer DNA
reads to be obtained than next-generation sequencing technologies, are becoming
more and more popular. There are many possibilities for combining data from
next-generation and third-generation sequencing.
Herein, we present a new application called dnaasm-link for linking contigs,
a result of de novo assembly of second-generation sequencing data, with long
DNA reads. Our tool includes an integrated module that fills gaps with a
suitable fragment of an appropriate long DNA read, which improves the
consistency of the resulting DNA sequences. This feature is very important, in
particular for complex DNA regions, as presented in the paper. Finally, our
implementation outperforms other state-of-the-art tools in terms of speed and
memory requirements, which may enable its use for organisms with large genomes,
something that is not feasible with existing applications.
The presented application has several advantages: (i) significant memory
optimization and reduced computation time, (ii) filling gaps with the
appropriate fragment of a specified long DNA read, and (iii) reducing the
number of spanned and unspanned gaps in existing genome drafts.
The application is freely available to all users under the GNU Library or
Lesser General Public License version 3.0 (LGPLv3). The demo application,
Docker image and source code are available at http://dnaasm.sourceforge.net.
Comment: 16 pages, 5 figures
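The gap-filling step can be sketched in a few lines (the function and coordinates are illustrative assumptions, not dnaasm-link's interface): when a long read aligns to the tail of the left contig and the head of the right contig, the intervening long-read bases are used to fill the gap instead of leaving a run of Ns.

```python
# Illustrative sketch of gap filling with a long read (assumed coordinates,
# not dnaasm-link's actual interface).

def fill_gap(left_contig, right_contig, long_read,
             left_end_on_read, right_start_on_read):
    """left_end_on_read: read coordinate where the alignment to the left
    contig ends; right_start_on_read: read coordinate where the alignment
    to the right contig starts."""
    if right_start_on_read < left_end_on_read:
        raise ValueError("contigs overlap on the long read; merge instead")
    filler = long_read[left_end_on_read:right_start_on_read]
    return left_contig + filler + right_contig

left = "ACGTACGT"
right = "TTGGCCAA"
# The long read spans both contig ends with extra bases ("GGGAA") in between.
read = "ACGTACGTGGGAATTGGCCAA"
print(fill_gap(left, right, read, left_end_on_read=8, right_start_on_read=13))
# -> ACGTACGTGGGAATTGGCCAA
```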
Haplotype Assembly: An Information Theoretic View
This paper studies the haplotype assembly problem from an information
theoretic perspective. A haplotype is a sequence of nucleotide bases on a
chromosome, often conveniently represented by a binary string, that differs from
the bases in the corresponding positions on the other chromosome in a
homologous pair. Information about the order of bases in a genome is readily
inferred using short reads provided by high-throughput DNA sequencing
technologies. In this paper, the recovery of the target pair of haplotype
sequences using short reads is rephrased as a joint source-channel coding
problem. Two messages, representing haplotypes and chromosome memberships of
reads, are encoded and transmitted over a channel with erasures and errors,
where the channel model reflects salient features of high-throughput
sequencing. The focus of this paper is on the required number of reads for
reliable haplotype reconstruction, and both the necessary and sufficient
conditions are presented with order-wise optimal bounds.
Comment: 30 pages, 5 figures, 1 table, journal
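A toy simulation of this channel view is sketched below (the parameter names and the simple greedy decoder are illustrative assumptions, not the paper's scheme): reads observe windows of a binary haplotype or of its complement depending on chromosome membership, bits are flipped with some error probability, positions are dropped with some erasure probability, and a decoder alternates between guessing read memberships and majority-voting each site.

```python
# Toy simulation of the haplotype-assembly channel model (illustration only).

import random

def simulate_reads(hap, n_reads, read_len, err=0.05, erasure=0.1, seed=0):
    """Each read observes a window of the haplotype (membership 0) or of its
    complement (membership 1), with bit-flip errors and erased positions."""
    rng = random.Random(seed)
    reads = []                                       # each read: {site: bit}
    for _ in range(n_reads):
        start = rng.randrange(len(hap) - read_len + 1)
        member = rng.randrange(2)
        obs = {}
        for i in range(start, start + read_len):
            if rng.random() < erasure:
                continue                             # erasure: position dropped
            bit = hap[i] ^ member                    # complement on haplotype 2
            if rng.random() < err:
                bit ^= 1                             # substitution error
            obs[i] = bit
        reads.append(obs)
    return reads

def decode(reads, n_sites, rounds=5):
    """Greedy decoder: guess each read's membership against the current
    estimate, then majority-vote every site."""
    est = [0] * n_sites
    for _ in range(rounds):
        votes = [[0, 0] for _ in range(n_sites)]
        for obs in reads:
            agree = sum(obs[i] == est[i] for i in obs)
            member = 0 if 2 * agree >= len(obs) else 1
            for i, b in obs.items():
                votes[i][b ^ member] += 1            # undo the complement
        est = [0 if v[0] >= v[1] else 1 for v in votes]
    return est

rng = random.Random(1)
hap = [rng.randrange(2) for _ in range(40)]
est = decode(simulate_reads(hap, n_reads=200, read_len=8), len(hap))
# The pair is only identifiable up to swapping the two haplotypes.
mismatches = min(sum(a != b for a, b in zip(est, hap)),
                 sum(a == b for a, b in zip(est, hap)))
print("mismatches up to relabelling:", mismatches)
```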
Extreme Scale De Novo Metagenome Assembly
Metagenome assembly is the process of transforming a set of short,
overlapping, and potentially erroneous DNA segments from environmental samples
into an accurate representation of the underlying microbiomes' genomes.
State-of-the-art tools require large shared-memory machines and cannot handle
contemporary metagenome datasets that exceed terabytes in size. In this paper,
we introduce the MetaHipMer pipeline, a high-quality and high-performance
metagenome assembler that employs an iterative de Bruijn graph approach.
MetaHipMer leverages a specialized scaffolding algorithm that produces long
scaffolds and accommodates the idiosyncrasies of metagenomes. MetaHipMer is
end-to-end parallelized using the Unified Parallel C language and therefore can
run seamlessly on shared and distributed-memory systems. Experimental results
show that MetaHipMer matches or outperforms the state-of-the-art tools in terms
of accuracy. Moreover, MetaHipMer scales efficiently to large concurrencies and
is able to assemble previously intractable grand challenge metagenomes. We
demonstrate the unprecedented capability of MetaHipMer by computing the first
full assembly of the Twitchell Wetlands dataset, consisting of 7.5 billion
reads (2.6 TB in size).
Comment: Accepted to SC1
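The iterative de Bruijn graph idea can be sketched compactly (a serial toy simplification, not MetaHipMer's UPC implementation): spell the unambiguous paths of the k-mer graph into contigs at one value of k, then feed those contigs back as additional input for the next, larger k.

```python
# Minimal serial sketch of an iterative de Bruijn graph assembler
# (illustration of the idea only, not MetaHipMer's parallel implementation).

from collections import defaultdict

def kmer_graph(seqs, k):
    """Edges between consecutive k-mers observed in the input sequences."""
    nxt, prv = defaultdict(set), defaultdict(set)
    for s in seqs:
        for i in range(len(s) - k):
            a, b = s[i:i + k], s[i + 1:i + k + 1]
            nxt[a].add(b)
            prv[b].add(a)
    return nxt, prv

def contigs(seqs, k):
    """Spell the maximal unbranched paths of the k-mer graph."""
    nxt, prv = kmer_graph(seqs, k)
    out = []
    for v in set(nxt) | set(prv):
        # start only at nodes that are not in the interior of a simple path
        if len(prv[v]) == 1 and len(nxt[next(iter(prv[v]))]) == 1:
            continue
        path = [v]
        while len(nxt[path[-1]]) == 1:
            w = next(iter(nxt[path[-1]]))
            if len(prv[w]) != 1:
                break
            path.append(w)
        out.append(path[0] + "".join(p[-1] for p in path[1:]))
    return out

def iterative_assembly(reads, ks=(4, 5, 6)):
    ctgs = []
    for k in ks:                     # contigs from one round seed the next
        ctgs = contigs(list(reads) + ctgs, k)
    return ctgs

reads = ["ACGTACGGA", "GTACGGATT", "CGGATTACA"]
print(iterative_assembly(reads))     # -> ['ACGTACGGATTACA']
```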
Minimum error correction-based haplotype assembly: considerations for long read data
The single nucleotide polymorphism (SNP) is the most widely studied type of
genetic variation. A haplotype is defined as the sequence of alleles at SNP
sites on each haploid chromosome. Haplotype information is essential in
unravelling the genome-phenotype association. Haplotype assembly is a
well-known approach for reconstructing haplotypes, exploiting reads generated
by DNA sequencing devices. The Minimum Error Correction (MEC) metric is often
used for reconstruction of haplotypes from reads. However, problems with the
MEC metric have been reported. Here, we investigate the MEC approach to
demonstrate that it may result in incorrectly reconstructed haplotypes for
devices that produce error-prone long reads. Specifically, we evaluate this
approach for devices developed by Illumina, Pacific Biosciences and Oxford
Nanopore Technologies. We show that imprecise haplotypes may be reconstructed
with a lower MEC than that of the exact haplotype. The performance of MEC is
explored for different coverage levels and error rates of data. Our simulation
results reveal that in order to avoid incorrect MEC-based haplotypes, a
coverage of 25 is needed for reads generated by Pacific Biosciences RS systems.
Comment: 17 pages, 6 figures
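The MEC objective itself is simple to state: each read is charged the number of corrections needed to make it agree with the closer of the two candidate haplotypes, and the charges are summed. The contrived toy example below (constructed for illustration, not taken from the paper) shows how, with error-prone reads, an incorrect haplotype pair can attain a lower MEC score than the true pair.

```python
# Minimal sketch of the Minimum Error Correction (MEC) score. Reads are
# partial rows over SNP sites (dicts mapping site index -> observed allele).

def mec(reads, h1, h2):
    total = 0
    for read in reads:
        d1 = sum(1 for i, b in read.items() if b != h1[i])
        d2 = sum(1 for i, b in read.items() if b != h2[i])
        total += min(d1, d2)    # each read is charged to its closer haplotype
    return total

# Four reads over five heterozygous sites; three reads carry one error each.
reads = [
    {0: 0, 1: 0, 2: 1},         # allele at site 2 is a sequencing error
    {1: 0, 2: 0, 3: 0},
    {2: 0, 3: 0, 4: 1},         # allele at site 4 is a sequencing error
    {0: 0, 2: 1},               # allele at site 2 is a sequencing error
]
true_pair = ([0, 0, 0, 0, 0], [1, 1, 1, 1, 1])
wrong_pair = ([0, 0, 1, 0, 1], [1, 1, 0, 1, 0])
print("MEC(true) =", mec(reads, *true_pair),
      " MEC(wrong) =", mec(reads, *wrong_pair))   # -> 3 vs 2: MEC prefers wrong
```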
Optimal Assembly for High Throughput Shotgun Sequencing
We present a framework for the design of optimal assembly algorithms for
shotgun sequencing under the criterion of complete reconstruction. We derive a
lower bound on the read length and the coverage depth required for
reconstruction in terms of the repeat statistics of the genome. Building on
earlier works, we design a de Bruijn graph based assembly algorithm which can
perform very close to the lower bound for repeat statistics of a wide range of
sequenced genomes, including the GAGE datasets. The results are based on a set
of necessary and sufficient conditions on the DNA sequence and the reads for
reconstruction. The conditions can be viewed as the shotgun sequencing analogue
of Ukkonen-Pevzner's necessary and sufficient conditions for Sequencing by
Hybridization.
Comment: 26 pages, 18 figures
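To convey the flavour of how repeat statistics constrain the required read length (a toy sufficient condition for illustration, not the paper's tight bound), the sketch below finds the longest repeated substring of a genome; once k exceeds that length, every k-mer is unique, the k-mer de Bruijn graph is a simple path, and noiseless reads of length k+1 with full coverage determine the sequence.

```python
# Toy sketch: the longest repeat length sets a simple threshold on k above
# which every k-mer of the genome is unique (illustration of the flavour of
# read-length bounds, not the paper's exact condition).

from collections import Counter

def longest_repeat(genome):
    """Length of the longest substring occurring at least twice
    (naive binary search; fine for short examples)."""
    lo, hi = 0, len(genome) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        counts = Counter(genome[i:i + mid] for i in range(len(genome) - mid + 1))
        if max(counts.values()) >= 2:
            lo = mid           # a repeat of length `mid` exists; try longer
        else:
            hi = mid - 1
    return lo

genome = "ACGTACGTTTGACGTA"
L_rep = longest_repeat(genome)
print("longest repeat:", L_rep, "-> all k-mers unique from k =", L_rep + 1)
# -> longest repeat: 5 -> all k-mers unique from k = 6
```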