Optimal Assembly for High Throughput Shotgun Sequencing
We present a framework for the design of optimal assembly algorithms for
shotgun sequencing under the criterion of complete reconstruction. We derive a
lower bound on the read length and the coverage depth required for
reconstruction in terms of the repeat statistics of the genome. Building on
earlier works, we design a de Bruijn graph based assembly algorithm which can
achieve very close to the lower bound for repeat statistics of a wide range of
sequenced genomes, including the GAGE datasets. The results are based on a set
of necessary and sufficient conditions on the DNA sequence and the reads for
reconstruction. The conditions can be viewed as the shotgun sequencing analogue
of Ukkonen-Pevzner's necessary and sufficient conditions for Sequencing by
Hybridization.
Comment: 26 pages, 18 figures
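The graph construction at the heart of this line of work can be sketched in a few lines. The toy Python version below is not the paper's algorithm (which adds conditions tied to the genome's repeat statistics); it only shows the basic de Bruijn construction, where nodes are (k-1)-mers, each k-mer in a read contributes an edge, and an Eulerian path spells out a candidate reconstruction. The reads and the choice k = 3 are illustrative.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Build a de Bruijn graph from shotgun reads: nodes are (k-1)-mers,
    and every k-mer in a read adds an edge prefix -> suffix."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# Overlapping reads covering the toy sequence "AGATTACA"
reads = ["AGATT", "GATTA", "ATTAC", "TTACA"]
g = de_bruijn_graph(reads, 3)
# Walking edges AG -> GA -> AT -> TT -> TA -> AC -> CA retraces the sequence
```

Repeats longer than k-1 merge nodes and create branching, which is exactly why the paper's lower bound is stated in terms of the genome's repeat statistics.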
Reduced Complexity Sphere Decoding
In Multiple-Input Multiple-Output (MIMO) systems, Sphere Decoding (SD) can
achieve performance equivalent to full search Maximum Likelihood (ML) decoding,
with reduced complexity. Several researchers reported techniques that reduce
the complexity of SD further. In this paper, a new technique is introduced
which decreases the computational complexity of SD substantially, without
sacrificing performance. The reduction is accomplished by deconstructing the
decoding metric to decrease the number of computations and exploiting the
structure of a lattice representation. Furthermore, an application of SD,
employing a proposed smart implementation with very low computational
complexity is introduced. This application calculates the soft bit metrics of a
bit-interleaved convolutional-coded MIMO system in an efficient manner. Based
on the reduced complexity SD, the proposed smart implementation employs the
initial radius acquired by Zero-Forcing Decision Feedback Equalization (ZF-DFE)
which ensures no empty spheres. In addition, a particular data structure is
incorporated to efficiently reduce the number of operations carried out by
SD. Simulation results show that these approaches
achieve substantial gains in terms of the computational complexity for both
uncoded and coded MIMO systems.
Comment: accepted to Journal. arXiv admin note: substantial text overlap with arXiv:1009.351
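For readers unfamiliar with the baseline being improved, a textbook depth-first sphere decoder can be sketched as follows. This is a generic decoder for a real-valued model y = Hs + n, not the paper's reduced-complexity variant, and it does not include the ZF-DFE radius initialization; the channel matrix, BPSK alphabet, and radius are illustrative choices.

```python
import numpy as np

def sphere_decode(y, H, alphabet, radius):
    """Depth-first sphere decoder: QR-decompose H, then enumerate symbol
    vectors layer by layer, pruning any branch whose accumulated metric
    already exceeds the best (or initial) squared radius."""
    Q, R = np.linalg.qr(H)
    z = Q.T @ y                       # rotated receive vector
    n = H.shape[1]
    best = (radius ** 2, None)        # (squared radius, best symbol vector)

    def search(level, partial, metric):
        nonlocal best
        if metric >= best[0]:
            return                    # prune: outside the current sphere
        if level < 0:
            best = (metric, partial.copy())   # shrink the sphere
            return
        for s in alphabet:
            partial[level] = s
            resid = z[level] - R[level, level:] @ partial[level:]
            search(level - 1, partial, metric + resid ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best[1]

# BPSK symbols over a toy 2x2 channel, noiseless for illustration
H = np.array([[1.0, 0.2], [0.1, 0.9]])
s = np.array([1.0, -1.0])
y = H @ s
print(sphere_decode(y, H, (-1.0, 1.0), radius=3.0))  # recovers the transmitted symbols
```

Because every lattice point inside the radius is visited, the result matches full-search ML; the complexity savings come entirely from how aggressively branches are pruned, which is what a tight initial radius (e.g. from ZF-DFE, as in the paper) improves.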
JWalk: a tool for lazy, systematic testing of Java classes by design introspection and user interaction
Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java’s reflection capability, and partly through interaction with the user, constructing and saving test oracles on the fly. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class’s method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
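JWalk itself operates on compiled Java classes via reflection; purely as a sketch of the idea, a loose Python analogue of its bounded exhaustive exploration of method protocols (without oracles or predictive rules) might look like this. The `Stack` class and the depth bound are illustrative.

```python
import itertools

def explore_protocols(cls, methods, depth):
    """Bounded exhaustive exploration of method protocols: invoke every
    method sequence up to the given length on a fresh instance, and
    record whether each sequence completes or raises an exception."""
    results = {}
    for length in range(1, depth + 1):
        for seq in itertools.product(methods, repeat=length):
            obj = cls()
            try:
                for name in seq:
                    getattr(obj, name)()   # introspective invocation
                results[seq] = "ok"
            except Exception as exc:
                results[seq] = type(exc).__name__
    return results

class Stack:
    def __init__(self):
        self.items = []
    def push(self):
        self.items.append(0)
    def pop(self):
        self.items.pop()

r = explore_protocols(Stack, ["push", "pop"], depth=2)
# ("pop",) fails on an empty stack; ("push", "pop") succeeds
```

In JWalk proper, the interesting outcomes are shown to the user as candidate test oracles, so that confirmed expectations are saved and replayed on later runs.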
VLSI single-chip (255,223) Reed-Solomon encoder with interleaver
The invention relates to a concatenated Reed-Solomon/convolutional encoding system consisting of a Reed-Solomon outer code and a convolutional inner code for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts in the Viterbi decoder.
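Stripped of the Reed-Solomon encoding itself, the interleaving idea reduces to a symbol-wise block interleaver; the sketch below is only an illustration of that principle, not the patented circuit. Since a (255,223) RS codeword corrects up to 16 symbol errors, spreading I codewords symbol by symbol lets the system survive bursts roughly I times longer than a single codeword could.

```python
def block_interleave(codewords):
    """Symbol-wise block interleaver: emit symbol 0 of every codeword,
    then symbol 1 of every codeword, and so on. A channel burst then
    hits consecutive output symbols that belong to different codewords."""
    length = len(codewords[0])
    assert all(len(cw) == length for cw in codewords)
    return [cw[i] for i in range(length) for cw in codewords]

# Two toy "codewords"; real use would interleave I Reed-Solomon codewords
print(block_interleave([[1, 2, 3], [4, 5, 6]]))  # [1, 4, 2, 5, 3, 6]
```

A burst corrupting two adjacent output symbols here (say 4 and 2) touches each codeword only once, which is exactly the property that protects the outer RS code from error bursts emitted by the inner Viterbi decoder.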
Parallel Batch-Dynamic Graph Connectivity
In this paper, we study batch parallel algorithms for the dynamic
connectivity problem, a fundamental problem that has received considerable
attention in the sequential setting. The most well known sequential algorithm
for dynamic connectivity is the elegant level-set algorithm of Holm, de
Lichtenberg and Thorup (HDT), which achieves O(log^2 n) amortized time per
edge insertion or deletion, and O(log n / log log n) time per query. We
design a parallel batch-dynamic connectivity algorithm that is work-efficient
with respect to the HDT algorithm for small batch sizes, and is asymptotically
faster when the average batch size is sufficiently large. Given a sequence of
batched updates, where Δ is the average batch size of all deletions, our
algorithm achieves O(log n log(1 + n/Δ)) expected amortized work per
edge insertion and deletion and O(log^3 n) depth w.h.p. Our algorithm
answers a batch of k connectivity queries in O(k log(1 + n/k)) expected
work and O(log n) depth w.h.p. To the best of our knowledge, our algorithm
is the first parallel batch-dynamic algorithm for connectivity.
Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 201
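The interface being parallelized can be illustrated with a deliberately simplified baseline: a sequential union-find that supports batched insertions and connectivity queries but not deletions. Deletions are precisely what require the HDT-style level structure that the paper makes parallel and work-efficient; everything below is an assumed toy stand-in for the insert/query half of the interface.

```python
class UnionFind:
    """Union-find with path halving. Handles batched edge insertions and
    connectivity queries only; it cannot undo an insertion, which is why
    fully dynamic connectivity needs the heavier level-structure machinery."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def insert_batch(self, edges):
        for u, v in edges:
            self.parent[self.find(u)] = self.find(v)

    def query_batch(self, pairs):
        return [self.find(u) == self.find(v) for u, v in pairs]

uf = UnionFind(5)
uf.insert_batch([(0, 1), (1, 2)])
print(uf.query_batch([(0, 2), (0, 3)]))  # [True, False]
```

In the batch-parallel setting, both the insertions within a batch and the queries within a batch are processed together, which is where the work and depth bounds above come from.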