Applied Randomized Algorithms for Efficient Genomic Analysis
The scope and scale of biological data continue to grow at an exponential clip, driven by advances in genetic sequencing and annotation and by the widespread adoption of surveillance efforts. For instance, the Sequence Read Archive (SRA) now contains more than 25 petabases of public data, while RefSeq, a collection of reference genomes, recently surpassed 100,000 complete genomes. In the process, this growth has outpaced the practical reach of many traditional algorithmic approaches in both time and space.
Motivated by this extreme scale, this thesis details efficient methods for clustering and summarizing large collections of sequence data. While our primary area of interest is biological sequences, these approaches largely apply to sequence collections of any type, including natural language, software source code, and graph structured data.
We applied recent advances in randomized algorithms to practical problems. We used MinHash, a form of locality-sensitive hashing, and HyperLogLog, a cardinality-estimation sketch, as well as coresets, which are approximate representations for finite-sum problems, to build methods capable of scaling to billions of items. Ultimately, all of these derive from variations on sampling.
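To make the MinHash idea concrete, here is a minimal, generic sketch in Python (an illustration of the technique, not code from the sketch library itself): each of k hash functions contributes the minimum hash value over a set, and the fraction of matching signature slots between two sets estimates their Jaccard similarity.

```python
import hashlib

def minhash_signature(items, num_hashes=128):
    """Build a MinHash signature: for each keyed hash function,
    keep the minimum hash value observed over the whole set."""
    sig = []
    for i in range(num_hashes):
        key = i.to_bytes(8, "big")  # distinct key per hash function
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(x.encode(), key=key, digest_size=8).digest(),
                "big")
            for x in items))
    return sig

def estimate_jaccard(sig_a, sig_b):
    """The fraction of equal slots is an estimator of Jaccard similarity."""
    matches = sum(a == b for a, b in zip(sig_a, sig_b))
    return matches / len(sig_a)
```

With 128 hash functions the standard error of the estimate is roughly 0.04, which is why signatures this small suffice for comparing genome sketches.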
We combined these advances with hardware-based optimizations and incorporated them into free and open-source software libraries (sketch, frp, libsimdsampling) and practical software tools built on these libraries (Dashing, Minicore, Dashing 2), empowering users to interact practically with colossal datasets on commodity hardware.
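For intuition about the other sketch mentioned above, HyperLogLog estimates the number of distinct items by tracking, per hashed bucket, the largest run of leading zero bits ever seen. The following is a generic toy version with the standard bias-correction constant, not the implementation from the sketch library:

```python
import hashlib
import math

def hll_estimate(items, b=12):
    """HyperLogLog cardinality estimate with m = 2**b registers."""
    m = 1 << b
    registers = [0] * m
    for x in items:
        h = int.from_bytes(
            hashlib.sha1(str(x).encode()).digest()[:8], "big")  # 64-bit hash
        idx = h & (m - 1)        # low b bits pick the register
        w = h >> b               # remaining 64-b bits feed the rank
        rank = (64 - b) - w.bit_length() + 1  # leading-zero count + 1
        registers[idx] = max(registers[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / m)  # bias correction for m >= 128
    est = alpha * m * m / sum(2.0 ** -r for r in registers)
    # Small-range correction: fall back to linear counting.
    zeros = registers.count(0)
    if est <= 2.5 * m and zeros:
        est = m * math.log(m / zeros)
    return est
```

With 4,096 registers the relative standard error is about 1.6%, while the sketch itself occupies only a few kilobytes regardless of how many items stream through it.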
Random projections: data perturbation for classification problems
Random projections offer an appealing and flexible approach to a wide range
of large-scale statistical problems. They are particularly useful in
high-dimensional settings, where we have many covariates recorded for each
observation. In classification problems there are two general techniques using
random projections. The first involves many projections in an ensemble -- the
idea here is to aggregate the results after applying different random
projections, with the aim of achieving superior statistical accuracy. The
second class of methods includes hashing and sketching techniques, which are
straightforward ways to reduce the complexity of a problem, often at a large
computational saving, while approximately preserving statistical efficiency.

Comment: 24 pages, 4 figures
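The ensemble idea can be sketched concretely. Below is a toy version (the function name and the choice of a nearest-centroid base classifier are illustrative assumptions, not the paper's method): each round draws an independent Gaussian projection down to d dimensions, fits a simple classifier in the projected space, and test points receive the majority-vote label.

```python
import numpy as np

def rp_ensemble_predict(X_train, y_train, X_test, d=5, n_proj=50, seed=0):
    """Toy random-projection ensemble classifier.

    Each round projects the data to d dimensions with an independent
    Gaussian matrix, fits a nearest-centroid rule in the projected
    space, and records its vote; the majority vote wins.
    """
    rng = np.random.default_rng(seed)
    p = X_train.shape[1]
    classes = np.unique(y_train)
    vote_counts = np.zeros((X_test.shape[0], classes.size))
    for _ in range(n_proj):
        # Johnson-Lindenstrauss-style Gaussian projection.
        A = rng.standard_normal((p, d)) / np.sqrt(d)
        Ztr, Zte = X_train @ A, X_test @ A
        centroids = np.stack(
            [Ztr[y_train == c].mean(axis=0) for c in classes])
        dists = ((Zte[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        preds = dists.argmin(axis=1)
        vote_counts[np.arange(preds.size), preds] += 1
    return classes[vote_counts.argmax(axis=1)]
```

Aggregating over many projections smooths out the randomness of any single projection, which is the source of the statistical gains the ensemble approach aims for.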