FLASH: Randomized Algorithms Accelerated over CPU-GPU for Ultra-High Dimensional Similarity Search
We present FLASH (\textbf{F}ast \textbf{L}SH \textbf{A}lgorithm for
\textbf{S}imilarity search accelerated with \textbf{H}PC), a similarity search
system for ultra-high dimensional datasets on a single machine that does not
require similarity computations and is tailored for high-performance computing
platforms. By leveraging an LSH-style randomized indexing procedure and
combining it with several principled techniques, such as reservoir sampling,
recent advances in one-pass minwise hashing, and count-based estimations, we
reduce the computational and parallelization costs of similarity search, while
retaining sound theoretical guarantees.
We evaluate FLASH on several real, high-dimensional datasets from different
domains, including text, malicious URLs, click-through prediction, and social
networks. Our experiments shed new light on the difficulties associated
with datasets having several million dimensions. Current state-of-the-art
implementations either fail on the presented scale or are orders of magnitude
slower than FLASH. FLASH is capable of computing an approximate k-NN graph,
from scratch, over the full webspam dataset (1.3 billion nonzeros) in less than
10 seconds. Computing a full k-NN graph in less than 10 seconds on the webspam
dataset, using brute-force, would require at least 20 teraflops. We
provide CPU and GPU implementations of FLASH for replicability of our results.
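The combination of techniques named in the abstract can be illustrated with a toy sketch: LSH-style bucketing via minwise hashing, fixed-size buckets filled by reservoir sampling, and candidate ranking by collision counts rather than explicit similarity computations. All parameter names and the table/bucket structure here are illustrative assumptions, not FLASH's actual design or API.

```python
import random
from collections import defaultdict

# Illustrative parameters (assumptions, not FLASH's settings):
NUM_TABLES = 8        # independent hash tables
HASHES_PER_TABLE = 2  # minhash values concatenated per bucket key
RESERVOIR_SIZE = 4    # fixed bucket capacity, filled by reservoir sampling

random.seed(0)
PRIME = 2_147_483_647
# Random (a, b) coefficients for universal hashing h(x) = (a*x + b) mod PRIME.
COEFFS = [(random.randrange(1, PRIME), random.randrange(PRIME))
          for _ in range(NUM_TABLES * HASHES_PER_TABLE)]

def minhash_key(features, table):
    """Concatenate minhash values of a sparse feature-id set into a bucket key."""
    key = []
    for k in range(HASHES_PER_TABLE):
        a, b = COEFFS[table * HASHES_PER_TABLE + k]
        key.append(min((a * f + b) % PRIME for f in features))
    return tuple(key)

class Index:
    def __init__(self):
        self.tables = [defaultdict(list) for _ in range(NUM_TABLES)]
        self.seen = [defaultdict(int) for _ in range(NUM_TABLES)]

    def insert(self, item_id, features):
        for t in range(NUM_TABLES):
            key = minhash_key(features, t)
            bucket = self.tables[t][key]
            self.seen[t][key] += 1
            n = self.seen[t][key]
            if len(bucket) < RESERVOIR_SIZE:
                bucket.append(item_id)       # reservoir not yet full
            else:
                j = random.randrange(n)      # classic reservoir sampling:
                if j < RESERVOIR_SIZE:       # keep with probability R/n
                    bucket[j] = item_id

    def query(self, features, k=3):
        # Count-based estimation: rank candidates by the number of tables in
        # which they collide with the query; no distances are ever computed.
        counts = defaultdict(int)
        for t in range(NUM_TABLES):
            for item in self.tables[t].get(minhash_key(features, t), []):
                counts[item] += 1
        return sorted(counts, key=counts.get, reverse=True)[:k]

# Toy usage: index three sparse feature-id sets, then query.
idx = Index()
idx.insert(0, {1, 2, 3, 4})
idx.insert(1, {1, 2, 3, 5})
idx.insert(2, {100, 200, 300})
```

Because bucket lookups and collision counting replace pairwise similarity evaluations, the per-query cost depends on the reservoir size and number of tables, not on the dataset's dimensionality.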
Penalized estimation in large-scale generalized linear array models
Large-scale generalized linear array models (GLAMs) can be challenging to
fit. Computation and storage of the tensor product design matrix can be
impossible due to time and memory constraints, and previously considered
design-matrix-free algorithms do not scale well with the dimension of the parameter
vector. A new design-matrix-free algorithm is proposed for computing the
penalized maximum likelihood estimate for GLAMs, which, in particular, handles
nondifferentiable penalty functions. The proposed algorithm is implemented and
available via the R package \verb+glamlasso+. It combines several ideas --
previously considered separately -- to obtain sparse estimates while at the
same time efficiently exploiting the GLAM structure. In this paper the
convergence of the algorithm is treated and the performance of its
implementation is investigated and compared to that of \verb+glmnet+ on
simulated as well as real data. It is shown that the computation time fo
- …
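The design-matrix-free computations that GLAM algorithms exploit rest on a standard Kronecker identity: for a tensor product design matrix $A_2 \otimes A_1$, one has $(A_2 \otimes A_1)\,\mathrm{vec}(B) = \mathrm{vec}(A_1 B A_2^\top)$, so the full design matrix never needs to be formed. A minimal NumPy sketch (array sizes are illustrative, and this is not the \verb+glamlasso+ implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.standard_normal((50, 10))   # marginal design matrix, dimension 1
A2 = rng.standard_normal((40, 8))    # marginal design matrix, dimension 2
beta = rng.standard_normal(10 * 8)   # coefficient vector, length p1 * p2

def kron_matvec(A1, A2, beta):
    """Compute (A2 kron A1) @ beta without materializing the Kronecker product."""
    # vec^{-1}: column-major reshape of beta into a p1 x p2 coefficient array.
    B = beta.reshape(A1.shape[1], A2.shape[1], order="F")
    return (A1 @ B @ A2.T).reshape(-1, order="F")

# Check against the explicit (and memory-hungry) Kronecker product:
explicit = np.kron(A2, A1) @ beta
assert np.allclose(kron_matvec(A1, A2, beta), explicit)
```

The memory saving is the point: the explicit design matrix here is $2000 \times 80$, while the factorized form only stores the two small marginal matrices, and the gap grows multiplicatively with each array dimension.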