Sparser Johnson-Lindenstrauss Transforms
We give two different and simple constructions for dimensionality reduction
in $\ell_2$ via linear mappings that are sparse: only an
$O(\varepsilon)$-fraction of entries in each column of our embedding matrices
are non-zero to achieve distortion $1+\varepsilon$ with high probability, while
still achieving the asymptotically optimal number of rows. These are the first
constructions to provide subconstant sparsity for all values of parameters,
improving upon previous works of Achlioptas (JCSS 2003) and Dasgupta, Kumar,
and Sarlós (STOC 2010). Such distributions can be used to speed up
applications where dimensionality reduction is used.
Comment: v6: journal version, minor changes, added Remark 23; v5: modified
abstract, fixed typos, added open problem section; v4: simplified section 4
by giving 1 analysis that covers both constructions; v3: proof of Theorem 25
in v2 was written incorrectly, now fixed; v2: added another construction
achieving same upper bound, and added proof of near-tight lower bound for DKS
scheme
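As a rough illustration of the kind of embedding the abstract describes (not the paper's exact construction), the sketch below samples a sign matrix with exactly s nonzero entries per column, so only an s/m-fraction of each column is nonzero; the parameters m and s and the function name sparse_jl_matrix are illustrative assumptions.

```python
import numpy as np

def sparse_jl_matrix(d, m, s, rng=None):
    """Sample an m x d sparse JL-style embedding matrix.

    Each column gets exactly s nonzero entries: random +-1 signs scaled
    by 1/sqrt(s), placed in s rows chosen without replacement. Only an
    s/m fraction of each column is nonzero, mirroring the sparsity
    regime discussed in the abstract.
    """
    rng = np.random.default_rng(rng)
    A = np.zeros((m, d))
    for j in range(d):
        rows = rng.choice(m, size=s, replace=False)   # nonzero positions
        signs = rng.choice([-1.0, 1.0], size=s)       # Rademacher signs
        A[rows, j] = signs / np.sqrt(s)
    return A

# Example: embed a 10,000-dim vector into 400 dims, 20 nonzeros per column.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
A = sparse_jl_matrix(d=10_000, m=400, s=20, rng=1)
y = A @ x
print(np.linalg.norm(y) / np.linalg.norm(x))  # close to 1, up to distortion
```

Because each column has only s nonzeros, applying the map to a vector with k nonzero coordinates costs O(k s) rather than O(k m), which is the source of the speedups the abstract mentions.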
Fast Cross-Polytope Locality-Sensitive Hashing
We provide a variant of cross-polytope locality-sensitive hashing with
respect to angular distance which is provably optimal in asymptotic sensitivity
and enjoys $O(d \ln d)$ hash computation time. Building on a recent
result (by Andoni, Indyk, Laarhoven, Razenshteyn, Schmidt, 2015), we show that
optimal asymptotic sensitivity for cross-polytope LSH is retained even when the
dense Gaussian matrix is replaced by a fast Johnson-Lindenstrauss transform
followed by a discrete pseudo-rotation, reducing the hash computation time from
$O(d^2)$ to $O(d \ln d)$. Moreover, our scheme achieves
the optimal rate of convergence for sensitivity. By incorporating a
low-randomness Johnson-Lindenstrauss transform, our scheme can be modified to
require only polylogarithmically many random bits.
Comment: 14 pages, 6 figures
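For intuition, here is a minimal sketch of cross-polytope hashing with a Hadamard-based pseudo-rotation (random sign flips followed by fast Walsh-Hadamard transforms, the general fast-rotation idea rather than this paper's precise scheme); fwht, cross_polytope_hash, and the choice of three sign-flip blocks are illustrative assumptions.

```python
import numpy as np

def fwht(x):
    """In-place fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

def cross_polytope_hash(x, sign_flips):
    """Hash x to one of 2d buckets via a fast pseudo-rotation.

    Each block (a +-1 diagonal sign flip followed by a normalized
    Walsh-Hadamard transform) costs O(d ln d), in place of an O(d^2)
    dense Gaussian rotation. The hash value is the signed index of the
    largest-magnitude coordinate, i.e. the nearest vertex of the
    cross-polytope {+-e_1, ..., +-e_d}.
    """
    y = x.astype(float).copy()
    for D in sign_flips:
        y = fwht(y * D) / np.sqrt(len(y))
    i = int(np.argmax(np.abs(y)))
    return 2 * i + (1 if y[i] < 0 else 0)

rng = np.random.default_rng(0)
d = 256  # power of 2 so the Hadamard transform applies directly
flips = [rng.choice([-1.0, 1.0], size=d) for _ in range(3)]
x = rng.standard_normal(d)
x_near = x + 0.1 * rng.standard_normal(d)  # a nearby point on the sphere
print(cross_polytope_hash(x, flips), cross_polytope_hash(x_near, flips))
```

Nearby points (small angular distance) tend to land in the same bucket, while far points rarely do, which is exactly the sensitivity trade-off the abstract quantifies.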