Monotone Maps, Sphericity and Bounded Second Eigenvalue
We consider {\em monotone} embeddings of a finite metric space into a low
dimensional normed space. That is, embeddings that respect the order among the
distances in the original space. Our main interest is in embeddings into
Euclidean spaces. We observe that any metric on $n$ points can be embedded into
$\ell_2^n$, while, in a sense to be made precise later, for almost every
$n$-point metric space, every monotone map must be into a space of dimension
$\Omega(n)$.
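Monotonicity of a map is a purely order-theoretic condition on pairwise distances, which makes it easy to verify on small examples. A minimal sketch (the helper name `is_monotone` is ours, not from the paper) checking whether a candidate embedding respects the distance order:

```python
import itertools
import numpy as np

def is_monotone(X, Y):
    """Check that the map X[i] -> Y[i] respects the order of distances:
    whenever d(x_i, x_j) < d(x_k, x_l), also d(y_i, y_j) < d(y_k, y_l)."""
    pairs = list(itertools.combinations(range(len(X)), 2))
    dx = [np.linalg.norm(X[i] - X[j]) for i, j in pairs]
    dy = [np.linalg.norm(Y[i] - Y[j]) for i, j in pairs]
    # boolean implication: (dx[a] < dx[b]) must force (dy[a] < dy[b])
    return all(
        (dx[a] < dx[b]) <= (dy[a] < dy[b])
        for a in range(len(pairs)) for b in range(len(pairs))
    )
```

Any isometry or uniform scaling passes this check; a map that collapses all points fails it, since strict inequalities among the original distances are no longer reflected.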
It becomes natural, then, to seek explicit constructions of metric spaces
that cannot be monotonically embedded into spaces of sublinear dimension. To
this end, we employ known results on {\em sphericity} of graphs, which suggest
one example of such a metric space - that defined by a complete bipartitegraph.
We prove that a $\delta n$-regular graph of order $n$ with bounded diameter
has sphericity $\Omega(n/(\lambda_2+1))$, where $\lambda_2$ is the second
largest eigenvalue of the adjacency matrix of the graph, and $0 < \delta \leq
\frac{1}{2}$ is constant. We also show that while random graphs have linear
sphericity, there are {\em quasi-random} graphs of logarithmic sphericity.
For the above bound to be linear, $\lambda_2$ must be constant. We show that
if the second eigenvalue of an $\frac{n}{2}$-regular graph is bounded by a constant,
then the graph is close to being complete bipartite. Namely, its adjacency
matrix differs from that of a complete bipartite graph in only $o(n^2)$
entries. Furthermore, for any $0 < \delta < \frac{1}{2}$ and $\lambda$, there are
only finitely many $\delta n$-regular graphs with second eigenvalue at most $\lambda$.
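For intuition on why a small second eigenvalue points toward complete bipartite structure: the adjacency spectrum of $K_{m,m}$ is $\{m, 0, \dots, 0, -m\}$, so its second largest eigenvalue is exactly $0$. A quick numerical check with numpy (an illustration of this standard fact, not code from the paper):

```python
import numpy as np

def complete_bipartite_adjacency(m):
    """Adjacency matrix of K_{m,m}: two sides of size m, all cross edges."""
    A = np.zeros((2 * m, 2 * m))
    A[:m, m:] = 1.0
    A[m:, :m] = 1.0
    return A

def second_eigenvalue(A):
    """Second largest eigenvalue of a symmetric matrix."""
    return np.sort(np.linalg.eigvalsh(A))[-2]
```

For $K_{8,8}$ the largest eigenvalue is $8$ and the second largest is $0$, matching the spectrum above.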
Sparser Johnson-Lindenstrauss Transforms
We give two different and simple constructions for dimensionality reduction
in $\ell_2$ via linear mappings that are sparse: only an
$O(\varepsilon)$-fraction of entries in each column of our embedding matrices
are non-zero to achieve distortion $1+\varepsilon$ with high probability, while
still achieving the asymptotically optimal number of rows. These are the first
constructions to provide subconstant sparsity for all values of parameters,
improving upon previous works of Achlioptas (JCSS 2003) and Dasgupta, Kumar,
and Sarl\'{o}s (STOC 2010). Such distributions can be used to speed up
applications where dimensionality reduction is used.
Comment: v6: journal version, minor changes, added Remark 23; v5: modified
abstract, fixed typos, added open problem section; v4: simplified section 4
by giving 1 analysis that covers both constructions; v3: proof of Theorem 25
in v2 was written incorrectly, now fixed; v2: Added another construction
achieving same upper bound, and added proof of near-tight lower bound for DKS
scheme.
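The key point of a sparse embedding matrix is that applying it touches only the nonzero entries. A minimal sketch of a sparse JL-style construction (a generic variant for illustration, not the paper's exact distribution): each column of the $k \times n$ matrix gets $s$ nonzero entries, each a random sign scaled by $1/\sqrt{s}$.

```python
import numpy as np

def sparse_jl_matrix(k, n, s, rng):
    """k x n embedding matrix with s nonzeros per column (random signs / sqrt(s))."""
    S = np.zeros((k, n))
    for j in range(n):
        rows = rng.choice(k, size=s, replace=False)  # s nonzero rows in column j
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S

rng = np.random.default_rng(0)
n, k, s = 1000, 200, 8
S = sparse_jl_matrix(k, n, s, rng)
x = rng.normal(size=n)
ratio = np.linalg.norm(S @ x) / np.linalg.norm(x)  # close to 1 in expectation
```

Since each column has $s$ nonzeros, multiplying by a vector costs $O(sn)$ rather than $O(kn)$ operations, which is where the speedup for downstream applications comes from.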
Almost Optimal Unrestricted Fast Johnson-Lindenstrauss Transform
The problems of random projections and sparse reconstruction have much in
common and individually received much attention. Surprisingly, until now they
progressed in parallel and remained mostly separate. Here, we employ new tools
from probability in Banach spaces that were successfully used in the context of
sparse reconstruction to advance on an open problem in random projection. In
particular, we generalize and use an intricate result by Rudelson and Vershynin
for sparse reconstruction which uses Dudley's theorem for bounding Gaussian
processes. Our main result states that any set of $N$ real
vectors in $n$-dimensional space can be linearly mapped to a space of dimension
$k=O(\log N\,\polylog(n))$, while (1) preserving the pairwise distances among the
vectors to within any constant distortion and (2) being able to apply the
transformation in time $O(n\log n)$ on each vector. This improves on the best
known bounds achieved by Ailon and Liberty and by Ailon and Chazelle.
The dependence on the distortion constant, however, is believed to be
suboptimal and subject to further investigation. For constant distortion, this
settles the open question posed by these authors up to a $\polylog(n)$ factor
while considerably simplifying their constructions.
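The $O(n \log n)$ application time in transforms of this family comes from composing a random sign flip with a fast orthogonal transform and then subsampling coordinates. A sketch in that spirit using an FFT (an illustration of the general recipe, not the paper's exact construction):

```python
import numpy as np

def fast_jl(x, k, rng):
    """Map x in R^n to C^k: random signs, FFT mixing, then row subsampling.

    The FFT costs O(n log n); the sign flip and subsampling are O(n),
    so the whole map runs in O(n log n) time per vector.
    """
    n = len(x)
    signs = rng.choice([-1.0, 1.0], size=n)        # random diagonal D
    mixed = np.fft.fft(signs * x) / np.sqrt(n)     # unitary-scaled FFT preserves norm
    rows = rng.choice(n, size=k, replace=False)    # uniform row sample P
    return np.sqrt(n / k) * mixed[rows]            # rescale so E||y||^2 = ||x||^2

rng = np.random.default_rng(1)
x = rng.normal(size=1024)
y = fast_jl(x, 256, rng)
ratio = np.linalg.norm(y) / np.linalg.norm(x)
```

The sign flip spreads the energy of $x$ across the FFT coordinates, which is what makes uniform subsampling safe; without it, a vector concentrated on few coordinates could lose most of its norm.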
Isometric sketching of any set via the Restricted Isometry Property
In this paper we show that, for the purposes of dimensionality reduction, a
certain class of structured random matrices behaves similarly to random Gaussian
matrices. This class includes several matrices for which matrix-vector multiply
can be computed in log-linear time, providing efficient dimensionality
reduction of general sets. In particular, we show that using such matrices any
set from high dimensions can be embedded into lower dimensions with near
optimal distortion. We obtain our results by connecting dimensionality
reduction of any set to dimensionality reduction of sparse vectors via a
chaining argument.
Comment: 17 pages
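The claim can be probed empirically on a finite set: one fixed draw of a structured matrix with a log-linear-time multiply (here a subsampled, randomly signed FFT as one representative of such a class; an illustration, not the paper's construction) embeds every vector of a set of sparse vectors near-isometrically.

```python
import numpy as np

def structured_sketch(n, k, rng):
    """Return a function applying one fixed subsampled signed FFT.

    The same (signs, rows) draw is applied to every vector, so the whole
    set is embedded by a single structured matrix whose matrix-vector
    multiply runs in O(n log n) time.
    """
    signs = rng.choice([-1.0, 1.0], size=n)
    rows = rng.choice(n, size=k, replace=False)
    def apply(x):
        return np.sqrt(n / k) * (np.fft.fft(signs * x) / np.sqrt(n))[rows]
    return apply

rng = np.random.default_rng(2)
n, k = 2048, 512
sketch = structured_sketch(n, k, rng)
vecs = []
for _ in range(20):          # a small set of 3-sparse test vectors
    v = np.zeros(n)
    v[rng.choice(n, size=3, replace=False)] = rng.normal(size=3)
    vecs.append(v)
ratios = [np.linalg.norm(sketch(v)) / np.linalg.norm(v) for v in vecs]
```

Sparse vectors are the hard case the chaining argument reduces to: after the sign flip and FFT their energy is spread nearly flat, so subsampling $k$ of $n$ coordinates retains close to a $k/n$ fraction of it for every vector in the set simultaneously.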