A variant of the Johnson-Lindenstrauss lemma for circulant matrices
We continue our study of the Johnson-Lindenstrauss lemma and its connection
to circulant matrices started in \cite{HV}. We reduce the bound on k from
k = Ω(ε^(-2) log^3 n) proven there to k = Ω(ε^(-2) log^2 n). Our
technique differs essentially from the one used in \cite{HV}. We employ the
discrete Fourier transform and singular value decomposition to deal with the
dependency caused by the circulant structure.
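The circulant structure the abstract refers to is also what makes such embeddings fast in practice: multiplying by a circulant matrix is a circular convolution, which the FFT computes in O(n log n) time. The following is a minimal numpy sketch of that idea (the truncation to the first k coordinates and the 1/√k scaling are illustrative choices, not taken from the paper):

```python
import numpy as np

def circulant_jl_embed(x, a, k):
    """Project x into k dimensions via the circulant matrix generated by a.

    circ(a) @ x equals the circular convolution of a with x, computed
    here with the FFT; only the first k coordinates are kept, rescaled.
    """
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(x)).real  # circ(a) @ x
    return conv[:k] / np.sqrt(k)

rng = np.random.default_rng(0)
n, k = 1024, 64
a = rng.standard_normal(n)   # random generating vector
x = rng.standard_normal(n)
y = circulant_jl_embed(x, a, k)
```

The FFT product above agrees entrywise with an explicit circulant matrix-vector product, which is what makes the O(n log n) evaluation possible.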
New bounds for circulant Johnson-Lindenstrauss embeddings
This paper analyzes circulant Johnson-Lindenstrauss (JL) embeddings which, as
an important class of structured random JL embeddings, are formed by
randomizing the column signs of a circulant matrix generated by a random
vector. With the help of recent decoupling techniques and matrix-valued
Bernstein inequalities, we obtain a new bound on the embedding dimension
for Gaussian circulant JL embeddings.
Moreover, by using the Laplace transform technique (also called Bernstein's
trick), we extend the result to the subgaussian case. The bounds in this paper
offer a small improvement over the current best bounds for Gaussian circulant
JL embeddings for certain parameter regimes and are derived using more direct
methods.
Comment: 11 pages; accepted by Communications in Mathematical Sciences
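The construction analyzed here, a circulant matrix whose column signs are randomized by a Rademacher vector, can be sketched in a few lines: flipping the column signs of circ(a) is the same as flipping the signs of the input coordinates before convolving. This is an illustrative numpy sketch, not the paper's exact normalization:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 512, 48
a = rng.standard_normal(n)            # Gaussian generating vector
d = rng.choice([-1.0, 1.0], size=n)   # Rademacher column signs

def sign_randomized_circulant_jl(x):
    # circ(a) @ diag(d) @ x == circular convolution of a with (d * x)
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(d * x)).real
    return conv[:k] / np.sqrt(k)

x = rng.standard_normal(n)
y = sign_randomized_circulant_jl(x)
```

The map is linear in x, so pairwise distances are controlled by how well it preserves norms of difference vectors, which is what the bounds in the paper quantify.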
Restricted Isometries for Partial Random Circulant Matrices
In the theory of compressed sensing, restricted isometry analysis has become
a standard tool for studying how efficiently a measurement matrix acquires
information about sparse and compressible signals. Many recovery algorithms are
known to succeed when the restricted isometry constants of the sampling matrix
are small. Many potential applications of compressed sensing involve a
data-acquisition process that proceeds by convolution with a random pulse
followed by (nonrandom) subsampling. At present, the theoretical analysis of
this measurement technique is lacking. This paper demonstrates that the
sth-order restricted isometry constant is small when the number m of samples
satisfies m ≳ (s log n)^(3/2), where n is the length of the pulse.
This bound improves on previous estimates, which exhibit quadratic scaling
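The data-acquisition model described above, circular convolution with a random pulse followed by nonrandom subsampling, can be sketched directly in numpy (the ±1 pulse distribution and the parameter values here are illustrative, not the paper's specific setup):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256   # length of the pulse
s = 5     # sparsity of the signal
m = 80    # number of retained samples

pulse = rng.choice([-1.0, 1.0], size=n)        # random pulse
omega = rng.choice(n, size=m, replace=False)   # fixed subsampling positions

x = np.zeros(n)                                # s-sparse test signal
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)

# Measurement: convolve with the pulse, then subsample (partial circulant).
full = np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(x)).real
y = full[omega] / np.sqrt(m)
```

Each measurement y[i] is one row of a partial random circulant matrix applied to x, which is exactly the matrix class whose restricted isometry constants the paper bounds.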
On Using Toeplitz and Circulant Matrices for Johnson-Lindenstrauss Transforms
The Johnson-Lindenstrauss lemma is one of the cornerstone results in
dimensionality reduction. It says that given ε ∈ (0, 1), for any set X of n
vectors in R^d, there exists a mapping f : R^d → R^m such
that f preserves all pairwise distances between vectors in X to within
a factor of 1 ± ε if m = O(ε^(-2) log n). Much effort has gone
into developing fast embedding algorithms, with the Fast Johnson-Lindenstrauss
transform of Ailon and Chazelle being one of the most well-known techniques.
The fastest current algorithms that yield the optimal number of dimensions
still leave room for improvement in embedding time. An exciting approach
towards improving this, due to
Hinrichs and Vyb\'iral, is to use a random Toeplitz matrix for the
embedding. Using the Fast Fourier Transform, the embedding of a vector can then be
computed in O(d log d) time. The big question is of course whether
m = O(ε^(-2) log n) dimensions suffice for this technique. If so, this
would end a decades-long quest to obtain faster and faster
Johnson-Lindenstrauss transforms. The current best analysis of the embedding of
Hinrichs and Vyb\'iral shows that m = O(ε^(-2) log^2 n) dimensions
suffice. The main result of this paper is a proof that this analysis
unfortunately cannot be tightened any further, i.e., there exists a set of
vectors requiring m = Ω(ε^(-2) log^2 n) dimensions for the Toeplitz
approach to work.
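The O(d log d) embedding time comes from a standard trick: a d×d Toeplitz matrix embeds into a circulant matrix of size 2d-1, whose matrix-vector product is a circular convolution. A sketch of Toeplitz-times-vector via this embedding (illustrative; the paper's JL transform additionally randomizes the matrix and truncates rows):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the d x d Toeplitz matrix with first column c and first
    row r (c[0] == r[0]) by x in O(d log d) time, by embedding it in the
    top-left block of a (2d-1) x (2d-1) circulant matrix."""
    d = len(c)
    col = np.concatenate([c, r[1:][::-1]])  # first column of the circulant
    xp = np.concatenate([x, np.zeros(d - 1)])
    full = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp)).real
    return full[:d]

rng = np.random.default_rng(4)
d = 16
c = rng.standard_normal(d)
r = rng.standard_normal(d)
r[0] = c[0]
x = rng.standard_normal(d)
y = toeplitz_matvec(c, r, x)
```

The result agrees with the dense Toeplitz product while never forming the matrix, which is what makes the Toeplitz approach attractive as a fast JL transform.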
Sparser Johnson-Lindenstrauss Transforms
We give two different and simple constructions for dimensionality reduction
in ℓ_2 via linear mappings that are sparse: only an
O(ε)-fraction of entries in each column of our embedding matrices
are non-zero to achieve distortion 1 ± ε with high probability, while
still achieving the asymptotically optimal number of rows. These are the first
constructions to provide subconstant sparsity for all values of parameters,
improving upon previous works of Achlioptas (JCSS 2003) and Dasgupta, Kumar,
and Sarl\'{o}s (STOC 2010). Such distributions can be used to speed up
applications where dimensionality reduction is used.
Comment: v6: journal version, minor changes, added Remark 23; v5: modified
abstract, fixed typos, added open problem section; v4: simplified section 4
by giving 1 analysis that covers both constructions; v3: proof of Theorem 25
in v2 was written incorrectly, now fixed; v2: Added another construction
achieving same upper bound, and added proof of near-tight lower bound for DKS
scheme
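One way such sparse embedding matrices can be realized is a block construction: each column receives exactly s nonzero entries equal to ±1/√s, one in each of s blocks of rows. The numpy sketch below is a simplified illustration of that idea with arbitrary parameter values, not a faithful reproduction of either construction in the paper:

```python
import numpy as np

def sparse_jl_matrix(m, d, s, rng):
    """Sample an m x d sparse embedding matrix: each column has exactly
    s nonzeros equal to +-1/sqrt(s), one in each of s equal row blocks,
    at an independently chosen position with an independent sign."""
    assert m % s == 0
    block = m // s
    S = np.zeros((m, d))
    for j in range(d):
        rows = np.arange(s) * block + rng.integers(0, block, size=s)
        signs = rng.choice([-1.0, 1.0], size=s)
        S[rows, j] = signs / np.sqrt(s)
    return S

rng = np.random.default_rng(3)
m, d, s = 64, 1000, 8
S = sparse_jl_matrix(m, d, s, rng)
```

Applying S to a vector costs time proportional to s times the number of nonzero input coordinates, rather than m times, which is the speedup such distributions provide.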