36 research outputs found

    A variant of the Johnson-Lindenstrauss lemma for circulant matrices

    We continue our study of the Johnson-Lindenstrauss lemma and its connection to circulant matrices started in \cite{HV}. We reduce the bound on $k$ from the $k=O(\epsilon^{-2}\log^3 n)$ proven there to $k=O(\epsilon^{-2}\log^2 n)$. Our technique differs essentially from the one used in \cite{HV}. We employ the discrete Fourier transform and singular value decomposition to deal with the dependency caused by the circulant structure.

    New bounds for circulant Johnson-Lindenstrauss embeddings

    This paper analyzes circulant Johnson-Lindenstrauss (JL) embeddings which, as an important class of structured random JL embeddings, are formed by randomizing the column signs of a circulant matrix generated by a random vector. With the help of recent decoupling techniques and matrix-valued Bernstein inequalities, we obtain a new bound $k=O(\epsilon^{-2}\log^{1+\delta} n)$ for Gaussian circulant JL embeddings. Moreover, by using the Laplace transform technique (also called Bernstein's trick), we extend the result to the subgaussian case. The bounds in this paper offer a small improvement over the current best bounds for Gaussian circulant JL embeddings for certain parameter regimes and are derived using more direct methods. Comment: 11 pages; accepted by Communications in Mathematical Science
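    The construction this abstract describes — a circulant matrix generated by a random vector, with randomized column signs — can be applied fast because multiplying by a circulant matrix is a circular convolution. The sketch below is a hypothetical illustration of that standard setup, not the authors' code; the Gaussian generating vector, Rademacher signs, and $1/\sqrt{k}$ normalization are assumptions.

```python
import numpy as np

def circulant_jl_embed(x, g, signs, k):
    """Project x into k dimensions with a circulant matrix generated by g.

    Multiplying by a circulant matrix is a circular convolution, so it
    costs O(n log n) via the FFT; we flip column signs first and keep
    the first k coordinates, scaled by 1/sqrt(k).
    """
    y = np.fft.ifft(np.fft.fft(g) * np.fft.fft(signs * x)).real
    return y[:k] / np.sqrt(k)

rng = np.random.default_rng(0)
n, k = 1024, 64
g = rng.standard_normal(n)           # random generating vector (Gaussian case)
signs = rng.choice([-1.0, 1.0], n)   # Rademacher column signs
x = rng.standard_normal(n)
x /= np.linalg.norm(x)               # unit vector, so ||y|| should be near 1
y = circulant_jl_embed(x, g, signs, k)
```

In expectation the embedding preserves the squared norm of `x`; the papers above are about how large `k` must be for that to hold uniformly over a point set.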

    Restricted Isometries for Partial Random Circulant Matrices

    In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the $s$th-order restricted isometry constant is small when the number $m$ of samples satisfies $m \gtrsim (s \log n)^{3/2}$, where $n$ is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.
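    The measurement model in this abstract — circular convolution with a random pulse followed by nonrandom subsampling — can be sketched as follows. This is a minimal illustration under assumed parameters; the Rademacher pulse, the sparsity level, and the choice of sample positions are placeholders, not taken from the paper.

```python
import numpy as np

def partial_circulant_measure(x, pulse, sample_idx):
    """Convolve the signal with a pulse (circularly, via FFT), then
    keep only the samples at the fixed, nonrandom positions."""
    conv = np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(x)).real
    return conv[sample_idx]

rng = np.random.default_rng(1)
n, s = 512, 8                             # signal length and sparsity
m = int((s * np.log(n)) ** 1.5)           # m ≳ (s log n)^{3/2}, per the bound
pulse = rng.choice([-1.0, 1.0], n)        # random pulse (assumed Rademacher)
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)  # s-sparse signal
idx = np.arange(m)                        # deterministic subsampling positions
y = partial_circulant_measure(x, pulse, idx)
```

The point of the paper is that `m` samples of this form suffice for the partial circulant matrix to act as a restricted isometry on `s`-sparse vectors.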

    On Using Toeplitz and Circulant Matrices for Johnson-Lindenstrauss Transforms

    The Johnson-Lindenstrauss lemma is one of the cornerstone results in dimensionality reduction. It says that for any set of $N$ vectors $X \subset \mathbb{R}^n$, there exists a mapping $f : X \to \mathbb{R}^m$ such that $f(X)$ preserves all pairwise distances between vectors in $X$ to within $(1 \pm \varepsilon)$ if $m = O(\varepsilon^{-2} \lg N)$. Much effort has gone into developing fast embedding algorithms, with the Fast Johnson-Lindenstrauss transform of Ailon and Chazelle being one of the most well-known techniques. The current fastest algorithm that yields the optimal $m = O(\varepsilon^{-2} \lg N)$ dimensions has an embedding time of $O(n \lg n + \varepsilon^{-2} \lg^3 N)$. An exciting approach towards improving this, due to Hinrichs and Vyb\'iral, is to use a random $m \times n$ Toeplitz matrix for the embedding. Using the Fast Fourier Transform, the embedding of a vector can then be computed in $O(n \lg m)$ time. The big question is of course whether $m = O(\varepsilon^{-2} \lg N)$ dimensions suffice for this technique. If so, this would end a decades-long quest to obtain faster and faster Johnson-Lindenstrauss transforms. The current best analysis of the embedding of Hinrichs and Vyb\'iral shows that $m = O(\varepsilon^{-2} \lg^2 N)$ dimensions suffice. The main result of this paper is a proof that this analysis unfortunately cannot be tightened any further, i.e., there exists a set of $N$ vectors requiring $m = \Omega(\varepsilon^{-2} \lg^2 N)$ for the Toeplitz approach to work.
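    The fast embedding referred to in this abstract relies on a Toeplitz matrix–vector product being a linear convolution, computable with the FFT. A minimal sketch of that mechanism follows; the Gaussian diagonals and the $1/\sqrt{m}$ normalization are assumptions for illustration, not details from the paper.

```python
import numpy as np

def toeplitz_jl_embed(x, t, m):
    """Apply an m x n Toeplitz matrix T, with T[i, j] = t[i - j + n - 1],
    to x via FFT-based linear convolution.

    Zero-padding both factors to the full convolution length avoids
    circular wraparound; the cost is O((n + m) log(n + m)).
    """
    n = x.shape[0]
    L = 2 * n + m - 2                          # full linear convolution length
    conv = np.fft.ifft(np.fft.fft(t, L) * np.fft.fft(x, L)).real
    return conv[n - 1:n - 1 + m] / np.sqrt(m)  # rows 0..m-1 of T @ x

rng = np.random.default_rng(2)
n, m = 256, 32
t = rng.standard_normal(n + m - 1)             # the n + m - 1 Toeplitz diagonals
x = rng.standard_normal(n)
y = toeplitz_jl_embed(x, t, m)
```

The lower bound in the paper concerns how small `m` can be for this kind of matrix, not the multiplication trick itself, which is standard.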

    Sparser Johnson-Lindenstrauss Transforms

    We give two different and simple constructions for dimensionality reduction in $\ell_2$ via linear mappings that are sparse: only an $O(\varepsilon)$-fraction of entries in each column of our embedding matrices are non-zero to achieve distortion $1+\varepsilon$ with high probability, while still achieving the asymptotically optimal number of rows. These are the first constructions to provide subconstant sparsity for all values of parameters, improving upon previous works of Achlioptas (JCSS 2003) and Dasgupta, Kumar, and Sarl\'{o}s (STOC 2010). Such distributions can be used to speed up applications where $\ell_2$ dimensionality reduction is used.
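    One way to realize a sparse embedding in this spirit is a block-style construction: split the $m$ output rows into $s$ blocks and give each input coordinate one signed entry per block. The sketch below is a hypothetical illustration of that idea; the parameters, the assumption that $s$ divides $m$, and the $1/\sqrt{s}$ normalization are choices made here, not details from the paper.

```python
import numpy as np

def sparse_jl_embed(x, m, s, rng):
    """Sparse JL sketch: the implicit m x n matrix has exactly s
    nonzeros per column, each ±1/sqrt(s). Rows are split into s
    disjoint blocks, and each column places one signed entry in a
    random row of every block, so the s nonzeros never collide."""
    n = x.shape[0]
    y = np.zeros(m)
    block = m // s                                # assumes s divides m
    for b in range(s):
        rows = b * block + rng.integers(0, block, size=n)
        signs = rng.choice([-1.0, 1.0], size=n)
        np.add.at(y, rows, signs * x)             # scatter-add with repeats
    return y / np.sqrt(s)

rng = np.random.default_rng(3)
x = rng.standard_normal(1000)
y = sparse_jl_embed(x, m=64, s=4, rng=rng)
```

Because each column touches only `s` of the `m` rows, embedding a vector with `nnz(x)` nonzeros costs `O(s * nnz(x))` rather than `O(m * nnz(x))`, which is the speedup the abstract refers to.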
