
    An Improved Lower Bound for Sparse Reconstruction from Subsampled Hadamard Matrices

    We give a short argument that yields a new lower bound on the number of subsampled rows from a bounded, orthonormal matrix necessary to form a matrix with the restricted isometry property. We show that a matrix formed by uniformly subsampling rows of an $N \times N$ Hadamard matrix contains a $K$-sparse vector in the kernel, unless the number of subsampled rows is $\Omega(K \log K \log(N/K))$; our lower bound applies whenever $\min(K, N/K) > \log^C N$. Containing a sparse vector in the kernel precludes not only the restricted isometry property, but more generally the application of those matrices for uniform sparse recovery. Comment: Improved exposition and added an author.
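
    A minimal sketch (not from the paper) of the measurement setup the abstract describes: uniformly subsample m rows of a normalized $N \times N$ Hadamard matrix. The function name and the rescaling convention are illustrative assumptions; scipy.linalg.hadamard requires N to be a power of two.

        # Uniformly subsample rows of an orthonormal N x N Hadamard matrix.
        import numpy as np
        from scipy.linalg import hadamard

        def subsampled_hadamard(N, m, seed=0):
            """Return m uniformly chosen rows of the normalized Hadamard matrix, rescaled."""
            rng = np.random.default_rng(seed)
            H = hadamard(N) / np.sqrt(N)                  # orthonormal rows and columns
            rows = rng.choice(N, size=m, replace=False)   # uniform subsampling
            return np.sqrt(N / m) * H[rows, :]            # rescale so E[A.T @ A] = I

        # The lower bound says m must be Omega(K log K log(N/K)) before such a
        # matrix can satisfy the RIP of order K (for the stated range of K).
        A = subsampled_hadamard(N=1024, m=200)
        print(A.shape)  # (200, 1024)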

    An Improved Lower Bound for Sparse Reconstruction from Subsampled Walsh Matrices

    We give a short argument that yields a new lower bound on the number of uniformly and independently subsampled rows from a bounded, orthonormal matrix necessary to form a matrix with the restricted isometry property. We show that a matrix formed by uniformly and independently subsampling rows of an $N \times N$ Walsh matrix contains a $K$-sparse vector in the kernel, unless the number of subsampled rows is $\Omega(K \log K \log(N/K))$; our lower bound applies whenever $\min(K, N/K) > \log^C N$. Containing a sparse vector in the kernel precludes not only the restricted isometry property, but more generally the application of those matrices for uniform sparse recovery.
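
    A small worked illustration (my own, using a toy matrix, not one from the paper) of why a sparse vector in the kernel rules out uniform sparse recovery: if $Av = 0$ for a nonzero $K$-sparse $v$, then $v$ and the zero vector produce identical measurements, so no decoder can recover every $K$-sparse signal from $Ax$.

        import numpy as np

        # Toy 2 x 4 matrix chosen so that a 2-sparse vector lies in its kernel.
        A = np.array([[1.0, -1.0, 0.0, 2.0],
                      [0.0,  0.0, 1.0, 1.0]])
        v = np.array([1.0, 1.0, 0.0, 0.0])   # 2-sparse, and A @ v = 0

        assert np.allclose(A @ v, 0)
        # v and the zero vector are indistinguishable from their measurements,
        # so A cannot support uniform 2-sparse recovery.
        print(np.allclose(A @ v, A @ np.zeros(4)))  # True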

    Sparser Johnson-Lindenstrauss Transforms

    We give two different and simple constructions for dimensionality reduction in $\ell_2$ via linear mappings that are sparse: only an $O(\varepsilon)$-fraction of entries in each column of our embedding matrices are non-zero to achieve distortion $1+\varepsilon$ with high probability, while still achieving the asymptotically optimal number of rows. These are the first constructions to provide subconstant sparsity for all values of parameters, improving upon previous works of Achlioptas (JCSS 2003) and Dasgupta, Kumar, and Sarlós (STOC 2010). Such distributions can be used to speed up applications where $\ell_2$ dimensionality reduction is used. Comment: v6: journal version, minor changes, added Remark 23; v5: modified abstract, fixed typos, added open problem section; v4: simplified section 4 by giving 1 analysis that covers both constructions; v3: proof of Theorem 25 in v2 was written incorrectly, now fixed; v2: added another construction achieving same upper bound, and added proof of near-tight lower bound for the DKS scheme.
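
    A minimal sketch (not the paper's exact distribution) of a sparse JL-style embedding: each column of the $m \times n$ matrix receives s nonzero entries, each $\pm 1/\sqrt{s}$, at uniformly chosen rows. The function name and the parameter values in the usage example are illustrative assumptions.

        import numpy as np

        def sparse_jl(n, m, s, seed=0):
            """m x n embedding with exactly s nonzeros (each +-1/sqrt(s)) per column."""
            rng = np.random.default_rng(seed)
            S = np.zeros((m, n))
            for j in range(n):
                rows = rng.choice(m, size=s, replace=False)        # s random rows per column
                S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
            return S

        # Distortion check on a single vector: the embedded norm should be
        # within roughly a (1 +- eps) factor of the original norm.
        x = np.random.default_rng(1).normal(size=10_000)
        S = sparse_jl(n=10_000, m=400, s=20)
        print(np.linalg.norm(S @ x) / np.linalg.norm(x))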

    Restricted Isometry Property for General p-Norms

    The Restricted Isometry Property (RIP) is a fundamental property of a matrix which enables sparse recovery. Informally, an $m \times n$ matrix satisfies RIP of order $k$ for the $\ell_p$ norm if $\|Ax\|_p \approx \|x\|_p$ for every vector $x$ with at most $k$ non-zero coordinates. For every $1 \leq p < \infty$ we obtain almost tight bounds on the minimum number of rows $m$ necessary for the RIP to hold. Prior to this work, only the cases $p = 1$, $1 + 1/\log k$, and $2$ were studied. Interestingly, our results show that the case $p = 2$ is a "singularity" point: the optimal number of rows $m$ is $\widetilde{\Theta}(k^{p})$ for all $p \in [1,\infty) \setminus \{2\}$, as opposed to $\widetilde{\Theta}(k)$ for $p = 2$. We also obtain almost tight bounds for the column sparsity of RIP matrices and discuss implications of our results for the Stable Sparse Recovery problem. Comment: An extended abstract of this paper is to appear at the 31st International Symposium on Computational Geometry (SoCG 2015).
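
    A minimal sketch (my own, not from the paper) of empirically probing the $\ell_p$ restricted isometry ratio $\|Ax\|_p / \|x\|_p$ over random $k$-sparse test vectors; this is only a sanity check on random draws, not a certificate that a matrix satisfies RIP.

        import numpy as np

        def rip_ratio_range(A, k, p, trials=2000, seed=0):
            """Min/max of ||A x||_p / ||x||_p over random k-sparse test vectors."""
            rng = np.random.default_rng(seed)
            m, n = A.shape
            ratios = []
            for _ in range(trials):
                x = np.zeros(n)
                support = rng.choice(n, size=k, replace=False)   # random k-sparse support
                x[support] = rng.normal(size=k)
                ratios.append(np.linalg.norm(A @ x, ord=p) / np.linalg.norm(x, ord=p))
            return min(ratios), max(ratios)

        # Example for p = 2 with a Gaussian matrix scaled so that columns have
        # unit norm in expectation; both reported values should be close to 1.
        m, n, k = 100, 400, 5
        A = np.random.default_rng(1).normal(size=(m, n)) / np.sqrt(m)
        print(rip_ratio_range(A, k=k, p=2))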

    Random projections for Bayesian regression

    This article deals with random projections applied as a data reduction technique for Bayesian regression analysis. We show sufficient conditions under which the entire $d$-dimensional distribution is approximately preserved under random projections by reducing the number of data points from $n$ to $k \in O(\operatorname{poly}(d/\varepsilon))$ in the case $n \gg d$. Under mild assumptions, we prove that evaluating a Gaussian likelihood function based on the projected data instead of the original data yields a $(1+O(\varepsilon))$-approximation in terms of the $\ell_2$ Wasserstein distance. Our main result shows that the posterior distribution of Bayesian linear regression is approximated up to a small error depending on only an $\varepsilon$-fraction of its defining parameters. This holds when using arbitrary Gaussian priors or the degenerate case of uniform distributions over $\mathbb{R}^d$ for $\beta$. Our empirical evaluations involve different simulated settings of Bayesian linear regression. Our experiments underline that the proposed method is able to recover the regression model up to small error while considerably reducing the total running time.
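
    A minimal sketch of the idea (under illustrative assumptions: a Gaussian projection matrix, a $N(0, \tau^2 I)$ prior on $\beta$, and a known noise level $\sigma$; the paper's exact construction and guarantees may differ): compress $(X, y)$ from $n$ observations to $k$ projected rows and compute the conjugate-Gaussian posterior from the projected data.

        import numpy as np

        def sketched_posterior(X, y, k, sigma=1.0, tau=10.0, seed=0):
            """Gaussian posterior for beta computed from randomly projected data."""
            rng = np.random.default_rng(seed)
            n, d = X.shape
            Pi = rng.normal(size=(k, n)) / np.sqrt(k)   # random projection of the n data points
            Xs, ys = Pi @ X, Pi @ y                     # k x d and length-k summaries
            precision = (Xs.T @ Xs) / sigma**2 + np.eye(d) / tau**2
            cov = np.linalg.inv(precision)
            mean = cov @ (Xs.T @ ys) / sigma**2
            return mean, cov

        # Toy usage with n >> d: reduce 50,000 observations to k = 2,000.
        rng = np.random.default_rng(2)
        X = rng.normal(size=(50_000, 10))
        beta = rng.normal(size=10)
        y = X @ beta + rng.normal(size=50_000)
        mean, cov = sketched_posterior(X, y, k=2_000)
        print(np.linalg.norm(mean - beta))  # small if the projection preserves the likelihood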