
    An Improved Lower Bound for Sparse Reconstruction from Subsampled Hadamard Matrices

    We give a short argument that yields a new lower bound on the number of rows that must be subsampled from a bounded orthonormal matrix in order to form a matrix with the restricted isometry property. We show that a matrix formed by uniformly subsampling rows of an $N \times N$ Hadamard matrix contains a $K$-sparse vector in its kernel unless the number of subsampled rows is $\Omega(K \log K \log(N/K))$; our lower bound applies whenever $\min(K, N/K) > \log^C N$. Containing a sparse vector in the kernel precludes not only the restricted isometry property but, more generally, the application of such matrices to uniform sparse recovery.
    Comment: Improved exposition and added an author.

    Sketching via hashing: from heavy hitters to compressed sensing to sparse fourier transform

    Sketching via hashing is a popular and useful method for processing large data sets. Its basic idea is as follows. Suppose that we have a large multi-set of elements S=[formula], and we would like to identify the elements that occur "frequently" in S. The algorithm starts by selecting a hash function h that maps the elements into an array c[1…m]. The array entries are initialized to 0. Then, for each element a ∈ S, the algorithm increments c[h(a)]. At the end of the process, each array entry c[j] contains the count of all data elements a ∈ S mapped to j.
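    The description above translates almost directly into code. The following minimal Python sketch is an illustration only: it uses a single salted hash in place of the random hash function h and reads the estimate for an element straight from c[h(a)]; practical heavy-hitters sketches typically combine several such arrays with independent hash functions to control collision error.

```python
import random
from collections import Counter

def build_sketch(stream, m, seed=0):
    """Hash every element into an array of m counters (the basic idea described above)."""
    rng = random.Random(seed)
    salt = rng.getrandbits(64)          # stands in for choosing a random hash function h
    h = lambda a: hash((salt, a)) % m
    c = [0] * m
    for a in stream:
        c[h(a)] += 1                    # increment the counter the element hashes to
    return c, h

def estimate(c, h, a):
    """c[h(a)] upper-bounds the true count of a, since collisions only add."""
    return c[h(a)]

if __name__ == "__main__":
    stream = ["x"] * 1000 + ["y"] * 500 + list(range(200))   # two heavy hitters plus noise
    c, h = build_sketch(stream, m=64)
    true_counts = Counter(stream)
    for a in ["x", "y", 7]:
        print(a, "true:", true_counts[a], "estimate:", estimate(c, h, a))
```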

    Isometric sketching of any set via the Restricted Isometry Property

    In this paper we show that, for the purposes of dimensionality reduction, a certain class of structured random matrices behaves similarly to random Gaussian matrices. This class includes several matrices for which the matrix-vector multiply can be computed in log-linear time, providing efficient dimensionality reduction of general sets. In particular, we show that using such matrices any set can be embedded from high dimension into lower dimension with near-optimal distortion. We obtain our results by connecting dimensionality reduction of any set to dimensionality reduction of sparse vectors via a chaining argument.
    Comment: 17 pages.
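    As a purely illustrative instance of such a structured map, the Python sketch below applies a random diagonal sign flip followed by a unitary FFT and uniform row subsampling, so each embedding costs O(n log n); the particular construction and the dimensions n and m are assumptions made for the example and are not taken from the paper.

```python
import numpy as np

def structured_embedding(n, m, rng):
    """Return a map x -> sqrt(n/m) * (m subsampled coordinates of the FFT of a sign-flipped x)."""
    signs = rng.choice([-1.0, 1.0], size=n)       # random diagonal sign matrix
    rows = rng.choice(n, size=m, replace=False)   # uniformly subsampled output coordinates
    def embed(x):
        y = np.fft.fft(signs * x) / np.sqrt(n)    # unitary DFT, computable in O(n log n)
        return np.sqrt(n / m) * y[rows]
    return embed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 1024, 128
    embed = structured_embedding(n, m, rng)
    # Norms of a few test vectors should be roughly preserved by the embedding.
    for _ in range(5):
        x = rng.standard_normal(n)
        print(f"norm ratio: {np.linalg.norm(embed(x)) / np.linalg.norm(x):.3f}")
```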

    The Restricted Isometry Property of Subsampled Fourier Matrices

    A matrix $A \in \mathbb{C}^{q \times N}$ satisfies the restricted isometry property of order $k$ with constant $\varepsilon$ if it preserves the $\ell_2$ norm of all $k$-sparse vectors up to a factor of $1 \pm \varepsilon$. We prove that a matrix $A$ obtained by randomly sampling $q = O(k \cdot \log^2 k \cdot \log N)$ rows from an $N \times N$ Fourier matrix satisfies the restricted isometry property of order $k$ with a fixed $\varepsilon$ with high probability. This improves on Rudelson and Vershynin (Comm. Pure Appl. Math., 2008), its subsequent improvements, and Bourgain (GAFA Seminar Notes, 2014).
    Comment: 16 pages.
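    To make the construction concrete, the sketch below (an illustration, not the paper's proof technique) forms a subsampled Fourier matrix in Python and checks that the norms of a few random $k$-sparse unit vectors are roughly preserved; this spot check on random vectors is far weaker than the restricted isometry property, which is uniform over all $k$-sparse vectors, and the values of N, k, and q here are chosen arbitrarily.

```python
import numpy as np

def subsampled_fourier(N, q, rng):
    """q rows drawn uniformly from the N x N unitary DFT matrix, rescaled by sqrt(N/q)."""
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)        # unitary N x N Fourier matrix
    rows = rng.choice(N, size=q, replace=False)
    return np.sqrt(N / q) * F[rows]

def random_sparse_unit(N, k, rng):
    """A unit-norm vector supported on k random coordinates."""
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    return x / np.linalg.norm(x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, k, q = 512, 8, 150                          # illustrative sizes only
    A = subsampled_fourier(N, q, rng)
    ratios = [np.linalg.norm(A @ random_sparse_unit(N, k, rng)) for _ in range(10)]
    print("range of ||Ax|| over random k-sparse unit x:",
          f"{min(ratios):.3f} to {max(ratios):.3f}")
```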

    On the List-Decodability of Random Linear Rank-Metric Codes

    The list-decodability of random linear rank-metric codes is shown to match that of random rank-metric codes. Specifically, an $\mathbb{F}_q$-linear rank-metric code over $\mathbb{F}_q^{m \times n}$ of rate $R = (1-\rho)(1-\frac{n}{m}\rho)-\varepsilon$ is shown to be (with high probability) list-decodable up to fractional radius $\rho \in (0,1)$ with lists of size at most $\frac{C_{\rho,q}}{\varepsilon}$, where $C_{\rho,q}$ is a constant depending only on $\rho$ and $q$. This matches the bound for random rank-metric codes (up to constant factors). The proof adapts the approach of Guruswami, Håstad, and Kopparty (STOC 2010), who established a similar result for the Hamming metric, to the rank-metric setting.

    Two new results about quantum exact learning

    We present two new results about exact learning by quantum computers. First, we show how to exactly learn a $k$-Fourier-sparse $n$-bit Boolean function from $O(k^{1.5}(\log k)^2)$ uniform quantum examples for that function. This improves over the bound of $\widetilde{\Theta}(kn)$ uniformly random classical examples (Haviv and Regev, CCC'15). Our main tool is an improvement of Chang's lemma for the special case of sparse functions. Second, we show that if a concept class $\mathcal{C}$ can be exactly learned using $Q$ quantum membership queries, then it can also be learned using $O\left(\frac{Q^2}{\log Q}\log|\mathcal{C}|\right)$ classical membership queries. This improves the previous-best simulation result (Servedio and Gortler, SICOMP'04) by a factor of $\log Q$.
    Comment: v3: 21 pages. Small corrections and clarifications.