35 research outputs found
An Improved Lower Bound for Sparse Reconstruction from Subsampled Hadamard Matrices
We give a short argument that yields a new lower bound on the number of rows
that must be subsampled from a bounded orthonormal matrix to form a matrix
with the restricted isometry property. We show that a matrix formed by
uniformly subsampling rows of an N x N Hadamard matrix contains a
K-sparse vector in the kernel unless the number of subsampled rows is
Ω(K log K · log(N/K)); our lower bound applies whenever min(K, N/K) > log^C N.
Containing a sparse vector in the kernel precludes not only
the restricted isometry property, but more generally the application of such
matrices for uniform sparse recovery.
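A toy example (my own, not from the paper) makes the kernel obstruction concrete: if a measurement matrix has a 2-sparse vector v in its kernel, then two distinct 1-sparse signals differing by v produce identical measurements, so no decoder can recover both.

```python
# Illustrative measurement matrix A (not from the paper) whose kernel
# contains the 2-sparse vector v = (1, -1, 0). Since A v = 0, the two
# distinct 1-sparse signals x1 and x2 = x1 - v yield the same measurements
# A x1 == A x2, ruling out uniform 1-sparse recovery for this A.
A = [[1, 1, 0],
     [0, 0, 1]]
v = [1, -1, 0]          # 2-sparse kernel vector

def matvec(M, x):
    # plain matrix-vector product over nested Python lists
    return [sum(a * b for a, b in zip(row, x)) for row in M]

x1 = [1, 0, 0]
x2 = [0, 1, 0]          # x2 = x1 - v, so A x1 == A x2
```

Any decoder seeing the measurement A x1 cannot tell whether the signal was x1 or x2, which is exactly why a sparse kernel vector precludes uniform sparse recovery.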
Sketching via hashing: from heavy hitters to compressed sensing to sparse fourier transform
Sketching via hashing is a popular and useful method for processing large data sets. Its basic idea is as follows. Suppose that we have a large multiset of elements S = {a_1, ..., a_n}, and we would like to identify the elements that occur "frequently" in S. The algorithm starts by selecting a hash function h that maps the elements into an array c[1...m]. The array entries are initialized to 0. Then, for each element a ∈ S, the algorithm increments c[h(a)]. At the end of the process, each array entry c[j] contains the count of all data elements a ∈ S mapped to j.
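The scheme above can be sketched in a few lines of Python, here extended with the standard multi-row/minimum trick (a Count-Min-style estimator); the class name, parameters, and salting scheme are illustrative choices, not from the text:

```python
import random

class CountMinSketch:
    """Sketching via hashing: d hash rows of width m; the frequency estimate
    for an element is the minimum of the d counters it maps to, since hash
    collisions can only inflate a counter, never decrease it."""

    def __init__(self, d=5, m=256, seed=0):
        rng = random.Random(seed)
        self.salts = [rng.getrandbits(64) for _ in range(d)]  # one hash per row
        self.m = m
        self.counts = [[0] * m for _ in range(d)]

    def _h(self, salt, a):
        return hash((salt, a)) % self.m

    def add(self, a):
        # increment c[h(a)] in every row, as described in the text
        for salt, row in zip(self.salts, self.counts):
            row[self._h(salt, a)] += 1

    def estimate(self, a):
        # each row over-counts due to collisions, so the minimum is tightest
        return min(row[self._h(salt, a)]
                   for salt, row in zip(self.salts, self.counts))
```

Feeding in one element 1000 times alongside 50 singletons, `estimate` returns a value between the true count and the true count plus the total colliding mass, which is what makes the sketch useful for finding heavy hitters.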
Isometric sketching of any set via the Restricted Isometry Property
In this paper we show that, for the purposes of dimensionality reduction, a
certain class of structured random matrices behaves similarly to random Gaussian
matrices. This class includes several matrices for which the matrix-vector multiply
can be computed in log-linear time, providing efficient dimensionality
reduction of general sets. In particular, we show that using such matrices any
set from high dimensions can be embedded into lower dimensions with near-optimal
distortion. We obtain our results by connecting dimensionality
reduction of any set to dimensionality reduction of sparse vectors via a
chaining argument.
The Restricted Isometry Property of Subsampled Fourier Matrices
A matrix A ∈ C^{q x N} satisfies the restricted isometry
property of order k with constant ε if it preserves the ℓ2
norm of all k-sparse vectors up to a factor of 1 ± ε. We prove
that a matrix obtained by randomly sampling q = O(k · log^2 k · log N) rows
from an N x N Fourier matrix satisfies the restricted
isometry property of order k with a fixed ε with high
probability. This improves on Rudelson and Vershynin (Comm. Pure Appl. Math.,
2008), its subsequent improvements, and Bourgain (GAFA Seminar Notes, 2014).
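The norm-preservation property can be illustrated numerically. The following toy sketch (my own construction, not the paper's proof) subsamples q rows of a small unitary DFT matrix, rescales by sqrt(N/q), and checks that the squared norm of random unit-norm sparse vectors is roughly preserved on average:

```python
import cmath
import random

def dft_row(i, N):
    # i-th row of the N x N unitary DFT matrix (entries of modulus 1/sqrt(N))
    return [cmath.exp(-2j * cmath.pi * i * j / N) / N ** 0.5 for j in range(N)]

def sketch_norm_sq(rows, x, N):
    # ||A x||^2 for A = sqrt(N/q) * (the q subsampled rows of the unitary DFT)
    q = len(rows)
    return (N / q) * sum(
        abs(sum(f * xj for f, xj in zip(dft_row(i, N), x))) ** 2 for i in rows)

def random_sparse(N, k, rng):
    # k-sparse vector with unit l2 norm: random support, entries +-1/sqrt(k)
    x = [0.0] * N
    for j in rng.sample(range(N), k):
        x[j] = rng.choice((-1.0, 1.0)) / k ** 0.5
    return x

rng = random.Random(0)
N, q, k = 16, 8, 3
rows = rng.sample(range(N), q)  # uniform row subsample
ratios = [sketch_norm_sq(rows, random_sparse(N, k, rng), N) for _ in range(200)]
mean_ratio = sum(ratios) / len(ratios)  # concentrates near ||x||^2 = 1
```

When all N rows are kept, the rescaled matrix is exactly unitary and the norm is preserved to machine precision; subsampling trades that exactness for the approximate, high-probability guarantee the paper quantifies.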
On the List-Decodability of Random Linear Rank-Metric Codes
The list-decodability of random linear rank-metric codes is shown to match
that of random rank-metric codes. Specifically, an F_q-linear
rank-metric code over F_q^{m x n} of rate R = (1 - ρ)(1 - (n/m)ρ) - ε is shown
to be (with high probability) list-decodable up to fractional radius ρ with
lists of size at most C_{ρ,q}/ε, where C_{ρ,q} is a constant
depending only on ρ and q. This matches the bound for random rank-metric
codes (up to constant factors). The proof adapts the approach of Guruswami,
Håstad, and Kopparty (STOC 2010), who established a similar result for the
Hamming-metric case, to the rank-metric setting.
Two new results about quantum exact learning
We present two new results about exact learning by quantum computers. First,
we show how to exactly learn a k-Fourier-sparse n-bit Boolean function from
O(k^{1.5} (log k)^2) uniform quantum examples for that function. This
improves over the bound of Θ(kn) uniformly random classical
examples (Haviv and Regev, CCC'15). Our main tool is an improvement of Chang's
lemma for the special case of sparse functions. Second, we show that if a
concept class C can be exactly learned using Q quantum membership
queries, then it can also be learned using O(Q^2 / log Q · log |C|) classical
membership queries. This improves the previous-best simulation result
(Servedio and Gortler, SICOMP'04) by a log Q factor.