One-Bit ExpanderSketch for One-Bit Compressed Sensing
Is it possible to obliviously construct a set of hyperplanes H such that one can approximate a unit vector x when given the side on which the vector lies with respect to every h in H? In the sparse recovery literature, where x is approximately k-sparse, this problem is called one-bit compressed sensing and has received a fair amount of attention over the last decade. In this paper we obtain the first scheme that achieves almost optimal measurements and sublinear decoding time for one-bit compressed sensing in the non-uniform case. For a large range of parameters, we improve the state of the art in both the number of measurements and the decoding time.
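As a concrete (and deliberately naive) illustration of the measurement model, the Python sketch below takes one-bit measurements with a random Gaussian matrix and decodes by hard-thresholding the backprojection $\Phi^\top y$. This is not the paper's expander-based scheme: the matrix, the decoder, and all parameter choices are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, m = 1000, 5, 2000                  # ambient dim, sparsity, measurements

    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    x /= np.linalg.norm(x)                   # unit k-sparse signal

    Phi = rng.standard_normal((m, n))        # one random hyperplane per row
    y = np.sign(Phi @ x)                     # one bit: the side of each hyperplane

    z = Phi.T @ y                            # backprojection; E[z] is proportional to x
    top = np.argsort(np.abs(z))[-k:]         # keep the k largest coordinates
    x_hat = np.zeros(n)
    x_hat[top] = z[top]
    x_hat /= np.linalg.norm(x_hat)

    print("l2 error:", np.linalg.norm(x - x_hat))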
Sublinear-Time Algorithms for Compressive Phase Retrieval
In the compressive phase retrieval problem, also called phaseless compressed sensing or compressed sensing from intensity-only measurements, the goal is to reconstruct a sparse or approximately $k$-sparse vector $x \in \mathbb{R}^n$ given access to $y = |\Phi x|$, where $|\cdot|$ denotes taking absolute values coordinate-wise. In this paper we present sublinear-time algorithms for different variants of the compressive phase retrieval problem which are akin to the variants considered for the classical compressive sensing problem in theoretical computer science. Our algorithms use purely combinatorial techniques and a near-optimal number of measurements.
Comment: the $\ell_2/\ell_2$ algorithm was substituted by a modification of the $\ell_\infty/\ell_2$ algorithm, which strictly subsumes it.
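The following minimal Python sketch pins down the phaseless measurement model $y = |\Phi x|$ and the global-sign ambiguity it entails for real signals; the paper's sublinear-time combinatorial algorithms are not reproduced here, and the Gaussian matrix is an illustrative stand-in.

    import numpy as np

    rng = np.random.default_rng(1)
    n, k, m = 200, 3, 50

    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

    Phi = rng.standard_normal((m, n))
    y = np.abs(Phi @ x)          # intensity-only measurements

    # x and -x produce identical measurements, so recovery is only possible
    # up to a global sign (a global phase, in the complex case).
    assert np.allclose(y, np.abs(Phi @ -x))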
Nearly Optimal Sparse Polynomial Multiplication
In the sparse polynomial multiplication problem, one is asked to multiply two sparse polynomials f and g in time that is proportional to the size of the input plus the size of the output. The polynomials are given via lists of their coefficients F and G, respectively. Cole and Hariharan (STOC 02) gave a nearly optimal algorithm when the coefficients are positive, and Arnold and Roche (ISSAC 15) devised an algorithm running in time proportional to the "structural sparsity" of the product, i.e. the size of the set supp(F)+supp(G). The latter algorithm is particularly efficient when there are not "too many cancellations" of coefficients in the product. In this work we give a clean, nearly optimal algorithm for the sparse polynomial multiplication problem.
Comment: accepted to IEEE Transactions on Information Theory.
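For concreteness, here is the naive baseline for the problem in Python, representing a sparse polynomial as a dictionary from exponents to nonzero coefficients. It runs in time $O(|F| \cdot |G|)$, far from the near-linear input-plus-output bound of the paper, but it fixes the input/output format and shows how cancellations can shrink the output.

    from collections import defaultdict

    def sparse_mult(F, G):
        """Multiply sparse polynomials given as {exponent: coefficient} dicts."""
        H = defaultdict(int)
        for e1, c1 in F.items():
            for e2, c2 in G.items():
                H[e1 + e2] += c1 * c2
        return {e: c for e, c in H.items() if c != 0}  # cancellations shrink output

    # (3x^5 - x^2) * (x^5 + x^2) = 3x^10 + 2x^7 - x^4
    print(sparse_mult({5: 3, 2: -1}, {5: 1, 2: 1}))    # {10: 3, 7: 2, 4: -1}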
On Fast Decoding of High-Dimensional Signals from One-Bit Measurements
In the problem of one-bit compressed sensing, the goal is to find a $\delta$-close estimate of a $k$-sparse vector $x \in \mathbb{R}^n$ given the signs of the entries of $y = \Phi x$, where $\Phi$ is called the measurement matrix. For the one-bit compressed sensing problem, previous work [Plan, 2013] [Gopi, 2013] achieved $\Theta(\delta^{-2} k \log(n/k))$ and $\tilde{O}(\delta^{-1} k \log(n/k))$ measurements, respectively, but the decoding time was $\Omega(nk \log(n/k))$. In this paper, using tools and techniques developed in the context of two-stage group testing and streaming algorithms, we contribute towards the direction of sublinear decoding time. We give a variety of schemes for the different versions of one-bit compressed sensing, such as the for-each and for-all versions, and for support recovery. All of these have at most a $\log k$ overhead in the number of measurements and $\mathrm{poly}(k, \log n)$ decoding time, which is an exponential improvement over previous work in terms of the dependence on $n$.
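A toy instance of the bit-testing idea underlying sublinear decoding, in Python: for a 1-sparse signal with a positive coefficient, $\log_2 n$ sign measurements against bit-indicator rows read off the binary representation of the support index. The paper's schemes combine such testers with two-stage group testing to handle general $k$-sparse signals; none of that machinery appears in this sketch.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 256
    bits = n.bit_length() - 1                # log2(n) measurements

    i_star = int(rng.integers(n))
    x = np.zeros(n)
    x[i_star] = 3.7                          # a single positive spike

    # Row j is +1 on indices whose j-th bit is 1, and -1 elsewhere.
    Phi = np.array([[1.0 if (i >> j) & 1 else -1.0 for i in range(n)]
                    for j in range(bits)])
    y = np.sign(Phi @ x)                     # one bit per row

    decoded = sum(1 << j for j in range(bits) if y[j] > 0)
    assert decoded == i_star
    print("recovered index:", decoded)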
Fast n-Fold Boolean Convolution via Additive Combinatorics
We consider the problem of computing the Boolean convolution (with wraparound) of $n$ vectors of dimension $m$, or, equivalently, the problem of computing the sumset $A_1 + A_2 + \ldots + A_n$ for $A_1, \ldots, A_n \subseteq \mathbb{Z}_m$. Boolean convolution formalizes the frequent task of combining two subproblems, where the whole problem has a solution of size $j$ if for some $i$ the first subproblem has a solution of size $i$ and the second subproblem has a solution of size $j - i$. Our problem formalizes a natural generalization, namely combining solutions of $n$ subproblems subject to a modular constraint. This simultaneously generalizes Modular Subset Sum and Boolean Convolution (Sumset Computation). Although nearly optimal algorithms are known for special cases of this problem, not even tiny improvements are known for the general case. We almost resolve the computational complexity of this problem, shaving essentially a factor of $n$ from the running time of previous algorithms. Specifically, we present a deterministic algorithm running in almost linear time with respect to the input plus output size $k$. We also present a Las Vegas algorithm running in nearly linear expected time with respect to the input plus output size $k$. Previously, no deterministic or randomized algorithm with such a running time was known. At the heart of our approach lies a careful usage of Kneser's theorem from Additive Combinatorics, and a new deterministic, almost linear, output-sensitive algorithm for non-negative sparse convolution. In total, our work builds a solid toolbox that could be of independent interest.
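A minimal Python baseline that pins down the object being computed: fold the sets one at a time. In the worst case this costs far more than the paper's near-linear-in-(input plus output) bounds, but it makes the equivalence with $n$-fold Boolean convolution with wraparound concrete.

    def nfold_sumset(sets, m):
        """Compute A_1 + ... + A_n modulo m by folding the sets one at a time."""
        acc = {0}
        for A in sets:
            acc = {(a + b) % m for a in acc for b in A}
        return acc

    # {1,2} + {0,3} + {4} (mod 10) = {5, 6, 8, 9}
    print(sorted(nfold_sumset([{1, 2}, {0, 3}, {4}], m=10)))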
Deterministic Sparse Fourier Transform with an $\ell_\infty$ Guarantee
In this paper we revisit the deterministic version of the Sparse Fourier Transform problem, which asks to read only a few entries of $x \in \mathbb{C}^n$ and design a recovery algorithm such that the output of the algorithm approximates $\hat{x}$, the Discrete Fourier Transform (DFT) of $x$. The randomized case is well understood, while the main work in the deterministic case is that of Merhi et al. (J Fourier Anal Appl 2018), which obtains $O(k^2 \log^{-1} k \cdot \log^{5.5} n)$ samples and a similar runtime with the $\ell_2/\ell_1$ guarantee. We focus on the stronger $\ell_\infty/\ell_1$ guarantee and the closely related problem of incoherent matrices. We list our contributions as follows.
1. We find a deterministic collection of $O(k^2 \log n)$ samples for the $\ell_\infty/\ell_1$ recovery in time $O(kn \log^3 n)$, and a deterministic collection of $O(k^2 \log^2 n)$ samples for the $\ell_\infty/\ell_1$ sparse recovery in time $O(k^2 \log^3 n)$.
2. We give new deterministic constructions of incoherent matrices that are
row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein's
inequality and bounds on exponential sums considered in analytic number theory.
Our first construction matches a previous randomized construction of Nelson,
Nguyen and Woodruff (RANDOM'12), where there was no constraint on the form of
the incoherent matrix.
Our algorithms are nearly sample-optimal, since a lower bound of $\Omega(k^2 + k \log n)$ is known, even for the case where the sensing matrix can be arbitrarily designed. A similar lower bound of $\Omega(k^2 \log n / \log k)$ is known for incoherent matrices.
Comment: ICALP 2020; presentation improved according to reviewers' comments.
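The Python sketch below checks the object in contribution 2 empirically: it row-samples the $n \times n$ DFT matrix and measures the coherence (the largest inner product between distinct normalized columns). Random row sampling, as in the Nelson-Nguyen-Woodruff construction, is used here for simplicity; the paper's point is to choose the rows deterministically via exponential-sum bounds.

    import numpy as np

    rng = np.random.default_rng(3)
    n, q = 512, 100                          # ambient dimension, sampled rows

    rows = rng.choice(n, size=q, replace=False)
    F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n)
    A = F / np.sqrt(q)                       # columns now have unit l2 norm

    G = np.abs(A.conj().T @ A)               # magnitudes of column inner products
    np.fill_diagonal(G, 0)
    print("coherence:", G.max())             # small value -> incoherent matrix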
Deterministic Heavy Hitters with Sublinear Query Time
We study the classic problem of finding $\ell_1$ heavy hitters in the streaming model. In the general turnstile model, we give the first deterministic sublinear-time sketching algorithm which takes a linear sketch of length $O(\epsilon^{-2} \log n \cdot \log^*(\epsilon^{-1}))$, which is only a factor of $\log^*(\epsilon^{-1})$ more than the best existing polynomial-time sketching algorithm (Nelson et al., RANDOM '12). Our approach is based on an iterative procedure, where most unrecovered heavy hitters are identified in each iteration. Although this technique has been extensively employed in the related problem of sparse recovery, this is the first time, to the best of our knowledge, that it has been used in the context of heavy hitters. Along the way we also obtain a sublinear-time algorithm for the closely related problem of $\ell_1/\ell_1$ compressed sensing, matching the space usage of previous (super-)linear-time algorithms. In the strict turnstile model, we show that the runtime can be improved and the sketching matrix can be made strongly explicit with $O(\epsilon^{-2} \log^3 n / \log^3(1/\epsilon))$ rows.
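To fix the problem statement, here is a toy randomized Count-Min sketch for $\ell_1$ heavy hitters in the strict turnstile model, in Python. It is emphatically not the paper's algorithm: Count-Min is randomized where the paper's sketch is deterministic, and the query below scans all $n$ coordinates rather than running in sublinear time.

    import numpy as np

    rng = np.random.default_rng(4)
    n, eps = 10_000, 0.05
    reps, width = 5, int(4 / eps)                # 5 rows of 80 buckets each
    hashes = rng.integers(width, size=(reps, n)) # one (toy) hash function per row

    x = np.zeros(n)
    x[7] = 600.0                                 # a single l1 heavy hitter
    x[rng.choice(n, size=500, replace=False)] += 1.0  # light tail

    sketch = np.zeros((reps, width))
    for r in range(reps):
        np.add.at(sketch[r], hashes[r], x)       # the sketch is linear in x

    # Count-Min estimate: minimum over rows of each coordinate's bucket counter.
    est = np.min(np.stack([sketch[r, hashes[r]] for r in range(reps)]), axis=0)
    heavy = np.flatnonzero(est >= eps * np.abs(x).sum())
    print("reported heavy hitters:", heavy)      # should contain index 7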