On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation
We study classic streaming and sparse recovery problems using deterministic
linear sketches, including l1/l1 and linf/l1 sparse recovery problems (the
latter also being known as l1-heavy hitters), norm estimation, and approximate
inner product. We focus on devising a fixed matrix A in R^{m x n} and a
deterministic recovery/estimation procedure which work for all possible input
vectors simultaneously. Our results improve upon existing work, the following
being our main contributions:
* A proof that linf/l1 sparse recovery and inner product estimation are
equivalent, and that incoherent matrices can be used to solve both problems.
Our upper bound for the number of measurements is m=O(eps^{-2}*min{log n, (log
n / log(1/eps))^2}). We can also obtain fast sketching and recovery algorithms
by making use of the Fast Johnson-Lindenstrauss transform. Both our running
times and number of measurements improve upon previous work. We can also obtain
better error guarantees than previous work in terms of a smaller tail of the
input vector.
* A new lower bound for the number of linear measurements required to solve
l1/l1 sparse recovery. We show Omega(k/eps^2 + klog(n/k)/eps) measurements are
required to recover an x' with |x - x'|_1 <= (1+eps)|x_{tail(k)}|_1, where
x_{tail(k)} is x projected onto all but its largest k coordinates in magnitude.
* A tight bound of m = Theta(eps^{-2}log(eps^2 n)) on the number of
measurements required to solve deterministic norm estimation, i.e., to recover
|x|_2 +/- eps|x|_1.
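As a toy illustration of the inner product estimation problem above, the following sketch (our own code, not the paper's deterministic construction; a random sign matrix is merely incoherent with high probability, so this is a Monte Carlo stand-in) estimates <x, y> using only the two sketches Ax and Ay:

```python
import numpy as np

# Toy Monte Carlo illustration of inner-product estimation from linear
# sketches. The paper uses a fixed incoherent matrix A; here we draw a random
# sign matrix, which is incoherent with high probability.
rng = np.random.default_rng(0)
n, m = 1000, 400  # ambient dimension and sketch length (illustrative sizes)

# Rows scaled by 1/sqrt(m) so that E[<Ax, Ay>] = <x, y>.
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

x = rng.standard_normal(n)
y = rng.standard_normal(n)

estimate = (A @ x) @ (A @ y)  # computed from the sketches alone
exact = x @ y
error = abs(estimate - exact)
print(error <= 0.2 * np.linalg.norm(x) * np.linalg.norm(y))
```

The estimation error scales like |x|_2 |y|_2 / sqrt(m), which is why more measurements (larger m) buy a smaller eps.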
For all the problems we study, tight bounds are already known for the
randomized complexity from previous work, except in the case of l1/l1 sparse
recovery, where a nearly tight bound is known. Our work thus aims to study the
deterministic complexities of these problems.
We remark that some of the matrices used in our algorithms, although known to
exist, are currently not explicit, in the sense that deterministic polynomial
time constructions are not yet known, although in all cases polynomial time
Monte Carlo algorithms are known.
Optimality of the Johnson-Lindenstrauss Lemma
For any integers n, d >= 2 and any eps with 1/min(n,d)^{0.4999} < eps < 1, we
show the existence of a set X of n vectors in R^d such that any embedding
f: X -> R^m satisfying
(1-eps)|x-y|_2^2 <= |f(x)-f(y)|_2^2 <= (1+eps)|x-y|_2^2 for all x, y in X
must have m = Omega(eps^{-2} log n). This lower bound matches the upper bound
given by the Johnson-Lindenstrauss lemma [JL84]. Furthermore, our lower bound
holds for nearly the full range of eps of interest, since there is always an
isometric embedding into dimension min(n, d) (either the identity map, or
projection onto span(X)).
Previously such a lower bound was only known to hold against linear maps f,
and not for such a wide range of parameters eps [LN16]. The best previously
known lower bound for general f was m = Omega(eps^{-2} log(n) / log(1/eps))
[Wel74, Lev83, Alo03], which is suboptimal for any eps = o(1).
Comment: v2: simplified proof, also added reference to Lev83
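The upper-bound side of this tight result can be checked empirically. The sketch below (our own illustration; the constant 24 in the target dimension is an arbitrary choice, not a bound from the paper) embeds n random points with a random Gaussian linear map into m on the order of eps^{-2} log n dimensions and verifies the distortion condition from the abstract:

```python
import numpy as np

# Empirical check of the JL-type guarantee: a random linear map into
# m ~ eps^{-2} log n dimensions preserves all pairwise squared distances up
# to a factor 1 +/- eps. The constant 24 is illustrative, not from the paper.
rng = np.random.default_rng(1)
n, d, eps = 50, 200, 0.5
m = int(24 * np.log(n) / eps ** 2)

X = rng.standard_normal((n, d))               # the point set
G = rng.standard_normal((m, d)) / np.sqrt(m)  # linear map f(x) = Gx
Y = X @ G.T

worst = 0.0  # largest relative distortion over all pairs
for i in range(n):
    for j in range(i + 1, n):
        orig = np.sum((X[i] - X[j]) ** 2)
        emb = np.sum((Y[i] - Y[j]) ** 2)
        worst = max(worst, abs(emb / orig - 1))
print(worst < eps)
```

The lower bound says this dependence of m on eps and n cannot be improved for any embedding, linear or not.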
Time lower bounds for nonadaptive turnstile streaming algorithms
We say a turnstile streaming algorithm is "non-adaptive" if, during updates,
the memory cells written and read depend only on the index being updated and
random coins tossed at the beginning of the stream (and not on the memory
contents of the algorithm). Memory cells read during queries may be decided
upon adaptively. All known turnstile streaming algorithms in the literature are
non-adaptive.
We prove the first non-trivial update time lower bounds for both randomized
and deterministic turnstile streaming algorithms, which hold when the
algorithms are non-adaptive. While there has been abundant success in proving
space lower bounds, there have been no non-trivial update time lower bounds in
the turnstile model. Our lower bounds hold against classically studied problems
such as heavy hitters, point query, entropy estimation, and moment estimation.
For deterministic algorithms, our lower bounds in some cases nearly match
known upper bounds.
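To make the non-adaptivity condition concrete, here is a minimal Count-Min-style point-query sketch in the turnstile model (our own toy code; all names are ours): the cells touched by an update to index i are a function only of i and of hash seeds drawn before the stream begins, never of the current memory contents, exactly as the definition above requires.

```python
import random

# Minimal Count-Min-style point query in the turnstile model. It is
# "non-adaptive": cells(i) depends only on the index i and on the seeds,
# which play the role of coins tossed at the beginning of the stream.
random.seed(2)
n, rows, width = 100, 5, 50
seeds = [random.randrange(1 << 30) for _ in range(rows)]
table = [[0] * width for _ in range(rows)]

def cells(i):
    # A pure function of (i, seeds): this is the non-adaptivity property.
    return [(r, hash((seeds[r], i)) % width) for r in range(rows)]

def update(i, delta):  # turnstile update x[i] += delta (delta may be negative)
    for r, c in cells(i):
        table[r][c] += delta

def point_query(i):  # estimate x[i]; with nonnegative frequencies,
    return min(table[r][c] for r, c in cells(i))  # collisions only inflate it

update(7, 10); update(7, -3); update(42, 5)
print(point_query(7))  # x[7] is 7; the estimate may overshoot on collisions
```

The time lower bounds in the paper apply to precisely this kind of scheme, since the update touches a fixed, input-independent set of cells.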
Tight Bounds for Set Disjointness in the Message Passing Model
In a multiparty message-passing model of communication, there are k
players. Each player has a private input, and they communicate by sending
messages to one another over private channels. While this model has been used
extensively in distributed computing and in multiparty computation, lower
bounds on communication complexity in this model and related models have been
somewhat scarce. In recent work [phillips12, woodruff12, woodruff13], strong
lower bounds of the form Omega(n*k) were obtained for several
functions in the message-passing model; however, a lower bound on the classical
Set Disjointness problem remained elusive.
In this paper, we prove tight lower bounds of the form Omega(n*k)
for the Set Disjointness problem in the message passing model. Our bounds are
obtained by developing information complexity tools in the message-passing
model, and then proving an information complexity lower bound for Set
Disjointness. As a corollary, we show a tight lower bound for the task
allocation problem [DruckerKuhnOshman] via a reduction from Set
Disjointness.
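For intuition, the trivial protocol already uses O(n*k) bits, so a tight Omega(n*k) lower bound means nothing substantially better is possible. The toy example below (inputs and names are our own) sketches that protocol in the coordinator setting:

```python
# Toy illustration of multiparty Set Disjointness in the message-passing
# setting: k players hold sets over a universe of size n and must decide
# whether the sets share a common element. The naive protocol sends each
# player's n-bit characteristic vector to a coordinator, using n*k bits,
# matching the Omega(n*k) lower bound up to constants.
n, k = 16, 4
inputs = [{1, 3, 5, 7}, {3, 4, 5}, {0, 3, 5, 9}, {5, 11}]  # hypothetical sets

messages = [[1 if e in s else 0 for e in range(n)] for s in inputs]
bits_sent = sum(len(msg) for msg in messages)  # n * k = 64 bits

# Coordinator checks whether some element appears in every player's set.
intersecting = any(all(msg[e] for msg in messages) for e in range(n))
print(intersecting, bits_sent)
```

Here element 5 lies in every set, so the instance is not disjoint; the point of the paper is that no protocol can asymptotically beat the 64-bit cost above.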
For-all Sparse Recovery in Near-optimal Time
An approximate sparse recovery system in the l1 norm consists of parameters k,
eps, N, an m-by-N measurement matrix Phi, and a recovery algorithm R. Given a
vector x, the system approximates x by x' = R(Phi x), which must satisfy
|x' - x|_1 <= (1+eps)|x - x_k|_1, where x_k is the best k-term approximation
of x. We consider the 'for all' model, in which a single matrix Phi, possibly
'constructed' non-explicitly using the probabilistic method, is used for all
signals x. The best existing sublinear algorithm by Porat and Strauss
(SODA'12) uses O(eps^{-3} k log(N/k)) measurements and runs in time
O(k^{1-alpha} N^alpha) for any constant alpha > 0. In this paper, we improve
the number of measurements to O(eps^{-2} k log(N/k)), matching the best
existing upper bound (attained by super-linear algorithms), and the runtime
to O(k^{1+beta} poly(log N, 1/eps)), with a modest restriction that
eps <= (log k / log N)^gamma, for any constants beta, gamma > 0. When
k <= log^c N for some constant c > 0, the runtime can be reduced further.
With no restrictions on eps, we also obtain an approximation recovery system,
at the cost of a somewhat larger number of measurements.
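The l1/l1 guarantee can be unpacked with a small numeric check (our own toy code, not the paper's measurement or sublinear-time recovery scheme): x_k keeps the k largest-magnitude entries of x, and any valid output must have l1 error within a (1+eps) factor of the tail mass |x - x_k|_1.

```python
import numpy as np

# Toy check of the for-all l1/l1 recovery guarantee
#     ||xhat - x||_1 <= (1 + eps) * ||x - x_k||_1,
# where x_k is the best k-term approximation of x. This only illustrates the
# guarantee itself, not how the paper's algorithm achieves it.
rng = np.random.default_rng(3)
N, k, eps = 30, 3, 0.1

x = np.zeros(N)
x[[2, 11, 17]] = [10.0, -8.0, 6.0]  # k large "head" entries
x += 0.01 * rng.standard_normal(N)  # plus a small tail

x_k = np.zeros(N)                   # best k-term approximation of x
top = np.argsort(np.abs(x))[-k:]    # indices of the k largest magnitudes
x_k[top] = x[top]

tail = np.linalg.norm(x - x_k, 1)   # ||x - x_k||_1, the benchmark error
xhat = x_k                          # outputting x_k itself always qualifies
print(np.linalg.norm(xhat - x, 1) <= (1 + eps) * tail)
```

The difficulty the paper addresses is doing this from m << N linear measurements of x, for every signal x with one fixed matrix, in time close to O(k) rather than O(N).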
An approximate sparse recovery system in norm consists of parameters , , , an -by- measurement , and a recovery algorithm, . Given a vector, , the system approximates by , which must satisfy . We consider the 'for all' model, in which a single matrix , possibly 'constructed' non-explicitly using the probabilistic method, is used for all signals . The best existing sublinear algorithm by Porat and Strauss (SODA'12) uses measurements and runs in time for any constant . In this paper, we improve the number of measurements to , matching the best existing upper bound (attained by super-linear algorithms), and the runtime to , with a modest restriction that , for any constants . When for some , the runtime is reduced to . With no restrictions on , we have an approximation recovery system with measurements