Extensions to the Method of Multiplicities, with applications to Kakeya Sets and Mergers
We extend the "method of multiplicities" to get the following results, of
interest in combinatorics and randomness extraction. (A) We show that every
Kakeya set (a set of points that contains a line in every direction) in
$\mathbb{F}_q^n$ must be of size at least $q^n/2^n$. This bound is tight to within a
$2 + o(1)$ factor for every $n$ as $q \to \infty$, compared to previous bounds
that were off by exponential factors in $n$. (B) We give improved randomness
extractors and "randomness mergers". Mergers are seeded functions that take as
input $\Lambda$ (possibly correlated) random variables in $\{0,1\}^N$ and a
short random seed, and output a single random variable in $\{0,1\}^N$ that is
statistically close to having entropy $(1-\delta) \cdot N$ when one of the
$\Lambda$ input variables is distributed uniformly. The seed we require is only
$(1/\delta) \cdot \log \Lambda$ bits long, which significantly improves upon
previous constructions of mergers. (C) Using our new mergers, we show how to
construct randomness extractors that use logarithmic-length seeds while
extracting a $1 - o(1)$ fraction of the min-entropy of the source.
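To make the merger interface concrete, here is a minimal Python sketch of a curve-based merger in the spirit of this line of work (the paper's improved analysis concerns a curve merger): the $\Lambda$ inputs are viewed as points in $\mathbb{F}_q^n$, the unique degree-$(\Lambda-1)$ curve through them is interpolated, and the output is that curve evaluated at the seed. The field size, interpolation nodes, and function names below are illustrative assumptions, not the paper's exact parameters.

```python
# Minimal sketch of a curve merger over a prime field F_q (illustrative only;
# the parameter choices are assumptions, not the paper's exact construction).
# The L inputs are viewed as points in F_q^n; we pass the degree-(L-1) curve
# through them and output the curve evaluated at the seed t.

def curve_merger(points, t, q):
    """points: list of L vectors over F_q (each a list of n ints); t: seed in F_q."""
    L, n = len(points), len(points[0])
    anchors = list(range(L))          # distinct interpolation nodes a_i in F_q
    out = [0] * n
    for i, p in enumerate(points):
        # Lagrange coefficient L_i(t) = prod_{j != i} (t - a_j) / (a_i - a_j)
        num, den = 1, 1
        for j in range(L):
            if j != i:
                num = num * (t - anchors[j]) % q
                den = den * (anchors[i] - anchors[j]) % q
        coeff = num * pow(den, q - 2, q) % q   # division via Fermat inverse (q prime)
        for k in range(n):
            out[k] = (out[k] + coeff * p[k]) % q
    return out

# Example: merge 3 (possibly correlated) points in F_101^4 with seed t = 7.
print(curve_merger([[1, 2, 3, 4], [5, 6, 7, 8], [9, 9, 9, 9]], 7, 101))
```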
The "method of multiplicities", as used in prior work, analyzed subsets of
vector spaces over finite fields by constructing somewhat low degree
interpolating polynomials that vanish on every point in the subset {\em with
high multiplicity}. The typical use of this method involved showing that the
interpolating polynomial also vanished on some points outside the subset, and
then using simple bounds on the number of zeroes to complete the analysis. Our
augmentation to this technique is that we prove, under appropriate conditions,
that the interpolating polynomial vanishes {\em with high multiplicity} outside
the set. This novelty leads to significantly tighter analyses.
Comment: 26 pages, now includes extractors with sublinear entropy loss
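To illustrate the notion of vanishing {\em with high multiplicity} used above: a polynomial vanishes at a point with multiplicity at least $m$ iff all its Hasse derivatives of order below $m$ vanish there. The following Python sketch computes this multiplicity for a univariate polynomial over a prime field; the paper's arguments concern multivariate polynomials, so this univariate toy (and every name in it) is an illustrative assumption only.

```python
# Toy illustration of "vanishing with multiplicity" over a prime field F_q.
# A polynomial vanishes at a with multiplicity >= m iff its Hasse derivatives
# of all orders < m vanish at a. (The paper's setting is multivariate; this
# univariate version only illustrates the definition.)

from math import comb

def hasse_derivative(coeffs, order, q):
    """coeffs[i] is the coefficient of x^i; returns the order-th Hasse derivative."""
    return [comb(i, order) * c % q for i, c in enumerate(coeffs[order:], start=order)]

def evaluate(coeffs, a, q):
    return sum(c * pow(a, i, q) for i, c in enumerate(coeffs)) % q

def vanishing_multiplicity(coeffs, a, q):
    m = 0
    while m < len(coeffs) and evaluate(hasse_derivative(coeffs, m, q), a, q) == 0:
        m += 1
    return m

# (x - 3)^2 * (x - 5) = x^3 - 11x^2 + 39x - 45 has multiplicity 2 at x = 3 over F_7.
coeffs = [(-45) % 7, 39 % 7, (-11) % 7, 1]
print(vanishing_multiplicity(coeffs, 3, 7))   # -> 2
```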
Better lossless condensers through derandomized curve samplers
Lossless condensers are unbalanced expander graphs, with expansion close to optimal. Equivalently, they may be viewed as functions that use a short random seed to map a source on n bits to a source on many fewer bits while preserving all of the min-entropy. It is known how to build lossless condensers when the graphs are slightly unbalanced, from the work of M. Capalbo et al. (2002). The highly unbalanced case is also important, but the only known construction does not condense the source well. We give explicit constructions of lossless condensers with condensing close to optimal, and using near-optimal seed length.
Our main technical contribution is a randomness-efficient method for sampling $F^D$ (where $F$ is a field) with low-degree curves. This problem was addressed before in the works of E. Ben-Sasson et al. (2003) and D. Moshkovitz and R. Raz (2006), but the solutions apply only to degree-one curves, i.e., lines. Our technique is new and elegant. We use sub-sampling and obtain our curve samplers by composing a sequence of low-degree manifolds, starting with high-dimension, low-degree manifolds and proceeding through lower and lower dimension manifolds with (moderately) growing degrees, until we finish with dimension-one, low-degree manifolds, i.e., curves. The technique may be of independent interest.
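For contrast with the derandomized samplers above, here is a toy Python sketch of the fully random degree-$t$ curve sampler: choosing $t+1$ random coefficient vectors costs $(t+1) \cdot D \cdot \log q$ random bits, which is exactly the kind of cost the manifold-composition technique reduces. All names and parameters below are illustrative assumptions.

```python
import random

# Toy curve sampler for F_q^D (illustrative only): pick a random degree-t curve
# by choosing t+1 coefficient vectors, then output its q evaluation points.
# The paper's sampler is far more seed-efficient: it composes manifolds of
# decreasing dimension and (moderately) growing degree to save randomness.

def random_curve_sample(q, D, t, rng=random):
    # c[j] in F_q^D; the curve is C(s) = sum_j c[j] * s^j, one point per s in F_q.
    c = [[rng.randrange(q) for _ in range(D)] for _ in range(t + 1)]
    def point(s):
        return [sum(c[j][k] * pow(s, j, q) for j in range(t + 1)) % q
                for k in range(D)]
    return [point(s) for s in range(q)]

sample = random_curve_sample(q=13, D=4, t=2)
print(len(sample), sample[0])   # 13 points in F_13^4 lying on a degree-2 curve
```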
Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes
We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma et al. [2007] required at least one of these to be quasipolynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy [2005].
Our expanders can be interpreted as near-optimal "randomness condensers" that reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, which is a much easier task. Using this connection, we obtain a new, self-contained construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. [2003] and improving upon it when the error parameter is small (e.g., 1/poly(n)).
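To give a flavor of the construction, here is a hedged Python sketch of the Parvaresh-Vardy/GUV-style condenser map over a prime field: the source is read as a low-degree polynomial $f$, and the neighbor of $f$ under seed $y$ is $(y, f_0(y), \dots, f_{m-1}(y))$ with $f_i = f^{h^i} \bmod E$ for an irreducible $E$. The toy parameters below are our assumptions and are far too small to give meaningful condensing.

```python
# Sketch of the Parvaresh-Vardy / GUV style condenser map over a prime field
# F_q (illustrative parameters; the actual constructions choose q, h, m and the
# irreducible E(Y) carefully to obtain lossless condensing).
#
# The source is read as a polynomial f of degree < n over F_q; the output on
# seed y is (y, f_0(y), ..., f_{m-1}(y)) where f_i = f^(h^i) mod E.

def poly_mulmod(a, b, E, q):
    """Multiply polynomials a, b (coefficient lists, low degree first) mod monic E over F_q."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % q
    n = len(E) - 1                       # reduce modulo monic E of degree n
    for i in range(len(res) - 1, n - 1, -1):
        c = res[i]
        if c:
            for j in range(n + 1):
                res[i - n + j] = (res[i - n + j] - c * E[j]) % q
    return res[:n]

def poly_powmod(f, e, E, q):
    result, base = [1], f                # square-and-multiply exponentiation
    while e:
        if e & 1:
            result = poly_mulmod(result, base, E, q)
        base = poly_mulmod(base, base, E, q)
        e >>= 1
    return result

def evaluate(f, y, q):
    return sum(c * pow(y, i, q) for i, c in enumerate(f)) % q

def guv_condense(f, y, E, q, h, m):
    return [y] + [evaluate(poly_powmod(f, h ** i, E, q), y, q) for i in range(m)]

# Toy parameters: q = 7, E(Y) = Y^2 + 1 (irreducible over F_7), h = 2, m = 3.
print(guv_condense(f=[3, 5], y=4, E=[1, 0, 1], q=7, h=2, m=3))
```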
Unbalanced Expanders from Multiplicity Codes
In 2007 Guruswami, Umans and Vadhan gave an explicit construction of a lossless condenser based on Parvaresh-Vardy codes. This lossless condenser is a basic building block in many constructions and, in particular, is behind the state-of-the-art extractor constructions.
We give an alternative construction that is based on Multiplicity codes. While the bottom-line result is similar to the GUV result, the analysis is very different. In GUV (and Parvaresh-Vardy codes), the polynomial ring is reduced modulo an irreducible polynomial to obtain a finite field, and every polynomial is associated with related elements of that finite field. In our construction, a polynomial from the polynomial ring is associated with its iterated derivatives. Our analysis boils down to solving a differential equation over a finite field, and it uses previous techniques introduced by Kopparty (in [Swastik Kopparty, 2015]) for the list-decoding setting. We also observe that these (and more general) questions were studied in differential algebra, and we use the terminology and results developed there.
We believe these techniques have the potential to yield better constructions and to resolve the current bottlenecks in the area.
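In symbols (our hedged paraphrase, with $s$ output coordinates per seed, an irreducible modulus $E$, and $f^{(i)}$ denoting the $i$-th iterated Hasse derivative), the two maps behind the condensers are:
GUV / Parvaresh-Vardy: $f \longmapsto \bigl(f \bmod E,\ f^{h} \bmod E,\ \dots,\ f^{h^{s-1}} \bmod E\bigr)$, all evaluated at the seed $y$;
Multiplicity codes: $f \longmapsto \bigl(f,\ f^{(1)},\ \dots,\ f^{(s-1)}\bigr)$, again evaluated at the seed $y$.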
Noise-Resilient Group Testing: Limitations and Constructions
We study combinatorial group testing schemes for learning $d$-sparse Boolean
vectors using highly unreliable disjunctive measurements. We consider an
adversarial noise model that only limits the number of false observations, and
show that any noise-resilient scheme in this model can only approximately
reconstruct the sparse vector. On the positive side, we take this barrier to
our advantage and show that approximate reconstruction (within a satisfactory
degree of approximation) allows us to break the information-theoretic lower
bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of
$d$-sparse vectors of length $n$ via non-adaptive measurements, by a
multiplicative factor $\tilde{\Omega}(d)$.
Specifically, we give simple randomized constructions of non-adaptive
measurement schemes, with $m = O(d \log n)$ measurements, that allow efficient
reconstruction of $d$-sparse vectors up to $O(d)$ false positives even in the
presence of $\delta m$ false positives and $O(m/d)$ false negatives within the
measurement outcomes, for any constant $\delta < 1$. We show that, information
theoretically, none of these parameters can be substantially improved without
dramatically affecting the others. Furthermore, we obtain several explicit
constructions, in particular one matching the randomized trade-off but using
$m = O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions
that allow fast reconstruction in time $\mathrm{poly}(m)$, which would be sublinear in
$n$ for sufficiently sparse vectors. The main tool used in our construction is
the list-decoding view of randomness condensers and extractors.
Comment: Full version. A preliminary summary of this work appears (under the
same title) in proceedings of the 17th International Symposium on
Fundamentals of Computation Theory (FCT 2009).
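As a toy illustration of noise-resilient non-adaptive group testing (not the paper's condenser-based construction), the following Python sketch uses a random Bernoulli test design and a simple threshold decoder that tolerates flipped outcomes at the cost of a few reconstruction errors; all parameters, thresholds, and names are illustrative assumptions.

```python
import random

# Toy sketch of noise-resilient non-adaptive group testing (illustrative only:
# random Bernoulli design plus a threshold decoder, not the paper's scheme).
# Each of the m tests ORs a random subset of items; the decoder keeps every
# item that appears "negative" in only a small fraction of its tests.

def run_group_testing(n=500, d=5, m=600, flip=0.05, seed=0):
    rng = random.Random(seed)
    support = set(rng.sample(range(n), d))                 # hidden d-sparse vector
    tests = [[i for i in range(n) if rng.random() < 1.0 / d] for _ in range(m)]
    outcomes = [any(i in support for i in t) for t in tests]
    noisy = [o ^ (rng.random() < flip) for o in outcomes]  # adversary modeled as random flips

    # Decoder: report an item present unless it lies in "too many" negative tests.
    votes = {i: [0, 0] for i in range(n)}                  # [tests containing i, negatives among them]
    for t, o in zip(tests, noisy):
        for i in t:
            votes[i][0] += 1
            votes[i][1] += not o
    estimate = {i for i, (tot, neg) in votes.items() if tot > 0 and neg / tot < 0.2}

    false_pos = len(estimate - support)
    false_neg = len(support - estimate)
    return false_pos, false_neg

print(run_group_testing())  # typically a small number of errors despite the noise
```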
Algebraic and Combinatorial Methods in Computational Complexity
At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The PCP characterization of NP and the Agrawal-Kayal-Saxena polynomial-time primality test are two prominent examples.
Recently, there have been some works going in the opposite direction, giving alternative combinatorial proofs for results that were originally proved algebraically. These alternative proofs can yield important improvements because they are closer to the underlying problems and avoid the losses in passing to the algebraic setting. A prominent example is Dinur's proof of the PCP Theorem via gap amplification, which yielded short PCPs with only a polylogarithmic length blowup (which had been the focus of significant research effort up to that point).
We see here (and in a number of recent works) an exciting interplay between algebraic and combinatorial techniques. This seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic and combinatorial methods in a variety of settings.
Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard Transform
For every fixed constant , we design an algorithm for computing
the -sparse Walsh-Hadamard transform of an -dimensional vector in time . Specifically, the
algorithm is given query access to and computes a -sparse satisfying , for an absolute constant , where is the
transform of and is its best -sparse approximation. Our
algorithm is fully deterministic and only uses non-adaptive queries to
(i.e., all queries are determined and performed in parallel when the algorithm
starts).
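For reference, here is a short Python sketch of the objects appearing in this guarantee: the full (time $O(N \log N)$) Walsh-Hadamard transform and the best $k$-sparse approximation $H_k$. This is emphatically not the sublinear-time algorithm of the paper, and the unnormalized transform convention is our assumption.

```python
# Reference implementation of the objects in the guarantee above (not the
# sublinear-time algorithm): the full Walsh-Hadamard transform and the best
# k-sparse approximation H_k. Normalization and names are our assumptions.

def walsh_hadamard(x):
    """Fast WHT (unnormalized), len(x) a power of two."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

def best_k_sparse(v, k):
    """H_k(v): keep the k largest-magnitude entries, zero out the rest."""
    keep = sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:k]
    out = [0.0] * len(v)
    for i in keep:
        out[i] = v[i]
    return out

x = [1.0, 0.0, 2.0, 0.0, 0.0, 0.0, 1.0, 0.0]
xhat = walsh_hadamard(x)
xk = best_k_sparse(xhat, k=2)
err = sum(abs(a - b) for a, b in zip(xhat, xk))   # ||xhat - H_k(xhat)||_1
print(xhat, xk, err)
```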
An important technical tool that we use is a construction of nearly optimal
and linear lossless condensers, which is a careful instantiation of the GUV
condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a
deterministic and non-adaptive $\ell_1/\ell_1$ compressed sensing scheme based
on general lossless condensers that is equipped with a fast reconstruction
algorithm running in time $k^{1+\alpha} (\log N)^{O(1)}$ (for the GUV-based
condenser) and is of independent interest. Our scheme significantly simplifies
and improves an earlier expander-based construction due to Berinde, Gilbert,
Indyk, Karloff, Strauss (Allerton 2008).
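To sketch how a lossless condenser yields a binary sensing matrix (our schematic reading of the expander/condenser-based approach, with a placeholder hash standing in for a real condenser and the decoder omitted): each item's column gets a 1 in row $(\text{seed}, C(\text{item}, \text{seed}))$ for every seed, so each measurement sums the signal over a row's neighborhood.

```python
# Sketch: turning a condenser/expander into a binary sensing matrix
# (illustrative; any lossless condenser C: [N] x [D] -> [M] can be plugged in).
# The column of item i has a 1 in row (seed, C(i, seed)) for every seed, so
# each column has exactly D ones, and y = A x sums x over row neighborhoods.

def toy_condenser(i, seed, M):
    # Placeholder for a real lossless condenser (this hash is NOT one).
    return (i * (2 * seed + 1) + seed * seed) % M

def sensing_matrix(N, D, M, cond=toy_condenser):
    A = [[0] * N for _ in range(D * M)]
    for i in range(N):
        for seed in range(D):
            A[seed * M + cond(i, seed, M)][i] = 1
    return A

A = sensing_matrix(N=16, D=3, M=8)
x = [0.0] * 16
x[3], x[11] = 2.0, -1.0                          # a 2-sparse signal
y = [sum(a * b for a, b in zip(row, x)) for row in A]
print(len(y), y[:8])                             # 24 measurements; first block shown
```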
Our methods use linear lossless condensers in a black-box fashion; therefore,
any future improvement on explicit constructions of such condensers would
immediately translate to improved parameters in our framework (potentially
leading to a reconstruction time of $k (\log N)^{O(1)}$ with a reduced exponent
in the poly-logarithmic factor, and eliminating the extra parameter $\alpha$).
Finally, by allowing the algorithm to use randomness, while still using
non-adaptive queries, the running time of the algorithm can be improved to
$\tilde{O}(k \log^3 N)$.