Deterministic Extractors for Additive Sources
We propose a new model of a weakly random source that admits randomness
extraction. Our model of additive sources includes such natural sources as
uniform distributions on arithmetic progressions (APs), generalized arithmetic
progressions (GAPs), and Bohr sets, each of which generalizes affine sources.
We give an explicit extractor for additive sources with linear min-entropy over
both and , for large prime , although our
results over require that the source further satisfy a
list-decodability condition. As a corollary, we obtain explicit extractors for
APs, GAPs, and Bohr sources with linear min-entropy, although again our results
over require the list-decodability condition. We further
explore special cases of additive sources. We improve previous constructions of
line sources (affine sources of dimension 1), requiring a field of size linear
in , rather than by Gabizon and Raz. This beats the
non-explicit bound of obtained by the probabilistic method.
We then generalize this result to APs and GAPs.
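As a toy illustration of the source model (not taken from the paper), the following Python sketch builds the support of an arithmetic progression (AP) and a generalized arithmetic progression (GAP) modulo N, and reads off the min-entropy of the flat distribution on that support; all parameter values here are illustrative.

```python
import math

def ap_source(a, d, k, N):
    """Support of a uniform source on the arithmetic progression
    {a, a+d, ..., a+(k-1)d} mod N."""
    return {(a + i * d) % N for i in range(k)}

def gap_source(a, steps, dims, N):
    """Support of a generalized AP:
    {a + i1*d1 + ... + ir*dr  mod N : 0 <= ij < kj}."""
    pts = {a % N}
    for d, k in zip(steps, dims):
        pts = {(p + i * d) % N for p in pts for i in range(k)}
    return pts

def min_entropy_of_uniform(support):
    # A flat (uniform) source on a set S has min-entropy log2(|S|).
    return math.log2(len(support))

N = 2**16
S = ap_source(a=7, d=13, k=2**8, N=N)
G = gap_source(a=0, steps=[1, 1000], dims=[10, 10], N=N)
print(len(S), min_entropy_of_uniform(S))  # 256 points -> 8 bits of min-entropy
print(len(G), min_entropy_of_uniform(G))
```

"Linear min-entropy" in the abstract means log2(|S|) is a constant fraction of the bit length log2(N) of a sample.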
On Randomness Extraction in AC0
We consider randomness extraction by AC0 circuits. The main parameter, n, is the length of the source, and all other parameters are functions of it. The additional extraction parameters are the min-entropy bound k=k(n), the seed length r=r(n), the output length m=m(n), and the (output) deviation bound epsilon=epsilon(n).
For k <= n/log^(omega(1))(n), we show that AC0-extraction of r+1 bits (i.e., m = r+1) is possible if and only if k * r > n/poly(log(n)). For k >= n/log^(O(1))(n),
we show that AC0-extraction of r+Omega(r) bits is possible when r=O(log(n)), but leave open the question of whether more bits can be extracted in this case.
The impossibility result is for constant epsilon, and the possibility result supports epsilon=1/poly(n). The impossibility result is for (possibly) non-uniform AC0, whereas the possibility result holds for uniform AC0. All our impossibility results hold even for the model of bit-fixing sources, where k coincides with the number of non-fixed (i.e., random) bits.
We also consider deterministic AC0 extraction from various classes of restricted sources. In particular, for any constant delta > 0, we give explicit AC0 extractors for poly(1/delta) independent sources that are each of min-entropy rate delta; and four sources suffice for delta=0.99. Also, we give non-explicit AC0 extractors for bit-fixing sources of entropy rate 1/poly(log(n)) (i.e., having n/poly(log(n)) unfixed bits). This shows that the known analysis of the "restriction method" (for making a circuit constant by fixing as few variables as possible) is tight for AC0, even if the restriction is picked deterministically depending on the circuit.
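The tension the impossibility results formalize can be seen in a small sketch (illustrative, not from the paper): for a bit-fixing source, the XOR of all bits is a perfect one-bit deterministic extractor whenever at least one bit is free, yet parity is famously not computable in AC0.

```python
import random

def sample_bit_fixing(n, free_positions, fixed_value=0):
    """Bit-fixing source: bits outside free_positions are fixed to a
    constant; the k bits in free_positions are uniform and independent."""
    x = [fixed_value] * n
    for i in free_positions:
        x[i] = random.randint(0, 1)
    return x

def parity_extractor(x):
    # XOR of all bits: unbiased on any bit-fixing source with k >= 1,
    # but parity is provably not computable by AC0 circuits.
    out = 0
    for b in x:
        out ^= b
    return out

random.seed(0)
n, free = 64, [3, 17, 42]   # k = 3 random bits out of 64
samples = [parity_extractor(sample_bit_fixing(n, free)) for _ in range(10000)]
print(sum(samples) / len(samples))  # empirically close to 0.5
```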
Deterministic Coupon Collection and Better Strong Dispersers
Hashing is one of the main techniques in data processing and algorithm design for very large data sets. While random hash functions satisfy most desirable properties, it is often too expensive to store a fully random hash function. Motivated by this, much attention has been given to designing small families of hash functions suitable for various applications. In this work, we study the question of designing space-efficient hash families H = {h:[U] -> [N]} with the natural property of 'covering': H is said to be covering if any set of Omega(N log N) distinct items from the universe (the "coupon-collector limit") is hashed onto all N bins by most hash functions in H. We give an explicit covering family H of size poly(N) (which is optimal), so that hash functions in H can be specified efficiently by O(log N) bits.
We build covering hash functions by drawing a connection to "dispersers", which are quite well studied and have a variety of applications themselves. We in fact need strong dispersers, and we give new constructions of strong dispersers which may be of independent interest. Specifically, we construct strong dispersers with optimal entropy loss in the high min-entropy regime with very small error (poly(n)/2^n for n-bit sources). We also provide a strong disperser construction with constant error that works for any min-entropy. Our constructions achieve these parameters by using part of the source to replace the seed of previous non-strong constructions in surprising ways. In doing so, we take two of the few constructions of dispersers with parameters better than known extractors and make them strong.
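The coupon-collector limit mentioned above is easy to check empirically for a truly random hash function: throwing c * N log N uniformly random items into N bins covers every bin with probability at least 1 - N^(1-c). A small sketch (parameters illustrative):

```python
import math
import random

def covers(num_items, N, rng):
    """Throw num_items uniformly random items into N bins (modeling a
    fully random hash function) and report whether every bin is hit."""
    hit = set()
    for _ in range(num_items):
        hit.add(rng.randrange(N))
        if len(hit) == N:
            return True
    return False

rng = random.Random(1)
N = 100
budget = int(3 * N * math.log(N))          # c * N log N items, c = 3
trials = [covers(budget, N, rng) for _ in range(200)]
print(sum(trials) / len(trials))           # fraction of runs covering all bins
```

With c = 3 the failure probability per run is at most N * e^(-3 ln N) = 1/N^2, so essentially every run covers all bins; the point of the paper is achieving this with a poly(N)-size explicit family rather than fully random hashing.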
Randomness Extraction in AC0 and with Small Locality
Randomness extractors, which extract high quality (almost-uniform) random
bits from biased random sources, are important objects both in theory and in
practice. While there has been significant progress in obtaining near-optimal
constructions of randomness extractors in various settings, the computational
complexity of randomness extractors is still much less studied. In particular,
it is not clear whether randomness extractors with good parameters can be
computed in several interesting complexity classes that are much weaker than P.
In this paper we study randomness extractors in the following two models of
computation: (1) constant-depth circuits (AC0), and (2) the local computation
model. Previous work in these models, such as [Vio05a], [GVW15] and [BG13],
only achieve constructions with weak parameters. In this work we give explicit
constructions of randomness extractors with much better parameters. As an
application, we use our AC0 extractors to study pseudorandom generators in AC0,
and show that we can construct both cryptographic pseudorandom generators
(under reasonable computational assumptions) and unconditional pseudorandom
generators for space bounded computation with very good parameters.
Our constructions combine several previous techniques in randomness
extractors, as well as introduce new techniques to reduce or preserve the
complexity of extractors, which may be of independent interest. These include
(1) a general way to reduce the error of strong seeded extractors while
preserving the AC0 property and small locality, and (2) a seeded randomness
condenser with small locality.
Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes
We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma et al. [2007] required at least one of these to be quasipolynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy [2005].
Our expanders can be interpreted as near-optimal “randomness condensers,” that reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, which is a much easier task. Using this connection, we obtain a new, self-contained construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. [2003] and improving upon it when the error parameter is small (e.g., 1/poly(n)).
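As a toy illustration of the condenser interface only (this is not the Parvaresh-Vardy-based construction, which outputs several correlated evaluations and carries an actual min-entropy guarantee): view the weak source as the coefficient vector of a low-degree polynomial f over a small prime field and, given a short uniform seed y, output the much shorter pair (y, f(y)).

```python
import random

p = 257  # a small prime field F_p; real constructions use much larger fields

def poly_eval(coeffs, y, p):
    """Evaluate f(y) = sum_i coeffs[i] * y^i over F_p (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * y + c) % p
    return acc

def condense(source_symbols, seed, p):
    """Toy condenser interface: interpret the source as the coefficients
    of a polynomial f over F_p and output the pair (seed, f(seed)).
    Sketch of the input/output shape only, with no guarantee claimed."""
    return (seed, poly_eval(source_symbols, seed, p))

random.seed(2)
x = [random.randrange(p) for _ in range(20)]  # 20 field symbols of weak randomness
y = random.randrange(p)                        # short uniform seed
print(condense(x, y, p))                       # just 2 field symbols out
```

The paper's point is that such seeded compression can be done so that the output's min-entropy *rate* is close to 1 whenever the input has any noticeable min-entropy, after which extraction is easy.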
Algebraic and analytic techniques in coding theory
Error correcting codes are designed to tackle the problem of reliable transmission of data through noisy channels. A major challenge in coding theory is to efficiently recover the original message even when many symbols of the received data have been corrupted. This is called the unique decoding problem of error correcting codes. More precisely, if the user wants to send K bits, the code stretches the K bits to N bits in order to tolerate errors among the N bits. The goal is then to recover the original K bits of the message.
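The encode/decode picture above can be made concrete with a tiny Reed-Solomon code (an illustrative sketch with toy parameters; the decoder is brute force rather than an efficient algorithm):

```python
from itertools import product

p, K, N = 7, 2, 5          # message length K, codeword length N, over F_7
points = list(range(N))    # N distinct evaluation points in the field

def encode(msg):
    """Reed-Solomon: the message is the coefficient vector of a
    degree-<K polynomial; the codeword is its evaluations at N points."""
    return [sum(c * x**i for i, c in enumerate(msg)) % p for x in points]

def unique_decode(received):
    """Brute-force unique decoding (fine at toy sizes): return the message
    whose codeword lies within the unique-decoding radius t = (N-K)//2 of
    the received word; it is unique because the code has distance N-K+1."""
    t = (N - K) // 2
    for msg in product(range(p), repeat=K):
        dist = sum(a != b for a, b in zip(encode(msg), received))
        if dist <= t:
            return list(msg)
    return None

msg = [3, 5]
word = encode(msg)
word[1] = (word[1] + 1) % p          # corrupt one of the N symbols
print(unique_decode(word))           # recovers [3, 5]
```

Here t = 1 error is correctable; list decoding, discussed below, asks what survives beyond this radius.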
Often, the receiver requires only a certain part of the message. In such cases, analyzing the entire received data (word) becomes prohibitive. The challenge is to design a local decoder which queries only a few locations of the received word and outputs the part of the message required. This is known as local decoding of an error correcting code.
The unique decoding problem faces a certain combinatorial barrier. That is, there is a limit to the number of errors it can tolerate in order to uniquely identify the correct message. This is called the unique decoding radius. A major open problem is to understand what happens if one allows for errors beyond this threshold. The goal is to design an algorithm that can recover the right message, or possibly a list of messages (preferably a small number). This is referred to as list decoding of an error correcting code.
At the core of many such codes lie polynomials. Polynomials play a fundamental role in computer science, with important applications in algorithm design, complexity theory, pseudorandomness, and machine learning.
In this dissertation, we improve our understanding of well-known classes of codes and discover various properties of polynomials. As an additional consequence, we obtain results on a suite of problems in effective algebraic geometry, including Hilbert's Nullstellensatz, the ideal membership problem, and counting rational points in a variety.
A Quantum Random Number Generator Certified by Value Indefiniteness
In this paper we propose a quantum random number generator (QRNG) which
utilizes an entangled photon pair in a Bell singlet state, and is certified
explicitly by value indefiniteness. While "true randomness" is a mathematical
impossibility, the certification by value indefiniteness ensures the quantum
random bits are incomputable in the strongest sense. This is the first QRNG
setup in which a physical principle (Kochen-Specker value indefiniteness)
guarantees that no single quantum bit produced can be classically computed
(reproduced and validated), the mathematical form of bitwise physical
unpredictability. The effects of various experimental imperfections are
discussed in detail, particularly those related to detector efficiencies,
context alignment and temporal correlations between bits. The analysis is to a
large extent relevant for the construction of any QRNG based on beam-splitters.
By measuring the two entangled photons in maximally misaligned contexts and
utilizing the fact that two rather than one bitstring are obtained, more
efficient and robust unbiasing techniques can be applied. A robust and
efficient procedure based on XORing the bitstrings together---essentially using
one as a one-time-pad for the other---is proposed to extract random bits in the
presence of experimental imperfections, as well as a more efficient
modification of the von Neumann procedure for the same task. Some open problems
are also discussed.
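The two post-processing steps described above can be sketched in a few lines (an illustrative reconstruction, not the authors' code; the von Neumann step assumes independent, identically biased bits, an assumption the paper notes is threatened by temporal correlations):

```python
import random

def xor_combine(bits_a, bits_b):
    """XOR the two measured bitstrings together, essentially using one
    as a one-time pad for the other."""
    return [a ^ b for a, b in zip(bits_a, bits_b)]

def von_neumann(bits):
    """Classic von Neumann debiasing: read non-overlapping pairs,
    output 1 for '10', 0 for '01', and discard '00' and '11'."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

rng = random.Random(3)
biased = [1 if rng.random() < 0.7 else 0 for _ in range(100000)]  # 70% ones
debiased = von_neumann(biased)
print(sum(debiased) / len(debiased))   # close to 0.5 despite the 70% bias
```

The price of von Neumann debiasing is throughput: for bias q, only 2q(1-q) of the pairs produce an output bit, which motivates the more efficient variants the paper proposes.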
Algebraic methods in randomness and pseudorandomness
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 183-188).

Algebra and randomness come together rather nicely in computation. A central example of this relationship in action is the Schwartz-Zippel lemma and its application to the fast randomized checking of polynomial identities. In this thesis, we further this relationship in two ways: (1) by compiling new algebraic techniques that are of potential computational interest, and (2) by demonstrating the relevance of these techniques by making progress on several questions in randomness and pseudorandomness.

The technical ingredients we introduce include:
* Multiplicity-enhanced versions of the Schwartz-Zippel lemma and the "polynomial method", extending their applicability to "higher-degree" polynomials.
* Conditions for polynomials to have an unusually small number of roots.
* Conditions for polynomials to have an unusually structured set of roots, e.g., containing a large linear space.

Our applications include:
* Explicit constructions of randomness extractors with logarithmic seed and vanishing "entropy loss".
* Limit laws for first-order logic augmented with the parity quantifier on random graphs (extending the classical 0-1 law).
* Explicit dispersers for affine sources of imperfect randomness with sublinear entropy.

by Swastik Kopparty. Ph.D.
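The Schwartz-Zippel application mentioned in the abstract can be sketched directly (an illustrative example, not from the thesis): to test whether two polynomials agree identically, evaluate both at random points; if they differ as polynomials of total degree at most d, a uniformly random point from a set of size S exposes the difference except with probability d/S per trial.

```python
import random

def schwartz_zippel_test(f, g, num_vars, degree_bound, field_size,
                         trials=20, seed=4):
    """Randomized polynomial identity test. Returns False as soon as a
    random evaluation separates f and g (a definite non-identity);
    returns True after all trials pass (an identity with probability
    at least 1 - (degree_bound / field_size)**trials)."""
    rng = random.Random(seed)
    for _ in range(trials):
        point = [rng.randrange(field_size) for _ in range(num_vars)]
        if f(*point) != g(*point):
            return False
    return True

def lhs(x, y): return (x + y) ** 2
def rhs(x, y): return x * x + 2 * x * y + y * y   # a true identity
def bad(x, y): return x * x + y * y               # not an identity

print(schwartz_zippel_test(lhs, rhs, 2, 2, 10**9))   # True
print(schwartz_zippel_test(lhs, bad, 2, 2, 10**9))   # False
```

The multiplicity-enhanced versions developed in the thesis sharpen exactly the d/S bound this test relies on.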