Two-Source Condensers with Low Error and Small Entropy Gap via Entropy-Resilient Functions
In their seminal work, Chattopadhyay and Zuckerman (STOC '16) constructed a two-source extractor with error epsilon for n-bit sources having min-entropy polylog(n/epsilon). Unfortunately, the construction's running time is poly(n/epsilon), which means that polynomial-time constructions can only achieve polynomially small error. Our main result is a poly(n, log(1/epsilon))-time computable two-source condenser. For any k >= polylog(n/epsilon), our condenser transforms two independent (n,k)-sources into a distribution over m = k - O(log(1/epsilon)) bits that is epsilon-close to having min-entropy m - o(log(1/epsilon)), hence achieving an entropy gap of o(log(1/epsilon)).
The bottleneck for obtaining low error in recent constructions of two-source extractors lies in the use of resilient functions. Informally, such a function receives input bits from r players, with the property that its output has small bias even if a bounded number of corrupted players feed it adversarial inputs after seeing the inputs of the other players. The drawback of using resilient functions is that the error cannot be smaller than (ln r)/r. This, in turn, forces the running time of the construction to be polynomial in 1/epsilon.
A key component in our construction is a variant of resilient functions which we call entropy-resilient functions. This variant can be seen as playing the above game for several rounds, each round outputting one bit. The goal of the corrupted players is to reduce, with as high probability as they can, the min-entropy accumulated throughout the rounds. We show that while the bias decreases only polynomially with the number of players in a one-round game, the players' success probability decreases exponentially in the entropy gap they attempt to incur in a repeated game.
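The one-round game described above can be made concrete with a small Monte Carlo sketch. The code below (illustrative only; the function names and parameters are my own, and majority stands in for a generic resilient function) estimates the bias that q corrupted players, voting last, can force on the majority vote of r players:

```python
import random

def majority_bias(r, q, trials, seed=0):
    """Estimate the bias that an adversary controlling q of r players can
    force on the majority function: honest players flip fair coins, and
    the corrupted players, seeing them, all vote for outcome 1."""
    rng = random.Random(seed)
    ones = 0
    for _ in range(trials):
        honest = sum(rng.randint(0, 1) for _ in range(r - q))
        total = honest + q          # corrupted players push the tally toward 1
        if total > r // 2:          # majority of the r votes is 1 (r odd)
            ones += 1
    return ones / trials - 0.5      # deviation from a fair coin

# With r = 101 players, even q = 5 corrupted players shift the outcome
# noticeably; the achievable bias shrinks only polynomially in r,
# which is the (ln r)/r error floor the abstract refers to.
```

Running it with q = 0 gives bias close to 0, while a handful of corrupted players already produce a clearly visible bias.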
Three-Source Extractors for Polylogarithmic Min-Entropy
We continue the study of constructing explicit extractors for independent
general weak random sources. The ultimate goal is to give a construction that
matches what is given by the probabilistic method --- an extractor for two
independent n-bit weak random sources with min-entropy as small as
log n + O(1). Previously, the best known result in the two-source case is an
extractor by Bourgain \cite{Bourgain05}, which works for min-entropy 0.49n;
and the best known result in the general case is an earlier work of the author
\cite{Li13b}, which gives an extractor for a constant number of independent
sources with min-entropy polylog(n). However, the constant in the
construction of \cite{Li13b} depends on the hidden constant in the best known
seeded extractor, and can be large; moreover, the error in that construction is
only polynomially small.
In this paper, we make two important improvements over the result in
\cite{Li13b}. First, we construct an explicit extractor for \emph{three}
independent sources on n bits with min-entropy polylog(n).
In fact, our extractor works for one independent source with poly-logarithmic
min-entropy and another independent block source with two blocks, each having
poly-logarithmic min-entropy. Thus, our result is nearly optimal, and the next
step would be to break the 0.49n barrier in two-source extractors. Second, we
improve the error of the extractor from polynomially small to
exponentially small, which is almost optimal and crucial for cryptographic
applications. Some of the techniques developed here may be of independent
interest.
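The abstract above relies on the notion of a block source; for reference, the standard definition (stated in my own notation) is:

```latex
% A (k_1, k_2) block source: two concatenated blocks, where the second
% block retains min-entropy conditioned on any fixing of the first.
X = (X_1, X_2) \text{ is a } (k_1, k_2)\text{-block source if }
H_\infty(X_1) \ge k_1
\text{ and, for every } x_1,\;
H_\infty(X_2 \mid X_1 = x_1) \ge k_2 .
```

The two-block source with poly-logarithmic blocks mentioned above is exactly this object with k_1, k_2 = polylog(n).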
A New Approach for Constructing Low-Error, Two-Source Extractors
Our main contribution in this paper is a new reduction from explicit two-source extractors for polynomially small entropy rate and negligible error to explicit t-non-malleable extractors with seed length that has a good dependence on t. Our reduction is based on the framework of Chattopadhyay and Zuckerman (STOC 2016), and, surprisingly, we dispense with the use of resilient functions, which appeared to be a major ingredient there and in follow-up works. The use of resilient functions posed a fundamental barrier towards achieving negligible error, and our new reduction circumvents this bottleneck.
The parameters we require from t-non-malleable extractors for our reduction to work hold in a non-explicit construction, but currently it is not known how to explicitly construct such extractors. As a result, we do not give an unconditional construction of an explicit low-error two-source extractor. Nonetheless, we believe our work gives a viable approach for solving the important problem of low-error two-source extractors. Furthermore, our work highlights an existing barrier in constructing low-error two-source extractors, and draws attention to the dependence of the seed length of the non-malleable extractor on the parameter t. We hope this work will lead to further developments in explicit constructions of both non-malleable and two-source extractors.
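For reference, the t-non-malleable extractors referred to above can be defined along the following standard lines (notation mine):

```latex
% nmExt is a (t, k, \varepsilon) non-malleable extractor if for every
% (n,k)-source X, uniform seed S, and adversarial tampering functions
% f_1, \dots, f_t with f_i(s) \neq s for every seed s:
\bigl(\mathrm{nmExt}(X,S),\ \mathrm{nmExt}(X,f_1(S)),\ \dots,\
      \mathrm{nmExt}(X,f_t(S)),\ S\bigr)
\approx_{\varepsilon}
\bigl(U_m,\ \mathrm{nmExt}(X,f_1(S)),\ \dots,\
      \mathrm{nmExt}(X,f_t(S)),\ S\bigr).
```

That is, the output looks uniform even given the outputs on t adversarially tampered seeds; the case t = 1 is the ordinary non-malleable extractor.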
On Extractors and Exposure-Resilient Functions for Sublogarithmic Entropy
We study deterministic extractors for oblivious bit-fixing sources (a.k.a.
resilient functions) and exposure-resilient functions with small min-entropy:
of the function's n input bits, k << n bits are uniformly random and unknown to
the adversary. We simplify and improve an explicit construction of extractors
for bit-fixing sources with sublogarithmic k due to Kamp and Zuckerman (SICOMP
2006), achieving error exponentially small in k rather than polynomially small
in k. Our main result is that when k is sublogarithmic in n, the short output
length of this construction (O(log k) output bits) is optimal for extractors
computable by a large class of space-bounded streaming algorithms.
Next, we show that a random function is an extractor for oblivious bit-fixing
sources with high probability if and only if k is superlogarithmic in n,
suggesting that our main result may apply more generally. In contrast, we show
that a random function is a static (resp. adaptive) exposure-resilient function
with high probability even if k is as small as a constant (resp. log log n). No
explicit exposure-resilient functions achieving these parameters are known.
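The Kamp--Zuckerman construction simplified above is based on a walk on an odd-length cycle driven by the input bits. A minimal sketch of that idea (the function name and the choice of cycle length m are mine, and this omits the paper's parameter choices and analysis):

```python
def cycle_walk_extract(bits, m):
    """Walk on a cycle of odd length m: step +1 on a 1 bit, -1 on a 0 bit,
    and output the final vertex. For an oblivious bit-fixing source, the
    fixed bits only shift the walk deterministically, while the k random
    bits mix the position on the odd cycle, so the output vertex
    (~log m bits) is close to uniform for suitable m."""
    v = 0
    for b in bits:
        v = (v + (1 if b else -1)) % m
    return v  # an element of {0, ..., m-1}
```

For example, `cycle_walk_extract([1, 1, 0], 5)` walks 0 -> 1 -> 2 -> 1 and outputs 1. The short output length O(log k) discussed above corresponds to taking m polynomial in k.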
Complexity Theory
Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.
Two Source Extractors for Asymptotically Optimal Entropy, and (Many) More
A long line of work in the past two decades or so established close
connections between several different pseudorandom objects and applications.
These connections essentially show that an asymptotically optimal construction
of one central object will lead to asymptotically optimal solutions to all the
others. However, despite considerable effort, previous works can get close but
still lack one final step to achieve truly asymptotically optimal
constructions.
In this paper we provide the last missing link, thus simultaneously achieving
explicit, asymptotically optimal constructions and solutions for various well
studied extractors and applications, that have been the subjects of long lines
of research. Our results include:
Asymptotically optimal seeded non-malleable extractors, which in turn give
two source extractors for asymptotically optimal min-entropy of O(log n),
explicit constructions of K-Ramsey graphs on N vertices with K = O(log N), and truly optimal privacy amplification protocols with an active adversary.
Two source non-malleable extractors and affine non-malleable extractors for
some linear min-entropy with exponentially small error, which in turn give the
first explicit construction of non-malleable codes against 2-split state
tampering and affine tampering with constant rate and \emph{exponentially}
small error.
Explicit extractors for affine sources, sumset sources, interleaved sources,
and small space sources that achieve asymptotically optimal min-entropy of
O(log n) or O(s + log n) (for space s sources).
An explicit function that requires strongly read-once linear branching
programs of size 2^{n - O(log n)}, which is optimal up to the constant in
O(.). Previously, even for standard read-once branching programs, the
best known size lower bound for an explicit function is 2^{n - O(log^2 n)}.
An entropy lower bound for non-malleable extractors
A (k, ε)-non-malleable extractor is a function nmExt : {0,1}^n × {0,1}^d → {0,1} that takes two inputs, a weak source X over {0,1}^n of min-entropy k and an independent uniform seed s ∈ {0,1}^d, and outputs a bit nmExt(X, s) that is ε-close to uniform, even given the seed s and the value nmExt(X, s') for an adversarially chosen seed s' ≠ s. Dodis and Wichs (STOC 2009) showed the existence of (k, ε)-non-malleable extractors with seed length d = log(n − k − 1) + 2 log(1/ε) + 6 that support sources of min-entropy k > log(d) + 2 log(1/ε) + 8. We show that the foregoing bound is essentially tight, by proving that any (k, ε)-non-malleable extractor must satisfy the min-entropy bound k > log(d) + 2 log(1/ε) − log log(1/ε) − C for an absolute constant C. In particular, this implies that non-malleable extractors require min-entropy at least Ω(log log(n)). This is in stark contrast to the existence of strong seeded extractors that support sources of min-entropy k = O(log(1/ε)). Our techniques rely strongly on coding theory. In particular, we reveal an inherent connection between non-malleable extractors and error-correcting codes, by proving a new lemma which shows that any (k, ε)-non-malleable extractor with seed length d induces a code C ⊆ {0,1}^{2^k} with relative distance 1/2 − 2ε and rate (d − 1)/2^k.
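The tightness claim above can be sanity-checked numerically: for any concrete d and ε, the gap between the Dodis--Wichs sufficient min-entropy and the necessary min-entropy proved here is log log(1/ε) plus a constant. A small illustration (the choice C = 1 for the unspecified absolute constant, and the sample values of d and ε, are mine):

```python
import math

def dw_upper(d, eps):
    # Sufficient min-entropy from the Dodis-Wichs existential bound:
    # k > log(d) + 2 log(1/eps) + 8
    return math.log2(d) + 2 * math.log2(1 / eps) + 8

def lower(d, eps, C=1.0):
    # Necessary min-entropy shown in this paper (C is an unspecified
    # absolute constant, set to 1 here purely for illustration):
    # k > log(d) + 2 log(1/eps) - log log(1/eps) - C
    return math.log2(d) + 2 * math.log2(1 / eps) - math.log2(math.log2(1 / eps)) - C

d, eps = 64, 2 ** -20
gap = dw_upper(d, eps) - lower(d, eps)
# gap = log log(1/eps) + 8 + C: the two bounds match up to the
# lower-order log log(1/eps) term, which is the sense in which the
# existential bound is "essentially tight".
```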