
    Two-Source Condensers with Low Error and Small Entropy Gap via Entropy-Resilient Functions

    In their seminal work, Chattopadhyay and Zuckerman (STOC'16) constructed a two-source extractor with error epsilon for n-bit sources having min-entropy polylog(n/epsilon). Unfortunately, the construction's running time is poly(n/epsilon), which means that with polynomial-time constructions, only polynomially small errors are possible. Our main result is a poly(n, log(1/epsilon))-time computable two-source condenser. For any k >= polylog(n/epsilon), our condenser transforms two independent (n,k)-sources to a distribution over m = k - O(log(1/epsilon)) bits that is epsilon-close to having min-entropy m - o(log(1/epsilon)), hence achieving an entropy gap of o(log(1/epsilon)). The bottleneck for obtaining low error in recent constructions of two-source extractors lies in the use of resilient functions. Informally, this is a function that receives input bits from r players, with the property that the function's output has small bias even if a bounded number of corrupted players feed adversarial inputs after seeing the inputs of the other players. The drawback of using resilient functions is that the error cannot be smaller than ln r / r. This, in turn, forces the running time of the construction to be polynomial in 1/epsilon. A key component in our construction is a variant of resilient functions which we call entropy-resilient functions. This variant can be seen as playing the above game for several rounds, each round outputting one bit. The goal of the corrupted players is to reduce, with as high probability as they can, the min-entropy accumulated throughout the rounds. We show that while the bias decreases only polynomially with the number of players in a one-round game, the corrupted players' success probability decreases exponentially in the entropy gap they attempt to incur in the repeated game.
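
    For concreteness, the condenser guarantee stated in this abstract can be written as the following display-math sketch; the shorthand Cond, the symbol H_inf for min-entropy, and the notation ≈_ε for ε-closeness in statistical distance are our own, not quoted from the paper.

```latex
% Parameter statement paraphrased from the abstract above; Cond is our shorthand.
% Assumes \usepackage{amsmath}.
\[
  \mathrm{Cond}\colon \{0,1\}^n \times \{0,1\}^n \to \{0,1\}^m,
  \qquad m = k - O\!\left(\log(1/\varepsilon)\right),
\]
\[
  \text{for independent } (n,k)\text{-sources } X, Y \text{ with } k \ge \mathrm{polylog}(n/\varepsilon):\qquad
  \mathrm{Cond}(X,Y) \approx_{\varepsilon} W, \quad H_{\infty}(W) \ge m - o\!\left(\log(1/\varepsilon)\right),
\]
\[
  \text{with } \mathrm{Cond} \text{ computable in time } \mathrm{poly}\!\left(n, \log(1/\varepsilon)\right).
\]
```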

    Three-Source Extractors for Polylogarithmic Min-Entropy

    We continue the study of constructing explicit extractors for independent general weak random sources. The ultimate goal is to give a construction that matches what is given by the probabilistic method: an extractor for two independent n-bit weak random sources with min-entropy as small as log n + O(1). Previously, the best known result in the two-source case is an extractor by Bourgain [Bourgain05], which works for min-entropy 0.49n; and the best known result in the general case is an earlier work of the author [Li13b], which gives an extractor for a constant number of independent sources with min-entropy polylog(n). However, the constant in the construction of [Li13b] depends on the hidden constant in the best known seeded extractor and can be large; moreover, the error in that construction is only 1/poly(n). In this paper, we make two important improvements over the result in [Li13b]. First, we construct an explicit extractor for three independent sources on n bits with min-entropy k >= polylog(n). In fact, our extractor works for one independent source with poly-logarithmic min-entropy and another independent block source with two blocks, each having poly-logarithmic min-entropy. Thus, our result is nearly optimal, and the next step would be to break the 0.49n barrier in two-source extractors. Second, we improve the error of the extractor from 1/poly(n) to 2^{-k^{Omega(1)}}, which is almost optimal and crucial for cryptographic applications. Some of the techniques developed here may be of independent interest.
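
    Read as a single statement, the parameters claimed above amount to roughly the following; the shorthand Ext_3 and the notation U_m for the uniform distribution on m bits are ours.

```latex
% Three-source extractor parameters as claimed in the abstract; Ext_3 is our shorthand.
% Assumes \usepackage{amsmath}.
\[
  \mathrm{Ext}_3\colon \left(\{0,1\}^n\right)^3 \to \{0,1\}^m,
  \qquad H_{\infty}(X_i) \ge k \ge \mathrm{polylog}(n) \ \text{ for } i = 1, 2, 3,
\]
\[
  \bigl\| \mathrm{Ext}_3(X_1, X_2, X_3) - U_m \bigr\| \le 2^{-k^{\Omega(1)}},
\]
% where X_1, X_2, X_3 are independent and || . || denotes statistical distance.
```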

    Near-Optimal Erasure List-Decodable Codes


    A New Approach for Constructing Low-Error, Two-Source Extractors

    Our main contribution in this paper is a new reduction from explicit two-source extractors for polynomially small entropy rate and negligible error to explicit t-non-malleable extractors with seed length that has a good dependence on t. Our reduction is based on the Chattopadhyay and Zuckerman framework (STOC 2016), and surprisingly we dispense with the use of resilient functions, which appeared to be a major ingredient there and in follow-up works. The use of resilient functions posed a fundamental barrier towards achieving negligible error, and our new reduction circumvents this bottleneck. The parameters we require from t-non-malleable extractors for our reduction to work hold in a non-explicit construction, but currently it is not known how to explicitly construct such extractors. As a result, we do not give an unconditional construction of an explicit low-error two-source extractor. Nonetheless, we believe our work gives a viable approach for solving the important problem of low-error two-source extractors. Furthermore, our work highlights an existing barrier in constructing low-error two-source extractors, and draws attention to the dependence of the non-malleable extractor's seed length on the parameter t. We hope this work will lead to further developments in explicit constructions of both non-malleable and two-source extractors.
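
    For readers unfamiliar with the object the reduction targets, the following is the standard definition of a seeded t-non-malleable extractor used in this line of work; it is background material, not quoted from the paper, and the precise parameters the reduction needs (in particular how the seed length depends on t) are specified in the paper itself.

```latex
% Standard definition of a seeded (k, eps) t-non-malleable extractor (background, our notation).
% Assumes \usepackage{amsmath}.
\[
  \mathrm{nmExt}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m
\]
\[
  \text{is } t\text{-non-malleable if for every } (n,k)\text{-source } X, \text{ uniform seed } Y,
  \text{ and functions } f_1, \dots, f_t\colon \{0,1\}^d \to \{0,1\}^d \text{ with } f_i(y) \ne y \text{ for all } y,
\]
\[
  \bigl(\mathrm{nmExt}(X,Y),\, \mathrm{nmExt}(X,f_1(Y)), \dots, \mathrm{nmExt}(X,f_t(Y)),\, Y\bigr)
  \;\approx_{\varepsilon}\;
  \bigl(U_m,\, \mathrm{nmExt}(X,f_1(Y)), \dots, \mathrm{nmExt}(X,f_t(Y)),\, Y\bigr).
\]
```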

    On Extractors and Exposure-Resilient Functions for Sublogarithmic Entropy

    We study deterministic extractors for oblivious bit-fixing sources (a.k.a. resilient functions) and exposure-resilient functions with small min-entropy: of the function's n input bits, k << n bits are uniformly random and unknown to the adversary. We simplify and improve an explicit construction of extractors for bit-fixing sources with sublogarithmic k due to Kamp and Zuckerman (SICOMP 2006), achieving error exponentially small in k rather than polynomially small in k. Our main result is that when k is sublogarithmic in n, the short output length of this construction (O(log k) output bits) is optimal for extractors computable by a large class of space-bounded streaming algorithms. Next, we show that a random function is an extractor for oblivious bit-fixing sources with high probability if and only if k is superlogarithmic in n, suggesting that our main result may apply more generally. In contrast, we show that a random function is a static (resp. adaptive) exposure-resilient function with high probability even if k is as small as a constant (resp. log log n). No explicit exposure-resilient functions achieving these parameters are known.
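
    As background, the source model referred to above is the standard one; a short formal sketch follows, in our own notation.

```latex
% Oblivious bit-fixing source with k good bits (standard definition, our notation).
% Assumes \usepackage{amsmath}.
\[
  X \in \{0,1\}^n \text{ is an oblivious bit-fixing source with } k \text{ good bits if there exists } S \subseteq [n],\ |S| = k,
\]
\[
  \text{such that } X_S \text{ is uniform on } \{0,1\}^{k} \text{ and the bits outside } S \text{ are fixed constants.}
\]
% Per the abstract, the construction discussed extracts O(log k) bits from such sources
% with error exponentially small in k.
```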

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.

    Two Source Extractors for Asymptotically Optimal Entropy, and (Many) More

    A long line of work in the past two decades or so established close connections between several different pseudorandom objects and applications. These connections essentially show that an asymptotically optimal construction of one central object will lead to asymptotically optimal solutions to all the others. However, despite considerable effort, previous works can get close but still lack one final step to achieve truly asymptotically optimal constructions. In this paper we provide the last missing link, thus simultaneously achieving explicit, asymptotically optimal constructions and solutions for various well-studied extractors and applications that have been the subjects of long lines of research. Our results include: asymptotically optimal seeded non-malleable extractors, which in turn give two-source extractors for asymptotically optimal min-entropy of O(log n), explicit constructions of K-Ramsey graphs on N vertices with K = log^{O(1)} N, and truly optimal privacy amplification protocols with an active adversary; two-source non-malleable extractors and affine non-malleable extractors for some linear min-entropy with exponentially small error, which in turn give the first explicit construction of non-malleable codes against 2-split-state tampering and affine tampering with constant rate and exponentially small error; explicit extractors for affine sources, sumset sources, interleaved sources, and small-space sources that achieve asymptotically optimal min-entropy of O(log n) or 2s + O(log n) (for space-s sources); and an explicit function that requires strongly linear read-once branching programs of size 2^{n - O(log n)}, which is optimal up to the constant in O(·). Previously, even for standard read-once branching programs, the best known size lower bound for an explicit function was 2^{n - O(log^2 n)}.
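
    As one concrete takeaway from this list, the headline two-source extractor parameters quoted above can be summarized as follows; the shorthand 2Ext is ours.

```latex
% Headline parameters quoted from the abstract; 2Ext is our shorthand.
% Assumes \usepackage{amsmath}.
\[
  \mathrm{2Ext}\colon \{0,1\}^n \times \{0,1\}^n \to \{0,1\}^m \text{ explicit for min-entropy } k = O(\log n),
\]
\[
  \text{which yields explicit } K\text{-Ramsey graphs on } N \text{ vertices with } K = \log^{O(1)} N.
\]
```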

    An entropy lower bound for non-malleable extractors

    A (k, ε)-non-malleable extractor is a function nmExt : {0,1}^n × {0,1}^d → {0,1} that takes two inputs, a weak source X ~ {0,1}^n of min-entropy k and an independent uniform seed s ∈ {0,1}^d, and outputs a bit nmExt(X, s) that is ε-close to uniform, even given the seed s and the value nmExt(X, s') for an adversarially chosen seed s' ≠ s. Dodis and Wichs (STOC 2009) showed the existence of (k, ε)-non-malleable extractors with seed length d = log(n - k - 1) + 2 log(1/ε) + 6 that support sources of min-entropy k > log(d) + 2 log(1/ε) + 8. We show that the foregoing bound is essentially tight, by proving that any (k, ε)-non-malleable extractor must satisfy the min-entropy bound k > log(d) + 2 log(1/ε) - log log(1/ε) - C for an absolute constant C. In particular, this implies that non-malleable extractors require min-entropy at least Ω(log log n). This is in stark contrast to the existence of strong seeded extractors that support sources of min-entropy k = O(log(1/ε)). Our techniques strongly rely on coding theory. In particular, we reveal an inherent connection between non-malleable extractors and error-correcting codes, by proving a new lemma which shows that any (k, ε)-non-malleable extractor with seed length d induces a code C ⊆ {0,1}^{2^k} with relative distance 1/2 - 2ε and rate (d - 1)/2^k.
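
    Putting the existential upper bound and the new lower bound side by side (both quoted from the abstract above):

```latex
% Existence bound (Dodis--Wichs) versus the lower bound proved in this work, as stated in the abstract.
% Assumes \usepackage{amsmath}.
\[
  \text{Existence:}\quad d = \log(n - k - 1) + 2\log(1/\varepsilon) + 6,
  \qquad k > \log d + 2\log(1/\varepsilon) + 8,
\]
\[
  \text{Lower bound:}\quad k > \log d + 2\log(1/\varepsilon) - \log\log(1/\varepsilon) - C
  \quad \text{for an absolute constant } C,
\]
% which in particular forces k = Omega(log log n).
```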