15 research outputs found

    Two Source Extractors for Asymptotically Optimal Entropy, and (Many) More

    A long line of work in the past two decades or so has established close connections between several different pseudorandom objects and applications. These connections essentially show that an asymptotically optimal construction of one central object will lead to asymptotically optimal solutions to all the others. However, despite considerable effort, previous works could get close but still lacked one final step to achieve truly asymptotically optimal constructions. In this paper we provide the last missing link, thus simultaneously achieving explicit, asymptotically optimal constructions and solutions for various well-studied extractors and applications that have been the subjects of long lines of research. Our results include: 1) asymptotically optimal seeded non-malleable extractors, which in turn give two-source extractors for asymptotically optimal min-entropy O(\log n), explicit constructions of K-Ramsey graphs on N vertices with K = \log^{O(1)} N, and truly optimal privacy amplification protocols with an active adversary; 2) two-source non-malleable extractors and affine non-malleable extractors for some linear min-entropy with exponentially small error, which in turn give the first explicit constructions of non-malleable codes against 2-split-state tampering and affine tampering with constant rate and exponentially small error; 3) explicit extractors for affine sources, sumset sources, interleaved sources, and small-space sources that achieve asymptotically optimal min-entropy O(\log n), or 2s + O(\log n) for space-s sources; and 4) an explicit function that requires strongly linear read-once branching programs of size 2^{n-O(\log n)}, which is optimal up to the constant in O(\cdot). Previously, even for standard read-once branching programs, the best known size lower bound for an explicit function was 2^{n-O(\log^2 n)}.
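
    For reference, the standard textbook definition behind the first item (stated here for orientation, not quoted from the paper): a function \mathrm{Ext} : \{0,1\}^n \times \{0,1\}^n \to \{0,1\}^m is a two-source extractor for min-entropy k with error \epsilon if, for every pair of independent sources X, Y over \{0,1\}^n with H_\infty(X) \geq k and H_\infty(Y) \geq k,

        |\mathrm{Ext}(X, Y) - U_m| \leq \epsilon,

    where U_m is the uniform distribution on m bits and |\cdot| denotes statistical distance. Interpreting the truth table of a one-output-bit two-source extractor as the adjacency matrix of a bipartite graph on N = 2^n vertices per side rules out monochromatic K \times K rectangles with K = 2^k, which is (roughly) the extractor-to-Ramsey-graph connection invoked above.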

    Two-Source Condensers with Low Error and Small Entropy Gap via Entropy-Resilient Functions

    In their seminal work, Chattopadhyay and Zuckerman (STOC'16) constructed a two-source extractor with error epsilon for n-bit sources having min-entropy polylog(n/epsilon). Unfortunately, the construction's running time is poly(n/epsilon), which means that with polynomial-time constructions, only polynomially small errors are possible. Our main result is a poly(n, log(1/epsilon))-time computable two-source condenser. For any k >= polylog(n/epsilon), our condenser transforms two independent (n,k)-sources into a distribution over m = k - O(log(1/epsilon)) bits that is epsilon-close to having min-entropy m - o(log(1/epsilon)), hence achieving an entropy gap of o(log(1/epsilon)). The bottleneck for obtaining low error in recent constructions of two-source extractors lies in the use of resilient functions. Informally, such a function receives input bits from r players, with the property that the function's output has small bias even if a bounded number of corrupted players feed adversarial inputs after seeing the inputs of the other players. The drawback of using resilient functions is that the error cannot be smaller than ln r/r. This, in turn, forces the running time of the construction to be polynomial in 1/epsilon. A key component in our construction is a variant of resilient functions which we call entropy-resilient functions. This variant can be seen as playing the above game for several rounds, each round outputting one bit. The goal of the corrupted players is to reduce, with as high probability as they can, the min-entropy accumulated throughout the rounds. We show that while the bias decreases only polynomially with the number of players in a one-round game, their success probability decreases exponentially in the entropy gap they are attempting to incur in a repeated game.
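
    To fix notation for the main result (a standard phrasing of the guarantee, not necessarily the paper's exact definition): a function \mathrm{Cond} : \{0,1\}^n \times \{0,1\}^n \to \{0,1\}^m is a two-source condenser with error \epsilon and entropy gap g if, for every pair of independent (n,k)-sources X and Y, the output \mathrm{Cond}(X, Y) is \epsilon-close to some distribution D with

        H_\infty(D) \geq m - g.

    The construction above achieves m = k - O(\log(1/\epsilon)) and g = o(\log(1/\epsilon)) in time \mathrm{poly}(n, \log(1/\epsilon)); a two-source extractor is the special case g = 0.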

    Non-Malleable Extractors and Non-Malleable Codes: Partially Optimal Constructions

    The recent line of study on randomness extractors has been a great success, resulting in exciting new techniques, new connections, and breakthroughs to long-standing open problems in several seemingly different topics. These include seeded non-malleable extractors, privacy amplification protocols with an active adversary, independent source extractors (and explicit Ramsey graphs), and non-malleable codes in the split-state model. Previously, the best constructions were given in [Xin Li, 2017]: seeded non-malleable extractors with seed length and entropy requirement O(log n + log(1/epsilon) log log(1/epsilon)) for error epsilon; two-round privacy amplification protocols with optimal entropy loss for security parameter up to Omega(k/log k), where k is the entropy of the shared weak source; two-source extractors for entropy O(log n log log n); and non-malleable codes in the 2-split-state model with rate Omega(1/log n). However, in all cases there is still a gap to the optimum, and the motivation to close this gap remains strong. In this paper, we introduce a set of new techniques to further push the frontier on the above questions. Our techniques lead to improvements in all of the above questions, and in several cases partially optimal constructions. This is in contrast to all previous work, which only obtains close-to-optimal constructions. Specifically, we obtain: 1) A seeded non-malleable extractor with seed length O(log n) + log^{1+o(1)}(1/epsilon) and entropy requirement O(log log n + log(1/epsilon)), where the entropy requirement is asymptotically optimal by a recent result of Gur and Shinkar [Tom Gur and Igor Shinkar, 2018]; 2) A two-round privacy amplification protocol with optimal entropy loss for security parameter up to Omega(k), which solves the privacy amplification problem completely; 3) A two-source extractor for entropy O((log n log log n)/(log log log n)), which also gives an explicit Ramsey graph on N vertices with no clique or independent set of size (log N)^{O((log log log N)/(log log log log N))}; and 4) The first explicit non-malleable code in the 2-split-state model with constant rate, which has been a major goal in the study of non-malleable codes for quite some time. One small caveat is that the error of this code is only (an arbitrarily small) constant, but we can also achieve negligible error with rate Omega(log log log n/log log n), which already improves the rate in [Xin Li, 2017] exponentially. We believe our new techniques can help to eventually obtain completely optimal constructions for the above questions, and may have applications in other settings.
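
    To make the first item concrete, here is the standard formulation of a seeded non-malleable extractor (following Dodis and Wichs; the paper's definition may differ in minor details): a function \mathrm{nmExt} : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m is a (k, \epsilon) non-malleable extractor if, for every source X with H_\infty(X) \geq k, a uniform seed Y independent of X, and every tampering function \mathcal{A} : \{0,1\}^d \to \{0,1\}^d with no fixed points,

        (\mathrm{nmExt}(X, Y), \mathrm{nmExt}(X, \mathcal{A}(Y)), Y) \approx_\epsilon (U_m, \mathrm{nmExt}(X, \mathcal{A}(Y)), Y).

    The "seed length" and "entropy requirement" quoted in item 1) refer to d and k in this formulation.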

    Privacy Amplification from Non-malleable Codes

    Non-malleable Codes give us the following property: their codewords cannot be tampered into codewords of related messages. Privacy Amplification allows parties to convert their weak shared secret into a fully hidden, uniformly distributed secret key, while communicating over a fully tamperable public channel. In this work, we show how to construct a constant-round privacy amplification protocol from any augmented split-state non-malleable code. Existentially, this gives us another primitive (in addition to optimal non-malleable extractors) whose optimal construction would solve the long-standing open problem of building constant-round privacy amplification with optimal entropy loss. Instantiating our code with the current best known non-malleable code gives us an 8-round privacy amplification protocol with entropy loss O(\log(n) + \kappa\log(\kappa)) and min-entropy requirement \Omega(\log(n) + \kappa\log(\kappa)), where \kappa is the security parameter and n is the length of the shared weak secret. In fact, for our result, even the weaker primitive of Non-malleable Randomness Encoders suffices. We view our result as an exciting connection between two of the most fascinating and well-studied information-theoretic primitives, non-malleable codes and privacy amplification.
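
    For orientation, the parameters above are measured in the standard way for this literature (a generic description, not this paper's exact definitions): the two parties share an n-bit weak secret with min-entropy k and communicate over a channel fully controlled by an active adversary; with security parameter \kappa, the adversary should make the parties accept inconsistent or non-uniform-looking keys with probability at most 2^{-\kappa}, and the entropy loss is

        L = k - m,

    where m is the length of the final agreed key. In these terms, the instantiation above is an 8-round protocol achieving L = O(\log(n) + \kappa\log(\kappa)) whenever k = \Omega(\log(n) + \kappa\log(\kappa)).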

    Extractors for Adversarial Sources via Extremal Hypergraphs

    Randomness extraction is a fundamental problem that has been studied for over three decades. A well-studied setting assumes that one has access to multiple independent weak random sources, each with some entropy. However, this assumption is often unrealistic in practice. In real life, natural sources of randomness can produce samples with no entropy at all or with unwanted dependence. Motivated by this and by applications from cryptography, we initiate a systematic study of randomness extraction for the class of adversarial sources, defined as follows. A weak source \mathbf{X} of the form \mathbf{X}_1, ..., \mathbf{X}_N, where each \mathbf{X}_i is on n bits, is an (N,K,n,k)-source of locality d if the following hold: (1) Somewhere good sources: at least K of the \mathbf{X}_i's are independent, and each contains min-entropy at least k. We call these \mathbf{X}_i's good sources, and their locations are unknown. (2) Bounded dependence: each remaining (bad) source can depend arbitrarily on at most d good sources. We focus on constructing extractors with negligible error, in the regime where most of the entropy is contained within a few sources instead of across many (i.e., k is at least polynomial in K). In this setting, even for the case of 0-locality, very little was known prior to our work. For d \geq 1, essentially no previous results are known. We present various new extractors for adversarial sources in a wide range of parameters, and some of our constructions work for locality d = K^{\Omega(1)}. As an application, we also give improved extractors for small-space sources. The class of adversarial sources generalizes several previously studied classes of sources, and our explicit extractor constructions exploit tools from recent advances in extractor machinery, such as two-source non-malleable extractors and low-error condensers. Thus, our constructions can be viewed as a new application of non-malleable extractors. In addition, our constructions combine the tools from extractor theory in a novel way through various sorts of explicit extremal hypergraphs. These connections leverage recent progress in combinatorics, such as improved bounds on cap sets and explicit constructions of Ramsey graphs, and may be of independent interest.
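
    A small illustrative instance of this source model (hypothetical parameters chosen only to unpack the definition): take N = 3, K = 2, and locality d = 1, with \mathbf{X} = (\mathbf{X}_1, \mathbf{X}_2, \mathbf{X}_3), where \mathbf{X}_1 and \mathbf{X}_2 are independent and satisfy H_\infty(\mathbf{X}_1), H_\infty(\mathbf{X}_2) \geq k, while the bad source is \mathbf{X}_3 = f(\mathbf{X}_1) for an arbitrary, adversarially chosen function f. An extractor for this class must produce output close to uniform from the concatenated sample \mathbf{X}_1, \mathbf{X}_2, \mathbf{X}_3, without knowing which sources are good or how the bad one depends on them.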