    Two-Source Condensers with Low Error and Small Entropy Gap via Entropy-Resilient Functions

    In their seminal work, Chattopadhyay and Zuckerman (STOC'16) constructed a two-source extractor with error epsilon for n-bit sources having min-entropy polylog(n/epsilon). Unfortunately, the construction's running time is poly(n/epsilon), which means that with polynomial-time constructions, only polynomially small errors are possible. Our main result is a poly(n, log(1/epsilon))-time computable two-source condenser. For any k >= polylog(n/epsilon), our condenser transforms two independent (n,k)-sources into a distribution over m = k - O(log(1/epsilon)) bits that is epsilon-close to having min-entropy m - o(log(1/epsilon)), hence achieving an entropy gap of o(log(1/epsilon)). The bottleneck for obtaining low error in recent constructions of two-source extractors lies in the use of resilient functions. Informally, a resilient function receives input bits from r players and has the property that its output has small bias even if a bounded number of corrupted players feed adversarial inputs after seeing the inputs of the other players. The drawback of using resilient functions is that the error cannot be smaller than ln r / r. This, in turn, forces the running time of the construction to be polynomial in 1/epsilon. A key component in our construction is a variant of resilient functions which we call entropy-resilient functions. This variant can be seen as playing the above game for several rounds, each round outputting one bit. The goal of the corrupted players is to reduce, with as high probability as they can, the min-entropy accumulated throughout the rounds. We show that while the bias decreases only polynomially with the number of players in a one-round game, the corrupted players' success probability decreases exponentially in the entropy gap they attempt to incur in a repeated game.
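    To make the one-round resilient-function game concrete, here is a minimal Python sketch that uses the majority function as the resilient function. This only illustrates the game the abstract describes, not the paper's construction; the function name majority_game and all parameter choices are mine. The q corrupted players see the honest players' bits and all vote 1, trying to bias the output.

        import random

        def majority_game(r, q, trials=5000):
            # One-round game: r players each contribute a bit, majority wins.
            # The q corrupted players see the r - q honest bits and all vote 1,
            # trying to push the output toward 1.
            ones = 0
            for _ in range(trials):
                honest = bin(random.getrandbits(r - q)).count("1")
                ones += (honest + q) > r // 2  # majority over all r bits
            return ones / trials  # empirical Pr[output = 1] under attack

        if __name__ == "__main__":
            r = 1001
            for q in (0, 5, 25, 100):
                p = majority_game(r, q)
                print(f"r={r}, corrupted={q}: Pr[out=1] ~ {p:.3f}, bias ~ {abs(p - 0.5):.3f}")

    For majority, a coalition of q players shifts the output probability by roughly q/sqrt(r); the abstract's point is that no resilient function can achieve error below roughly ln r / r, which is why a single round caps the achievable error and the repeated, multi-bit game is needed.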

    Better lossless condensers through derandomized curve samplers

    Lossless condensers are unbalanced expander graphs with expansion close to optimal. Equivalently, they may be viewed as functions that use a short random seed to map a source on n bits to a source on many fewer bits while preserving all of the min-entropy. The work of M. Capalbo et al. (2002) shows how to build lossless condensers when the graphs are slightly unbalanced. The highly unbalanced case is also important, but the only known construction does not condense the source well. We give explicit constructions of lossless condensers with condensing close to optimal and near-optimal seed length. Our main technical contribution is a randomness-efficient method for sampling F^D (where F is a field) with low-degree curves. This problem was addressed before in the works of E. Ben-Sasson et al. (2003) and D. Moshkovitz and R. Raz (2006), but those solutions apply only to degree-one curves, i.e., lines. Our technique is new and elegant: we use sub-sampling and obtain our curve samplers by composing a sequence of low-degree manifolds, starting with high-dimension, low-degree manifolds and proceeding through lower and lower dimension manifolds with (moderately) growing degrees, until we finish with dimension-one, low-degree manifolds, i.e., curves. The technique may be of independent interest.
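    As background for what a curve sampler produces, the Python sketch below draws a fully random degree-d curve in F_p^D and reads off its p sample points. This is the naive, seed-expensive baseline, not the paper's derandomized construction; the prime P, the helper name random_curve, and the parameters are illustrative choices of mine.

        import random

        P = 17  # a small prime, so arithmetic below is over the field F_17

        def random_curve(D, d, rng=random):
            # Naive sampler: each of the D coordinates is an independent,
            # uniformly random univariate polynomial of degree <= d over F_P.
            coeffs = [[rng.randrange(P) for _ in range(d + 1)] for _ in range(D)]

            def curve(t):
                # Evaluate each coordinate polynomial at t via Horner's rule.
                point = []
                for cs in coeffs:
                    acc = 0
                    for c in reversed(cs):
                        acc = (acc * t + c) % P
                    point.append(acc)
                return tuple(point)

            return curve

        if __name__ == "__main__":
            c = random_curve(D=3, d=2)          # a degree-2 curve in F_17^3
            samples = [c(t) for t in range(P)]  # its 17 sample points
            print(samples[:5])

    Choosing all (d+1)*D coefficients uniformly costs about (d+1)*D*log|F| random bits; the derandomized samplers in the paper use far shorter seeds while still hitting any dense subset of F^D with high probability.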

    Quantum-proof randomness extractors via operator space theory

    Quantum-proof randomness extractors are an important building block for classical and quantum cryptography, as well as for device-independent randomness amplification and expansion. Furthermore, they are a useful tool in quantum Shannon theory. It is known that some extractor constructions are quantum-proof whereas others are provably not [Gavinsky et al., STOC'07]. We argue that the theory of operator spaces offers a natural framework for studying to what extent extractors are secure against quantum adversaries: we first phrase the definition of extractors as a bounded norm condition between normed spaces, and then show that the presence of quantum adversaries corresponds to a completely bounded norm condition between operator spaces. From this we show that very high min-entropy extractors, as well as extractors with small output, are always (approximately) quantum-proof. We also study a generalization of extractors called randomness condensers. We phrase the definition of condensers as a bounded norm condition and the definition of quantum-proof condensers as a completely bounded norm condition. Seeing condensers as bipartite graphs, we then find that the bounded norm condition corresponds to an instance of a well-studied combinatorial problem called bipartite densest subgraph. Furthermore, using the characterization in terms of operator spaces, we can associate to any condenser a Bell inequality (two-player game) such that classical and quantum strategies are in one-to-one correspondence with classical and quantum attacks on the condenser. Hence, we get for every quantum-proof condenser (which includes in particular quantum-proof extractors) a Bell inequality that cannot be violated by quantum mechanics.
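    The bipartite densest subgraph quantity is easy to state concretely. The brute-force Python sketch below computes, for a tiny bipartite graph, the maximum over vertex subsets S and T of e(S,T)/sqrt(|S|*|T|). Note the normalization is one common choice and may not match the paper's exact norm correspondence, so treat this as an illustration of the combinatorial object, not the paper's theorem.

        from itertools import chain, combinations

        def nonempty_subsets(xs):
            # All nonempty subsets of xs.
            return chain.from_iterable(combinations(xs, k) for k in range(1, len(xs) + 1))

        def bipartite_density(edges, left, right):
            # Brute force: max over nonempty S (subset of left) and T (subset
            # of right) of e(S, T) / sqrt(|S| * |T|). Exponential time, so only
            # suitable for tiny graphs.
            edge_set = set(edges)
            best = 0.0
            for S in nonempty_subsets(left):
                for T in nonempty_subsets(right):
                    e = sum((u, v) in edge_set for u in S for v in T)
                    best = max(best, e / (len(S) * len(T)) ** 0.5)
            return best

        if __name__ == "__main__":
            left, right = ["x0", "x1", "x2"], ["y0", "y1", "y2"]
            edges = [("x0", "y0"), ("x0", "y1"), ("x1", "y0"), ("x2", "y2")]
            print(f"densest bipartite subgraph value ~ {bipartite_density(edges, left, right):.3f}")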

    The Bounded Storage Model in The Presence of a Quantum Adversary

    An extractor is a function E that is used to extract randomness: given an imperfect random source X and a uniform seed Y, the output E(X,Y) is close to uniform. We study properties of such functions in the presence of prior quantum information about X, with a particular focus on cryptographic applications. We prove that certain extractors are suitable for key expansion in the bounded storage model, where the adversary has a limited amount of quantum memory. For extractors with one-bit output we show that the extracted bit is essentially as secure as in the case where the adversary has classical resources. We also prove the security of certain constructions that output multiple bits in the bounded storage model.
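    For intuition about the one-bit case, here is a minimal Python sketch of a classic one-bit extractor, the inner product modulo 2, together with a brute-force computation of its strong-extractor error against a flat (n,k)-source. This models only a classical adversary; the paper's contribution concerns quantum side information, which this simulation does not capture. The parameter choices and helper names (ip_extract, avg_bias) are mine.

        import itertools
        import random

        def ip_extract(x, y):
            # One-bit inner-product extractor: <x, y> mod 2.
            return sum(a & b for a, b in zip(x, y)) % 2

        def avg_bias(n, support):
            # Strong-extractor error for a one-bit output: the statistical
            # distance of (Y, E(X, Y)) from (Y, uniform bit) equals the
            # average over seeds y of |Pr[E(X, y) = 1] - 1/2|.
            total = 0.0
            for y in itertools.product((0, 1), repeat=n):
                p1 = sum(ip_extract(x, y) for x in support) / len(support)
                total += abs(p1 - 0.5)
            return total / 2 ** n

        if __name__ == "__main__":
            n, k = 8, 4
            rng = random.Random(0)
            all_strings = list(itertools.product((0, 1), repeat=n))
            support = rng.sample(all_strings, 2 ** k)  # a flat (n, k)-source
            print(f"strong-extractor error ~ {avg_bias(n, support):.4f}")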