
    An entropy lower bound for non-malleable extractors

    A (k, ε)-non-malleable extractor is a function nmExt : {0,1}^n × {0,1}^d → {0,1} that takes two inputs, a weak source X ~ {0,1}^n of min-entropy k and an independent uniform seed s ∈ {0,1}^d, and outputs a bit nmExt(X, s) that is ε-close to uniform, even given the seed s and the value nmExt(X, s') for an adversarially chosen seed s' ≠ s. Dodis and Wichs (STOC 2009) showed the existence of (k, ε)-non-malleable extractors with seed length d = log(n - k - 1) + 2 log(1/ε) + 6 that support sources of min-entropy k > log(d) + 2 log(1/ε) + 8. We show that the foregoing bound is essentially tight, by proving that any (k, ε)-non-malleable extractor must satisfy the min-entropy bound k > log(d) + 2 log(1/ε) - log log(1/ε) - C for an absolute constant C. In particular, this implies that non-malleable extractors require min-entropy at least Ω(log log n). This is in stark contrast to the existence of strong seeded extractors that support sources of min-entropy k = O(log(1/ε)). Our techniques strongly rely on coding theory. In particular, we reveal an inherent connection between non-malleable extractors and error-correcting codes, by proving a new lemma which shows that any (k, ε)-non-malleable extractor with seed length d induces a code C ⊆ {0,1}^{2^k} with relative distance 1/2 - 2ε and rate (d-1)/2^k.
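    To make the gap between the existential upper bound and the new lower bound concrete, the following minimal Python sketch plugs illustrative values of n, k and ε into the formulas quoted above. Base-2 logarithms, the placeholder constant C = 10, and the function names are our assumptions, not values taken from the paper.

    import math

    def dodis_wichs_params(n, k, eps):
        # Existential parameters of Dodis and Wichs (STOC 2009), as quoted above:
        # seed length d and the min-entropy threshold their extractors support.
        d = math.log2(n - k - 1) + 2 * math.log2(1 / eps) + 6
        k_supported = math.log2(d) + 2 * math.log2(1 / eps) + 8
        return d, k_supported

    def min_entropy_lower_bound(d, eps, C=10.0):
        # Lower bound k > log(d) + 2 log(1/eps) - log log(1/eps) - C;
        # C is an unspecified absolute constant, 10.0 is an arbitrary placeholder.
        return math.log2(d) + 2 * math.log2(1 / eps) - math.log2(math.log2(1 / eps)) - C

    # Illustrative values only: n = 2^20, k = 1000, eps = 2^-40.
    d, k_upper = dodis_wichs_params(2**20, 1000, 2**-40)
    k_lower = min_entropy_lower_bound(d, 2**-40)
    print(f"d ~ {d:.1f}, entropy supported above ~ {k_upper:.1f}, entropy required above ~ {k_lower:.1f}")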

    Randomness-Efficient Curve Samplers

    Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions. The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TSU06], where curve samplers with near-optimal randomness complexity were obtained. We present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor), sampling curves of degree (m log_q(1/δ))^O(1) in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.
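    For orientation only, the sketch below implements the trivial curve sampler over a prime field F_p: it draws t+1 uniformly random coefficient vectors and outputs the evaluation table of the resulting degree-t curve. It spends (t+1)·m·log2(p) random bits, far more than the O(log N + log(1/δ)) target, and is not the paper's construction; the function and parameter names are ours.

    import random

    def naive_curve_sampler(p, m, t, rng=random):
        # Draw a uniformly random degree-t curve C(s) = c_0 + c_1*s + ... + c_t*s^t
        # with coefficient vectors c_i in F_p^m, and return its evaluation table.
        # The restriction of a degree-D polynomial on F_p^m to this curve has degree
        # at most D*t in s, which is the "nice property" mentioned above.
        coeffs = [[rng.randrange(p) for _ in range(m)] for _ in range(t + 1)]
        curve = []
        for s in range(p):
            point = tuple(sum(c[j] * pow(s, i, p) for i, c in enumerate(coeffs)) % p
                          for j in range(m))
            curve.append(point)
        return curve

    # Illustrative parameters: degree-2 curves in F_101^3.
    points = naive_curve_sampler(p=101, m=3, t=2)
    print(len(points), points[:2])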

    Quantum-Proof Extractors: Optimal up to Constant Factors

    We give the first construction of a family of quantum-proof extractors that has optimal seed length dependence O(log(n/ε)) on the input length n and error ε. Our extractors support any min-entropy k = Ω(log n + log^{1+α}(1/ε)) and extract m = (1 - α)k bits that are ε-close to uniform, for any desired constant α > 0. Previous constructions had a quadratically worse seed length or were restricted to very large input min-entropy or very few output bits. Our result is based on a generic reduction showing that any strong classical condenser is automatically quantum-proof, with comparable parameters. The existence of such a reduction for extractors is a long-standing open question; here we give an affirmative answer for condensers. Once this reduction is established, to obtain our quantum-proof extractors one only needs to consider high-entropy sources. We construct quantum-proof extractors with the desired parameters for such sources by extending a classical approach to extractor construction, based on the use of block-sources and sampling, to the quantum setting. Our extractors can be used to obtain improved protocols for device-independent randomness expansion and for privacy amplification.
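    As a rough guide to the stated trade-off, the sketch below evaluates the parameter relations from the abstract for sample values. The hidden O()/Ω() constants are not given in the abstract, so c_seed and c_k are placeholder assumptions, and the function name is ours.

    import math

    def quantum_proof_extractor_params(n, eps, alpha, c_seed=1.0, c_k=1.0):
        # d = O(log(n/eps)), k = Omega(log n + log^{1+alpha}(1/eps)), m = (1 - alpha)*k,
        # with c_seed and c_k standing in for the unspecified constants.
        seed_length = c_seed * math.log2(n / eps)
        min_entropy = c_k * (math.log2(n) + math.log2(1 / eps) ** (1 + alpha))
        output_bits = (1 - alpha) * min_entropy
        return seed_length, min_entropy, output_bits

    # Illustrative values: n = 2^20, eps = 2^-30, alpha = 0.1.
    d, k, m = quantum_proof_extractor_params(2**20, 2**-30, 0.1)
    print(f"seed ~ {d:.0f} bits, min-entropy needed ~ {k:.0f}, output ~ {m:.0f} bits")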

    Universal codes in the shared-randomness model for channels with general distortion capabilities

    We put forth new models for universal channel coding. Unlike standard codes, which are designed for a specific type of channel, our most general universal code makes communication resilient on every channel, provided the noise level is below the tolerated bound, where the noise level t of a channel is the logarithm of its ambiguity (the maximum number of strings that can be distorted into a given one). The other, more restricted universal codes still work for large classes of natural channels. In a universal code, encoding is channel-independent, but the decoding function knows the type of channel. We allow the encoding and decoding functions to share randomness, which is unavailable to the channel. There are two scenarios for the type of attack that a channel can perform. In the oblivious scenario, codewords belong to an additive group and the channel distorts a codeword by adding a vector from a fixed set. The selection is based on the message and the encoding function, but not on the codeword. In the Hamming scenario, the channel knows the codeword and is fully adversarial. For a universal code, there are two parameters of interest: the rate, which is the ratio between the message length k and the codeword length n, and the number of shared random bits. We show the existence in both scenarios of universal codes with rate 1 - t/n - o(1), which is optimal modulo the o(1) term. The number of shared random bits is O(log n) in the oblivious scenario and O(n) in the Hamming scenario, which, for typical values of the noise level, we show to be optimal, modulo the constant hidden in the O() notation. In both scenarios, the universal encoding is done in time polynomial in n, but the channel-dependent decoding procedures are in general not efficient. For some weaker classes of channels we construct universal codes with polynomial-time encoding and decoding.
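    To make the rate bound concrete, the following sketch computes the noise level t and the achievable rate 1 - t/n (dropping the o(1) term) for one natural channel: a channel that flips at most e of the n bits, whose ambiguity is the size of a Hamming ball of radius e. The example channel and parameter values are ours, not the paper's.

    import math

    def noise_level(ambiguity):
        # Noise level t = log2 of the ambiguity, i.e. of the maximum number
        # of strings that can be distorted into a given one.
        return math.log2(ambiguity)

    def hamming_ball_size(n, e):
        # Ambiguity of a channel that flips at most e of n bits.
        return sum(math.comb(n, i) for i in range(e + 1))

    # Illustrative values: n = 1000 codeword bits, at most 50 bit flips.
    n, e = 1000, 50
    t = noise_level(hamming_ball_size(n, e))
    rate = 1 - t / n   # achievable rate is 1 - t/n - o(1); the o(1) term is omitted
    print(f"noise level t ~ {t:.1f} bits, achievable rate ~ {rate:.3f}")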

    Expander Graphs and Coding Theory

    Expander graphs are highly connected sparse graphs which lie at the interface of many different fields of study. For example, they play important roles in prime sieves, cryptography, compressive sensing, metric embedding, and coding theory, to name a few. This thesis focuses on the connections between sparse graphs and coding theory. It is a major challenge to explicitly construct sparse graphs with good expansion properties, for example Ramanujan graphs. Nevertheless, explicit constructions do exist, and in this thesis we survey many of these constructions up to this point, including a new construction which slightly improves on an earlier edge expansion bound. The edge expansion of a graph is crucial in applications, and it is well known that computing the edge expansion of an arbitrary graph is NP-hard. We present a simple algorithm for approximating the edge expansion of a graph using linear programming techniques. While Andersen and Lang (2008) proved similar results, our analysis attacks the problem from a different vantage point and was discovered independently. The main contribution of the thesis is a new result in fast decoding for expander codes. Current algorithms in the literature can decode a constant fraction of errors in linear time but require that the underlying graphs have vertex expansion at least 1/2. We present a fast decoding algorithm that can decode a constant fraction of errors in linear time given any vertex expansion (even if it is much smaller than 1/2) by using a stronger local code, and the fraction of errors corrected almost doubles that of Viderman (2013).
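    The thesis's decoder itself is not reproduced here; for orientation only, the sketch below is a minimal Sipser-Spielman-style bit-flipping decoder for a code given by a sparse bipartite parity-check graph. It illustrates the linear-time local-flipping paradigm that expander-code decoders build on, not the stronger local-code-based algorithm described above; the toy parity checks are ours.

    def flip_decode(checks, word, max_rounds=100):
        # checks: list of parity checks, each a list of variable indices.
        # word:   list of 0/1 bits. Repeatedly flip any bit that sits in more
        # unsatisfied than satisfied checks, until no bit wants to flip.
        word = list(word)
        checks_of = [[] for _ in range(len(word))]
        for c, check in enumerate(checks):
            for v in check:
                checks_of[v].append(c)
        unsat = [sum(word[v] for v in check) % 2 == 1 for check in checks]
        for _ in range(max_rounds):
            flipped = False
            for v in range(len(word)):
                if 2 * sum(unsat[c] for c in checks_of[v]) > len(checks_of[v]):
                    word[v] ^= 1
                    for c in checks_of[v]:
                        unsat[c] = sum(word[u] for u in checks[c]) % 2 == 1
                    flipped = True
            if not flipped:
                break
        return word

    # Toy example: [7,4] Hamming-code checks, all-zero codeword with one flipped bit.
    H = [[0, 1, 2, 4], [0, 1, 3, 5], [0, 2, 3, 6]]
    received = [0, 0, 0, 0, 1, 0, 0]
    print(flip_decode(H, received))   # expected output: [0, 0, 0, 0, 0, 0, 0]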