
    Two-Source Condensers with Low Error and Small Entropy Gap via Entropy-Resilient Functions

    In their seminal work, Chattopadhyay and Zuckerman (STOC'16) constructed a two-source extractor with error ε for n-bit sources having min-entropy polylog(n/ε). Unfortunately, the construction's running time is poly(n/ε), which means that polynomial-time constructions can only achieve polynomially small error. Our main result is a poly(n, log(1/ε))-time computable two-source condenser. For any k ≥ polylog(n/ε), our condenser transforms two independent (n,k)-sources into a distribution over m = k − O(log(1/ε)) bits that is ε-close to having min-entropy m − o(log(1/ε)), thus achieving an entropy gap of o(log(1/ε)). The bottleneck for obtaining low error in recent constructions of two-source extractors lies in the use of resilient functions. Informally, a resilient function receives input bits from r players and has the property that its output has small bias even if a bounded number of corrupted players feed adversarial inputs after seeing the inputs of the other players. The drawback of using resilient functions is that the error cannot be smaller than ln r / r. This, in turn, forces the running time of the construction to be polynomial in 1/ε. A key component in our construction is a variant of resilient functions which we call entropy-resilient functions. This variant can be seen as playing the above game for several rounds, each round outputting one bit. The goal of the corrupted players is to reduce, with as high probability as they can, the min-entropy accumulated throughout the rounds. We show that while the bias decreases only polynomially with the number of players in a one-round game, their success probability decreases exponentially in the entropy gap they are attempting to incur in a repeated game.
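    As a toy illustration of the one-round resilient-function game described above (not the paper's construction; the use of majority, the single corrupted player, and its strategy are illustrative assumptions), the sketch below estimates how far one adaptive corrupted player can bias the output:

    import random

    def one_round_bias(r, trials=20000):
        """Estimate how far one adaptively corrupted player can bias majority.

        r - 1 honest players submit independent uniform bits; the corrupted
        player sees them and then always votes 1 to push the outcome to 1.
        """
        ones = 0
        for _ in range(trials):
            honest = sum(random.randint(0, 1) for _ in range(r - 1))
            votes = honest + 1              # the adversary's vote
            ones += votes > r / 2           # strict majority (r odd: no ties)
        return ones / trials - 0.5          # deviation from a fair coin

    if __name__ == "__main__":
        for r in (11, 101, 1001):
            print(r, round(one_round_bias(r), 3))

    For majority the measured deviation shrinks only like 1/sqrt(r), consistent with the abstract's point that the bias in a one-round game decreases only polynomially with the number of players.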

    Trevisan's extractor in the presence of quantum side information

    Randomness extraction involves the processing of purely classical information and is therefore usually studied in the framework of classical probability theory. However, such a classical treatment is generally too restrictive for applications, where side information about the values taken by classical random variables may be represented by the state of a quantum system. This is particularly relevant in the context of cryptography, where an adversary may make use of quantum devices. Here, we show that the well-known construction paradigm for extractors proposed by Trevisan is sound in the presence of quantum side information. We exploit the modularity of this paradigm to give several concrete extractor constructions which, e.g., extract all the conditional (smooth) min-entropy of the source using a seed of length poly-logarithmic in the input, or only require the seed to be weakly random. Comment: 20+10 pages; v2: extract more min-entropy, use weakly random seed; v3: extended introduction, matches published version with sections somewhat reordered.
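    As a rough sketch of the Trevisan paradigm referred to above (hedged: it substitutes a Hadamard encoding and fixed random index sets for the list-decodable code and combinatorial design of the real construction, so the seed here is unrealistically long and all parameters are purely illustrative):

    import random

    def hadamard_bit(x_bits, y_bits):
        """One bit of the Hadamard encoding of x: the inner product <x, y> mod 2."""
        return sum(a & b for a, b in zip(x_bits, y_bits)) % 2

    def trevisan_sketch(x_bits, seed_bits, m):
        """Toy Trevisan-style extractor producing m output bits.

        Output bit i reads the seed through the i-th index set (here just a
        fixed, public random subset of seed positions standing in for a weak
        design) and uses those seed bits as a position y in the Hadamard
        encoding of the source x.
        """
        n = len(x_bits)
        rng = random.Random(0)              # the design is fixed and public
        designs = [rng.sample(range(len(seed_bits)), n) for _ in range(m)]
        return [hadamard_bit(x_bits, [seed_bits[j] for j in S]) for S in designs]

    if __name__ == "__main__":
        x = [random.randint(0, 1) for _ in range(16)]      # toy weak source
        seed = [random.randint(0, 1) for _ in range(64)]   # toy uniform seed
        print(trevisan_sketch(x, seed, m=8))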

    Better short-seed quantum-proof extractors

    We construct a strong extractor against quantum storage that works for every min-entropy k, has logarithmic seed length, and outputs Ω(k) bits, provided that the quantum adversary has at most βk qubits of memory, for any β < 1/2. The construction works by first condensing the source (with minimal entropy loss) and then applying an extractor that works well against quantum adversaries when the source is close to uniform. We also obtain an improved construction of a strong quantum-proof extractor in the high min-entropy regime. Specifically, we construct an extractor that uses a logarithmic seed length and extracts Ω(n) bits from any source over {0,1}^n, provided that the min-entropy of the source conditioned on the quantum adversary's state is at least (1−β)n, for any β < 1/2. Comment: 14 pages.

    On Extractors and Exposure-Resilient Functions for Sublogarithmic Entropy

    We study deterministic extractors for oblivious bit-fixing sources (a.k.a. resilient functions) and exposure-resilient functions with small min-entropy: of the function's n input bits, k << n bits are uniformly random and unknown to the adversary. We simplify and improve an explicit construction of extractors for bit-fixing sources with sublogarithmic k due to Kamp and Zuckerman (SICOMP 2006), achieving error exponentially small in k rather than polynomially small in k. Our main result is that when k is sublogarithmic in n, the short output length of this construction (O(log k) output bits) is optimal for extractors computable by a large class of space-bounded streaming algorithms. Next, we show that a random function is an extractor for oblivious bit-fixing sources with high probability if and only if k is superlogarithmic in n, suggesting that our main result may apply more generally. In contrast, we show that a random function is a static (resp. adaptive) exposure-resilient function with high probability even if k is as small as a constant (resp. log log n). No explicit exposure-resilient functions achieving these parameters are known.
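    For context, here is a minimal sketch of the random-walk idea behind the Kamp-Zuckerman bit-fixing extractor that the construction above builds on (my paraphrase; the cycle length and source sizes are illustrative, and the real analysis is considerably more careful):

    import random

    def cycle_walk_extractor(bits, m=5):
        """Deterministic extractor sketch for oblivious bit-fixing sources.

        Walk on a cycle of odd length m: a 0 bit steps left, a 1 bit steps
        right.  Fixed (adversarial) bits only shift the walk deterministically,
        while each truly random bit mixes the position, so the final vertex
        yields roughly log2(m) nearly uniform output bits.
        """
        assert m % 2 == 1, "an odd cycle avoids the walk's parity obstruction"
        pos = 0
        for b in bits:
            pos = (pos + (1 if b else -1)) % m
        return pos

    if __name__ == "__main__":
        # toy oblivious bit-fixing source: 40 fixed bits followed by 8 random ones
        sample = [1] * 40 + [random.randint(0, 1) for _ in range(8)]
        print(cycle_walk_extractor(sample))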

    Experimental device-independent certified randomness generation with an instrumental causal structure

    The intrinsic random nature of quantum physics offers novel tools for the generation of random numbers, a central challenge for a plethora of fields. Bell non-local correlations obtained by measurements on entangled states allow for the generation of bit strings whose randomness is guaranteed in a device-independent manner, i.e. without assumptions on the measurement and state-generation devices. Here, we generate this strong form of certified randomness on a new platform: the so-called instrumental scenario, which is central to the field of causal inference. First, we theoretically show that certified random bits, private against general quantum adversaries, can be extracted by exploiting device-independent violations of quantum instrumental inequalities. To that end, we adapt techniques previously developed for the Bell scenario. Then, we experimentally implement the corresponding randomness-generation protocol using entangled photons and active feed-forward of information. Moreover, we show that, for low levels of noise, our protocol offers an advantage over the simplest Bell-nonlocality protocol, which is based on the Clauser-Horne-Shimony-Holt (CHSH) inequality. Comment: Modified Supplementary Information: removed description of the extractor algorithm introduced by arXiv:1212.0520; implemented security of the protocol against general adversarial attacks.
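    Since the abstract compares its protocol against the CHSH-based one, a small reminder of how a CHSH violation is scored from measured correlators (the numbers below are hypothetical placeholders, not data from this experiment):

    def chsh_value(E00, E01, E10, E11):
        """CHSH combination S = E(a0,b0) + E(a0,b1) + E(a1,b0) - E(a1,b1).

        Classically |S| <= 2, while quantum correlations can reach 2*sqrt(2);
        any value above 2 certifies randomness device-independently.
        """
        return E00 + E01 + E10 - E11

    if __name__ == "__main__":
        # hypothetical correlators, roughly what a noisy entangled pair might give
        S = chsh_value(0.68, 0.68, 0.68, -0.68)
        print(S, "violates the classical bound" if abs(S) > 2 else "no violation")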

    Near-Optimal Erasure List-Decodable Codes


    Extractors: Low Entropy Requirements Colliding With Non-Malleability

    The known constructions of negligible-error (non-malleable) two-source extractors can be broadly classified into three categories: (1) constructions where one source has min-entropy rate about 1/2, the other source can have small min-entropy rate, but the extractor doesn't guarantee non-malleability; (2) constructions where one source is uniform, the other can have small min-entropy rate, and the extractor guarantees non-malleability when the uniform source is tampered with; (3) constructions where both sources have entropy rate very close to 1 and the extractor guarantees non-malleability against tampering of both sources. We introduce a new notion of collision-resistant extractors, and using it we obtain a strong two-source non-malleable extractor where the first source is required to have entropy rate 0.8 and the other source can have min-entropy polylogarithmic in the length of the source. We show how the above extractor can be applied to obtain a non-malleable extractor with output rate 1/2, which is optimal. We also show how, by using our extractor and extending the known protocol, one can obtain a privacy amplification protocol secure against memory tampering in which the size of the secret output is almost optimal.

    Physical Randomness Extractors: Generating Random Numbers with Minimal Assumptions

    How can we generate provably true randomness with minimal assumptions? This question is important not only for the efficiency and the security of information processing, but also for understanding how extremely unpredictable events are possible in Nature. All current solutions require special structures in the initial source of randomness, or a certain independence relation among two or more sources. Both types of assumptions are impossible to test and difficult to guarantee in practice. Here we show how this fundamental limit can be circumvented by extractors that base their security on the validity of physical laws and extract randomness from untrusted quantum devices. In conjunction with the recent work of Miller and Shi (arXiv:1402.0489), our physical randomness extractor uses just a single, general weak source and produces an arbitrarily long, near-uniform output with close-to-optimal error, secure against all-powerful quantum adversaries and tolerating a constant level of implementation imprecision. The source necessarily needs to be unpredictable to the devices, but otherwise can even be known to the adversary. Our central technical contribution, the Equivalence Lemma, provides a general principle for proving composition security of untrusted-device protocols. It implies that unbounded randomness expansion can be achieved simply by cross-feeding any two expansion protocols. In particular, such an unbounded expansion can be made robust, which was not known before. Another significant implication is that it enables secure randomness generation and key distribution using public randomness, such as that broadcast by NIST's Randomness Beacon. Our protocol also provides a method for refuting local hidden variable theories under a weak assumption on the available randomness for choosing the measurement settings. Comment: A substantial re-writing of v2, especially on model definitions. An abstract model of robustness is added and the robustness claim in v2 is made rigorous. Focuses on quantum security. A future update is planned to address non-signaling security.
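    The cross-feeding idea stated in the abstract, shown as a minimal control-flow sketch (the two expansion protocols are stubs standing in for real untrusted-device protocols, and the doubling rate is an arbitrary assumption; the point is only the alternation that the Equivalence Lemma makes secure):

    def expand_with_device_A(seed):
        """Stub for one untrusted-device expansion protocol (placeholder only)."""
        return seed + seed          # pretend the device certified fresh randomness

    def expand_with_device_B(seed):
        """Stub for a second, independent untrusted-device protocol (placeholder)."""
        return seed + seed

    def cross_feed(seed, rounds):
        """Alternate the two protocols, feeding each one's output to the other.

        The Equivalence Lemma justifies that the seed fed to a device only needs
        to look uniform to that device, so the composition stays secure and the
        expansion can continue without bound.
        """
        out = seed
        for i in range(rounds):
            out = expand_with_device_A(out) if i % 2 == 0 else expand_with_device_B(out)
        return out

    if __name__ == "__main__":
        print(len(cross_feed("0101", rounds=6)))   # 4 * 2**6 = 256 bits in this toy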