    Incompressibility and Next-Block Pseudoentropy

    A distribution is k-incompressible, Yao [FOCS '82], if no efficient compression scheme compresses it to fewer than k bits. While this is a natural measure, its relation to other computational analogues of entropy, such as pseudoentropy, Håstad, Impagliazzo, Levin, and Luby [SICOMP '99], and to other cryptographic hardness assumptions was unclear. We advance towards a better understanding of this notion, showing that a k-incompressible distribution has (k-2) bits of next-block pseudoentropy, a refinement of pseudoentropy introduced by Haitner, Reingold, and Vadhan [SICOMP '13]. We deduce that the existence of a samplable distribution X that is (H(X)+2)-incompressible implies the existence of one-way functions.
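
    The following is a rough formalization of the definition and results stated above; the exact quantifiers (expected encoding length, allowed decoding error) are our assumptions, not necessarily the paper's precise formulation:

    \[
    X \text{ is } k\text{-incompressible} \;\iff\;
    \mathbb{E}\bigl[|\mathrm{Enc}(X)|\bigr] \ge k
    \ \text{ for every efficient } (\mathrm{Enc}, \mathrm{Dec})
    \text{ with } \Pr[\mathrm{Dec}(\mathrm{Enc}(X)) = X] \ge 1 - \mathrm{negl}(n).
    \]
    \[
    X \ k\text{-incompressible} \;\Longrightarrow\; H^{\mathrm{nb}}(X) \ge k - 2,
    \qquad
    X \text{ samplable and } (H(X)+2)\text{-incompressible} \;\Longrightarrow\; \text{OWFs exist},
    \]
    where \(H^{\mathrm{nb}}\) denotes next-block pseudoentropy and \(H\) is Shannon entropy.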

    Dimension, Pseudorandomness and Extraction of Pseudorandomness

    In this paper we propose a quantification of distributions on a set of strings in terms of how close to pseudorandom a distribution is. The quantification adapts the theory of dimension of sets of infinite sequences introduced by Lutz. Adapting Hitchcock's work, we also show that the logarithmic loss incurred by a predictor on a distribution is quantitatively equivalent to the notion of dimension we define; roughly, this captures the equivalence between pseudorandomness defined via indistinguishability and via unpredictability. We then establish some natural properties of our notion of dimension and compare it with two well-known computational analogues of entropy, namely HILL-type pseudo min-entropy and next-bit pseudo Shannon entropy. Further, we apply our quantification to the following problem: if the dimension of a distribution on the set of n-length strings is s in (0,1], can we extract O(sn) pseudorandom bits from the distribution? We show that constructing such an extractor requires at least Omega(log n) bits of pure randomness; whether O(log n) random bits suffice remains open. We show that deterministic extraction is possible in a special case, analogous to the bit-fixing sources introduced by Chor et al., which we term nonpseudorandom bit-fixing sources, and we adapt the techniques of Gabizon, Raz and Shaltiel to construct a deterministic pseudorandom extractor for such sources. Finally, we make a little progress towards the P vs. BPP problem by showing that the existence of an optimal stretching function, one that stretches O(log n) input bits to n output bits such that the output distribution has dimension s in (0,1], implies P=BPP.
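
    As a toy illustration of the log-loss measure mentioned above (the names and setup here are ours, not the paper's): a next-bit predictor's normalized log loss on a source is, by the stated equivalence, an estimate of the source's dimension.

        import math

        def normalized_log_loss(predictor, samples):
            # predictor(prefix) -> estimated Pr[next bit == '1' | prefix].
            # Per-bit log loss averaged over all bits seen; by the
            # dimension/log-loss equivalence, the best value achievable by
            # efficient predictors estimates the dimension of the source.
            total, nbits = 0.0, 0
            for x in samples:
                for i, bit in enumerate(x):
                    p1 = predictor(x[:i])
                    p = p1 if bit == '1' else 1.0 - p1
                    total += -math.log2(max(p, 1e-12))
                    nbits += 1
            return total / nbits

        # A predictor with no knowledge guesses 1/2 and pays 1 bit of loss
        # per position, matching dimension 1 for the uniform distribution.
        uniform = lambda prefix: 0.5
        samples = [format(i, '08b') for i in range(256)]
        print(normalized_log_loss(uniform, samples))  # -> 1.0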

    Unifying computational entropies via Kullback-Leibler divergence

    We introduce hardness in relative entropy, a new notion of hardness for search problems which on the one hand is satisfied by all one-way functions and on the other hand implies both next-block pseudoentropy and inaccessible entropy, two forms of computational entropy used in recent constructions of pseudorandom generators and statistically hiding commitment schemes, respectively. Thus, hardness in relative entropy unifies the latter two notions of computational entropy and sheds light on the apparent "duality" between them. Additionally, it yields a more modular and illuminating proof that one-way functions imply next-block inaccessible entropy, similar in structure to the proof that one-way functions imply next-block pseudoentropy (Vadhan and Zheng, STOC '12).
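
    Schematically, the chain of implications described in this abstract can be drawn as follows (our rendering of the stated claims):

    \[
    \text{OWF} \;\Longrightarrow\; \text{hardness in relative entropy} \;\Longrightarrow\;
    \begin{cases}
    \text{next-block pseudoentropy} & \Longrightarrow\ \text{PRGs},\\
    \text{next-block inaccessible entropy} & \Longrightarrow\ \text{statistically hiding commitments}.
    \end{cases}
    \]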

    A Uniform Min-Max Theorem and Characterizations of Computational Randomness

    This thesis develops several tools and techniques using ideas from information theory, optimization, and online learning, and applies them to a number of highly related fundamental problems in complexity theory, pseudorandomness theory, and cryptography.

    A Tight Lower Bound for Entropy Flattening


    From Laconic Zero-Knowledge to Public-Key Cryptography

    Since its inception, public-key encryption (PKE) has been one of the main cornerstones of cryptography. A central goal in cryptographic research is to understand the foundations of public-key encryption and, in particular, to base its existence on a natural and generic complexity-theoretic assumption. An intriguing candidate for such an assumption is the existence of a cryptographically hard language in the intersection of NP and SZK. In this work we prove that public-key encryption can be based on the foregoing assumption, as long as the (honest) prover in the zero-knowledge protocol is efficient and laconic. That is, messages that the prover sends should be efficiently computable (given the NP witness) and short (i.e., of sufficiently sub-logarithmic length). In fact, our result is stronger and only requires the protocol to be zero-knowledge for an honest verifier and sound against computationally bounded cheating provers. Languages in NP with such laconic zero-knowledge protocols are known from a variety of computational assumptions (e.g., Quadratic Residuosity, Decisional Diffie-Hellman, Learning with Errors, etc.). Thus, our main result can also be viewed as giving a unifying framework for constructing PKE which, in particular, captures many of the assumptions that were already known to yield PKE. We also show several extensions of our result. First, a certain weakening of our assumption on laconic zero-knowledge is actually equivalent to PKE, thereby giving a complexity-theoretic characterization of PKE. Second, a mild strengthening of our assumption also yields a (2-message) oblivious transfer protocol.
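
    In shorthand, the three results above read (our paraphrase, not the paper's notation):

    \[
    \text{laconic HVZK for a hard } L \in \mathrm{NP} \cap \mathrm{SZK} \;\Longrightarrow\; \mathrm{PKE},
    \qquad
    \text{weakened variant} \;\Longleftrightarrow\; \mathrm{PKE},
    \qquad
    \text{mild strengthening} \;\Longrightarrow\; \text{2-message OT}.
    \]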

    The Randomized Iterate Revisited - Almost Linear Seed Length PRGs from A Broader Class of One-way Functions

    We revisit the randomized iterate technique that was originally used by Goldreich, Krawczyk, and Luby (SICOMP 1993) and refined by Haitner, Harnik and Reingold (CRYPTO 2006) in constructing pseudorandom generators (PRGs) from regular one-way functions (OWFs). We abstract out a technical lemma (which is folklore in leakage-resilient cryptography) and use it to give a simpler and more modular proof of the Haitner-Harnik-Reingold PRGs from regular OWFs. We introduce a more general class of OWFs, called weakly-regular one-way functions, from which we construct a PRG of seed length O(n log n). More specifically, consider an arbitrary one-way function f whose range is divided into sets Y_1, Y_2, ..., Y_n, where Y_i = { y : 2^{i-1} <= |f^{-1}(y)| < 2^i }. We say that f is weakly-regular if there exists a (not necessarily efficiently computable) cut-off point max such that Y_max has noticeable density (say n^{-c} for a constant c), while Y_{max+1}, ..., Y_n sum to only a negligible fraction. We construct a PRG by making O(n^{2c+1}) calls to f and achieve seed length O(n log n) using bounded-space generators. This generalizes the approach of Haitner et al., where regular OWFs correspond to the special case c=0. We use a proof technique that is similar to and extends the method of Haitner, Harnik and Reingold for hardness amplification of regular weakly-one-way functions. Our work further explores the feasibility and limits of randomized-iterate-type black-box constructions. In particular, the underlying f can have an arbitrary structure as long as the set of images with maximal preimage size has a noticeable fraction. In addition, our construction is much more seed-length efficient and security-preserving (albeit less general) than HILL-style generators, where the best known construction, by Vadhan and Zheng (STOC 2012), requires seed length O(n^3).
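
    For intuition, here is a minimal sketch of the randomized-iterate pattern the abstract builds on; all names and the toy instantiation are ours, and the actual construction derandomizes the hash choices with bounded-space generators and extracts hardcore bits at each step, which this sketch omits.

        import random

        def make_pairwise_hash(n, seed):
            # Toy hash of the form x -> a*x + b mod 2^n with odd a; stands in
            # for the pairwise-independent hash family of the construction.
            rng = random.Random(seed)
            a = rng.randrange(1, 2**n, 2)
            b = rng.randrange(2**n)
            return lambda x: (a * x + b) % (2**n)

        def randomized_iterate(f, x0, n, k, seed=0):
            # k-th randomized iterate: x_{i+1} = h_i(f(x_i)). Interleaving
            # fresh hashes between applications of f is the core of the
            # GKL/HHR technique; the PRG then outputs hardcore bits of each
            # iterate (omitted here).
            hashes = [make_pairwise_hash(n, seed * 7919 + i) for i in range(k)]
            x, trace = x0, [x0]
            for h in hashes:
                x = h(f(x))
                trace.append(x)
            return trace

        # Toy usage with a stand-in function (squaring mod 2^n; not one-way).
        n = 16
        f = lambda x: (x * x) % (2**n)
        print(randomized_iterate(f, x0=12345, n=n, k=4))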