26 research outputs found

    How to Fake Auxiliary Input

    Consider a joint distribution $(X,A)$ on a set ${\cal X}\times\{0,1\}^\ell$. We show that for any family ${\cal F}$ of distinguishers $f \colon {\cal X} \times \{0,1\}^\ell \rightarrow \{0,1\}$, there exists a simulator $h \colon {\cal X} \rightarrow \{0,1\}^\ell$ such that (1) no function in ${\cal F}$ can distinguish $(X,A)$ from $(X,h(X))$ with advantage $\epsilon$, and (2) $h$ is only $O(2^{3\ell}\epsilon^{-2})$ times less efficient than the functions in ${\cal F}$. For the most interesting settings of the parameters (in particular, the cryptographic case where $X$ has superlogarithmic min-entropy, $\epsilon > 0$ is negligible, and ${\cal F}$ consists of circuits of polynomial size), we can make the simulator $h$ deterministic. As an illustrative application of this theorem, we give a new security proof for the leakage-resilient stream cipher from Eurocrypt'09. Our proof is simpler and quantitatively much better than the original proof using the dense model theorem, giving meaningful security guarantees if instantiated with a standard block cipher like AES. Subsequent to this work, Chung, Lui and Pass gave an interactive variant of our main theorem, and used it to investigate weak notions of Zero-Knowledge. Vadhan and Zheng give a more constructive version of our theorem using their new uniform min-max theorem.
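
    The argument behind such theorems is a boosting loop: as long as some $f \in {\cal F}$ distinguishes $(X,A)$ from $(X,h(X))$ with advantage above $\epsilon$, that distinguisher can be folded into an improved simulator, and the $O(2^{3\ell}\epsilon^{-2})$ overhead reflects the number of rounds times the cost of each update. Below is a toy Python sketch of that loop; the sampling-based advantage estimates and all names (build_simulator, eta, rounds) are our own illustrative choices, not the paper's construction.

```python
# Toy sketch (not the paper's exact construction) of boosting a simulator
# h for short auxiliary input: while some distinguisher separates (X, A)
# from (X, h(X)) by more than eps, reweight h's output distribution
# toward the values that the distinguisher accepts.
import itertools
import math

ELL = 2                                             # bit-length of A
VALS = list(itertools.product([0, 1], repeat=ELL))  # the range {0,1}^ell

def build_simulator(samples, distinguishers, eps=0.05, eta=0.2, rounds=500):
    """samples: list of (x, a) pairs from the joint distribution.
    Returns a dict mapping each x to a probability table over VALS."""
    xs = sorted({x for x, _ in samples})
    h = {x: {v: 1.0 / len(VALS) for v in VALS} for x in xs}  # uniform start
    for _ in range(rounds):
        found = None
        for f in distinguishers:
            real = sum(f(x, a) for x, a in samples) / len(samples)
            fake = sum(sum(p * f(x, v) for v, p in h[x].items())
                       for x, _ in samples) / len(samples)
            if abs(real - fake) > eps:
                found = (f, 1.0 if real > fake else -1.0)
                break
        if found is None:            # every distinguisher is eps-fooled: done
            return h
        f, sign = found
        for x in xs:                 # multiplicative-weights update per x
            for v in VALS:
                h[x][v] *= math.exp(eta * sign * f(x, v))
            z = sum(h[x].values())   # renormalize to a probability table
            for v in VALS:
                h[x][v] /= z
    return h
```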

    Simulating Auxiliary Inputs, Revisited

    For any pair $(X,Z)$ of correlated random variables we can think of $Z$ as a randomized function of $X$. Provided that $Z$ is short, one can make this function computationally efficient by allowing it to be only approximately correct. In folklore this problem is known as simulating auxiliary inputs. This idea of simulating auxiliary information turns out to be a powerful tool in computer science, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results: (a) we discuss and compare the efficiency of known results, finding a flaw in the best known bound claimed in the TCC'14 paper "How to Fake Auxiliary Inputs"; (b) we present a novel boosting algorithm for constructing the simulator, which essentially fixes the flaw; this boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures in descent algorithms; (c) our bounds are much better than the bounds known so far: to make the simulator $(s,\epsilon)$-indistinguishable we need complexity $O\left(s\cdot 2^{5\ell}\epsilon^{-2}\right)$ in time/circuit size, which is better by a factor $\epsilon^{-2}$ compared to previous bounds. In particular, with our technique we (finally) get meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher like $\mathsf{AES256}$.
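
    The "negative mass" issue is concrete: a descent step adds a signed multiple of a test function to the current candidate density, which can push pointwise mass below zero, so the candidate must be projected back onto genuine probability measures. A minimal sketch of this clip-and-renormalize pattern, with a toy update rule and names of our own (not the paper's algorithm):

```python
# Minimal sketch of handling "negative mass" in a boosting-style descent:
# each signed correction step may push the candidate density below zero,
# so it is followed by a projection (clip at zero, renormalize) back onto
# probability measures.  The update rule and names here are illustrative.
import numpy as np

def fix_negative_mass(g):
    """Clip negative entries and renormalize g to a probability vector."""
    g = np.clip(g, 0.0, None)
    s = g.sum()
    return g / s if s > 0 else np.full(len(g), 1.0 / len(g))

def boost_density(target, tests, steps=200, eta=0.1, tol=1e-3):
    """target: true distribution (vector); tests: signed test functions,
    each a vector in [-1, 1]^n.  Returns an approximating density."""
    g = np.full(len(target), 1.0 / len(target))
    for _ in range(steps):
        gaps = np.array([t @ (target - g) for t in tests])
        i = int(np.argmax(np.abs(gaps)))
        if abs(gaps[i]) < tol:                     # all tests are fooled: done
            break
        g = g + eta * np.sign(gaps[i]) * tests[i]  # may create negative mass
        g = fix_negative_mass(g)                   # project back
    return g
```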

    A New Approximate Min-Max Theorem with Applications in Cryptography

    We propose a novel proof technique that can be applied to attack a broad class of problems in computational complexity when switching the order of universal and existential quantifiers is helpful. Our approach combines the standard min-max theorem with convex approximation techniques, offering quantitative improvements over the standard way of using min-max theorems, as well as more concise and elegant proofs.
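
    The quantifier switch at stake is the passage from "for every distinguisher there exists a good object" to "there exists one object good against all distinguishers", which the min-max theorem licenses once both sides are convexified. Schematically, in our paraphrase rather than the paper's notation:

```latex
% Von Neumann's min-max theorem for a finite zero-sum game with payoff
% u(f,g): mixing over both families lets the quantifiers be exchanged.
\[
  \max_{P \in \Delta(\mathcal{F})} \min_{Q \in \Delta(\mathcal{G})}
    \mathop{\mathbb{E}}_{f \sim P,\; g \sim Q} u(f,g)
  \;=\;
  \min_{Q \in \Delta(\mathcal{G})} \max_{P \in \Delta(\mathcal{F})}
    \mathop{\mathbb{E}}_{f \sim P,\; g \sim Q} u(f,g) .
\]
% Convex approximation (sparsifying the optimal mixture Q) then turns
% the existential object into a small, low-complexity combination.
```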

    A Uniform Min-Max Theorem with Applications in Cryptography

    We present a new, more constructive proof of von Neumann’s Min-Max Theorem for two-player zero-sum games: specifically, an algorithm that builds a near-optimal mixed strategy for the second player from several best responses of the second player to mixed strategies of the first player. The algorithm extends previous work of Freund and Schapire (Games and Economic Behavior ’99), with the advantage that it runs in poly(n) time even when a pure strategy for the first player is a distribution chosen from a set of distributions over $\{0,1\}^n$. This extension enables a number of additional applications in cryptography and complexity theory, often yielding uniform-security versions of results that were previously only proved for nonuniform security (due to use of the non-constructive Min-Max Theorem). We describe several applications, including a more modular and improved uniform version of Impagliazzo’s Hardcore Theorem (FOCS ’95), showing the impossibility of constructing succinct non-interactive arguments (SNARGs) via black-box reductions under uniform hardness assumptions (using techniques from Gentry and Wichs (STOC ’11) for the nonuniform setting), and efficiently simulating high-entropy distributions within any sufficiently nice convex set (extending a result of Trevisan, Tulsiani and Vadhan (CCC ’09)).
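
    The Freund and Schapire result being extended is the classical multiplicative-weights dynamics for zero-sum games, in which averaging the second player's best responses yields a near-optimal mixed strategy. A compact sketch of those standard dynamics follows (the payoff matrix, learning rate, and round count are illustrative; this is the classical algorithm, not the paper's extension to strategies that are distributions over $\{0,1\}^n$):

```python
# Freund-Schapire multiplicative-weights dynamics for a zero-sum game:
# player 1 maintains weights over her pure strategies, player 2 answers
# each round with a best response, and the empirical average of player
# 2's responses converges to a near-optimal mixed strategy for player 2.
import numpy as np

def approx_min_max(M, rounds=1000, eta=0.1):
    """M[i][j]: payoff to player 2 when p1 plays i and p2 plays j (in [0,1]).
    Returns a near-optimal mixed strategy for player 2 (vector over j)."""
    n, m = M.shape
    w = np.ones(n)                 # player 1's weights over rows
    responses = np.zeros(m)
    for _ in range(rounds):
        p = w / w.sum()            # player 1's current mixed strategy
        j = int(np.argmax(p @ M))  # player 2's best-response column
        responses[j] += 1.0
        # player 1 downweights rows on which response j pays player 2 well
        w *= np.exp(-eta * M[:, j])
    return responses / rounds      # empirical mix of best responses
```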

    Generative Models of Huge Objects

    This work initiates the systematic study of explicit distributions that are indistinguishable from a single exponential-size combinatorial object. In doing so, we extend the work of Goldreich, Goldwasser and Nussboim (SICOMP 2010), which focused on the implementation of huge objects that are indistinguishable from the uniform distribution while satisfying some global properties (which they coined truthfulness). Indistinguishability from a single object is motivated by the study of generative models in learning theory and regularity lemmas in graph theory. Problems that are well understood in the setting of pseudorandomness present significant challenges, and at times become impossible, when considering generative models of huge objects. We demonstrate the versatility of this study by providing a learning algorithm for huge indistinguishable objects in several natural settings, including: dense functions and graphs with a truthfulness requirement on the number of ones in the function or edges in the graphs, and a version of the weak regularity lemma for sparse graphs that satisfy some global properties. These and other results generalize basic pseudorandom objects as well as notions introduced in algorithmic fairness. The results rely on notions and techniques from a variety of areas, including learning theory, complexity theory, cryptography, and game theory.

    A Cryptographic Proof of Regularity Lemmas: Simpler Unified Proofs and Refined Bounds

    In this work we present a short and unified proof of the Strong and Weak Regularity Lemmas, based on the cryptographic technique called low-complexity approximations. In short, both problems reduce to the task of constructively finding an approximation of a certain target function under a class of distinguishers (test functions), where the distinguishers are combinations of simple rectangle indicators. In our case these approximations can be learned by a simple iterative procedure, which yields a unified and simple proof, achieving for any graph with density $d$ and any approximation parameter $\epsilon$ a partition size of (i) a tower of 2's of height $O(d\epsilon^{-2})$ for a variant of Strong Regularity, and (ii) a power of 2 with exponent $O(d\epsilon^{-2})$ for Weak Regularity. The novelty in our proof is (a) a simple approach which yields both the strong and the weak variant, and (b) improvements for sparse graphs. At an abstract level, our proof can be seen as a refinement and simplification of the "analytic" proof given by Lovász and Szegedy.
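
    For Weak Regularity, the iterative procedure can be read as greedily approximating the adjacency matrix by a short signed sum of rectangle indicators until no rectangle correlates with the residual by more than $\epsilon$; the standard energy-decrement argument bounds the number of rounds by $O(\epsilon^{-2})$. Below is a toy Python sketch under these assumptions, with a random search standing in for exact rectangle maximization (all names and parameters are ours, not the paper's):

```python
# Toy sketch of the "low-complexity approximation" view of Weak
# Regularity: approximate a graph's adjacency matrix A by a short signed
# sum of rectangle indicators 1_S x 1_T, adding one rectangle per round
# while some rectangle still correlates with the residual by > eps*n^2.
import numpy as np

def weak_regularity_approx(A, eps=0.1, trials=200, rng=None):
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    approx = np.zeros_like(A, dtype=float)
    rects = []
    while True:
        R = A - approx                       # current residual
        best, best_val = None, eps * n * n   # acceptance threshold
        for _ in range(trials):              # heuristic rectangle search
            S = rng.random(n) < 0.5
            T = rng.random(n) < 0.5
            val = R[np.ix_(S, T)].sum()
            if abs(val) > abs(best_val):
                best, best_val = (S, T), val
        if best is None:                     # eps-approximation reached
            return approx, rects
        S, T = best
        step = best_val / (S.sum() * T.sum())  # energy-minimizing step size
        approx[np.ix_(S, T)] += step
        rects.append((S, T, step))
```

    Each accepted rectangle decreases the residual's squared Frobenius norm by at least $\epsilon^2 n^2$, which is what caps the number of rounds and hence the complexity of the approximation.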

    A counterexample to the chain rule for conditional HILL entropy

    Most entropy notions $H(\cdot)$, like Shannon or min-entropy, satisfy a chain rule stating that for random variables $X$, $Z$, and $A$ we have $H(X|Z,A) \ge H(X|Z) - |A|$. That is, by conditioning on $A$ the entropy of $X$ can decrease by at most the bitlength $|A|$ of $A$. Such chain rules are known to hold for some computational entropy notions, like Yao's entropy and unpredictability entropy. For HILL entropy, the computational analogue of min-entropy, the chain rule is of special interest and has found many applications, including leakage-resilient cryptography, deterministic encryption, and memory delegation. These applications rely on restricted special cases of the chain rule. Whether the chain rule for conditional HILL entropy holds in general was an open problem, for which we give a strong negative answer: we construct joint distributions $(X,Z,A)$, where $A$ is a distribution over a single bit, such that the HILL entropy $H^{\mathsf{HILL}}(X|Z)$ is large but $H^{\mathsf{HILL}}(X|Z,A)$ is basically zero. Our counterexample makes only the minimal assumption that $\mathsf{NP} \not\subseteq \mathsf{P/poly}$. Under the stronger assumption that injective one-way functions exist, we can make all the distributions efficiently samplable. Finally, we show that some more sophisticated cryptographic objects, like lossy functions, can be used to sample a distribution constituting a counterexample to the chain rule while making only a single invocation to the underlying object.
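
    For contrast, the information-theoretic chain rule does hold for average conditional min-entropy, the notion that HILL entropy relaxes computationally; in the formulation of Dodis et al. (our rendering of the standard statement):

```latex
% Chain rule for average conditional min-entropy (standard; Dodis,
% Ostrovsky, Reyzin and Smith), which the HILL counterexample shows
% fails in the computational setting: for A over at most 2^ell values,
\[
  \widetilde{H}_\infty(X \mid Z, A)
    \;\ge\; \widetilde{H}_\infty\bigl((X,A) \mid Z\bigr) - \ell
    \;\ge\; \widetilde{H}_\infty(X \mid Z) - \ell .
\]
```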