    A New Approximate Min-Max Theorem with Applications in Cryptography

    We propose a novel proof technique that can be applied to attack a broad class of problems in computational complexity where switching the order of universal and existential quantifiers is helpful. Our approach combines the standard min-max theorem with convex approximation techniques, offering quantitative improvements over the standard way of using min-max theorems as well as more concise and elegant proofs.
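
    For orientation, here is the exact min-max theorem the abstract starts from, in its standard two-player zero-sum form (a minimal statement; the paper's approximate variant and its quantitative parameters are not reproduced here):

    % Von Neumann/Sion min-max: for compact convex strategy sets A and B
    % and a payoff g that is convex in a and concave in b,
    \[
      \min_{a \in A} \max_{b \in B} g(a, b)
      \;=\;
      \max_{b \in B} \min_{a \in A} g(a, b).
    \]
    % Reading "min over a" as a universal quantifier and "max over b" as an
    % existential one is what lets min-max arguments switch quantifier order:
    % a single (mixed) strategy b works against every a.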

    Generative Models of Huge Objects

    This work initiates the systematic study of explicit distributions that are indistinguishable from a single exponential-size combinatorial object. In doing so, we extend the work of Goldreich, Goldwasser and Nussboim (SICOMP 2010), which focused on the implementation of huge objects that are indistinguishable from the uniform distribution while satisfying some global properties (which they coined truthfulness). Indistinguishability from a single object is motivated by the study of generative models in learning theory and regularity lemmas in graph theory. Problems that are well understood in the setting of pseudorandomness present significant challenges, and at times are impossible, when considering generative models of huge objects. We demonstrate the versatility of this study by providing a learning algorithm for huge indistinguishable objects in several natural settings, including: dense functions and graphs with a truthfulness requirement on the number of ones in the function or edges in the graph, and a version of the weak regularity lemma for sparse graphs that satisfy some global properties. These and other results generalize basic pseudorandom objects as well as notions introduced in algorithmic fairness. The results rely on notions and techniques from a variety of areas, including learning theory, complexity theory, cryptography, and game theory.
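
    As a reference point, the shape of the central definition can be sketched as follows (a hedged restatement; the notation, tester class, and query model here are assumptions, not the paper's exact formalization):

    % A sampleable distribution D over exponential-size objects R is
    % indistinguishable from a single fixed object O if every efficient
    % tester T, given oracle (query) access, cannot tell them apart:
    \[
      \Bigl| \Pr_{R \sim D}\bigl[ T^{R} = 1 \bigr] \;-\; \Pr\bigl[ T^{O} = 1 \bigr] \Bigr|
      \;\le\; \varepsilon,
    \]
    % while the implementation answers each local query to R (say, "is
    % (u, v) an edge?") in time polynomial in the query length.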

    Learning DNF Expressions from Fourier Spectrum

    Since its introduction by Valiant in 1984, PAC learning of DNF expressions has remained one of the central problems in learning theory. We consider this problem in the setting where the underlying distribution is uniform, or more generally, a product distribution. Kalai, Samorodnitsky and Teng (2009) showed that in this setting a DNF expression can be efficiently approximated from its "heavy" low-degree Fourier coefficients alone. This is in contrast to previous approaches, where boosting was used and thus Fourier coefficients of the target function modified by various distributions were needed. This property is crucial for learning DNF expressions over smoothed product distributions, a learning model introduced by Kalai et al. (2009) and inspired by the seminal smoothed analysis model of Spielman and Teng (2001). We introduce a new approach to learning (or approximating) polynomial threshold functions which is based on creating a function with range $[-1,1]$ that approximately agrees with the unknown function on low-degree Fourier coefficients. We then describe conditions under which this is sufficient for learning polynomial threshold functions. Our approach yields a new, simple algorithm for approximating any polynomial-size DNF expression from its "heavy" low-degree Fourier coefficients alone. Our algorithm greatly simplifies the proof of learnability of DNF expressions over smoothed product distributions. We also describe an application of our algorithm to learning monotone DNF expressions over product distributions. Building on the work of Servedio (2001), we give an algorithm that runs in time $\mathrm{poly}\bigl((s \cdot \log(s/\epsilon))^{\log(s/\epsilon)}, n\bigr)$, where $s$ is the size of the target DNF expression and $\epsilon$ is the accuracy. This improves on the $\mathrm{poly}\bigl((s \cdot \log(ns/\epsilon))^{\log(s/\epsilon) \cdot \log(1/\epsilon)}, n\bigr)$ bound of Servedio (2001).
    Comment: Appears in the Conference on Learning Theory (COLT) 2012.
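
    To make the Fourier-based approach concrete, here is a toy sketch in Python (an illustration under assumed parameters, not the paper's algorithm or its bounds): it estimates all low-degree Fourier coefficients of a small DNF from uniform samples, keeps the "heavy" ones, and clips the resulting low-degree polynomial to the range [-1, 1].

    import itertools
    import random

    random.seed(0)
    n, degree, theta, m = 8, 2, 0.05, 20000  # toy parameters (assumed)

    def f(x):
        # Example target: a 2-term DNF over x in {-1,+1}^n, with True mapped to +1.
        return 1 if (x[0] == 1 and x[1] == 1) or (x[2] == 1 and x[5] == -1) else -1

    def chi(S, x):
        # Fourier character: the product of the coordinates indexed by S.
        p = 1
        for i in S:
            p *= x[i]
        return p

    samples = [tuple(random.choice((-1, 1)) for _ in range(n)) for _ in range(m)]
    values = [f(x) for x in samples]

    # Estimate every coefficient of degree <= `degree`; keep only the heavy ones.
    heavy = {}
    for d in range(degree + 1):
        for S in itertools.combinations(range(n), d):
            c = sum(v * chi(S, x) for x, v in zip(samples, values)) / m
            if abs(c) >= theta:
                heavy[S] = c

    def g(x):
        # Low-degree approximation, clipped to the range [-1, 1].
        s = sum(c * chi(S, x) for S, c in heavy.items())
        return max(-1.0, min(1.0, s))

    # Sign agreement of the approximation with f on fresh points.
    test = [tuple(random.choice((-1, 1)) for _ in range(n)) for _ in range(2000)]
    agree = sum((g(x) >= 0) == (f(x) >= 0) for x in test) / len(test)
    print(f"kept {len(heavy)} heavy coefficients; sign agreement ~ {agree:.3f}")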

    How to Fake Auxiliary Input

    Consider a joint distribution $(X,A)$ on a set $\mathcal{X} \times \{0,1\}^{\ell}$. We show that for any family $\mathcal{F}$ of distinguishers, there exists a simulator $h$ such that (1) no function in $\mathcal{F}$ can distinguish $(X,A)$ from $(X,h(X))$ with advantage $\varepsilon$, and (2) $h$ is only $O(2^{3\ell} \varepsilon^{-2})$ times less efficient than the functions in $\mathcal{F}$. For the most interesting settings of the parameters (in particular, the cryptographic case where $X$ has superlogarithmic min-entropy, $\varepsilon > 0$ is negligible and $\mathcal{F}$ consists of circuits of polynomial size), we can make the simulator $h$ deterministic. As an illustrative application of our theorem, we give a new security proof for the leakage-resilient stream cipher from Eurocrypt'09. Our proof is simpler and quantitatively much better than the original proof using the dense model theorem, giving meaningful security guarantees if instantiated with a standard blockcipher like AES. Subsequent to this work, Chung, Lui and Pass gave an interactive variant of our main theorem and used it to investigate weak notions of zero-knowledge. Vadhan and Zheng give a more constructive version of our theorem using their new uniform min-max theorem.
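
    The quantifier structure of the main theorem can be restated compactly (an informal paraphrase of the abstract; the notation is assumed):

    % For every joint distribution (X, A) with A in {0,1}^l and every
    % family F of distinguishers, there is a simulator h with
    \[
      \forall f \in \mathcal{F}:\qquad
      \bigl|\, \Pr[f(X, A) = 1] \;-\; \Pr[f(X, h(X)) = 1] \,\bigr| \;\le\; \varepsilon,
    \]
    % where h is computable with only an O(2^{3l} / eps^2) factor of
    % overhead over the complexity of the functions in F.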

    A Subgradient Algorithm For Computational Distances and Applications to Cryptography

    The task of finding a constructive approximation in the computational distance, while simultaneously preserving additional constraints (referred to as simulators), appears as the key difficulty in problems related to complexity theory, cryptography and combinatorics. In this paper we develop a general framework to efficiently prove results of this sort, based on subgradient-based optimization applied to computational distances. This approach is simpler and more natural than the KL-projections already studied in this context (for example the uniform min-max theorem from CRYPTO'13), while it may simultaneously lead to quantitatively better results. Some applications of our algorithm include:
    - Fixing an erroneous boosting proof for simulating auxiliary inputs from TCC'13, and obtaining much better bounds for the EUROCRYPT'09 leakage-resilient stream cipher
    - Deriving a unified proof for the Impagliazzo Hardcore Lemma, the Dense Model Theorem, and the Weak Szemerédi Theorem (CCC'09)
    - Showing that dense leakages can be efficiently simulated, with significantly improved bounds
    Interestingly, our algorithm can take advantage of small-variance assumptions imposed on distinguishers that have been studied recently in the context of key derivation.
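
    The optimization idea can be illustrated with a small sketch (a toy instance with an explicit distribution and a finite distinguisher family; the domain, family, and step sizes are illustrative assumptions, and this is not the paper's algorithm or its bounds): projected subgradient descent drives a candidate distribution Q toward a target P under the computational distance max_f |E_P f - E_Q f|.

    import random

    random.seed(0)
    DOMAIN = list(range(16))

    # Target distribution P and a finite family of {0,1}-valued distinguishers.
    weights = [random.random() for _ in DOMAIN]
    total = sum(weights)
    P = [w / total for w in weights]
    FAMILY = [[random.randint(0, 1) for _ in DOMAIN] for _ in range(10)]

    def advantage(Q, f):
        # Signed advantage E_P[f] - E_Q[f] of distinguisher f.
        return sum((P[x] - Q[x]) * f[x] for x in DOMAIN)

    def project_to_simplex(v):
        # Euclidean projection onto the probability simplex (sort-based).
        u = sorted(v, reverse=True)
        css, best_theta = 0.0, 0.0
        for i, ui in enumerate(u, start=1):
            css += ui
            t = (css - 1.0) / i
            if ui - t > 0:
                best_theta = t
        return [max(x - best_theta, 0.0) for x in v]

    Q = [1.0 / len(DOMAIN)] * len(DOMAIN)  # start from the uniform distribution
    for step in range(1, 501):
        # The worst distinguisher yields a subgradient of the max-advantage objective.
        f = max(FAMILY, key=lambda f: abs(advantage(Q, f)))
        gap = advantage(Q, f)
        eta = 0.5 / step ** 0.5  # diminishing step size
        sign = 1.0 if gap > 0 else -1.0
        Q = project_to_simplex([q + eta * sign * fx for q, fx in zip(Q, f)])

    print("max advantage after descent:",
          round(max(abs(advantage(Q, f)) for f in FAMILY), 4))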

    Non-interactive simulation of correlated distributions is decidable

    A basic problem in information theory is the following: Let $\mathbf{P} = (\mathbf{X}, \mathbf{Y})$ be an arbitrary distribution where the marginals $\mathbf{X}$ and $\mathbf{Y}$ are (potentially) correlated. Let Alice and Bob be two players where Alice gets samples $\{x_i\}_{i \ge 1}$ and Bob gets samples $\{y_i\}_{i \ge 1}$, and for all $i$, $(x_i, y_i) \sim \mathbf{P}$. What joint distributions $\mathbf{Q}$ can be simulated by Alice and Bob without any interaction? Classical works in information theory by Gács-Körner and Wyner answer this question when at least one of $\mathbf{P}$ or $\mathbf{Q}$ is the distribution on $\{0,1\} \times \{0,1\}$ where each marginal is unbiased and identical. However, other than this special case, the answer to this question is understood in very few cases. Recently, Ghazi, Kamath and Sudan showed that this problem is decidable for $\mathbf{Q}$ supported on $\{0,1\} \times \{0,1\}$. We extend their result to $\mathbf{Q}$ supported on any finite alphabet. We rely on recent results in Gaussian geometry (by the authors) as well as a new smoothing argument inspired by the method of boosting from learning theory and potential function arguments from complexity theory and additive combinatorics.
    Comment: The reduction of non-interactive simulation for a general source distribution to the Gaussian case was incorrect in the previous version. It has been rectified now.
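
    The setup is easy to simulate for a toy source (an illustration of the problem statement only, not the paper's decision procedure; the source, local maps, and parameters are assumptions): Alice and Bob each apply a local function, here majority, to their halves of correlated samples, and the empirical law of their outputs is one non-interactively simulable Q.

    import random

    random.seed(1)

    def sample_source(rho, k):
        # k i.i.d. pairs from a doubly symmetric binary source:
        # uniform x_i, and y_i = x_i with probability (1 + rho) / 2.
        xs, ys = [], []
        for _ in range(k):
            x = random.randint(0, 1)
            y = x if random.random() < (1 + rho) / 2 else 1 - x
            xs.append(x)
            ys.append(y)
        return xs, ys

    def majority(bits):
        return int(sum(bits) * 2 > len(bits))

    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    trials = 20000
    for _ in range(trials):
        xs, ys = sample_source(rho=0.4, k=9)
        counts[(majority(xs), majority(ys))] += 1  # no communication used

    Q = {ab: c / trials for ab, c in counts.items()}
    print("simulated joint distribution Q:", Q)
    # Local post-processing cannot increase the source's correlation; which
    # targets Q are reachable, for general P and Q, is the decidability
    # question studied above.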

    Generalization and Equilibrium in Generative Adversarial Nets (GANs)

    We show that training of a generative adversarial network (GAN) may not have good generalization properties; e.g., training may appear successful but the trained distribution may be far from the target distribution in standard metrics. However, generalization does occur for a weaker metric called neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator/generator game for a special class of generators with natural training objectives when the generator capacity and training set sizes are moderate. This existence of equilibrium inspires the MIX+GAN protocol, which can be combined with any existing GAN training method and is empirically shown to improve some of them.
    Comment: This is an updated version of an ICML'17 paper with the same title. The main difference is that in the ICML'17 version the pure equilibrium result was only proved for Wasserstein GAN. In the current version the result applies to most reasonable training objectives. In particular, Theorem 4.3 now applies to both the original GAN and the Wasserstein GAN.
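
    The weaker metric is simple to estimate in a sketch (illustrative assumptions throughout: a small random discriminator class stands in for all bounded-size nets): the neural net distance between two samples is the largest gap in average discriminator output over the class.

    import math
    import random

    random.seed(2)
    DIM, N_NETS, N_SAMPLES = 4, 50, 2000

    def random_net():
        w = [random.gauss(0, 1) for _ in range(DIM)]
        b = random.gauss(0, 1)
        # Sigmoid of a random linear function: one tiny discriminator.
        return lambda x: 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

    nets = [random_net() for _ in range(N_NETS)]

    def nn_distance(sample_p, sample_q):
        # max over the class of |E_P f - E_Q f|, estimated from samples.
        return max(abs(sum(f(x) for x in sample_p) / len(sample_p)
                       - sum(f(x) for x in sample_q) / len(sample_q))
                   for f in nets)

    # Two nearby Gaussians look close in this metric even when a sample-based
    # total-variation estimate would report them as essentially disjoint.
    P = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(N_SAMPLES)]
    Q = [[random.gauss(0.1, 1.0) for _ in range(DIM)] for _ in range(N_SAMPLES)]
    print(f"estimated neural net distance: {nn_distance(P, Q):.4f}")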