
    A Complete Characterization of Statistical Query Learning with Applications to Evolvability

    The statistical query (SQ) learning model of Kearns (1993) is a natural restriction of the PAC learning model in which a learning algorithm is allowed to obtain estimates of statistical properties of the examples but cannot see the examples themselves. We describe a new and simple characterization of the query complexity of learning in the SQ learning model. Unlike the previously known bounds on SQ learning, our characterization preserves the accuracy and the efficiency of learning. The preservation of accuracy implies that our characterization gives the first characterization of SQ learning in the agnostic learning framework. The preservation of efficiency is achieved using a new boosting technique and allows us to derive a new approach to the design of evolutionary algorithms in Valiant's (2006) model of evolvability. We use this approach to demonstrate the existence of a large class of monotone evolutionary learning algorithms based on square loss performance estimation. These results differ significantly from the few known evolutionary algorithms and give evidence that evolvability in Valiant's model is a more versatile phenomenon than there had been previous reason to suspect. Comment: Simplified Lemma 3.8 and its application.
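
    The restriction the abstract describes is easy to picture with a toy oracle: the learner never touches labeled examples, it only receives tolerance-perturbed expectations of query functions it chooses. A minimal Python sketch, purely illustrative (the class name SQOracle, the tolerance handling, and the example data are assumptions, not anything from the paper):

        import random

        class SQOracle:
            """Toy statistical query oracle: answers the expectation of a chosen
            query function over the labeled examples, up to an additive tolerance
            tau, without ever revealing the examples themselves."""

            def __init__(self, examples, tau):
                self.examples = examples  # list of (x, y) pairs, labels y in {-1, +1}
                self.tau = tau            # tolerance of the oracle's answers

            def query(self, phi):
                # Exact expectation of phi over the example distribution.
                mean = sum(phi(x, y) for x, y in self.examples) / len(self.examples)
                # The oracle may perturb its answer by up to tau in either direction.
                return mean + random.uniform(-self.tau, self.tau)

        # An SQ learner only issues queries like this one (correlation of a
        # feature with the label) and never inspects individual examples.
        examples = [((1, 0), +1), ((0, 1), -1), ((1, 1), +1), ((0, 0), -1)]
        oracle = SQOracle(examples, tau=0.05)
        print(oracle.query(lambda x, y: x[0] * y))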

    Differential Privacy for the Analyst via Private Equilibrium Computation

    We give new mechanisms for answering exponentially many queries from multiple analysts on a private database, while protecting differential privacy both for the individuals in the database and for the analysts. That is, our mechanism's answer to each query is nearly insensitive to changes in the queries asked by other analysts. Our mechanism is the first to offer differential privacy on the joint distribution over analysts' answers, providing privacy for data analysts even if the other data analysts collude or register multiple accounts. In some settings, we are able to achieve nearly optimal error rates (even compared to mechanisms which do not offer analyst privacy), and we are able to extend our techniques to handle non-linear queries. Our analysis is based on a novel view of the private query-release problem as a two-player zero-sum game, which may be of independent interest.
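
    For readers unfamiliar with the game-theoretic framing mentioned at the end, here is a minimal Python sketch of the generic primitive underlying that view: approximating an equilibrium of a two-player zero-sum game with multiplicative weights. The payoff matrix, step size, and round count are illustrative assumptions; this is not the paper's private mechanism, only the standard (non-private) equilibrium computation that such analyses build on.

        import math

        def zero_sum_equilibrium(payoff, rounds=2000, eta=0.05):
            """Approximate equilibrium strategies of a two-player zero-sum game via
            multiplicative weights; payoff[i][j] in [0, 1] is the row player's loss."""
            n_rows, n_cols = len(payoff), len(payoff[0])
            weights = [1.0] * n_rows
            avg_row, avg_col = [0.0] * n_rows, [0.0] * n_cols
            for _ in range(rounds):
                total = sum(weights)
                row = [w / total for w in weights]
                # Column player best-responds to the current row strategy.
                col = max(range(n_cols),
                          key=lambda j: sum(row[i] * payoff[i][j] for i in range(n_rows)))
                for i in range(n_rows):
                    avg_row[i] += row[i] / rounds
                avg_col[col] += 1.0 / rounds
                # Row player downweights rows that lost a lot against that column.
                weights = [w * math.exp(-eta * payoff[i][col]) for i, w in enumerate(weights)]
            return avg_row, avg_col  # time-averaged strategies approximate an equilibrium

        # Matching pennies: both averaged strategies come out close to uniform.
        print(zero_sum_equilibrium([[1.0, 0.0], [0.0, 1.0]]))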

    LNCS

    Consider a joint distribution (X,A) on a set 𝒳 × {0,1}^ℓ. We show that for any family ℱ of distinguishers, there exists a simulator h such that (1) no function in ℱ can distinguish (X,A) from (X,h(X)) with advantage ε, and (2) h is only O(2^{3ℓ} ε^{-2}) times less efficient than the functions in ℱ. For the most interesting settings of the parameters (in particular, the cryptographic case where X has superlogarithmic min-entropy, ε > 0 is negligible and ℱ consists of circuits of polynomial size), we can make the simulator h deterministic. As an illustrative application of our theorem, we give a new security proof for the leakage-resilient stream-cipher from Eurocrypt'09. Our proof is simpler and quantitatively much better than the original proof using the dense model theorem, giving meaningful security guarantees if instantiated with a standard blockcipher like AES. Subsequent to this work, Chung, Lui and Pass gave an interactive variant of our main theorem, and used it to investigate weak notions of Zero-Knowledge. Vadhan and Zheng give a more constructive version of our theorem using their new uniform min-max theorem.

    How to Fake Auxiliary Input

    Consider a joint distribution (X,A) on a set 𝒳 × {0,1}^ℓ. We show that for any family ℱ of distinguishers f : 𝒳 × {0,1}^ℓ → {0,1}, there exists a simulator h : 𝒳 → {0,1}^ℓ such that (1) no function in ℱ can distinguish (X,A) from (X,h(X)) with advantage ε, and (2) h is only O(2^{3ℓ} ε^{-2}) times less efficient than the functions in ℱ. For the most interesting settings of the parameters (in particular, the cryptographic case where X has superlogarithmic min-entropy, ε > 0 is negligible and ℱ consists of circuits of polynomial size), we can make the simulator h deterministic. As an illustrative application of this theorem, we give a new security proof for the leakage-resilient stream-cipher from Eurocrypt'09. Our proof is simpler and quantitatively much better than the original proof using the dense model theorem, giving meaningful security guarantees if instantiated with a standard blockcipher like AES. Subsequent to this work, Chung, Lui and Pass gave an interactive variant of our main theorem, and used it to investigate weak notions of Zero-Knowledge. Vadhan and Zheng give a more constructive version of our theorem using their new uniform min-max theorem.
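
    Item (1) of the theorem is the usual indistinguishability requirement; written out in LaTeX, with advantage measured as the difference of acceptance probabilities and notation as in the abstract:

        % Item (1) of the theorem: the simulator h "fakes" the auxiliary input A
        % against every distinguisher f in the family F, up to advantage epsilon.
        \forall f \in \mathcal{F}: \quad
          \bigl|\,\Pr[f(X, A) = 1] - \Pr[f(X, h(X)) = 1]\,\bigr| \;\le\; \epsilon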

    The Uniform Hardcore Lemma via Approximate Bregman Projections ∗

    We give a simple, more efficient and uniform proof of the hard-core lemma, a fundamental result in complexity theory with applications in machine learning and cryptography. Our result follows from the connection between boosting algorithms and hard-core set constructions discovered by Klivans and Servedio [11]. Informally stated, our result is the following: suppose we fix a family of boolean functions. Assume there is an efficient algorithm which, for every input length and every smooth distribution (i.e. one that doesn’t assign too much weight to any single input) over the inputs, produces a circuit that computes the boolean function noticeably better than random. Then there is an efficient algorithm which for every input length produces a circuit that computes the function correctly on almost all inputs. Our algorithm significantly simplifies previous proofs of the uniform and the non-uniform hard-core lemma, while matching or improving the previously best known parameters. The algorithm uses a generalized multiplicative update rule combined with a natural notion of approximate Bregman projection. Bregman projections are widely used in convex optimization and machine learning. We present an algorithm which efficiently approximates the Bregman projection onto the set of high density measures when the Kullback-Leibler divergence is used as a distance function. Our algorithm has a logarithmic runtime over any domain from which we can efficiently sample. High density measures correspond to smooth distributions which arise naturally, for instance, in the context of online learning. Hence, our technique may be of independent interest. ∗This paper includes and extends previous work by one of the authors [10].
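
    The projection step the abstract refers to can be made concrete: under the (generalized) KL divergence, projecting a measure onto the set of δ-dense measures amounts to scaling every weight up by a common factor and capping each at 1. A minimal Python sketch of that step over an explicitly enumerated domain (the function name, the bisection tolerance, and the toy boosting step that follows are illustrative assumptions, not the paper's sampling-based algorithm):

        def kl_project_to_dense(weights, delta, tol=1e-9):
            """KL (Bregman) projection of a measure onto the set of delta-dense
            measures over a domain of size N = len(weights): scale every weight by
            a common factor c >= 1 and cap at 1, choosing the smallest c for which
            the total mass reaches delta * N. Assumes 0 < delta < 1 and at least
            one positive weight. (The paper approximates this step by sampling to
            get a runtime logarithmic in the domain size; this toy version walks
            over the whole domain.)"""
            n = len(weights)
            target = delta * n
            capped = [min(w, 1.0) for w in weights]
            if sum(capped) >= target:
                return capped  # already delta-dense, nothing to project

            def mass(c):
                return sum(min(c * w, 1.0) for w in weights)

            lo, hi = 1.0, 2.0
            while mass(hi) < target:   # find an upper bound on the scaling factor
                hi *= 2.0
            while hi - lo > tol:       # bisect for the smallest sufficient factor
                mid = (lo + hi) / 2.0
                if mass(mid) >= target:
                    hi = mid
                else:
                    lo = mid
            return [min(hi * w, 1.0) for w in weights]

        # Toy step of a hardcore-style boosting loop: downweight the inputs the
        # current circuit predicts correctly, then project back to a dense measure
        # (delta is chosen large here only so that the projection visibly kicks in).
        measure = [1.0] * 8
        correct = [1, 1, 0, 1, 0, 1, 1, 0]
        measure = [m * 0.7 if c else m for m, c in zip(measure, correct)]
        print(kl_project_to_dense(measure, delta=0.9))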