9 research outputs found

    Aggregate Pseudorandom Functions and Connections to Learning

    Get PDF
    In the first part of this work, we introduce a new type of pseudo-random function for which "aggregate queries" over exponential-sized sets can be efficiently answered. We show how to use algebraic properties of underlying classical pseudo-random functions to construct such "aggregate pseudo-random functions" for a number of classes of aggregation queries under cryptographic hardness assumptions. For example, one aggregate query we achieve is the product of all function values accepted by a polynomial-sized read-once Boolean formula. On the flip side, we show that certain aggregate queries are impossible to support. Aggregate pseudo-random functions fall within the framework of the work of Goldreich, Goldwasser, and Nussboim on the "Implementation of Huge Random Objects," providing truthful implementations of pseudo-random functions for which aggregate queries can be answered. In the second part of this work, we show how various extensions of pseudo-random functions considered recently in the cryptographic literature yield impossibility results for various extensions of machine learning models, continuing a line of investigation originated by Valiant and Kearns in the 1980s. The extended pseudo-random functions we address include constrained pseudo-random functions, aggregatable pseudo-random functions, and pseudo-random functions secure under related-key attacks.
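    The core idea can be illustrated with a toy sketch. The function below is not a secure PRF and not the paper's construction; it is only a keyed function with enough algebraic (multiplicative) structure that the key holder can answer a product-aggregate query over a huge interval in constant time. The modulus, base, and key are illustrative parameters.

```python
# Toy sketch (NOT the paper's construction, and not a secure PRF): a keyed
# function f(x) = G^(K*x) mod P whose algebraic structure lets the key
# holder answer an aggregate query -- the product of f over an interval --
# without enumerating the set.
P = 2**61 - 1        # a Mersenne prime modulus (illustrative parameter)
G = 3                # base (illustrative parameter)
K = 123456789        # secret key (illustrative parameter)

def f(x):
    """'PRF-like' function with multiplicative structure."""
    return pow(G, (K * x) % (P - 1), P)

def product_naive(a, b):
    """Aggregate by brute force: feasible only for small intervals."""
    out = 1
    for x in range(a, b + 1):
        out = out * f(x) % P
    return out

def product_fast(a, b):
    """Aggregate in O(1): prod G^(K*x) = G^(K * sum_{x=a}^{b} x) mod P."""
    s = (a + b) * (b - a + 1) // 2       # closed-form interval sum
    return pow(G, (K * s) % (P - 1), P)
```

    For an exponential-sized interval only `product_fast` is feasible; the paper's constructions handle far richer query classes (e.g., sets accepted by read-once formulas) while retaining pseudorandomness under cryptographic assumptions.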

    Decompositions of graphs of functions and efficient iterations of lookup tables

    Get PDF
    We show that every function f implemented as a lookup table can be implemented such that the computational complexity of evaluating f^m(x) is small, independently of m and x. The implementation only increases the storage space by a small constant factor.
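    For contrast, a standard baseline for this problem is binary lifting, which evaluates f^m(x) in O(log m) time but pays a logarithmic (not constant) factor in storage. The sketch below shows that textbook technique, not the paper's decomposition-based scheme.

```python
# Binary lifting: evaluate f^m(x) in O(log m) time after precomputing
# jump[j][x] = f^(2^j)(x). Storage grows by a log factor -- the paper's
# construction improves this to a small constant factor.
def preprocess(f, max_m):
    n = len(f)
    levels = max(1, max_m.bit_length())
    jump = [list(f)]                     # jump[0] = f itself
    for _ in range(1, levels):
        prev = jump[-1]
        jump.append([prev[prev[x]] for x in range(n)])
    return jump

def iterate(jump, x, m):
    """Return f^m(x) by composing the powers of two present in m."""
    j = 0
    while m:
        if m & 1:
            x = jump[j][x]
        m >>= 1
        j += 1
    return x
```

    For instance, with f the cyclic shift on {0, 1, 2, 3}, `iterate` follows the cycle: f^10(0) = 10 mod 4 = 2.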

    Zero Knowledge Protocols from Succinct Constraint Detection

    Get PDF
    We study the problem of constructing proof systems that achieve both soundness and zero knowledge unconditionally (without relying on intractability assumptions). Known techniques for this goal are primarily *combinatorial*, despite the fact that constructions of interactive proofs (IPs) and probabilistically checkable proofs (PCPs) heavily rely on *algebraic* techniques to achieve their properties. We present simple and natural modifications of well-known algebraic IP and PCP protocols that achieve unconditional (perfect) zero knowledge in recently introduced models, overcoming limitations of known techniques.
    1. We modify the PCP of Ben-Sasson and Sudan [BS08] to obtain zero knowledge for NEXP in the model of Interactive Oracle Proofs [BCS16,RRR16], where the verifier, in each round, receives a PCP from the prover.
    2. We modify the IP of Lund, Fortnow, Karloff, and Nisan [LFKN92] to obtain zero knowledge for #P in the model of Interactive PCPs [KR08], where the verifier first receives a PCP from the prover and then interacts with him.
    The simulators in our zero knowledge protocols rely on solving a problem that lies at the intersection of coding theory, linear algebra, and computational complexity, which we call the *succinct constraint detection* problem, and which consists of detecting dual constraints with polynomial support size for codes of exponential block length. Our two results rely on solutions to this problem for fundamental classes of linear codes:
    * An algorithm to detect constraints for Reed--Muller codes of exponential length. This algorithm exploits the Raz--Shpilka [RS05] deterministic polynomial identity testing algorithm and shows, to our knowledge, a first connection of algebraic complexity theory with zero knowledge.
    * An algorithm to detect constraints for PCPs of Proximity of Reed--Solomon codes [BS08] of exponential degree. This algorithm exploits the recursive structure of the PCPs of Proximity to show that small-support constraints are locally spanned by a small number of small-support constraints.
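    A tiny concrete instance of constraint detection (a sketch, not the paper's algorithm): for a Reed--Solomon code of degree < d, any d+1 evaluation points carry a small-support dual constraint, because the d-th divided difference of a degree-< d polynomial vanishes.

```python
# For any d+1 distinct points a_0..a_d, every polynomial p of degree < d
# satisfies the dual constraint
#     sum_i  p(a_i) / prod_{j != i} (a_i - a_j)  =  0,
# i.e. the d-th divided difference of p is zero. The coefficients below are
# a dual codeword supported on just those d+1 positions.
from fractions import Fraction

def small_support_constraint(points):
    """Dual-codeword coefficients supported on the given d+1 points."""
    coeffs = []
    for i, ai in enumerate(points):
        denom = 1
        for j, aj in enumerate(points):
            if j != i:
                denom *= ai - aj
        coeffs.append(Fraction(1, denom))
    return coeffs
```

    With p(x) = 3x^2 - x + 7 (degree < 3) and the four points [0, 1, 2, 5], the weighted sum of evaluations is exactly 0. The hard part addressed in the paper is detecting such constraints *succinctly* for codes of exponential block length.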

    On the Implementation of Huge Random Objects

    No full text
    We initiate a general study of the feasibility of implementing (huge) random objects, and demonstrate its applicability to a number of areas in which random objects occur naturally. We highlight two types of measures of the quality of the implementation (with respect to the desired specification): the first type corresponds to various standard notions of indistinguishability (applied to function ensembles), whereas the second type is a novel notion that we call truthfulness. Intuitively, a truthful implementation of a random object of Type T must (always) be an object of Type T, and not merely be indistinguishable from a random object of Type T. Our formalism allows for the consideration of random objects that satisfy some fixed property (or have some fixed structure), as well as the consideration of objects supporting complex queries. For example, we consider the truthful implementation of random Hamiltonian graphs, as well as support for complex queries regarding such graphs (e.g., providing the next vertex along a fixed Hamiltonian path in such a graph).
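    The Hamiltonian-graph example can be sketched as follows. This is an illustrative toy, not the paper's construction: a Feistel network stands in for a pseudorandom permutation pi over 2^32 vertices, vertex pi(i) is the i-th vertex on a path, and the key and round count are hypothetical parameters. Because the path is built in, the implemented graph is *always* Hamiltonian, i.e. truthful, not merely indistinguishable from one.

```python
# Toy sketch of a truthful implementation of a huge random graph that is
# guaranteed to contain a Hamiltonian path (illustrative, not the paper's
# construction).
import hashlib

N_BITS = 32                      # vertices are 32-bit labels; 2^32 of them
HALF = N_BITS // 2
MASK = (1 << HALF) - 1
KEY = b"demo-key"                # hypothetical secret key

def _f(rnd, half):
    """Keyed round function for the Feistel network."""
    h = hashlib.blake2b(half.to_bytes(2, "big"),
                        key=KEY + bytes([rnd]), digest_size=2)
    return int.from_bytes(h.digest(), "big")

def pi(v):                       # 4-round Feistel permutation
    L, R = v >> HALF, v & MASK
    for rnd in range(4):
        L, R = R, L ^ _f(rnd, R)
    return (L << HALF) | R

def pi_inv(v):                   # run the rounds backwards
    L, R = v >> HALF, v & MASK
    for rnd in reversed(range(4)):
        L, R = R ^ _f(rnd, L), L
    return (L << HALF) | R

def next_on_path(v):
    """Complex query: successor of v on the fixed Hamiltonian path."""
    i = pi_inv(v)                # position of v on the path
    return None if i == (1 << N_BITS) - 1 else pi(i + 1)

def has_edge(u, v):
    """Standard query: path edges plus extra pseudorandom edges."""
    i, j = sorted((pi_inv(u), pi_inv(v)))
    if j == i + 1:               # consecutive on the path
        return True
    pair = min(u, v).to_bytes(4, "big") + max(u, v).to_bytes(4, "big")
    h = hashlib.blake2b(pair, key=KEY, digest_size=1)
    return h.digest()[0] < 8     # extra random edges, density ~1/32
```

    Both query types run in time polylogarithmic in the number of vertices, even though materializing the graph is infeasible.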

    On the Implementation of Huge Random Objects

    No full text
    We initiate a general study of pseudo-random implementations of huge random objects, and apply it to a few areas in which random objects occur naturally. For example, a random object being considered may be a random connected graph, a random bounded-degree graph, or a random error-correcting code with good distance. A pseudo-random implementation of such type T objects must generate objects of type T that cannot be distinguished from random ones, rather than objects that cannot be distinguished from type T objects (although they are not of type T at all). We model a type T object as a function, and access objects by queries into these functions. We investigate supporting both standard queries, which only evaluate the primary function at locations of the user's choice (e.g., edge queries in a graph), and complex queries, which may ask for the result of a computation on the primary function, where this computation may be infeasible to perform with a polynomial number of standard queries (e.g., providing the next vertex along a Hamiltonian path in the graph).
