5 research outputs found

    Multi-Theorem Preprocessing NIZKs from Lattices

    Non-interactive zero-knowledge (NIZK) proofs are fundamental to modern cryptography. Numerous NIZK constructions are known in both the random oracle and the common reference string (CRS) models. In the CRS model, there exist constructions from several classes of cryptographic assumptions such as trapdoor permutations, pairings, and indistinguishability obfuscation. Notably absent from this list, however, are constructions from standard lattice assumptions. While there has been partial progress in realizing NIZKs from lattices for specific languages, constructing NIZK proofs (and arguments) for all of NP from standard lattice assumptions remains open. In this work, we make progress on this problem by giving the first construction of a multi-theorem NIZK for NP from standard lattice assumptions in the preprocessing model. In the preprocessing model, a (trusted) setup algorithm generates proving and verification keys. The proving key is needed to construct proofs and the verification key is needed to check proofs. In the multi-theorem setting, the proving and verification keys should be reusable for an unbounded number of theorems without compromising soundness or zero-knowledge. Existing constructions of NIZKs in the preprocessing model (or even the designated-verifier model) that rely on weaker assumptions like one-way functions or oblivious transfer are only secure in a single-theorem setting. Thus, constructing multi-theorem NIZKs in the preprocessing model does not seem to be inherently easier than constructing them in the CRS model. We begin by constructing a multi-theorem preprocessing NIZK directly from context-hiding homomorphic signatures. Then, we show how to efficiently implement the preprocessing step using a new cryptographic primitive called blind homomorphic signatures. This primitive may be of independent interest. Finally, we show how to leverage our new lattice-based preprocessing NIZKs to obtain new malicious-secure MPC protocols purely from standard lattice assumptions.
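
    The preprocessing model above fixes a simple syntax: a one-time trusted setup outputs a proving key and a verification key, which are then reused across arbitrarily many theorems. The sketch below is only an interface illustration of that workflow with placeholder names; it is not the paper's lattice-based construction, and the comments about homomorphic signatures are assumptions about how such keys could be populated.

```python
# Sketch of the preprocessing-model NIZK syntax (illustration only, not the
# paper's construction). Setup runs once; the resulting keys are reused for
# many theorems, which is exactly the multi-theorem requirement.
from dataclasses import dataclass
from typing import Any

@dataclass
class ProvingKey:
    material: Any      # e.g. a secret key plus a homomorphic signature on it (assumption)

@dataclass
class VerificationKey:
    material: Any      # e.g. the homomorphic-signature verification key (assumption)

class PreprocessingNIZK:
    """Abstract syntax: setup() -> (k_P, k_V); prove(k_P, x, w) -> pi; verify(k_V, x, pi)."""

    def setup(self) -> tuple[ProvingKey, VerificationKey]:
        raise NotImplementedError

    def prove(self, k_p: ProvingKey, statement: bytes, witness: bytes) -> bytes:
        raise NotImplementedError

    def verify(self, k_v: VerificationKey, statement: bytes, proof: bytes) -> bool:
        raise NotImplementedError

# Multi-theorem usage pattern: one trusted setup, many statements.
def run_many(nizk: PreprocessingNIZK, statements_and_witnesses):
    k_p, k_v = nizk.setup()                # trusted, one-time preprocessing
    for x, w in statements_and_witnesses:
        pi = nizk.prove(k_p, x, w)         # proving key reused across theorems
        assert nizk.verify(k_v, x, pi)     # verification key reused across theorems
```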

    LNCS

    We extend a commitment scheme based on the learning with errors over rings (RLWE) problem, and present efficient companion zero-knowledge proofs of knowledge. Our scheme maps elements from the ring (or equivalently, n elements fro…
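
    As a rough illustration of the kind of commitment being extended, here is a toy RLWE-style commitment over R_q = Z_q[x]/(x^n + 1) with the shape Com(m; r, e1, e2) = (a·r + e1, b·r + e2 + ⌊q/2⌋·m). The parameters, the encoding of m, and the opening check are assumptions chosen for readability and are nowhere near secure; this is not the paper's exact scheme, and its zero-knowledge proofs are not shown.

```python
# Toy sketch of an RLWE-style commitment in R_q = Z_q[x]/(x^n + 1).
# Parameters are far too small to be secure; this only illustrates the shape
# Com(m; r, e1, e2) = (a*r + e1, b*r + e2 + (q//2)*m), not the paper's scheme.
import numpy as np

n, q = 16, 3329          # toy ring dimension and modulus (assumed, not from the paper)
rng = np.random.default_rng(0)

def ring_mul(f, g):
    """Multiply two degree-<n polynomials modulo (x^n + 1, q)."""
    full = np.convolve(f, g)          # degree <= 2n-2
    lo, hi = full[:n], full[n:]
    res = lo.copy()
    res[:hi.size] = res[:hi.size] - hi   # reduce using x^n = -1
    return res % q

def small_poly():
    """Small (ternary) randomness/noise polynomial."""
    return rng.integers(-1, 2, size=n)

def commit(a, b, m, r, e1, e2):
    """Commitment is a pair of ring elements; m has 0/1 coefficients."""
    u = (ring_mul(a, r) + e1) % q
    v = (ring_mul(b, r) + e2 + (q // 2) * m) % q
    return u, v

def open_check(a, b, u, v, m, r, e1, e2):
    """Verifier recomputes the commitment from the claimed opening."""
    u2, v2 = commit(a, b, m, r, e1, e2)
    return np.array_equal(u, u2) and np.array_equal(v, v2)

# Usage: commit to a random bit-string encoded as polynomial coefficients.
a, b = rng.integers(0, q, n), rng.integers(0, q, n)   # public random ring elements
m = rng.integers(0, 2, n)
r, e1, e2 = small_poly(), small_poly(), small_poly()
u, v = commit(a, b, m, r, e1, e2)
assert open_check(a, b, u, v, m, r, e1, e2)
```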

    The Hunting of the SNARK

    The existence of succinct non-interactive arguments for NP (i.e., non-interactive computationally-sound proofs where the verifier's work is essentially independent of the complexity of the NP nondeterministic verifier) has been an intriguing question for the past two decades. Other than CS proofs in the random oracle model [Micali, FOCS '94], the only existing candidate construction is based on an elaborate assumption that is tailored to a specific protocol [Di Crescenzo and Lipmaa, CiE '08]. We formulate a general and relatively natural notion of an extractable collision-resistant hash function (ECRH) and show that, if ECRHs exist, then a modified version of Di Crescenzo and Lipmaa's protocol is a succinct non-interactive argument for NP. Furthermore, the modified protocol is actually a succinct non-interactive adaptive argument of knowledge (SNARK). We then propose several candidate constructions for ECRHs and relaxations thereof. We demonstrate the applicability of SNARKs to various forms of delegation of computation, to succinct non-interactive zero-knowledge arguments, and to succinct two-party secure computation. Finally, we show that SNARKs essentially imply the existence of ECRHs, thus demonstrating the necessity of the assumption. Going beyond ECRHs, we formulate the notion of extractable one-way functions (EOWFs). Assuming the existence of a natural variant of EOWFs, we construct a 2-message selective-opening-attack-secure commitment scheme and a 3-round zero-knowledge argument of knowledge. Furthermore, if the EOWFs are concurrently extractable, the 3-round zero-knowledge protocol is also concurrent zero-knowledge. Our constructions circumvent previous black-box impossibility results regarding these protocols by relying on EOWFs as the non-black-box component in the security reductions.
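
    For background on the hashing side of the construction: an ECRH strengthens an ordinary shrinking collision-resistant hash with an extraction guarantee. Extraction is an assumption about adversaries and cannot be demonstrated in code, but the minimal Merkle-style hash below shows the kind of length-independent digest that succinct arguments build on; it is a plain CRH sketch, not an ECRH and not the protocol of the abstract.

```python
# Minimal Merkle-style shrinking hash built from SHA-256 (background sketch
# only: an ordinary collision-resistant hash, not an ECRH -- extractability is
# an assumption about adversaries, not a property code can demonstrate).
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    """Hash an arbitrarily long list of chunks down to a 32-byte digest."""
    level = [_h(c) for c in chunks] or [_h(b"")]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The digest length is independent of the input length, which is the kind of
# succinctness that SNARK verifiers exploit.
long_input = [bytes([i % 256]) * 1024 for i in range(1000)]   # ~1 MB of data
print(merkle_root(long_input).hex())
```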

    LNCS

    We construct a perfectly binding string commitment scheme whose security is based on the learning parity with noise (LPN) assumption, or equivalently, the hardness of decoding random linear codes. Our scheme not only allows for a simple and efficient zero-knowledge proof of knowledge for committed values (essentially a Σ-protocol), but also for such proofs showing any kind of relation amongst committed values, i.e. proving that messages m_0, ..., m_u are such that m_0 = C(m_1, ..., m_u) for any circuit C. To achieve a soundness error exponentially small in a security parameter t, when the zero-knowledge property relies on the LPN problem with secrets of length l, our 3-round protocol has communication complexity O(t|C|l log(l)) and computational complexity of O(t|C|l) bit operations. The hidden constants are small, and the computation consists mostly of computing inner products of bit-vectors.
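
    A minimal sketch of the general LPN-commitment shape may help: commit to a message m with randomness s and low-weight noise e as c = A·s ⊕ B·m ⊕ e over GF(2), and open by revealing (m, s, e) subject to a weight bound. The matrices, lengths, noise rate, and weight bound below are illustrative assumptions, not the paper's parameterisation, and the accompanying Σ-protocols are not shown.

```python
# Toy sketch of an LPN-style bit-string commitment: c = A*s xor B*m xor e over
# GF(2), with e of low Hamming weight. Parameters are illustrative only (and
# insecure); the opening is (m, s, e) together with a weight check.
import numpy as np

rng = np.random.default_rng(1)
n, l, v, tau = 128, 32, 16, 0.05     # code length, randomness/message lengths, noise rate (assumed)
wt_bound = int(2 * tau * n)          # generous weight bound for valid openings

A = rng.integers(0, 2, (n, l))       # public random matrices (the "CRS")
B = rng.integers(0, 2, (n, v))

def noise():
    """Noise vector of Hamming weight exactly tau*n."""
    e = np.zeros(n, dtype=int)
    e[rng.choice(n, size=int(tau * n), replace=False)] = 1
    return e

def commit(m, s, e):
    return (A @ s + B @ m + e) % 2

def open_check(c, m, s, e):
    """Accept iff the opening recomputes c and the noise has low weight."""
    return int(e.sum()) <= wt_bound and np.array_equal((A @ s + B @ m + e) % 2, c)

# Usage
m = rng.integers(0, 2, v)
s = rng.integers(0, 2, l)
e = noise()
c = commit(m, s, e)
assert open_check(c, m, s, e)
```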

    Delegating computation reliably: paradigms and constructions

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 285-297).

    In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for weak devices to delegate expensive tasks, such as storing a large file or running a complex computation, to more powerful entities (say servers) connected to the same network. While the delegation approach seems promising, it raises an immediate concern: when and how can a weak device verify that a computational task was completed correctly? This practically motivated question touches on foundational questions in cryptography and complexity theory. The focus of this thesis is verifying the correctness of delegated computations. We construct efficient protocols (interactive proofs) for delegating computational tasks. In particular, we present:

    * A protocol for delegating any computation, where the work needed to verify the correctness of the output is linear in the input length, polynomial in the computation's depth, and only poly-logarithmic in the computation's size. The space needed for verification is only logarithmic in the computation size. Thus, for any computation of polynomial size and poly-logarithmic depth (the rich complexity class NC), the work required to verify the correctness of the output is only quasi-linear in the input length. The work required to prove the output's correctness is only polynomial in the original computation's size. This protocol also has applications to constructing one-round arguments for delegating computation, and efficient zero-knowledge proofs.
    * A general transformation, reducing the parallel running time (or computation depth) of the verifier in protocols for delegating computation (interactive proofs) to be constant.

    Next, we explore the power of the delegation paradigm in settings where mutually distrustful parties interact. In particular, we consider the settings of checking the correctness of computer programs and of designing error-correcting codes. We show:

    * A new methodology for checking the correctness of programs (program checking), in which work is delegated from the program checker to the untrusted program being checked. Using this methodology we obtain program checkers for an entire complexity class (the class of NC¹-computations that are WNC-hard), and for a slew of specific functions such as matrix multiplication, inversion, determinant and rank, as well as graph functions such as connectivity, perfect matching and bounded-degree graph isomorphism.
    * A methodology for designing error-correcting codes with efficient decoding procedures, in which work is delegated from the decoder to the encoder. We use this methodology to obtain constant-depth (AC⁰) locally decodable and locally list-decodable codes. We also show that the parameters of these codes are optimal (up to polynomial factors) for constant-depth decoding.

    by Guy N. Rothblum. Ph.D.
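
    The thesis's interactive proofs are not reproduced here, but the delegation idea behind program checking for matrix multiplication can be illustrated by the classic Freivalds check: the verifier spends roughly O(n²) work per round to test a claimed product instead of recomputing it. This is a standard textbook check used only as an illustration, not one of the thesis's protocols.

```python
# Freivalds' probabilistic check for delegated matrix multiplication: verify a
# claimed product C = A @ B with O(n^2) work per round instead of recomputing
# it. A classic program-checking example; not one of the thesis's protocols.
import numpy as np

def freivalds_check(A, B, C, rounds=20, rng=None):
    """Return True if C == A @ B, with error probability at most 2**(-rounds)."""
    if rng is None:
        rng = np.random.default_rng()
    n = C.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=n)           # random 0/1 challenge vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                         # a single mismatch is conclusive
    return True

# Usage: an honest server vs. a server that tampers with one entry.
rng = np.random.default_rng(2)
A = rng.integers(0, 10, (200, 200))
B = rng.integers(0, 10, (200, 200))
C = A @ B
assert freivalds_check(A, B, C, rng=rng)
C_bad = C.copy()
C_bad[0, 0] += 1
assert not freivalds_check(A, B, C_bad, rng=rng)
```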