
    Robustness for Space-Bounded Statistical Zero Knowledge

    We show that the space-bounded Statistical Zero Knowledge classes SZK_L and NISZK_L are surprisingly robust, in that the power of the verifier and simulator can be strengthened or weakened without affecting the resulting class. Coupled with other recent characterizations of these classes [Eric Allender et al., 2023], this can be viewed as lending support to the conjecture that these classes may coincide with the non-space-bounded classes SZK and NISZK, respectively.

    Compact NIZKs from Standard Assumptions on Bilinear Maps

    A non-interactive zero-knowledge (NIZK) protocol enables a prover to convince a verifier of the truth of a statement, without leaking any other information, by sending a single message. The main focus of this work is on exploring short pairing-based NIZKs for all NP languages based on standard assumptions. In this regime, the seminal work of Groth, Ostrovsky, and Sahai (J. ACM '12) (GOS-NIZK) is still considered the state of the art. Although fairly efficient, one drawback of GOS-NIZK is that the proof size is multiplicative in the size of the circuit computing the NP relation. That is, the proof size grows as O(|C|·λ), where C is the circuit for the NP relation and λ is the security parameter. By now, there have been numerous follow-up works focusing on shortening the proof size of pairing-based NIZKs; however, thus far, all works come at the cost of relying either on a non-standard knowledge-type assumption or on a non-static q-type assumption. Specifically, improving the proof size of the original GOS-NIZK under the same standard assumption has remained an open problem. Our main result is a construction of a pairing-based NIZK for all of NP whose proof size is additive in |C|, that is, the proof size grows only as |C| + poly(λ), based on the decisional linear (DLIN) assumption. Since the DLIN assumption is the same assumption underlying GOS-NIZK, our NIZK is a strict improvement on their proof size. As by-products of our main result, we also obtain the following two results: (1) We construct a perfectly zero-knowledge NIZK (NIPZK) for NP relations computable in NC1 with proof size |w|·poly(λ), where |w| is the witness length, based on the DLIN assumption. This is the first pairing-based NIPZK for a non-trivial class of NP languages whose proof size is independent of |C| based on a standard assumption. (2) We construct a universally composable (UC) NIZK for NP relations computable in NC1 in the erasure-free adaptive setting whose proof size is |w|·poly(λ), from the DLIN assumption. This is an improvement over the recent result of Katsumata, Nishimaki, Yamada, and Yamakawa (CRYPTO '19), which gave a similar result based on a non-static q-type assumption. The main building block for all of our NIZKs is a constrained signature scheme with decomposable online-offline efficiency. This is a property which we newly introduce in this paper and construct from the DLIN assumption. We believe this construction is of independent interest.
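    To make the scaling difference concrete, the following sketch compares the two proof-size regimes numerically. The circuit sizes, the security parameter, and the use of λ² as a stand-in for poly(λ) are illustrative assumptions, not figures from the paper.

```python
# Illustrative comparison of NIZK proof-size growth (all numbers hypothetical).

def gos_proof_size(circuit_size: int, lam: int) -> int:
    """GOS-NIZK: multiplicative growth, roughly O(|C| * lambda)."""
    return circuit_size * lam

def additive_proof_size(circuit_size: int, lam: int) -> int:
    """This work: additive growth |C| + poly(lambda); lam**2 is a placeholder for poly(lambda)."""
    return circuit_size + lam ** 2

lam = 128  # security parameter (illustrative)
for circuit_size in (10_000, 1_000_000):
    print(circuit_size,
          gos_proof_size(circuit_size, lam),
          additive_proof_size(circuit_size, lam))
```

    Once |C| dominates the fixed poly(λ) overhead, the additive bound is smaller than the multiplicative one by roughly a factor of λ.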

    Degree 2 is Complete for the Round-Complexity of Malicious MPC

    We show, via a non-interactive reduction, that the existence of a secure multi-party computation (MPC) protocol for degree-2 functions implies the existence of a protocol with the same round complexity for general functions, thus showing that when considering the round complexity of MPC, it suffices to consider very simple functions. Our completeness theorem applies in various settings: information-theoretic and computational, fully malicious and malicious with various types of aborts. In fact, we give a master theorem from which all individual settings follow as direct corollaries. Our basic transformation does not require any additional assumptions and incurs communication and computation blow-up which is polynomial in the number of players and in S, 2^D, where S and D are the circuit size and depth of the function to be computed. Using one-way functions as an additional assumption, the exponential dependence on the depth can be removed. As a consequence, we are able to push the envelope on the state of the art in various settings of MPC, including the following cases.
    * A 3-round perfectly-secure protocol (with guaranteed output delivery) against an active adversary that corrupts less than a quarter of the parties.
    * A 2-round statistically-secure protocol that achieves security with "selective abort" against an active adversary that corrupts less than half of the parties.
    * Assuming one-way functions, a 2-round computationally-secure protocol that achieves security with (standard) abort against an active adversary that corrupts less than half of the parties. This gives a new and conceptually simpler proof of the recent result of Ananth et al. (Crypto 2018).
    Technically, our non-interactive reduction draws from the encoding method of Applebaum, Brakerski and Tsabary (TCC 2018). We extend these methods to ones that can be meaningfully analyzed even in the presence of malicious adversaries.
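    For intuition, a degree-2 function is one in which every monomial has total degree at most two in the inputs; the inner product below is a standard example of such a function (the choice of function and modulus is illustrative, not taken from the paper).

```python
# A minimal degree-2 function over a prime field: the inner product
# f(x, y) = sum_i x_i * y_i (mod p). Every monomial x_i * y_i has total
# degree 2, so this is the kind of "very simple" function the completeness
# theorem lets us restrict attention to when optimizing round complexity.

P = 2**61 - 1  # illustrative prime modulus

def inner_product(x: list[int], y: list[int]) -> int:
    assert len(x) == len(y)
    return sum(a * b for a, b in zip(x, y)) % P

print(inner_product([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```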

    Low-Complexity Cryptographic Hash Functions

    Cryptographic hash functions are efficiently computable functions that shrink a long input into a shorter output while achieving some of the useful security properties of a random function. The most common type of such hash functions is collision resistant hash functions (CRH), which prevent an efficient attacker from finding a pair of inputs on which the function has the same output.
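    As a quick illustration of the two defining properties, shrinking and collision resistance, the sketch below uses SHA-256 as a stand-in hash. It is a heuristic example only, not one of the low-complexity constructions this work studies.

```python
import hashlib

# A shrinking hash: arbitrary-length input, fixed 256-bit output.
# SHA-256 is only a stand-in; the work above concerns hash functions with
# provably low circuit complexity.
def H(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()

# A collision is a pair x != x_prime with H(x) == H(x_prime); collision
# resistance asks that no efficient attacker can produce such a pair.
def is_collision(x: bytes, x_prime: bytes) -> bool:
    return x != x_prime and H(x) == H(x_prime)

print(len(H(b"a" * 10_000)))         # 32 bytes regardless of input length
print(is_collision(b"foo", b"foo"))  # False: equal inputs never count
```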

    Delegating computation reliably: paradigms and constructions

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 285-297). In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for weak devices to delegate expensive tasks, such as storing a large file or running a complex computation, to more powerful entities (say servers) connected to the same network. While the delegation approach seems promising, it raises an immediate concern: when and how can a weak device verify that a computational task was completed correctly? This practically motivated question touches on foundational questions in cryptography and complexity theory. The focus of this thesis is verifying the correctness of delegated computations. We construct efficient protocols (interactive proofs) for delegating computational tasks. In particular, we present:
    * A protocol for delegating any computation, where the work needed to verify the correctness of the output is linear in the input length, polynomial in the computation's depth, and only poly-logarithmic in the computation's size. The space needed for verification is only logarithmic in the computation's size. Thus, for any computation of polynomial size and poly-logarithmic depth (the rich complexity class NC), the work required to verify the correctness of the output is only quasi-linear in the input length. The work required to prove the output's correctness is only polynomial in the original computation's size. This protocol also has applications to constructing one-round arguments for delegating computation, and efficient zero-knowledge proofs.
    * A general transformation, reducing the parallel running time (or computation depth) of the verifier in protocols for delegating computation (interactive proofs) to be constant.
    Next, we explore the power of the delegation paradigm in settings where mutually distrustful parties interact. In particular, we consider the settings of checking the correctness of computer programs and of designing error-correcting codes. We show:
    * A new methodology for checking the correctness of programs (program checking), in which work is delegated from the program checker to the untrusted program being checked. Using this methodology we obtain program checkers for an entire complexity class (the class of NC¹-computations that are WNC-hard), and for a slew of specific functions such as matrix multiplication, inversion, determinant and rank, as well as graph functions such as connectivity, perfect matching and bounded-degree graph isomorphism.
    * A methodology for designing error-correcting codes with efficient decoding procedures, in which work is delegated from the decoder to the encoder. We use this methodology to obtain constant-depth (AC⁰) locally decodable and locally list-decodable codes. We also show that the parameters of these codes are optimal (up to polynomial factors) for constant-depth decoding.
    By Guy N. Rothblum.
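    The claimed verification savings can be sanity-checked with back-of-the-envelope cost formulas. The exponents and constants below are placeholders chosen only for illustration; they are not the protocol's actual polynomials.

```python
import math

# Rough cost model for delegating an NC-style computation (polynomial size,
# polylogarithmic depth). Exponents are illustrative placeholders.

def verifier_work(n: int, size: int, depth: int) -> float:
    # linear in input length, polynomial in depth, polylogarithmic in size
    return n + depth**2 + math.log2(size)**3

def naive_recompute(size: int) -> float:
    # re-running the whole computation costs roughly its size
    return float(size)

n = 10_000                       # input length
size = n**3                      # circuit size, polynomial in n
depth = int(math.log2(size))**2  # polylogarithmic depth
print(f"verifier work:  {verifier_work(n, size, depth):,.0f}")
print(f"naive recompute: {naive_recompute(size):,.0f}")
```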

    Lower Bounds in the Hardware Token Model

    We study the complexity of secure computation in the tamper-proof hardware token model. Our main focus is on non-interactive unconditional two-party computation using bit-OT tokens, but we also study computational security with stateless tokens that have more complex functionality. Our results can be summarized as follows:
    - There exists a class of functions such that the number of bit-OT tokens required to securely implement them is at least the size of the sender's input. The same applies for the receiver's input size (with a different class of functionalities).
    - Non-adaptive protocols in the hardware token model imply efficient (decomposable) randomized encodings. This can be interpreted as evidence of the impossibility of non-adaptive protocols for a large class of functions.
    - There exists a functionality for which there is no protocol in the stateless hardware token model accessing the tokens at most a constant number of times, even when the adversary is computationally bounded.
    En route to proving our results, we make interesting connections between the hardware token model and well-studied notions such as the OT hybrid model, randomized encodings, and obfuscation.
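    For readers unfamiliar with the model, the sketch below captures the ideal behavior of a single bit-OT token: the sender loads two bits, and the receiver later learns exactly one of them. This is a model illustration only, not a secure implementation, and the single-query restriction is an assumption of the sketch.

```python
# Idealized bit-OT token: the sender loads bits (s0, s1); the receiver queries
# with a choice bit c and learns s_c. Ideally, the token reveals nothing about
# s_{1-c} to the receiver and nothing about c to the sender. Sketch only.

class BitOTToken:
    def __init__(self, s0: int, s1: int):
        assert s0 in (0, 1) and s1 in (0, 1)
        self._secrets = (s0, s1)
        self._used = False

    def query(self, c: int) -> int:
        assert c in (0, 1)
        assert not self._used, "this sketch allows a single query per token"
        self._used = True
        return self._secrets[c]

token = BitOTToken(s0=0, s1=1)
print(token.query(1))  # receiver with choice bit c = 1 learns s1 = 1
```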