
    Separating Two-Round Secure Computation From Oblivious Transfer

    We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing a round-preserving reduction to the task of secure 2-party computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by only making black-box use of 2-round OT. This is of theoretical and potentially also practical value, as black-box use of primitives tends to lead to more efficient constructions. Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT. As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed "non-compact" variant of 2-party homomorphic secret sharing from 2-round OT.
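
    For reference, here is a minimal sketch of the (standard) 1-out-of-2 oblivious transfer functionality that these reductions target; the notation is ours, not the paper's:
    \[
      f_{\mathsf{OT}}\big((m_0, m_1),\, b\big) = (\bot,\, m_b), \qquad m_0, m_1 \in \{0,1\}^\ell,\ b \in \{0,1\},
    \]
    i.e., the sender holding $(m_0, m_1)$ learns nothing, while the receiver holding the choice bit $b$ learns $m_b$ and nothing about $m_{1-b}$.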

    A Note on Key Agreement and Non-Interactive Commitments

    We observe that any key agreement protocol satisfying perfect completeness, regardless of its round complexity, can be used to construct a non-interactive commitment scheme. This observation simplifies the cryptographic assumptions required for some protocols that utilize non-interactive commitments, and it removes the need for ad-hoc constructions of non-interactive commitments from specific assumptions such as Learning with Errors.
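
    As a reminder of the two objects involved (standard definitions, not the paper's construction), perfect completeness for a key agreement protocol and the requirements on a non-interactive commitment can be summarized as
    \[
      \Pr[\,k_A = k_B\,] = 1 \quad \text{(the two parties always agree on the same key)},
    \]
    and a non-interactive commitment is a single message $c = \mathsf{Com}(m; r)$ that is hiding ($\mathsf{Com}(m_0; r_0) \approx_c \mathsf{Com}(m_1; r_1)$) and binding (no commitment string can be opened to two distinct messages).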

    The Journey from NP to TFNP Hardness

    The class TFNP is the search analog of NP with the additional guarantee that any instance has a solution. TFNP has attracted extensive attention due to its natural syntactic subclasses that capture the computational complexity of important search problems from algorithmic game theory, combinatorial optimization, and computational topology. Thus, one of the main research objectives in the context of TFNP is to search for efficient algorithms for its subclasses and, at the same time, to prove hardness results where efficient algorithms cannot exist. Currently, no problem in TFNP is known to be hard under assumptions such as NP hardness, the existence of one-way functions, or even public-key cryptography. The only known hardness results are based on less general assumptions such as the existence of collision-resistant hash functions, one-way permutations, or less established cryptographic primitives (e.g., program obfuscation or functional encryption). Several works explained this status by showing various barriers to proving hardness of TFNP. In particular, it has been shown that TFNP hardness cannot be based on worst-case NP hardness, unless NP = coNP. Therefore, we ask the following question: What is the weakest assumption sufficient for showing hardness in TFNP? In this work, we answer this question and show that hard-on-average TFNP problems can be based on the weak assumption that there exists a hard-on-average language in NP. In particular, this includes the assumption of the existence of one-way functions. In terms of techniques, we show an interesting interplay between problems in TFNP, derandomization techniques, and zero-knowledge proofs.
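
    For context, the standard definition (not specific to this paper): a polynomial-time decidable, polynomially bounded relation $R \subseteq \{0,1\}^* \times \{0,1\}^*$ defines a TFNP problem if it is total,
    \[
      \forall x \ \exists y:\ |y| \le \mathrm{poly}(|x|) \ \wedge\ (x, y) \in R,
    \]
    and the associated search task is: given $x$, find any such $y$.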

    New Techniques for Zero-Knowledge: Leveraging Inefficient Provers to Reduce Assumptions and Interaction

    We present a transformation from NIZK with inefficient provers in the uniform random string (URS) model to ZAPs (two-message witness-indistinguishable proofs) with inefficient provers. While such a transformation was known for the case where the prover is efficient, the security proof breaks down if the prover is inefficient. Our transformation is obtained via new applications of Nisan-Wigderson designs, a combinatorial object originally introduced in the derandomization literature. We observe that our transformation is applicable both in the setting of super-polynomial provers/poly-time adversaries, as well as in a new fine-grained setting, where the prover is polynomial time and the verifier/simulator/zero-knowledge distinguisher are in a lower complexity class, such as $\mathsf{NC}^1$. We also present $\mathsf{NC}^1$-fine-grained NIZK in the URS model for all of $\mathsf{NP}$ from the worst-case assumption $\oplus\mathsf{L}/\mathsf{poly} \not\subseteq \mathsf{NC}^1$. Our techniques yield the following applications: 1. ZAPs for $\mathsf{AM}$ from Minicrypt assumptions (with super-polynomial time provers), 2. $\mathsf{NC}^1$-fine-grained ZAPs for $\mathsf{NP}$ from worst-case assumptions, 3. Protocols achieving an ``offline'' notion of NIZK (oNIZK) in the standard (no-CRS) model with uniform soundness in both the super-polynomial setting (from Minicrypt assumptions) and the $\mathsf{NC}^1$-fine-grained setting (from worst-case assumptions). The oNIZK notion is sufficient for use in indistinguishability-based proofs.
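
    For orientation, a Nisan-Wigderson design (the standard combinatorial object referenced above; the parameters shown are illustrative rather than the paper's) is a family of sets $S_1, \dots, S_m \subseteq [d]$ satisfying
    \[
      |S_i| = n \ \text{for all } i, \qquad |S_i \cap S_j| \le \log m \ \text{for all } i \ne j,
    \]
    so that a hard function evaluated on the seed restricted to each $S_i$ yields many outputs that are "almost independent" of one another.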

    Stretching demi-bits and nondeterministic-secure pseudorandomness

    We develop the theory of cryptographic nondeterministic-secure pseudorandomness beyond the point reached by Rudich's original work [25], and apply it to draw new consequences in average-case complexity and proof complexity. Specifically, we show the following: Demi-bit stretch: Super-bits and demi-bits are variants of cryptographic pseudorandom generators which are secure against nondeterministic statistical tests [25]. They were introduced to rule out certain approaches to proving strong complexity lower bounds beyond the limitations set out by the Natural Proofs barrier of Razborov and Rudich [23]. Whether demi-bits are stretchable at all had been an open problem since their introduction. We answer this question affirmatively by showing that every demi-bit $b : \{0,1\}^n \to \{0,1\}^{n+1}$ can be stretched into sublinearly many demi-bits $b' : \{0,1\}^n \to \{0,1\}^{n+n^c}$, for every constant $0 < c < 1$. Average-case hardness: Using work by Santhanam [26], we apply our results to obtain new average-case Kolmogorov complexity results: we show that $\mathrm{K}^{\mathrm{poly}}[n-O(1)]$ is zero-error average-case hard against NP/poly machines iff $\mathrm{K}^{\mathrm{poly}}[n-o(n)]$ is, where for a function $s : \mathbb{N} \to \mathbb{N}$, $\mathrm{K}^{\mathrm{poly}}[s(n)]$ denotes the language of all strings $x \in \{0,1\}^n$ for which there are (fixed) polynomial-time Turing machines of description length at most $s(n)$ that output $x$. Characterising super-bits by nondeterministic unpredictability: In the deterministic setting, Yao [31] proved that super-polynomial hardness of pseudorandom generators is equivalent to ("next-bit") unpredictability. Unpredictability roughly means that given any strict prefix of a random string, it is infeasible to predict the next bit. We initiate the study of unpredictability beyond the deterministic setting (in the cryptographic regime), and characterise the nondeterministic hardness of generators from an unpredictability perspective. Specifically, we propose four stronger notions of unpredictability: NP/poly-unpredictability, coNP/poly-unpredictability, ∩-unpredictability and ∪-unpredictability, and show that super-polynomial nondeterministic hardness of generators lies between ∩-unpredictability and ∪-unpredictability. Characterising super-bits by nondeterministic hard-core predicates: We introduce a nondeterministic variant of hard-core predicates, called super-core predicates. We show that the existence of a super-bit is equivalent to the existence of a super-core of some non-shrinking function. This serves as an analogue of the equivalence between the existence of a strong pseudorandom generator and the existence of a hard-core of some one-way function [8, 12], and provides a first alternative characterisation of super-bits. We also prove that a certain class of functions, which may have hard-cores, cannot possess any super-core.
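
    As a baseline for the nondeterministic notions above, Yao's (deterministic) next-bit unpredictability for a generator $G : \{0,1\}^n \to \{0,1\}^{m(n)}$ can be stated roughly (our phrasing) as
    \[
      \Pr_{x \leftarrow \{0,1\}^n}\big[\, P\big(G(x)_1 \cdots G(x)_{i-1}\big) = G(x)_i \,\big] \le \tfrac{1}{2} + \mathrm{negl}(n)
    \]
    for every efficient predictor $P$ and every index $i \le m(n)$; the four notions proposed in the paper strengthen the predictor to various nondeterministic classes.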

    On the Randomness Complexity of Interactive Proofs and Statistical Zero-Knowledge Proofs

    We study the randomness complexity of interactive proofs and zero-knowledge proofs. In particular, we ask whether it is possible to reduce the randomness complexity, $R$, of the verifier to be comparable with the number of bits, $C_V$, that the verifier sends during the interaction. We show that such randomness sparsification is possible in several settings. Specifically, unconditional sparsification can be obtained in the non-uniform setting (where the verifier is modelled as a circuit), and in the uniform setting where the parties have access to a (reusable) common random string (CRS). We further show that constant-round uniform protocols can be sparsified without a CRS under a plausible worst-case complexity-theoretic assumption that was used previously in the context of derandomization. All the above sparsification results preserve statistical zero-knowledge provided that this property holds against a cheating verifier. We further show that randomness sparsification can be applied to honest-verifier statistical zero-knowledge (HVSZK) proofs at the expense of increasing the communication from the prover by $R-F$ bits, or, in the case of honest-verifier perfect zero-knowledge (HVPZK), by slowing down the simulation by a factor of $2^{R-F}$. Here $F$ is a new measure of accessible bit complexity of an HVZK proof system that ranges from $0$ to $R$, where a maximal grade of $R$ is achieved when zero-knowledge holds against a "semi-malicious" verifier that maliciously selects its random tape and then plays honestly. Consequently, we show that some classical HVSZK proof systems, like the one for the complete Statistical-Distance problem (Sahai and Vadhan, JACM 2003), admit randomness sparsification with no penalty. Along the way we introduce new notions of pseudorandomness against interactive proof systems, and study their relations to existing notions of pseudorandomness.
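
    To fix notation (ours, summarizing the abstract): writing $R'$ for the sparsified randomness complexity, the results can be read as
    \[
      R' \approx C_V, \qquad \text{extra prover communication} = R - F \ \text{bits (HVSZK)}, \qquad \text{simulation slowdown} = 2^{R-F} \ \text{(HVPZK)},
    \]
    with no penalty at all when $F = R$, as in the Statistical-Distance proof system mentioned above.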

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different ``degrees'' of average-case complexity. We discuss some of these ``hardness amplification'' results.
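
    One common formalization of the objects discussed (the errorless variant; other variants exist): a distributional problem is a pair $(L, \mathcal{D})$ with $\mathcal{D} = \{\mathcal{D}_n\}$ an ensemble of distributions, and it is easy-on-average if some polynomial-time algorithm $A$ satisfies
    \[
      A(x) \in \{L(x), \bot\} \ \text{for all } x, \qquad \Pr_{x \leftarrow \mathcal{D}_n}\big[\, A(x) = \bot \,\big] \le \delta(n)
    \]
    for a suitably small failure bound $\delta(n)$; the completeness results above are stated for samplable ensembles $\mathcal{D}$.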

    Precise Time and Space Simulatable Zero-Knowledge

    Traditionally, the definition of zero-knowledge states that an interactive proof of $x \in L$ provides zero (additional) knowledge if the view of any \emph{polynomial-time} verifier can be reconstructed by a \emph{polynomial-time} simulator. Since this definition only requires that the worst-case running times of the verifier and simulator are polynomial, zero-knowledge becomes a worst-case notion. In STOC '06, Micali and Pass proposed a new notion of precise zero-knowledge, which captures the idea that the view of any verifier in every interaction can be reconstructed in (almost) the same time (i.e., the view can be ``indistinguishably reconstructed''). This is the strongest notion among the known efforts to make the definition of zero-knowledge precise. However, every algorithm consumes two kinds of computational resources (time and space) during its computation. Although the view of a verifier in the interaction of a precise zero-knowledge protocol can be reconstructed in almost the same time, the simulator may run in very large space while the verifier runs in very small space. In this case it is questionable whether the verifier is truly indifferent between taking part in the interaction and running the simulator. Thus the notion of precise zero-knowledge may still be insufficient. This shows that making the definition of zero-knowledge precise needs further investigation. In this paper, we propose a new notion of precise time and space simulatable zero-knowledge (PTSSZK), which captures the idea that the view of any verifier in each interaction can be reconstructed \emph{not only} in the same time, \emph{but also} in the same space. We construct the first PTSSZK proofs and arguments with simultaneous linear time and linear space precisions for all languages in $\mathsf{NP}$. Our protocols do not use noticeably more rounds than the known precise zero-knowledge protocols, and the probability analysis of the successful extraction in the new simulation strategy may be of independent interest.
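
    Informally (our rendering, not the paper's exact formalization), precise zero-knowledge bounds the simulator's time on each interaction by a fixed polynomial in the verifier's actual time in that interaction, and PTSSZK additionally asks the same of space:
    \[
      \mathrm{time}_{S}(x, z) \le p\big(\mathrm{time}_{V^*}(x, z)\big), \qquad \mathrm{space}_{S}(x, z) \le q\big(\mathrm{space}_{V^*}(x, z)\big),
    \]
    where the right-hand sides are measured in the real interaction being simulated; linear precision corresponds to $p$ and $q$ being linear.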

    Characterizing Deterministic-Prover Zero Knowledge

    Randomness is typically thought to be essential for zero-knowledge protocols. Following this intuition, Goldreich and Oren (Journal of Cryptology '94) proved that auxiliary-input zero knowledge cannot be achieved with a deterministic prover. On the other hand, positive results are only known in the honest-verifier setting, or when the prover is given at least a restricted source of entropy. We prove that when the verifier's auxiliary input is removed (or just bounded), deterministic-prover zero knowledge becomes feasible: - Assuming non-interactive witness-indistinguishable proofs and subexponential indistinguishability obfuscation and one-way functions, we construct deterministic-prover zero-knowledge arguments for $\mathsf{NP} \cap \mathsf{coNP}$ against verifiers with bounded non-uniform auxiliary input. - Assuming also keyless hash functions that are collision-resistant against bounded-auxiliary-input quasipolynomial-time attackers, we construct similar arguments for all of $\mathsf{NP}$. Together with the result of Goldreich and Oren, this characterizes when deterministic-prover zero knowledge is feasible. We also demonstrate the necessity of strong assumptions by showing that deterministic-prover zero-knowledge arguments for a given language imply witness encryption for that language. We further prove that such arguments can always be collapsed to two messages and be made laconic. These implications rely on a more general connection with the notion of predictable arguments by Faonio, Nielsen, and Venturi (PKC '17).
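
    For reference, the standard syntax and security of witness encryption for a language $L$ with witness relation $R_L$ (the general notion, not a claim about the paper's specific construction):
    \[
      \mathsf{ct} \leftarrow \mathsf{Enc}(1^\lambda, x, m), \qquad \mathsf{Dec}(\mathsf{ct}, w) = m \ \text{ whenever } (x, w) \in R_L,
    \]
    and for every $x \notin L$, encryptions of any two messages under $x$ are computationally indistinguishable.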