21 research outputs found

    Statistical Witness Indistinguishability (and more) in Two Messages

    Two-message witness indistinguishable protocols were first constructed by Dwork and Naor (FOCS 00). They have since proven extremely useful in the design of several cryptographic primitives. However, so far no two-message argument for NP has provided statistical privacy against malicious verifiers. In this paper, we construct the first:
    - Two-message statistical witness indistinguishable (SWI) arguments for NP.
    - Two-message statistical zero-knowledge arguments for NP with super-polynomial simulation (Statistical SPS-ZK). These were previously believed to be impossible to construct via black-box reductions (Chung et al., ePrint 2012).
    - Two-message statistical distributional weak zero-knowledge (SwZK) arguments for NP with polynomial simulation, where the instance is sampled by the prover in the second round.
    These protocols are based on quasi-polynomial hardness of two-message oblivious transfer (OT) with game-based security, which can in turn be based on quasi-polynomial DDH or QR or N-th residuosity. We also demonstrate simple applications of these arguments to constructing more secure forms of oblivious transfer. Along the way, we show that the Kalai and Raz (Crypto 09) transform compressing interactive proofs to two-message arguments can be generalized to compress certain types of interactive arguments. We introduce and construct a new technical tool, which is a variant of extractable two-message statistically hiding commitments, by extending the work of Khurana and Sahai (FOCS 17). These techniques may be of independent interest.
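
    As a reference point, the privacy notion at stake can be written out as follows; this is the standard formulation of statistical witness indistinguishability in our own notation, not text from the paper. For every statement x with witnesses w_1 and w_2, and every (possibly unbounded) malicious verifier V*, the verifier's views in the two executions are statistically close:

        \Delta\Big( \mathsf{view}_{V^*}\big[\langle P(x,w_1), V^*(x)\rangle\big],\;
                    \mathsf{view}_{V^*}\big[\langle P(x,w_2), V^*(x)\rangle\big] \Big) \;\le\; \mathrm{negl}(\lambda)

    where \Delta denotes statistical distance and \lambda is the security parameter.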

    PCPs and Instance Compression from a Cryptographic Lens

    Pseudo-Deterministic Proofs

    We introduce pseudo-deterministic interactive proofs (psdIP): interactive proof systems for search problems where the verifier is guaranteed with high probability to produce the same output on different executions. As in the case with classical interactive proofs, the verifier is a probabilistic polynomial time algorithm interacting with an untrusted powerful prover. We view pseudo-deterministic interactive proofs as an extension of the study of pseudo-deterministic randomized polynomial time algorithms: the goal of the latter is to find canonical solutions to search problems, whereas the goal of the former is to prove that a solution to a search problem is canonical to a probabilistic polynomial time verifier. Alternatively, one may think of the powerful prover as aiding the probabilistic polynomial time verifier to find canonical solutions to search problems, with high probability over the randomness of the verifier. The challenge is that pseudo-determinism should hold not only with respect to the randomness, but also with respect to the prover: a malicious prover should not be able to cause the verifier to output a solution other than the unique canonical one. The IP=PSPACE characterization implies that psdIP = IP. The challenge is to find constant round pseudo-deterministic interactive proofs for hard search problems. We show a constant round pseudo-deterministic interactive proof for the graph isomorphism problem: on any input pair of isomorphic graphs (G_0,G_1), there exists a unique isomorphism phi from G_0 to G_1 (although many isomorphisms may exist) which will be output by the verifier with high probability, regardless of any dishonest prover strategy. In contrast, we show that it is unlikely that psdIP proofs with constant rounds exist for NP-complete problems: if any NP-complete problem has a constant round psdIP protocol, then the polynomial hierarchy collapses.
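
    To make "canonical solution" concrete, the following toy Python sketch fixes the lexicographically smallest isomorphism as the canonical one and finds it by brute force on small graphs. It only illustrates what a unique canonical output means; it is not the paper's constant-round protocol, and the particular choice of canonical form is our own assumption for illustration.

        from itertools import permutations

        def is_isomorphism(perm, g0_edges, g1_edges):
            """Check that the map i -> perm[i] sends the edge set of G0 onto that of G1."""
            mapped = {frozenset((perm[u], perm[v])) for (u, v) in g0_edges}
            return mapped == {frozenset(e) for e in g1_edges}

        def canonical_isomorphism(g0_edges, g1_edges, n):
            """Return the lexicographically smallest isomorphism from G0 to G1 (brute force).
            A pseudo-deterministic proof would let the verifier output this same unique map
            with high probability regardless of the prover's strategy; the sketch only shows
            what 'unique canonical output' means, not how a protocol achieves it."""
            for perm in permutations(range(n)):  # permutations are generated in lexicographic order
                if is_isomorphism(perm, g0_edges, g1_edges):
                    return perm
            return None  # the graphs are not isomorphic

        # Example: two relabelled triangles; the output is always the same isomorphism.
        g0 = [(0, 1), (1, 2), (2, 0)]
        g1 = [(0, 2), (2, 1), (1, 0)]
        print(canonical_isomorphism(g0, g1, 3))  # -> (0, 1, 2)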

    Efficient holographic proofs

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mathematics, 1996. Includes bibliographical references (p. 57-63). By Alexander Craig Russell.

    On the Randomness Complexity of Interactive Proofs and Statistical Zero-Knowledge Proofs

    We study the randomness complexity of interactive proofs and zero-knowledge proofs. In particular, we ask whether it is possible to reduce the randomness complexity, R, of the verifier to be comparable with the number of bits, C_V, that the verifier sends during the interaction. We show that such randomness sparsification is possible in several settings. Specifically, unconditional sparsification can be obtained in the non-uniform setting (where the verifier is modelled as a circuit), and in the uniform setting where the parties have access to a (reusable) common random string (CRS). We further show that constant-round uniform protocols can be sparsified without a CRS under a plausible worst-case complexity-theoretic assumption that was used previously in the context of derandomization. All the above sparsification results preserve statistical zero-knowledge provided that this property holds against a cheating verifier. We further show that randomness sparsification can be applied to honest-verifier statistical zero-knowledge (HVSZK) proofs at the expense of increasing the communication from the prover by R-F bits, or, in the case of honest-verifier perfect zero-knowledge (HVPZK), by slowing down the simulation by a factor of 2^{R-F}. Here F is a new measure of accessible bit complexity of an HVZK proof system that ranges from 0 to R, where a maximal grade of R is achieved when zero-knowledge holds against a "semi-malicious" verifier that maliciously selects its random tape and then plays honestly. Consequently, we show that some classical HVSZK proof systems, like the one for the complete Statistical-Distance problem (Sahai and Vadhan, JACM 2003), admit randomness sparsification with no penalty. Along the way we introduce new notions of pseudorandomness against interactive proof systems, and study their relations to existing notions of pseudorandomness.
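
    Collected in one place, the quantitative trade-offs stated above read as follows (notation as in the abstract: R is the verifier's randomness, C_V the number of bits the verifier sends, and F the accessible bit complexity, 0 \le F \le R):

        R \;\longrightarrow\; \text{comparable to } C_V \quad \text{(verifier randomness after sparsification)}
        \text{HVSZK:}\quad \text{prover communication grows by } R - F \text{ bits}
        \text{HVPZK:}\quad \text{simulation slows down by a factor of } 2^{\,R-F}
        F = R \;\Longrightarrow\; \text{sparsification with no penalty (e.g., the Statistical-Distance proof system)}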

    Guidable Local Hamiltonian Problems with Implications to Heuristic Ansätze State Preparation and the Quantum PCP Conjecture

    We study 'Merlinized' versions of the recently defined Guided Local Hamiltonian problem, which we call 'Guidable Local Hamiltonian' problems. Unlike their guided counterparts, these problems do not have a guiding state provided as a part of the input, but merely come with the promise that one exists. We consider in particular two classes of guiding states: those that can be prepared efficiently by a quantum circuit; and those belonging to a class of quantum states we call classically evaluatable, for which it is possible to efficiently compute expectation values of local observables classically. We show that guidable local Hamiltonian problems for both classes of guiding states are QCMA-complete in the inverse-polynomial precision setting, but lie within NP (or NqP) in the constant precision regime when the guiding state is classically evaluatable. Our completeness results show that, from a complexity-theoretic perspective, classical Ansätze selected by classical heuristics are just as powerful as quantum Ansätze prepared by quantum heuristics, as long as one has access to quantum phase estimation. In relation to the quantum PCP conjecture, we (i) define a complexity class capturing quantum-classical probabilistically checkable proof systems and show that it is contained in BQP^{NP[1]} for constant proof queries; (ii) give a no-go result on 'dequantizing' the known quantum reduction which maps a QPCP-verification circuit to a local Hamiltonian with constant promise gap; (iii) give several no-go results for the existence of quantum gap amplification procedures that preserve certain ground state properties; and (iv) propose two conjectures that can be viewed as stronger versions of the NLTS theorem. Finally, we show that many of our results can be directly modified to obtain similar results for the class MA. Comment: 61 pages, 6 figures.
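
    As a toy illustration of why some states are 'classically evaluatable' in the sense above: for a product state, the expectation value of a local observable reduces to a contraction over only the qubits in its support. The NumPy sketch below assumes the guiding state is an n-qubit product state; this restriction is our simplifying assumption for illustration, and the paper's class is defined more broadly.

        import numpy as np

        def local_expectation(product_state, observable, support):
            """Expectation <psi|O|psi> of an observable O acting on the qubits in `support`,
            for |psi> given as a product state (a list of normalised single-qubit vectors).
            Because the state factorises, only the qubits in the support matter, so the cost
            is exponential only in |support|, not in the total number of qubits."""
            psi = np.array([1.0 + 0.0j])
            for q in support:                       # tensor together the factors on the support
                psi = np.kron(psi, product_state[q])
            return np.real_if_close(np.vdot(psi, observable @ psi))

        # Example: 50 qubits all in |+>, expectation of Z_3 Z_7 (a 2-local observable).
        plus = np.array([1.0, 1.0]) / np.sqrt(2)
        state = [plus] * 50
        Z = np.diag([1.0, -1.0])
        ZZ = np.kron(Z, Z)
        print(local_expectation(state, ZZ, support=[3, 7]))  # 0.0 for |+>^(x n)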

    Barriers for Succinct Arguments in the Random Oracle Model

    We establish barriers on the efficiency of succinct arguments in the random oracle model. We give evidence that, under standard complexity assumptions, there do not exist succinct arguments where the argument verifier makes a small number of queries to the random oracle. The new barriers follow from new insights into how probabilistic proofs play a fundamental role in constructing succinct arguments in the random oracle model.
    *IOPs are necessary for succinctness.* We prove that any succinct argument in the random oracle model can be transformed into a corresponding interactive oracle proof (IOP). The query complexity of the IOP is related to the succinctness of the argument.
    *Algorithms for IOPs.* We prove that if a language has an IOP with good soundness relative to query complexity, then it can be decided via a fast algorithm with small space complexity.
    By combining these results we obtain barriers for a large class of deterministic and non-deterministic languages. For example, a succinct argument for 3SAT with few verifier queries implies an IOP with good parameters, which in turn implies a fast algorithm for 3SAT that contradicts the Exponential-Time Hypothesis. We additionally present results that shed light on the necessity of several features of probabilistic proofs that are typically used to construct succinct arguments, such as holography and state restoration soundness. Our results collectively provide an explanation for why known constructions of succinct arguments have a certain structure.

    Quantum Generalizations of the Polynomial Hierarchy with Applications to QMA(2)

    The polynomial-time hierarchy (PH) has proven to be a powerful tool for providing separations in computational complexity theory (modulo standard conjectures such as PH does not collapse). Here, we study whether two quantum generalizations of PH can similarly prove separations in the quantum setting. The first generalization, QCPH, uses classical proofs, and the second, QPH, uses quantum proofs. For the former, we show quantum variants of the Karp-Lipton theorem and Toda's theorem. For the latter, we place its third level, Q Sigma_3, into NEXP using the Ellipsoid Method for efficiently solving semidefinite programs. These results yield two implications for QMA(2), the variant of Quantum Merlin-Arthur (QMA) with two unentangled proofs, a complexity class whose characterization has proven difficult. First, if QCPH = QPH (i.e., alternating quantifiers are sufficiently powerful so as to make classical and quantum proofs "equivalent"), then QMA(2) is in the Counting Hierarchy (specifically, in P^{PP^{PP}}). Second, unless QMA(2) = Q Sigma_3 (i.e., alternating quantifiers do not help in the presence of "unentanglement"), QMA(2) is strictly contained in NEXP.
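
    Restated symbolically (directly from the results above, with Q\Sigma_3 denoting the third level of QPH):

        Q\Sigma_3 \subseteq \mathrm{NEXP}
        \mathrm{QCPH} = \mathrm{QPH} \;\Longrightarrow\; \mathrm{QMA}(2) \subseteq \mathrm{P}^{\mathrm{PP}^{\mathrm{PP}}}
        \mathrm{QMA}(2) \neq Q\Sigma_3 \;\Longrightarrow\; \mathrm{QMA}(2) \subsetneq \mathrm{NEXP}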

    On Valiant's Conjecture: Impossibility of Incrementally Verifiable Computation from Random Oracles

    In his landmark paper at TCC 2008, Paul Valiant introduced the notion of "incrementally verifiable computation", which enables a prover to incrementally compute a succinct proof of correct execution of a (potentially) long-running process. The paper later won the 2019 TCC Test of Time Award. The construction was proven secure in the random oracle model without any further computational assumptions. However, the overall proof was given using a non-standard version of the random-oracle methodology where sometimes the hash function is a random oracle and sometimes it has a short description as a circuit. Valiant clearly noted that this model is non-standard, but conjectured that the standard random-oracle methodology would not suffice. This conjecture has been open for 14 years. We prove that under some mild extra assumptions on the proof system the conjecture is true: the standard random-oracle model does not allow incrementally verifiable computation without making computational assumptions. Two extra assumptions under which we can prove the conjecture are 1) the proof system is also zero-knowledge, or 2) when the proof system makes a query to its random oracle it can know with non-negligible probability whether the query is fresh or was made by the proof system earlier in the construction of the proof.
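
    To fix ideas about what is being proven incrementally, here is a minimal interface sketch of incrementally verifiable computation in Python; the names and types are our own illustrative assumptions and do not describe Valiant's construction or any concrete proof system.

        from typing import Callable, Optional, Tuple

        State = bytes  # machine configuration after some number of steps
        Proof = bytes  # succinct proof; its size should not grow with the number of steps

        class IVC:
            """Illustrative interface for incrementally verifiable computation (hypothetical names)."""

            def __init__(self, step: Callable[[State], State]):
                self.step = step  # one step of the (potentially very long-running) computation

            def prove_step(self, state: State, proof: Optional[Proof]) -> Tuple[State, Proof]:
                """Advance the computation by one step and update the proof. The updated proof
                certifies the entire execution so far, yet it is computed from only the current
                state and the previous proof."""
                raise NotImplementedError  # supplied by a concrete proof system

            def verify(self, initial: State, current: State, steps: int, proof: Proof) -> bool:
                """Accept iff `current` is the configuration reached from `initial` after `steps`
                applications of `step`; verification is succinct, i.e. far cheaper than re-running
                the whole computation."""
                raise NotImplementedError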

    Non-Uniformly Sound Certificates with Applications to Concurrent Zero-Knowledge

    We introduce the notion of non-uniformly sound certificates: succinct single-message (unidirectional) argument systems that satisfy a "best-possible security" against non-uniform polynomial-time attackers. In particular, no polynomial-time attacker with s bits of non-uniform advice can find significantly more than s accepting proofs for false statements. Our first result is a construction of non-uniformly sound certificates for all NP in the random oracle model, where the attacker's advice can depend arbitrarily on the random oracle. We next show that the existence of non-uniformly sound certificates for P (and collision-resistant hash functions) yields a public-coin constant-round fully concurrent zero-knowledge argument for NP.
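
    Read quantitatively, the soundness guarantee above says the following (our informal symbolic rendering; the exact threshold behind "significantly more than s" is made precise in the paper, not here):

        \Pr\Big[\, \mathcal{A}(z) \text{ outputs } k \text{ distinct pairs } (x_i, \pi_i) \text{ with every }
            x_i \text{ false, every } \pi_i \text{ accepting, and } k \gg s \,\Big] \;\le\; \mathrm{negl},
        \qquad \text{for every polynomial-time } \mathcal{A} \text{ and advice } z \in \{0,1\}^{s}.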