    Interactive Oracle Proofs with Constant Rate and Query Complexity

    We study interactive oracle proofs (IOPs) [BCS16, RRR16], which combine aspects of probabilistically checkable proofs (PCPs) and interactive proofs (IPs). We present IOP constructions and techniques that enable us to obtain tradeoffs in proof length versus query complexity that are not known to be achievable via PCPs or IPs alone. Our main results are:
    1. Circuit satisfiability has 3-round IOPs with linear proof length (counted in bits) and constant query complexity.
    2. Reed-Solomon codes have 2-round IOPs of proximity with linear proof length and constant query complexity.
    3. Tensor product codes have 1-round IOPs of proximity with sublinear proof length and constant query complexity.
    For all of the above, known PCP constructions give quasilinear proof length and constant query complexity [BS08, Din07]. Also, for circuit satisfiability, [BKKMS13] obtain PCPs with linear proof length but sublinear (and super-constant) query complexity. As in [BKKMS13], we rely on algebraic-geometry codes to obtain our first result; but, unlike that work, our use of such codes is much "lighter" because we do not rely on any automorphisms of the code. We obtain our results by proving and combining "IOP-analogues" of tools underlying numerous IPs and PCPs:
    * Interactive proof composition. Proof composition [AS98] is used to reduce the query complexity of PCP verifiers, at the cost of increasing proof length by an additive factor that is exponential in the verifier's randomness complexity. We prove a composition theorem for IOPs where this additive factor is linear.
    * Sublinear sumcheck. The sumcheck protocol [LFKN92] is an IP that enables the verifier to check the sum of values of a low-degree multivariate polynomial on an exponentially large hypercube, but the verifier's running time depends linearly on the bound on individual degrees. We prove a sumcheck protocol for IOPs where this dependence is sublinear (e.g., polylogarithmic).
    Our work demonstrates that even constant-round IOPs are more efficient than known PCPs and IPs.
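    The sumcheck protocol referenced above is concrete enough to sketch end to end. Below is a minimal Python transcript of the classical (IP, not IOP) sumcheck of [LFKN92] with an honest prover; the field modulus, the example polynomial f, and the degree bound are illustrative assumptions, and the paper's sublinear-verifier IOP variant is not reproduced here.

        import itertools
        import random

        P = 2**31 - 1   # prime field modulus (toy parameter, an assumption)
        N_VARS = 3      # number of variables of f
        DEG = 1         # bound on the individual degree of f in each variable

        def f(x):
            # example polynomial of individual degree 1 (an assumption)
            x1, x2, x3 = x
            return (x1 * x2 + 2 * x2 * x3 + x3 + 5) % P

        def lagrange_eval(points, r):
            # evaluate, at r, the unique polynomial through the (x, y) points
            total = 0
            for i, (xi, yi) in enumerate(points):
                num = den = 1
                for j, (xj, _) in enumerate(points):
                    if i != j:
                        num = num * (r - xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total

        def partial_sum(prefix, t):
            # prover's work: sum f(prefix, t, *) over all boolean suffixes
            k = N_VARS - len(prefix) - 1
            return sum(f(tuple(prefix) + (t,) + s)
                       for s in itertools.product((0, 1), repeat=k)) % P

        claim = sum(f(x) for x in itertools.product((0, 1), repeat=N_VARS)) % P

        prefix, running = [], claim
        for _ in range(N_VARS):
            g = [(t, partial_sum(prefix, t)) for t in range(DEG + 1)]  # prover's message
            assert (g[0][1] + g[1][1]) % P == running  # verifier: g(0)+g(1) = claim
            r = random.randrange(P)                    # verifier's random challenge
            running = lagrange_eval(g, r)
            prefix.append(r)

        assert f(tuple(prefix)) == running  # final check: one evaluation of f
        print("sumcheck accepted; sum over the hypercube =", claim)

    Note how the verifier's per-round work is a constant number of field operations plus one final evaluation of f; the abstract's sublinear-sumcheck result concerns making the verifier's dependence on the individual degree bound sublinear.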

    Fast Reed-Solomon Interactive Oracle Proofs of Proximity

    The family of Reed-Solomon (RS) codes plays a prominent role in the construction of quasilinear probabilistically checkable proofs (PCPs) and interactive oracle proofs (IOPs) with perfect zero knowledge and polylogarithmic verifiers. The large concrete computational complexity required to prove membership in RS codes is one of the biggest obstacles to deploying such PCP/IOP systems in practice.
    To advance on this problem we present a new interactive oracle proof of proximity (IOPP) for RS codes; we call it the Fast RS IOPP (FRI) because (i) it resembles the ubiquitous Fast Fourier Transform (FFT) and (ii) the arithmetic complexity of its prover is strictly linear and that of the verifier is strictly logarithmic (in comparison, FFT arithmetic complexity is quasi-linear but not strictly linear). Prior RS IOPPs and PCPs of proximity (PCPPs) required super-linear proving time even for polynomially large query complexity. For codes of block-length N, the arithmetic complexity of the (interactive) FRI prover is less than 6 * N, while the (interactive) FRI verifier has arithmetic complexity <= 21 * log N, query complexity 2 * log N, and constant soundness: words that are delta-far from the code are rejected with probability min{delta * (1-o(1)), delta_0}, where delta_0 is a positive constant that depends mainly on the code rate.
    The particular combination of query complexity and soundness obtained by FRI is better than that of the quasilinear PCPP of [Ben-Sasson and Sudan, SICOMP 2008], even with the tighter soundness analysis of [Ben-Sasson et al., STOC 2013; ECCC 2016]; consequently, FRI is likely to facilitate better concretely efficient zero knowledge proof and argument systems. Previous concretely efficient PCPPs and IOPPs suffered a constant multiplicative factor loss in soundness with each round of "proof composition" and thus used at most O(log log N) rounds. We show that when delta is smaller than the unique decoding radius of the code, FRI suffers only a negligible additive loss in soundness. This observation allows us to increase the number of "proof composition" rounds to Theta(log N) and thereby reduce prover and verifier running time for fixed soundness.
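    The "FFT-like" structure of FRI can be illustrated with a single folding round. The following Python sketch is a toy under stated assumptions (a small prime field, a low-degree polynomial given by its coefficients); it omits the Merkle commitments, the evaluation-domain view, and the Theta(log N) repeated rounds of the actual protocol, and shows only the degree-halving fold together with the verifier's consistency check.

        import random

        P = 97  # small prime field for illustration (an assumption)

        def poly_eval(coeffs, x):
            # Horner evaluation of a coefficient list (lowest degree first) mod P
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % P
            return acc

        def fold(coeffs, r):
            # one fold: f(X) = f_e(X^2) + X*f_o(X^2) becomes f_e(Y) + r*f_o(Y)
            even, odd = coeffs[0::2], coeffs[1::2]
            odd += [0] * (len(even) - len(odd))
            return [(e + r * o) % P for e, o in zip(even, odd)]

        f = [random.randrange(P) for _ in range(8)]  # prover's poly, degree < 8
        r = random.randrange(P)                      # verifier's challenge
        f_folded = fold(f, r)                        # degree < 4

        # verifier's spot check at x != 0: the folded value at x^2 must match
        # the combination of f(x) and f(-x) implied by the even/odd split
        x = random.randrange(1, P)
        fe = (poly_eval(f, x) + poly_eval(f, -x % P)) * pow(2, P - 2, P) % P
        fo = (poly_eval(f, x) - poly_eval(f, -x % P)) * pow(2 * x, P - 2, P) % P
        assert poly_eval(f_folded, x * x % P) == (fe + r * fo) % P
        print("fold consistency check passed")

    Each fold halves the degree, so log N rounds reduce the claim to a constant-size one; this is the repeated "proof composition" the abstract refers to.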

    Security of discrete log cryptosystems in the random oracle and the generic model

    We introduce novel security proofs that use combinatorial counting arguments rather than reductions to the discrete logarithm or to the Diffie-Hellman problem. Our security results are sharp and clean, with no polynomial reduction times involved. We consider a combination of the random oracle model and the generic model. This corresponds to assuming an ideal hash function H given by an oracle and an ideal group of prime order q, where the binary encoding of the group elements is useless for cryptographic attacks.
    In this model, we first show that Schnorr signatures are secure against the one-more signature forgery: a generic adversary performing t generic steps, including l sequential interactions with the signer, cannot produce l+1 signatures with a better probability than (t choose 2)/q. We also characterize the differing power of sequential and parallel attacks.
    Secondly, we prove that signed ElGamal encryption is secure against the adaptive chosen ciphertext attack, in which an attacker can arbitrarily use a decryption oracle except for the challenge ciphertext. Moreover, signed ElGamal encryption is secure against the one-more decryption attack: a generic adversary performing t generic steps, including l interactions with the decryption oracle, cannot distinguish the plaintexts of l+1 ciphertexts from random strings with a probability exceeding (t choose 2)/q.
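    For reference, the scheme analyzed in the first result has a very short description. Here is a minimal Python sketch of Schnorr signatures over a prime-order subgroup; the group parameters are toy values chosen only for illustration (an assumption), far below any secure size, and SHA-256 stands in for the ideal hash H.

        import hashlib
        import random

        # toy group: q divides p-1 and g has order q (illustrative parameters)
        p, q = 2267, 103
        g = pow(2, (p - 1) // q, p)
        assert g != 1 and pow(g, q, p) == 1

        def H(r, m):
            # hash to Z_q, playing the role of the random oracle
            data = str(r).encode() + b"|" + m
            return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

        def keygen():
            x = random.randrange(1, q)      # secret key
            return x, pow(g, x, p)          # public key y = g^x

        def sign(x, m):
            k = random.randrange(1, q)      # fresh per-signature nonce
            r = pow(g, k, p)
            c = H(r, m)
            return r, (k + c * x) % q       # signature (r, s)

        def verify(y, m, sig):
            r, s = sig
            c = H(r, m)
            return pow(g, s, p) == r * pow(y, c, p) % p  # g^s = r * y^c

        sk, pk = keygen()
        assert verify(pk, b"hello", sign(sk, b"hello"))
        print("Schnorr sign/verify OK")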

    On the Impossibility of Probabilistic Proofs in Relativized Worlds

    We initiate the systematic study of probabilistic proofs in relativized worlds, where the goal is to understand, for a given oracle, the possibility of "non-trivial" proof systems for deterministic or nondeterministic computations that make queries to the oracle. This question is intimately related to a recent line of work that seeks to improve the efficiency of probabilistic proofs for computations that use functionalities such as cryptographic hash functions and digital signatures, by instantiating them via constructions that are "friendly" to known constructions of probabilistic proofs. Informally, negative results about probabilistic proofs in relativized worlds provide evidence that this line of work is inherent and, conversely, positive results provide a way to bypass it. We prove several impossibility results for probabilistic proofs relative to natural oracles. Our results provide strong evidence that tailoring certain natural functionalities to known probabilistic proofs is inherent.

    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally-testable and locally-decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correcting algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
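    The Hadamard code is the textbook example of local decodability and fits in a few lines. The Python sketch below (a toy illustration, not one of the constructions surveyed in detail) recovers any message bit x_i from a corrupted codeword with two queries per trial, using the identity <x, r> XOR <x, r XOR e_i> = x_i and a majority vote over trials.

        import itertools
        import random

        n = 10
        x = [random.randrange(2) for _ in range(n)]

        def ip(u, v):
            # inner product mod 2
            return sum(a & b for a, b in zip(u, v)) % 2

        # Hadamard encoding: one bit per r in {0,1}^n, i.e. 2^n positions
        code = {r: ip(x, r) for r in itertools.product((0, 1), repeat=n)}

        # adversarially flip a delta fraction of the positions
        delta = 0.05
        for r in random.sample(list(code), int(delta * len(code))):
            code[r] ^= 1

        def local_decode(i, trials=15):
            # each trial reads only 2 positions; each query point is
            # uniformly distributed, so a trial errs w.p. at most 2*delta
            votes = 0
            for _ in range(trials):
                r = tuple(random.randrange(2) for _ in range(n))
                r_flip = tuple(b ^ (j == i) for j, b in enumerate(r))
                votes += code[r] ^ code[r_flip]
            return int(votes > trials // 2)

        decoded = [local_decode(i) for i in range(n)]
        print("recovered message:", decoded == x)  # True with high probability

    The decoder's running time is independent of the codeword length 2^n, which is exactly the sub-linear (here, constant) query behavior the survey discusses; the Hadamard code's exponential length is what the more sophisticated constructions improve on.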

    Hard Properties with (Very) Short PCPPs and Their Applications

    We show that there exist properties that are maximally hard for testing, while still admitting PCPPs with a proof size very close to linear. Specifically, for every fixed ℓ, we construct a property P^(ℓ) ⊆ {0,1}^n satisfying the following: any testing algorithm for P^(ℓ) requires Ω(n) many queries, and yet P^(ℓ) has a constant query PCPP whose proof size is O(n·log^(ℓ) n), where log^(ℓ) denotes the ℓ times iterated log function (e.g., log^(2) n = log log n). The best previously known upper bound on the PCPP proof size for a maximally hard to test property was O(n·polylog(n)). As an immediate application, we obtain stronger separations between the standard testing model and both the tolerant testing model and the erasure-resilient testing model: for every fixed ℓ, we construct a property that has a constant-query tester, but requires Ω(n/log^(ℓ)(n)) queries for every tolerant or erasure-resilient tester.

    Smooth and Strong PCPs

    Probabilistically checkable proofs (PCPs) can be verified based only on a constant amount of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs:
    - A PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim.
    - A PCP is smooth if each location in a proof is queried with equal probability.
    We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard and Reed-Muller based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth.
    Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.

    Predictable arguments of knowledge

    We initiate a formal investigation of the power of predictability for argument-of-knowledge systems for NP. Specifically, we consider private-coin argument systems where the answer of the prover can be predicted, given the private randomness of the verifier; we call such protocols Predictable Arguments of Knowledge (PAoK). Our study encompasses a full characterization of PAoK, showing that such arguments can be made extremely laconic, with the prover sending a single bit, and can be assumed, without loss of generality, to have only one round (i.e., two messages) of communication. We additionally explore PAoK satisfying additional properties (including zero-knowledge and the possibility of re-using the same challenge across multiple executions with the prover), present several constructions of PAoK relying on different cryptographic tools, and discuss applications to cryptography.
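    To make the notion of predictability concrete, here is a toy one-round Python sketch (an illustration of the structure only, not a construction from the paper, and not an argument of knowledge in any formal sense): the statement is y = g^x in a prime-order group, and the verifier computes the prover's expected answer from its private coins alone, before any message is exchanged.

        import random

        p, q = 2267, 103                 # toy group parameters (an assumption)
        g = pow(2, (p - 1) // q, p)

        x = random.randrange(1, q)       # prover's witness
        y = pow(g, x, p)                 # public statement: y = g^x

        # verifier: private-coin challenge; the answer is predicted up front
        r = random.randrange(1, q)
        challenge = pow(g, r, p)         # the single verifier message
        predicted = pow(y, r, p)         # = g^(x*r), kept private

        # prover: computes the single response using the witness
        response = pow(challenge, x, p)  # = g^(r*x)

        print("verifier accepts:", response == predicted)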

    Naor-Yung paradigm with shared randomness and applications

    The Naor-Yung paradigm (Naor and Yung, STOC’90) allows one to generically boost security under chosen-plaintext attacks (CPA) to security against chosen-ciphertext attacks (CCA) for public-key encryption (PKE) schemes. The main idea is to encrypt the plaintext twice (under independent public keys), and to append a non-interactive zero-knowledge (NIZK) proof that the two ciphertexts indeed encrypt the same message. Later work by Camenisch, Chandran, and Shoup (Eurocrypt’09) and Naor and Segev (Crypto’09 and SIAM J. Comput.’12) established that the very same techniques can also be used in the settings of key-dependent message (KDM) and key-leakage attacks (respectively).
    In this paper we study the conditions under which the two ciphertexts in the Naor-Yung construction can share the same random coins. We find that this is possible, provided that the underlying PKE scheme meets an additional simple property. The motivation for re-using the same random coins is that this allows one to design much more efficient NIZK proofs. We showcase such an improvement in the random oracle model, under standard complexity assumptions including Decisional Diffie-Hellman, Quadratic Residuosity, and Subset Sum. The length of the resulting ciphertexts is reduced by 50%, yielding truly efficient PKE schemes achieving CCA security under KDM and key-leakage attacks.
    As an additional contribution, we design the first PKE scheme whose CPA security under KDM attacks can be directly reduced to (low-density instances of) the Subset Sum assumption. The scheme supports key-dependent messages computed via any affine function of the secret key.
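    The shared-randomness saving is easy to see for a concrete scheme such as ElGamal. In the toy Python sketch below (an illustration under assumed toy parameters, with the NIZK of plaintext equality and the paper's extra property of the PKE scheme both omitted as out of scope), encrypting under two independent public keys with one shared r costs three group elements instead of the four needed by two independent ciphertexts.

        import random

        p, q = 2267, 103   # toy group parameters (an assumption)
        g = pow(2, (p - 1) // q, p)

        x1, x2 = random.randrange(1, q), random.randrange(1, q)  # two secret keys
        y1, y2 = pow(g, x1, p), pow(g, x2, p)                    # independent public keys

        def encrypt_shared(m):
            # one shared r: (g^r, m*y1^r, m*y2^r), i.e. 3 elements, not 4;
            # a real construction appends a NIZK that both parts carry the same m
            r = random.randrange(1, q)
            return pow(g, r, p), m * pow(y1, r, p) % p, m * pow(y2, r, p) % p

        def decrypt_first(ct):
            a, c1, _ = ct
            return c1 * pow(a, q - x1, p) % p   # c1 / a^x1, since g has order q

        m = pow(g, random.randrange(1, q), p)   # message encoded in the group
        assert decrypt_first(encrypt_shared(m)) == m
        print("shared-randomness double encryption decrypts correctly")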