    Smooth and Strong PCPs

    Probabilistically checkable proofs (PCPs) can be verified based only on a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs: a PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim, and a PCP is smooth if each location in a proof is queried with equal probability. We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard- and Reed-Muller-based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of Proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth. Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and that of other stable optimization problems.
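
    As a toy illustration of the two features (not this paper's construction), the classical Blum-Luby-Rubinfeld linearity test underlying the Hadamard-based PCP exhibits both: each of its three queries is individually uniform over the proof, and its rejection probability grows roughly in proportion to the oracle's distance from the nearest linear function. A minimal sketch in Python, with all names and parameters ours:

```python
import random

def blr_reject_rate(f, n_bits, trials=10_000):
    """Estimate the rejection probability of the BLR linearity test on an
    oracle f: {0,1}^n_bits -> {0,1}.

    Each individual query x, y, x^y is uniformly distributed over the
    domain, so the test reads every proof location with equal probability
    (smoothness)."""
    rejects = 0
    for _ in range(trials):
        x = random.getrandbits(n_bits)
        y = random.getrandbits(n_bits)
        if f(x) ^ f(y) != f(x ^ y):          # linearity check f(x)+f(y)=f(x+y)
            rejects += 1
    return rejects / trials

def corrupted_parity(delta, n_bits):
    """Oracle at relative distance ~delta from the linear function parity(x)."""
    corrupted = {x for x in range(2 ** n_bits) if random.random() < delta}
    return lambda x: (bin(x).count("1") & 1) ^ (1 if x in corrupted else 0)

random.seed(0)
f = corrupted_parity(delta=0.05, n_bits=10)
print(blr_reject_rate(f, 10))    # rejection rate ~ Theta(delta): "strongness"
```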

    Quasi-Linear Size Zero Knowledge from Linear-Algebraic PCPs

    The seminal result that every language having an interactive proof also has a zero-knowledge interactive proof assumes the existence of one-way functions. Ostrovsky and Wigderson (ISTCS 1993) proved that this assumption is necessary: if one-way functions do not exist, then only languages in BPP have zero-knowledge interactive proofs. Ben-Or et al. (STOC 1988) proved that, nevertheless, every language having a multi-prover interactive proof also has a zero-knowledge multi-prover interactive proof, unconditionally. Their work led to, among many other things, a line of work studying zero knowledge without intractability assumptions. In this line of work, Kilian, Petrank, and Tardos (STOC 1997) defined and constructed zero-knowledge probabilistically checkable proofs (PCPs). While PCPs with quasilinear proof length, but without zero knowledge, are known, no such result is known for zero-knowledge PCPs. In this work, we show how to construct "2-round" PCPs that are zero knowledge and of length \tilde{O}(K), where K is the number of queries made by a malicious polynomial-time verifier. Previous solutions required PCPs of length at least K^6 to maintain zero knowledge. In this model, which we call duplex PCP (DPCP), the verifier first receives an oracle string from the prover, then replies with a message, and then receives another oracle string from the prover; a malicious verifier can make up to K queries in total to both oracles. Deviating from previous works, our constructions do not invoke the PCP Theorem as a black box but instead rely on certain algebraic properties of a specific family of PCPs. We show that if the PCP has a certain linear-algebraic structure, which many central constructions can be shown to possess, including [BFLS91, ALMSS98, BS08], we can add the zero-knowledge property at virtually no cost (up to additive lower-order terms) while introducing only minor modifications in the algorithms of the prover and verifier. We believe that our linear-algebraic characterization of PCPs may be of independent interest, as it gives a simplified way to view previous well-studied PCP constructions.
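
    The duplex PCP interaction pattern described above can be made concrete with a schematic sketch (ours, with a trivial toy claim standing in for an actual PCP): the prover sends an oracle, the verifier replies with one message, the prover sends a second oracle, and a global budget caps the total number of queries to both oracles at K.

```python
from dataclasses import dataclass
import random

# Schematic sketch (names ours) of the duplex PCP (DPCP) interaction
# pattern: first oracle, one verifier message, second oracle, and at
# most K queries in total to both oracles.

class QueryBudget:
    def __init__(self, k):
        self.k = k                           # global budget: <= K queries overall
    def spend(self):
        assert self.k > 0, "query budget exhausted"
        self.k -= 1

@dataclass
class Oracle:
    data: list
    def query(self, i, budget):
        budget.spend()
        return self.data[i]

def run_dpcp(s, k=8):
    """Toy instantiation: pi1 holds a string, the verifier's message is a
    random shift rho, and pi2 must hold the same string shifted by rho."""
    budget = QueryBudget(k)
    pi1 = Oracle(list(s))                    # round 1: first oracle string
    rho = random.randrange(len(s))           # verifier's single message
    pi2 = Oracle(list(s[rho:] + s[:rho]))    # round 2: second oracle string
    for _ in range(k // 2):                  # spot-check consistency
        i = random.randrange(len(s))
        if pi2.query(i, budget) != pi1.query((i + rho) % len(s), budget):
            return False
    return True

random.seed(1)
print(run_dpcp("duplex pcp demo"))           # True for an honest prover
```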

    Fast Reed-Solomon Interactive Oracle Proofs of Proximity

    The family of Reed-Solomon (RS) codes plays a prominent role in the construction of quasilinear probabilistically checkable proofs (PCPs) and interactive oracle proofs (IOPs) with perfect zero knowledge and polylogarithmic verifiers. The large concrete computational complexity required to prove membership in RS codes is one of the biggest obstacles to deploying such PCP/IOP systems in practice. To advance on this problem we present a new interactive oracle proof of proximity (IOPP) for RS codes; we call it the Fast RS IOPP (FRI) because (i) it resembles the ubiquitous Fast Fourier Transform (FFT) and (ii) the arithmetic complexity of its prover is strictly linear and that of the verifier is strictly logarithmic (in comparison, FFT arithmetic complexity is quasi-linear but not strictly linear). Prior RS IOPPs and PCPs of proximity (PCPPs) required super-linear proving time even for polynomially large query complexity. For codes of block length N, the arithmetic complexity of the (interactive) FRI prover is less than 6 * N, while the (interactive) FRI verifier has arithmetic complexity <= 21 * log N, query complexity 2 * log N, and constant soundness: words that are delta-far from the code are rejected with probability min{delta * (1-o(1)), delta_0}, where delta_0 is a positive constant that depends mainly on the code rate. The particular combination of query complexity and soundness obtained by FRI is better than that of the quasilinear PCPP of [Ben-Sasson and Sudan, SICOMP 2008], even with the tighter soundness analysis of [Ben-Sasson et al., STOC 2013; ECCC 2016]; consequently, FRI is likely to facilitate better concretely efficient zero-knowledge proof and argument systems. Previous concretely efficient PCPPs and IOPPs suffered a constant multiplicative factor loss in soundness with each round of "proof composition" and thus used at most O(log log N) rounds. We show that when delta is smaller than the unique decoding radius of the code, FRI suffers only a negligible additive loss in soundness. This observation allows us to increase the number of "proof composition" rounds to Theta(log N) and thereby reduce prover and verifier running time for fixed soundness.
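
    For intuition on why FRI resembles the FFT: its core step folds the evaluation table of f in half, using the even/odd decomposition f(x) = f_e(x^2) + x*f_o(x^2) and a random verifier challenge beta to form f'(y) = f_e(y) + beta*f_o(y) on the squared domain. A minimal sketch of one folding round over a small prime field (toy parameters ours, not production choices):

```python
# One FRI folding round over F_p (toy sketch). Writing
# f(x) = f_e(x^2) + x*f_o(x^2), the folded function is
# f'(y) = f_e(y) + beta*f_o(y) on the squared domain.

P = 97                                  # small prime field F_97
INV2 = pow(2, -1, P)

def fold(evals, domain, beta):
    """Fold evaluations of f on a multiplicative domain closed under
    negation (domain[i + n//2] == -domain[i]) into evaluations of f'
    on the squared domain of half the size."""
    n = len(evals)
    half = n // 2
    out_domain = [(domain[i] * domain[i]) % P for i in range(half)]
    out = []
    for i in range(half):
        x, fx, fmx = domain[i], evals[i], evals[i + half]
        f_even = (fx + fmx) * INV2 % P                    # f_e(x^2)
        f_odd = (fx - fmx) * INV2 * pow(x, -1, P) % P     # f_o(x^2)
        out.append((f_even + beta * f_odd) % P)
    return out, out_domain

# Toy run: f of degree < 8 evaluated on the subgroup of order 8.
g = pow(5, (P - 1) // 8, P)             # generator of the order-8 subgroup
domain = [pow(g, i, P) for i in range(8)]
coeffs = [3, 1, 4, 1, 5, 9, 2, 6]
f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
folded, dom2 = fold([f(x) for x in domain], domain, beta=7)
print(folded, dom2)                     # a degree-<4 function on 4 points
```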

    Relaxed Locally Correctable Codes with Improved Parameters

    Locally decodable codes (LDCs) are error-correcting codes C : Σ^k → Σ^n that admit a local decoding algorithm that recovers each individual bit of the message by querying only a few bits from a noisy codeword. An important question in this line of research is to understand the optimal trade-off between the query complexity of LDCs and their block length. Despite the importance of these objects, the best known constructions of constant-query LDCs have super-polynomial length, and there is a significant gap between the best constructions and the known lower bounds in terms of the block length. For many applications it suffices to consider the weaker notion of relaxed LDCs (RLDCs), which allows the local decoding algorithm to abort if by querying a few bits it detects that the input is not a codeword. This relaxation turned out to allow decoding algorithms with constant query complexity for codes with almost linear length. Specifically, [Ben-Sasson et al., 2006] constructed a q-query RLDC that encodes a message of length k using a codeword of block length n = O_q(k^{1+O(1/√q)}) for any sufficiently large q, where O_q(·) hides some constant that depends only on q. In this work we improve the parameters of [Ben-Sasson et al., 2006] by constructing a q-query RLDC that encodes a message of length k using a codeword of block length O_q(k^{1+O(1/q)}) for any sufficiently large q. This construction matches (up to a multiplicative constant factor) the lower bounds of [Katz and Trevisan, 2000; Woodruff, 2007] for constant-query LDCs, thus making progress toward understanding the gap between LDCs and RLDCs in the constant-query regime. In fact, our construction extends to the stronger notion of relaxed locally correctable codes (RLCCs), introduced in [Gur et al., 2018], where given a noisy codeword the correcting algorithm either recovers each individual bit of the codeword by only reading a small part of the input, or aborts if the input is detected to be corrupt.
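
    To make the abort semantics concrete, here is a toy relaxed local decoder (ours, unrelated to the construction above) for the Hadamard code: message bit i equals C(y) xor C(y xor e_i) for every y when the codeword is clean, so the decoder takes two independent estimates and aborts when they disagree.

```python
import random

# Toy relaxed local decoder (ours) for the Hadamard code
# C(m)[x] = <m, x> mod 2, indexed by x in {0,1}^k. The relaxed decoder
# makes two independent 2-query probes and outputs the abort symbol
# None when they disagree.

def hadamard_encode(m_bits):
    k = len(m_bits)
    return [sum(m_bits[j] & (x >> j) & 1 for j in range(k)) % 2
            for x in range(2 ** k)]

def relaxed_decode(word, k, i):
    estimates = []
    for _ in range(2):                       # two independent 2-query probes
        y = random.getrandbits(k)
        estimates.append(word[y] ^ word[y ^ (1 << i)])
    if estimates[0] != estimates[1]:
        return None                          # abort: corruption detected
    return estimates[0]

random.seed(2)
m = [1, 0, 1, 1]
word = hadamard_encode(m)
word[5] ^= 1                                 # flip one codeword position
print([relaxed_decode(word, 4, i) for i in range(4)])
```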

    Derandomized Parallel Repetition via Structured PCPs

    A PCP is a proof system for NP in which the proof can be checked by a probabilistic verifier. The verifier is only allowed to read a very small portion of the proof, and in return is allowed to err with some bounded probability. The probability that the verifier accepts a false proof is called the soundness error, and is an important parameter of a PCP system that one seeks to minimize. Constructing PCPs with sub-constant soundness error and, at the same time, a minimal number of queries into the proof (namely two) is especially important due to applications for inapproximability. In this work we construct such PCP verifiers, i.e., PCPs that make only two queries and have sub-constant soundness error. Our construction can be viewed as a combinatorial alternative to the "manifold vs. point" construction, which is the only construction in the literature for this parameter range. The "manifold vs. point" PCP is based on a low-degree test, while our construction is based on a direct product test. We also extend our construction to yield a decodable PCP (dPCP) with the same parameters. By plugging this dPCP into the scheme of Dinur and Harsha (FOCS 2009), one gets an alternative construction of the result of Moshkovitz and Raz (FOCS 2008), namely a construction of two-query PCPs with small soundness error and small alphabet size. Our construction of a PCP is based on extending the derandomized direct product test of Impagliazzo, Kabanets and Wigderson (STOC 2009) to a derandomized parallel repetition theorem. More accurately, our PCP construction is obtained in two steps. We first prove a derandomized parallel repetition theorem for specially structured PCPs. Then, we show that any PCP can be transformed into one that has the required structure, by embedding it on a de Bruijn graph.
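
    For intuition on the parameter being derandomized: repeating a verifier k times in parallel ideally drives the soundness error from s toward s^k (the true decay for multi-prover parallel repetition is more subtle), but each repetition multiplies the randomness and hence the proof size, which is what a derandomized repetition theorem avoids. A toy numeric illustration (ours) of the idealized trade-off:

```python
import random

# Toy illustration (ours) of what parallel repetition buys and what it
# costs in the idealized, fully independent case: a single check accepts
# a false proof with probability s; k independent repetitions accept it
# with probability about s^k, but use k times the randomness.

def naive_parallel_repetition(s, k, trials=200_000):
    accepts = 0
    for _ in range(trials):
        # a false proof survives only if every repetition accepts
        if all(random.random() < s for _ in range(k)):
            accepts += 1
    return accepts / trials

random.seed(3)
s, r_bits = 0.5, 20                      # base soundness error, randomness
for k in (1, 2, 4, 8):
    est = naive_parallel_repetition(s, k)
    print(f"k={k}: soundness ~{est:.4f} (ideal {s**k:.4f}), "
          f"randomness {k * r_bits} bits")
```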

    Interactive Oracle Proofs with Constant Rate and Query Complexity

    We study interactive oracle proofs (IOPs) [BCS16,RRR16], which combine aspects of probabilistically checkable proofs (PCPs) and interactive proofs (IPs). We present IOP constructions and techniques that enable us to obtain tradeoffs in proof length versus query complexity that are not known to be achievable via PCPs or IPs alone. Our main results are: 1. Circuit satisfiability has 3-round IOPs with linear proof length (counted in bits) and constant query complexity. 2. Reed-Solomon codes have 2-round IOPs of proximity with linear proof length and constant query complexity. 3. Tensor product codes have 1-round IOPs of proximity with sublinear proof length and constant query complexity. For all the above, known PCP constructions give quasilinear proof length and constant query complexity [BS08,Din07]. Also, for circuit satisfiability, [BKKMS13] obtain PCPs with linear proof length but sublinear (and super-constant) query complexity. As in [BKKMS13], we rely on algebraic-geometry codes to obtain our first result; but, unlike that work, our use of such codes is much "lighter" because we do not rely on any automorphisms of the code. We obtain our results by proving and combining "IOP-analogues" of tools underlying numerous IPs and PCPs: * Interactive proof composition. Proof composition [AS98] is used to reduce the query complexity of PCP verifiers, at the cost of increasing proof length by an additive factor that is exponential in the verifier's randomness complexity. We prove a composition theorem for IOPs where this additive factor is linear. * Sublinear sumcheck. The sumcheck protocol [LFKN92] is an IP that enables the verifier to check the sum of values of a low-degree multi-variate polynomial on an exponentially-large hypercube, but the verifier's running time depends linearly on the bound on individual degrees. We prove a sumcheck protocol for IOPs where this dependence is sublinear (e.g., polylogarithmic). Our work demonstrates that even constant-round IOPs are more efficient than known PCPs and IPs.
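
    To recall the classical sumcheck protocol [LFKN92] referenced above, here is a minimal sketch (ours) for a multilinear polynomial given by its evaluation table on {0,1}^v; the paper's contribution is an IOP variant whose verifier avoids the linear dependence on the individual-degree bound, which this plain version does not attempt.

```python
import random

# Minimal sketch (ours) of the classical sumcheck protocol [LFKN92]:
# the prover convinces the verifier of the sum of a multilinear
# polynomial over {0,1}^v, one variable per round; the final check is
# a single evaluation of the multilinear extension.

P = 2**61 - 1                            # a Mersenne prime field

def multilinear_eval(table, point):
    """Evaluate the multilinear extension of `table` (len 2^v) at `point`."""
    evals = list(table)
    for r in point:                      # fold one variable at a time
        evals = [(evals[2*i] * (1 - r) + evals[2*i + 1] * r) % P
                 for i in range(len(evals) // 2)]
    return evals[0]

def sumcheck(table, v):
    claimed = sum(table) % P
    current, point, layer = claimed, [], list(table)
    for _ in range(v):
        # honest prover sends the round polynomial g(X); g is linear,
        # so its values at 0 and 1 determine it
        half = len(layer) // 2
        g0 = sum(layer[2*i] for i in range(half)) % P
        g1 = sum(layer[2*i + 1] for i in range(half)) % P
        assert (g0 + g1) % P == current  # verifier's round check
        r = random.randrange(P)          # verifier's random challenge
        current = (g0 * (1 - r) + g1 * r) % P
        point.append(r)
        layer = [(layer[2*i] * (1 - r) + layer[2*i + 1] * r) % P
                 for i in range(half)]
    # final check: one oracle query to the multilinear extension
    assert current == multilinear_eval(table, point)
    return claimed

random.seed(4)
table = [random.randrange(P) for _ in range(2**4)]   # v = 4 variables
print(sumcheck(table, 4) == sum(table) % P)          # True
```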

    MPC for Tech Giants (GMPC): Enabling Gulliver and the Lilliputians to Cooperate Amicably

    In this work, we introduce the Gulliver multi-party computation model (GMPC). The GMPC model considers a single highly powerful party, called the server or Gulliver, that is connected to n users over a star topology network (alternatively formulated as a full network, where the server can block any message). The users are significantly less powerful than the server, and, in particular, should have both computation and communication complexities that are polylogarithmic in n. Protocols in the GMPC model should be secure against malicious adversaries that may corrupt a subset of the users and/or the server. Designing protocols in the GMPC model is a delicate task, since users can only hold information about polylog(n) other users (and, in particular, can only communicate with polylog(n) other users). In addition, the server can block any message between any pair of honest parties. Thus, reaching an agreement becomes a challenging task. Nevertheless, we design generic protocols in the GMPC model, assuming that at most an α < 1/6 fraction of the users may be corrupted (in addition to the server). Our main contribution is a variant of Feige's committee election protocol [FOCS 1999] that is secure in the GMPC model. Given this tool we show: 1. Assuming fully homomorphic encryption (FHE), any computationally efficient function with O(n·polylog(n))-size output can be securely computed in the GMPC model. 2. Any function that can be computed by a circuit of O(polylog(n)) depth, O(n·polylog(n)) size, and bounded fan-in and fan-out can be securely computed in the GMPC model without assuming FHE. 3. In particular, sorting can be securely computed in the GMPC model without assuming FHE. This has important applications for the shuffle model of differential privacy, and resolves an open question of Bell et al. [CCS 2020].
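
    In its classical (non-GMPC) form, Feige's committee election is the lightest-bin protocol: every party announces a uniformly random bin, and the parties that chose the least-loaded bin form the committee, so a coalition cannot pack the committee without making its own bin heavy. A toy simulation of the classical protocol (ours; the parameters and the naive adversarial strategy are also ours):

```python
import random
from collections import Counter

# Toy simulation (ours) of Feige's lightest-bin committee election
# [FOCS 1999]: each of n parties announces a uniformly random bin out
# of B, and the parties in the least-loaded bin become the committee.
# Honest parties choose uniformly; here the adversary naively piles
# all corrupted parties into bin 0.

def feige_election(n, n_corrupt, n_bins, rng):
    choices = {}
    for party in range(n):
        if party < n_corrupt:
            choices[party] = 0                       # adversarial strategy
        else:
            choices[party] = rng.randrange(n_bins)   # honest: uniform
    loads = Counter(choices.values())
    lightest = min(range(n_bins), key=lambda b: loads.get(b, 0))
    committee = [p for p, b in choices.items() if b == lightest]
    corrupt_frac = sum(p < n_corrupt for p in committee) / len(committee)
    return len(committee), corrupt_frac

rng = random.Random(5)
print(feige_election(n=10_000, n_corrupt=1_000, n_bins=100, rng=rng))
# committee of ~n/B parties; piling into one bin just avoids that bin
```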

    Resettably Sound Zero-Knowledge Arguments from OWFs - The (Semi) Black-Box Way

    We construct a constant-round resettably-sound zero-knowledge argument of knowledge based on black-box use of any one-way function. Resettable soundness was introduced by Barak, Goldreich, Goldwasser and Lindell [FOCS 01] and is a strengthening of the soundness requirement in interactive proofs, demanding that soundness should hold even if the malicious prover is allowed to “reset” and “restart” the verifier. In their work they show that resettably-sound ZK arguments require non-black-box simulation techniques, and also provide the first construction, based on the breakthrough simulation technique of Barak [FOCS 01]. All known implementations of Barak’s non-black-box technique required non-black-box use of a collision-resistant hash function (CRHF). Very recently, Goyal, Ostrovsky, Scafuro and Visconti [STOC 14] showed an implementation of Barak’s technique that needs only black-box access to a collision-resistant hash function while still having a non-black-box simulator. (Such a construction is referred to as semi black-box.) Plugging this implementation into the BGGL construction yields the first resettably-sound ZK arguments based on black-box use of CRHFs. However, from the work of Chung, Pass and Seth [STOC 13] and Bitansky and Paneth [STOC 13], we know that resettably-sound ZK arguments can be constructed from non-black-box use of any one-way function (OWF), which is the minimal assumption for ZK arguments. Hence, a natural question is whether it is possible to construct resettably-sound zero-knowledge arguments from black-box use of any OWF only. In this work we provide a positive answer to this question, thus closing the gap between black-box and non-black-box constructions for resettably-sound ZK arguments.
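
    A standard ingredient for resettable soundness, used already by BGGL, is to derandomize the verifier with a pseudorandom function: the verifier's coins are computed as PRF_k(transcript so far), so resetting the verifier and replaying the same prover messages reproduces identical behavior and gains the prover nothing. A schematic sketch (ours), with HMAC-SHA256 standing in for the PRF:

```python
import hmac, hashlib, os

# Schematic sketch (ours) of the PRF-derandomization trick behind
# resettable soundness: instead of fresh coins, the verifier derives
# its randomness as a deterministic function of the messages received,
# so a resetting prover that replays the same messages sees the same
# coins. HMAC-SHA256 stands in for the PRF.

class ResettableVerifier:
    def __init__(self):
        self.key = os.urandom(32)        # PRF key, fixed once at start
        self.transcript = b""

    def reset(self):
        self.transcript = b""            # the adversary may rewind us...

    def coins(self, prover_msg: bytes) -> bytes:
        # ...but our "randomness" is a function of the transcript so
        # far, so replays see identical coins
        self.transcript += prover_msg
        return hmac.new(self.key, self.transcript, hashlib.sha256).digest()

v = ResettableVerifier()
c1 = v.coins(b"commitment")
v.reset()
c2 = v.coins(b"commitment")              # same message after a reset...
print(c1 == c2)                          # ...same coins: True
```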

    Parallelism with limited nondeterminism

    Computational complexity theory studies which computational problems can be solved with limited access to resources. The past fifty years have seen a focus on the relationship between intractable problems and efficient algorithms. However, the relationship between inherently sequential problems and highly parallel algorithms has not been as well studied. Are there efficient but inherently sequential problems that admit some relaxed form of highly parallel algorithm? In this dissertation, we develop the theory of structural complexity around this relationship for three common types of computational problems. Specifically, we show tradeoffs between time, nondeterminism, and parallelizability. By clearly defining the notions and complexity classes that capture our intuition for parallelizable and sequential problems, we create a comprehensive framework for rigorously proving parallelizability and non-parallelizability of computational problems. This framework provides the means to prove whether otherwise tractable problems can be effectively parallelized, a need highlighted by the current growth of multiprocessor systems. The views adopted by this dissertation (alternate approaches to solving sequential problems using approximation, limited nondeterminism, and parameterization) can be applied practically throughout computer science.