
    Smooth and Strong PCPs

    Probabilistically checkable proofs (PCPs) can be verified based only on a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs: a PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim; a PCP is smooth if each location in a proof is queried with equal probability. We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard- and Reed-Muller-based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of Proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth. Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.
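    For intuition, here is a minimal Python sketch of the smoothness property, using the classical 3-query BLR/Hadamard verifier as a toy example; this illustrates the definition only and is not the paper's construction. It enumerates the verifier's randomness and confirms that every proof location is queried equally often.

```python
# Sketch: checking *smoothness* of a toy 3-query PCP verifier
# (illustration of the definition, not the paper's construction).
# The verifier is the BLR linearity test on a proof of length 2^n:
# pick x, y uniformly, query positions x, y, x XOR y, and accept iff
# pi[x] ^ pi[y] == pi[x ^ y].  Smoothness means every proof location
# is queried with the same probability.
from collections import Counter
from itertools import product

n = 3
N = 2 ** n  # proof length

def queries(x, y):
    """Positions read by the BLR verifier on randomness (x, y)."""
    return [x, y, x ^ y]

# Enumerate all random choices and count how often each location is read.
counts = Counter()
for x, y in product(range(N), repeat=2):
    counts.update(queries(x, y))

expected = 3 * N * N // N  # total queries spread evenly over N locations
assert all(counts[i] == expected for i in range(N)), "not smooth"
print("verifier is smooth: every location queried", expected, "times")
```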

    Quantum Locally Testable Codes

    We initiate the study of quantum Locally Testable Codes (qLTCs). We provide a definition together with a simplification, denoted sLTCs, for the special case of stabilizer codes, and some basic results using those definitions. The most crucial parameter of such codes is their soundness, R(δ): the probability that a randomly chosen constraint is violated as a function of the distance of a word from the code (δ, the relative distance from the code, is called the proximity). We then proceed to study limitations on qLTCs. In our first main result we prove a surprising, inherently quantum, property of sLTCs: for small values of proximity, the better the small-set expansion of the interaction graph of the constraints, the less sound the qLTC becomes. This phenomenon, which can be attributed to monogamy of entanglement, stands in sharp contrast to the classical setting. The complementary, more intuitive, result also holds: the soundness is upper-bounded when the code is defined on poor small-set expanders (a bound which turns out to be far more difficult to show in the quantum case). Together we arrive at a quantum upper bound on the soundness of stabilizer qLTCs set on any graph, which does not hold in the classical case. Many open questions are raised regarding what parameters are achievable for qLTCs. In the appendix we also define a quantum analogue of PCPs of proximity (PCPPs) and point out that the result of Ben-Sasson et al., by which PCPPs imply LTCs with related parameters, carries over to sLTCs. This creates a first link between qLTCs and quantum PCPs. (Some of the results presented here appeared in an initial form in our quant-ph submission arXiv:1301.3407; this is a much extended and improved version.)
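    To make the soundness function concrete, the following toy Python sketch computes R(δ) exhaustively for a small *classical* code (the n-bit repetition code with cyclic equality constraints); the code and constraints are illustrative choices of ours, not taken from the paper, which studies the quantum analogue.

```python
# Sketch of the soundness function R(delta) for a tiny classical LTC,
# to illustrate the quantity the paper studies for quantum codes.
# Toy code: the n-bit repetition code, tested by the n constraints
# w[i] == w[(i+1) % n].  R(delta) is the worst-case (minimum) fraction
# of violated constraints over all words at relative distance delta.
from itertools import product

n = 8
codewords = [(0,) * n, (1,) * n]

def dist_to_code(w):
    """Relative Hamming distance of w from the code (the proximity)."""
    return min(sum(a != b for a, b in zip(w, c)) for c in codewords) / n

def violated_fraction(w):
    """Fraction of the n cyclic equality constraints that w violates."""
    return sum(w[i] != w[(i + 1) % n] for i in range(n)) / n

R = {}
for w in product((0, 1), repeat=n):
    d = dist_to_code(w)
    R[d] = min(R.get(d, 1.0), violated_fraction(w))

for delta in sorted(R):
    print(f"delta = {delta:.3f}  R(delta) = {R[delta]:.3f}")
```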

    Relaxed Locally Correctable Codes

    Locally decodable codes (LDCs) and locally correctable codes (LCCs) are error-correcting codes in which individual bits of the message and codeword, respectively, can be recovered by querying only a few bits from a noisy codeword. These codes have found numerous applications both in theory and in practice. A natural relaxation of LDCs, introduced by Ben-Sasson et al. (SICOMP, 2006), allows the decoder to reject (i.e., refuse to answer) in case it detects that the codeword is corrupt. They call such a decoder a relaxed decoder and construct a constant-query relaxed LDC with almost-linear blocklength, which is sub-exponentially better than what is known for (full-fledged) LDCs in the constant-query regime. We consider an analogous relaxation for local correction. Thus, a relaxed local corrector reads only a few bits from a (possibly) corrupt codeword and either recovers the desired bit of the codeword, or rejects in case it detects a corruption. We give two constructions of relaxed LCCs in two regimes, where the first optimizes the query complexity and the second optimizes the rate: 1. Constant Query Complexity: A relaxed LCC with polynomial blocklength whose corrector only reads a constant number of bits of the codeword. This is a sub-exponential improvement over the best constant-query (full-fledged) LCCs that are known. 2. Constant Rate: A relaxed LCC with constant rate (i.e., linear blocklength) with quasi-polylogarithmic query complexity. This is a nearly sub-exponential improvement over the query complexity of a recent (full-fledged) constant-rate LCC of Kopparty et al. (STOC, 2016).
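    As an illustration of the relaxed-correction notion (not the paper's construction), here is a toy Python sketch of a relaxed local corrector for the Hadamard code: it self-corrects a position via two independent random reads and rejects when the two reads disagree.

```python
# Sketch of a *relaxed* local corrector for the Hadamard code, in the
# spirit of the relaxation described above (a toy illustration only).
# The codeword encoding x is C(x)[a] = <x, a> mod 2.  To recover
# position a, read two random "self-correction" pairs; output the
# decoded bit if they agree, and reject ("?") otherwise.
import random

def had_encode(x_bits):
    n = len(x_bits)
    return [sum(xi & ((a >> i) & 1) for i, xi in enumerate(x_bits)) % 2
            for a in range(2 ** n)]

def relaxed_correct(word, a, n):
    votes = []
    for _ in range(2):
        r = random.randrange(2 ** n)
        # For a true codeword, C(x)[a] = C(x)[r] XOR C(x)[a XOR r].
        votes.append(word[r] ^ word[a ^ r])
    return votes[0] if votes[0] == votes[1] else "?"  # "?" = reject

n = 4
code = had_encode([1, 0, 1, 1])
corrupted = code[:]
for i in random.sample(range(len(code)), 2):  # flip a few positions
    corrupted[i] ^= 1
print(relaxed_correct(corrupted, 5, n))  # decoded bit, or "?" if reads disagree
```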

    Derandomized Parallel Repetition via Structured PCPs

    A PCP is a proof system for NP in which the proof can be checked by a probabilistic verifier. The verifier is only allowed to read a very small portion of the proof, and in return is allowed to err with some bounded probability. The probability that the verifier accepts a false proof is called the soundness error, and is an important parameter of a PCP system that one seeks to minimize. Constructing PCPs with sub-constant soundness error and, at the same time, a minimal number of queries into the proof (namely two) is especially important due to applications for inapproximability. In this work we construct such PCP verifiers, i.e., PCPs that make only two queries and have sub-constant soundness error. Our construction can be viewed as a combinatorial alternative to the "manifold vs. point" construction, which is the only construction in the literature for this parameter range. The "manifold vs. point" PCP is based on a low-degree test, while our construction is based on a direct product test. We also extend our construction to yield a decodable PCP (dPCP) with the same parameters. By plugging this dPCP into the scheme of Dinur and Harsha (FOCS 2009) one gets an alternative construction of the result of Moshkovitz and Raz (FOCS 2008), namely: a construction of two-query PCPs with small soundness error and small alphabet size. Our construction of a PCP is based on extending the derandomized direct product test of Impagliazzo, Kabanets and Wigderson (STOC 2009) to a derandomized parallel repetition theorem. More accurately, our PCP construction is obtained in two steps. We first prove a derandomized parallel repetition theorem for specially structured PCPs. Then, we show that any PCP can be transformed into one that has the required structure, by embedding it on a de Bruijn graph.
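    For concreteness, the following Python sketch illustrates the plain (non-derandomized) two-query direct product test that the construction builds on: a table assigns to every k-subset of coordinates a tuple of values, and the test checks two overlapping subsets for consistency on their intersection. The parameters and the sampling of the second subset are simplified choices of ours, for illustration only.

```python
# Sketch of a plain two-query direct product test (toy version).
# A table F assigns to each k-subset S of [n] a k-tuple of values;
# the test reads F on two subsets sharing coordinates and accepts
# iff the answers agree on the intersection.
import random
from itertools import combinations

n, k = 10, 4
f = {i: random.randrange(2) for i in range(n)}          # a global function
F = {S: tuple(f[i] for i in S) for S in combinations(range(n), k)}

def direct_product_test(F):
    S = random.choice(list(F))
    # sample a second subset sharing at least one element with S
    while True:
        T = random.choice(list(F))
        common = set(S) & set(T)
        if common:
            break
    # accept iff F(S) and F(T) agree on every common coordinate
    return all(F[S][S.index(i)] == F[T][T.index(i)] for i in common)

# A table honestly derived from one global f always passes:
print(all(direct_product_test(F) for _ in range(1000)))
```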

    ZK-PCPs from Leakage-Resilient Secret Sharing

    Zero-Knowledge PCPs (ZK-PCPs; Kilian, Petrank, and Tardos, STOC '97) are PCPs with the additional zero-knowledge guarantee that the view of any (possibly malicious) verifier making a bounded number of queries to the proof can be efficiently simulated up to a small statistical distance. Similarly, ZK-PCPs of Proximity (ZK-PCPPs; Ishai and Weiss, TCC '14) are PCPPs in which the view of an adversarial verifier can be efficiently simulated with few queries to the input. Previous ZK-PCP constructions obtained an exponential gap between the query complexity q of the honest verifier and the bound q^* on the queries of a malicious verifier (i.e., q = poly log(q^*)), but required either exponential-time simulation or adaptive honest verification. This should be contrasted with standard PCPs, which can be verified non-adaptively (i.e., with a single round of queries to the proof). The problem of constructing such ZK-PCPs, even when q^* = q, has remained open since they were first introduced more than two decades ago. This question is also open for ZK-PCPPs, for which no construction with non-adaptive honest verification is known (not even with exponential-time simulation). We resolve this question by constructing the first ZK-PCPs and ZK-PCPPs which simultaneously achieve efficient zero-knowledge simulation and non-adaptive honest verification. Our schemes have a square-root query gap, namely q^*/q = O(√n), where n is the input length. Our constructions combine the "MPC-in-the-head" technique (Ishai et al., STOC '07) with leakage-resilient secret sharing. Specifically, we use the MPC-in-the-head technique to construct a ZK-PCP variant over a large alphabet, then employ leakage-resilient secret sharing to design a new alphabet reduction for ZK-PCPs which preserves zero-knowledge.
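    Leakage-resilient secret sharing is the paper's key tool; as background, here is a minimal Python sketch of plain Shamir secret sharing, the standard primitive that leakage-resilient schemes extend. Note this vanilla scheme offers no leakage resilience by itself and is shown only to ground the terminology.

```python
# Sketch of plain Shamir secret sharing over a prime field -- the basic
# primitive on top of which leakage-resilient sharing schemes are built.
# (Background illustration; not the paper's leakage-resilient scheme.)
import random

P = 2 ** 61 - 1  # a prime modulus

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=2, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of 5 shares suffice
```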

    A PCP Characterization of AM

    We introduce a 2-round stochastic constraint-satisfaction problem, and show that its approximation version is complete for (the promise version of) the complexity class AM. This gives a 'PCP characterization' of AM analogous to the PCP Theorem for NP. Similar characterizations have been given for higher levels of the Polynomial Hierarchy, and for PSPACE; however, we suggest that the result for AM might be of particular significance for attempts to derandomize this class. To test this notion, we pose some 'Randomized Optimization Hypotheses' related to our stochastic CSPs that (in light of our result) would imply collapse results for AM. Unfortunately, the hypotheses appear over-strong, and we present evidence against them. In the process we show that, if some language in NP is hard-on-average against circuits of size 2^{Omega(n)}, then there exist hard-on-average optimization problems of a particularly elegant form. All our proofs use a powerful form of PCPs known as Probabilistically Checkable Proofs of Proximity, and demonstrate their versatility. We also use known results on randomness-efficient soundness- and hardness-amplification. In particular, we make essential use of the Impagliazzo-Wigderson generator; our analysis relies on a recent Chernoff-type theorem for expander walks.

    Fast Reed-Solomon Interactive Oracle Proofs of Proximity

    The family of Reed-Solomon (RS) codes plays a prominent role in the construction of quasilinear probabilistically checkable proofs (PCPs) and interactive oracle proofs (IOPs) with perfect zero knowledge and polylogarithmic verifiers. The large concrete computational complexity required to prove membership in RS codes is one of the biggest obstacles to deploying such PCP/IOP systems in practice. To advance on this problem we present a new interactive oracle proof of proximity (IOPP) for RS codes; we call it the Fast RS IOPP (FRI) because (i) it resembles the ubiquitous Fast Fourier Transform (FFT) and (ii) the arithmetic complexity of its prover is strictly linear and that of the verifier is strictly logarithmic (in comparison, FFT arithmetic complexity is quasi-linear but not strictly linear). Prior RS IOPPs and PCPs of proximity (PCPPs) required super-linear proving time even for polynomially large query complexity. For codes of block-length N, the arithmetic complexity of the (interactive) FRI prover is less than 6 * N, while the (interactive) FRI verifier has arithmetic complexity <= 21 * log N, query complexity 2 * log N, and constant soundness: words that are delta-far from the code are rejected with probability min{delta * (1 - o(1)), delta_0}, where delta_0 is a positive constant that depends mainly on the code rate. The particular combination of query complexity and soundness obtained by FRI is better than that of the quasilinear PCPP of [Ben-Sasson and Sudan, SICOMP 2008], even with the tighter soundness analysis of [Ben-Sasson et al., STOC 2013; ECCC 2016]; consequently, FRI is likely to facilitate better concretely efficient zero-knowledge proof and argument systems. Previous concretely efficient PCPPs and IOPPs suffered a constant multiplicative factor loss in soundness with each round of "proof composition" and thus used at most O(log log N) rounds. We show that when delta is smaller than the unique decoding radius of the code, FRI suffers only a negligible additive loss in soundness. This observation allows us to increase the number of "proof composition" rounds to Theta(log N) and thereby reduce prover and verifier running time for fixed soundness.
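    To illustrate the FFT-like structure, here is a toy Python sketch of the core FRI folding step over a small prime field: split f into even and odd parts, f(x) = f_e(x^2) + x * f_o(x^2), and fold with a random verifier challenge. The field and parameters are illustrative; Merkle commitments and the query phase of the real protocol are omitted.

```python
# Sketch of one FRI folding round (toy parameters, honest prover).
import random

P = 97  # small prime field for illustration

def fold(coeffs, beta):
    """One FRI round: f_e + beta * f_o, halving the degree."""
    f_even, f_odd = coeffs[0::2], coeffs[1::2]
    f_odd = f_odd + [0] * (len(f_even) - len(f_odd))
    return [(e + beta * o) % P for e, o in zip(f_even, f_odd)]

f = [random.randrange(P) for _ in range(8)]   # degree < 8
beta = random.randrange(P)                    # verifier's challenge
g = fold(f, beta)                             # degree < 4

# The consistency relation the FRI verifier checks at a point x, y = x^2:
eval_f = lambda cs, z: sum(c * pow(z, i, P) for i, c in enumerate(cs)) % P
x = 5
y = (x * x) % P
inv2x = pow((2 * x) % P, P - 2, P)
# f_e(y) and f_o(y) are recoverable from the two reads f(x), f(-x):
f_e_y = ((eval_f(f, x) + eval_f(f, (-x) % P)) * pow(2, P - 2, P)) % P
f_o_y = ((eval_f(f, x) - eval_f(f, (-x) % P)) * inv2x) % P
assert eval_f(g, y) == (f_e_y + beta * f_o_y) % P  # folded value matches
```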

    Interactive Oracle Proofs with Constant Rate and Query Complexity

    We study interactive oracle proofs (IOPs) [BCS16, RRR16], which combine aspects of probabilistically checkable proofs (PCPs) and interactive proofs (IPs). We present IOP constructions and techniques that enable us to obtain tradeoffs in proof length versus query complexity that are not known to be achievable via PCPs or IPs alone. Our main results are: 1. Circuit satisfiability has 3-round IOPs with linear proof length (counted in bits) and constant query complexity. 2. Reed-Solomon codes have 2-round IOPs of proximity with linear proof length and constant query complexity. 3. Tensor product codes have 1-round IOPs of proximity with sublinear proof length and constant query complexity. For all of the above, known PCP constructions give quasilinear proof length and constant query complexity [BS08, Din07]. Also, for circuit satisfiability, [BKKMS13] obtain PCPs with linear proof length but sublinear (and super-constant) query complexity. As in [BKKMS13], we rely on algebraic-geometry codes to obtain our first result; but, unlike that work, our use of such codes is much "lighter" because we do not rely on any automorphisms of the code. We obtain our results by proving and combining "IOP-analogues" of tools underlying numerous IPs and PCPs: * Interactive proof composition. Proof composition [AS98] is used to reduce the query complexity of PCP verifiers, at the cost of increasing proof length by an additive factor that is exponential in the verifier's randomness complexity. We prove a composition theorem for IOPs where this additive factor is linear. * Sublinear sumcheck. The sumcheck protocol [LFKN92] is an IP that enables the verifier to check the sum of values of a low-degree multivariate polynomial on an exponentially large hypercube, but the verifier's running time depends linearly on the bound on individual degrees. We prove a sumcheck protocol for IOPs where this dependence is sublinear (e.g., polylogarithmic). Our work demonstrates that even constant-round IOPs are more efficient than known PCPs and IPs.
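    As background for the sumcheck item, here is a compact Python sketch of the classical [LFKN92] sumcheck protocol with an honest prover over a toy field. The paper's IOP version makes the verifier's dependence on the degree bound sublinear; this sketch does not attempt that and only shows the round structure.

```python
# Sketch of the classical sumcheck protocol [LFKN92] for a multilinear
# polynomial g given by its values on the Boolean cube (toy field,
# honest prover).  The prover convinces the verifier that
# sum over {0,1}^m of g(x) equals `claim`, one variable per round.
import random
from itertools import product

P, m = 101, 3
table = {b: random.randrange(P) for b in product((0, 1), repeat=m)}

def g(point):
    """Multilinear extension of `table` at an arbitrary field point."""
    total = 0
    for b, v in table.items():
        w = 1
        for r, bit in zip(point, b):
            w = w * (r * bit + (1 - r) * (1 - bit)) % P
        total = (total + v * w) % P
    return total

claim = sum(table.values()) % P
prefix = []
for i in range(m):
    # Round i: the (linear) round polynomial is sent via its values at
    # 0 and 1; the verifier checks s(0) + s(1) == current claim.
    s0 = sum(g(prefix + [0] + list(rest))
             for rest in product((0, 1), repeat=m - i - 1)) % P
    s1 = sum(g(prefix + [1] + list(rest))
             for rest in product((0, 1), repeat=m - i - 1)) % P
    assert (s0 + s1) % P == claim
    r = random.randrange(P)               # verifier's random challenge
    claim = (s0 + r * (s1 - s0)) % P      # s(r) for a linear s
    prefix.append(r)

assert g(prefix) == claim                 # final oracle check
print("sumcheck accepted")
```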

    Combinatorial Construction of Locally Testable Codes

    An error-correcting code is said to be locally testable if there is a test that checks whether a given string is a codeword, or rather far from the code, by reading only a constant number of symbols of the string. While the best known construction of LTCs by Ben-Sasson and Sudan (STOC 2005) and Dinur (J. ACM 54(3)) achieves very efficient parameters, it relies heavily on algebraic tools and on PCP machinery. In this work we present a new and arguably simpler construction of LTCs that is purely combinatorial, does not rely on PCP machinery, and matches the parameters of the best known construction. However, unlike the latter construction, our construction is not entirely explicit.
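    For a concrete feel of local testability, here is a Python sketch of the textbook 3-query local test: the BLR linearity test, which is the local tester for the Hadamard code. This is the standard illustrative example, not the paper's new combinatorial construction.

```python
# Sketch of a constant-query local test: the BLR linearity test for the
# Hadamard code (textbook example, not the paper's construction).
# Read three symbols and accept iff they satisfy one linear relation.
import random

def blr_test(word, n, trials=100):
    """Accept iff word[x] ^ word[y] == word[x ^ y] for random x, y."""
    for _ in range(trials):
        x, y = random.randrange(2 ** n), random.randrange(2 ** n)
        if word[x] ^ word[y] != word[x ^ y]:
            return False  # words far from linear are caught w.h.p.
    return True

n = 4
linear = [bin(a & 0b1011).count("1") % 2 for a in range(2 ** n)]  # <x, a> mod 2
print(blr_test(linear, n))   # True: codewords always pass
noisy = [random.randrange(2) for _ in range(2 ** n)]
print(blr_test(noisy, n))    # almost surely False for a random word
```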

    Computational integrity with a public random string from quasi-linear PCPs

    A party running a computation remotely may benefit from misreporting its output, say, to lower its tax. Cryptographic protocols that detect and prevent such falsities hold the promise to enhance the security of decentralized systems with stringent computational integrity requirements, like Bitcoin [Nak09]. To gain public trust it is imperative to use publicly verifiable protocols that have no "backdoors" and which can be set up using only a short public random string. Probabilistically Checkable Proof (PCP) systems [BFL90, BFLS91, AS98, ALM+98] can be used to construct astonishingly efficient protocols [Kil92, Mic00] of this nature, but some of the main components of such systems — proof composition [AS98] and low-degree testing via PCPs of Proximity (PCPPs) [BGH+05, DR06] — have been considered efficient only asymptotically, for unrealistically large computations; recent cryptographic alternatives [PGHR13, BCG+13a] suffer from a non-public setup phase. This work introduces SCI, the first implementation of a scalable PCP system (one that uses both PCPPs and proof composition). We used SCI to prove correctness of executions of up to 2^{20} cycles of a simple processor (Figure 1) and calculated (Figure 2) its break-even point [SVP+12, SMBW12]. The significance of our findings is two-fold: (i) it marks the transition of core PCP techniques (like proof composition and PCPs of Proximity) from mathematical theory to practical system engineering, and (ii) the thresholds obtained are nearly achievable and hence show that PCP-supported computational integrity is closer to reality than previously assumed.