    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally-testable and locally-decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correction algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
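
    As a hedged illustration of local decodability, the sketch below (with illustrative names not taken from the survey) implements the classic two-query local decoder for the Hadamard code, which encodes a k-bit message x as all parities <x, a>; recovering a message bit requires reading only two positions of a possibly corrupted codeword.

```python
import random

def hadamard_encode(x):
    """Hadamard code: position a holds <x, a> mod 2, for every a in {0,1}^k."""
    k = len(x)
    return [sum(xi & ((a >> i) & 1) for i, xi in enumerate(x)) % 2
            for a in range(2 ** k)]

def local_decode_bit(word, k, i):
    """Recover message bit x_i with only 2 queries to a noisy codeword.

    Query positions a and a XOR e_i for a uniformly random a; if the codeword
    has relative error rate delta, each query hits a corruption with
    probability at most delta, so the answer is correct with probability
    at least 1 - 2*delta.
    """
    a = random.randrange(2 ** k)
    b = a ^ (1 << i)              # flip the i-th coordinate of a
    return (word[a] + word[b]) % 2

x = [1, 0, 1, 1]
codeword = hadamard_encode(x)
codeword[3] ^= 1                  # corrupt a single position
votes = [local_decode_bit(codeword, len(x), 2) for _ in range(101)]
print(sum(votes) > 50, x[2])      # majority of decodings agrees with x_2 = 1
```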

    State of the Art Report: Verified Computation

    This report describes the state of the art in verifiable computation. The problem being solved is the following (the Verifiable Computation Problem): suppose we have two computing agents. The first agent is the verifier, and the second agent is the prover. The verifier wants the prover to perform a computation. The verifier sends a description of the computation to the prover. Once the prover has completed the task, the prover returns the output to the verifier. The output will contain a proof. The verifier can use this proof to check whether the prover computed the output correctly. The check is not required to verify the algorithm used in the computation. Instead, it is a check that the prover computed the output using the computation specified by the verifier. The effort required for the check should be much less than that required to perform the computation. This state-of-the-art report surveys 128 papers from the literature comprising more than 4,000 pages. Other papers and books were surveyed but were omitted. The papers surveyed were overwhelmingly mathematical. We have summarised the major concepts that form the foundations for verifiable computation. The report contains two main sections. The first, larger section covers the theoretical foundations for probabilistically checkable and zero-knowledge proofs. The second section contains a description of the current practice in verifiable computation. Two further reports will cover (i) military applications of verifiable computation and (ii) a collection of technical demonstrators. The first of these is intended to be read by those who want to know what applications are enabled by the current state of the art in verifiable computation. The second is for those who want to see practical tools and conduct experiments themselves.
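
    As a toy instance of the check-cheaper-than-compute requirement stated above (not one of the protocols surveyed in the report), the sketch below uses Freivalds' classical randomized check: a claimed matrix product can be verified in roughly n^2 operations per round, while recomputing it costs roughly n^3.

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically check the claim that C == A * B (square matrices).

    Recomputing the product costs ~n^3 operations; each round here costs only
    ~n^2 (three matrix-vector products), and a false claim is accepted with
    probability at most 2^-rounds.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False          # the claimed product is definitely wrong
    return True                   # accept: C == A * B with high confidence

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]          # the correct product
print(freivalds_check(A, B, C))   # True
C[0][0] += 1
print(freivalds_check(A, B, C))   # False with overwhelming probability
```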

    ZK-PCPs from Leakage-Resilient Secret Sharing

    Zero-Knowledge PCPs (ZK-PCPs; Kilian, Petrank, and Tardos, STOC '97) are PCPs with the additional zero-knowledge guarantee that the view of any (possibly malicious) verifier making a bounded number of queries to the proof can be efficiently simulated up to a small statistical distance. Similarly, ZK-PCPs of Proximity (ZK-PCPPs; Ishai and Weiss, TCC '14) are PCPPs in which the view of an adversarial verifier can be efficiently simulated with few queries to the input. Previous ZK-PCP constructions obtained an exponential gap between the query complexity q of the honest verifier and the bound q^* on the queries of a malicious verifier (i.e., q = polylog(q^*)), but required either exponential-time simulation or adaptive honest verification. This should be contrasted with standard PCPs, which can be verified non-adaptively (i.e., with a single round of queries to the proof). The problem of constructing such ZK-PCPs, even when q^* = q, has remained open since they were first introduced more than two decades ago. This question is also open for ZK-PCPPs, for which no construction with non-adaptive honest verification is known (not even with exponential-time simulation). We resolve this question by constructing the first ZK-PCPs and ZK-PCPPs which simultaneously achieve efficient zero-knowledge simulation and non-adaptive honest verification. Our schemes have a square-root query gap, namely q^*/q = O(√n), where n is the input length. Our constructions combine the "MPC-in-the-head" technique (Ishai et al., STOC '07) with leakage-resilient secret sharing. Specifically, we use the MPC-in-the-head technique to construct a ZK-PCP variant over a large alphabet, then employ leakage-resilient secret sharing to design a new alphabet reduction for ZK-PCPs which preserves zero-knowledge.
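
    For readers unfamiliar with the secret-sharing ingredient, the sketch below shows plain additive secret sharing over a prime field; it is only a baseline illustration (with hypothetical names), not the leakage-resilient variant that the paper actually combines with MPC-in-the-head.

```python
import secrets

MODULUS = 2 ** 61 - 1             # a prime; shares live in Z_MODULUS

def share(secret, n):
    """Split `secret` into n additive shares summing to it modulo MODULUS.

    Any n-1 shares are jointly uniform, hence reveal nothing about the secret;
    leakage resilience additionally bounds what leaks from partial views of
    all shares, which this plain scheme does not provide.
    """
    shares = [secrets.randbelow(MODULUS) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    return sum(shares) % MODULUS

s = 123456789
shares = share(s, 5)
assert reconstruct(shares) == s
```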

    Every set in P is strongly testable under a suitable encoding

    We show that every set in P is strongly testable under a suitable encoding. By "strongly testable" we mean having a (proximity-oblivious) tester that makes a constant number of queries and rejects with probability that is proportional to the distance of the tested object from the property. By a "suitable encoding" we mean one that is polynomial-time computable and invertible. This result stands in contrast to the known fact that some sets in P are extremely hard to test, providing another demonstration of the crucial role of representation in the context of property testing. The testing result is proved by showing that any set in P has a strong canonical PCP, where canonical means that (for yes-instances) there exists a single proof that is accepted with probability 1 by the system, whereas all other potential proofs are rejected with probability proportional to their distance from this proof. In fact, we show that UP equals the class of sets having strong canonical PCPs (of logarithmic randomness), whereas the class of sets having strong canonical PCPs with polynomial proof length equals "unambiguous-MA". Actually, for the testing result, we use a PCP-of-Proximity version of the foregoing notion and an analogous positive result (i.e., strong canonical PCPPs of logarithmic randomness for any set in UP).
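
    To make the "rejects with probability proportional to the distance" guarantee concrete, here is the simplest possible proximity-oblivious tester, for the toy property "the string is all zeros"; it illustrates only the definition, not the paper's construction.

```python
import random

def po_tester_all_zeros(w):
    """One-query proximity-oblivious tester for the property {0^n}.

    Query a single uniformly random position and accept iff it is 0.
    Strings with the property are always accepted; any other string is
    rejected with probability exactly dist(w, 0^n) / n, i.e. proportional
    to its relative distance from the property.
    """
    return w[random.randrange(len(w))] == 0

w = [0] * 90 + [1] * 10           # relative distance 0.1 from all-zeros
rejections = sum(not po_tester_all_zeros(w) for _ in range(10000))
print(rejections / 10000)         # concentrates around 0.1
```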

    Efficient Batch Verification for UP

    Consider a setting in which a prover wants to convince a verifier of the correctness of k NP statements. For example, the prover wants to convince the verifier that k given integers N_1,...,N_k are all RSA moduli (i.e., products of equal-length primes). Clearly this problem can be solved by simply having the prover send the k NP witnesses, but this involves a lot of communication. Can interaction help? In particular, is it possible to construct interactive proofs for this task whose communication grows sub-linearly with k? Our main result is such an interactive proof for verifying the correctness of any k UP statements (i.e., NP statements that have a unique witness). The proof-system uses only a constant number of rounds and the communication complexity is k^delta * poly(m), where delta > 0 is an arbitrarily small constant, m is the length of a single witness, and the poly term refers to a fixed polynomial that only depends on the language and not on delta. The (honest) prover strategy can be implemented in polynomial-time given access to the k (unique) witnesses. Our proof leverages "interactive witness verification" (IWV), a new type of proof-system that may be of independent interest. An IWV is a proof-system in which the verifier needs to verify the correctness of an NP statement using: (i) a sublinear number of queries to an alleged NP witness, and (ii) a short interaction with a powerful but untrusted prover. In contrast to the setting of PCPs and Interactive PCPs, here the verifier only has access to the raw NP witness, rather than some encoding thereof.
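
    The sketch below shows only the naive baseline mentioned in the abstract (the prover ships all k witnesses, so communication is about k * m), which is what the paper's k^delta * poly(m) interactive proof improves upon; the toy statements and helper names are illustrative, not from the paper.

```python
def naive_batch_verify(statements, witnesses, check):
    """Naive batch verification: the prover simply sends all k witnesses.

    Communication is roughly k * m (k witnesses of length m each); the paper's
    constant-round interactive proof reduces this to k^delta * poly(m) for UP
    statements, for an arbitrarily small constant delta > 0.
    """
    return all(check(x, w) for x, w in zip(statements, witnesses))

def is_prime(v):
    return v > 1 and all(v % d for d in range(2, int(v ** 0.5) + 1))

def is_valid_factorization(N, w):
    """Toy UP-style check: w is the factorization of N into two primes."""
    p, q = w
    return p * q == N and is_prime(p) and is_prime(q)

statements = [15, 35, 77]
witnesses = [(3, 5), (5, 7), (7, 11)]
print(naive_batch_verify(statements, witnesses, is_valid_factorization))  # True
```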

    Relaxed locally correctable codes with improved parameters

    Locally decodable codes (LDCs) are error-correcting codes C : Σ^k → Σ^n that admit a local decoding algorithm that recovers each individual bit of the message by querying only a few bits from a noisy codeword. An important question in this line of research is to understand the optimal trade-off between the query complexity of LDCs and their block length. Despite the importance of these objects, the best known constructions of constant-query LDCs have super-polynomial length, and there is a significant gap between the best constructions and the known lower bounds in terms of the block length. For many applications it suffices to consider the weaker notion of relaxed LDCs (RLDCs), which allows the local decoding algorithm to abort if by querying a few bits it detects that the input is not a codeword. This relaxation turned out to allow decoding algorithms with constant query complexity for codes with almost linear length. Specifically, [Ben-Sasson et al. (2006)] constructed a q-query RLDC that encodes a message of length k using a codeword of block length n = O_q(k^(1+O(1/√q))) for any sufficiently large q, where O_q(·) hides some constant that depends only on q. In this work we improve the parameters of [Ben-Sasson et al. (2006)] by constructing a q-query RLDC that encodes a message of length k using a codeword of block length O_q(k^(1+O(1/q))) for any sufficiently large q. This construction matches (up to a multiplicative constant factor) the lower bounds of [Katz and Trevisan (2000), Woodruff (2007)] for constant-query LDCs, thus making progress toward understanding the gap between LDCs and RLDCs in the constant-query regime. In fact, our construction extends to the stronger notion of relaxed locally correctable codes (RLCCs), introduced in [Gur et al. (2018)], where given a noisy codeword the correcting algorithm either recovers each individual bit of the codeword by only reading a small part of the input, or aborts if the input is detected to be corrupt.
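
    As a toy illustration of the relaxed decoding interface only (return the bit, or abort when corruption is detected), the sketch below uses naive per-coordinate repetition; it has none of the parameters of the paper's construction.

```python
REPS = 5    # each message bit is written REPS times in the codeword

def encode(msg):
    """Toy code: repeat every message bit REPS times (not the paper's RLCC)."""
    return [b for b in msg for _ in range(REPS)]

def relaxed_decode(word, i):
    """Relaxed local decoder for message bit i.

    Reads only the REPS copies of bit i and returns their common value, or
    aborts (returns '?') on any disagreement, which is exactly the extra
    option an RLDC/RLCC decoder is allowed compared to a plain LDC.
    """
    copies = word[i * REPS:(i + 1) * REPS]
    return copies[0] if len(set(copies)) == 1 else "?"

c = encode([1, 0, 1])
c[2] ^= 1                         # corrupt one copy of message bit 0
print(relaxed_decode(c, 0))       # '?'  -- abort: corruption detected
print(relaxed_decode(c, 1))       # 0    -- the untouched block decodes fine
```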

    Toward probabilistic checking against non-signaling strategies with constant locality

    Non-signaling strategies are a generalization of quantum strategies that have been studied in physics over the past three decades. Recently, they have found applications in theoretical computer science, including to proving inapproximability results for linear programming and to constructing protocols for delegating computation. A central tool for these applications is probabilistically checkable proof (PCP) systems that are sound against non-signaling strategies. In this thesis we show, assuming a certain geometrical hypothesis about noise robustness of non-signaling proofs (or, equivalently, about robustness to noise of solutions to the Sherali-Adams linear program), that a slight variant of the parallel repetition of the exponential-length constant-query PCP construction due to Arora et al. (JACM 1998) is sound against non-signaling strategies with constant locality. Our proof relies on the analysis of the linearity test and the agreement test (also known as the direct product test) in the non-signaling setting.
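
    The linearity test referred to above is the classical Blum-Luby-Rubinfeld (BLR) test; the sketch below shows its standard (classical, signaling) form over GF(2)^n as background, without modeling the non-signaling strategies analyzed in the thesis.

```python
import random

def blr_linearity_test(f, n, trials=100):
    """Classical BLR linearity test for f : {0,1}^n -> {0,1} (inputs as ints).

    Repeatedly pick random x, y and check f(x) XOR f(y) == f(x XOR y).
    Linear functions (parities) always pass; functions far from every linear
    function are rejected with probability growing with that distance.
    """
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

n = 8
linear = lambda x: bin(x & 0b10110001).count("1") % 2   # parity of a fixed mask
nonlinear = lambda x: (x & 1) & ((x >> 1) & 1)          # AND of two bits, far from linear
print(blr_linearity_test(linear, n))                    # True
print(blr_linearity_test(nonlinear, n))                 # False with overwhelming probability
```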