6 research outputs found

    Relaxed Locally Correctable Codes

    Locally decodable codes (LDCs) and locally correctable codes (LCCs) are error-correcting codes in which individual bits of the message and codeword, respectively, can be recovered by querying only a few bits of a noisy codeword. These codes have found numerous applications both in theory and in practice. A natural relaxation of LDCs, introduced by Ben-Sasson et al. (SICOMP, 2006), allows the decoder to reject (i.e., refuse to answer) when it detects that the codeword is corrupt. They call such a decoder a relaxed decoder and construct a constant-query relaxed LDC with almost-linear blocklength, which is sub-exponentially better than what is known for (full-fledged) LDCs in the constant-query regime. We consider an analogous relaxation for local correction. Thus, a relaxed local corrector reads only a few bits of a (possibly) corrupt codeword and either recovers the desired bit of the codeword or rejects if it detects a corruption. We give two constructions of relaxed LCCs in two regimes, where the first optimizes the query complexity and the second optimizes the rate: 1. Constant query complexity: a relaxed LCC with polynomial blocklength whose corrector reads only a constant number of bits of the codeword. This is a sub-exponential improvement over the best known constant-query (full-fledged) LCCs. 2. Constant rate: a relaxed LCC with constant rate (i.e., linear blocklength) and quasi-polylogarithmic query complexity. This is a nearly sub-exponential improvement over the query complexity of a recent (full-fledged) constant-rate LCC of Kopparty et al. (STOC, 2016).
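
    To make the notion of relaxed local correction concrete, a minimal sketch in the toy setting of the Hadamard code may help (a far simpler setting than the paper's constructions); the function names, the REJECT marker, and the two-trial consistency check below are assumptions made for illustration only.

        import random

        REJECT = None  # stand-in for the corrector's "reject" symbol (an assumption of this sketch)

        def hadamard_encode(msg_bits):
            """Encode a k-bit message as the 2^k-bit Hadamard codeword
            C(x) = <msg, x> mod 2 for every x in {0,1}^k."""
            k = len(msg_bits)
            return [sum(m & ((x >> i) & 1) for i, m in enumerate(msg_bits)) % 2
                    for x in range(1 << k)]

        def relaxed_correct(word, x, trials=2):
            """Toy relaxed local corrector for the Hadamard code.

            Each trial reads two positions of the (possibly corrupted) word,
            word[r] and word[x ^ r] for a random shift r; their XOR equals the
            x-th codeword bit whenever both reads are uncorrupted.  If
            independent trials disagree, corruption has been detected and the
            corrector rejects instead of answering.
            """
            n = len(word)
            votes = set()
            for _ in range(trials):
                r = random.randrange(n)
                votes.add(word[r] ^ word[x ^ r])
            return votes.pop() if len(votes) == 1 else REJECT

        # usage: a clean codeword is always corrected
        cw = hadamard_encode([1, 0, 1])
        assert relaxed_correct(cw, 5) == cw[5]

    On an uncorrupted codeword the trials always agree, so the correct bit is returned; on a corrupted word, disagreement between trials is what lets the corrector reject rather than answer incorrectly, which is the relaxed guarantee described above.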

    Interactive Oracle Proofs with Constant Rate and Query Complexity

    We study interactive oracle proofs (IOPs) [BCS16,RRR16], which combine aspects of probabilistically checkable proofs (PCPs) and interactive proofs (IPs). We present IOP constructions and techniques that enable us to obtain tradeoffs in proof length versus query complexity that are not known to be achievable via PCPs or IPs alone. Our main results are: 1. Circuit satisfiability has 3-round IOPs with linear proof length (counted in bits) and constant query complexity. 2. Reed-Solomon codes have 2-round IOPs of proximity with linear proof length and constant query complexity. 3. Tensor product codes have 1-round IOPs of proximity with sublinear proof length and constant query complexity. For all of the above, known PCP constructions give quasilinear proof length and constant query complexity [BS08,Din07]. Also, for circuit satisfiability, [BKKMS13] obtain PCPs with linear proof length but sublinear (and super-constant) query complexity. As in [BKKMS13], we rely on algebraic-geometry codes to obtain our first result; but, unlike that work, our use of such codes is much "lighter" because we do not rely on any automorphisms of the code. We obtain our results by proving and combining "IOP analogues" of tools underlying numerous IPs and PCPs: * Interactive proof composition. Proof composition [AS98] is used to reduce the query complexity of PCP verifiers, at the cost of increasing the proof length by an additive factor that is exponential in the verifier's randomness complexity. We prove a composition theorem for IOPs where this additive factor is linear. * Sublinear sumcheck. The sumcheck protocol [LFKN92] is an IP that enables the verifier to check the sum of the values of a low-degree multivariate polynomial on an exponentially large hypercube, but the verifier's running time depends linearly on the bound on individual degrees. We prove a sumcheck protocol for IOPs where this dependence is sublinear (e.g., polylogarithmic). Our work demonstrates that even constant-round IOPs are more efficient than known PCPs and IPs.
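
    Since several of the results above are IOP analogues of the classical sumcheck protocol [LFKN92], a small simulation of that classical protocol may help fix ideas. This is the standard interactive proof, not the paper's sublinear IOP variant; the field size, the function names, and the brute-force honest prover are assumptions of this sketch.

        import random
        from itertools import product

        P = 97  # small prime field for the demo (an arbitrary choice)

        def hypercube(m):
            """All points of the boolean hypercube {0,1}^m."""
            return product((0, 1), repeat=m)

        def lagrange_eval(ys, x):
            """Evaluate, at the point x, the unique polynomial of degree < len(ys)
            whose values at the points 0, 1, ..., len(ys)-1 are ys (all mod P)."""
            total = 0
            for i, yi in enumerate(ys):
                num, den = 1, 1
                for j in range(len(ys)):
                    if j != i:
                        num = num * (x - j) % P
                        den = den * (i - j) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total

        def sumcheck(f, m, deg):
            """Classical sumcheck [LFKN92] for the sum of f over {0,1}^m.

            f takes m field elements and has individual degree at most deg in
            each variable.  The honest prover is simulated by brute force.
            Returns the claimed sum and the verifier's accept/reject decision.
            """
            claim = sum(f(*pt) for pt in hypercube(m)) % P
            fixed, current = [], claim
            for i in range(m):
                # Prover: send g_i(X) = sum_b f(fixed, X, b) via its values at 0..deg.
                evals = [sum(f(*fixed, t, *b) for b in hypercube(m - i - 1)) % P
                         for t in range(deg + 1)]
                # Verifier: g_i(0) + g_i(1) must match the running claim.
                if (evals[0] + evals[1]) % P != current:
                    return claim, False
                r = random.randrange(P)          # verifier's random challenge
                current = lagrange_eval(evals, r)
                fixed.append(r)
            # Final check: a single evaluation of f at the random point.
            return claim, f(*fixed) % P == current

        # usage: f(x, y, z) = x*y + 2*z over F_97, summed over the cube {0,1}^3
        f = lambda x, y, z: (x * y + 2 * z) % P
        print(sumcheck(f, 3, deg=1))             # (10, True)

    Note that the verifier's per-round work grows with the individual degree bound deg; the sublinear sumcheck of the paper is precisely about reducing that dependence in the IOP setting.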

    Erasures vs. Errors in Local Decoding and Property Testing

    We initiate the study of the role of erasures in local decoding and use our understanding to prove a separation between erasure-resilient and tolerant property testing. Local decoding in the presence of errors has been extensively studied, but it has not been considered explicitly in the presence of erasures. Motivated by applications in property testing, we begin our investigation with local list decoding in the presence of erasures. We prove an analog of a famous result of Goldreich and Levin on the local list decodability of the Hadamard code. Specifically, we show that the Hadamard code is locally list decodable in the presence of a constant fraction of erasures, arbitrarily close to 1, with list sizes and query complexity better than in the Goldreich-Levin theorem. We use this result to exhibit a property which is testable, in the presence of erasures, with a number of queries independent of the length of the input, but requires a number of queries that depends on the input length, n, for tolerant testing. We further study approximate locally list decodable codes that work against erasures and use them to strengthen our separation by constructing a property which is testable with a constant number of queries in the presence of erasures, but requires n^{Omega(1)} queries for tolerant testing. Next, we study the general relationship between local decoding in the presence of errors and in the presence of erasures. We observe that every locally (uniquely or list) decodable code that works in the presence of errors also works in the presence of twice as many erasures (with the same parameters, up to constant factors). We show that there is also an implication in the other direction for locally decodable codes (with unique decoding): specifically, the existence of a locally decodable code that works in the presence of erasures implies the existence of a locally decodable code that works in the presence of errors and has related parameters. However, it remains open whether there is a similar implication for locally list decodable codes. We relate this question to other open questions in local decoding.
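
    The "errors imply erasures" observation mentioned in this abstract has a very short constructive form, sketched below under assumptions made only for illustration (binary alphabet, erasures marked by a None symbol, and oracle access given as a Python function).

        import random

        ERASED = None  # marker for an erased symbol (an assumption of this sketch)

        def erasures_as_errors(oracle):
            """Wrap oracle access to a binary word with erasures so that each
            erased position is answered with a consistent random bit.

            In expectation only half of the filled-in positions are wrong, so a
            2*delta fraction of erasures behaves like roughly a delta fraction
            of errors, and a local decoder designed for errors can be run on
            the wrapped oracle unchanged.
            """
            filled = {}  # remembers the random bit chosen for each erased position

            def wrapped(i):
                symbol = oracle(i)
                if symbol is not ERASED:
                    return symbol
                if i not in filled:
                    filled[i] = random.randint(0, 1)
                return filled[i]

            return wrapped

        # usage sketch (error_decoder is a hypothetical error-tolerant local decoder):
        #   decoded_bit = error_decoder(erasures_as_errors(word.__getitem__), index)

    The converse direction, from erasure decoding to error decoding, is the more delicate one and is the subject of the implication discussed in the abstract.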

    Aurora: Transparent Succinct Arguments for R1CS

    We design, implement, and evaluate a zkSNARK for Rank-1 Constraint Satisfaction (R1CS), a widely deployed NP-complete language that is undergoing standardization. Our construction uses a transparent setup, is plausibly post-quantum secure, and uses lightweight cryptography. A proof attesting to the satisfiability of n constraints has size O(log^2 n); it can be produced with O(n log n) field operations and verified with O(n) field operations. At 128 bits of security, proofs are less than 130 kB even for several million constraints, more than 20x shorter than prior zkSNARKs with similar features. A key ingredient of our construction is a new Interactive Oracle Proof (IOP) for solving a *univariate* analogue of the classical sumcheck problem [LFKN92], originally studied for *multivariate* polynomials. Our protocol verifies the sum of the entries of a Reed-Solomon codeword over any subgroup of a field. We also provide libiop, an open-source library for writing IOP-based arguments, in which a toolchain of transformations enables programmers to write new arguments by writing simple IOP sub-components. We have used this library to specify our construction and prior ones.
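
    For readers unfamiliar with R1CS, the statement that Aurora proves succinctly can be pinned down by the brute-force check below. This is a toy sketch over a small field; the matrix and vector conventions are assumptions, and none of this is part of the Aurora protocol itself.

        P = 97  # toy prime field; real deployments use large cryptographic fields

        def r1cs_satisfied(A, B, C, z):
            """Check a Rank-1 Constraint Satisfaction (R1CS) instance directly.

            Each of the n constraints is <A_i, z> * <B_i, z> = <C_i, z> over the
            field, where z is the assignment vector (conventionally with z[0] = 1).
            Aurora proves this statement succinctly; the brute-force check here is
            only meant to pin down what is being proven.
            """
            dot = lambda row, v: sum(a * b for a, b in zip(row, v)) % P
            return all(
                dot(a, z) * dot(b, z) % P == dot(c, z)
                for a, b, c in zip(A, B, C)
            )

        # usage: one constraint encoding x * x = y with z = (1, x, y), x = 3, y = 9
        A = [[0, 1, 0]]
        B = [[0, 1, 0]]
        C = [[0, 0, 1]]
        print(r1cs_satisfied(A, B, C, [1, 3, 9]))   # True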

    Analyzing massive datasets with missing entries: models and algorithms

    We initiate a systematic study of computational models for analyzing algorithms on massive datasets with missing or erased entries and study the relationship of our models to existing algorithmic models for large datasets. We focus on algorithms whose inputs are naturally represented as functions, codewords, or graphs. First, we generalize the property testing model, one of the most widely studied models of sublinear-time algorithms, to account for the presence of adversarially erased function values. We design efficient erasure-resilient property testing algorithms for several fundamental properties of real-valued functions, such as monotonicity, the Lipschitz property, convexity, and linearity. We then investigate the problems of local decoding and local list decoding of codewords containing erasures. We show that, in some cases, these problems are strictly easier than the corresponding problems of decoding codewords containing errors. Moreover, we use this understanding to show a separation between our erasure-resilient property testing model and the (error) tolerant property testing model. The philosophical message of this separation is that errors occurring in large datasets are, in general, harder to deal with than erasures. Finally, we develop models and notions for reasoning about algorithms that are intended to run on large graphs with missing edges. When running algorithms on large graphs with missing edges, it is desirable to output solutions that are close to the solutions output when no edges are missing. With this motivation, we define average sensitivity, a robustness metric for graph algorithms. We discuss various useful features of our definition and design approximation algorithms with good average sensitivity bounds for several optimization problems on graphs. We also define a model of erasure-resilient sublinear-time graph algorithms and design an efficient algorithm for testing the connectivity of graphs.
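
    As one concrete reading of the average-sensitivity notion mentioned above, the sketch below estimates it empirically for a deterministic, set-valued graph algorithm. The interface, the use of symmetric difference as the distance between outputs, and the restriction to single-edge deletions are simplifying assumptions of this sketch rather than the thesis's definitions.

        import random

        def average_sensitivity(algorithm, graph_edges, n, trials=None):
            """Empirical estimate of the average sensitivity of a graph algorithm:
            the average, over the deletion of a single edge, of how much the
            output changes.

            `algorithm(edges, n)` is assumed to return a set of vertices; output
            change is measured by symmetric difference.  These conventions are
            assumptions made for this sketch only.
            """
            edges = list(graph_edges)
            sample = edges if trials is None else random.sample(edges, trials)
            baseline = algorithm(edges, n)
            total = 0
            for e in sample:
                reduced = [f for f in edges if f != e]
                total += len(baseline ^ algorithm(reduced, n))
            return total / len(sample)

        # usage: a toy "algorithm" returning the set of non-isolated vertices
        touched = lambda edges, n: {v for e in edges for v in e}
        print(average_sensitivity(touched, [(0, 1), (1, 2), (2, 3)], n=4))  # ~0.67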

    Combinatorial PCPs with Short Proofs

    The PCP theorem (Arora et al., J. ACM 45(1,3)) asserts the existence of proofs that can be verified by reading only a very small part of the proof. Since the discovery of the theorem, there has been considerable work on improving it in terms of proof length, culminating in the construction of PCPs of quasi-linear length by Ben-Sasson and Sudan (SICOMP 38(2)) and Dinur (J. ACM 54(3)). One common theme in the aforementioned PCP constructions is that they all rely heavily on sophisticated algebraic machinery. The aforementioned work of Dinur (J. ACM 54(3)) suggested an alternative approach for constructing PCPs, which gives a simpler and arguably more intuitive proof of the PCP theorem using combinatorial techniques. However, this combinatorial construction only yields PCPs of polynomial length and is therefore inferior to the algebraic constructions in this respect. This gives rise to the natural question of whether the proof length of the algebraic constructions can be matched using the combinatorial approach. In this work, we provide a combinatorial construction of PCPs of length n · (log n)^O(log log n), coming very close to the state-of-the-art algebraic constructions (whose proof length is n · (log n)^O(1)). To this end, we develop a few generic PCP techniques which may be interesting in their own right. It should be mentioned that our construction does use low-degree polynomials at one point. However, our use of polynomials is confined to the construction of error-correcting codes with a certain simple multiplication property, and it is conceivable that such codes can be constructed without the use of polynomials.
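
    The "simple multiplication property" mentioned at the end is, in its most familiar form, the fact that pointwise products of polynomial (Reed-Solomon style) codewords encode products of the underlying polynomials. The toy check below illustrates that fact only; it is not the paper's construction, which aims to avoid polynomials where possible.

        P = 97  # toy prime field

        def rs_encode(coeffs, points):
            """Reed-Solomon style encoding: evaluate the polynomial with the given
            coefficients (lowest degree first) at each evaluation point, mod P."""
            return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
                    for x in points]

        # Multiplication property: the pointwise product of two codewords encoding
        # f and g equals the codeword encoding the product polynomial f*g (of
        # roughly twice the degree), because evaluation is multiplicative.
        points = list(range(10))
        f, g = [1, 2], [3, 0, 1]                 # f(x) = 1 + 2x,  g(x) = 3 + x^2
        fg = [3, 6, 1, 2]                        # (f*g)(x) = 3 + 6x + x^2 + 2x^3
        pointwise = [a * b % P for a, b in zip(rs_encode(f, points), rs_encode(g, points))]
        assert pointwise == rs_encode(fg, points)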