
    Lower bounds for constant query affine-invariant LCCs and LTCs

    Affine-invariant codes are codes whose coordinates form a vector space over a finite field and which are invariant under affine transformations of the coordinate space. They form a natural, well-studied class of codes; they include popular codes such as Reed-Muller and Reed-Solomon. A particularly appealing feature of affine-invariant codes is that they seem well-suited to admit local correctors and testers. In this work, we give lower bounds on the length of locally correctable and locally testable affine-invariant codes with constant query complexity. We show that if a code $\mathcal{C} \subset \Sigma^{\mathbb{K}^n}$ is an $r$-query locally correctable code (LCC), where $\mathbb{K}$ is a finite field and $\Sigma$ is a finite alphabet, then the number of codewords in $\mathcal{C}$ is at most $\exp(O_{\mathbb{K}, r, |\Sigma|}(n^{r-1}))$. Also, we show that if $\mathcal{C} \subset \Sigma^{\mathbb{K}^n}$ is an $r$-query locally testable code (LTC), then the number of codewords in $\mathcal{C}$ is at most $\exp(O_{\mathbb{K}, r, |\Sigma|}(n^{r-2}))$. The dependence on $n$ in these bounds is tight for constant-query LCCs/LTCs, since Guo, Kopparty and Sudan (ITCS '13) construct affine-invariant codes via lifting that achieve the same asymptotic tradeoffs. Note that our result holds for non-linear codes, whereas previously, Ben-Sasson and Sudan (RANDOM '11) assumed linearity to derive similar results. Our analysis uses higher-order Fourier analysis. In particular, we show that the codewords corresponding to an affine-invariant LCC/LTC must be far from each other with respect to the Gowers norm of an appropriate order. This then allows us to bound the number of codewords, using known decomposition theorems which approximate any bounded function in terms of a finite number of low-degree non-classical polynomials, up to a small error in the Gowers norm.
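    For orientation, the Gowers uniformity norm of order $k$ referred to above has the following standard definition (background material, not restated from the paper): for bounded $f : \mathbb{K}^n \to \mathbb{C}$,
    $$ \|f\|_{U^k}^{2^k} \;=\; \mathbb{E}_{x, h_1, \dots, h_k \in \mathbb{K}^n} \; \prod_{\omega \in \{0,1\}^k} \operatorname{Conj}^{|\omega|} f\Big(x + \sum_{j=1}^{k} \omega_j h_j\Big), $$
    where $\operatorname{Conj}$ denotes complex conjugation and $|\omega| = \sum_j \omega_j$. Informally, a small $U^k$ norm means $f$ correlates poorly with degree-$(k-1)$ polynomial phases, which is what makes the norm compatible with the low-degree decomposition theorems the abstract cites.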

    Efficient Multi-Point Local Decoding of Reed-Muller Codes via Interleaved Codex

    Reed-Muller codes are among the most important classes of locally correctable codes. Currently, local decoding of Reed-Muller codes is based on decoding along lines or quadratic curves to recover a single coordinate. To recover multiple coordinates simultaneously, the naive approach is to repeat the local decoding procedure once per coordinate. This decoding algorithm can be expensive, i.e., require high query complexity. In this paper, we focus on Reed-Muller codes in the usual parameter regime, namely, where the total degree of the evaluation polynomials is $d = \Theta(q)$, where $q$ is the code alphabet size (in fact, $d$ can be as big as $q/4$ in our setting). By introducing a novel variation of codex, i.e., interleaved codex (the concept of codex has been used for arithmetic secret sharing \cite{C11,CCX12}), we are able to locally recover an arbitrarily large number $k$ of coordinates of a Reed-Muller code simultaneously at the cost of querying $O(q^2 k)$ coordinates. It turns out that our local decoding of Reed-Muller codes shows (perhaps surprisingly) that accessing $k$ locations is in fact cheaper than repeating the procedure for accessing a single location $k$ times. Our estimate of the error probability is based on the bound for $t$-wise linearly independent variables given in \cite{BR94}.
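    For context, here is a minimal sketch of the classic single-coordinate line decoder that the paper's interleaved-codex method improves on (this is the textbook baseline, not the paper's algorithm; the field size and degree below are illustrative choices, and a robust version would run Reed-Solomon decoding on the line rather than plain interpolation):

        import random

        P = 101   # prime field size q (illustrative choice)
        D = 20    # total degree d of the evaluation polynomials, d < q/4 here

        def decode_at(oracle, x, n):
            """Recover the codeword value at point x in F_P^n using d+1 line queries."""
            v = [random.randrange(P) for _ in range(n)]   # random direction of the line
            ts = list(range(1, D + 2))                    # d+1 nonzero line parameters
            # Query the codeword on the line {x + t*v : t in ts}.
            ys = [oracle(tuple((xi + t * vi) % P for xi, vi in zip(x, v))) for t in ts]
            # The restriction of a degree-d polynomial to a line is univariate of
            # degree <= d, so Lagrange interpolation at t = 0 recovers the value at x.
            acc = 0
            for i, ti in enumerate(ts):
                num = den = 1
                for j, tj in enumerate(ts):
                    if i != j:
                        num = num * (-tj) % P
                        den = den * (ti - tj) % P
                acc = (acc + ys[i] * num * pow(den, P - 2, P)) % P
            return acc

    Repeating this per coordinate multiplies the query cost by $k$; the paper's contribution is to recover all $k$ coordinates jointly more cheaply than such repetition.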

    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally testable and locally decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correcting algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
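    The simplest example of such a sub-linear time decoder is the 2-query local decoder for the Hadamard code (a standard textbook illustration, not a construction specific to this survey):

        import random

        def hadamard_encode(x):
            """Codeword position r holds the inner product <x, r> mod 2."""
            n = len(x)
            return [sum(xi & ((r >> i) & 1) for i, xi in enumerate(x)) % 2
                    for r in range(1 << n)]

        def decode_bit(word, n, i):
            """Recover x_i from a (possibly corrupted) codeword with 2 queries."""
            r = random.randrange(1 << n)   # uniformly random position
            r_flip = r ^ (1 << i)          # same position with bit i flipped
            # If both queried positions are intact, the XOR equals x_i; since
            # each query is individually uniform, this fails w.p. <= 2*delta.
            return word[r] ^ word[r_flip]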

    Three Query Locally Decodable Codes with Higher Correctness Require Exponential Length

    Locally decodable codes are error-correcting codes with the extra property that, in order to retrieve the correct value of just one position of the input with high probability, it is sufficient to read a small number of positions of the corresponding, possibly corrupted codeword. A breakthrough result by Yekhanin showed that 3-query linear locally decodable codes may have subexponential length. The construction of Yekhanin, and the three-query constructions that followed, achieve correctness only up to a certain limit, which is $1 - 3\delta$ for non-binary codes, where an adversary is allowed to corrupt up to a $\delta$ fraction of the codeword. The largest correctness for a subexponential-length 3-query binary code is achieved in a construction by Woodruff, and it is below $1 - 3\delta$. We show that achieving slightly larger correctness (as a function of $\delta$) requires exponential codeword length for 3-query codes. Previously, no better than quadratic lower bounds were known for locally decodable codes with more than 2 queries, even in the case of 3-query linear codes. Our results hold for linear codes over arbitrary finite fields and for binary non-linear codes. Considering a larger number of queries, we obtain lower bounds for $q$-query codes for $q > 3$, under certain assumptions on the decoding algorithm that have been commonly used in previous constructions. We also prove bounds on the largest correctness achievable by these decoding algorithms, regardless of the length of the code. Our results explain the limitations on correctness in previous constructions using such decoding algorithms. In addition, our results imply tradeoffs on the parameters of error-correcting data structures.
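    Where the $1 - 3\delta$ limit comes from (a standard back-of-the-envelope argument, included for context, not taken from the paper): if each of the decoder's three queries is uniformly distributed over the codeword, then against an adversary corrupting a $\delta$ fraction of positions, a union bound gives
    $$ \Pr[\text{some query lands on a corrupted position}] \;\le\; 3\delta, $$
    so a decoder that answers correctly whenever all three queried positions are intact succeeds with probability at least $1 - 3\delta$. Exceeding this threshold requires the decoder to answer correctly even on some views that include corrupted positions, which is exactly the regime the lower bounds above address.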

    Combinatorial Lower Bounds for 3-Query LDCs

    A code is called a $q$-query locally decodable code (LDC) if there is a randomized decoding algorithm that, given an index $i$ and a received word $w$ close to an encoding of a message $x$, outputs $x_i$ by querying only at most $q$ coordinates of $w$. Understanding the tradeoffs between the dimension, length and query complexity of LDCs is a fascinating and unresolved research challenge. In particular, for 3-query binary LDCs of dimension $k$ and length $n$, the best known bounds are $2^{k^{o(1)}} \geq n \geq \tilde{\Omega}(k^2)$. In this work, we take a second look at binary 3-query LDCs. We investigate a class of 3-uniform hypergraphs that are equivalent to strong binary 3-query LDCs. We prove an upper bound on the number of edges in these hypergraphs, reproducing the known lower bound of $\tilde{\Omega}(k^2)$ for the length of strong 3-query LDCs. In contrast to previous work, our techniques are purely combinatorial and do not rely on a direct reduction to 2-query LDCs, opening up a potentially different approach to analyzing 3-query LDCs.
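    For linear binary codes, the hypergraph connection can be made concrete (a standard observation sketched here under our own framing; the paper's notion of strong 3-query LDCs refines this picture): the triples of codeword positions from which $x_i$ can be decoded are exactly those whose generator-matrix columns XOR to the $i$-th unit vector.

        from itertools import combinations

        def decoding_triples(columns, i):
            """Triples of codeword positions usable to decode x_i.

            columns[a] is the a-th generator-matrix column packed into a k-bit int.
            """
            e_i = 1 << i
            return [t for t in combinations(range(len(columns)), 3)
                    if columns[t[0]] ^ columns[t[1]] ^ columns[t[2]] == e_i]

        def decode(word, triple):
            a, b, c = triple
            return word[a] ^ word[b] ^ word[c]   # equals x_i when all three bits are clean

    Viewing each such triple as an edge on vertex set $\{1, \dots, n\}$ yields a 3-uniform hypergraph, and edge-counting bounds on it translate into length lower bounds.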

    Relaxed Locally Correctable Codes with Improved Parameters

    Locally decodable codes (LDCs) are error-correcting codes $C: \Sigma^k \to \Sigma^n$ that admit a local decoding algorithm that recovers each individual bit of the message by querying only a few bits from a noisy codeword. An important question in this line of research is to understand the optimal trade-off between the query complexity of LDCs and their block length. Despite the importance of these objects, the best known constructions of constant-query LDCs have super-polynomial length, and there is a significant gap between the best constructions and the known lower bounds in terms of the block length. For many applications it suffices to consider the weaker notion of relaxed LDCs (RLDCs), which allows the local decoding algorithm to abort if, by querying a few bits, it detects that the input is not a codeword. This relaxation turned out to allow decoding algorithms with constant query complexity for codes with almost linear length. Specifically, [Ben-Sasson et al., 2006] constructed a $q$-query RLDC that encodes a message of length $k$ using a codeword of block length $n = O_q(k^{1+O(1/\sqrt{q})})$ for any sufficiently large $q$, where $O_q(\cdot)$ hides a constant that depends only on $q$. In this work we improve the parameters of [Ben-Sasson et al., 2006] by constructing a $q$-query RLDC that encodes a message of length $k$ using a codeword of block length $O_q(k^{1+O(1/q)})$ for any sufficiently large $q$. This construction matches (up to a multiplicative constant factor) the lower bounds of [Katz and Trevisan, 2000; Woodruff, 2007] for constant-query LDCs, thus making progress toward understanding the gap between LDCs and RLDCs in the constant-query regime. In fact, our construction extends to the stronger notion of relaxed locally correctable codes (RLCCs), introduced in [Gur et al., 2018], where, given a noisy codeword, the correcting algorithm either recovers each individual bit of the codeword by reading only a small part of the input, or aborts if the input is detected to be corrupt.
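    To make the relaxation concrete, here is a schematic of the RLDC decoder interface (purely illustrative: the helper `views_for` and the abort convention are hypothetical stand-ins, not this paper's construction). The key point is the third possible output, abort, when the few queried bits are inconsistent:

        # Schematic relaxed local decoder: outputs the message bit or ABORT.
        # `views_for(i)` is a HYPOTHETICAL helper yielding a few small query
        # sets, each with a function predicting x_i from the queried bits.
        ABORT = None

        def relaxed_decode(query, i, views_for):
            votes = []
            for positions, predict in views_for(i):
                bits = [query(p) for p in positions]   # read only a few coordinates
                votes.append(predict(bits))
            if len(set(votes)) > 1:
                return ABORT      # local views disagree: input looks corrupted
            return votes[0]       # consistent views: output the decoded bit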

    Smooth and Strong PCPs

    Probabilistically checkable proofs (PCPs) can be verified based only on a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs:
    - A PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim.
    - A PCP is smooth if each location in a proof is queried with equal probability.
    We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard and Reed-Muller based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth. Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.
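    The Hadamard-based PCP mentioned above rests on the Blum-Luby-Rubinfeld linearity test, which also gives a standard illustration of smoothness (shown here as general background, not the paper's refined analysis): each of the test's three queries is individually uniform over the proof.

        import random

        def blr_test(f, n):
            """One round of the BLR linearity test for an oracle f: {0,1}^n -> {0,1}.

            Points are encoded as n-bit integers; accepts iff
            f(x) XOR f(y) == f(x XOR y) for uniformly random x, y.
            """
            x = random.randrange(1 << n)
            y = random.randrange(1 << n)
            # x, y, and x^y are each uniform over {0,1}^n (though not independent),
            # so every proof location is queried with equal probability: smoothness.
            return f(x) ^ f(y) == f(x ^ y)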