13 research outputs found

    Exponential Lower Bound for 2-Query Locally Decodable Codes via a Quantum Argument

    A locally decodable code encodes n-bit strings x in m-bit codewords C(x), in such a way that one can recover any bit x_i from a corrupted codeword by querying only a few bits of that word. We use a quantum argument to prove that LDCs with 2 classical queries need exponential length: m = 2^{Ω(n)}. Previously this was known only for linear codes (Goldreich et al. 02). Our proof shows that a 2-query LDC can be decoded with only 1 quantum query, and then proves an exponential lower bound for such 1-query locally quantum-decodable codes. We also show that q quantum queries allow more succinct LDCs than the best known LDCs with q classical queries. Finally, we give new classical lower bounds and quantum upper bounds for the setting of private information retrieval. In particular, we exhibit a quantum 2-server PIR scheme with O(n^{3/10}) qubits of communication, improving upon the O(n^{1/3}) bits of communication of the best known classical 2-server PIR.
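
    For readers new to the definition (this example is added for context and is not part of the abstract), the Hadamard code is the standard 2-query LDC whose length 2^n matches the exponential bound above. The sketch below, with all names chosen here for exposition, shows the encoding and the 2-query decoder.

    # Minimal sketch (illustration only): the Hadamard code as a 2-query LDC.
    # The message has n bits; the codeword has m = 2^n bits, one per y in {0,1}^n,
    # matching the m = 2^{Omega(n)} lower bound for 2 classical queries.
    import random

    def bit_vector(y, n):
        """The n bits of the integer y, least significant first."""
        return [(y >> j) & 1 for j in range(n)]

    def hadamard_encode(x):
        """C(x)[y] = <x, y> mod 2 for every y in {0,1}^n."""
        n = len(x)
        return [sum(a * b for a, b in zip(x, bit_vector(y, n))) % 2 for y in range(2 ** n)]

    def decode_bit(codeword, i, n):
        """Recover x_i with 2 queries: C(y) xor C(y xor e_i) = <x, e_i> = x_i."""
        y = random.randrange(2 ** n)        # uniformly random query position
        return codeword[y] ^ codeword[y ^ (1 << i)]

    if __name__ == "__main__":
        n, delta, i = 8, 0.05, 3
        x = [random.randint(0, 1) for _ in range(n)]
        cw = hadamard_encode(x)
        for pos in random.sample(range(len(cw)), int(delta * len(cw))):
            cw[pos] ^= 1                    # corrupt a delta fraction of the codeword
        estimates = [decode_bit(cw, i, n) for _ in range(2000)]
        # Each of the two queries is uniform, so success probability is >= 1 - 2*delta.
        print("true bit:", x[i], "empirical success rate:",
              sum(e == x[i] for e in estimates) / len(estimates))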

    Three Query Locally Decodable Codes with Higher Correctness Require Exponential Length

    Locally decodable codes are error-correcting codes with the extra property that, in order to retrieve the correct value of just one position of the input with high probability, it is sufficient to read a small number of positions of the corresponding, possibly corrupted codeword. A breakthrough result by Yekhanin showed that 3-query linear locally decodable codes may have subexponential length. The construction of Yekhanin, and the three-query constructions that followed, achieve correctness only up to a certain limit, which is 1 - 3δ for nonbinary codes, where an adversary is allowed to corrupt up to a δ fraction of the codeword. The largest correctness for a subexponential-length 3-query binary code is achieved in a construction by Woodruff, and it is below 1 - 3δ. We show that achieving slightly larger correctness (as a function of δ) requires exponential codeword length for 3-query codes. Previously, no lower bounds larger than quadratic were known for locally decodable codes with more than 2 queries, even in the case of 3-query linear codes. Our results hold for linear codes over arbitrary finite fields and for binary nonlinear codes. Considering a larger number of queries, we obtain lower bounds for q-query codes for q > 3, under certain assumptions on the decoding algorithm that have been commonly used in previous constructions. We also prove bounds on the largest correctness achievable by these decoding algorithms, regardless of the length of the code. Our results explain the limitations on correctness in previous constructions using such decoding algorithms. In addition, our results imply trade-offs on the parameters of error-correcting data structures.
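
    To make the 1 - 3δ barrier concrete (this reasoning is added here and assumes a perfectly smooth decoder, i.e., one whose three queries are each uniformly distributed over the codeword and which always answers correctly when none of its queries hits a corrupted position):

        \Pr[\text{failure}] \;\le\; \Pr[\text{some queried position is corrupted}]
        \;\le\; \sum_{j=1}^{3} \Pr[\text{query } j \text{ is corrupted}] \;\le\; 3\delta,
        \qquad\text{so}\qquad \Pr[\text{success}] \;\ge\; 1 - 3\delta .

    The result above says that pushing correctness noticeably beyond this natural limit (as a function of δ) forces exponential codeword length for 3-query codes.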

    Query-Efficient Locally Decodable Codes of Subexponential Length

    We develop the algebraic theory behind the constructions of Yekhanin (2008) and Efremenko (2009), in an attempt to understand the "algebraic niceness" phenomenon in ℤ_m. We show that every integer m = pq = 2^t - 1, where p, q and t are prime, possesses the same good algebraic property as m = 511 that allows savings in query complexity. We identify 50 numbers of this form by computer search, which, together with 511, are then applied to gain improvements on query complexity via Itoh and Suzuki's composition method. More precisely, we construct a 3^⌈r/2⌉-query LDC for every positive integer r < 104 and a ⌊(3/4)^51 · 2^r⌋-query LDC for every integer r ≥ 104, both of length N_r, improving on the 2^r queries used by Efremenko (2009) and the 3 · 2^(r-2) queries used by Itoh and Suzuki (2010). We also obtain new efficient private information retrieval (PIR) schemes from the new query-efficient LDCs.
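
    The "numbers of this form" mentioned above can be found, at least for small exponents, by a direct search. The toy sketch below is added for illustration only: it checks just the arithmetic condition m = pq = 2^t - 1 with p, q, t prime by trial division, whereas the search reported in the abstract covers a far larger range.

    def is_prime(k):
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True

    def as_product_of_two_primes(m):
        """Return (p, q) if m = p * q with p, q prime, otherwise None."""
        p = 2
        while p * p <= m:
            if m % p == 0:
                q = m // p
                return (p, q) if is_prime(q) else None   # p is prime (smallest factor of m)
            p += 1
        return None                                      # m itself is prime: not a product of two

    for t in range(2, 40):                               # trial division limits this toy to small t
        if not is_prime(t):
            continue
        m = 2 ** t - 1
        pq = as_product_of_two_primes(m)
        if pq is not None:
            print(f"t = {t}: 2^{t} - 1 = {m} = {pq[0]} * {pq[1]}")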

    Relaxed Locally Correctable Codes with Improved Parameters

    Locally decodable codes (LDCs) are error-correcting codes C: Σ^k → Σ^n that admit a local decoding algorithm that recovers each individual bit of the message by querying only a few bits from a noisy codeword. An important question in this line of research is to understand the optimal trade-off between the query complexity of LDCs and their block length. Despite the importance of these objects, the best known constructions of constant-query LDCs have super-polynomial length, and there is a significant gap between the best constructions and the known lower bounds in terms of the block length. For many applications it suffices to consider the weaker notion of relaxed LDCs (RLDCs), which allows the local decoding algorithm to abort if, by querying a few bits, it detects that the input is not a codeword. This relaxation turned out to allow decoding algorithms with constant query complexity for codes of almost linear length. Specifically, [Ben-Sasson et al., 2006] constructed a q-query RLDC that encodes a message of length k using a codeword of block length n = O_q(k^{1+O(1/√q)}) for any sufficiently large q, where O_q(·) hides a constant that depends only on q. In this work we improve the parameters of [Ben-Sasson et al., 2006] by constructing a q-query RLDC that encodes a message of length k using a codeword of block length O_q(k^{1+O(1/q)}) for any sufficiently large q. This construction matches (up to a multiplicative constant factor) the lower bounds of [Katz and Trevisan, 2000; Woodruff, 2007] for constant-query LDCs, thus making progress toward understanding the gap between LDCs and RLDCs in the constant-query regime. In fact, our construction extends to the stronger notion of relaxed locally correctable codes (RLCCs), introduced in [Gur et al., 2018], where, given a noisy codeword, the correcting algorithm either recovers each individual bit of the codeword by reading only a small part of the input, or aborts if the input is detected to be corrupted.
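
    For context (added here; the lower bound is quoted in its commonly cited Katz-Trevisan form, which is an assumption of this note rather than a statement of the abstract), the improvement and the bound it matches line up as

        n = O_q\!\left(k^{\,1 + O(1/\sqrt{q})}\right) \;\longrightarrow\; n = O_q\!\left(k^{\,1 + O(1/q)}\right),
        \qquad\text{versus the $q$-query LDC lower bound}\qquad
        n = \Omega\!\left(k^{\,1 + \frac{1}{q-1}}\right) = k^{\,1 + \Theta(1/q)} .

    Both exponents have the form 1 + Θ(1/q), which is the sense in which the new RLDC construction matches the constant-query LDC lower bounds up to a multiplicative constant in the exponent's 1/q term.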

    A Lower Bound for Relaxed Locally Decodable Codes

    A locally decodable code (LDC) C: {0,1}^k → {0,1}^n is an error-correcting code wherein individual bits of the message can be recovered by querying only a few bits of a noisy codeword. LDCs have found a myriad of applications both in theory and in practice, ranging from probabilistically checkable proofs to distributed storage. However, despite nearly two decades of extensive study, the best known constructions of O(1)-query LDCs have super-polynomial blocklength. The notion of relaxed LDCs is a natural relaxation of LDCs, which aims to bypass the foregoing barrier by requiring local decoding of nearly all individual message bits, yet allowing decoding failure (but not error) on the rest. State-of-the-art constructions of O(1)-query relaxed LDCs achieve blocklength n = O(k^{1+γ}) for an arbitrarily small constant γ. We prove a lower bound which shows that O(1)-query relaxed LDCs cannot achieve blocklength n = k^{1+o(1)}. This resolves an open problem raised by Goldreich in 2004.

    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally testable and locally decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correcting algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.

    Optimal lower bounds for 2-query locally decodable linear codes

    This paper presents essentially optimal lower bounds on the size of linear codes C: {0,1}^n → {0,1}^m which have the property that, for constants δ, ε > 0, any bit of the message can be recovered with probability 1/2 + ε by an algorithm reading only 2 bits of a codeword corrupted in up to δm positions. Such codes are known to be applicable to, among other things, the construction and analysis of information-theoretically secure private information retrieval schemes. In this work, we show that m must be at least 2^{Ω(δn/(1-2ε))}. Our results extend work by Goldreich, Karloff, Schulman, and Trevisan [GKST02], which is based heavily on methods developed by Katz and Trevisan [KT00]. The key to our improved bounds is an analysis which bypasses an intermediate reduction used in both prior works. The resulting improvement in the efficiency of the overall analysis is sufficient to achieve a lower bound optimal within a constant factor in the exponent. A construction of a locally decodable linear code matching this bound is presented.
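
    As an added sanity check (using the classical Hadamard-style 2-query decoder sketched near the top of this list as a stand-in; the matching construction presented in the paper may differ), the stated parameters fit together as follows. Each of the decoder's two query positions is uniformly distributed, so against δm corruptions

        \Pr[\text{decoder errs}] \;\le\; \Pr[C(y)\ \text{corrupted}] + \Pr[C(y \oplus e_i)\ \text{corrupted}] \;\le\; 2\delta,

    giving success probability at least 1 - 2δ ≥ 1/2 + ε whenever δ ≤ (1 - 2ε)/4, while the codeword length 2^n is within a constant factor in the exponent of the lower bound 2^{Ω(δn/(1-2ε))} for any constant δ and ε.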