    Exponential Lower Bound for 2-Query Locally Decodable Codes via a Quantum Argument

    A locally decodable code (LDC) encodes n-bit strings $x$ in $m$-bit codewords $C(x)$, in such a way that one can recover any bit $x_i$ from a corrupted codeword by querying only a few bits of that word. We use a quantum argument to prove that LDCs with 2 classical queries need exponential length: $m = 2^{\Omega(n)}$. Previously this was known only for linear codes (Goldreich et al., 2002). Our proof shows that a 2-query LDC can be decoded with only 1 quantum query, and then proves an exponential lower bound for such 1-query locally quantum-decodable codes. We also show that $q$ quantum queries allow more succinct LDCs than the best known LDCs with $q$ classical queries. Finally, we give new classical lower bounds and quantum upper bounds for the setting of private information retrieval (PIR). In particular, we exhibit a quantum 2-server PIR scheme with $O(n^{3/10})$ qubits of communication, improving upon the $O(n^{1/3})$ bits of communication of the best known classical 2-server PIR. Comment: 16 pages, LaTeX. 2nd version: title changed, large parts rewritten, some results added or improved.
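
    The canonical example behind this bound is the Hadamard code, a 2-query LDC of length $m = 2^n$: to recover $x_i$, the decoder picks a uniformly random $y$, queries positions $y$ and $y \oplus e_i$, and XORs the two answers, since $\langle x, y \rangle \oplus \langle x, y \oplus e_i \rangle = x_i$. A minimal Python sketch (function names are illustrative, not from the paper):

        import random

        def hadamard_encode(x_bits):
            """Encode n bits as all 2^n parities <x, y> mod 2 (length m = 2^n)."""
            x = sum(b << j for j, b in enumerate(x_bits))
            return [bin(x & y).count("1") % 2 for y in range(1 << len(x_bits))]

        def decode_bit(codeword, n, i):
            """2-query local decoder: query y and y XOR e_i, XOR the answers."""
            y = random.randrange(1 << n)
            return codeword[y] ^ codeword[y ^ (1 << i)]

        x = [1, 0, 1, 1]
        C = hadamard_encode(x)
        C[5] ^= 1  # adversarially corrupt one position
        # Each decode errs only if one of its two random queries hits the flip:
        print([decode_bit(C, len(x), i) for i in range(len(x))])  # likely [1, 0, 1, 1]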

    Improved Lower Bounds for Locally Decodable Codes and Private Information Retrieval

    We prove new lower bounds for locally decodable codes and private information retrieval. We show that a 2-query LDC encoding n-bit strings over an $l$-bit alphabet, where the decoder uses only $b$ bits of each queried position of the codeword, needs code length $m = \exp\bigl(\Omega\bigl(n / (2^b \sum_{i=0}^{b} \binom{l}{i})\bigr)\bigr)$. Similarly, a 2-server PIR scheme with an n-bit database and $t$-bit queries, where the user needs only $b$ bits from each of the two $l$-bit answers, unknown to the servers, satisfies $t = \Omega\bigl(n / (2^b \sum_{i=0}^{b} \binom{l}{i})\bigr)$. This implies that several known PIR schemes are close to optimal. Our results generalize those of Goldreich et al., who proved roughly the same bounds for linear LDCs and PIRs. Like earlier work by Kerenidis and de Wolf, our classical lower bounds are proved using quantum computational techniques. In particular, we give a tight analysis of how well a 2-input function can be computed from a quantum superposition of both inputs. Comment: 12 pages, LaTeX. To appear in ICALP '05.
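
    For intuition, instantiating the LDC bound over the binary alphabet with $l = b = 1$ (the decoder reads each queried bit in full) gives $\sum_{i=0}^{1} \binom{1}{i} = 2$, so

        \[ m = \exp\left(\Omega\left(\frac{n}{2^1 \cdot 2}\right)\right) = 2^{\Omega(n)}, \]

    recovering the exponential lower bound of the previous paper as a special case.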

    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally-testable and locally-decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correcting algorithms. They are related to private information retrieval (a type of cryptographic protocol), they are used in average-case complexity, and they are used to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
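
    A standard concrete instance of local testability (a textbook example, not specific to this survey) is the Blum-Luby-Rubinfeld linearity test for the Hadamard code: pick random $y, z$, make 3 queries, and check $w(y) \oplus w(z) = w(y \oplus z)$; words far from every codeword violate some check with constant probability per trial. A short Python sketch:

        import random

        def blr_test(word, n, trials=30):
            """3-query-per-trial linearity test: rejects words that are far
            from every linear function (i.e., every Hadamard codeword) w.h.p."""
            for _ in range(trials):
                y = random.randrange(1 << n)
                z = random.randrange(1 << n)
                if word[y] ^ word[z] != word[y ^ z]:
                    return False  # a linearity constraint is violated
            return True

        n = 4
        codeword = [bin(0b1011 & y).count("1") % 2 for y in range(1 << n)]
        print(blr_test(codeword, n))  # True: exact codewords always pass
        noise = [random.randint(0, 1) for _ in range(1 << n)]
        print(blr_test(noise, n))     # almost surely False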

    Query-Efficient Locally Decodable Codes of Subexponential Length

    We develop the algebraic theory behind the constructions of Yekhanin (2008) and Efremenko (2009), in an attempt to understand the "algebraic niceness" phenomenon in $\mathbb{Z}_m$. We show that every integer $m = pq = 2^t - 1$, where $p$, $q$ and $t$ are prime, possesses the same good algebraic property as $m = 511$ that allows savings in query complexity. We identify 50 numbers of this form by computer search, which, together with 511, are then applied to gain improvements on query complexity via Itoh and Suzuki's composition method. More precisely, we construct a $3^{\lceil r/2 \rceil}$-query LDC for every positive integer $r < 104$ and a $\lfloor (3/4)^{51} \cdot 2^{r} \rfloor$-query LDC for every integer $r \geq 104$, both of length $N_r$, improving the $2^r$ queries used by Efremenko (2009) and the $3 \cdot 2^{r-2}$ queries used by Itoh and Suzuki (2010). We also obtain new efficient private information retrieval (PIR) schemes from the new query-efficient LDCs. Comment: to appear in Computational Complexity.
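
    The computer search mentioned above is easy to reproduce in outline. This hedged sketch (using sympy, which may differ from the authors' tooling, and limited to small $t$ since factoring $2^t - 1$ gets expensive) looks for $m = pq = 2^t - 1$ with $p$, $q$ and $t$ all prime:

        from sympy import isprime, factorint

        def search(max_t=60):
            """Find m = p*q = 2^t - 1 with p, q and t all prime."""
            hits = []
            for t in range(2, max_t + 1):
                if not isprime(t):
                    continue
                factors = factorint((1 << t) - 1)  # {prime: multiplicity}
                if len(factors) == 2 and sum(factors.values()) == 2:
                    p, q = sorted(factors)
                    hits.append((t, p, q))
            return hits

        for t, p, q in search():
            print(f"2^{t} - 1 = {p} * {q}")  # e.g. 2^11 - 1 = 23 * 89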

    High rate locally-correctable and locally-testable codes with sub-polynomial query complexity

    In this work, we construct the first locally-correctable codes (LCCs) and locally-testable codes (LTCs) with constant rate, constant relative distance, and sub-polynomial query complexity. Specifically, we show that there exist binary LCCs and LTCs with block length $n$, constant rate (which can even be taken arbitrarily close to 1), constant relative distance, and query complexity $\exp(\tilde{O}(\sqrt{\log n}))$. Previously such codes were known to exist only with $\Omega(n^{\beta})$ query complexity (for constant $\beta > 0$), and there were several, quite different, constructions known. Our codes are based on a general distance-amplification method of Alon and Luby (1996). We show that this method interacts well with local correctors and testers, and we obtain our main results by applying it to suitably constructed LCCs and LTCs in the non-standard regime of sub-constant relative distance. Along the way, we also construct LCCs and LTCs over large alphabets, with the same query complexity $\exp(\tilde{O}(\sqrt{\log n}))$, which additionally have the property of approaching the Singleton bound: they have almost the best-possible relationship between their rate and distance. This has the surprising consequence that asking for a large-alphabet error-correcting code to further be an LCC or LTC with $\exp(\tilde{O}(\sqrt{\log n}))$ query complexity does not require any sacrifice in terms of rate and distance! Such a result was previously not known for any $o(n)$ query complexity. Our results on LCCs also immediately give locally-decodable codes (LDCs) with the same parameters.
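
    For reference, "approaching the Singleton bound" refers to the standard rate-distance tradeoff: any code of rate $R$ and relative distance $\delta$ satisfies

        \[ R \le 1 - \delta + o(1), \]

    and (under the usual reading of "approaching") the large-alphabet codes here achieve rate at least $1 - \delta - \varepsilon$ for any desired constant $\varepsilon > 0$.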

    Locally Decodable Quantum Codes

    We study a quantum analogue of locally decodable error-correcting codes. A $q$-query locally decodable quantum code encodes $n$ classical bits in an $m$-qubit state, in such a way that each of the encoded bits can be recovered with high probability by a measurement on at most $q$ qubits of the quantum code, even if a constant fraction of its qubits have been corrupted adversarially. We show that such a quantum code can be transformed into a classical $q$-query locally decodable code of the same length that can be decoded well on average (albeit with smaller success probability and noise tolerance). This shows, roughly speaking, that $q$-query quantum codes are not significantly better than $q$-query classical codes, at least for constant or small $q$. Comment: 15 pages, LaTeX.

    Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation

    We construct efficient data structures that are resilient against a constant fraction of adversarial noise. Our model requires that the decoder answers most queries correctly with high probability, and that for the remaining queries it either answers correctly or declares "don't know" (again with high probability). Furthermore, if there is no noise on the data structure, it answers all queries correctly with high probability. Our model is the common generalization of a model recently proposed by de Wolf and the notion of "relaxed locally decodable codes" developed in the PCP literature. We measure the efficiency of a data structure in terms of its length (the number of bits in its representation) and its query-answering time (the number of bit-probes to the possibly corrupted representation). In this work, we study two data structure problems: membership and polynomial evaluation. We show that both problems admit constructions that are simultaneously efficient and error-correcting. Comment: An abridged version of this paper appears in STACS 2010.
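
    As a toy illustration of the model (not the paper's construction, which is far more efficient): replicate each bit of the set's characteristic vector, answer by majority vote over a few bit-probes, and declare "don't know" when the probed copies disagree too much.

        R = 9  # replication factor; tolerates a small constant fraction of flips

        def encode_set(n, s):
            """Naive error-correcting membership structure of length R*n."""
            return [int(i in s) for i in range(n) for _ in range(R)]

        def query(rep, i):
            """R bit-probes: clear majority -> commit to an answer; else refuse."""
            ones = sum(rep[R * i : R * (i + 1)])
            if ones >= 0.8 * R:
                return 1
            if ones <= 0.2 * R:
                return 0
            return "don't know"  # too much local corruption to commit

        rep = encode_set(50, {3, 7, 42})
        rep[R * 7] ^= 1                      # one adversarial bit flip
        print(query(rep, 7), query(rep, 8))  # 1 0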

    Error-Correcting Data Structures

    We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This new model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (storing subsets of size $s$ from a universe of size $n$) is closely related to the optimal length of locally decodable codes for $s$-bit strings. Comment: 15 pages, LaTeX; an abridged version will appear in the Proceedings of the STACS 2009 conference.

    Outlaw distributions and locally decodable codes

    Locally decodable codes (LDCs) are error-correcting codes that allow for decoding of a single message bit using a small number of queries to a corrupted encoding. Despite decades of study, the optimal trade-off between query complexity and codeword length is far from understood. In this work, we give a new characterization of LDCs using distributions over Boolean functions whose expectation is hard to approximate (in $L_\infty$ norm) with a small number of samples. We coin the term "outlaw distributions" for such distributions, since they "defy" the Law of Large Numbers. We show that the existence of outlaw distributions over sufficiently "smooth" functions implies the existence of constant-query LDCs, and vice versa. We give several candidates for outlaw distributions over smooth functions, coming from finite-field incidence geometry, additive combinatorics, and hypergraph (non)expanders. We also prove a useful lemma showing that (smooth) LDCs which are only required to work on average, over a random message and a random message index, can be turned into true LDCs at the cost of only constant factors in the parameters. Comment: A preliminary version of this paper appeared in the proceedings of ITCS 2017.
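
    Stated roughly (a hedged paraphrase, not the paper's exact formulation): a distribution $\mathcal{D}$ over bounded functions $f\colon \{0,1\}^n \to [-1,1]$ is an outlaw if a small number $q$ of independent samples $f_1, \dots, f_q \sim \mathcal{D}$ fails to approximate the mean uniformly, i.e.

        \[ \Bigl\| \frac{1}{q} \sum_{j=1}^{q} f_j - \mathbb{E}_{f \sim \mathcal{D}}[f] \Bigr\|_{\infty} \ge \varepsilon \quad \text{with non-negligible probability.} \]

    The Law of Large Numbers forces this probability to vanish as $q$ grows, which is why such distributions can "defy" it only at small sample sizes.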