
    Locally Decodable Codes with Randomized Encoding

    We initiate a study of locally decodable codes with randomized encoding. Standard locally decodable codes are error-correcting codes with a deterministic encoding function and a randomized decoding function, such that any desired message bit can be recovered with good probability by querying only a small number of positions in the corrupted codeword. This allows one to recover any message bit very efficiently, in sub-linear or even logarithmic time. Besides this straightforward application, locally decodable codes have found many other applications, such as private information retrieval, secure multiparty computation, and average-case complexity. However, despite extensive research, the tradeoff between the rate of the code and the number of queries remains disappointing. For example, the best known constructions still need super-polynomially long codewords even with a logarithmic number of queries, and need a polynomial number of queries to achieve constant rate. In this paper, we show that by using a randomized encoding, in several models we can achieve a significantly better rate-query tradeoff. In addition, our codes work both for standard Hamming errors and for the more general and harder edit errors.
    Comment: 23 pages
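
    To make the query model concrete, here is a minimal Python sketch (an illustration of the standard definition, not a construction from this paper) of the Hadamard code, the textbook 2-query locally decodable code: position a of the codeword stores the parity <a, x>, and bit x_i is recovered by XORing the two positions a and a XOR e_i for a uniformly random a. Its codeword length of 2^n also illustrates the poor rate-query tradeoff the abstract refers to.

        import random

        def parity(a, x):
            # inner product over GF(2) of the bitmasks a and x
            return bin(a & x).count("1") % 2

        def hadamard_encode(x, n):
            # position a of the codeword holds <a, x>, for every a in {0,1}^n
            return [parity(a, x) for a in range(2 ** n)]

        def decode_bit(codeword, i, n):
            # 2-query local decoder: <a, x> XOR <a XOR e_i, x> = x_i.
            # If a delta-fraction of positions is corrupted, each of the two
            # (individually uniform) queries hits a corrupted position with
            # probability <= delta, so by a union bound the answer is correct
            # with probability >= 1 - 2*delta.
            a = random.randrange(2 ** n)
            return codeword[a] ^ codeword[a ^ (1 << i)]

        n, x = 4, 0b1011                 # message bits x_0..x_3 = 1, 1, 0, 1
        cw = hadamard_encode(x, n)
        for pos in (0, 3, 7):            # adversarially corrupt 3 of 16 positions
            cw[pos] ^= 1
        print(decode_bit(cw, 1, n))      # x_1 = 1 with probability >= 1 - 2*(3/16)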

    Locally Decodable Quantum Codes

    We study a quantum analogue of locally decodable error-correcting codes. A q-query locally decodable quantum code encodes n classical bits in an m-qubit state, in such a way that each of the encoded bits can be recovered with high probability by a measurement on at most q qubits of the quantum code, even if a constant fraction of its qubits has been corrupted adversarially. We show that such a quantum code can be transformed into a classical q-query locally decodable code of the same length that can be decoded well on average (albeit with smaller success probability and noise tolerance). This shows, roughly speaking, that q-query quantum codes are not significantly better than q-query classical codes, at least for constant or small q.
    Comment: 15 pages, LaTeX
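
    Spelling out the object just described with explicit parameters (the names \delta and \varepsilon are mine, following the usual classical convention): a (q, \delta, \varepsilon) locally decodable quantum code maps each x \in \{0,1\}^n to an m-qubit state C(x) such that, for every index i and every m-qubit state \sigma obtainable from C(x) by arbitrarily altering at most \delta m qubits, some measurement acting on at most q qubits of \sigma outputs x_i with probability at least 1/2 + \varepsilon.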

    High rate locally-correctable and locally-testable codes with sub-polynomial query complexity

    In this work, we construct the first locally-correctable codes (LCCs) and locally-testable codes (LTCs) with constant rate, constant relative distance, and sub-polynomial query complexity. Specifically, we show that there exist binary LCCs and LTCs with block length n, constant rate (which can even be taken arbitrarily close to 1), constant relative distance, and query complexity exp(Õ(√(log n))). Previously such codes were known to exist only with Ω(n^β) query complexity (for constant β > 0), and there were several, quite different, constructions known. Our codes are based on a general distance-amplification method of Alon and Luby [AL96]. We show that this method interacts well with local correctors and testers, and obtain our main results by applying it to suitably constructed LCCs and LTCs in the non-standard regime of sub-constant relative distance. Along the way, we also construct LCCs and LTCs over large alphabets, with the same query complexity exp(Õ(√(log n))), which additionally have the property of approaching the Singleton bound: they have almost the best-possible relationship between their rate and distance. This has the surprising consequence that asking for a large-alphabet error-correcting code to further be an LCC or LTC with exp(Õ(√(log n))) query complexity does not require any sacrifice in terms of rate and distance! Such a result was previously not known for any o(n) query complexity. Our results on LCCs also immediately give locally-decodable codes (LDCs) with the same parameters.
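
    For reference (a standard fact, not specific to this paper), "approaching the Singleton bound" means achieving essentially the best possible rate-distance tradeoff: any code of rate R = k/n and minimum distance d satisfies k \le n - d + 1, i.e.

        R \le 1 - \delta + \frac{1}{n}, \qquad \delta = \frac{d}{n},

    so rate and relative distance trade off as R + \delta \le 1 + o(1). The large-alphabet codes above achieve R \ge 1 - \delta - \epsilon for any desired constant \epsilon > 0 while simultaneously being locally correctable and testable.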

    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally-testable and locally-decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correcting algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.

    A Hypercontractive Inequality for Matrix-Valued Functions with Applications to Quantum Computing and LDCs

    The Bonami-Beckner hypercontractive inequality is a powerful tool in the Fourier analysis of real-valued functions on the Boolean cube. In this paper we present a version of this inequality for matrix-valued functions on the Boolean cube. Its proof is based on a powerful inequality by Ball, Carlen, and Lieb. We also present a number of applications. First, we analyze maps that encode n classical bits into m qubits, in such a way that each set of k bits can be recovered with some probability by an appropriate measurement on the quantum encoding; we show that if m < 0.7n, then the success probability is exponentially small in k. This result may be viewed as a direct product version of Nayak's quantum random access code bound. It in turn implies strong direct product theorems for the one-way quantum communication complexity of Disjointness and other problems. Second, we prove that error-correcting codes that are locally decodable with 2 queries require length exponential in the length of the encoded string. This gives what is arguably the first "non-quantum" proof of a result originally derived by Kerenidis and de Wolf using quantum information theory, and answers a question by Trevisan.
    Comment: This is the full version of a paper that will appear in the proceedings of the IEEE FOCS 08 conference.
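
    For orientation, the scalar inequality being generalized can be stated as follows (one standard form; not taken from this paper). For f : \{-1,1\}^n \to \mathbb{R}, with noise operator T_\rho f = \sum_S \rho^{|S|} \hat{f}(S)\,\chi_S and norms taken with respect to the uniform distribution on the cube,

        \|T_\rho f\|_q \le \|f\|_p \qquad \text{whenever } 1 \le p \le q \text{ and } \rho \le \sqrt{(p-1)/(q-1)}.

    Roughly speaking, the matrix-valued version of the paper replaces the absolute values implicit in these norms by matrix (Schatten) norms.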

    Error-Correcting Data Structures

    We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This new model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable codes for s-bit strings.
    Comment: 15 pages, LaTeX; an abridged version will appear in the Proceedings of the STACS 2009 conference.
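
    The connection in the last sentence can be made concrete in one direction: encoding the n-bit characteristic vector of S with any locally decodable code immediately yields an error-correcting Membership structure, since "is i in S?" is exactly a local decoding of bit i. Below is a toy, deliberately wasteful Python sketch using the exponential-length Hadamard LDC (the class and method names are mine, purely for illustration):

        import random

        def parity(a, x):
            # inner product over GF(2) of the bitmasks a and x
            return bin(a & x).count("1") % 2

        class ECMembership:
            # Store S, a subset of {0, ..., n-1}, as the Hadamard encoding of
            # its characteristic vector; answer membership with 2 probes,
            # correctly with probability >= 1 - 2*delta when a delta-fraction
            # of the cells is adversarially corrupted.
            def __init__(self, S, n):
                self.n = n
                chi = sum(1 << i for i in S)       # characteristic vector as a bitmask
                self.cells = [parity(a, chi) for a in range(2 ** n)]

            def member(self, i):
                a = random.randrange(2 ** self.n)
                return bool(self.cells[a] ^ self.cells[a ^ (1 << i)])

        ds = ECMembership({0, 2}, n=4)
        ds.cells[5] ^= 1                            # corrupt one cell
        print(ds.member(2), ds.member(3))           # True False, with high probability

    Note that this naive reduction uses an LDC for n-bit strings; the point of the paper's result is that for sets of size s much smaller than n, the optimal length is governed by LDCs for s-bit strings, so this construction is far from optimal.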

    Improved Lower Bounds for Locally Decodable Codes and Private Information Retrieval

    We prove new lower bounds for locally decodable codes and private information retrieval. We show that a 2-query LDC encoding n-bit strings over an l-bit alphabet, where the decoder only uses b bits of each queried position of the codeword, needs code length m = exp(Ω(n / (2^b Σ_{i=0}^{b} (l choose i)))). Similarly, a 2-server PIR scheme with an n-bit database and t-bit queries, where the user only needs b bits from each of the two l-bit answers, unknown to the servers, satisfies t = Ω(n / (2^b Σ_{i=0}^{b} (l choose i))). This implies that several known PIR schemes are close to optimal. Our results generalize those of Goldreich et al., who proved roughly the same bounds for linear LDCs and PIRs. As in earlier work by Kerenidis and de Wolf, our classical lower bounds are proved using quantum computational techniques. In particular, we give a tight analysis of how well a 2-input function can be computed from a quantum superposition of both inputs.
    Comment: 12 pages, LaTeX. To appear in ICALP '05.
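
    A quick sanity check on the strength of this bound (my arithmetic, not a claim from the paper): if the decoder is allowed to read the queried positions in full, i.e. b = l, then \sum_{i=0}^{l} \binom{l}{i} = 2^l and the LDC bound becomes

        m = \exp\!\left(\Omega\!\left(\frac{n}{4^{l}}\right)\right),

    so for any constant alphabet size l the code length is exponential in n, in line with the Kerenidis-de Wolf exponential lower bound for 2-query LDCs.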