Exponential Lower Bound for 2-Query Locally Decodable Codes via a Quantum Argument
A locally decodable code encodes n-bit strings x in m-bit codewords C(x), in
such a way that one can recover any bit x_i from a corrupted codeword by
querying only a few bits of that word. We use a quantum argument to prove that
LDCs with 2 classical queries need exponential length: m=2^{Omega(n)}.
Previously this was known only for linear codes (Goldreich et al. 02). Our
proof shows that a 2-query LDC can be decoded with only 1 quantum query, and
then proves an exponential lower bound for such 1-query locally
quantum-decodable codes. We also show that q quantum queries allow more
succinct LDCs than the best known LDCs with q classical queries. Finally, we
give new classical lower bounds and quantum upper bounds for the setting of
private information retrieval. In particular, we exhibit a quantum 2-server PIR
scheme with O(n^{3/10}) qubits of communication, improving upon the O(n^{1/3})
bits of communication of the best known classical 2-server PIR.
Comment: 16 pages LaTeX. 2nd version: title changed, large parts rewritten, some results added or improved.
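The object in question is easy to make concrete: the classic 2-query LDC is the Hadamard code, whose length m = 2^n matches the exponential lower bound above. A minimal sketch in Python (function names are ours), with the message as an integer whose low n bits carry the data:

```python
import random

def hadamard_encode(x, n):
    # Codeword of length 2^n: entry a is the inner product <x, a> over GF(2),
    # i.e. the parity of the message bits selected by mask a.
    return [bin(x & a).count("1") % 2 for a in range(2 ** n)]

def decode_bit(codeword, i, n, rng):
    # Two queries: positions a and a XOR e_i for a uniformly random a.
    # Their XOR equals <x, e_i> = x_i whenever both queried positions are
    # uncorrupted, which happens with probability >= 1 - 2*delta under
    # delta-fraction corruption.
    a = rng.randrange(2 ** n)
    return codeword[a] ^ codeword[a ^ (1 << i)]
```

Amplifying by majority vote over independent query pairs recovers each bit with high probability; the price is the codeword length 2^n, which the lower bound above shows is essentially unavoidable for 2 classical queries.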
Locally Decodable Quantum Codes
We study a quantum analogue of locally decodable error-correcting codes. A
q-query locally decodable quantum code encodes n classical bits in an m-qubit
state, in such a way that each of the encoded bits can be recovered with high
probability by a measurement on at most q qubits of the quantum code, even if a
constant fraction of its qubits have been corrupted adversarially. We show that
such a quantum code can be transformed into a classical q-query locally
decodable code of the same length that can be decoded well on average (albeit
with smaller success probability and noise-tolerance). This shows, roughly
speaking, that q-query quantum codes are not significantly better than q-query
classical codes, at least for constant or small q.
Comment: 15 pages, LaTeX.
Improved Lower Bounds for Locally Decodable Codes and Private Information Retrieval
We prove new lower bounds for locally decodable codes and private information
retrieval. We show that a 2-query LDC encoding n-bit strings over an l-bit
alphabet, where the decoder only uses b bits of each queried position of the
codeword, needs code length m = exp(Omega(n/(2^b Sum_{i=0}^b {l choose i}))).
Similarly, a 2-server PIR scheme with an n-bit database and t-bit queries,
where the user only needs b bits from each of the two l-bit answers, unknown to
the servers, satisfies t = Omega(n/(2^b Sum_{i=0}^b {l choose i})). This
implies that several known PIR schemes are close to optimal. Our results
generalize those of Goldreich et al. who proved roughly the same bounds for
linear LDCs and PIRs. Like earlier work by Kerenidis and de Wolf, our classical
lower bounds are proved using quantum computational techniques. In particular,
we give a tight analysis of how well a 2-input function can be computed from a
quantum superposition of both inputs.
Comment: 12 pages LaTeX. To appear in ICALP '0
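For contrast with the bounds above, the folklore 2-server PIR with linear communication is easy to state: the user sends a uniformly random subset S of [n] to one server and S with element i flipped to the other; each server returns the parity of its selected database bits, and the two answers XOR to x_i. A sketch (function names are ours), with the database as an n-bit integer:

```python
import random

def make_queries(n, i, rng):
    # S as a random n-bit mask for server 0, and S XOR {i} for server 1.
    # Each mask on its own is uniformly distributed, so neither server
    # learns anything about the index i.
    s = rng.getrandbits(n)
    return s, s ^ (1 << i)

def server_answer(db, mask):
    # Each server returns a single bit: the parity of its selected db bits.
    return bin(db & mask).count("1") % 2

def retrieve(db, n, i, rng):
    q0, q1 = make_queries(n, i, rng)
    # The two parities differ exactly in the contribution of bit i.
    return server_answer(db, q0) ^ server_answer(db, q1)
```

Communication is n + 1 bits per server; cube-root-type schemes such as the O(n^{1/3}) classical one mentioned above improve on exactly this.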
Error-Correcting Data Structures
We study data structures in the presence of adversarial noise. We want to
encode a given object in a succinct data structure that enables us to
efficiently answer specific queries about the object, even if the data
structure has been corrupted by a constant fraction of errors. This new model
is the common generalization of (static) data structures and locally decodable
error-correcting codes. The main issue is the tradeoff between the space used
by the data structure and the time (number of probes) needed to answer a query
about the encoded object. We prove a number of upper and lower bounds on
various natural error-correcting data structure problems. In particular, we
show that the optimal length of error-correcting data structures for the
Membership problem (where we want to store subsets of size s from a universe of
size n) is closely related to the optimal length of locally decodable codes for
s-bit strings.
Comment: 15 pages LaTeX; an abridged version will appear in the Proceedings of the STACS 2009 conference.
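As a toy instance of the space/probe tradeoff (our illustration, not a construction from the paper), the Membership problem can be made error-correcting with a simple repetition code:

```python
def encode_membership(subset, n, reps=5):
    # Data structure: characteristic vector of the subset, each bit stored
    # `reps` times.  Space: n * reps bits.
    bits = [1 if e in subset else 0 for e in range(n)]
    return [b for b in bits for _ in range(reps)]

def query_member(structure, e, reps=5):
    # `reps` probes plus a majority vote: the answer is correct as long as
    # fewer than reps/2 of this element's stored copies were corrupted.
    block = structure[e * reps:(e + 1) * reps]
    return sum(block) * 2 > reps
```

This tolerates scattered errors, but an adversary corrupting a constant fraction of the structure can wipe out one element's entire block; achieving resilience against that, at near-optimal length, is what the connection to locally decodable codes is about.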
Query-Efficient Locally Decodable Codes of Subexponential Length
We develop the algebraic theory behind the constructions of Yekhanin (2008)
and Efremenko (2009), in an attempt to understand the ``algebraic niceness''
phenomenon in Z_m. We show that every integer m = pq = 2^t - 1, where p, q
and t are prime, possesses the same good algebraic property as m = 511 that
allows savings in query complexity. We identify 50 numbers of this form by
computer search, which together with 511, are then applied to gain
improvements on query complexity via Itoh and Suzuki's composition method.
More precisely, we construct a 3^{ceil(r/2)}-query LDC for every positive
integer r <= 104 and a ceil((3/4)^{51} 2^r)-query LDC for every integer
r >= 104, both of length N_r, improving the 2^r queries used by Efremenko
(2009) and the 3 * 2^{r-2} queries used by Itoh and Suzuki (2010).
We also obtain new efficient private information retrieval (PIR) schemes from
the new query-efficient LDCs.
Comment: To appear in Computational Complexity.
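A search of the kind described above is easy to reproduce at small scale. The sketch below (our code; the exponent bound is an illustrative assumption) looks for integers m = pq = 2^t - 1 with p, q and t prime, treating 511 = 7 * 73 = 2^9 - 1 as the separate seed case, since 9 is not prime:

```python
def factor(m):
    # Trial division; adequate for the small exponents checked here.
    fs, d = [], 2
    while d * d <= m:
        while m % d == 0:
            fs.append(d)
            m //= d
        d += 1
    if m > 1:
        fs.append(m)
    return fs

def is_prime(k):
    return k > 1 and len(factor(k)) == 1

def good_moduli(max_t):
    # Integers m = 2^t - 1 that split as a product of two distinct primes,
    # with the exponent t itself prime.
    out = []
    for t in range(2, max_t + 1):
        if not is_prime(t):
            continue
        m = 2 ** t - 1
        fs = factor(m)
        if len(fs) == 2 and fs[0] != fs[1]:
            out.append((t, m, fs))
    return out
```

Up to t = 23 this finds 2047 = 23 * 89 and 8388607 = 47 * 178481; the other prime exponents in that range give Mersenne primes, which have only one prime factor and so do not qualify.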
Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct ``hard-core predicates'' for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
Outlaw distributions and locally decodable codes
Locally decodable codes (LDCs) are error correcting codes that allow for
decoding of a single message bit using a small number of queries to a corrupted
encoding. Despite decades of study, the optimal trade-off between query
complexity and codeword length is far from understood. In this work, we give a
new characterization of LDCs using distributions over Boolean functions whose
expectation is hard to approximate (in L_infinity norm) with a small number of
samples. We coin the term `outlaw distributions' for such distributions since
they `defy' the Law of Large Numbers. We show that the existence of outlaw
distributions over sufficiently `smooth' functions implies the existence of
constant query LDCs and vice versa. We give several candidates for outlaw
distributions over smooth functions coming from finite field incidence
geometry, additive combinatorics and from hypergraph (non)expanders.
We also prove a useful lemma showing that (smooth) LDCs which are only
required to work on average over a random message and a random message index
can be turned into true LDCs at the cost of only constant factors in the
parameters.
Comment: A preliminary version of this paper appeared in the proceedings of ITCS 201