Query-Efficient Locally Decodable Codes of Subexponential Length
We develop the algebraic theory behind the constructions of Yekhanin (2008)
and Efremenko (2009), in an attempt to understand the ``algebraic niceness''
phenomenon in $\mathbb{Z}_m$. We show that every integer $m = pq = 2^t - 1$,
where $p$, $q$ and $t$ are prime, possesses the same good algebraic property as
$m = 511$ that allows savings in query complexity. We identify 50 numbers of this
form by computer search, which together with 511, are then applied to gain
improvements on query complexity via Itoh and Suzuki's composition method. More
precisely, we construct a $3^{\lceil r/2 \rceil}$-query LDC for every positive
integer $r \le 104$ and a $(3/4)^{51} \cdot 2^r$-query
LDC for every integer $r > 104$, both of length
$\exp\left(\exp\left(O\left((\log n)^{1/r}(\log\log n)^{1-1/r}\right)\right)\right)$, improving the
$2^r$ queries used by Efremenko (2009) and $3 \cdot 2^{r-2}$ queries used by Itoh and
Suzuki (2010).
We also obtain new efficient private information retrieval (PIR) schemes from
the new query-efficient LDCs.
Comment: to appear in Computational Complexity
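The computer search mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' actual program: the function names, the primality test, and the search bound of $t \le 60$ are ours. It looks for Mersenne numbers $2^t - 1$ (with $t$ prime) that factor into exactly two primes, using the classical fact that every factor of $2^t - 1$ is congruent to $1 \bmod 2t$.

```python
from math import isqrt

def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin, valid for all n < 3.3 * 10**24."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def mersenne_semiprimes(t_max: int):
    """Yield (t, p, q) with t, p, q prime and 2^t - 1 = p * q."""
    for t in range(2, t_max + 1):
        if not is_prime(t):
            continue
        n = (1 << t) - 1
        # Every factor of 2^t - 1 (t prime) is = 1 mod 2t, so it suffices
        # to trial-divide over candidates of the form 2*k*t + 1.
        p, c = None, 2 * t + 1
        while c <= isqrt(n):
            if n % c == 0:
                p = c            # smallest factor found -> p is prime
                break
            c += 2 * t
        if p is None:            # 2^t - 1 is itself a Mersenne prime
            continue
        q = n // p
        if is_prime(q):
            yield t, p, q        # smallest example: 2^11 - 1 = 23 * 89

for t, p, q in mersenne_semiprimes(60):
    print(f"2^{t} - 1 = {p} * {q}")
```

Note that $511 = 2^9 - 1$ itself falls outside this form (9 is not prime), which is why the abstract lists it separately alongside the 50 numbers found by search.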
Three Query Locally Decodable Codes with Higher Correctness Require Exponential Length
Locally decodable codes are error correcting codes with the extra property that, in order to retrieve the correct value of just one position of the input with high probability, it is sufficient to read a small number of positions of the corresponding,
possibly corrupted codeword. A breakthrough result by Yekhanin showed that 3-query linear locally decodable codes may have subexponential length.
The construction of Yekhanin, and the three-query constructions that followed, achieve correctness only up to a certain limit, which is $1 - 6\delta$ for nonbinary codes, where an adversary is allowed to corrupt up to a $\delta$ fraction of the codeword. The largest correctness for a subexponential-length 3-query binary code is achieved in a construction by Woodruff, and it is below $1 - 3\delta$.
We show that achieving slightly larger correctness (as a function of $\delta$) requires exponential codeword length for 3-query codes. Previously, no superquadratic lower bounds were known for locally decodable codes with more than 2 queries, even in the case of 3-query linear codes. Our results hold for linear codes over arbitrary finite fields and for binary nonlinear codes.
Considering larger numbers of queries, we obtain lower bounds for $q$-query codes for $q > 3$, under certain assumptions on the decoding algorithm that have been commonly used in previous constructions. We also prove bounds on the largest correctness achievable by these decoding algorithms, regardless of the length of the code. Our results explain the limitations on correctness in previous constructions using such decoding algorithms.
In addition, our results imply tradeoffs on the parameters of error-correcting data structures.
2-Server PIR with sub-polynomial communication
A 2-server Private Information Retrieval (PIR) scheme allows a user to
retrieve the $i$-th bit of an $n$-bit database replicated among two servers
(which do not communicate) while not revealing any information about $i$ to
either server. In this work we construct a 1-round 2-server PIR with total
communication cost $n^{O(\sqrt{\log\log n / \log n})}$. This improves over the
currently known 2-server protocols which require $O(n^{1/3})$ communication and
matches the communication cost of known 3-server PIR schemes. Our improvement
comes from reducing the number of servers in existing protocols, based on
Matching Vector Codes, from 3 or 4 servers to 2. This is achieved by viewing
these protocols in an algebraic way (using polynomial interpolation) and
extending them using partial derivatives.
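The 2-server PIR model can be illustrated by the classic information-theoretic protocol with linear communication. This is only a sketch of the model, not the sub-polynomial scheme constructed in the paper: the user sends each server a subset of indices, and the two subsets differ exactly at the target index $i$, so each server alone sees a uniformly random subset and learns nothing about $i$.

```python
# Classic 2-server PIR with O(n) communication -- illustrates the model,
# NOT the sub-polynomial-communication scheme of the paper above.
import secrets

def query(i: int, n: int):
    """User: build the two queries for bit i of an n-bit database."""
    s1 = {j for j in range(n) if secrets.randbelow(2)}  # uniform random subset
    s2 = s1 ^ {i}                                       # differs only at index i
    return s1, s2

def answer(db, subset):
    """Server: XOR of the database bits indexed by the subset."""
    a = 0
    for j in subset:
        a ^= db[j]
    return a

def reconstruct(a1: int, a2: int) -> int:
    """User: the two answers differ exactly by x_i."""
    return a1 ^ a2

db = [1, 0, 1, 1, 0, 0, 1, 0]
for i in range(len(db)):
    s1, s2 = query(i, len(db))
    assert reconstruct(answer(db, s1), answer(db, s2)) == db[i]
```

Privacy follows because both $S_1$ and $S_2 = S_1 \triangle \{i\}$ are uniformly distributed subsets on their own; only their combination reveals $i$.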
New Constructions for Query-Efficient Locally Decodable Codes of Subexponential Length
A $(k, \delta, \varepsilon)$-locally decodable code $C: \mathbb{F}_q^{\,n} \to \mathbb{F}_q^{\,N}$
is an error-correcting code that encodes each message $x = (x_1, x_2, \ldots, x_n)$
to a codeword $C(x)$
and has the following property: For any $y \in \mathbb{F}_q^{\,N}$ such that
$d(y, C(x)) \le \delta N$ and each $1 \le i \le n$, the symbol $x_i$
of $x$ can be recovered with probability at least $1/2 + \varepsilon$ by
a randomized decoding algorithm looking only at $k$ coordinates of $y$.
The efficiency of a $(k, \delta, \varepsilon)$-locally decodable code is measured by the code length $N$ and the number $k$ of
queries. For any $k$-query locally decodable code $C: \mathbb{F}_q^{\,n} \to \mathbb{F}_q^{\,N}$,
the code length $N$ was conjectured to be exponential in $n$; however, this conjecture was
disproved. Yekhanin [In Proc. of STOC, 2007] showed that there exists a 3-query
locally decodable code such that $N = \exp\left(n^{O(1/\log\log n)}\right)$,
assuming that the number of Mersenne primes is
infinite. For a 3-query locally decodable code,
Efremenko [ECCC Report No. 69, 2008] reduced the code length further to
$N = \exp\left(\exp\left(O\left(\sqrt{\log n \log\log n}\right)\right)\right)$, and also showed that for any
integer $r > 1$, there exists a $2^r$-query locally decodable code such that $N = \exp\left(\exp\left(O\left((\log n)^{1/r}(\log\log n)^{1-1/r}\right)\right)\right)$. In this paper, we present a query-efficient locally decodable
code and show that for any integer $r > 1$, there exists a $3 \cdot 2^{r-2}$-query locally
decodable code such that
$N = \exp\left(\exp\left(O\left((\log n)^{1/r}(\log\log n)^{1-1/r}\right)\right)\right)$.
Comment: 13 pages, 1 figure, 2 tables
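A toy instance of the locally decodable code definition above is the 2-query Hadamard code: each message bit can be recovered from two random positions of a corrupted codeword. This sketch only illustrates the definition; it is not the subexponential-length construction of the paper (the Hadamard code has exponential length $N = 2^n$).

```python
# Toy 2-query binary LDC: the Hadamard code, length N = 2^n.
# Position a of the codeword holds the inner product <x, a> over F_2.
import random

def encode(x: int, n: int):
    return [bin(x & a).count("1") % 2 for a in range(1 << n)]

def decode_bit(y, i: int, n: int) -> int:
    """Recover x_i with 2 queries: <x,a> xor <x, a xor e_i> = x_i."""
    a = random.randrange(1 << n)
    return y[a] ^ y[a ^ (1 << i)]

random.seed(0)
n = 3
x = 0b101
y = encode(x, n)
y[6] ^= 1                      # corrupt one of the N = 8 positions
for i in range(n):
    # each single decode is wrong with probability at most 2/8 here,
    # so a majority vote over independent trials recovers the bit
    votes = sum(decode_bit(y, i, n) for _ in range(201))
    assert (votes > 100) == bool((x >> i) & 1)
```

Since both queried positions are individually uniform, a $\delta$ fraction of corruptions flips a single decode with probability at most $2\delta$, matching the $1/2 + \varepsilon$ recovery guarantee in the definition.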
Error-Correcting Data Structures
We study data structures in the presence of adversarial noise. We want to
encode a given object in a succinct data structure that enables us to
efficiently answer specific queries about the object, even if the data
structure has been corrupted by a constant fraction of errors. This new model
is the common generalization of (static) data structures and locally decodable
error-correcting codes. The main issue is the tradeoff between the space used
by the data structure and the time (number of probes) needed to answer a query
about the encoded object. We prove a number of upper and lower bounds on
various natural error-correcting data structure problems. In particular, we
show that the optimal length of error-correcting data structures for the
Membership problem (where we want to store subsets of size s from a universe of
size n) is closely related to the optimal length of locally decodable codes for
s-bit strings.
Comment: 15 pages LaTeX; an abridged version will appear in the Proceedings of the STACS 2009 conference
High rate locally-correctable and locally-testable codes with sub-polynomial query complexity
In this work, we construct the first locally-correctable codes (LCCs) and
locally-testable codes (LTCs) with constant rate, constant relative distance,
and sub-polynomial query complexity. Specifically, we show that there exist
binary LCCs and LTCs with block length $n$, constant rate (which can even be
taken arbitrarily close to 1), constant relative distance, and query complexity
$\exp(\tilde{O}(\sqrt{\log n}))$. Previously such codes were known to exist
only with query complexity $n^{\varepsilon}$ (for constant $\varepsilon > 0$), and
there were several, quite different, constructions known.
Our codes are based on a general distance-amplification method of Alon and
Luby~\cite{AL96_codes}. We show that this method interacts well with local
correctors and testers, and obtain our main results by applying it to suitably
constructed LCCs and LTCs in the non-standard regime of \emph{sub-constant
relative distance}.
Along the way, we also construct LCCs and LTCs over large alphabets, with the
same query complexity $\exp(\tilde{O}(\sqrt{\log n}))$, which additionally have
the property of approaching the Singleton bound: they have almost the
best-possible relationship between their rate and distance. This has the
surprising consequence that asking for a large-alphabet error-correcting code
to further be an LCC or LTC with query complexity $\exp(\tilde{O}(\sqrt{\log n}))$
does not require any sacrifice in terms of rate and distance! Such a
result was previously not known for any query complexity.
Our results on LCCs also immediately give locally-decodable codes (LDCs) with
the same parameters.
Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation
We construct efficient data structures that are resilient against a constant
fraction of adversarial noise. Our model requires that the decoder answers most
queries correctly with high probability and for the remaining queries, the
decoder with high probability either answers correctly or declares "don't
know." Furthermore, if there is no noise on the data structure, it answers all
queries correctly with high probability. Our model is the common generalization
of a model proposed recently by de Wolf and the notion of "relaxed locally
decodable codes" developed in the PCP literature.
We measure the efficiency of a data structure in terms of its length,
measured by the number of bits in its representation, and query-answering time,
measured by the number of bit-probes to the (possibly corrupted)
representation. In this work, we study two data structure problems: membership
and polynomial evaluation. We show that these two problems have constructions
that are simultaneously efficient and error-correcting.
Comment: An abridged version of this paper appears in STACS 2010
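The decoding model described above, in which a decoder either answers correctly or declares "don't know", can be illustrated with a deliberately naive scheme. This is only a sketch of the answer/"don't know" semantics, not the paper's construction, whose point is to achieve far better length and probe trade-offs than simple replication.

```python
# Naive illustration of the "answer or declare don't know" decoding model:
# each stored bit is replicated 3 times; a query reads all 3 copies and
# answers only if they are unanimous.  NOT the paper's construction.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def query(mem, i):
    copies = mem[3 * i : 3 * i + 3]
    return copies[0] if len(set(copies)) == 1 else "don't know"

data = [1, 0, 1, 1]
mem = encode(data)
assert all(query(mem, i) == data[i] for i in range(4))  # no noise: all correct
mem[3] ^= 1                                # corrupt one copy of bit 1
assert query(mem, 1) == "don't know"       # refuses rather than risk an error
assert query(mem, 0) == data[0]            # untouched bits still answered
```

Under this conservative rule a wrong answer requires two of the three copies of the same bit to be corrupted, so isolated errors produce "don't know" rather than incorrect answers, mirroring the guarantee in the model above.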