A Storage-Efficient and Robust Private Information Retrieval Scheme Allowing Few Servers
Since the concept of locally decodable codes was introduced by Katz and
Trevisan in 2000, it has been well known that information-theoretically secure
private information retrieval (PIR) schemes can be built using locally
decodable codes. In this paper, we construct a Byzantine-robust PIR scheme
using the multiplicity codes introduced by Kopparty et al. Our main
contributions are, on the one hand, to avoid full replication of the database
on each server, which significantly reduces the global redundancy; and, on the
other hand, to achieve a much lower locality in the PIR context than in the
LDC context. This shows that there exist two distinct notions: LDC-locality
and PIR-locality. This is made possible by exploiting geometric properties of
multiplicity codes.
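The multi-server PIR setting the abstract builds on can be illustrated with the classic 2-server XOR scheme (a toy information-theoretic protocol, not the storage-efficient construction of this paper): the client sends a random index subset to one server and the same subset with the target index flipped to the other, so each server individually sees a uniformly random query.

```python
import secrets

def pir_query(db, i):
    """Toy 2-server information-theoretic PIR over a bit database.

    The client sends a random subset mask s1 to server 1 and the same
    mask with position i flipped to server 2; XORing the two one-bit
    answers recovers db[i], while each server alone sees a uniformly
    random mask and learns nothing about i."""
    n = len(db)
    s1 = [secrets.randbelow(2) for _ in range(n)]  # random subset mask
    s2 = list(s1)
    s2[i] ^= 1  # flip index i for the second server

    def server_answer(mask):
        # Each server XORs together the database bits its mask selects.
        acc = 0
        for bit, sel in zip(db, mask):
            acc ^= bit & sel
        return acc

    return server_answer(s1) ^ server_answer(s2)

db = [1, 0, 1, 1, 0, 0, 1, 0]
assert all(pir_query(db, i) == db[i] for i in range(len(db)))
```

Note the cost that motivates code-based PIR: this scheme fully replicates the database on both servers and sends an n-bit query, whereas LDC-based schemes trade these off via the code's locality.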
Some remarks on multiplicity codes
Multiplicity codes are algebraic error-correcting codes generalizing
classical polynomial evaluation codes, and are based on evaluating polynomials
and their derivatives. This small augmentation confers upon them better local
decoding, list-decoding and local list-decoding algorithms than their classical
counterparts. We survey what is known about these codes, present some
variations and improvements, and finally list some interesting open problems.
Comment: 21 pages, in Discrete Geometry and Algebraic Combinatorics, AMS
Contemporary Mathematics Series, 201
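The "evaluating polynomials and their derivatives" idea can be made concrete with a minimal sketch of the univariate, multiplicity-2 case over a small prime field (an illustration of the encoding map only, not of the decoding algorithms the survey discusses):

```python
p = 13  # a small prime; arithmetic is over the field F_p

def encode_mult2(coeffs):
    """Multiplicity-2 encoding over F_p: the codeword symbol at each
    field point a is the pair (f(a), f'(a)), where coeffs lists the
    polynomial's coefficients from lowest degree to highest."""
    def f(a):
        return sum(c * pow(a, k, p) for k, c in enumerate(coeffs)) % p
    def df(a):
        # formal derivative: d/dx of c*x^k is k*c*x^(k-1)
        return sum(k * c * pow(a, k - 1, p)
                   for k, c in enumerate(coeffs) if k) % p
    return [(f(a), df(a)) for a in range(p)]

cw = encode_mult2([3, 1, 4])   # f(x) = 3 + x + 4x^2
assert cw[0] == (3, 1)         # f(0) = 3, f'(0) = 1
```

Because each position carries two field elements instead of one, polynomials of degree up to roughly twice the field size can be encoded, which is the "small augmentation" behind the improved rate/locality trade-offs.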
Linear-time list recovery of high-rate expander codes
We show that expander codes, when properly instantiated, are high-rate list
recoverable codes with linear-time list recovery algorithms. List recoverable
codes have been useful recently in constructing efficiently list-decodable
codes, as well as explicit constructions of matrices for compressive sensing
and group testing. Previous list recoverable codes with linear-time decoding
algorithms have all had rate at most 1/2; in contrast, our codes can have rate
$1 - \epsilon$ for any $\epsilon > 0$. We can plug our high-rate codes into a
construction of Meir (2014) to obtain linear-time list recoverable codes of
arbitrary rates, which approach the optimal trade-off between the number of
non-trivial lists provided and the rate of the code. While list-recovery is
interesting on its own, our primary motivation is applications to
list-decoding. A slight strengthening of our result would imply linear-time
and optimally list-decodable codes for all rates, and our work is a step in
the direction of solving this important problem.
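The list-recovery task itself is easy to state with a brute-force sketch (exponentially slower than the linear-time algorithms of the paper, and using a toy Reed-Solomon-style code chosen here for illustration): given a small candidate list of symbols at each position, return every codeword consistent with almost all of the lists.

```python
def list_recover(codewords, lists, max_mismatch=0):
    """Brute-force list recovery: return every codeword whose i-th
    symbol lies in lists[i] for all but at most max_mismatch
    positions. List-recoverable codes achieve this efficiently,
    without enumerating the whole code as done here."""
    out = []
    for cw in codewords:
        misses = sum(1 for sym, s in zip(cw, lists) if sym not in s)
        if misses <= max_mismatch:
            out.append(cw)
    return out

# toy code: evaluations of all degree-<2 polynomials a + b*x over F_5
p = 5
codewords = [tuple((a + b * x) % p for x in range(p))
             for a in range(p) for b in range(p)]
# candidate list at position x is {x, x+1 mod 5}
lists = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {4, 0}]
print(list_recover(codewords, lists))
# → [(0, 1, 2, 3, 4), (1, 2, 3, 4, 0)]
```

List decoding is the special case where every list is the singleton of the received symbol and `max_mismatch` is the error budget, which is why list recovery is a natural stepping stone toward list-decodable codes.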
High rate locally-correctable and locally-testable codes with sub-polynomial query complexity
In this work, we construct the first locally-correctable codes (LCCs), and
locally-testable codes (LTCs) with constant rate, constant relative distance,
and sub-polynomial query complexity. Specifically, we show that there exist
binary LCCs and LTCs with block length $n$, constant rate (which can even be
taken arbitrarily close to 1), constant relative distance, and query
complexity $\exp(\tilde{O}(\sqrt{\log n}))$. Previously such codes were known
to exist only with query complexity $n^{\epsilon}$ (for constant
$\epsilon > 0$), and
there were several, quite different, constructions known.
Our codes are based on a general distance-amplification method of Alon and
Luby~\cite{AL96_codes}. We show that this method interacts well with local
correctors and testers, and obtain our main results by applying it to suitably
constructed LCCs and LTCs in the non-standard regime of \emph{sub-constant
relative distance}.
Along the way, we also construct LCCs and LTCs over large alphabets, with the
same query complexity $\exp(\tilde{O}(\sqrt{\log n}))$, which additionally
have the property of approaching the Singleton bound: they have almost the
best-possible relationship between their rate and distance. This has the
surprising consequence that asking for a large alphabet error-correcting code
to further be an LCC or LTC with query complexity
$\exp(\tilde{O}(\sqrt{\log n}))$ does not require any sacrifice in terms of
rate and distance! Such a result was previously not known for any query
complexity.
Our results on LCCs also immediately give locally-decodable codes (LDCs) with
the same parameters.
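What "locally correctable" means can be seen in the classic 2-query corrector for the Hadamard code (a low-rate textbook example, not the constant-rate construction of this paper): any position of a mildly corrupted codeword is recovered by reading just two random positions and XORing them.

```python
import random

def bin_dot(msg_bits, x):
    """Inner product mod 2 of the message with the bits of index x."""
    return sum(b for j, b in enumerate(msg_bits) if (x >> j) & 1) % 2

def hadamard_encode(msg_bits):
    """Hadamard code: position x of the codeword holds <msg, x> mod 2."""
    return [bin_dot(msg_bits, x) for x in range(2 ** len(msg_bits))]

def local_correct(word, a, k):
    """2-query local correction: read positions r and a^r for random r.
    Their XOR equals the true symbol at a whenever both queried
    positions are uncorrupted, which holds w.h.p. at low error rates."""
    r = random.randrange(2 ** k)
    return word[r] ^ word[a ^ r]

msg = [1, 0, 1]
word = hadamard_encode(msg)
word[5] ^= 1                    # corrupt one of the 8 positions
random.seed(1)                  # fixed seed so the demo is reproducible
votes = [local_correct(word, 5, 3) for _ in range(101)]
recovered = max(set(votes), key=votes.count)   # majority vote
assert recovered == bin_dot(msg, 5)            # corruption is corrected
```

The Hadamard code has exponentially small rate; the point of the abstract above is achieving such local correction at constant rate with only $\exp(\tilde{O}(\sqrt{\log n}))$ queries.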
Complexity Theory
Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.