195 research outputs found
Brief Announcement: Relaxed Locally Correctable Codes in Computationally Bounded Channels
We study variants of locally decodable and locally correctable codes in computationally bounded, adversarial channels, under the assumption that collision-resistant hash functions exist, and with no public-key or private-key cryptographic setup. Specifically, we provide constructions of relaxed locally correctable and relaxed locally decodable codes over the binary alphabet, with constant information rate, and poly-logarithmic locality. Our constructions compare favorably with existing schemes built under much stronger cryptographic assumptions, and with their classical analogues in the computationally unbounded, Hamming channel. Our constructions crucially employ collision-resistant hash functions and local expander graphs, extending ideas from recent cryptographic constructions of memory-hard functions.
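The abstract does not spell out the mechanism, but a standard way collision-resistant hashing enables local checks is a Merkle tree over codeword blocks: a short digest commits to the whole word, and any single block can be authenticated with logarithmically many hash evaluations. A minimal, generic sketch (all names below are ours; SHA-256 stands in for an arbitrary collision-resistant hash, and this illustrates the primitive, not the paper's actual construction):

```python
# Illustrative sketch only: a Merkle tree over codeword blocks, so any one
# block can be checked against a short root digest with O(log n) hashes.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return levels of the tree; level 0 = leaf hashes, last level = [root].
    Assumes the number of blocks is a power of two."""
    level = [H(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def open_path(levels, index):
    """Authentication path for one leaf: the sibling hash at every level."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(root, block, index, path):
    """Recompute the root from one block and its path; accept iff it matches."""
    h = H(block)
    for sibling in path:
        h = H(h + sibling) if index % 2 == 0 else H(sibling + h)
        index //= 2
    return h == root

blocks = [bytes([i]) * 32 for i in range(8)]   # 8 toy codeword blocks
levels = build_tree(blocks)
root = levels[-1][0]
assert verify(root, blocks[5], 5, open_path(levels, 5))
```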
Relaxed Local Correctability from Local Testing
We cement the intuitive connection between relaxed local correctability and
local testing by presenting a concrete framework for building a relaxed locally
correctable code from any family of linear locally testable codes with
sufficiently high rate. When instantiated using the locally testable codes of
Dinur et al. (STOC 2022), this framework yields the first asymptotically good
relaxed locally correctable and decodable codes with polylogarithmic query
complexity, which finally closes the superpolynomial gap between query lower
and upper bounds. Our construction combines high-rate locally testable codes of
various sizes to produce a code that is locally testable at every scale: we can
gradually "zoom in" to any desired codeword index, and a local tester at each
step certifies that the next, smaller restriction of the input has low error.
Our codes asymptotically inherit the rate and distance of any locally
testable code used in the final step of the construction. Therefore, our
technique also yields nonexplicit relaxed locally correctable codes with
polylogarithmic query complexity that have rate and distance approaching the
Gilbert-Varshamov bound.
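A schematic sketch of the control flow that this "zoom in" description suggests; local_test, restrict, and base_correct are hypothetical stand-ins for the components the paper actually constructs, so this shows only the shape of the recursion, not the codes themselves:

```python
# Schematic only: gradually restrict the received word to smaller views
# containing the target index, running a few-query tester at each scale;
# a relaxed corrector may output None ("reject") if some tester fails.
def relaxed_correct(word, index, scales, local_test, restrict, base_correct):
    view = word
    for scale in scales:                            # from coarsest to finest
        if not local_test(view, scale):             # few queries at this scale
            return None                             # too much error: reject
        view, index = restrict(view, index, scale)  # smaller word containing index
    return base_correct(view, index)                # read off at the base scale
```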
Locally Decodable/Correctable Codes for Insertions and Deletions
Recent efforts in coding theory have focused on building codes for insertions and deletions, called insdel codes, with optimal trade-offs between their redundancy and their error-correction capabilities, as well as efficient encoding and decoding algorithms.
In many applications, polynomial running time may still be prohibitively expensive, which has motivated the study of codes with super-efficient decoding algorithms. These efforts have led to the well-studied notions of Locally Decodable Codes (LDCs) and Locally Correctable Codes (LCCs). Inspired by these notions, Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015) generalized Hamming LDCs to insertions and deletions. To the best of our knowledge, these are the only known results that study the analogues of Hamming LDCs in channels performing insertions and deletions.
Here we continue the study of insdel codes that admit local algorithms. Specifically, we reprove the results of Ostrovsky and Paskin-Cherniavsky for insdel LDCs using a different set of techniques. We also observe that the techniques extend to constructions of LCCs. In particular, we obtain insdel LDCs and LCCs from their Hamming LDC and LCC analogues, respectively. The rate and error-correction capability blow up only by a constant factor, while the query complexity blows up by a polylogarithmic factor in the block length.
Since insdel locally decodable/correctable codes are scarcely studied in the literature, we believe our results and techniques may lead to further research. In particular, we conjecture that constant-query insdel LDCs/LCCs do not exist.
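One way to see why the insdel setting is harder than the Hamming setting (and why local algorithms for it are scarce): a single deletion shifts every later coordinate, so a word at edit distance one or two from a codeword can disagree with it in nearly every position. A tiny illustration (the toy codeword below is ours, purely for demonstration):

```python
# A single deletion shifts all later coordinates: the received word here is
# at edit distance at most 2 from the codeword, yet disagrees with it in
# every position, which defeats decoders that query fixed positions.
codeword = "01" * 32                    # toy codeword of length 64
received = codeword[1:] + "0"           # delete the first symbol, pad at the end
mismatches = sum(a != b for a, b in zip(codeword, received))
print(mismatches)                       # 64: every position disagrees
```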
On list recovery of high-rate tensor codes
We continue the study of list recovery properties of high-rate tensor codes, initiated by Hemenway, Ron-Zewi, and Wootters (FOCS '17). In that work it was shown that the tensor product of an efficient (poly-time) high-rate globally list recoverable code is approximately locally list recoverable, as well as globally list recoverable in probabilistic near-linear time. This was used in turn to give the first capacity-achieving list decodable codes with (1) local list decoding algorithms, and with (2) probabilistic near-linear time global list decoding algorithms. This also yielded constant-rate codes approaching the Gilbert-Varshamov bound with probabilistic near-linear time global unique decoding algorithms. In the current work we obtain the following results:
1. The tensor product of an efficient (poly-time) high-rate globally list recoverable code is globally list recoverable in deterministic near-linear time. This yields in turn the first capacity-achieving list decodable codes with deterministic near-linear time global list decoding algorithms. It also gives constant-rate codes approaching the Gilbert-Varshamov bound with deterministic near-linear time global unique decoding algorithms.
2. If the base code is additionally locally correctable, then the tensor product is (genuinely) locally list recoverable. This yields in turn (non-explicit) constant-rate codes approaching the Gilbert-Varshamov bound that are locally correctable with query complexity and running time N^{o(1)}. This improves over prior work by Gopi et al. (SODA '17; IEEE Transactions on Information Theory '18) that only gave query complexity N^{ε} with rate that is exponentially small in 1/ε.
3. A nearly-tight combinatorial lower bound on output list size for list recovering high-rate tensor codes. This bound implies in turn a nearly-tight lower bound of N^{Ω(1/log log N)} on the product of query complexity and output list size for locally list recovering high-rate tensor codes.
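For readers unfamiliar with the central object: the tensor product of two linear codes consists of all matrices whose columns lie in one base code and whose rows lie in the other. A minimal sketch over GF(2), with a toy [7,4] Hamming code as the base code (the matrices and function names below are ours, chosen only for illustration, not the codes used in the paper):

```python
# Minimal sketch of a tensor product code over GF(2): a k1 x k2 message
# matrix M is encoded as G1^T · M · G2, so every column of the codeword lies
# in the code generated by G1 and every row in the code generated by G2.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])          # generator of a toy [7,4] code

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])          # its parity-check matrix

def tensor_encode(M, G1, G2):
    """Encode a k1 x k2 message matrix into an n1 x n2 tensor codeword."""
    return (G1.T @ M @ G2) % 2

rng = np.random.default_rng(0)
M = rng.integers(0, 2, size=(4, 4))
C = tensor_encode(M, G, G)                     # a 7 x 7 tensor codeword
assert not ((H @ C) % 2).any()                 # every column is a codeword
assert not ((C @ H.T) % 2).any()               # every row is a codeword
```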
Efficient Multi-Point Local Decoding of Reed-Muller Codes via Interleaved Codex
Reed-Muller codes are among the most important classes of locally correctable
codes. Currently local decoding of Reed-Muller codes is based on decoding on
lines or quadratic curves to recover a single coordinate. To recover multiple
coordinates simultaneously, the naive approach is to repeat the
single-coordinate local decoding once per coordinate. This can be more
expensive than necessary, i.e., it may require a higher query complexity than
needed. In this paper, we focus on Reed-Muller codes in the usual parameter
regime, namely, the total degree of
evaluation polynomials is , where is the code alphabet size
(in fact, can be as big as in our setting). By introducing a novel
variation of codex, i.e., interleaved codex (the concept of codex has been used
for arithmetic secret sharing \cite{C11,CCX12}), we are able to locally recover
an arbitrarily large number of coordinates of a Reed-Muller code
simultaneously at the cost of querying coordinates. It turns out that our
local decoding of Reed-Muller codes shows (perhaps surprisingly) that
accessing locations is in fact cheaper than repeating the procedure for
accessing a single location for times. Our estimate of the failure probability
is based on the error probability bound for -wise linearly independent
variables given in \cite{BR94}.
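The classical line-based single-coordinate decoder referred to above, sketched in its noiseless form: to recover f(a) for a polynomial f of total degree d over F_p, query d+1 points on a random line through a and interpolate the univariate restriction at t = 0. The field, degree, polynomial, and function names below are arbitrary choices of ours for illustration; against sparse corruption one would decode the restriction rather than interpolate it directly.

```python
# Sketch of line-based local decoding of a Reed-Muller codeword given as an
# evaluation oracle: restrict to a random line through the target point,
# interpolate the degree-<=D univariate restriction, evaluate at t = 0.
import random

P, D, M = 101, 3, 2                        # field size, total degree, #variables

def f(x):                                  # a toy degree-D polynomial in M vars
    return (x[0]**3 + 5 * x[0] * x[1] + 7) % P

def lagrange_at_zero(ts, vals):
    """Value at t = 0 of the unique degree <= len(ts)-1 polynomial over F_P
    passing through the points (ts[i], vals[i])."""
    total = 0
    for i, (ti, vi) in enumerate(zip(ts, vals)):
        num, den = 1, 1
        for j, tj in enumerate(ts):
            if i != j:
                num = num * (-tj) % P
                den = den * (ti - tj) % P
        total = (total + vi * num * pow(den, P - 2, P)) % P
    return total

def local_decode(oracle, a):
    """Recover oracle's value at point a using D+1 queries on a random line."""
    b = [random.randrange(P) for _ in range(M)]   # random direction
    ts = random.sample(range(1, P), D + 1)        # distinct nonzero parameters
    vals = [oracle([(ai + t * bi) % P for ai, bi in zip(a, b)]) for t in ts]
    return lagrange_at_zero(ts, vals)             # the restriction at t = 0

a = [4, 9]
assert local_decode(f, a) == f(a)          # noiseless oracle: always correct
```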
High rate locally-correctable and locally-testable codes with sub-polynomial query complexity
In this work, we construct the first locally-correctable codes (LCCs), and
locally-testable codes (LTCs) with constant rate, constant relative distance,
and sub-polynomial query complexity. Specifically, we show that there exist
binary LCCs and LTCs with block length , constant rate (which can even be
taken arbitrarily close to 1), constant relative distance, and query complexity
. Previously, such codes were known to exist only with query complexity (for
constant ), and several quite different constructions were known.
Our codes are based on a general distance-amplification method of Alon and
Luby~\cite{AL96_codes}. We show that this method interacts well with local
correctors and testers, and obtain our main results by applying it to suitably
constructed LCCs and LTCs in the non-standard regime of \emph{sub-constant
relative distance}.
Along the way, we also construct LCCs and LTCs over large alphabets, with the
same query complexity , which additionally have
the property of approaching the Singleton bound: they have almost the
best-possible relationship between their rate and distance. This has the
surprising consequence that asking for a large alphabet error-correcting code
to further be an LCC or LTC with query
complexity does not require any sacrifice in terms of rate and distance! Such a
result was previously not known for any query complexity.
Our results on LCCs also immediately give locally-decodable codes (LDCs) with
the same parameters.
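The last sentence reflects a folklore observation for linear codes: encode systematically, so that the message symbols appear verbatim among the codeword coordinates; then locally correcting those coordinates is exactly locally decoding the message. A minimal sketch (local_correct is a hypothetical corrector oracle; nothing here is specific to this paper):

```python
# Folklore LCC -> LDC reduction for a systematically encoded linear code:
# message symbols occupy positions 0..k-1 of the codeword, so decoding
# message symbol i reduces to locally correcting codeword position i.
def local_decode(received, i, local_correct, k):
    assert 0 <= i < k, "message coordinates sit in positions 0..k-1"
    return local_correct(received, i)
```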
Lower bounds for constant query affine-invariant LCCs and LTCs
Affine-invariant codes are codes whose coordinates form a vector space over a
finite field and which are invariant under affine transformations of the
coordinate space. They form a natural, well-studied class of codes; they
include popular codes such as Reed-Muller and Reed-Solomon. A particularly
appealing feature of affine-invariant codes is that they seem well-suited to
admit local correctors and testers.
In this work, we give lower bounds on the length of locally correctable and
locally testable affine-invariant codes with constant query complexity. We show
that if a code is an -query
locally correctable code (LCC), where is a finite field and
is a finite alphabet, then the number of codewords in is
at most . Also, we show that if
is an -query locally testable
code (LTC), then the number of codewords in is at most
. The dependence on in these
bounds is tight for constant-query LCCs/LTCs, since Guo, Kopparty and Sudan
(ITCS '13) construct affine-invariant codes via lifting that have the same
asymptotic tradeoffs. Note that our result holds for non-linear codes, whereas
previously, Ben-Sasson and Sudan (RANDOM '11) assumed linearity to derive
similar results.
Our analysis uses higher-order Fourier analysis. In particular, we show that
the codewords corresponding to an affine-invariant LCC/LTC must be far from
each other with respect to the Gowers norm of an appropriate order. This then
allows us to bound the number of codewords, using known decomposition theorems
which approximate any bounded function in terms of a finite number of
low-degree non-classical polynomials, up to a small error in the Gowers norm.
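For reference (general background, not a restatement of the paper's specific parameters), the order-k Gowers uniformity norm of a bounded function f : G → C on a finite abelian group G, with C denoting complex conjugation, is defined by:

```latex
\[
  \|f\|_{U^k}^{2^k}
    \;=\;
  \mathop{\mathbb{E}}_{x,\,h_1,\dots,h_k \in G}\;
  \prod_{S \subseteq [k]} \mathcal{C}^{|S|} f\!\Bigl(x + \sum_{i \in S} h_i\Bigr).
\]
```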
- …