List Decoding of Locally Repairable Codes
We show that locally repairable codes (LRCs) can be list decoded efficiently
beyond the Johnson radius for a large range of parameters by utilizing the
local error correction capabilities. The new decoding radius is derived and the
asymptotic behavior is analyzed. We give a general list decoding algorithm for
LRCs that achieves this radius along with an explicit realization for a class
of LRCs based on Reed-Solomon codes (Tamo-Barg LRCs). Further, a low-complexity
probabilistic algorithm for unique decoding is given and its success
probability is analyzed.
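For orientation, a standard fact the abstract relies on (not a result of the paper): every code of relative minimum distance $\delta$ is list-decodable with polynomial list sizes up to the (large-alphabet) Johnson radius

\[
\rho_J(\delta) \;=\; 1 - \sqrt{1 - \delta},
\]

while beyond it no generic guarantee holds; it is this barrier that the locality of an LRC lets the decoder cross.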
On the List Recoverability of Randomly Punctured Codes
We show that a random puncturing of a code with good distance is list recoverable beyond the Johnson bound. In particular, this implies that there are Reed-Solomon codes that are list recoverable beyond the Johnson bound. It was previously known that there are Reed-Solomon codes that do not have this property. As an immediate corollary to our main theorem, we obtain better degree bounds on unbalanced expanders that come from Reed-Solomon codes.
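For reference, the standard definition behind the statement (our phrasing, not spelled out in the abstract): a code $C \subseteq \Sigma^n$ is $(\rho, \ell, L)$-list-recoverable if for every family of input lists $S_1, \dots, S_n \subseteq \Sigma$ with $|S_i| \le \ell$,

\[
\bigl|\{\, c \in C : |\{\, i : c_i \notin S_i \,\}| \le \rho n \,\}\bigr| \;\le\; L,
\]

so list decoding is the special case $\ell = 1$.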
On the List-Decodability of Random Linear Rank-Metric Codes
The list-decodability of random linear rank-metric codes is shown to match
that of random rank-metric codes. Specifically, an $\mathbb{F}_q$-linear
rank-metric code over $\mathbb{F}_q^{m \times n}$ of rate $R = (1-\rho)(1 - \frac{n}{m}\rho) - \varepsilon$ is shown to be (with high probability)
list-decodable up to fractional radius $\rho \in (0,1)$ with lists of size at
most $\frac{C_{\rho,q}}{\varepsilon}$, where $C_{\rho,q}$ is a constant
depending only on $\rho$ and $q$. This matches the bound for random rank-metric
codes (up to constant factors). The proof adapts the approach of Guruswami,
Håstad, Kopparty (STOC 2010), who established a similar result for the
Hamming metric case, to the rank-metric setting.
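For context, the underlying metric (standard, not restated in the abstract): codewords are matrices, and the distance between two of them is the rank of their difference,

\[
d_R(X, Y) \;=\; \operatorname{rank}(X - Y), \qquad X, Y \in \mathbb{F}_q^{m \times n},
\]

so, under the usual convention $n \le m$, decoding up to fractional radius $\rho$ means correcting error matrices of rank at most $\rho n$.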
It'll probably work out: improved list-decoding through random operations
In this work, we introduce a framework to study the effect of random
operations on the combinatorial list-decodability of a code. The operations we
consider correspond to row and column operations on the matrix obtained from
the code by stacking the codewords together as columns. This captures many
natural transformations on codes, such as puncturing, folding, and taking
subcodes; we show that many such operations can improve the list-decoding
properties of a code. There are two main points to this. First, our goal is to
advance our (combinatorial) understanding of list-decodability, by
understanding what structure (or lack thereof) is necessary to obtain it.
Second, we use our more general results to obtain a few interesting corollaries
for list decoding (a toy sketch of the matrix operations appears after the list):
(1) We show the existence of binary codes that are combinatorially
list-decodable from a $1/2 - \varepsilon$ fraction of errors with optimal rate
$\Omega(\varepsilon^2)$ that can be encoded in linear time.
(2) We show that any code with $\Omega(1)$ relative distance, when randomly
folded, is combinatorially list-decodable from a $1 - \varepsilon$ fraction of errors with
high probability. This formalizes the intuition for why the folding operation
has been successful in obtaining codes with optimal list decoding parameters;
previously, all arguments used algebraic methods and worked only with specific
codes.
(3) We show that any code which is list-decodable with suboptimal list sizes
has many subcodes which have near-optimal list sizes, while retaining the error
correcting capabilities of the original code. This generalizes recent results
where subspace evasive sets have been used to reduce list sizes of codes that
achieve list decoding capacity.
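As a toy rendering of the matrix viewpoint described above (all names and parameters below are ours, purely illustrative, and not from the paper), the sketch stacks the codewords of a small random binary linear code as columns and applies random puncturing and random folding as row operations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy code: all 2^k codewords of a random binary linear [n, k] code,
# stacked as the COLUMNS of an n x |C| matrix (the viewpoint above).
n, k, s = 8, 4, 2  # length, dimension, folding bundle size (illustrative)
G = rng.integers(0, 2, size=(k, n))                    # random generator matrix
msgs = (np.arange(2**k)[:, None] >> np.arange(k)) & 1  # all 2^k messages
M = ((msgs @ G) % 2).T                                 # codeword matrix, (n, 2^k)

# Random puncturing: keep a uniformly random subset of coordinates (rows).
keep = np.sort(rng.permutation(n)[: n // 2])
M_punct = M[keep, :]                                   # (n/2, 2^k)

# Random folding: randomly permute coordinates, then bundle s consecutive
# rows into one symbol over the larger alphabet Sigma^s.
perm = rng.permutation(n)
M_fold = M[perm, :].reshape(n // s, s, -1)             # (n/s, s, 2^k)

print(M_punct.shape, M_fold.shape)                     # (4, 16) (4, 2, 16)
```

Taking a subcode corresponds to keeping a subset of the columns; in each case the operation acts on the codeword matrix, which is what makes a uniform analysis of its effect on list-decodability possible.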
Randomly punctured Reed--Solomon codes achieve list-decoding capacity over linear-sized fields
Reed--Solomon codes are a classic family of error-correcting codes consisting
of evaluations of low-degree polynomials over a finite field on some sequence
of distinct field elements. They are widely known for their optimal
unique-decoding capabilities, but their list-decoding capabilities are not
fully understood. Given the prevalence of Reed--Solomon codes, a fundamental
question in coding theory is determining if Reed--Solomon codes can optimally
achieve list-decoding capacity.
A recent breakthrough by Brakensiek, Gopi, and Makam established that
Reed--Solomon codes are combinatorially list-decodable all the way to capacity.
However, their results hold for randomly punctured Reed--Solomon codes over an
exponentially large field size $2^{O(n)}$, where $n$ is the block length of the
code. A natural question is whether Reed--Solomon codes can still achieve
capacity over smaller fields. Recently, Guo and Zhang showed that Reed--Solomon
codes are list-decodable to capacity with field size $O(n^2)$. We show that
Reed--Solomon codes are list-decodable to capacity with linear field size
$O(n)$, which is optimal up to the constant factor. We also give evidence that
the ratio between the alphabet size $q$ and code length $n$ cannot be bounded
by an absolute constant.
Our proof is based on the proof of Guo and Zhang, and additionally exploits
symmetries of reduced intersection matrices. With our proof, which maintains a
hypergraph perspective of the list-decoding problem, we include an alternate
presentation of ideas of Brakensiek, Gopi, and Makam that more directly
connects the list-decoding problem to the GM-MDS theorem via a hypergraph
orientation theorem.
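For orientation, the standard notions the abstract relies on (our phrasing): a code $C \subseteq \Sigma^n$ is $(\rho, L)$-list-decodable if every Hamming ball of radius $\rho n$ contains at most $L$ codewords, and a family of codes of rate $R$ achieves list-decoding capacity if, for every $\varepsilon > 0$, it is list-decodable with

\[
\rho \;=\; 1 - R - \varepsilon \quad \text{and list size} \quad L = O(1/\varepsilon),
\]

which is essentially optimal, since beyond radius $1 - R$ no code of rate $R$ admits bounded list sizes.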
On Generalization Bounds for Projective Clustering
Given a set of points, clustering consists of finding a partition of a point
set into $k$ clusters such that the center to which a point is assigned is as
close as possible. Most commonly, centers are points themselves, which leads to
the famous $k$-median and $k$-means objectives. One may also choose centers to
be $j$-dimensional subspaces, which gives rise to subspace clustering. In this
paper, we consider learning bounds for these problems. That is, given a set of
$n$ samples $P$ drawn independently from some unknown, but fixed distribution
$\mathcal{D}$, how quickly does a solution computed on $P$ converge to the
optimal clustering of $\mathcal{D}$? We give several near-optimal results. In
particular,
For center-based objectives, we show a convergence rate of
$\tilde{O}\bigl(\sqrt{k/n}\bigr)$. This matches the known optimal bounds
of [Fefferman, Mitter, and Narayanan, Journal of the American Mathematical
Society 2016] and [Bartlett, Linder, and Lugosi, IEEE Trans. Inf. Theory 1998]
for $k$-means and extends it to other important objectives such as $k$-median.
For subspace clustering with $j$-dimensional subspaces, we show a convergence
rate of $\tilde{O}\bigl(\sqrt{kj^2/n}\bigr)$. These are the first
provable bounds for most of these problems. For the specific case of projective
clustering, which generalizes $k$-means, we show a convergence rate of
$\Omega\bigl(\sqrt{kj/n}\bigr)$ is necessary, thereby proving that the
bounds from [Fefferman, Mitter, and Narayanan, Journal of the American
Mathematical Society 2016] are essentially optimal.
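To make the rates concrete (our notation; the usual normalization, e.g. a distribution supported on the unit ball, is implicit): writing the true and empirical costs of a set $C$ of $k$ centers as

\[
\mathrm{cost}_{\mathcal{D}}(C) = \mathbb{E}_{x \sim \mathcal{D}}\Bigl[\min_{c \in C} \operatorname{dist}(x, c)^z\Bigr], \qquad \mathrm{cost}_{P}(C) = \frac{1}{n} \sum_{x \in P} \min_{c \in C} \operatorname{dist}(x, c)^z,
\]

with $z = 2$ for $k$-means, $z = 1$ for $k$-median, and centers replaced by $j$-dimensional subspaces for subspace clustering, a convergence rate of $r(n)$ means that $\sup_C |\mathrm{cost}_P(C) - \mathrm{cost}_{\mathcal{D}}(C)| \le r(n)$ with high probability; consequently, any solution that is optimal on the sample $P$ is within an additive $2\,r(n)$ of optimal for $\mathcal{D}$.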