List Decoding Tensor Products and Interleaved Codes
We design the first efficient algorithms and prove new combinatorial bounds
for list decoding tensor products of codes and interleaved codes. We show that
for {\em every} code, the ratio of its list decoding radius to its minimum
distance stays unchanged under the tensor product operation (rather than
squaring, as one might expect). This gives the first efficient list decoders
and new combinatorial bounds for some natural codes including multivariate
polynomials where the degree in each variable is bounded. We show that for {\em
every} code, its list decoding radius remains unchanged under $m$-wise
interleaving for an integer $m$. This generalizes a recent result of Dinur et
al.\ \cite{DGKS}, who proved such a result for interleaved Hadamard codes
(equivalently, linear transformations). Using the notion of generalized Hamming
weights, we give better list size bounds for {\em both} tensoring and
interleaving of binary linear codes. By analyzing the weight distribution of
these codes, we reduce the task of bounding the list size to bounding the
number of close-by low-rank codewords. For decoding linear transformations,
using rank-reduction together with other ideas, we obtain list size bounds that
are tight over small fields.
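The tensor product operation discussed above can be illustrated with a small sketch (illustrative only, not the paper's list-decoding algorithm): for linear codes, a generator matrix of the tensor product code is the Kronecker product of the component generator matrices, and minimum distances multiply, which is why one might naively expect the relative list-decoding radius to square as well.

```python
from itertools import product

def codewords(G):
    """All codewords generated by the rows of generator matrix G over GF(2)."""
    words = set()
    for msg in product([0, 1], repeat=len(G)):
        words.add(tuple(sum(m * g for m, g in zip(msg, col)) % 2
                        for col in zip(*G)))
    return words

def min_distance(words):
    """Minimum Hamming weight of a nonzero codeword (= minimum distance)."""
    return min(sum(w) for w in words if any(w))

def kron(A, B):
    """Kronecker product: a generator matrix of the tensor product code."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

# [3,2] single-parity-check code, minimum distance 2.
G = [[1, 0, 1],
     [0, 1, 1]]

assert min_distance(codewords(G)) == 2
# Under tensoring the minimum distance multiplies: 2 * 2 = 4.
assert min_distance(codewords(kron(G, G))) == 4
```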
Improving Distributed Gradient Descent Using Reed-Solomon Codes
Today's massively-sized datasets have made it necessary to often perform
computations on them in a distributed manner. In principle, a computational
task is divided into subtasks which are distributed over a cluster operated by
a taskmaster. One issue faced in practice is the delay incurred due to the
presence of slow machines, known as \emph{stragglers}. Several schemes,
including those based on replication, have been proposed in the literature to
mitigate the effects of stragglers and more recently, those inspired by coding
theory have begun to gain traction. In this work, we consider a distributed
gradient descent setting suitable for a wide class of machine learning
problems. We adapt the framework of Tandon et al. (arXiv:1612.03301) and
present a deterministic scheme that, for a prescribed per-machine computational
effort, recovers the gradient from the least number of machines
theoretically permissible, via a decoding algorithm. We also provide
a theoretical delay model which can be used to minimize the expected waiting
time per computation by optimally choosing the parameters of the scheme.
Finally, we supplement our theoretical findings with numerical results that
demonstrate the efficacy of the method and its advantages over competing
schemes.
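The erasure-recovery step behind such coded-computation schemes can be sketched as follows (a toy over a small prime field, not the paper's scheme, which handles real-valued gradients): the $k$ partial gradients become coefficients of a degree-$(k-1)$ polynomial, each machine returns one evaluation, and the full gradient, the sum of coefficients, equals the polynomial's value at $x = 1$, which Lagrange interpolation recovers from any $k$ non-straggling machines.

```python
P = 257  # toy prime modulus; real gradient coding works over the reals

def encode(partials, n):
    """Machine at point x returns f(x), where f's coefficients are the
    k partial gradients. Points 2..n+1 keep x = 1 free for recovery."""
    return [(x, sum(g * pow(x, i, P) for i, g in enumerate(partials)) % P)
            for x in range(2, n + 2)]

def recover_total(points):
    """Lagrange-interpolate f at x = 1 from any k surviving (x, y) pairs;
    f(1) is the sum of coefficients, i.e. the full gradient."""
    total = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for l, (xl, _) in enumerate(points):
            if l != j:
                num = num * (1 - xl) % P
                den = den * (xj - xl) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat).
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

partials = [5, 11, 7]                          # k = 3 partial gradients
shares = encode(partials, 5)                   # n = 5 machines
survivors = [shares[0], shares[2], shares[4]]  # two stragglers dropped
assert recover_total(survivors) == sum(partials) % P  # recovers 23
```

Any $k$ of the $n$ evaluations determine the degree-$(k-1)$ polynomial exactly, so the scheme tolerates $n - k$ stragglers.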
Reed-Muller codes for random erasures and errors
This paper studies the parameters for which Reed-Muller (RM) codes over
$\mathbb{F}_2$ can correct random erasures and random errors with high
probability, and in particular when they can achieve capacity for these two classical
channels. Necessarily, the paper also studies properties of evaluations of
multi-variate polynomials on random sets of inputs.
For erasures, we prove that RM codes achieve capacity both for very high rate
and very low rate regimes. For errors, we prove that RM codes achieve capacity
for very low rate regimes, and for very high rates, we show that they can
uniquely decode at about square root of the number of errors at capacity.
The proofs of these four results are based on different techniques, which we
find interesting in their own right. In particular, we study the following
questions about $M_{m,r}$, the matrix whose rows are truth tables of all
monomials of degree at most $r$ in $m$ variables. What is the most (resp. least)
number of random columns in $M_{m,r}$ that define a submatrix having full column
rank (resp. full row rank) with high probability? We obtain tight bounds for
very small (resp. very large) degrees $r$, which we use to show that RM codes
achieve capacity for erasures in these regimes.
Our decoding from random errors follows from the following novel reduction.
For every linear code $C$ of sufficiently high rate we construct a new code
$C'$, also of very high rate, such that for every subset $S$ of coordinates, if
$C$ can recover from erasures in $S$, then $C'$ can recover from errors in $S$.
Specializing this to RM codes and using our results for erasures implies our
result on unique decoding of RM codes at high rate.
Finally, two of our capacity achieving results require tight bounds on the
weight distribution of RM codes. We obtain such bounds extending the recent
\cite{KLP} bounds from constant-degree to linear-degree polynomials.
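The matrix of monomial truth tables and the erasure question can be made concrete in a short sketch (a toy illustration under assumed parameters, not the paper's analysis): erasure decoding from a surviving coordinate set succeeds exactly when the surviving columns of the generator matrix retain full row rank.

```python
from itertools import combinations, product

def rm_generator(m, r):
    """Rows: truth tables of all monomials of degree <= r in m variables."""
    points = list(product([0, 1], repeat=m))
    rows = []
    for d in range(r + 1):
        for mono in combinations(range(m), d):
            # Monomial value at a point: product of the selected coordinates
            # (the empty monomial, d = 0, is the all-ones row).
            rows.append([int(all(p[i] for i in mono)) for p in points])
    return rows

def row_rank_gf2(rows):
    """Row rank over GF(2), maintaining a basis keyed by leading bit."""
    basis = {}
    for row in rows:
        v = int("".join(map(str, row)), 2)
        while v:
            lead = v.bit_length() - 1
            if lead in basis:
                v ^= basis[lead]
            else:
                basis[lead] = v
                break
    return len(basis)

# RM code with m = 4, r = 1: 5 monomials, block length 16, distance 8.
G = rm_generator(4, 1)
k, n = len(G), len(G[0])
assert (k, n) == (5, 16)

# Since the minimum distance is 8, any 7 or fewer erasures leave surviving
# columns of full row rank, so the message is recoverable.
erased = {0, 3, 5, 8, 12, 15}
survivors = [c for c in range(n) if c not in erased]
G_S = [[row[c] for c in survivors] for row in G]
assert row_rank_gf2(G_S) == k
```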
LEDAkem: a post-quantum key encapsulation mechanism based on QC-LDPC codes
This work presents a new code-based key encapsulation mechanism (KEM) called
LEDAkem. It is built on the Niederreiter cryptosystem and relies on
quasi-cyclic low-density parity-check codes as secret codes, providing high
decoding speeds and compact keypairs. LEDAkem uses ephemeral keys to foil known
statistical attacks, and takes advantage of a new decoding algorithm that
provides faster decoding than the classical bit-flipping decoder commonly
adopted in this kind of system. The main attacks against LEDAkem are
investigated, taking into account quantum speedups. Some instances of LEDAkem
are designed to achieve different security levels against classical and quantum
computers. Some performance figures obtained through an efficient C99
implementation of LEDAkem are provided.
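The classical bit-flipping decoder that the abstract's new algorithm improves upon can be sketched as follows (a generic textbook variant on a tiny Hamming code, standing in for the large sparse QC-LDPC matrices LEDAkem actually uses; not LEDAkem's decoder).

```python
def bit_flip_decode(H, word, max_iters=20):
    """Iteratively flip the bits involved in the most unsatisfied parity
    checks until the syndrome is zero or the iteration cap is reached."""
    word = list(word)
    n = len(word)
    for _ in range(max_iters):
        syndrome = [sum(h[j] * word[j] for j in range(n)) % 2 for h in H]
        if not any(syndrome):
            return word  # all parity checks satisfied
        # Per bit, count how many unsatisfied checks it participates in.
        counts = [sum(s for h, s in zip(H, syndrome) if h[j])
                  for j in range(n)]
        worst = max(counts)
        for j in range(n):
            if counts[j] == worst:
                word[j] ^= 1
    return None  # failed to converge

# Parity-check matrix of the [7,4] Hamming code (a dense toy stand-in).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

codeword = [0, 0, 0, 0, 0, 0, 0]
received = codeword[:]
received[2] ^= 1  # inject a single bit error
assert bit_flip_decode(H, received) == codeword
```

For the sparse, quasi-cyclic matrices of an actual QC-LDPC scheme the same loop runs check-by-check over a few nonzero positions per row, which is what makes decoding fast and the keys compact.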