Some remarks on multiplicity codes
Multiplicity codes are algebraic error-correcting codes generalizing
classical polynomial evaluation codes, and are based on evaluating polynomials
and their derivatives. This small augmentation confers upon them better local
decoding, list-decoding and local list-decoding algorithms than their classical
counterparts. We survey what is known about these codes, present some
variations and improvements, and finally list some interesting open problems.
Comment: 21 pages, in Discrete Geometry and Algebraic Combinatorics, AMS Contemporary Mathematics Series, 201
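To make the evaluation-plus-derivatives idea concrete, here is a minimal sketch (not from the survey itself) of a univariate multiplicity code of order 2 over a prime field: each codeword position holds the value of the message polynomial and of its formal derivative. For higher multiplicities or small characteristic one would use Hasse derivatives; the field size and example polynomial below are arbitrary illustrative choices.

    # Sketch of a univariate multiplicity code of order 2 over a prime field F_p.
    # Each codeword symbol at a point a is the pair (f(a), f'(a)), where f' is the
    # formal derivative.  Illustrative parameters only.

    p = 13  # field size (prime, so formal derivatives behave classically)

    def poly_eval(coeffs, x):
        """Evaluate a polynomial (coefficients listed low degree first) at x mod p."""
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc

    def formal_derivative(coeffs):
        """Formal derivative of the coefficient list, mod p."""
        return [(i * c) % p for i, c in enumerate(coeffs)][1:]

    def encode_multiplicity(coeffs):
        """Codeword: at every a in F_p, record the pair (f(a), f'(a))."""
        d = formal_derivative(coeffs)
        return [(poly_eval(coeffs, a), poly_eval(d, a)) for a in range(p)]

    # Example message polynomial f(x) = 3 + 2x + x^3; distinct low-degree polynomials
    # give codewords agreeing in few positions, since agreement of both value and
    # derivative at a point counts as a root of multiplicity 2 of the difference.
    codeword = encode_multiplicity([3, 2, 0, 1])
    print(codeword[:4])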
Decoding Reed-Muller codes over product sets
We give a polynomial time algorithm to decode multivariate polynomial codes
of degree $d$ up to half their minimum distance, when the evaluation points are
an arbitrary product set $S^m$, for every $d < |S|$. Previously known
algorithms could achieve this only if the set $S$ has some very special algebraic
structure, or if the degree $d$ is significantly smaller than $|S|$. We also
give a near-linear time randomized algorithm, which is based on tools from
list-decoding, to decode these codes from nearly half their minimum distance,
provided $d < (1-\epsilon)|S|$ for constant $\epsilon > 0$.
Our result gives an $m$-dimensional generalization of the well-known decoding
algorithms for Reed-Solomon codes, and can be viewed as giving an algorithmic
version of the Schwartz-Zippel lemma.
Comment: 25 pages, 0 figures
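Since the result is phrased as an algorithmic Schwartz-Zippel lemma, a quick numerical illustration of the lemma itself may help: a nonzero $m$-variate polynomial of total degree $d$ vanishes on at most a $d/|S|$ fraction of any product set $S^m$. The sketch below, with an arbitrarily chosen polynomial and set, simply counts zeros and compares against that bound.

    # Numerical illustration of the Schwartz-Zippel lemma: a nonzero m-variate
    # polynomial of total degree d vanishes on at most a d/|S| fraction of S^m.
    # The polynomial and the set S below are arbitrary illustrative choices.

    from itertools import product

    p = 11                      # work over the prime field F_11
    S = [0, 1, 2, 3, 4, 5]      # an arbitrary subset of the field, |S| = 6

    def f(x, y):                # f(x, y) = x*y^2 + 3x + 1, total degree d = 3
        return (x * y * y + 3 * x + 1) % p

    zeros = sum(1 for x, y in product(S, repeat=2) if f(x, y) == 0)
    bound = 3 / len(S) * len(S) ** 2   # d/|S| fraction of the |S|^2 grid points
    print(zeros, "<=", bound)          # the count of zeros never exceeds the bound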
List Decoding Tensor Products and Interleaved Codes
We design the first efficient algorithms and prove new combinatorial bounds
for list decoding tensor products of codes and interleaved codes. We show that
for {\em every} code, the ratio of its list decoding radius to its minimum
distance stays unchanged under the tensor product operation (rather than
squaring, as one might expect). This gives the first efficient list decoders
and new combinatorial bounds for some natural codes including multivariate
polynomials where the degree in each variable is bounded. We show that for {\em
every} code, its list decoding radius remains unchanged under $m$-wise
interleaving for an integer $m$. This generalizes a recent result of Dinur et
al \cite{DGKS}, who proved such a result for interleaved Hadamard codes
(equivalently, linear transformations). Using the notion of generalized Hamming
weights, we give better list size bounds for {\em both} tensoring and
interleaving of binary linear codes. By analyzing the weight distribution of
these codes, we reduce the task of bounding the list size to bounding the
number of close-by low-rank codewords. For decoding linear transformations,
using rank-reduction together with other ideas, we obtain list size bounds that
are tight over small fields.
Comment: 32 pages
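As a reminder of the tensor product operation the abstract refers to, here is a toy encoder (not the paper's decoder) for the tensor of the [3,2] binary even-parity code with itself: message rows are encoded first, then the columns, and every row and column of the result is a component codeword.

    # Toy tensor-product encoder: a codeword of C1 (x) C2 is a matrix whose rows
    # lie in C2 and whose columns lie in C1.  Here both components are the [3,2]
    # binary even-parity code.  Illustrative only; not the decoder from the paper.

    import numpy as np

    def parity_encode(msg_bits):
        """[3,2] even-parity code: append the XOR of the two message bits."""
        return np.append(msg_bits, msg_bits.sum() % 2)

    def tensor_encode(message):
        """Encode a 2x2 binary message into a 3x3 tensor-product codeword."""
        rows = np.array([parity_encode(r) for r in message]) % 2       # encode rows with C2
        return np.array([parity_encode(c) for c in rows.T]).T % 2      # then columns with C1

    msg = np.array([[1, 0], [1, 1]])
    cw = tensor_encode(msg)
    print(cw)
    # Every row and column of cw has even parity; the minimum distance multiplies
    # (2 * 2 = 4 for this 9-symbol code), whereas the abstract shows the ratio of
    # list-decoding radius to minimum distance does not degrade under tensoring.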
Low-degree tests at large distances
We define tests of boolean functions which distinguish between linear (or
quadratic) polynomials, and functions which are very far, in an appropriate
sense, from these polynomials. The tests have optimal or nearly optimal
trade-offs between soundness and the number of queries.
In particular, we show that functions with small Gowers uniformity norms
behave ``randomly'' with respect to hypergraph linearity tests.
A central step in our analysis of quadraticity tests is the proof of an
inverse theorem for the third Gowers uniformity norm of boolean functions.
The last result also has a coding-theory application: it is possible to
efficiently estimate the distance from the second-order Reed-Muller code on
inputs lying far beyond its list-decoding radius.
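For orientation only, the sketch below implements the classic 3-query BLR linearity test that the hypergraph tests in the paper refine; it simply estimates a function's rejection probability, and the example functions are arbitrary choices rather than anything from the paper.

    # Sketch of the basic 3-query linearity (BLR) test on boolean functions
    # f: {0,1}^n -> {0,1}, estimating the probability that a random triple
    # (x, y, x XOR y) is rejected.  Example functions are illustrative only.

    import random

    def blr_reject_rate(f, n, trials=10000):
        """Fraction of random (x, y) with f(x) XOR f(y) != f(x XOR y)."""
        rejects = 0
        for _ in range(trials):
            x = random.getrandbits(n)
            y = random.getrandbits(n)
            if f(x) ^ f(y) != f(x ^ y):
                rejects += 1
        return rejects / trials

    linear = lambda x: bin(x & 0b1011).count('1') % 2       # parity of a fixed bit subset: never rejected
    noisy = lambda x: linear(x) ^ (random.random() < 0.1)   # answer flipped with prob. 0.1 on each query
    print(blr_reject_rate(linear, 8), blr_reject_rate(noisy, 8))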
Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property
Compressed Sensing aims to capture attributes of $k$-sparse signals using
very few measurements. In the standard Compressed Sensing paradigm, the
$m \times n$ measurement matrix $A$ is required to act as a near isometry on
the set of all $k$-sparse signals (Restricted Isometry Property or RIP).
Although it is known that certain probabilistic processes generate $m \times n$
matrices that satisfy the RIP with high probability, there is no practical
algorithm for verifying whether a given sensing matrix $A$ has this property,
which is crucial for the feasibility of the standard recovery algorithms. In contrast,
this paper provides simple criteria that guarantee that a deterministic sensing
matrix satisfying these criteria acts as a near isometry on an overwhelming
majority of $k$-sparse signals; in particular, most such signals have a unique
representation in the measurement domain. Probability still plays a critical
role, but it enters the signal model rather than the construction of the
sensing matrix. We require the columns of the sensing matrix to form a group
under pointwise multiplication. The construction allows recovery methods for
which the expected performance is sub-linear in $n$, and only quadratic in
$m$; the focus on expected performance is more typical of mainstream signal
processing than the worst-case analysis that prevails in standard Compressed
Sensing. Our framework encompasses many families of deterministic sensing
matrices, including those formed from discrete chirps, Delsarte-Goethals codes,
and extended BCH codes.
Comment: 16 pages, 2 figures, to appear in IEEE Journal of Selected Topics in Signal Processing, the special issue on Compressed Sensing
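As a hedged illustration of one family this framework covers, the sketch below builds an $m \times m^2$ discrete-chirp sensing matrix whose columns are closed under pointwise multiplication, which is the group-structure requirement stated in the abstract; the parameters and normalization are illustrative choices, not the paper's exact construction.

    # Sketch of an m x m^2 "discrete chirp" sensing matrix: column phi_{a,b} has
    # entries w^(a*t^2 + b*t) for t in Z_m, with w a primitive m-th root of unity.
    # Pointwise products of columns are again columns, so they form a group.
    # Parameters below are illustrative only.

    import numpy as np

    def chirp_matrix(m):
        """Columns indexed by (a, b) in Z_m x Z_m; rows indexed by t in Z_m."""
        t = np.arange(m)
        cols = []
        for a in range(m):
            for b in range(m):
                cols.append(np.exp(2j * np.pi * (a * t * t + b * t) / m))
        # normalize columns to unit length so the matrix can act as a near isometry
        return np.stack(cols, axis=1) / np.sqrt(m)

    A = chirp_matrix(7)                      # 7 x 49 sensing matrix
    x = np.zeros(49)
    x[[3, 17, 40]] = 1.0                     # a 3-sparse signal with an arbitrary support
    y = A @ x                                # 7 measurements
    print(A.shape, np.linalg.norm(y) ** 2)   # typically close to ||x||^2 = 3 for most supports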