Generalization of the Lee-O'Sullivan List Decoding for One-Point AG Codes
We generalize the list decoding algorithm for Hermitian codes proposed by Lee
and O'Sullivan based on Gr\"obner bases to general one-point AG codes, under an
assumption weaker than one used by Beelen and Brander. Our generalization
enables us to apply the fast algorithm to compute a Gr\"obner basis of a module
proposed by Lee and O'Sullivan, which was not possible in another
generalization by Lax.
Comment: article.cls, 14 pages, no figure. The order of authors was changed. To appear in Journal of Symbolic Computation. This is an extended journal paper version of our earlier conference paper arXiv:1201.624
Semidefinite programming, multivariate orthogonal polynomials, and codes in spherical caps
We apply the semidefinite programming approach developed in
arxiv:math.MG/0608426 to obtain new upper bounds for codes in spherical caps.
We compute new upper bounds for the one-sided kissing number in several
dimensions where we in particular get a new tight bound in dimension 8.
Furthermore, we show how to use the SDP framework to obtain analytic bounds.
Comment: 15 pages, (v2) referee comments and suggestions incorporated
Towards practical minimum-entropy universal decoding
Minimum-entropy decoding is a universal decoding algorithm used in decoding block compression of discrete memoryless sources as well as block transmission of information across discrete memoryless channels. Extensions can also be applied to multiterminal decoding problems, such as the Slepian-Wolf source coding problem. The 'method of types' has been used to show that there exist linear codes for which minimum-entropy decoders achieve the same error exponent as maximum-likelihood decoders. Since minimum-entropy decoding is NP-hard in general, minimum-entropy decoders have existed primarily in the theory literature. We introduce practical approximation algorithms for minimum-entropy decoding. Our approach, which relies on ideas from linear programming, exploits two key observations. First, the 'method of types' shows that the number of distinct types grows polynomially in n. Second, recent results in the optimization literature have exhibited polytope projection algorithms whose complexity is a function of the number of vertices of the projected polytope. Combining these two ideas, we leverage recent results on linear programming relaxations for error-correcting codes to construct polynomial-complexity algorithms for this setting. In the binary case, we explicitly demonstrate linear code constructions that admit provably good performance.
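As a toy illustration of the decoding rule itself (not of the paper's LP-based approximation algorithms), the sketch below brute-forces minimum-entropy decoding over a small codebook; the choice of the [7,4] Hamming code and the low-weight tie-breaking rule are our own assumptions, not taken from the paper:

```python
from itertools import product
from math import log2

# Generator matrix of the [7,4] Hamming code -- a toy stand-in; the paper's
# code constructions and polynomial-time LP machinery are not reproduced here.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Multiply a 4-bit message by G over GF(2)."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

CODEBOOK = [encode(m) for m in product((0, 1), repeat=4)]

def empirical_entropy(bits):
    """Entropy (bits/symbol) of the empirical distribution of a bit string."""
    n = len(bits)
    w = sum(bits)
    h = 0.0
    for k in (w, n - w):
        if k:
            h -= (k / n) * log2(k / n)
    return h

def min_entropy_decode(received):
    """Pick the codeword whose induced error pattern has minimum empirical
    entropy. Brute force over the codebook is exponential in the dimension k,
    which is why the paper resorts to LP relaxations; ties are broken toward
    low-weight errors (an assumption suited to a low-noise channel)."""
    def score(cw):
        err = [r ^ c for r, c in zip(received, cw)]
        return (empirical_entropy(err), sum(err))
    return min(CODEBOOK, key=score)

sent = encode((1, 0, 1, 1))
received = list(sent)
received[2] ^= 1                       # a single bit flip
print(min_entropy_decode(tuple(received)) == sent)   # True
```

Note that the entropy of a binary error pattern is symmetric in its weight (weight 1 and weight 6 score equally on a length-7 word), which is why the tie-break is needed even in this tiny example.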
On Approximating the Sum-Rate for Multiple-Unicasts
We study upper bounds on the sum-rate of multiple-unicasts. We approximate
the Generalized Network Sharing Bound (GNS cut) of the multiple-unicasts
network coding problem with independent sources. Our approximation
algorithm runs in polynomial time and yields an upper bound on the joint source
entropy rate, which is within an factor from the GNS cut. It
further yields a vector-linear network code that achieves joint source entropy
rate within an factor from the GNS cut, but \emph{not} with
independent sources: the code induces a correlation pattern among the sources.
Our second contribution is establishing a separation result for vector-linear
network codes: for any given field there exist networks for which
the optimum sum-rate supported by vector-linear codes over for
independent sources can be multiplicatively separated by a factor of
, for any constant , from the optimum joint entropy
rate supported by a code that allows correlation between sources. Finally, we
establish a similar separation result for the asymmetric optimum vector-linear
sum-rates achieved over two distinct fields and
for independent sources, revealing that the choice of field
can heavily impact the performance of a linear network code.
Comment: 10 pages; a shorter version appeared at ISIT (International Symposium on Information Theory) 2015; some typos corrected
A library of Taylor models for PVS automatic proof checker
We present in this paper a library to compute with Taylor models, a technique
extending interval arithmetic to reduce decorrelation and to solve differential
equations. Numerical software usually produces only numerical results. Our
library can be used to produce both results and proofs. As seen during the
development of the proof of Fermat's last theorem reported by Aczel (1996),
providing a proof is not sufficient: it must also be checked. Our library
provides a proof that has been thoroughly scrutinized by a trustworthy and
tireless assistant. PVS is an automatic proof assistant that is well developed
and widely used, and that has no internal connection with interval arithmetic
or Taylor models. We built our library so that PVS validates each result as it
is produced. As producing and validating a proof is, and will certainly remain,
a bigger task than just producing a numerical result, our library will never be
a replacement for imperative implementations of Taylor models such as COSY
Infinity. Our library should mainly be used to validate small- to medium-size
results that are involved in safety- or life-critical applications.
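The decorrelation problem that Taylor models address can be illustrated with a minimal degree-1 sketch; the class below is hypothetical, and real libraries (COSY Infinity, or the PVS library described above) use higher degrees and rigorously rounded coefficient arithmetic:

```python
class TM1:
    """Degree-1 Taylor model c0 + c1*(x - x0) + [lo, hi] over a domain
    centered at x0. A minimal illustrative sketch, not a rigorous
    implementation: the remainder interval is manipulated with ordinary
    floating-point arithmetic instead of outward rounding."""
    def __init__(self, c0, c1, lo=0.0, hi=0.0):
        self.c0, self.c1, self.lo, self.hi = c0, c1, lo, hi

    def __add__(self, other):
        return TM1(self.c0 + other.c0, self.c1 + other.c1,
                   self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Interval remainder subtraction: [a,b] - [c,d] = [a-d, b-c]
        return TM1(self.c0 - other.c0, self.c1 - other.c1,
                   self.lo - other.hi, self.hi - other.lo)

    def bounds(self, radius):
        """Enclosure of the model over |x - x0| <= radius."""
        s = abs(self.c1) * radius
        return (self.c0 - s + self.lo, self.c0 + s + self.hi)

# The identity function x on [0, 1], centered at x0 = 0.5.
x = TM1(0.5, 1.0)

# Naive interval arithmetic loses the correlation: [0,1] - [0,1] = [-1,1].
# The Taylor model keeps it: x - x has zero polynomial and zero remainder.
print((x - x).bounds(0.5))   # (0.0, 0.0)
print(x.bounds(0.5))         # (0.0, 1.0)
```

The symbolic part of the model carries the dependence between the two occurrences of x, so the subtraction cancels exactly, whereas plain interval arithmetic treats them as independent and returns the pessimistic enclosure [-1, 1].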