Entanglement-assisted zero-error source-channel coding
We study the use of quantum entanglement in the zero-error source-channel
coding problem. Here, Alice and Bob are connected by a noisy classical one-way
channel, and are given correlated inputs from a random source. Their goal is
for Bob to learn Alice's input while using the channel as little as possible.
In the zero-error regime, the optimal rates of source codes and channel codes
are given by graph parameters known as the Witsenhausen rate and Shannon
capacity, respectively. The Lov\'asz theta number, a graph parameter defined by
a semidefinite program, gives the best efficiently-computable upper bound on
the Shannon capacity and it also upper bounds its entanglement-assisted
counterpart. At the same time it was recently shown that the Shannon capacity
can be increased if Alice and Bob may use entanglement.
Here we partially extend these results to the source-coding problem and to
the more general source-channel coding problem. We prove a lower bound on the
rate of entanglement-assisted source-codes in terms of Szegedy's number (a
strengthening of the theta number). This result implies that the theta number
lower bounds the entangled variant of the Witsenhausen rate. We also show that
entanglement can allow for an unbounded improvement of the asymptotic rate of
both classical source codes and classical source-channel codes. Our separation
results use low-degree polynomials due to Barrington, Beigel and Rudich,
Hadamard matrices due to Xia and Liu, and a new application of remote state
preparation.
Comment: Title has been changed. Previous title was 'Zero-error source-channel
coding with entanglement'. Corrected an error in Lemma 1.
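As an illustrative aside (background fact, not part of the abstract): for odd cycles C_n the Lovász theta number has the closed form θ(C_n) = n·cos(π/n)/(1 + cos(π/n)), and for the 5-cycle this evaluates to √5, which Lovász showed is exactly the Shannon capacity of C_5. A minimal numeric check:

```python
import math

def lovasz_theta_odd_cycle(n: int) -> float:
    """Closed-form Lovasz theta number of the odd cycle C_n."""
    assert n % 2 == 1 and n >= 3, "formula holds for odd cycles"
    c = math.cos(math.pi / n)
    return n * c / (1 + c)

# For C_5 the theta number equals sqrt(5), matching the Shannon capacity.
print(lovasz_theta_odd_cycle(5))   # ~2.2360679...
```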
Mathematical Programming Decoding of Binary Linear Codes: Theory and Algorithms
Mathematical programming is a branch of applied mathematics and has recently
been used to derive new decoding approaches, challenging established but often
heuristic algorithms based on iterative message passing. Concepts from
mathematical programming used in the context of decoding include linear,
integer, and nonlinear programming, network flows, notions of duality as well
as matroid and polyhedral theory. This survey article reviews and categorizes
decoding methods based on mathematical programming approaches for binary linear
codes over binary-input memoryless symmetric channels.
Comment: 17 pages, submitted to the IEEE Transactions on Information Theory.
Published July 201
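To make the linear-programming idea concrete, here is a hedged sketch (my own illustration, not taken from the survey) of Feldman-style LP decoding for the [7,4] Hamming code on a binary symmetric channel: the ML objective is minimized over the relaxation cut out by the odd-subset inequalities of each parity check. It assumes `scipy` is available; the code and channel are illustrative choices.

```python
from itertools import combinations
from scipy.optimize import linprog

# Supports of the three parity checks of the [7,4] Hamming code (0-indexed).
CHECKS = [(0, 1, 2, 4), (0, 1, 3, 5), (0, 2, 3, 6)]
N = 7

def lp_decode(y):
    """Feldman LP decoder for the BSC: minimize sum_i (1 - 2*y_i) x_i over the
    relaxed codeword polytope; an integral optimum carries an ML certificate."""
    c = [1 - 2 * yi for yi in y]
    A_ub, b_ub = [], []
    for check in CHECKS:
        for size in range(1, len(check) + 1, 2):   # odd-size subsets S of the check
            for S in combinations(check, size):
                row = [0.0] * N
                for i in check:
                    row[i] = 1.0 if i in S else -1.0
                A_ub.append(row)
                b_ub.append(len(S) - 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * N, method="highs")
    return [round(v) for v in res.x]

# A noiseless codeword is the unique LP optimum and decodes to itself.
print(lp_decode([1, 1, 1, 0, 1, 0, 0]))
```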
Fast Decoder for Overloaded Uniquely Decodable Synchronous Optical CDMA
In this paper, we propose a fast decoder algorithm for uniquely decodable
(errorless) code sets for overloaded synchronous optical code-division
multiple-access (O-CDMA) systems. The proposed decoder is designed in such a
way that the users can uniquely recover the information bits with a very simple
decoder, which uses only a few comparisons. Compared to the maximum-likelihood
(ML) decoder, which has a high computational complexity even for moderate code
lengths, the proposed decoder has much lower computational complexity.
Simulation results in terms of bit error rate (BER) demonstrate that the
proposed decoder requires only 1-2 dB higher signal-to-noise ratio (SNR) than
the ML decoder to achieve a given BER.
Comment: arXiv admin note: substantial text overlap with arXiv:1806.0395
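As a toy illustration of unique decodability in the overloaded setting (this signature matrix and the lookup decoder are my own construction, not the paper's code set or its comparison-based decoder): with L=4 chips and K=5 users, the {0,1} signature matrix below maps all 2^5 user bit vectors to distinct noiseless chip sums, so exhaustive lookup recovers the bits exactly.

```python
from itertools import product

# 4 chips x 5 users, {0,1} signatures: an overloaded (K > L) set chosen so that
# no nonzero {-1,0,1} vector lies in its null space, hence b -> Cb is injective.
C = [
    [1, 1, 1, 1, 1],
    [1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [0, 1, 0, 0, 1],
]

def chip_sums(b):
    """Noiseless received chip sums for the user bit vector b."""
    return tuple(sum(C[l][k] * b[k] for k in range(5)) for l in range(4))

# Build a lookup table and verify unique decodability: 32 distinct outputs.
table = {chip_sums(b): b for b in product((0, 1), repeat=5)}
assert len(table) == 32

def decode(r):
    return table[tuple(r)]

print(decode(chip_sums((1, 0, 1, 1, 0))))   # recovers (1, 0, 1, 1, 0)
```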
Multiresolution vector quantization
Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
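The embedded-description idea can be sketched with a toy successive-refinement scalar quantizer (a simplification of my own, not the vector quantizer design of the paper): each additional bit bisects the uncertainty interval, so decoding any prefix of the bit stream yields a coarser reproduction of the same source sample.

```python
def encode(x, n_bits):
    """Embedded binary description of x in [0, 1) by interval bisection."""
    bits, lo, hi = [], 0.0, 1.0
    for _ in range(n_bits):
        mid = (lo + hi) / 2
        if x >= mid:
            bits.append(1)
            lo = mid
        else:
            bits.append(0)
            hi = mid
    return bits

def decode(bits):
    """Reproduce from any prefix: the midpoint of the remaining interval."""
    lo, hi = 0.0, 1.0
    for b in bits:
        mid = (lo + hi) / 2
        if b:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = 0.7071
stream = encode(x, 8)
# Decoding longer prefixes of the same stream gives finer reproductions of x.
for k in (2, 4, 8):
    print(k, decode(stream[:k]))
```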
On Multiple Decoding Attempts for Reed-Solomon Codes: A Rate-Distortion Approach
One popular approach to soft-decision decoding of Reed-Solomon (RS) codes is
based on using multiple trials of a simple RS decoding algorithm in combination
with erasing or flipping a set of symbols or bits in each trial. This paper
presents a framework based on rate-distortion (RD) theory to analyze these
multiple-decoding algorithms. By defining an appropriate distortion measure
between an error pattern and an erasure pattern, the successful decoding
condition, for a single errors-and-erasures decoding trial, becomes equivalent
to distortion being less than a fixed threshold. Finding the best set of
erasure patterns also turns into a covering problem which can be solved
asymptotically by rate-distortion theory. Thus, the proposed approach can be
used to understand the asymptotic performance-versus-complexity trade-off of
multiple errors-and-erasures decoding of RS codes.
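For concreteness (standard coding-theory background, not code from the paper): a single errors-and-erasures trial of a bounded-distance RS decoder succeeds exactly when twice the number of unerased errors plus the number of erasures falls below the minimum distance, which is the kind of fixed-threshold condition the distortion measure is built to mimic.

```python
def trial_succeeds(error_pos, erasure_pos, n, k):
    """Bounded-distance errors-and-erasures condition for an [n, k] RS code:
    succeed iff 2 * (#unerased errors) + (#erasures) < d_min = n - k + 1."""
    d_min = n - k + 1
    unerased_errors = len(set(error_pos) - set(erasure_pos))
    return 2 * unerased_errors + len(set(erasure_pos)) < d_min

# Example with a [15, 9] RS code, d_min = 7:
print(trial_succeeds({1, 5, 9}, set(), 15, 9))      # 3 errors: 6 < 7 -> True
print(trial_succeeds({1, 5, 9}, {1}, 15, 9))        # erase one error: 5 < 7 -> True
print(trial_succeeds({1, 5, 9, 12}, set(), 15, 9))  # 4 errors: 8 < 7 -> False
```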
This initial result is also extended in a few directions. The rate-distortion
exponent (RDE) is computed to give more precise results for moderate
blocklengths. Multiple trials of algebraic soft-decision (ASD) decoding are
analyzed using this framework. Analytical and numerical computations of the RD
and RDE functions are also presented. Finally, simulation results show that
sets of erasure patterns designed using the proposed methods outperform other
algorithms with the same number of decoding trials.
Comment: to appear in the IEEE Transactions on Information Theory (Special
Issue on Facets of Coding Theory: from Algorithms to Networks)
On the BICM Capacity
Optimal binary labelings, input distributions, and input alphabets are
analyzed for the so-called bit-interleaved coded modulation (BICM) capacity,
paying special attention to the low signal-to-noise ratio (SNR) regime. For
8-ary pulse amplitude modulation (PAM) and for 0.75 bit/symbol, the folded
binary code results in a higher capacity than the binary reflected Gray code
(BRGC) and the natural binary code (NBC). The 1 dB gap between the additive
white Gaussian noise (AWGN) capacity and the BICM capacity with the BRGC can be
almost completely removed if the input symbol distribution is properly
selected. First-order asymptotics of the BICM capacity for arbitrary input
alphabets and distributions, dimensions, mean, variance, and binary labeling
are developed. These asymptotics are used to define first-order optimal (FOO)
constellations for BICM, i.e. constellations that make BICM achieve the Shannon
limit of -1.59 dB. It is shown that the E_b/N_0 required for reliable
transmission at asymptotically low rates in BICM can be as high as infinity,
that for uniform input distributions and 8-PAM there are only 72 classes of
binary labelings with a different first-order asymptotic behavior, and that
this number is reduced to only 26 for 8-ary phase shift keying (PSK). A general
answer to the question of FOO constellations for BICM is also given: using the
Hadamard transform, it is found that for uniform input distributions, a
constellation for BICM is FOO if and only if it is a linear projection of a
hypercube. A constellation based on PAM or quadrature amplitude modulation
input alphabets is FOO if and only if it is labeled by the NBC; if the
constellation is based on PSK input alphabets instead, it can never be FOO if
the input alphabet has more than four points, regardless of the labeling.
Comment: Submitted to the IEEE Transactions on Information Theory
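As background illustration (my own sketch; the two labelings are standard, and none of the abstract's capacity analysis is reproduced here): the natural binary code simply counts in binary, while the binary reflected Gray code g(i) = i XOR (i >> 1) makes adjacent 8-PAM points differ in exactly one bit.

```python
def nbc(i, m=3):
    """Natural binary code label of constellation point i (m bits)."""
    return format(i, f"0{m}b")

def brgc(i, m=3):
    """Binary reflected Gray code label: i XOR (i >> 1)."""
    return format(i ^ (i >> 1), f"0{m}b")

# Labels of the 8 points of 8-PAM under both labelings.
for i in range(8):
    print(i, nbc(i), brgc(i))

# Adjacent BRGC labels differ in exactly one bit position.
assert all(
    bin((i ^ (i >> 1)) ^ ((i + 1) ^ ((i + 1) >> 1))).count("1") == 1
    for i in range(7)
)
```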
Correcting a Fraction of Errors in Nonbinary Expander Codes with Linear Programming
A linear-programming decoder for \emph{nonbinary} expander codes is
presented. It is shown that the proposed decoder has the maximum-likelihood
certificate properties. It is also shown that this decoder corrects any pattern
of errors of a relative weight up to approximately 1/4 \delta_A \delta_B (where
\delta_A and \delta_B are the relative minimum distances of the constituent
codes).
Comment: Part of this work was presented at the IEEE International Symposium
on Information Theory 2009, Seoul, Korea