Jar Decoding: Non-Asymptotic Converse Coding Theorems, Taylor-Type Expansion, and Optimality
Recently, a new decoding rule called jar decoding was proposed; under jar decoding, a non-asymptotic achievable tradeoff between the coding rate and word error probability was also established for any discrete input memoryless channel with discrete or continuous output (DIMC). Along the path of non-asymptotic analysis, this paper further shows that jar decoding is actually optimal up to the second-order coding performance, by establishing new non-asymptotic converse coding theorems and determining the Taylor-type expansion of the best coding rate for any finite block length and word error probability up to the second order. Finally, based on the Taylor-type expansion and the new converses, two approximation formulas for the best finite-block-length coding rate (dubbed "SO" and "NEP") are provided; they are further evaluated and compared against some of the best bounds known so far, as well as the normal approximation revisited recently in the literature. It turns out that while the normal approximation is all over the map, i.e. sometimes below achievable bounds and sometimes above converse bounds, the SO approximation is much more reliable, as it is always below converses; in the meantime, the NEP approximation is the best among the three and always provides an accurate estimate of the best coding rate. An important implication arising from the Taylor-type expansion is that in the practical non-asymptotic regime, the optimal marginal codeword symbol distribution is not necessarily a capacity-achieving distribution.
Comment: submitted to IEEE Transactions on Information Theory in April, 201
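The normal approximation mentioned in this abstract can be illustrated numerically. The sketch below, for a binary symmetric channel with an illustrative crossover probability and blocklength of our own choosing, evaluates the standard approximation R ~ C - sqrt(V/n) * Q^{-1}(eps) + log2(n)/(2n); the paper's SO and NEP formulas are not reproduced here.

```python
import math

def gaussian_q(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def gaussian_q_inv(eps, lo=-10.0, hi=10.0, iters=100):
    # Invert the (strictly decreasing) Q function by bisection.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if gaussian_q(mid) > eps:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def bsc_normal_approximation(p, n, eps):
    """Normal approximation to the best coding rate of a BSC(p) at
    blocklength n and word error probability eps:
        R ~ C - sqrt(V/n) * Q^{-1}(eps) + log2(n) / (2n),
    where C is the capacity and V the channel dispersion."""
    entropy = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    capacity = 1 - entropy
    dispersion = p * (1 - p) * math.log2((1 - p) / p) ** 2
    return (capacity
            - math.sqrt(dispersion / n) * gaussian_q_inv(eps)
            + math.log2(n) / (2 * n))

# Illustrative operating point: BSC(0.11), n = 1000, eps = 1e-3.
rate = bsc_normal_approximation(0.11, 1000, 1e-3)
```

At this operating point the approximation gives a rate noticeably below the capacity of about 0.5 bits per channel use; the abstract's point is that such normal-approximation values can fall on either side of the true finite-blocklength bounds, unlike the SO and NEP approximations.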
On Linear Operator Channels over Finite Fields
Motivated by linear network coding, communication channels that perform linear operations over finite fields, namely linear operator channels (LOCs), are studied in this paper. For such a channel, the output vector is a linear transform of the input vector, and the transformation matrix is randomly and independently generated. The transformation matrix is assumed to remain constant for every T input vectors and to be unknown to both the transmitter and the receiver. There are no constraints on the distribution of the transformation matrix or on the field size.
Specifically, the optimality of subspace coding over LOCs is investigated. A
lower bound on the maximum achievable rate of subspace coding is obtained and
it is shown to be tight in some cases. The maximum achievable rate of constant-dimensional subspace coding is characterized, and the rate loss incurred by using constant-dimensional subspace coding is shown to be insignificant.
The maximum achievable rate of channel training is close to the lower bound
on the maximum achievable rate of subspace coding. Two coding approaches based
on channel training are proposed and their performances are evaluated. Our
first approach makes use of rank-metric codes and its optimality depends on the
existence of maximum rank distance codes. Our second approach applies linear
coding and it can achieve the maximum achievable rate of channel training. Our
code designs require only the knowledge of the expectation of the rank of the
transformation matrix. The second scheme can also be realized ratelessly
without a priori knowledge of the channel statistics.
Comment: 53 pages, 3 figures, submitted to IEEE Transactions on Information Theory
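The channel-training idea described above can be sketched in a few lines: over GF(2), sending the identity matrix first lets the receiver read off the transfer matrix column by column before the data vectors arrive. The dimensions and field below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: M-dim inputs, N-dim outputs, coherence block T.
M, N, T = 4, 4, 10
H = rng.integers(0, 2, size=(N, M))   # transfer matrix over GF(2), unknown to receiver

# Transmitter: prepend the M x M identity (training) to T - M data vectors.
data = rng.integers(0, 2, size=(M, T - M))
X = np.concatenate([np.eye(M, dtype=np.int64), data], axis=1)

# Channel: each output vector is H times the input vector over GF(2),
# with H held constant over the whole block of T vectors.
Y = (H @ X) % 2

# Receiver: the first M outputs are exactly the columns of H.
H_hat = Y[:, :M]
assert np.array_equal(H_hat, H)

# With H known, the remaining outputs can be decoded by solving
# H x = y over GF(2) (unique up to the null space of H).
Y_data = Y[:, M:]
```

Training occupies M of the T vectors per coherence block, so this scheme pays a rate factor of (T - M)/T, which is why training-based approaches come close to the subspace-coding lower bound only when T is large relative to the input dimension.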
Capacity Analysis of Linear Operator Channels over Finite Fields
Motivated by communication through a network employing linear network coding, the capacities of linear operator channels (LOCs) with arbitrarily distributed transfer matrices over finite fields are studied. Both the Shannon capacity and the subspace coding capacity are analyzed. By establishing and comparing lower and upper bounds on these capacities, various necessary conditions and sufficient conditions for the two capacities to coincide are obtained. A new class of LOCs whose subspace coding capacity equals the Shannon capacity is identified, which includes LOCs with uniform-given-rank transfer matrices as special cases. It is also demonstrated that the subspace coding capacity is strictly less than the Shannon capacity for a broad class of LOCs. In general, an optimal subspace coding scheme is difficult to find because it requires maximizing a non-concave function. However, for an LOC with a unique subspace degradation, the subspace coding capacity can be obtained by solving a convex optimization problem over the rank distribution. Classes of LOCs with a unique subspace degradation are characterized. Since LOCs with uniform-given-rank transfer matrices have unique subspace degradations, some existing results on such LOCs are explained in a more general way.
Comment: To appear in IEEE Transactions on Information Theory
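Since the results above are phrased in terms of the rank distribution of the transfer matrix, a small Monte Carlo sketch may help make that object concrete. The routine below estimates the rank distribution of a uniform random square matrix over GF(2); all names and parameters are our own, not from the paper.

```python
import numpy as np

def gf2_rank(A):
    # Rank of an integer 0/1 matrix over GF(2), by row reduction.
    A = A.copy() % 2
    rank = 0
    rows, cols = A.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]       # move pivot row up
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]                   # eliminate column c
        rank += 1
    return rank

def empirical_rank_distribution(n, trials=2000, seed=0):
    # Monte Carlo estimate of P(rank = k) for a uniform n x n matrix over GF(2).
    rng = np.random.default_rng(seed)
    counts = np.zeros(n + 1)
    for _ in range(trials):
        counts[gf2_rank(rng.integers(0, 2, size=(n, n)))] += 1
    return counts / trials
```

For a uniform 4 x 4 matrix over GF(2) the full-rank probability is prod_{i=1..4} (1 - 2^-i), about 0.31; the estimate above should land near that value.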
Analysis on tailed distributed arithmetic codes for uniform binary sources
Distributed Arithmetic Coding (DAC) is a variant of Arithmetic Coding (AC) that can realise Slepian-Wolf Coding (SWC) in a nonlinear way. In previous work, we defined the Codebook Cardinality Spectrum (CCS) and the Hamming Distance Spectrum (HDS) for DAC. In this paper, we make use of the CCS and HDS to analyze tailed DAC, a form of DAC that maps the last few symbols of each source block onto non-overlapped intervals, as in traditional AC. We first derive the exact HDS formula for tailless DAC, a form of DAC that maps all symbols of each source block onto overlapped intervals, and show that the HDS formula given previously is actually an approximate version. The HDS formula is then extended to tailed DAC. We also deduce, with the help of the CCS, the average codebook cardinality, which is closely related to decoding complexity, and the rate loss of tailed DAC. The effects of tail length are analyzed extensively. It is revealed that by increasing the tail length to a value not close to the bitstream length, closely-spaced codewords within the same codebook can be removed at the cost of a higher decoding complexity and a larger rate loss. Finally, the theoretical analyses are verified by experiments.
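The tailed encoder described above can be sketched as follows: body symbols are mapped onto overlapped intervals (the source of compression, and of decoder ambiguity), while the last few symbols use the non-overlapped intervals of ordinary AC. The rate and interval convention below are illustrative assumptions for a uniform binary source.

```python
def dac_encode(bits, rate, tail_len):
    """Map a binary block to a subinterval of [0, 1).

    Body symbols use the overlapped intervals [0, q) and [1-q, 1)
    with q = 2**(-rate) > 1/2, so distinct blocks can share codewords;
    the last tail_len symbols use the non-overlapped intervals
    [0, 1/2) and [1/2, 1) of ordinary arithmetic coding.
    """
    q = 2.0 ** (-rate)
    low, high = 0.0, 1.0
    for i, b in enumerate(bits):
        width = high - low
        if i < len(bits) - tail_len:       # overlapped body intervals
            if b == 0:
                high = low + width * q
            else:
                low = low + width * (1.0 - q)
        else:                              # non-overlapped tail intervals
            if b == 0:
                high = low + width * 0.5
            else:
                low = low + width * 0.5
    return low, high
```

Two blocks that differ only in a tail symbol always map to disjoint intervals, whereas blocks differing in a body symbol may still share codewords; this is the mechanism by which the tail removes closely-spaced codewords from a codebook, at the rate cost of the uncompressed tail.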
Hamming distance spectrum of DAC codes for equiprobable binary sources
Distributed Arithmetic Coding (DAC) is an effective technique for implementing Slepian-Wolf Coding (SWC). It has been shown that a DAC code partitions the source space into unequal-size codebooks, so that the overall performance of DAC codes depends on the cardinality and structure of these codebooks. The problem of DAC codebook cardinality has been solved by the so-called Codebook Cardinality Spectrum (CCS). This paper extends the previous work on the CCS by studying the problem of DAC codebook structure. We define the Hamming Distance Spectrum (HDS) to describe DAC codebook structure and propose a mathematical method to calculate the HDS of DAC codes. The theoretical analyses are verified by experimental results.
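The codebooks whose cardinality and structure the CCS and HDS describe can be made concrete by brute force for short blocks: a codebook is the set of all source blocks whose (tailless) DAC interval contains a given codeword value. The overlap factor and block length below are illustrative choices.

```python
from itertools import product

def dac_interval(bits, q):
    # Interval of a tailless DAC encoding with overlap factor q = 2**(-R):
    # symbol 0 maps to [0, q), symbol 1 to [1-q, 1), scaled recursively.
    low, high = 0.0, 1.0
    for b in bits:
        width = high - low
        if b == 0:
            high = low + width * q
        else:
            low = low + width * (1.0 - q)
    return low, high

def codebook(point, n, q):
    # All length-n source blocks whose interval contains `point`:
    # exactly the candidate set the DAC decoder must search with
    # the help of side information.
    members = []
    for bits in product((0, 1), repeat=n):
        low, high = dac_interval(bits, q)
        if low <= point < high:
            members.append(bits)
    return members
```

For n source bits at rate R = -log2(q), the codebooks have average cardinality of roughly 2^(n(1-R)); the CCS characterizes how this cardinality is distributed, and the HDS describes the Hamming distances between the members listed here.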