Finite-Length Scaling of Polar Codes
Consider a binary-input memoryless output-symmetric channel W. Such a
channel has a capacity, call it I(W), and for any rate R < I(W) and strictly
positive constant P_e we know that we can construct a coding scheme
that allows transmission at rate R with an error probability not exceeding
P_e. Assume now that we let the rate tend to I(W) and we ask how
we have to "scale" the blocklength N in order to keep the error probability
fixed to P_e. We refer to this as the "finite-length scaling" behavior.
This question was addressed by Strassen as well as Polyanskiy, Poor and Verdu,
and the result is that N must grow at least as the square of the reciprocal
of the gap to capacity, I(W) − R.
Polar codes are optimal in the sense that they achieve capacity. In this
paper, we are asking to what degree they are also optimal in terms of their
finite-length behavior. Our approach is based on analyzing the dynamics of the
un-polarized channels. The main results of this paper can be summarized as
follows. Consider the sum of Bhattacharyya parameters of the sub-channels chosen
(by the polar coding scheme) to transmit information. If we require this sum to
be smaller than a given value P_e > 0, then the required blocklength N
scales in terms of the rate R < I(W) as N ≥ α/(I(W) − R)^μ_l, where α is a positive
constant that depends on P_e and I(W), and μ_l is an explicit lower bound on the
scaling exponent. Also, we show that with the same requirement on the sum of
Bhattacharyya parameters, the blocklength scales in terms of the rate like
N ≤ β/(I(W) − R)^μ_u, where β is a constant that depends on P_e and I(W), and
μ_u is an explicit upper bound on the scaling exponent.
Comment: In IEEE Transactions on Information Theory, 201
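The scaling discussed above can be made concrete for the binary erasure channel, where the Bhattacharyya parameter of each sub-channel can be tracked exactly. The sketch below is an illustration of this standard recursion, not code from the paper; the function names and the greedy rate search are assumptions made here. It computes the sub-channel Bhattacharyya parameters of BEC(eps) and the largest rate whose chosen sub-channels keep the Bhattacharyya sum below a budget:

```python
# Sketch: channel polarization for the binary erasure channel BEC(eps).
# For the BEC, the Bhattacharyya parameter of a sub-channel equals its
# erasure probability, and one polarization step maps
#   Z -> 2Z - Z^2   ("minus", the worse sub-channel)
#   Z -> Z^2        ("plus", the better sub-channel)
# Names here (polarize, best_rate_for_budget) are illustrative.

def polarize(eps: float, n: int) -> list:
    """Return the 2**n sub-channel Bhattacharyya parameters of BEC(eps)."""
    zs = [eps]
    for _ in range(n):
        zs = [f(z) for z in zs for f in (lambda z: 2 * z - z * z,
                                         lambda z: z * z)]
    return zs

def best_rate_for_budget(eps: float, n: int, budget: float) -> float:
    """Largest rate such that the sum of the Bhattacharyya parameters of
    the chosen (most reliable) sub-channels stays below `budget`."""
    zs = sorted(polarize(eps, n))
    total, k = 0.0, 0
    for z in zs:
        if total + z > budget:
            break
        total += z
        k += 1
    return k / len(zs)

# Example: BEC(0.5), blocklength N = 2**10, Bhattacharyya-sum budget 1e-3.
# Capacity is 0.5; the achievable rate stays noticeably below it at this
# finite blocklength, which is exactly the gap the scaling laws quantify.
print(best_rate_for_budget(0.5, 10, 1e-3))
```

Running this for increasing n at a fixed budget shows the achievable rate creeping toward capacity, which is the finite-length behavior the abstract quantifies.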
On the Finite Length Scaling of Ternary Polar Codes
The polarization process of polar codes over a ternary alphabet is studied.
Recently it has been shown that the blocklength of polar codes
with prime alphabet size scales polynomially with respect to the inverse of the
gap between code rate and channel capacity. However, except for the binary
case, the degree of the polynomial in the bound is extremely large. In this
work, it is shown that a much lower degree polynomial can be computed
numerically for the ternary case. Similar results are conjectured for the
general case of prime alphabet size.
Comment: Submitted to ISIT 201
From Polar to Reed-Muller Codes: a Technique to Improve the Finite-Length Performance
We explore the relationship between polar and RM codes and we describe a
coding scheme which improves upon the performance of the standard polar code at
practical block lengths. Our starting point is the experimental observation
that RM codes have a smaller error probability than polar codes under MAP
decoding. This motivates us to introduce a family of codes that "interpolates"
between RM and polar codes; call this family {C_α : α ∈ [0, 1]}, where C_1 is
the original polar code and C_0 is an RM code.
Based on numerical observations, we remark that the error probability under MAP
decoding is an increasing function of α. MAP decoding has in general
exponential complexity, but empirically the performance of polar codes at
finite block lengths is boosted by moving along the family {C_α}, even under
low-complexity decoding schemes such as, for instance, belief propagation or
successive cancellation list decoding. We demonstrate the
performance gain via numerical simulations for transmission over the erasure
channel as well as the Gaussian channel.
Comment: 8 pages, 7 figures, in IEEE Transactions on Communications, 2014 and in ISIT'1
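For intuition on the two endpoints of such a family: over a blocklength N = 2^n, both RM and polar codes select K rows of the same matrix F^(⊗n), where row i has Hamming weight 2^popcount(i); RM keeps the highest-weight rows, polar the most reliable ones. The sketch below uses a simple swap-a-fraction rule as an illustrative stand-in for the interpolation (the paper's exact construction differs), with reliabilities computed for the BEC:

```python
# Sketch: interpolating between polar and RM row selection at
# blocklength N = 2**n. The parameter t below plays the role of the
# interpolation parameter: t = 0 gives the polar selection, t = 1 an
# RM-style selection. This swap rule is an assumption for illustration.

def bec_bhattacharyya(eps, n):
    """Sub-channel Bhattacharyya parameters of BEC(eps), standard recursion."""
    zs = [eps]
    for _ in range(n):
        zs = [f(z) for z in zs for f in (lambda z: 2 * z - z * z,
                                         lambda z: z * z)]
    return zs

def interpolated_rows(eps, n, K, t):
    N = 2 ** n
    zs = bec_bhattacharyya(eps, n)
    # polar: K most reliable rows; RM: K highest-weight rows of F^(x)n
    polar = sorted(range(N), key=lambda i: zs[i])[:K]
    rm = sorted(range(N), key=lambda i: -bin(i).count("1"))[:K]
    keep = int(round((1 - t) * K))          # fraction kept from polar choice
    chosen = polar[:keep]
    for i in rm:                            # fill up with RM-preferred rows
        if len(chosen) == K:
            break
        if i not in chosen:
            chosen.append(i)
    return sorted(chosen)

# t = 0: the polar code; t = 1: the RM-style selection.
print(interpolated_rows(0.5, 4, 8, 0.0))
print(interpolated_rows(0.5, 4, 8, 1.0))
```

Comparing the two printed sets shows which low-weight rows the polar construction keeps for reliability reasons and the RM criterion rejects.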
How to Achieve the Capacity of Asymmetric Channels
We survey coding techniques that enable reliable transmission at rates that
approach the capacity of an arbitrary discrete memoryless channel. In
particular, we take the point of view of modern coding theory and discuss how
recent advances in coding for symmetric channels help provide more efficient
solutions for the asymmetric case. We consider, in more detail, three basic
coding paradigms.
The first one is Gallager's scheme that consists of concatenating a linear
code with a non-linear mapping so that the input distribution can be
appropriately shaped. We explicitly show that both polar codes and spatially
coupled codes can be employed in this scenario. Furthermore, we derive a
scaling law between the gap to capacity, the cardinality of the input and
output alphabets, and the required size of the mapper.
The second one is an integrated scheme in which the code is used both for
source coding, in order to create codewords distributed according to the
capacity-achieving input distribution, and for channel coding, in order to
provide error protection. Such a technique has been recently introduced by
Honda and Yamamoto in the context of polar codes, and we show how to apply it
also to the design of sparse graph codes.
The third paradigm is based on an idea of B\"ocherer and Mathar, and
separates the two tasks of source coding and channel coding by a chaining
construction that binds together several codewords. We present conditions for
the source code and the channel code, and we describe how to combine any source
code with any channel code fulfilling those conditions, in order to provide
capacity-achieving schemes for asymmetric channels. In particular, we show that
polar codes, spatially coupled codes, and homophonic codes are suitable as
basic building blocks of the proposed coding strategy.
Comment: 32 pages, 4 figures, presented in part at Allerton'14 and published in IEEE Trans. Inform. Theor
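As a toy illustration of the first paradigm, the mapper in Gallager's scheme takes blocks of k uniform coded bits through a many-to-one map whose preimage sizes approximate the target input distribution with dyadic probabilities m/2^k; growing k shrinks the shaping gap, in line with the scaling law mentioned above. The target distribution and helper names below are illustrative assumptions, not taken from the survey:

```python
# Sketch: a non-linear many-to-one mapper that shapes uniform coded
# bits into a (dyadic) target input distribution, as in Gallager's
# scheme. `make_mapper` and the 5/8-3/8 target are illustrative.

from itertools import product

def make_mapper(target, k):
    """Map each of the 2**k bit patterns to a symbol so that symbol x
    is produced with probability round(target[x] * 2**k) / 2**k."""
    counts = {x: round(p * 2 ** k) for x, p in target.items()}
    assert sum(counts.values()) == 2 ** k, "target must be (near-)dyadic"
    patterns = list(product([0, 1], repeat=k))
    table, i = {}, 0
    for x, c in counts.items():
        for _ in range(c):               # c patterns map to symbol x
            table[patterns[i]] = x
            i += 1
    return table

# Target input distribution P(X=1) = 5/8, P(X=0) = 3/8, with k = 3 bits:
mapper = make_mapper({0: 3 / 8, 1: 5 / 8}, 3)
# Uniform bits at the mapper input now induce the shaped distribution:
freq = sum(mapper[p] for p in product([0, 1], repeat=3)) / 8
print(freq)  # 0.625
```

A non-dyadic target distribution can only be approximated; increasing k (and hence the mapper size) reduces that approximation error, which is the trade-off the derived scaling law captures.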
Partitioned List Decoding of Polar Codes: Analysis and Improvement of Finite Length Performance
Polar codes represent one of the major recent breakthroughs in coding theory
and, because of their attractive features, they have been selected for the
incoming 5G standard. As such, a lot of attention has been devoted to the
development of decoding algorithms with good error performance and efficient
hardware implementation. One of the leading candidates in this regard is
represented by successive-cancellation list (SCL) decoding. However, its
hardware implementation requires a large amount of memory. Recently, a
partitioned SCL (PSCL) decoder has been proposed to significantly reduce the
memory consumption. In this paper, we examine the paradigm of PSCL decoding
from both theoretical and practical standpoints: (i) by changing the
construction of the code, we are able to improve the performance at no
additional computational, latency or memory cost, (ii) we present an optimal
scheme to allocate cyclic redundancy checks (CRCs), and (iii) we provide an
upper bound on the list size that allows MAP performance.
Comment: 2017 IEEE Global Communications Conference (GLOBECOM)