Improved Successive Cancellation Decoding of Polar Codes
As improved versions of the successive cancellation (SC) decoding algorithm,
successive cancellation list (SCL) decoding and successive cancellation stack
(SCS) decoding are used to improve the finite-length performance of polar
codes. Unified descriptions of the SC, SCL, and SCS decoding algorithms are given as
path searching procedures on the code tree of polar codes. Combining the ideas
of SCL and SCS, a new decoding algorithm named successive cancellation hybrid
(SCH) is proposed, which can achieve a better trade-off between computational
complexity and space complexity. Further, to reduce the complexity, a pruning
technique is proposed to avoid unnecessary path searching operations.
Performance and complexity analyses based on simulations show that, with proper
configurations, all three improved successive cancellation (ISC) decoding
algorithms can achieve performance very close to that of maximum-likelihood (ML)
decoding with acceptable complexity. Moreover, with the help of the proposed
pruning technique, the complexities of ISC decoders can be very close to that
of the SC decoder in the moderate and high signal-to-noise ratio (SNR) regime.
Comment: This paper has been modified and submitted to IEEE Transactions on Communications
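The list-management step shared by these path-search decoders can be sketched as follows. This is a toy illustration only: the additive per-stage penalties stand in for a real path metric, and the function name `extend_paths`, the list size `L`, and the hard-coded penalty values are illustrative assumptions, not anything specified in the paper.

```python
# Minimal sketch of the path-search bookkeeping shared by SCL-style
# decoders: at each unfrozen bit, every surviving path forks into two
# extensions (bit = 0 or bit = 1) and only the L best metrics survive.
# The penalty function here is a stand-in, not any paper's exact metric.

def extend_paths(paths, penalties, L):
    """paths: list of (bit_sequence, metric); penalties: dict mapping
    candidate bit -> additive penalty for this decoding stage."""
    candidates = []
    for bits, metric in paths:
        for b in (0, 1):
            candidates.append((bits + (b,), metric + penalties[b]))
    # Keep the L paths with the smallest accumulated metric (best first).
    candidates.sort(key=lambda c: c[1])
    return candidates[:L]

paths = [((), 0.0)]
for stage_penalty in [{0: 0.1, 1: 2.0}, {0: 1.5, 1: 0.2}, {0: 0.3, 1: 0.4}]:
    paths = extend_paths(paths, stage_penalty, L=2)

best_bits, best_metric = paths[0]
print(best_bits, round(best_metric, 2))
```

An SCS decoder would keep the same candidates in a global priority stack instead of a fixed-size per-stage list; the hybrid (SCH) scheme trades between the two bookkeeping strategies.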
Sublinear Latency for Simplified Successive Cancellation Decoding of Polar Codes
This work analyzes the latency of the simplified successive cancellation
(SSC) decoding scheme for polar codes proposed by Alamdar-Yazdi and
Kschischang. It is shown that, unlike conventional successive cancellation
decoding, where latency is linear in the block length, the latency of SSC
decoding is sublinear. More specifically, the latency of SSC decoding is
O(N^{1-1/μ}), where N is the block length and μ is the scaling
exponent of the channel, which captures the speed of convergence of the rate to
capacity. Numerical results demonstrate the tightness of the bound and show
that most of the latency reduction arises from the parallel decoding of
subcodes of rate 0 or rate 1.
Comment: 20 pages, 6 figures, presented in part at ISIT 2020 and accepted in IEEE Transactions on Wireless Communications
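The source of the speed-up can be illustrated with a toy latency recursion: rate-0 (all-frozen) and rate-1 (all-information) subtrees are decoded in one shot instead of being traversed leaf by leaf. The one-step-per-operation accounting and the helper names below are illustrative assumptions, not the paper's exact timing model.

```python
# Toy latency model for simplified successive-cancellation (SSC) decoding:
# conventional SC visits every leaf (latency 2N - 2 under the usual
# one-operation-per-time-step model), while SSC decodes rate-0 (all-frozen)
# and rate-1 (all-information) subtrees in a single step.

def ssc_latency(frozen):
    """frozen: tuple of booleans, one per leaf of this subtree."""
    if all(frozen) or not any(frozen):
        return 1          # rate-0 or rate-1 node: pruned, one step
    half = len(frozen) // 2
    # One step for the f-messages, one for the g-messages, plus children.
    return 2 + ssc_latency(frozen[:half]) + ssc_latency(frozen[half:])

def sc_latency(n):
    return 2 * n - 2      # conventional SC over block length n

# Block length 8, rate 1/2: freeze the (typically less reliable) first half.
frozen = (True, True, True, True, False, False, False, False)
print(sc_latency(8), ssc_latency(frozen))
```

With this frozen pattern the whole tree collapses into one rate-0 and one rate-1 child, which is the extreme case of the pruning that drives the sublinear scaling.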
A Split-Reduced Successive Cancellation List Decoder for Polar Codes
This paper focuses on low complexity successive cancellation list (SCL)
decoding of polar codes. In particular, using the fact that splitting may be
unnecessary when the reliability of decoding the unfrozen bit is sufficiently
high, a novel splitting rule is proposed. Based on this rule, it is conjectured
that, if the correct path survives at some stage, it tends to survive till
termination without splitting with high probability. On the other hand, the
incorrect paths are more likely to split at the following stages. Motivated by
these observations, a simple counter that counts the successive number of
stages without splitting is introduced for each decoding path to facilitate the
identification of the correct and incorrect paths. Specifically, any path with a
counter value larger than a predefined threshold \omega is deemed to be the
correct path, which will survive at the decoding stage, while other paths with
counter value smaller than the threshold will be pruned, thereby reducing the
decoding complexity. Furthermore, it is proved that there exists a unique
unfrozen bit u_{N-K_1+1}, after which the successive cancellation decoder
achieves the same error performance as the maximum likelihood decoder if all
the prior unfrozen bits are correctly decoded, which enables further complexity
reduction. Simulation results demonstrate that the proposed low complexity SCL
decoder attains performance similar to that of the conventional SCL decoder,
while achieving substantial complexity reduction.
Comment: Accepted for publication in the IEEE Journal on Selected Areas in Communications - Special Issue on Recent Advances in Capacity Approaching Codes
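The counter-based pruning rule can be sketched in a few lines. Everything here is a hedged toy: the per-stage split decisions are hard-coded booleans rather than the output of a real splitting rule, and the function `prune_by_counter` is an illustrative name, not the paper's algorithm.

```python
# Sketch of the counter-based pruning rule: each path tracks how many
# consecutive bits it has decoded without splitting; once a path's counter
# exceeds a threshold omega, it is deemed the correct path and the other
# paths are pruned.  The split decisions below are hard-coded for
# illustration only.

def prune_by_counter(split_history, omega):
    """split_history: {path_id: list of booleans, True = path split there}.
    Returns (ids deemed correct, stage at which pruning fired)."""
    counters = {pid: 0 for pid in split_history}
    n_stages = len(next(iter(split_history.values())))
    for stage in range(n_stages):
        for pid, splits in split_history.items():
            counters[pid] = 0 if splits[stage] else counters[pid] + 1
        locked = [pid for pid, c in counters.items() if c > omega]
        if locked:
            return locked, stage  # prune everyone else at this stage
    return list(split_history), n_stages - 1

# Path 0 stops splitting early (behaves like the correct path);
# paths 1 and 2 keep splitting at later stages.
history = {
    0: [True, False, False, False, False],
    1: [True, False, True, False, True],
    2: [False, True, False, True, False],
}
survivors, stage = prune_by_counter(history, omega=2)
print(survivors, stage)
```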
Parallelism versus Latency in Simplified Successive-Cancellation Decoding of Polar Codes
This paper characterizes the latency of the simplified
successive-cancellation (SSC) decoding scheme for polar codes under hardware
resource constraints. In particular, when the number of processing elements
that can perform SSC decoding operations in parallel is limited, as is the case
in practice, the latency of SSC decoding is
, where is
the block length of the code and is the scaling exponent of the channel.
Three direct consequences of this bound are presented. First, in a
fully-parallel implementation where , the latency of SSC
decoding is , which is sublinear in the block
length. This recovers a result from our earlier work. Second, in a fully-serial
implementation where , the latency of SSC decoding scales as
. The multiplicative constant is also
calculated: we show that the latency of SSC decoding when is given by
. Third, in a semi-parallel
implementation, the smallest that gives the same latency as that of the
fully-parallel implementation is . The tightness of our bound on
SSC decoding latency and the applicability of the foregoing results is
validated through extensive simulations
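The effect of the resource constraint can be illustrated with a toy recursion: a node of size n needs n/2 f-operations and n/2 g-operations, and with only P processing elements each batch takes ⌈(n/2)/P⌉ time steps. The accounting and the frozen pattern below are illustrative assumptions, not the paper's hardware-accurate model.

```python
import math

# Toy accounting for SSC decoding latency with only P processing elements:
# a node of size n needs n/2 f-operations and n/2 g-operations, and each
# batch takes ceil((n/2) / P) time steps.  Rate-0 / rate-1 nodes still
# finish in one step.  Constants are illustrative, not the paper's bound.

def ssc_latency_p(frozen, P):
    n = len(frozen)
    if all(frozen) or not any(frozen):
        return 1                  # pruned rate-0 or rate-1 node
    half = n // 2
    batch = math.ceil(half / P)   # time steps per f- or g-batch
    return (2 * batch + ssc_latency_p(frozen[:half], P)
            + ssc_latency_p(frozen[half:], P))

frozen = (True,) * 8 + (True, False, False, False) + (False,) * 4
for P in (1, 2, 8):               # fully serial ... highly parallel
    print(P, ssc_latency_p(frozen, P))
```

Shrinking P inflates only the batch terms, which is why beyond a certain point (P on the order of N^{1/μ} in the paper's analysis) extra parallelism stops helping.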
Hardware implementation aspects of polar decoders and ultra high-speed LDPC decoders
The goal of channel coding is to detect and correct errors that appear during the transmission of information. In the past few decades, channel coding has become an integral part of most communications standards as it improves the energy-efficiency of transceivers manyfold while only requiring a modest investment in terms of the required digital signal processing capabilities. The most commonly used channel codes in modern standards are low-density parity-check (LDPC) codes and Turbo codes, which were the first two types of codes to approach the capacity of several channels while still being practically implementable in hardware. The decoding algorithms for LDPC codes, in particular, are highly parallelizable and suitable for high-throughput applications. A new class of channel codes, called polar codes, was introduced recently. Polar codes have an explicit construction and low-complexity encoding and successive cancellation (SC) decoding algorithms. Moreover, polar codes are provably capacity achieving over a wide range of channels, making them very attractive from a theoretical perspective. Unfortunately, polar codes under standard SC decoding cannot compete with the LDPC and Turbo codes that are used in current standards in terms of their error-correcting performance. For this reason, several improved SC-based decoding algorithms have been introduced. The most prominent SC-based decoding algorithm is the successive cancellation list (SCL) decoding algorithm, which is powerful enough to approach the error-correcting performance of LDPC codes. The original SCL decoding algorithm was described in an arithmetic domain that is not well-suited for hardware implementations, and it is not clear how an efficient SCL decoder architecture can be implemented from that description.
To this end, in this thesis, we re-formulate the SCL decoding algorithm in two distinct arithmetic domains, we describe efficient hardware architectures to implement the resulting SCL decoders, and we compare the decoders with existing LDPC and Turbo decoders in terms of their error-correcting performance and their implementation efficiency. Due to the ongoing technology scaling, the feature sizes of integrated circuits keep shrinking at a remarkable pace. As transistors and memory cells keep shrinking, it becomes increasingly difficult and costly (in terms of both area and power) to ensure that the implemented digital circuits always operate correctly. Thus, manufactured digital signal processing circuits, including channel decoder circuits, may not always operate correctly. Instead of discarding these faulty dies or using costly circuit-level fault mitigation mechanisms, an alternative approach is to try to live with certain malfunctions, provided that the algorithm implemented by the circuit is sufficiently fault-tolerant. In this spirit, in this thesis we examine decoding of polar codes and LDPC codes under the assumption that the memories that are used within the decoders are not fully reliable. We show that, in both cases, there is inherent fault-tolerance and we also propose some methods to reduce the effect of memory faults on the error-correcting performance of the considered decoders.
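One of the arithmetic domains used for hardware-friendly SCL decoding in the literature is the log-likelihood-ratio (LLR) domain, where the well-known path metric update is PM += ln(1 + exp(-(1 - 2u) * L)), together with a hardware-friendly approximation. The sketch below shows that standard formulation from the SCL literature; it is not necessarily the exact variant developed in this thesis.

```python
import math

# LLR-domain path metric from the SCL literature: exact update
# PM += ln(1 + exp(-(1 - 2u) * L)); hardware-friendly approximation
# PM += |L| when the decoded bit u disagrees with the hard decision
# on the LLR L, else PM += 0.

def pm_update_exact(pm, llr, u):
    return pm + math.log1p(math.exp(-(1 - 2 * u) * llr))

def pm_update_approx(pm, llr, u):
    hard = 0 if llr >= 0 else 1            # hard decision on the LLR
    return pm + (abs(llr) if u != hard else 0.0)

# For a confident LLR the two updates nearly coincide.
print(round(pm_update_exact(0.0, 6.0, 1), 4), pm_update_approx(0.0, 6.0, 1))
```

The approximation avoids transcendental functions entirely, which is what makes the LLR formulation attractive for fixed-point hardware.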
Complexity and second moment of the mathematical theory of communication
The performance of an error correcting code is evaluated by its block error probability, code rate, and encoding and decoding complexity. The performance of a series of codes is evaluated by whether, as the block lengths approach infinity, their block error probabilities decay to zero, their code rates converge to the channel capacity, and their growth in complexity stays under control.
Over any discrete memoryless channel, I build codes such that: for one, their block error probabilities and code rates scale like random codes'; and for two, their encoding and decoding complexities scale like polar codes'. Quantitatively, for any constants π, ρ > 0 such that π + 2ρ < 1, I construct a series of error correcting codes with block length N approaching infinity, block error probability exp(-N^π), code rate N^{-ρ} less than the channel capacity, and encoding and decoding complexity O(N log N) per code block.
Over any discrete memoryless channel, I also build codes such that: for one, they achieve channel capacity rapidly; and for two, their encoding and decoding complexities outperform all known codes over non-BEC channels. Quantitatively, for any constants π, ρ > 0 such that 2ρ < 1, I construct a series of error correcting codes with block length N approaching infinity, block error probability exp(-(log N)^π), code rate N^{-ρ} less than the channel capacity, and encoding and decoding complexity O(N log(log N)) per code block.
The two aforementioned results are built upon two pillars: a versatile framework that generates codes on the basis of channel polarization, and a calculus-probability machinery that evaluates the performances of codes.
The framework that generates codes and the machinery that evaluates codes can be extended to many other scenarios in network information theory. To name a few: lossless compression with side information, lossy compression, the Slepian-Wolf problem, the Wyner-Ziv problem, the multiple access channel, the wiretap channel of type I, and the broadcast channel. In each scenario, the adapted notions of block error probability and code rate approach their limits at the same paces as specified above.
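The channel-polarization framework underlying these constructions is easiest to see numerically on the binary erasure channel (BEC), where one polarization step maps an erasure probability z to the pair (2z - z², z²). The threshold 10⁻³ below is an arbitrary illustrative cutoff for "polarized".

```python
# Numerical illustration of channel polarization: over a BEC with erasure
# probability z, one polarization step yields a "minus" channel with
# erasure probability 2z - z^2 (worse) and a "plus" channel with z^2
# (better).  Iterating drives almost all synthetic channels toward
# erasure probability 0 or 1 while preserving the average (the capacity).

def polarize(erasures):
    out = []
    for z in erasures:
        out.append(2 * z - z * z)   # "minus" channel: worse
        out.append(z * z)           # "plus" channel: better
    return out

channels = [0.5]
for _ in range(10):                 # 2**10 = 1024 synthetic channels
    channels = polarize(channels)

polarized = sum(1 for z in channels if z < 1e-3 or z > 1 - 1e-3)
print(len(channels), polarized / len(channels))
```

The average erasure probability stays exactly 1/2 at every step, while the individual values split toward 0 and 1; code construction then amounts to sending information on the near-0 channels.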
Lossless data compression with polar codes
Ankara: The Department of Electrical and Electronics Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Master's) -- Bilkent University, 2013. Includes bibliographical references (leaves 60-62).
In this study, lossless polar compression schemes are proposed for finite source
alphabets in the noiseless setting. In the first part, lossless polar source coding
scheme for binary memoryless sources introduced by Arıkan is extended to general
prime-size alphabets. In addition to the conventional successive cancellation
decoding (SC-D), successive cancellation list decoding (SCL-D) is utilized for improved
performance at practical block-lengths. For code construction, the greedy approximation
method for density evolution proposed by Tal and Vardy is adapted
to non-binary alphabets. In the second part, a variable-length, zero-error polar
compression scheme for prime-size alphabets based on the work of Cronie and Korada
is developed. It is shown numerically that this scheme provides rates close
to minimum source coding rate at practical block-lengths under SC-D, while
achieving the minimum source coding rate asymptotically in the block-length.
For improved performance at practical block-lengths, a scheme based on SCL-D
is developed. The proposed schemes are generalized to arbitrary finite source
alphabets by using a multi-level approach. For practical applications, robustness
of the zero-error source coding scheme with respect to uncertainty in source distribution
is investigated. Based on this robustness investigation, it is shown that
a class of prebuilt information sets can be used at practical block-lengths instead
of constructing a specific information set for every source distribution. Since the
compression schemes proposed in this thesis are not universal, probability distribution
of a source must be known at the receiver for reconstruction. In the
presence of source uncertainty, this requires the transmitter to inform the receiver
about the source distribution. As a solution to this problem, a sequential quantization
with scaling algorithm is proposed to transmit the probability distribution
of the source together with the compressed word in an efficient way.Ăaycı, SemihM.S