On the Construction and Decoding of Concatenated Polar Codes
A scheme for concatenating the recently invented polar codes with interleaved
block codes is considered. By concatenating binary polar codes with interleaved
Reed-Solomon codes, we prove that the proposed concatenation scheme captures
the capacity-achieving property of polar codes, while having a significantly
better error-decay rate. We show that for any \eps < 1, and total frame
length N, the parameters of the scheme can be set such that the frame error
probability is less than 2^{-N^{1-\eps}}, while the scheme is still
capacity achieving. This improves upon 2^{-N^{0.5-\eps}}, the frame error
probability of Arikan's polar codes. We also propose decoding algorithms for
concatenated polar codes, which significantly improve the error-rate
performance at finite block lengths while preserving the low decoding
complexity.
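The two decay rates contrasted in the abstract can be compared numerically. A minimal sketch, assuming the concatenated scheme's bound takes the form 2^{-N^{1-\eps}} (the natural counterpart to Arikan's 2^{-N^{0.5-\eps}} quoted above); the function names are illustrative only:

```python
# Toy comparison of frame-error-probability exponents (log2 scale).
# Assumption: the concatenated bound is 2**(-N**(1 - eps)); the
# stand-alone polar bound 2**(-N**(0.5 - eps)) is quoted in the abstract.

def log2_fep_polar(N, eps=0.1):
    """log2 of Arikan's frame-error bound 2**(-N**(0.5 - eps))."""
    return -(N ** (0.5 - eps))

def log2_fep_concat(N, eps=0.1):
    """log2 of the assumed concatenated bound 2**(-N**(1 - eps))."""
    return -(N ** (1.0 - eps))

for n in (10, 16, 20):
    N = 2 ** n
    print(f"N = 2^{n:2d}: polar bound ~ 2^{log2_fep_polar(N):.0f}, "
          f"concatenated bound ~ 2^{log2_fep_concat(N):.0f}")
```

At N = 2^20 the exponent improves from roughly N^0.4 = 256 to N^0.9 = 262144, which is the sense in which the error-decay rate is "significantly better."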
An efficient length- and rate-preserving concatenation of polar and repetition codes
We improve the method in \cite{Seidl:10} for increasing the finite-length
performance of polar codes by protecting specific, less reliable symbols with
simple outer repetition codes. Decoding of the scheme integrates easily in the
known successive decoding algorithms for polar codes. Overall rate and block
length remain unchanged, and the decoding complexity is at most doubled. A
comparison to related methods for performance improvement of polar codes is
drawn.
Comment: to be presented at International Zurich Seminar (IZS) 201
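As a toy illustration of the construction idea (not necessarily the exact pairing rule of the cited scheme), one can copy each of the least reliable information bits into the most reliable position that would otherwise be frozen, which leaves both the code rate and the block length unchanged; all names below are illustrative:

```python
def repetition_assignment(z, info_set, num_protected):
    """Toy construction sketch (an assumption, not the paper's exact rule):
    copy each of the `num_protected` least reliable info bits into the
    most reliable currently-frozen position. Info count, rate, and block
    length are all unchanged; the weak bits gain a repetition copy.

    z: per-position reliability metric, e.g. Bhattacharyya parameters
       (larger = less reliable)
    info_set: set of information-bit positions
    returns dict {frozen_position: info_position_it_repeats}
    """
    frozen = [i for i in range(len(z)) if i not in info_set]
    # least reliable info positions first (largest z)
    weakest_info = sorted(info_set, key=lambda i: z[i], reverse=True)
    # most reliable frozen positions first (smallest z)
    best_frozen = sorted(frozen, key=lambda i: z[i])
    return dict(zip(best_frozen[:num_protected], weakest_info[:num_protected]))

# Example: positions 2 and 3 carry info; the weak info bit 2 (z = 0.5)
# gets repeated in the best frozen position 1 (z = 0.6).
pairing = repetition_assignment([0.9, 0.6, 0.5, 0.1], {2, 3}, 1)
```

Because the copies occupy positions that carried no information before, the overall rate and length stay fixed, matching the property claimed in the abstract.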
Scalable Successive-Cancellation Hardware Decoder for Polar Codes
Polar codes, discovered by Ar{\i}kan, are the first error-correcting codes
with an explicit construction to provably achieve channel capacity,
asymptotically. However, their error-correction performance at finite lengths
tends to be lower than that of existing capacity-approaching schemes. Using the
successive-cancellation algorithm, polar decoders can be designed for very long
codes, with low hardware complexity, leveraging the regular structure of such
codes. We present an architecture and an implementation of a scalable hardware
decoder based on this algorithm. This design is shown to scale to code lengths
of up to N = 2^20 on an Altera Stratix IV FPGA, limited almost exclusively by
the amount of available SRAM.
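The paper's contribution is the hardware architecture itself, but the recursion it implements is easy to state in software. A minimal sketch of successive-cancellation decoding over the binary erasure channel (erasures as `None`, frozen bits assumed zero; the function names are ours, not the paper's):

```python
def polar_encode(u):
    """Recursive polar transform: x = (enc(u1) XOR enc(u2), enc(u2))."""
    N = len(u)
    if N == 1:
        return u[:]
    left = polar_encode(u[:N // 2])
    right = polar_encode(u[N // 2:])
    return [a ^ b for a, b in zip(left, right)] + right

def f(a, b):
    """Check-node update on the BEC: erased if either input is erased."""
    return None if a is None or b is None else a ^ b

def g(a, b, s):
    """Variable-node update given the re-encoded partial sum s."""
    if b is not None:
        return b
    if a is not None:
        return a ^ s
    return None

def sc_decode(y, frozen):
    """Successive-cancellation decoding; y holds 0/1 or None (erasure)."""
    N = len(y)
    if N == 1:
        if frozen[0]:
            return [0]
        return [y[0] if y[0] is not None else 0]  # guess on unresolved erasure
    h = N // 2
    u_left = sc_decode([f(y[i], y[i + h]) for i in range(h)], frozen[:h])
    s = polar_encode(u_left)  # partial sums feeding the g updates
    u_right = sc_decode([g(y[i], y[i + h], s[i]) for i in range(h)], frozen[h:])
    return u_left + u_right
```

The regular butterfly schedule of the f/g updates is the "regular structure" the abstract refers to, and it is what lets a hardware decoder scale to very long codes with memory, rather than logic, as the limiting resource.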
Bhattacharyya parameter of monomials codes for the Binary Erasure Channel: from pointwise to average reliability
Monomial codes were recently equipped with partial order relations, a fact that
allowed researchers to discover structural properties and efficient algorithms
for constructing polar codes. Here, we refine the existing order relations in
the particular case of the Binary Erasure Channel. The new order relation takes us
closer to the ultimate order relation induced by the pointwise evaluation of
the Bhattacharyya parameter of the synthetic channels. The best we can hope for
is still a partial order relation. To overcome this issue we appeal to a related
technique from network theory. Reliability network theory was recently used in
the context of polar coding and more generally in connection with decreasing
monomial codes. In this article, we investigate how the concept of average
reliability applies to polar codes designed for the binary erasure channel.
Instead of minimizing the error probability of the synthetic channels, for a
particular value of the erasure parameter p, our codes minimize the average
error probability of the synthetic channels. By means of basic network theory
results we determine a closed formula for the average reliability of a
particular synthetic channel that has recently gained the attention of researchers.
Comment: 21 pages, 5 figures, 3 tables. Submitted for possible publication
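For the BEC, the Bhattacharyya parameter of a synthetic channel equals its erasure probability and follows the standard polarization recursion, so both the pointwise quantity and an average over the erasure parameter p are easy to evaluate numerically. A minimal sketch (plain recursion plus numerical averaging, not the paper's closed formula; the names are ours):

```python
def bec_bhattacharyya(n, p):
    """Erasure probabilities of the 2**n synthetic channels of a BEC(p),
    via the polarization recursion Z- = 2Z - Z**2, Z+ = Z**2."""
    z = [p]
    for _ in range(n):
        z = [w for x in z for w in (2 * x - x * x, x * x)]
    return z

def average_reliability(n, i, steps=1000):
    """Average the i-th synthetic channel's erasure probability over the
    erasure parameter p, uniformly on [0, 1] (midpoint rule)."""
    total = 0.0
    for k in range(steps):
        p = (k + 0.5) / steps
        total += bec_bhattacharyya(n, p)[i]
    return total / steps
```

For instance, the fully polarized "plus" channel has Z(p) = p^(2^n), whose average over p in [0, 1] is 1/(2^n + 1); the mean of all 2^n synthetic erasure probabilities at a fixed p remains p, since polarization preserves the total erasure rate.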