Low Complexity Belief Propagation Polar Code Decoders
Since their invention, polar codes have received considerable attention because of
their capacity-achieving performance and low encoding and decoding complexity.
Successive cancellation decoding (SCD) and belief propagation decoding (BPD)
are two of the most popular approaches for decoding polar codes. SCD is able to
achieve good error-correcting performance and is less computationally expensive
than BPD. However, SCD suffers from long latency and low throughput
due to the serial nature of the successive cancellation algorithm. BPD is
parallel in nature and hence is more attractive for high throughput
applications. However, since BPD is iterative, its latency and
energy dissipation increase linearly with the number of iterations. In this
work, we borrow the idea of SCD and propose a novel scheme based on
sub-factor-graph freezing to reduce the average number of computations as well
as the average number of iterations required by BPD, which directly translates
into lower latency and energy dissipation. Simulation results show that the
proposed scheme has no performance degradation and achieves significant
reduction in computational complexity over existing methods.

Comment: 6 pages
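The latency saving described above comes from terminating the iterative decoder as soon as a valid decision is reached. The sketch below illustrates the generic idea with a standard validity-based stopping rule (re-encode the estimated message and compare with the estimated codeword); this is not the paper's sub-factor-graph freezing scheme, and the `decode_iteration` callback is a hypothetical interface standing in for one BP sweep over the factor graph.

```python
import numpy as np

def polar_transform(u):
    """Arikan polar transform x = u * F^{(n)} over GF(2), via butterflies."""
    x = u.copy()
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]
        step *= 2
    return x

def bp_decode_early_stop(decode_iteration, llr_channel, max_iters=60):
    """
    Generic early-stopped iterative decoding loop (hypothetical interface).
    `decode_iteration(llr)` performs one BP sweep and returns hard decisions
    (u_hat, x_hat). We stop as soon as re-encoding u_hat reproduces x_hat --
    a common validity-based criterion that cuts the average iteration count,
    illustrating the same latency/energy idea as the paper.
    """
    for it in range(1, max_iters + 1):
        u_hat, x_hat = decode_iteration(llr_channel)
        if np.array_equal(polar_transform(u_hat), x_hat):
            return u_hat, it          # converged early: stop iterating
    return u_hat, max_iters
```

Because the transform is its own inverse over GF(2), the validity check costs only one extra encoding pass per iteration, far cheaper than running the remaining BP sweeps.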
Comparison of Polar Decoders with Existing Low-Density Parity-Check and Turbo Decoders
Polar codes are a recently proposed family of provably capacity-achieving
error-correction codes that have received significant attention. While their
theoretical properties render them interesting, their practicality compared to
other types of codes has not been thoroughly studied. Towards this end, in this
paper, we perform a comparison of polar decoders against LDPC and Turbo
decoders that are used in existing communications standards. More specifically,
we compare both the error-correction performance and the hardware efficiency of
the corresponding hardware implementations. This comparison enables us to
identify applications where polar codes are superior to existing
error-correction coding solutions as well as to determine the most promising
research direction in terms of the hardware implementation of polar decoders.

Comment: Fixes small mistakes from the paper to appear in the proceedings of
IEEE WCNC 2017. Results were presented in the "Polar Coding in Wireless
Communications: Theory and Implementation" Workshop.
Approximate MIMO Iterative Processing with Adjustable Complexity Requirements
Targeting always the best achievable bit error rate (BER) performance in
iterative receivers operating over multiple-input multiple-output (MIMO)
channels may result in significant waste of resources, especially when the
achievable BER is orders of magnitude better than the target performance (e.g.,
under good channel conditions and at high signal-to-noise ratio (SNR)). In
contrast to the typical iterative schemes, a practical iterative decoding
framework that approximates the soft-information exchange is proposed which
allows reduced complexity sphere and channel decoding, adjustable to the
transmission conditions and the required bit error rate. With the proposed
approximate soft-information exchange, the performance of exact
soft-information exchange can still be reached with significant complexity gains.

Comment: The final version of this paper appears in IEEE Transactions on
Vehicular Technology.
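One simple way to make soft-information exchange approximate and adjustable, in the spirit of the abstract above, is to clip and coarsely quantize the LLRs passed between the MIMO detector and the channel decoder. The sketch below is a generic illustration only; the paper's specific approximation is not described in the abstract, and the `clip`/`step` knobs are assumptions standing in for its complexity controls.

```python
import numpy as np

def approximate_llrs(llrs, clip, step):
    """
    Approximate soft-information exchange: bound LLR magnitudes and
    quantize them uniformly. Coarser settings (smaller `clip`, larger
    `step`) reduce the work downstream components must do; finer
    settings approach exact soft-information exchange.
    """
    clipped = np.clip(llrs, -clip, clip)      # bound the magnitude
    return np.round(clipped / step) * step    # uniform quantization
```

In an adjustable-complexity receiver, the resolution would be tightened only when the channel conditions or the target BER actually require it.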
Turbo Decoder with early stopping criteria
The turbo code used in the 3GPP Long Term Evolution (LTE) standard was chosen specifically to simplify parallel turbo decoding and thus achieve higher throughputs. The higher data rates, however, lead to increased computational complexity and thus higher power and energy consumption in the decoder. This report presents a turbo decoder for the LTE standard with a stopping criterion aimed at reducing the power and energy consumption of the decoder. The decoder can be configured to use 1, 2, 4, 8 or 16 MAP decoders in parallel, achieving a throughput of 110 Mb/s for 7 iterations when running at a clock frequency of 200 MHz. The decoder was synthesised with 65 nm low-power libraries with an area of 1.6 mm². Post-synthesis simulations show that the stopping criterion can lead to significantly lower energy consumption with no performance loss.

The cellular market is constantly growing, with more users every day. Today, smartphones and tablets are common commodities, able to stream both music and high-definition video. The increasing number of users, in combination with increasing data-rate requirements, puts high demands on mobile operators' networks. However, the frequency spectrum is crowded with competing technologies, so the available bandwidth is scarce. Ensuring reliable communication and efficient use of the available resources is thus vital.
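The abstract does not specify which stopping criterion the report uses, so the sketch below shows one common choice: the hard-decision-aided (HDA) rule, which halts when the hard decisions of two consecutive iterations agree. The `turbo_iteration` callback is a hypothetical stand-in for one full iteration through both MAP decoders.

```python
import numpy as np

def turbo_decode_hda(turbo_iteration, llr_init, max_iters=7):
    """
    Iterative turbo decoding with a hard-decision-aided (HDA) early
    stopping criterion: stop when the hard decisions from two
    consecutive iterations are identical. `turbo_iteration(llr)` is a
    hypothetical callback that runs one full turbo iteration (both
    constituent MAP decoders) and returns the updated LLRs.
    """
    prev_bits = None
    llr = llr_init
    for it in range(1, max_iters + 1):
        llr = turbo_iteration(llr)
        bits = (llr < 0).astype(int)      # hard decisions from LLRs
        if prev_bits is not None and np.array_equal(bits, prev_bits):
            return bits, it               # decisions stable: stop early
        prev_bits = bits
    return bits, max_iters
```

At high SNR most frames converge well before the iteration cap, which is exactly where such a criterion saves energy without affecting the error-rate performance.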
Improving Network-on-Chip-based Turbo Decoder Architectures
In this work, novel results concerning Network-on-Chip-based turbo decoder architectures are presented. Stemming from previous publications, this work first concentrates on improving the throughput by exploiting adaptive-bandwidth-reduction techniques. In the best case, this technique shows an improvement of more than 60 Mb/s. Moreover, it is known that double-binary turbo decoders require more area than binary ones. This characteristic has the negative effect of increasing the data width of the network nodes. Thus, the second contribution of this work is to reduce the network complexity needed to support double-binary codes by exploiting bit-level and pseudo-floating-point representations of the extrinsic information. These two techniques allow for an area reduction of more than 40% with a performance degradation of about 0.2 dB.
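The pseudo-floating-point idea mentioned above can be illustrated with a small sketch: instead of carrying a wide fixed-point extrinsic value across the network, keep a short mantissa plus a small exponent. The exact format used in the paper is not given in the abstract; the field widths below are illustrative assumptions.

```python
def to_pseudo_float(value, mant_bits=4, exp_bits=3):
    """
    Pseudo-floating-point quantization of an extrinsic value: a short
    mantissa plus a small exponent replaces a wide fixed-point word,
    reducing the data width carried by the NoC. Generic illustration
    of the idea, not the paper's exact format.
    """
    sign = -1 if value < 0 else 1
    mag = abs(int(round(value)))
    max_mant = (1 << mant_bits) - 1
    max_exp = (1 << exp_bits) - 1
    exp = 0
    while mag > max_mant and exp < max_exp:
        mag >>= 1                 # drop LSBs, remember the shift
        exp += 1
    return sign, mag, exp

def from_pseudo_float(sign, mant, exp):
    """Reconstruct the (approximate) extrinsic value."""
    return sign * (mant << exp)
```

With 4 mantissa bits and 3 exponent bits, each extrinsic value travels as 8 bits (including sign) at the cost of a bounded relative error, which is what trades a fraction of a dB of performance for the reported area reduction.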
Design and implementation of a near maximum likelihood decoder for Cortex codes
The Cortex codes form an emerging family among the rate-1/2 self-dual systematic linear block codes with good distance properties. This paper investigates the challenging issue of designing an efficient Maximum Likelihood (ML) decoder for Cortex codes. It first reviews a dedicated architecture that takes advantage of the particular structure of this code to simplify the decoding. We then propose a technique to improve the architecture through the generation of an optimal list of binary vectors. An optimal stopping criterion is also proposed. Simulation results show that the proposed architecture achieves an excellent performance/complexity trade-off for short Cortex codes. The proposed decoder architecture has been implemented on an FPGA device for the (24,12,8) Cortex code. This implementation supports an information throughput of 225 Mb/s. At a signal-to-noise ratio Eb/N0 = 8 dB, the bit error rate equals 2 × 10^−10, which is close to the performance of the Maximum Likelihood decoder.