
    Density Evolution and Functional Threshold for the Noisy Min-Sum Decoder

    This paper investigates the behavior of the Min-Sum decoder running on noisy devices. The aim is to evaluate the robustness of the decoder in the presence of computation noise, e.g. due to faulty logic in the processing units, which represents a new source of errors that may occur during the decoding process. To this end, we first introduce probabilistic models for the arithmetic and logic units of the finite-precision Min-Sum decoder, and then carry out the density evolution analysis of the noisy Min-Sum decoder. We show that in some particular cases the noise introduced by the device can help the Min-Sum decoder escape from fixed-point attractors, and may actually result in an increased correction capacity with respect to the noiseless decoder. We also reveal the existence of a specific threshold phenomenon, referred to as the functional threshold. The behavior of the noisy decoder is demonstrated in the asymptotic limit of the code length -- by using "noisy" density evolution equations -- and is also verified in the finite-length case by Monte-Carlo simulation. Comment: 46 pages (draft version); extended version of the paper with the same title, submitted to IEEE Transactions on Communications
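    To make the setting concrete, below is a minimal sketch of a quantized Min-Sum check-node update with computation noise injected as independent bit flips on the output register. The bit-flip error model, the 5-bit saturation and all names here are illustrative assumptions, not the paper's exact device models.

```python
import random

def noisy_minsum_check(msgs, p_flip=0.01, bits=5):
    """One Min-Sum check-node update on quantized integer LLR messages
    (the incoming messages from the other variable nodes), followed by
    a toy computation-noise model: each bit of the two's-complement
    output register flips independently with probability p_flip.
    Both the bit-flip model and the saturation width are assumptions."""
    qmax = 2 ** (bits - 1) - 1
    sign = 1
    for m in msgs:
        if m < 0:
            sign = -sign
    mag = min(abs(m) for m in msgs)            # min-sum magnitude rule
    out = max(-qmax, min(qmax, sign * mag))    # saturate to the bit width
    u = out & ((1 << bits) - 1)                # encode as two's complement
    for b in range(bits):                      # inject computation noise
        if random.random() < p_flip:
            u ^= 1 << b
    if u >= 1 << (bits - 1):                   # decode back to a signed value
        u -= 1 << bits
    return u

# Example: a noiseless run (p_flip=0) returns sign(product) * min(|msgs|).
print(noisy_minsum_check([-3, 7, 2], p_flip=0.0))   # -> -2
```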

    System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing

    This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications. Hardware implementation of Low-Density Parity-Check (LDPC) decoders is approached at both the algorithmic and the architectural level. LDPC codes are a promising coding scheme for future communication standards due to their outstanding error correction performance. This work proposes a methodology for analyzing the effects of finite-precision arithmetic on error correction performance and hardware complexity, and the methodology is employed throughout the co-design of the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^{-8} and complexity savings of up to 59% with respect to other works in the recent literature.

    High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed that requires as little as 20% of the memory needed by traditional two-phase flooding decoding. Additionally, throughput is doubled and logic complexity is reduced by 12%. These advantages are traded off against error correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings on the order of 40% in both area and power consumption, while the implementation loss is smaller than 0.05 dB.

    Most modern communication standards employ Orthogonal Frequency Division Multiplexing (OFDM) as part of their physical layer. The core of OFDM is the Fast Fourier Transform (FFT) and its inverse, in charge of symbol (de)modulation. Requirements on throughput and energy efficiency call for hardware FFT implementations, while the ubiquity of the FFT suggests the design of parametric, re-configurable and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited to the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point), using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies encompassing all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and for commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size and numerical accuracy).

    The final part of this dissertation focuses on the Network-on-Chip (NoC) design paradigm, whose goal is building scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link relaxes the skew constraints in clock tree synthesis and enables frequency speed-up, power consumption reduction and faster back-end turnarounds. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption. Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an Object-Oriented framework, integrated within a commercial tool for IP packaging (Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
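    As a rough illustration of the accuracy-driven configuration idea, the sketch below searches for the smallest operand bit width whose simulated SQNR meets a user-supplied accuracy budget. It quantizes only the FFT input rather than every internal butterfly stage and ignores the block floating-point modes, so it is a deliberately crude stand-in for the compiler's closed-loop engine; all names and parameters are assumptions.

```python
import numpy as np

def quantize(x, bits):
    """Toy uniform fixed-point quantizer for a complex signal with the
    given total bit width (one sign bit, the rest fractional)."""
    scale = 2.0 ** (bits - 1)
    q = np.round(x * scale) / scale
    return (np.clip(q.real, -1.0, 1.0 - 1.0 / scale)
            + 1j * np.clip(q.imag, -1.0, 1.0 - 1.0 / scale))

def sqnr_db(ref, test):
    err = ref - test
    return 10.0 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(err) ** 2))

def min_bitwidth_for_budget(n=256, budget_db=40.0, trials=20, seed=0):
    """Smallest bit width whose average SQNR meets the accuracy budget.
    Only the FFT input is quantized here, not each internal stage, so
    this merely mimics the closed-loop structure of the real engine."""
    rng = np.random.default_rng(seed)
    for bits in range(6, 25):
        sqnrs = []
        for _ in range(trials):
            x = rng.uniform(-0.5, 0.5, n) + 1j * rng.uniform(-0.5, 0.5, n)
            sqnrs.append(sqnr_db(np.fft.fft(x), np.fft.fft(quantize(x, bits))))
        if np.mean(sqnrs) >= budget_db:
            return bits
    return None

print(min_bitwidth_for_budget(budget_db=40.0))
```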

    Decoding of Non-Binary LDPC Codes Using the Information Bottleneck Method

    Recently, a novel lookup table based decoding method for binary low-density parity-check codes has attracted considerable attention. In this approach, mutual-information maximizing lookup tables replace the conventional operations of the variable nodes and the check nodes in message passing decoding. Moreover, the exchanged messages are represented by integers with very small bit widths. A machine learning framework termed the information bottleneck method is used to design the corresponding lookup tables. In this paper, we extend this decoding principle from binary to non-binary codes. This is not a straightforward extension, but requires a more sophisticated lookup table design to cope with the arithmetic in higher-order Galois fields. Bit error rate simulations show that our proposed scheme outperforms the log-max decoding algorithm and operates close to sum-product decoding. Comment: This paper has been presented at the IEEE International Conference on Communications (ICC'19) in Shanghai
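    To illustrate the lookup-table decoding structure (though not the information bottleneck design itself), the sketch below builds a two-input check-node table over a 3-bit message alphabet by quantizing the exact boxplus rule, and folds it over the incoming messages. A genuine IB design would instead choose table entries to maximize mutual information with the coded bit; the alphabet, labeling and representative LLR levels are assumptions.

```python
import numpy as np

# Messages are 3-bit integers 0..7 standing for quantized LLRs; the
# labeling and the representative LLR levels below are assumptions.
LEVELS = np.array([-3.5, -2.0, -1.0, -0.3, 0.3, 1.0, 2.0, 3.5])

def boxplus(a, b):
    """Exact two-input check-node (boxplus) rule on real LLRs."""
    t = np.tanh(a / 2.0) * np.tanh(b / 2.0)
    return 2.0 * np.arctanh(np.clip(t, -0.999999, 0.999999))

def nearest_level(x):
    return int(np.argmin(np.abs(LEVELS - x)))

# Two-input check-node lookup table: quantize boxplus back onto the
# 3-bit alphabet. An information bottleneck design would choose the
# entries to maximize mutual information with the coded bit; this
# quantized table only mimics the structure of such decoders.
CHECK_LUT = [[nearest_level(boxplus(LEVELS[i], LEVELS[j]))
              for j in range(8)] for i in range(8)]

def check_node(msgs):
    """Fold the two-input table over all incoming integer messages,
    as LUT-based decoders do to compose multi-input node updates."""
    out = msgs[0]
    for m in msgs[1:]:
        out = CHECK_LUT[out][m]
    return out

print(check_node([0, 7, 6]))   # -> 1, i.e. a negative, still reliable level
```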

    Decomposition Methods for Large Scale LP Decoding

    When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode error-correcting codes at bit-error rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding, when implemented with standard LP solvers, does not easily scale to the block lengths of modern error correcting codes. In this paper we draw on decomposition methods from optimization theory, specifically the Alternating Direction Method of Multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a "two-slice" characterization of the geometry of the parity polytope, which is the convex hull of all codewords of a single parity check code. This new characterization simplifies the representation of points in the polytope. Using this simplification, we develop an efficient algorithm for Euclidean norm projection onto the parity polytope. This projection is required by ADMM and allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error correcting codes efficiently. We present numerical results for LDPC codes of block lengths greater than 1000. The waterfall region of LP decoding is seen to initiate at a slightly higher signal-to-noise ratio than for sum-product BP; however, no error floor is observed for LP decoding, unlike for BP. Our implementation of LP decoding using ADMM executes as fast as our baseline sum-product BP decoder, is fully parallelizable, and can be seen to implement a type of message passing with a particularly simple schedule. Comment: 35 pages, 11 figures. An early version of this work appeared at the 49th Annual Allerton Conference, September 2011. This version to appear in IEEE Transactions on Information Theory
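    The projection step is the heart of the method. The sketch below implements a simplified, approximate projection onto the parity polytope: clip to the unit cube, find the most violated odd-set facet, and, if it is violated, project onto that facet's hyperplane and re-clip. The clipped hyperplane step is an assumed simplification, not the paper's exact "two-slice" projection.

```python
import numpy as np

def project_parity_polytope_approx(v):
    """Approximate Euclidean projection onto the parity polytope
    PP_d = conv{x in {0,1}^d : sum(x) even}. Recipe: clip to the unit
    cube; if the most violated odd-set facet cuts off the result,
    project onto that facet's hyperplane and re-clip. The last step
    is a simplification of the paper's exact projection."""
    v = np.asarray(v, dtype=float)
    z = np.clip(v, 0.0, 1.0)
    S = z >= 0.5                           # candidate facet set
    if S.sum() % 2 == 0:                   # facets use odd-cardinality sets
        i = np.argmin(np.abs(z - 0.5))     # flip the least certain entry
        S[i] = ~S[i]
    # Facet inequality: sum_{i in S} x_i - sum_{i not in S} x_i <= |S| - 1
    theta = np.where(S, 1.0, -1.0)
    if theta @ z <= S.sum() - 1:           # inside PP: cube projection is exact
        return z
    # Project z onto the facet hyperplane, then re-clip to the cube.
    z = z - (theta @ z - (S.sum() - 1)) / len(z) * theta
    return np.clip(z, 0.0, 1.0)

# Example: the box projection of [0.9, 0.1, 0.2] violates the facet with
# S = {0}; the hyperplane step pulls it back to [0.7, 0.3, 0.4].
print(project_parity_polytope_approx([0.9, 0.1, 0.2]))
```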

    Mapper and Demapper for LDPC-Coded Systems

    Low-Density Parity-Check (LDPC) codes are state-of-the-art error correcting codes, included in several standards for broadcast transmissions. Iterative soft-decision decoding algorithms for LDPC codes reach excellent error correction capability; their performance, however, is strongly affected by finite-precision issues in the representation of inner variables. Great attention has been paid in the recent literature to the topic of quantization for LDPC decoders, but mostly focusing on binary modulations and analyzing finite-precision effects in a disaggregated manner, i.e., considering each block of the receiver separately. Modern telecommunication standards, instead, often adopt high-order modulation schemes, e.g. M-QAM, with the aim of achieving large spectral efficiency. This poses additional quantization problems that have received little attention in the previous literature. This paper discusses the choice of suitable quantization characteristics for both the decoder messages and the received samples in LDPC-coded systems using M-QAM schemes. The analysis also involves the demapper block, which provides the initial likelihood values for the decoder, by relating its quantization strategy to that of the decoder. A signal label for a signal in a 2^m-ary modulation scheme is simply the m-bit pattern assigned to the signal. A mapping strategy refers to the grouping of bits within a codeword, where each m-bit group is used to select a 2^m-ary signal in accordance with the signal labels. The most obvious mapping strategy is to use each group of m consecutive bits to select a signal; we will call this the consecutive-bit (CB) mapping strategy. An alternative is the bit-reliability (BR) mapping strategy, which will be described below. A new demapper version, based on approximate expressions, is also presented; it introduces a slight deviation from the ideal case but yields a low-complexity hardware implementation.
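    A small sketch of the consecutive-bit mapping and a demapper may clarify the setup. The demapper below uses the generic max-log approximation as a stand-in for the paper's own approximate expressions; the 4-QAM constellation, its labeling and the noise model are assumptions.

```python
import numpy as np

def cb_map(bits, constellation, m):
    """Consecutive-bit (CB) mapping: each group of m consecutive
    codeword bits forms the label selecting one 2^m-ary signal."""
    groups = np.asarray(bits).reshape(-1, m)
    labels = groups @ (1 << np.arange(m - 1, -1, -1))  # m-bit pattern -> int
    return constellation[labels]

def maxlog_demap(y, constellation, m, noise_var):
    """Per-bit LLRs, LLR = log P(b=0) - log P(b=1), via the max-log
    approximation: for each bit, keep only the nearest symbol with
    that bit value. A generic stand-in, not the paper's expressions."""
    labels = np.arange(len(constellation))
    d2 = np.abs(y[:, None] - constellation[None, :]) ** 2 / noise_var
    llrs = np.empty((len(y), m))
    for b in range(m):
        bit = (labels >> (m - 1 - b)) & 1
        llrs[:, b] = d2[:, bit == 1].min(axis=1) - d2[:, bit == 0].min(axis=1)
    return llrs.reshape(-1)

# Example: 4-QAM (m = 2) with an assumed natural binary labeling.
qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
tx = cb_map([0, 1, 1, 0], qam4, m=2)
rx = tx + 0.1 * (np.random.randn(2) + 1j * np.random.randn(2))
print(maxlog_demap(rx, qam4, m=2, noise_var=0.02))
```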