403 research outputs found

    LDPC coded OFDM and its application to DVB-T2, DVB-S2 and IEEE 802.16e

    Since the invention of information theory by Shannon in 1948, coding theorists have been trying to devise coding schemes that achieve the capacity dictated by Shannon's theorem. The two most successful among many are LDPC and Turbo codes. In this thesis, we focus on LDPC codes and in particular their use in the second-generation terrestrial digital video broadcasting (DVB-T2), second-generation satellite digital video broadcasting (DVB-S2) and IEEE 802.16e mobile WiMAX standards. Low-Density Parity-Check (LDPC) block codes were invented by Gallager in 1962, and they can achieve near-Shannon-limit performance on a wide variety of fading channels. LDPC codes are included in the DVB-T2 and DVB-S2 standards because of their excellent error-correcting capabilities, and LDPC coding has also been adopted as an optional error-correcting scheme in IEEE 802.16e mobile WiMAX. This thesis focuses on the bit error rate (BER) and PSNR performance analysis of DVB-T2, DVB-S2 and IEEE 802.16e transmission using LDPC coding under additive white Gaussian noise (AWGN) and Rayleigh fading channel scenarios.
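
    As a rough illustration of the iterative decoding behind results like these, the sketch below runs hard-decision bit-flipping decoding, the simplest relative of the belief-propagation decoders actually used for DVB-T2/S2 LDPC codes. The 3x7 parity-check matrix is the (7,4) Hamming code standing in for a real low-density matrix; none of this is taken from the thesis.

```python
# Toy parity-check matrix: the (7,4) Hamming code, used only as a small
# stand-in for a genuinely low-density matrix such as those in DVB-T2/S2.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(H, word):
    """One parity bit per check equation; all zeros means a valid codeword."""
    return [sum(h & b for h, b in zip(row, word)) % 2 for row in H]

def bit_flip_decode(H, received, max_iters=10):
    """Iteratively flip the bit involved in the most unsatisfied checks."""
    word = list(received)
    for _ in range(max_iters):
        s = syndrome(H, word)
        if not any(s):
            break  # every parity check satisfied: stop
        # count, per bit, how many failed checks it participates in
        counts = [sum(s[i] for i in range(len(H)) if H[i][j])
                  for j in range(len(word))]
        word[counts.index(max(counts))] ^= 1  # flip the worst offender
    return word

codeword = [1, 1, 1, 0, 0, 0, 0]      # satisfies all three checks
received = codeword[:]
received[5] ^= 1                      # single channel error
print(bit_flip_decode(H, received))   # -> [1, 1, 1, 0, 0, 0, 0]
```

    With a BSC or hard-quantized AWGN channel model around this loop, counting residual errors over many random codewords gives exactly the kind of BER-versus-SNR curve the thesis reports (real decoders use soft log-likelihood ratios instead of hard bits).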

    A new high performance LDPC code for DVB-S2

    In spite of their powerful error-correcting capability, Low-Density Parity-Check (LDPC) codes were long ignored due to their high complexity. Around three decades after the invention of this code, researchers turned their attention back to it and made significant improvements in complexity. As a result, LDPC codes came to be widely considered for next-generation error correction in telecommunication systems. In 2005, the new standard for Digital Video Broadcasting (DVB-S2) adopted LDPC codes as its channel coding scheme. The features of this code allow transmission near the Shannon limit. In this thesis, we first review LDPC codes in general, and then present the encoding and decoding scheme used in the DVB-S2 standard. We discuss regular and irregular LDPC codes and compare their advantages and disadvantages. We consider a higher block length for the LDPC code than the DVB-S2 standard in order to improve performance, and we propose an efficient hybrid parity-check matrix for this code. This parity-check matrix has the same number of base addresses as DVB-S2, so it processes the higher block length with the same complexity. Finally, simulation results are provided to show the improvement in performance.
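
    The structural trick that keeps DVB-S2-style encoding cheap is the dual-diagonal ("staircase") parity part of H = [A | T]: parity bits follow by simple accumulation instead of a dense matrix inverse. The sketch below shows the mechanics on a made-up 4x4 sparse block A; it is not a DVB-S2 address table.

```python
# Hypothetical sparse information part A of H = [A | T], where T is the
# dual-diagonal staircase (p_i appears in checks i and i+1).
A = [
    [1, 0, 1, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]

def encode_ira(A, info):
    """Parity bit i accumulates check i of A applied to the info bits."""
    parity, acc = [], 0
    for row in A:
        acc ^= sum(a & x for a, x in zip(row, info)) % 2
        parity.append(acc)
    return info + parity  # systematic codeword [info | parity]

def check(A, codeword):
    """Verify check i: (A.info)_i + p_i + p_{i-1} = 0 (mod 2)."""
    k = len(A[0])
    info, parity = codeword[:k], codeword[k:]
    prev, ok = 0, True
    for i, row in enumerate(A):
        s = (sum(a & x for a, x in zip(row, info)) + parity[i] + prev) % 2
        ok &= (s == 0)
        prev = parity[i]
    return ok

print(encode_ira(A, [1, 0, 1, 1]))  # -> [1, 0, 1, 1, 0, 0, 1, 1]
```

    Because each parity bit depends only on the previous one, encoding stays linear-time even at the longer block lengths the thesis considers.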

    On Computing Shannon’s Sphere Packing Bound and Applications

    A new method to numerically evaluate Shannon's lower bound is presented in this paper. The method is based on the incomplete Beta function and permits the exact evaluation of the Sphere Packing Bound for a large range of code sizes, rates and probabilities of error. Comparisons with current standards (DVB-RCS, DVB-S2 and 3GPP) are also presented and discussed. It is shown that current standard coding schemes are about 0.6 dB from the Shannon limit corrected for binary signalling.
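
    The building block of the paper's evaluation is the regularized incomplete Beta function I_x(a, b). As a stdlib-only sketch (valid for a, b >= 1; in practice `scipy.special.betainc` would be the sensible choice), it can be approximated by trapezoidal integration of the Beta integrand:

```python
import math

def reg_inc_beta(x, a, b, steps=10_000):
    """I_x(a, b) = B(x; a, b) / B(a, b), by trapezoidal integration.

    Assumes a, b >= 1 so the integrand is finite at the endpoints.
    """
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    h = x / steps
    area = sum(f(i * h) + f((i + 1) * h) for i in range(steps)) * h / 2
    beta = math.gamma(a) * math.gamma(b) / math.gamma(a + b)  # complete B(a, b)
    return area / beta

print(reg_inc_beta(0.5, 2, 2))  # I_0.5(2, 2) = 0.5 (closed form: 3x^2 - 2x^3)
```

    For the large code sizes the paper targets, a series or continued-fraction evaluation of I_x(a, b) would replace this brute-force quadrature; the sketch only shows what quantity is being computed.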

    Design issues for the Generic Stream Encapsulation (GSE) of IP datagrams over DVB-S2

    The DVB-S2 standard has brought an unprecedented degree of novelty and flexibility to the way IP datagrams or other network-level packets can be transmitted over DVB satellite links, with the introduction of an IP-friendly link layer - the continuous Generic Streams - and the adaptive combination of advanced error coding, modulation and spectrum management techniques. Recently approved by the DVB, the Generic Stream Encapsulation (GSE) used for carrying IP datagrams over DVB-S2 implements solutions stemming from a design rationale quite different from the one behind IP encapsulation schemes over its predecessor, DVB-S. This paper highlights GSE's original design choices from the perspective of DVB-S2's innovative features and possibilities.
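
    A much-simplified sketch of the complete-PDU case (start and end flags both set, no fragmentation): a 2-byte header packs the S/E flags, a label-type field and a 12-bit length counting the bytes that follow it, then a protocol type and a 6-byte label precede the payload. The field layout here is paraphrased from TS 102 606 and should be treated as an assumption for illustration, not a reference implementation.

```python
import struct

def gse_encapsulate(pdu: bytes, label: bytes, protocol_type: int = 0x0800) -> bytes:
    """Complete-PDU case only: S = E = 1, LT = '00' (6-byte label assumed)."""
    assert len(label) == 6
    body = struct.pack("!H", protocol_type) + label + pdu
    gse_length = len(body)            # bytes following the length field
    assert gse_length < (1 << 12)     # length must fit in 12 bits
    first_word = (1 << 15) | (1 << 14) | (0b00 << 12) | gse_length  # S|E|LT|len
    return struct.pack("!H", first_word) + body

def gse_decapsulate(packet: bytes):
    """Invert the packing above; returns (S, E, LT, protocol, label, PDU)."""
    (word,) = struct.unpack("!H", packet[:2])
    s, e, lt = word >> 15, (word >> 14) & 1, (word >> 12) & 0b11
    length = word & 0x0FFF
    (proto,) = struct.unpack("!H", packet[2:4])
    return s, e, lt, proto, packet[4:10], packet[10:2 + length]
```

    Fragmented PDUs add a fragment ID and a CRC over the reassembled PDU, which is where GSE's design diverges most from the MPE/MPEG-TS encapsulation used over DVB-S.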

    System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing

    This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications. Hardware implementation of Low-Density Parity-Check decoders is approached at both the algorithmic and the architectural level. Low-Density Parity-Check codes are a promising coding scheme for future communication standards due to their outstanding error-correction performance. This work proposes a methodology for analyzing the effects of finite-precision arithmetic on error-correction performance and hardware complexity; the methodology is employed throughout for co-designing the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^{-8} and a saving in complexity of up to 59% with respect to other works in the recent literature. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed, requiring as little as 20% of the memory of the traditional two-phase flooding schedule. Additionally, throughput is doubled and logic complexity is reduced by 12%. These advantages are traded off against error-correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings on the order of 40% in both area and power consumption, while the implementation loss is smaller than 0.05 dB. Most modern communication standards employ Orthogonal Frequency Division Multiplexing as part of their physical layer. The core of OFDM is the Fast Fourier Transform and its inverse, in charge of symbol (de)modulation. 
Requirements on throughput and energy efficiency call for hardware FFT implementations, while the ubiquity of the FFT suggests the design of parametric, re-configurable and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited to the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point), using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies which encompass all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and for commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size and numerical accuracy). The final part of this dissertation focuses on the Network-on-Chip design paradigm, whose goal is building scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link enables looser skew constraints in clock tree synthesis, frequency speed-up, power consumption reduction and faster back-end turnarounds. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. 
In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption. Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an Object-Oriented framework integrated within a commercial tool for IP packaging (Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
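
    The accuracy-versus-bit-width trade-off such a compiler explores can be mimicked in a few lines: a textbook radix-2 Cooley-Tukey FFT plus a hypothetical `quantize` helper that rounds samples to a fixed-point grid, so the error introduced by a given fractional bit-width can be measured against the full-precision transform. Both functions are written for illustration and are not taken from the dissertation.

```python
import cmath
import math

def fft(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    # twiddle factors applied to the odd half
    t = [cmath.exp(-2j * math.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + t[k] for k in range(n // 2)] + \
           [even[k] - t[k] for k in range(n // 2)]

def quantize(x, bits):
    """Round real/imag parts to a grid with `bits` fractional bits (hypothetical)."""
    scale = 1 << bits
    return [complex(round(v.real * scale) / scale,
                    round(v.imag * scale) / scale) for v in x]

# sweep fractional bit-widths and report worst-case output error vs. float FFT
signal = [complex(math.sin(2 * math.pi * 3 * i / 16)) for i in range(16)]
exact = fft(signal)
for bits in (4, 8, 12):
    approx = fft(quantize(signal, bits))
    print(bits, max(abs(a - b) for a, b in zip(exact, approx)))
```

    An accuracy-driven engine as described in the abstract automates exactly this loop, stopping at the smallest bit-width whose error stays inside the user's accuracy budget.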
