
    Efficient decoder design for error correcting codes

    Get PDF
    Error correcting codes (ECC) are widely used to correct errors in data transmission over unreliable or noisy communication channels. Recently, two kinds of promising codes have attracted a lot of research interest because they provide excellent error correction performance: non-binary LDPC codes and polar codes. This dissertation focuses on efficient decoding algorithms and decoder design for these two types of codes.

    Non-binary low-density parity-check (LDPC) codes have some advantages over their binary counterparts, but their decoding complexity is a significant challenge. The iterative hard- and soft-reliability based majority-logic decoding algorithms are attractive for non-binary LDPC codes, since they involve only finite field additions and multiplications as well as integer operations, and hence have significantly lower complexity than other algorithms. We propose two improvements to the majority-logic decoding algorithms. Instead of the accumulation of reliability information used in existing majority-logic decoding algorithms, our first improvement is a new reliability information update. The new update not only results in better error performance and fewer iterations on average, but also further reduces computational complexity. Since existing majority-logic decoding algorithms tend to have a high error floor for codes whose parity check matrices have low column weights, our second improvement is a re-selection scheme, which leads to much lower error floors, at the expense of more finite field and integer operations, by identifying periodic points, re-selecting intermediate hard decisions, and changing reliability information.

    Polar codes are of great interest because they provably achieve the symmetric capacity of discrete memoryless channels with arbitrary input alphabet sizes via an explicit construction. Most existing decoding algorithms for polar codes are based on bit-wise hard or soft decisions. We propose symbol-decision successive cancellation (SC) and successive cancellation list (SCL) decoders for polar codes, which use symbol-wise hard or soft decisions for higher throughput or better error performance. We then propose a recursive channel combination to calculate the symbol-wise channel transition probabilities that lead to symbol decisions. The proposed recursive channel combination has lower complexity than simply combining bit-wise channel transition probabilities. The similarity between the proposed method and Arıkan's channel transformations also helps to share hardware resources between calculating bit- and symbol-wise channel transition probabilities. To reduce the complexity of list pruning, a two-stage list pruning network is proposed to provide a trade-off between the error performance and the complexity of the symbol-decision SCL decoder. Since memory is a significant part of SCL decoders, we also propose a pre-computation memory-saving technique to reduce the memory requirement of an SCL decoder. To further reduce the complexity of the recursive channel combination, we propose an approximate maximum-likelihood (AML) decoding unit for SCL decoders. In particular, we investigate the distribution of the frozen bits of polar codes designed for both the binary erasure and additive white Gaussian noise channels, and take advantage of this distribution to reduce the complexity of the AML decoding unit, improving the throughput-area efficiency of SCL decoders.

    Furthermore, to adapt to the variable throughput or latency requirements that are widespread in current communication applications, a multi-mode SCL decoder with variable list sizes and parallelism is proposed. If high throughput or low latency is required, the decoder decodes multiple received words in parallel with a small list size. If error performance has higher priority, the multi-mode decoder switches to a serial mode with a bigger list size. The multi-mode SCL decoder therefore provides a flexible trade-off between latency, throughput, and error performance at the expense of a small overhead.
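    The abstract describes the symbol-decision decoders only at a high level. As a point of reference for the bit-wise baseline they generalize, the sketch below (an illustration added here, not taken from the dissertation) implements plain LLR-domain successive cancellation decoding using the usual min-sum f and g kernels of Arıkan's channel transformations; the function names and the N = 8 frozen set are illustrative assumptions.

```python
import numpy as np

def f_minsum(a, b):
    # upper-branch LLR combination; min-sum approximation of
    # 2*atanh(tanh(a/2)*tanh(b/2))
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, u):
    # lower-branch LLR combination, given the re-encoded partial sums u
    return b + (1 - 2 * u) * a

def sc_decode(llr, frozen):
    """Bit-wise successive cancellation decoding (natural bit order).
    llr    : channel LLRs, log P(y|x=0) - log P(y|x=1), length N = 2**n
    frozen : boolean array, True where u_i is frozen to 0
    Returns (u_hat, x_hat): decided bits and the re-encoded codeword.
    """
    n = len(llr)
    if n == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)
        bit = np.array([u], dtype=int)
        return bit, bit
    half = n // 2
    # decode the first (degraded) half-code from the check-node combination
    u1, x1 = sc_decode(f_minsum(llr[:half], llr[half:]), frozen[:half])
    # decode the second (upgraded) half-code, aided by the partial sums x1
    u2, x2 = sc_decode(g(llr[:half], llr[half:], x1), frozen[half:])
    return np.concatenate([u1, u2]), np.concatenate([x1 ^ x2, x2])

# Illustrative use: N = 8, rate 1/2, all-zero codeword over an AWGN channel.
# The frozen set {0, 1, 2, 4} is a commonly used choice for N = 8.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frozen = np.array([True, True, True, False, True, False, False, False])
    sigma = 0.8
    llr = 2.0 * (1.0 + sigma * rng.standard_normal(8)) / sigma**2
    u_hat, _ = sc_decode(llr, frozen)
    print(u_hat)  # expected: all zeros at moderate noise levels
```

    A symbol-decision decoder of the kind proposed above would replace the innermost bit-by-bit decisions with joint decisions over groups of bits, which is where the recursive channel combination comes in; that part is not reproduced here.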

    Digital VLSI Architectures for Advanced Channel Decoders

    Get PDF
    Error-correcting codes are widely adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probes. New and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of such high-gain error-correcting codes pose many challenges: they usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. This work focuses on Polar codes, a recent class of channel codes with the proven ability to make the decoding error probability arbitrarily small as the block length is increased, provided that the code rate is less than the capacity of the channel. This property and the recursive code construction of these codes have attracted wide interest from the communications community. Hardware architectures with reduced complexity can efficiently implement a polar decoder using either the successive cancellation or the belief propagation algorithm. The latter offers higher throughput at high signal-to-noise ratio thanks to the inherently parallel decision-making capability of this decoder type. A new analysis of belief propagation scheduling algorithms for polar codes and of the interconnection structure of the decoding trellis, not covered in the literature, is also presented. It enabled a hardware implementation that increases the maximum information throughput under belief propagation decoding while also minimizing the implementation complexity.
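    As background for the belief-propagation discussion, the following minimal sketch (an illustration added here, not the thesis's architecture) shows the commonly used min-sum message updates of a single 2x2 processing element in the polar-code factor graph; names and the min-sum choice are assumptions for the example.

```python
import numpy as np

def minsum(a, b):
    # min-sum approximation of the box-plus operation 2*atanh(tanh(a/2)*tanh(b/2))
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def bp_processing_element(L1, L2, R1, R2):
    """Message update of one 2x2 processing element in the polar factor graph.
    L1, L2 : left-propagating (channel-side) messages entering from the right
    R1, R2 : right-propagating (decision-side) messages entering from the left
    Returns the four outgoing messages (Lout1, Lout2, Rout1, Rout2).
    """
    Lout1 = minsum(L1, L2 + R2)
    Lout2 = minsum(L1, R1) + L2
    Rout1 = minsum(R1, L2 + R2)
    Rout2 = minsum(R1, L1) + R2
    return Lout1, Lout2, Rout1, Rout2
```

    A full decoder tiles N/2 such elements over log2(N) stages and iterates them under some schedule (for example a right-to-left, left-to-right round trip). The scheduling and interconnection analysis is exactly where this work's contribution lies and is not reproduced here.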

    Improve the Usability of Polar Codes: Code Construction, Performance Enhancement and Configurable Hardware

    Full text link
    Error-correcting codes (ECC) have been widely used for forward error correction (FEC) in modern communication systems to dramatically reduce the signal-to-noise ratio (SNR) needed to achieve a given bit error rate (BER). Newly invented polar codes have attracted much interest because of their capacity-achieving potential, efficient encoder and decoder implementation, and flexible architecture design space. This dissertation is aimed at improving the usability of polar codes by providing a practical code design method, new approaches to improve the performance of polar codes, and a configurable hardware design that adapts to various specifications.

    State-of-the-art polar codes are used to achieve extremely low error rates. In this work, a high-performance FPGA is used in prototyping polar decoders to catch rare-case errors for error-correcting performance verification and error analysis. To discover the polarization characteristics and error patterns of polar codes, an FPGA emulation platform for belief-propagation (BP) decoding is built with a semi-automated construction flow. The FPGA-based emulation achieves significant speedup in large-scale experiments involving trillions of data frames. The platform is a key enabler of this work. The frozen set selection of polar codes, known as bit selection, is critical to their error-correcting performance. A simulation-based in-order bit selection method is developed to evaluate the error rate of each bit using Monte Carlo simulations. The frozen set is selected based on the bit reliability ranking. The resulting code construction exhibits up to 1 dB of coding gain with respect to the conventional bit selection. To further improve the coding gain of the BP decoder for low-error-rate applications, the decoding error mechanisms are studied and analyzed, and the errors are classified based on their distinct signatures. Error detection is enabled by low-cost CRC concatenation, and post-processing algorithms targeting each type of error are designed to mitigate the vast majority of the decoding errors. The post-processor incurs only a small implementation overhead, but it provides more than an order of magnitude improvement in error-correcting performance.

    The regularity of the BP decoder structure offers many hardware architecture choices. Silicon area, power consumption, throughput, and latency can be traded off to reach the optimal design points for practical use cases. A comprehensive design space exploration reveals several practical architectures at different design points. The scalability of each architecture is also evaluated based on the implementation candidates. For dynamic communication channels, such as wireless channels in the upcoming 5G applications, multiple codes of different lengths and code rates are needed to fit varying channel conditions. To minimize implementation cost, a universal decoder architecture is proposed to support multiple codes through hardware reuse. A 40 nm length- and rate-configurable polar decoder ASIC is demonstrated to fit various communication environments and service requirements.
    Ph.D., Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/140817/1/shuangsh_1.pd
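    For contrast with the simulation-based bit selection described above, the sketch below (an illustration added here) shows one common baseline construction, presumably of the kind the abstract calls conventional bit selection: ranking the synthetic channels by the Bhattacharyya-parameter recursion on a binary erasure design channel. The function name and design erasure probability are illustrative assumptions; the dissertation's method instead ranks positions by per-bit error rates measured in Monte Carlo simulation on the FPGA platform.

```python
def bec_bhattacharyya_construction(n, k, design_erasure_prob=0.5):
    """Conventional frozen-set selection via the BEC Bhattacharyya recursion.
    n : log2 of the block length N, k : number of information bits.
    Returns (info_set, frozen_set) as sorted index lists in natural bit order.
    """
    z = [design_erasure_prob]          # Bhattacharyya parameter of the raw channel
    for _ in range(n):                 # one polarization step per recursion level
        nxt = []
        for a in z:
            nxt.append(2 * a - a * a)  # degraded ("minus") synthetic channel
            nxt.append(a * a)          # upgraded ("plus") synthetic channel
        z = nxt
    order = sorted(range(len(z)), key=lambda i: z[i])  # most reliable first
    return sorted(order[:k]), sorted(order[k:])

# Example: N = 8, rate 1/2 -> info set [3, 5, 6, 7], frozen set [0, 1, 2, 4]
info_set, frozen_set = bec_bhattacharyya_construction(n=3, k=4)
```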

    Wide-band mixing DACs with high spectral purity

    Get PDF

    Modulation, Coding, and Receiver Design for Gigabit mmWave Communication

    Get PDF
    While wireless communication has become a ubiquitous part of our daily life and the world around us, it has not yet been able to deliver the multi-gigabit throughput required for applications like high-definition video transmission or cellular backhaul communication. The throughput limitation of current wireless systems is mainly the result of a shortage of spectrum and the problem of congestion. Recent advances in circuit design allow the realization of analog frontends for mmWave frequencies between 30 GHz and 300 GHz, making abundant unused spectrum accessible. However, the transition to mmWave carrier frequencies and GHz bandwidths comes with new challenges for wireless receiver design. Large variations of the channel conditions and high symbol rates require flexible but power-efficient receiver designs. This thesis investigates receiver algorithms and architectures that enable multi-gigabit mmWave communication. Using a system-level approach, the design options between low-power time-domain and power-hungry frequency-domain signal processing are explored.

    The system discussion starts with an analysis of the problem of parameter synchronization in mmWave systems and its impact on system design. The proposed synchronization architecture extends known synchronization techniques to provide greater flexibility regarding the operating environments and for system efficiency optimization. For frequency-selective environments, versatile single-carrier frequency-domain equalization (SC-FDE) offers not only excellent channel equalization, but also the possibility to integrate additional baseband tasks without overhead. Hence, the high initial complexity of SC-FDE needs to be put in perspective against the complexity savings in the other parts of the baseband. Furthermore, an extension to the SC-FDE architecture is proposed that allows the equalization complexity to be adapted by switching between a cyclic-prefix mode and a reduced-block-length overlap-save mode based on the delay spread. Approaching the problem of complexity adaptation from the time domain, a high-speed hardware architecture for the delayed decision feedback sequence estimation (DDFSE) algorithm is presented. DDFSE uses decision feedback to reduce the complexity of the sequence estimation and allows the system performance to be set between that of full maximum-likelihood detection and that of pure decision feedback equalization. An implementation of the DDFSE architecture is demonstrated as part of an all-digital IEEE 802.11ad baseband ASIC manufactured in 40 nm CMOS.

    A flexible architecture for wideband mmWave receivers based on complex sub-sampling is presented. Complex sub-sampling combines the design advantages of sub-sampling receivers with the flexibility of direct-conversion receivers using a single passive component and a digital compensation scheme. Feasibility of the architecture is proven with a 16 Gb/s hardware demonstrator. The demonstrator is used to explore the potential gain of non-equidistant constellations for high-throughput mmWave links. Specifically crafted amplitude phase-shift keying (APSK) modulations achieve a 1 dB average mutual information (AMI) advantage over quadrature amplitude modulation (QAM) in simulation and on the testbed hardware. The AMI advantage of APSK can be leveraged in a practical transmission using Polar codes that are trained specifically for the constellation.
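    To make the SC-FDE discussion concrete, the following minimal sketch (an illustration under simplified assumptions, not the thesis's architecture) shows the cyclic-prefix mode of a single-carrier frequency-domain equalizer with per-bin MMSE weights; unit-energy symbols and a known channel estimate are assumed, and the overlap-save mode and the DDFSE detector described above are not shown.

```python
import numpy as np

def sc_fde_mmse(rx_block, channel_ir, noise_var, cp_len):
    """Minimal cyclic-prefix single-carrier frequency-domain equalizer (MMSE).
    rx_block   : received samples, cyclic prefix included
    channel_ir : estimated channel impulse response (length <= cp_len + 1)
    noise_var  : noise variance estimate (unit-energy symbols assumed)
    cp_len     : cyclic prefix length in samples
    Returns the equalized time-domain block, ready for symbol detection.
    """
    y = rx_block[cp_len:]                          # drop the cyclic prefix
    n_fft = len(y)
    H = np.fft.fft(channel_ir, n_fft)              # channel frequency response
    Y = np.fft.fft(y)                              # to the frequency domain
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)  # per-bin MMSE weights
    return np.fft.ifft(W * Y)                      # back to the time domain
```

    Switching to an overlap-save mode mainly changes how the received stream is framed before the FFT and which output samples are kept; the per-bin weighting itself stays the same, which is roughly what makes the complexity adaptation mentioned above attractive.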

    The Fifth NASA Symposium on VLSI Design

    Get PDF
    The fifth annual NASA Symposium on VLSI Design had 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data systems performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.

    Proceedings of the Second International Mobile Satellite Conference (IMSC 1990)

    Get PDF
    Presented here are the proceedings of the Second International Mobile Satellite Conference (IMSC), held June 17-20, 1990 in Ottawa, Canada. Topics covered include future mobile satellite communications concepts, aeronautical applications, modulation and coding, propagation and experimental systems, mobile terminal equipment, network architecture and control, regulatory and policy considerations, vehicle antennas, and speech compression.

    Time and frequency domain algorithms for speech coding

    Get PDF
    The promise of digital hardware economies (due to recent advances in VLSI technology) has focussed much attention on more complex and sophisticated speech coding algorithms which offer improved quality at relatively low bit rates. This thesis describes the results (obtained from computer simulations) of research into various efficient (time and frequency domain) speech encoders operating at a transmission bit rate of 16 Kbps.

    In the time domain, Adaptive Differential Pulse Code Modulation (ADPCM) systems employing both forward and backward adaptive prediction were examined. A number of algorithms were proposed and evaluated, including several variants of the Stochastic Approximation Predictor (SAP). A Backward Block Adaptive (BBA) predictor was also developed and found to outperform the conventional stochastic methods, even though its complexity in terms of signal processing requirements is lower. A simplified Adaptive Predictive Coder (APC) employing a single-tap pitch predictor, considered next, provided a slight improvement in performance over ADPCM, but with rather greater complexity.

    The ultimate test of any speech coding system is the perceptual performance of the received speech. Recent research has indicated that this may be enhanced by suitable control of the noise spectrum according to the theory of auditory masking. Various noise-shaping ADPCM configurations were examined, and it was demonstrated that a proposed pre-/post-filtering arrangement, which exploits the predictor-quantizer interaction advantageously, leads to the best subjective performance in both forward and backward prediction systems. Adaptive quantization is instrumental to the performance of ADPCM systems. Both the forward adaptive quantizer (AQF) and the backward one-word memory adaptation (AQJ) were examined. In addition, a novel method of decreasing quantization noise in ADPCM-AQJ coders, which involves the application of correction to the decoded speech samples, provided reduced output noise across the spectrum, with considerable high-frequency noise suppression.

    More powerful (and inevitably more complex) frequency domain speech coders such as the Adaptive Transform Coder (ATC) and the Sub-band Coder (SBC) offer good quality speech at 16 Kbps. To reduce complexity and coding delay, whilst retaining the advantage of sub-band coding, a novel transform-based split-band coder (TSBC) was developed and found to compare closely in performance with the SBC. To prevent the heavy side-information requirement associated with a large number of bands in split-band coding schemes from impairing coding accuracy, without forgoing the efficiency provided by adaptive bit allocation, a method employing AQJs to code the sub-band signals together with vector quantization of the bit allocation patterns was also proposed. Finally, 'pipeline' methods of bit allocation and step size estimation (using the Fast Fourier Transform (FFT) on the input signal) were examined. Such methods, although less accurate, are nevertheless useful in limiting the coding delay associated with SBC schemes employing Quadrature Mirror Filters (QMF).
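    As a concrete, deliberately simplified illustration of the ADPCM building blocks discussed above, the sketch below (added here, not taken from the thesis) implements a toy 2-bit ADPCM loop with a fixed first-order backward predictor and a Jayant-style one-word-memory adaptive step size; the coefficients, multipliers, and function names are illustrative assumptions rather than the thesis's configurations.

```python
import numpy as np

def adpcm2_encode(x, a=0.9, step0=0.02, step_min=1e-4, step_max=1.0,
                  multipliers=(0.9, 1.6)):
    """Toy 2-bit ADPCM encoder: fixed first-order backward prediction plus a
    one-word-memory adaptive quantizer. Codes take values in {-2, -1, +1, +2}."""
    codes = np.zeros(len(x), dtype=np.int8)
    rec = np.zeros(len(x))
    x_prev, step = 0.0, step0
    for i, sample in enumerate(x):
        pred = a * x_prev                   # predict from the last reconstruction
        err = sample - pred
        mag = 1 if abs(err) >= step else 0  # outer (1) or inner (0) level
        sign = 1 if err >= 0 else -1
        codes[i] = sign * (mag + 1)
        q = sign * (mag + 0.5) * step       # quantized prediction error
        x_prev = pred + q                   # local reconstruction (decoder state)
        rec[i] = x_prev
        # backward step-size adaptation: expand on outer hits, shrink on inner
        step = min(max(step * multipliers[mag], step_min), step_max)
    return codes, rec

def adpcm2_decode(codes, a=0.9, step0=0.02, step_min=1e-4, step_max=1.0,
                  multipliers=(0.9, 1.6)):
    """Decoder: mirrors the encoder's reconstruction path; no side information
    is needed because prediction and step adaptation are backward-adaptive."""
    rec = np.zeros(len(codes))
    x_prev, step = 0.0, step0
    for i, code in enumerate(codes):
        mag = abs(int(code)) - 1
        sign = 1 if code > 0 else -1
        x_prev = a * x_prev + sign * (mag + 0.5) * step
        rec[i] = x_prev
        step = min(max(step * multipliers[mag], step_min), step_max)
    return rec

# Illustrative use on a synthetic tone:
#   t = np.arange(800) / 8000.0
#   codes, _ = adpcm2_encode(0.5 * np.sin(2 * np.pi * 440 * t))
#   speech_hat = adpcm2_decode(codes)
```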