
    Hardware implementation aspects of polar decoders and ultra high-speed LDPC decoders

    The goal of channel coding is to detect and correct errors that appear during the transmission of information. In the past few decades, channel coding has become an integral part of most communications standards, as it improves the energy efficiency of transceivers manyfold while requiring only a modest investment in digital signal processing capability. The most commonly used channel codes in modern standards are low-density parity-check (LDPC) codes and Turbo codes, which were the first two types of codes to approach the capacity of several channels while still being practically implementable in hardware. The decoding algorithms for LDPC codes, in particular, are highly parallelizable and suitable for high-throughput applications. A new class of channel codes, called polar codes, was introduced recently. Polar codes have an explicit construction and low-complexity encoding and successive cancellation (SC) decoding algorithms. Moreover, polar codes are provably capacity-achieving over a wide range of channels, making them very attractive from a theoretical perspective. Unfortunately, polar codes under standard SC decoding cannot compete, in terms of error-correcting performance, with the LDPC and Turbo codes used in current standards. For this reason, several improved SC-based decoding algorithms have been introduced. The most prominent is the successive cancellation list (SCL) decoding algorithm, which is powerful enough to approach the error-correcting performance of LDPC codes. The original SCL decoding algorithm was described in an arithmetic domain that is not well suited for hardware implementation, and it is not clear how an efficient SCL decoder architecture can be derived from that description. To this end, in this thesis we re-formulate the SCL decoding algorithm in two distinct arithmetic domains, we describe efficient hardware architectures to implement the resulting SCL decoders, and we compare these decoders with existing LDPC and Turbo decoders in terms of error-correcting performance and implementation efficiency.

    Due to ongoing technology scaling, the feature sizes of integrated circuits keep shrinking at a remarkable pace. As transistors and memory cells shrink, it becomes increasingly difficult and costly (in terms of both area and power) to ensure that the implemented digital circuits always operate correctly. Thus, manufactured digital signal processing circuits, including channel decoder circuits, may not always operate correctly. Instead of discarding these faulty dies or using costly circuit-level fault mitigation mechanisms, an alternative approach is to live with certain malfunctions, provided that the algorithm implemented by the circuit is sufficiently fault-tolerant. In this spirit, we examine decoding of polar codes and LDPC codes under the assumption that the memories used within the decoders are not fully reliable. We show that, in both cases, there is inherent fault tolerance, and we also propose methods to reduce the effect of memory faults on the error-correcting performance of the considered decoders.
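
    Since the abstract refers to SC decoding without spelling it out, the following is a minimal sketch of plain successive cancellation decoding of a polar code in the log-likelihood ratio (LLR) domain, written in Python. It is an illustration only: the helper names (polar_encode, sc_decode, f, g), the min-sum approximation of the LLR combine, and the natural (non-bit-reversed) bit ordering are our assumptions, and this is not the SCL hardware formulation developed in the thesis.

```python
import numpy as np

def polar_encode(u):
    """Recursive polar transform x = u * F^{(x)n} over GF(2), natural order."""
    n = len(u)
    if n == 1:
        return u.copy()
    half = n // 2
    return np.concatenate([polar_encode(u[:half] ^ u[half:]),
                           polar_encode(u[half:])])

def f(a, b):
    # check-node combine (min-sum approximation of the exact LLR rule)
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, partial):
    # variable-node combine given already-decided partial sums
    return b + (1 - 2 * partial) * a

def sc_decode(llr, frozen):
    """Successive cancellation decoding.
    llr: channel LLRs (positive favours bit 0); frozen: boolean mask for u."""
    N = len(llr)
    if N == 1:
        return np.array([0 if (frozen[0] or llr[0] >= 0) else 1])
    half = N // 2
    u1 = sc_decode(f(llr[:half], llr[half:]), frozen[:half])   # upper channel
    x1 = polar_encode(u1)                                      # partial sums
    u2 = sc_decode(g(llr[:half], llr[half:], x1), frozen[half:])
    return np.concatenate([u1, u2])

# toy example: N = 4, first two bits frozen, all-zero codeword over a noisy channel
llr = np.array([2.1, -0.7, 1.3, 0.9])
frozen = np.array([True, True, False, False])
print(sc_decode(llr, frozen))   # -> [0 0 0 0]
```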

    Unreliable and resource-constrained decoding

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 185-213).

    Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated. Broadly speaking, these are communication using decoders that are wiring-cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established.

    For decoding repetition codes using consensus decoding circuits, the optimal trade-off between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation of the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated.

    Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied with this framework. The ratio between the costs of false alarms and missed detections is shown to fundamentally affect the essential nature of discrimination.

    The effect of noise on iterative message-passing decoders for low-density parity-check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided.

    Limits to communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize the transmission volume communicated at fixed maximum message error probabilities are determined. System state feedback is shown not to improve performance.

    For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line is proven. The capacity-power function is computed for several channels; it is non-increasing and concave.

    by Lav R. Varshney. Ph.D.
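
    As an illustration of the density-evolution idea mentioned above, the Python sketch below iterates the message error probability of a (dv, dc)-regular LDPC ensemble under Gallager A decoding on a binary symmetric channel, with every exchanged message additionally flipped with probability alpha to model an unreliable decoder. The particular noise model (an independent BSC on each wire) and the function names are our assumptions, not necessarily the exact equations derived in the thesis.

```python
def bsc(p, alpha):
    """Error probability after a message crosses a faulty BSC(alpha) wire."""
    return p * (1 - alpha) + (1 - p) * alpha

def de_noisy_gallager_a(eps, alpha, dv=3, dc=6, iters=200):
    """Density evolution for (dv, dc)-regular Gallager A decoding on a BSC(eps)
    when every passed message is independently flipped with probability alpha.
    Returns the final variable-to-check message error probability."""
    p = eps
    for _ in range(iters):
        p_wire = bsc(p, alpha)                      # noisy variable-to-check messages
        q = (1 - (1 - 2 * p_wire) ** (dc - 1)) / 2  # check-to-variable error prob.
        q_wire = bsc(q, alpha)                      # noisy check-to-variable messages
        # Gallager A rule: flip the channel estimate only if all dv-1 inputs disagree
        p = (1 - eps) * q_wire ** (dv - 1) + eps * (1 - (1 - q_wire) ** (dv - 1))
    return p

# example: the error probability vanishes for a noiseless decoder but floors
# at a nonzero level in this simple model once the decoder itself is noisy
print(de_noisy_gallager_a(0.02, 0.0), de_noisy_gallager_a(0.02, 0.005))
```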

    Comparative study of a time diversity scheme applied to G3 systems for narrowband power-line communications

    A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering (Electrical). Johannesburg, 2016.

    Power-line communications can be used for the transfer of data across electrical networks in applications such as automatic meter reading in smart grid technology. As the power-line channel is harsh and plagued with non-Gaussian noise, robust forward error correction schemes are required. This research is a comparative study in which a Luby transform code is concatenated with the power-line communication system specified by an up-to-date standard published by Électricité Réseau Distribution France, named G3-PLC. Decoding using both Gaussian elimination and belief propagation is implemented to investigate and characterise their behaviour through computer simulations in MATLAB. Results show that a bit error rate performance improvement is achievable under non-worst-case channel conditions using a Gaussian elimination decoder. An adaptive system is thus recommended which decodes using Gaussian elimination and which has the appropriate data rate. The added complexity can be well tolerated, especially on the receiver side in automatic meter reading systems, because the network is built around a centralised agent which possesses more resources.
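
    To make the belief-propagation side of the comparison concrete, here is a minimal Python sketch of peeling (belief-propagation) decoding of a Luby transform code over GF(2): any received symbol whose remaining degree is one reveals a source symbol, which is then substituted into the other equations. The data layout and function name are illustrative assumptions; a Gaussian elimination decoder would instead solve the same system of XOR equations directly and can succeed in cases where peeling stalls.

```python
def lt_peel_decode(received, k):
    """Peeling (belief-propagation) decoder for an LT code over GF(2).
    received: list of [neighbour_index_set, xor_value] coded symbols that
    survived the channel; k: number of source symbols.
    Returns the recovered source symbols (None where decoding stalled)."""
    src = [None] * k
    eqs = [[set(nbrs), val] for nbrs, val in received]
    progress = True
    while progress:
        progress = False
        for eq in eqs:
            nbrs, val = eq
            # substitute every already-recovered source symbol into this equation
            for i in [i for i in nbrs if src[i] is not None]:
                nbrs.discard(i)
                val ^= src[i]
            eq[1] = val
            # a degree-1 equation reveals one source symbol
            if len(nbrs) == 1:
                i = nbrs.pop()
                if src[i] is None:
                    src[i] = val
                    progress = True
    return src

# example: source bits [1, 0, 1]; three coded symbols survive the channel
print(lt_peel_decode([[{0}, 1], [{0, 1}, 1], [{1, 2}, 1]], 3))  # -> [1, 0, 1]
```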

    Changing edges in graphical model algorithms

    Graphical models are used to describe the interactions in structures such as the nodes in decoding circuits, agents in small-world networks, and neurons in our brains. These structures are often not static and can change over time, resulting in removed edges, extra nodes, or changed link weights. For example, wires in message-passing decoding circuits can be misconnected due to process variation in nanoscale manufacturing or circuit aging, the style of passes among soccer players can change based on the team's strategy, and the connections among neurons can be broken by Alzheimer's disease. The effects of these changes in graphs can reveal useful information and inspire approaches to understanding some challenging problems. In this work, we investigate dynamic changes of edges in graphs and develop mathematical tools to analyze their effects by embedding the graphical models in two applications.

    The first half of the work concerns the performance of message-passing LDPC decoders in the presence of permanently and transiently missing connections, which is equivalent to the removal of edges in the codes' graphical representation, the Tanner graph. We prove concentration and convergence theorems that validate the use of density evolution performance analysis, and we conclude that arbitrarily small error probability is not possible for decoders with missing connections. However, we find suitably defined decoding thresholds for communication systems with binary erasure channels under peeling decoding, as well as binary symmetric channels under Gallager A and B decoding. We see that decoding is robust to missing wires, as decoding thresholds degrade smoothly. Surprisingly, we discover a stochastic facilitation (SF) phenomenon in Gallager B decoders, where having more missing connections improves the decoding thresholds under some conditions.

    The second half of the work concerns the advantages of the semi-metric property of complex weighted networks. Nodes in graphs represent elements in systems, and edges describe the level of interaction among the nodes. A semi-metric edge in a graph, which violates the triangle inequality, indicates that there is another latent relation between the pair of nodes it connects. We show the equivalence between modelling a sporting event using a stochastic Markov chain and an algebraic diffusion process, and we also show that using the algebraic representation to calculate the stationary distribution of a network preserves the graph's semi-metric property, which is lost in stochastic models. Semi-metric edges can be treated as redundancy and pruned in all-pairs shortest-path problems to accelerate computation, which can be applied to more complicated problems such as PageRank. We then further demonstrate the advantages of semi-metricity in graphs by showing that the percentage of semi-metric edges in the interaction graphs of two soccer teams changes linearly with the final score. Interestingly, these redundant edges can be interpreted as a measure of a team's tactics.
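
    A small Python sketch of the semi-metric idea described above: treating edge weights as distances, an edge is semi-metric when some indirect path between its endpoints is shorter than the edge itself, which can be checked against all-pairs shortest paths. The Floyd-Warshall formulation and the assumption that weights are distances (rather than proximities that must first be inverted) are ours, chosen for brevity.

```python
import math

def semimetric_edges(n, edges):
    """Return the edges whose direct weight exceeds the shortest indirect path
    (the 'semi-metric' edges). edges: dict {(u, v): w} on an undirected graph
    with nodes 0..n-1 and positive distance-like weights."""
    # Floyd-Warshall all-pairs shortest paths
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for (u, v), w in edges.items():
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # an edge is semi-metric (redundant for shortest paths) if an indirect route beats it
    return {e for e, w in edges.items() if d[e[0]][e[1]] < w}

# example: the 0-2 edge (weight 5) is dominated by the 0-1-2 path (total weight 3)
print(semimetric_edges(3, {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 5.0}))  # -> {(0, 2)}
```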

    Algorithms and Data Representations for Emerging Non-Volatile Memories

    The evolution of data storage technologies has been extraordinary. Hard disk drives that fit in current personal computers have a capacity that would have required tons of transistors to achieve in the 1970s. Today, we are at the beginning of the era of non-volatile memory (NVM). NVMs provide excellent performance, such as random access, high I/O speed, and low power consumption. The storage density of NVMs keeps increasing following Moore's law. However, higher storage density also brings significant data reliability issues. As chip geometries scale down, memory cells (e.g. transistors) are placed much closer to each other, and noise in the devices is no longer negligible. Consequently, data become more prone to errors and devices have much shorter lifetimes.

    This dissertation focuses on mitigating the reliability and endurance issues of two major NVMs, namely NAND flash memory and phase-change memory (PCM). Our main research tools are a set of coding techniques for the communication channels implied by flash memory and PCM. At the bit level, we design error-correcting codes tailored to the asymmetric errors in flash and PCM, propose a joint coding scheme for endurance and reliability, develop error scrubbing methods for controlling storage channel quality, and study codes that are inherently resistant to the typical errors in flash and PCM. At higher levels, we analyze the structure and meaning of the stored data and propose methods that pass such metadata down to further improve the bit-level coding performance. The highlights of this dissertation include the first set of write-once memory code constructions that correct a significant number of errors, a practical framework that corrects errors by utilizing the redundancies in texts, the first report of the performance of polar codes for flash memories, and the emulation of rank modulation codes in NAND flash chips.
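
    As background on what a write-once memory (WOM) code does, the sketch below implements the classic Rivest-Shamir construction that stores two bits twice in three binary cells whose values may only change from 0 to 1. This is the textbook error-free construction, offered only for illustration; it is not one of the error-correcting WOM constructions contributed by the dissertation, and the function names are ours.

```python
# Classic Rivest-Shamir [3,2] WOM code: a 2-bit value can be written twice
# into three cells whose bits may only change from 0 to 1.
FIRST = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}  # first-write patterns
DECODE = {v: k for k, v in FIRST.items()}                          # pattern -> value

def wom_decode(cells):
    """Read the stored 2-bit value (0..3) from the three cells."""
    if sum(cells) <= 1:                          # first-generation pattern
        return DECODE[tuple(cells)]
    return DECODE[tuple(1 - c for c in cells)]   # second write stores the complement

def wom_write(cells, value):
    """Return updated cells storing `value`; bits only ever go from 0 to 1."""
    if wom_decode(cells) == value:
        return tuple(cells)                      # already holds the value
    if sum(cells) == 0:
        return FIRST[value]                      # first write
    target = tuple(1 - b for b in FIRST[value])  # second write: complemented pattern
    assert all(c <= t for c, t in zip(cells, target)), "only two writes supported"
    return target

# example: write 2, then overwrite with 3, without ever resetting a cell to 0
cells = wom_write((0, 0, 0), 2)   # -> (0, 1, 0)
cells = wom_write(cells, 3)       # -> (1, 1, 0)
print(cells, wom_decode(cells))   # (1, 1, 0) 3
```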