
    Improving the tolerance of stochastic LDPC decoders to overclocking-induced timing errors: a tutorial and design example

    Channel codes such as Low-Density Parity-Check (LDPC) codes may be employed in wireless communication schemes for correcting transmission errors. This tolerance to channel-induced transmission errors allows the communication schemes to achieve higher transmission throughputs, at the cost of requiring additional processing for performing LDPC decoding. However, this LDPC decoding operation is associated with a potentially inadequate processing throughput, which may constrain the attainable transmission throughput. In order to increase the processing throughput, the clock period may be reduced, albeit at the cost of potentially introducing timing errors. Previous research efforts have offered only a paucity of solutions for mitigating the occurrence of timing errors in channel decoders, typically by employing additional circuitry for detecting and correcting these overclocking-induced timing errors. Against this background, in this paper we demonstrate that stochastic LDPC decoders (LDPC-SDs) are capable of exploiting their inherent error correction capability for correcting not only transmission errors, but also timing errors, even without the requirement for additional circuitry. Motivated by this, we provide the first comprehensive tutorial on LDPC-SDs. We also propose a novel design flow for timing-error-tolerant LDPC decoders. We use this to develop a timing error model for LDPC-SDs and investigate how their overall error correction performance is affected by overclocking. Drawing upon our findings, we propose a modified LDPC-SD having an improved timing error tolerance. In a particular practical scenario, this modification eliminates the approximately 1 dB performance degradation suffered by an overclocked LDPC-SD without our modification, enabling the processing throughput to be increased by up to 69.4%, without compromising the error correction capability or processing energy consumption of the LDPC-SD.
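
    For readers unfamiliar with the bit-serial operation underlying LDPC-SDs, the sketch below illustrates the two node computations at the heart of a stochastic LDPC decoder: a parity-check node that XORs its incoming stochastic bits, and a variable node that outputs the agreed value when its inputs match and otherwise re-uses a stored bit. This is a minimal illustration, not the authors' design; the node degree and the single-bit memory standing in for the edge memories are simplifying assumptions.

```python
import random

def check_node(bits):
    """Stochastic parity-check node: XOR of the incoming bits."""
    out = 0
    for b in bits:
        out ^= b
    return out

class VariableNode:
    """Stochastic variable node with a 1-bit memory standing in for an edge memory.

    If the channel bit and the incoming check-node bits all agree, that value is
    output and remembered; otherwise the previously remembered bit is re-used.
    """
    def __init__(self):
        self.prev = random.randint(0, 1)  # arbitrary initial state

    def update(self, channel_bit, check_bits):
        candidates = [channel_bit] + list(check_bits)
        if all(b == candidates[0] for b in candidates):
            self.prev = candidates[0]
        return self.prev

# Toy usage: one decoding cycle for a degree-3 variable node.
vn = VariableNode()
out_bit = vn.update(channel_bit=1, check_bits=[1, 0])
print(out_bit)
```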

    Fault Tolerance of Stochastic Decoders for Error Correcting Codes

    Low-Density Parity-Check (LDPC) codes are very powerful linear error-correcting codes, first introduced by Gallager in 1963. They are now used in many communication standards due to their ability to achieve near Shannon-capacity performance. Stochastic decoding is a hardware-efficient method of iterative decoding of LDPC codes. In this work, we investigate the capability of stochastic decoding to tolerate circuit soft errors while maintaining good bit error rate performance and a low error floor. Soft errors can be intended faults, resulting from either supply voltage scaling to reduce power consumption or overclocking the system to achieve a higher throughput. They can also be unintended faults resulting from temperature or process variations. We develop two models to emulate these circuit errors at the system level. We apply our models to two standardized LDPC codes (10GBASE-T and WiMAX). Simulation results show that stochastic decoding is very tolerant to faults and errors, tolerating a setup-time-violation probability of 0.1 on the wires of the decoder. Hence, stochastic decoding can be very useful in systems with very low power or high performance requirements, where we can push the limits of power or speed by lowering the supply voltage or heavily overclocking the system while maintaining good performance. In addition, a chip has been designed and sent for fabrication to perform post-silicon validation and verify our models.
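
    The thesis's two error models are not reproduced here; as a rough stand-in, the sketch below assumes that a setup-time violation on a wire causes the receiving register to hold its previous bit with some probability p (0.1 in the quoted result). The function name and the reset value are illustrative choices, not the thesis's exact model.

```python
import random

def inject_setup_violations(stream, p_violation=0.1, seed=None):
    """Model setup-time violations on a single wire of a stochastic decoder.

    With probability p_violation, the register fed by the wire keeps the
    previous cycle's bit instead of latching the current one (a simplifying
    assumption, not the thesis's exact model).
    """
    rng = random.Random(seed)
    faulty = []
    prev = 0  # assumed reset value of the register
    for bit in stream:
        latched = prev if rng.random() < p_violation else bit
        faulty.append(latched)
        prev = latched
    return faulty

# Example: a stochastic stream encoding the value 0.75 (probability of a 1).
clean = [1 if random.random() < 0.75 else 0 for _ in range(10_000)]
noisy = inject_setup_violations(clean, p_violation=0.1)
print(sum(clean) / len(clean), sum(noisy) / len(noisy))
```

    Because the held bit is itself drawn from approximately the same distribution as the stream, the stream's mean, and hence the value it represents, is largely preserved, which gives one intuition for the tolerance reported above.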

    The Logic of Random Pulses: Stochastic Computing.

    Recent developments in the field of electronics have produced nano-scale devices whose operation can only be described in probabilistic terms. In contrast with the conventional deterministic computing that has dominated the digital world for decades, we investigate a fundamentally different technique that is probabilistic by nature, namely, stochastic computing (SC). In SC, numbers are represented by bit-streams of 0's and 1's, in which the probability of seeing a 1 denotes the value of the number. The main benefit of SC is that complicated arithmetic computation can be performed by simple logic circuits. For example, a single (logic) AND gate performs multiplication. The dissertation begins with a comprehensive survey of SC and its applications. We highlight its main challenges, which include long computation time and low accuracy, as well as the lack of general design methods. We then address some of the more important challenges. We introduce a new SC design method, called STRAUSS, that generates efficient SC circuits for arbitrary target functions. We then address the problems arising from correlation among stochastic numbers (SNs). In particular, we show that, contrary to general belief, correlation can sometimes serve as a resource in SC design. We also show that unlike conventional circuits, SC circuits can tolerate high error rates and are hence useful in some new applications that involve nondeterministic behavior in the underlying circuitry. Finally, we show how SC's properties can be exploited in the design of an efficient vision chip that is suitable for retinal implants. In particular, we show that SC circuits can directly operate on signals with neural encoding, which eliminates the need for data conversion.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113561/1/alaghi_1.pd
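
    The AND-gate multiplication mentioned in the abstract is easy to demonstrate: if two independent bit-streams carry 1s with probabilities p and q, their bitwise AND carries 1s with probability p*q. A minimal sketch, assuming unipolar encoding and independent streams (helper names are illustrative):

```python
import random

def to_stream(value, length, rng):
    """Unipolar stochastic number: P(bit = 1) equals the encoded value in [0, 1]."""
    return [1 if rng.random() < value else 0 for _ in range(length)]

def from_stream(stream):
    """Decode a stochastic number by averaging its bits."""
    return sum(stream) / len(stream)

rng = random.Random(42)
a = to_stream(0.5, 100_000, rng)
b = to_stream(0.3, 100_000, rng)

# A single AND gate per bit position multiplies the two stochastic numbers.
product = [x & y for x, y in zip(a, b)]
print(from_stream(product))  # close to 0.5 * 0.3 = 0.15
```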

    Design tradeoffs and challenges in practical coherent optical transceiver implementations

    This tutorial discusses the design and ASIC implementation of coherent optical transceivers. Algorithmic and architectural options and tradeoffs between performance and complexity/power dissipation are presented. Particular emphasis is placed on flexible (or reconfigurable) transceivers because of their importance as building blocks of software-defined optical networks. The paper elaborates on some advanced digital signal processing (DSP) techniques such as iterative decoding, which are likely to be applied in future coherent transceivers based on higher order modulations. Complexity and performance of critical DSP blocks such as the forward error correction decoder and the frequency-domain bulk chromatic dispersion equalizer are analyzed in detail. Other important ASIC implementation aspects including physical design, signal and power integrity, and design for testability, are also discussed.
    Morero, Damián Alfonso (Universidad Nacional de Córdoba, Facultad de Ciencias Exactas, Físicas y Naturales, Argentina; ClariPhy Argentina S.A., Argentina); Castrillon, Alejandro (Universidad Nacional de Córdoba, Facultad de Ciencias Exactas, Físicas y Naturales, Argentina); Aguirre, Alejandro (ClariPhy Argentina S.A., Argentina); Hueda, Mario Rafael (Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico Conicet - Córdoba, Instituto de Estudios Avanzados en Ingeniería y Tecnología, Universidad Nacional de Córdoba, Facultad de Ciencias Exactas Físicas y Naturales, Argentina); Agazzi, Oscar Ernesto (Universidad Nacional de Córdoba, Facultad de Ciencias Exactas, Físicas y Naturales, Argentina; ClariPhy Argentina S.A., Argentina)
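
    As a concrete illustration of the frequency-domain bulk chromatic dispersion equalizer mentioned above, the sketch below applies the commonly used all-pass compensating response exp(-j·D·λ²·z·ω²/(4πc)) to one block of received samples via an FFT. It is a textbook-style sketch under assumed parameters and sign conventions (single block, no overlap-save partitioning), not the implementation discussed in the paper.

```python
import numpy as np

def cd_compensate(samples, fs, wavelength, dispersion, length):
    """Frequency-domain chromatic dispersion compensation for one block.

    samples     : complex baseband samples
    fs          : sampling rate [Hz]
    wavelength  : carrier wavelength [m]
    dispersion  : fiber dispersion parameter D [s/m^2] (17 ps/nm/km = 17e-6)
    length      : fiber length [m]
    """
    c = 299_792_458.0
    n = len(samples)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)
    # All-pass response intended to undo the fiber's quadratic phase
    # (sign conventions vary between references).
    g = np.exp(-1j * dispersion * wavelength**2 * length * omega**2 / (4 * np.pi * c))
    return np.fft.ifft(np.fft.fft(samples) * g)

# Toy usage with arbitrary parameters (not taken from the paper).
rx = np.random.randn(4096) + 1j * np.random.randn(4096)
eq = cd_compensate(rx, fs=64e9, wavelength=1550e-9, dispersion=17e-6, length=100e3)
```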

    Reliable chip design from low powered unreliable components

    The pace of technological improvement of the semiconductor market is driven by Moore's Law, enabling chip transistor density to double every two years. Transistors continue to decline in cost and size, but chip power consumption continues to increase. The continuous transistor scaling and extremely low power constraints in modern Very Large Scale Integrated (VLSI) chips can potentially offset the benefits of technology shrinking owing to reliability issues. As VLSI technology scales into the nanoscale regime, fundamental physical limits are approached, and higher levels of variability, performance degradation, and higher rates of manufacturing defects are experienced. Soft errors, which traditionally affected only the memories, are now also resulting in logic circuit reliability degradation. A solution to these limitations is to integrate reliability assessment techniques into the Integrated Circuit (IC) design flow. This thesis investigates four aspects of reliability-driven circuit design: a) reliability estimation; b) reliability optimization; c) fault-tolerant techniques; and d) delay degradation analysis. To guide the reliability-driven synthesis and optimization of combinational circuits, a highly accurate probability-based reliability estimation methodology, christened the Conditional Probabilistic Error Propagation (CPEP) algorithm, is developed to compute the impact of gate failures on the circuit output. CPEP guides the proposed rewriting-based logic optimization algorithm employing local transformations. The main idea behind this methodology is to replace parts of the circuit with functionally equivalent but more reliable counterparts chosen from a precomputed subset of Negation-Permutation-Negation (NPN) classes of 4-variable functions. Cut enumeration and Boolean matching, driven by the reliability-aware optimization algorithm, are used to identify the best possible replacement candidates. Experiments on a set of MCNC benchmark circuits and 8051 functional microcontroller units indicate that the proposed framework can achieve up to 75% reduction of output error probability. On average, about 14% SER reduction is obtained at the expense of a very low area overhead of 6.57%, which results in 13.52% higher power consumption. The next contribution of the research describes a novel methodology to design fault-tolerant circuitry by employing error correction codes, known as the Codeword Prediction Encoder (CPE). Traditional fault-tolerant techniques analyze the circuit reliability issue from a static point of view, neglecting dynamic errors. In the context of communication and storage, the study of novel methods for reliable data transmission under unreliable hardware is an increasing priority. The idea of the CPE is adapted from the field of forward error correction for telecommunications, focusing on both encoding aspects and error correction capabilities. The proposed augmented encoding solution consists of computing an augmented codeword that contains both the codeword to be transmitted on the channel and extra parity bits. A Computer Aided Development (CAD) framework known as the CPE simulator is developed, providing a unified platform that comprises a novel encoder and fault-tolerant LDPC decoders. Experiments on a set of encoders with different coding rates and different decoders indicate that the proposed framework can correct all errors under specific scenarios, and on average achieves about a 1000-fold improvement in Soft Error Rate (SER) reduction.
    The last part of the research is an Inverse Gaussian Distribution (IGD) based delay model applicable to both combinational and sequential elements in sub-powered circuits. The Probability Density Function (PDF) based delay model accurately captures the delay behavior of all the basic gates in the library database. The IGD model employs these necessary parameters, and the delay estimation accuracy is demonstrated by evaluating multiple circuits. Experimental results indicate that the IGD-based approach provides a close match against HSPICE Monte Carlo simulation results, with average errors of less than 1.9% for the 8-bit Ripple Carry Adder (RCA) and 1.2% for the 8-bit De-Multiplexer (DEMUX) and Multiplexer (MUX), respectively.
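
    For reference, the inverse Gaussian (Wald) density underlying the IGD delay model has the closed form f(x; μ, λ) = sqrt(λ/(2πx³)) · exp(−λ(x−μ)²/(2μ²x)) for x > 0. The snippet below simply evaluates that density; the mapping of μ and λ to gate-library parameters is the thesis's contribution and is not reproduced here.

```python
import math

def inverse_gaussian_pdf(x, mu, lam):
    """Inverse Gaussian (Wald) probability density function.

    mu  : mean of the distribution (> 0)
    lam : shape parameter (> 0)
    """
    if x <= 0:
        return 0.0
    return math.sqrt(lam / (2 * math.pi * x**3)) * math.exp(-lam * (x - mu)**2 / (2 * mu**2 * x))

# Example: evaluate a hypothetical gate-delay density at 1.1 (normalized delay units).
print(inverse_gaussian_pdf(x=1.1, mu=1.0, lam=5.0))
```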

    Digital pulse processing

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 71-74). By Martin McCormick.
    This thesis develops an exact approach for processing pulse signals from an integrate-and-fire system directly in the time domain. Processing is deterministic and built from simple asynchronous finite-state machines that can perform general piecewise-linear operations. The pulses can then be converted back into an analog or fixed-point digital representation through a filter-based reconstruction. Integrate-and-fire is shown to be equivalent to the first-order sigma-delta modulation used in oversampled noise-shaping converters. The encoder circuits are well known and have simple construction using both current and next-generation technologies. Processing in the pulse domain provides many benefits, including lower area and power consumption, error tolerance, signal serialization, and simple conversion for mixed-signal applications. To study these systems, discrete-event simulation software and an FPGA hardware platform are developed. Many applications of pulse processing are explored, including filtering and signal processing, solving differential equations, optimization, the min-sum/Viterbi algorithm, and the decoding of low-density parity-check (LDPC) codes. These applications often match the performance of ideal continuous-time analog systems but only require simple digital hardware. Keywords: time-encoding, spike processing, neuromorphic engineering, bit-stream, delta-sigma, sigma-delta converters, binary-valued continuous-time, relaxation-oscillators.
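
    The claimed equivalence between integrate-and-fire encoding and first-order sigma-delta modulation can be seen in a few lines: an accumulator integrates the input, and a pulse is fired (with the threshold subtracted) whenever the accumulator crosses it, so the time-average of the pulses tracks the input mean. The sketch below is a generic first-order encoder with a unit threshold and inputs assumed in [0, 1], not the thesis's hardware.

```python
def integrate_and_fire(samples, threshold=1.0):
    """First-order sigma-delta style pulse encoder.

    Integrates the input and emits a pulse (1) whenever the accumulator
    crosses the threshold, subtracting the threshold on each firing.
    """
    acc = 0.0
    pulses = []
    for x in samples:
        acc += x
        if acc >= threshold:
            pulses.append(1)
            acc -= threshold
        else:
            pulses.append(0)
    return pulses

# Toy usage: a constant input of 0.3 yields pulses at an average rate of ~0.3.
out = integrate_and_fire([0.3] * 100)
print(sum(out) / len(out))
```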

    Low-Power and Error-Resilient VLSI Circuits and Systems.

    Efficient low-power operation is critically important for the success of next-generation signal processing applications. Device dimensions and supply voltages have been continuously scaled to meet a more constrained power envelope, but scaling has created resiliency challenges, including increasing timing faults and soft errors. Our research aims at designing low-power and robust circuits and systems for signal processing by drawing on circuit, architecture, and algorithm approaches. To gain an insight into the system faults caused by supply voltage reduction, we researched the two primary effects that determine the minimum supply voltage (VMIN) in Intel's tri-gate CMOS technology, namely process variations and gate-dielectric soft breakdown. We determined that voltage scaling increases the timing window during which sequential circuits are vulnerable. Thus, we proposed a new hold-time violation metric to define hold-time VMIN, which has been adopted as a new design standard. Device scaling increases soft errors, which affect circuit reliability. Through extensive soft error characterization using two 65nm CMOS test chips, we studied the soft error mechanisms and their dependence on supply voltage and clock frequency. This study laid the foundation of the first 65nm DSP chip design for a NASA spaceflight project. To mitigate such random errors, we proposed a new confidence-driven architecture that effectively enhances the error resiliency of deeply scaled CMOS and post-CMOS circuits. Designing low-power resilient systems can effectively leverage application-specific algorithmic approaches. To explore design opportunities in the algorithmic domain, we demonstrate an application-specific detection and decoding processor for multiple-input multiple-output (MIMO) wireless communication. To improve the receive error rate for robust wireless communication, we designed a joint detection and decoding technique that encloses detection and decoding in an iterative loop to enhance both interference cancellation and error reduction. A proof-of-concept chip was fabricated for next-generation 4x4 256QAM MIMO systems. Through algorithm-architecture optimizations and low-power circuit techniques, our design achieves significant improvements in throughput, energy efficiency, and error rate, paving the way for future developments in this area.
    PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/110323/1/uchchen_1.pd

    Conceiving Extrinsic Information Transfer Charts for Stochastic Low-Density Parity-Check Decoders

    Stochastic low-density parity-check decoders (SLDPCs) have recently found favor both for correcting transmission errors and for improving hardware efficiency. The main drawback of these decoders is that they require hundreds of time periods to decode each frame, but their chip area is smaller than that of their fixed-point counterparts, so they can achieve a higher hardware efficiency and may consume less energy. In this paper, we propose a novel extrinsic information transfer chart technique for characterizing the iterative decoding convergence of all the sequences involved in the SLDPC. We have conceived a new model, which takes into consideration not only the sequences exchanged between the decoders but also the sequences generated inside the variable-node decoder (those which are stored in the edge memories). In this way, the model is able to predict the number of decoding iterations required for achieving iterative decoding convergence, as confirmed by our own decoder simulations. The proposed technique offers new insights into the operation of SLDPCs, which will facilitate improved designs for the research community.
    Professor L. Hanzo would like to thank the ERC for the financial support of his Advanced Fellow Grant.
    Pérez Pascual, M. A.; Hamilton, A.; Maunder, R. G.; Hanzo, L. (2018). Conceiving Extrinsic Information Transfer Charts for Stochastic Low-Density Parity-Check Decoders. IEEE Access, 6, 55741-55753. https://doi.org/10.1109/ACCESS.2018.2872113
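
    The paper's EXIT model for SLDPC sequences is not reproduced here, but the standard ingredient of any EXIT chart, estimating the mutual information between the transmitted bits and the extrinsic soft values, can be sketched with the usual time-averaging estimator I ≈ 1 − E[log2(1 + exp(−x·L))], where x ∈ {−1, +1} is the transmitted antipodal bit and L the corresponding LLR. This assumes consistent Gaussian-like LLRs, as in conventional EXIT analysis, rather than the stochastic sequences studied in the paper; the function name below is illustrative.

```python
import numpy as np

def mutual_information(bits, llrs):
    """Time-averaging estimate of I(X; L) for antipodal bits and their LLRs.

    bits : array of transmitted bits in {0, 1}
    llrs : array of log-likelihood ratios, positive values favouring bit 0 (x = +1)
    """
    x = 1.0 - 2.0 * np.asarray(bits, dtype=float)   # map 0 -> +1, 1 -> -1
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-x * np.asarray(llrs))))

# Toy usage: LLRs generated under the consistency condition (variance = 2 * mean).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 100_000)
x = 1.0 - 2.0 * bits
mu = 2.0
llrs = mu * x + rng.normal(0.0, np.sqrt(2.0 * mu), bits.size)
print(mutual_information(bits, llrs))  # between 0 and 1, higher = more reliable
```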