
    Muller C-element based Decoder (MCD): A Decoder Against Transient Faults

    This work extends the analysis and application of a digital error-correction method called Muller C-element Decoding (MCD), which has been proposed for fault masking in logic circuits composed of unreliable elements. The technique employs cascaded Muller C-elements and XOR gates to achieve efficient error correction in the presence of internal upsets. The error-correction analysis of the MCD architecture and an investigation of the C-element's robustness are introduced first. We demonstrate that MCD provides an error-correction benefit even at a high rate of internal faults. Notably, for a (3,6) short-length LDPC code, when the decoding process is internally error-free, MCD also achieves a gain in decoding performance compared to the well-known Gallager bit-flipping method. We further consider the application of MCD to a general-purpose fault-tolerant model, coded Dual Modular Redundancy (cDMR), which offers low-redundancy error resilience for contemporary logic systems as well as future nanoelectronic architectures.
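
    To make the masking primitive concrete, the following Python sketch models the behavior of a Muller C-element and a hypothetical MCD-style stage combining it with an XOR correction; the stage structure and all names are illustrative assumptions, not the authors' circuit.

        class MullerC:
            """Behavioral model of a Muller C-element: the output follows the
            inputs only when they agree; otherwise it holds its previous value,
            masking a transient disagreement on one input."""

            def __init__(self, init=0):
                self.state = init

            def __call__(self, a, b):
                if a == b:              # inputs agree: propagate the common value
                    self.state = a
                return self.state       # inputs disagree: hold the last state


        def mcd_stage(copy_a, copy_b, correction, c):
            """Hypothetical MCD-style stage (assumption): an XOR applies a
            parity-derived correction bit to one copy, and the C-element masks
            a transient mismatch between the two copies."""
            return c(copy_a ^ correction, copy_b)


        # A single-cycle upset on one input is masked:
        c = MullerC(init=0)
        assert c(1, 1) == 1      # both inputs agree: output becomes 1
        assert c(0, 1) == 1      # transient flip on one input: output held at 1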

    Density Evolution and Functional Threshold for the Noisy Min-Sum Decoder

    This paper investigates the behavior of the Min-Sum decoder running on noisy devices. The aim is to evaluate the robustness of the decoder in the presence of computation noise, e.g. due to faulty logic in the processing units, which represents a new source of errors that may occur during the decoding process. To this end, we first introduce probabilistic models for the arithmetic and logic units of the finite-precision Min-Sum decoder, and then carry out the density evolution analysis of the noisy Min-Sum decoder. We show that in some particular cases, the noise introduced by the device can help the Min-Sum decoder escape from fixed-point attractors, and may actually result in an increased correction capacity with respect to the noiseless decoder. We also reveal the existence of a specific threshold phenomenon, referred to as the functional threshold. The behavior of the noisy decoder is demonstrated in the asymptotic limit of the code length, by using "noisy" density evolution equations, and is also verified in the finite-length case by Monte Carlo simulation.
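
    As a sketch of the kind of computation noise studied, the Python fragment below implements a standard Min-Sum check-node update and injects a toy noise model (a sign flip with some probability); the injection point and probability are our assumptions, not the paper's probabilistic models.

        import random

        def noisy_check_node(msgs, p_flip=0.01):
            """Min-Sum check-node update with a toy computation-noise model:
            for edge i, the outgoing message is the product of the signs times
            the minimum magnitude of the other incoming messages; each output
            sign is flipped with probability p_flip (assumed noise model)."""
            out = []
            for i in range(len(msgs)):
                others = msgs[:i] + msgs[i + 1:]
                sign = 1
                for m in others:
                    sign = -sign if m < 0 else sign
                mag = min(abs(m) for m in others)
                if random.random() < p_flip:   # faulty logic in the unit
                    sign = -sign
                out.append(sign * mag)
            return out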

    Techniques and Prospects for Fault-tolerance in Post-CMOS ULSI

    This paper presents a survey of fault-masking techniques suitable for tolerating short-duration transient upsets in minimum-scale switching devices. Two types of fault masking are considered. The first, coded dual-modular redundancy (cDMR), represents a family of parity-checking methods suitable for correcting a low rate of transient upsets. The second, Restorative Feedback (RFB), is a triple-modular solution suitable for compensating a higher rate of transient upsets. We show that cDMR can be used efficiently for crossbar-style logic, but is not efficient in general for all logic styles. By contrast, RFB has a fixed redundancy and can be applied to any logic circuit. Finally, we propose novel circuits for a ternary Muller C-element implementation based on carbon nanotube FET devices.
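
    For reference, plain triple-modular redundancy reduces to a bitwise 2-of-3 majority vote, sketched below in Python; the restorative feedback path that distinguishes RFB from plain TMR is not modeled here.

        def majority3(a, b, c):
            """Bitwise 2-of-3 majority vote over integer bit-vectors: the core
            masking operation of triple-modular redundancy (plain TMR only;
            RFB extends this with a feedback path not modeled here)."""
            return (a & b) | (a & c) | (b & c)


        # One corrupted copy out of three is masked:
        assert majority3(0b1010, 0b1010, 0b0110) == 0b1010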

    A Space-Time Redundancy Technique for Embedded Stochastic Error Correction

    An error-correction algorithm, referred to as the Low-Density Parity-Check (LDPC) stochastic decoding technique, has recently been introduced for implementing iterative LDPC decoders in logic technologies with a high rate of transient faults. In this work, a modified algorithm that includes a feedback mechanism is first presented. Temporal majority logic is also applied at the decoder's output, providing an additional dimension of redundancy. Compared to the Gallager-A decoding method, the combination of feedback with temporal redundancy is shown to significantly increase the decoder's resilience against a high rate of internal upsets, with a gain of up to three orders of magnitude.
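
    The temporal-redundancy idea can be sketched as sampling the decoder output over several clock cycles and taking a per-bit majority; the sampling count and interface below are illustrative assumptions, not the paper's implementation.

        from collections import Counter

        def temporal_majority(sample_output, n_samples=5):
            """Toy temporal majority logic: call sample_output() over several
            cycles (each call may suffer transient upsets) and keep, for each
            bit position, the value seen most often across the samples."""
            samples = [sample_output() for _ in range(n_samples)]
            return [Counter(s[i] for s in samples).most_common(1)[0][0]
                    for i in range(len(samples[0]))]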

    Stochastic multiple-stream decoding of Cortex codes

    Cortex codes are short-length block codes with a large Hamming distance. Their modular construction, based on simple and very short block codes, makes them difficult to decode efficiently with digital decoders implementing the Sum-Product algorithm. However, this construction lends itself to analog decoding with performance close to ML decoding, as was recently demonstrated. A digital decoding method close to analog decoding is stochastic decoding. This paper brings the two together to design a Cortex stochastic architecture with good decoding performance. Moreover, the proposed stochastic decoder architecture is simpler than the customary one: instead of edge memories or tracking forecast memories, it uses multiple streams to represent the same probability, together with deterministic shufflers. This results in a more efficient architecture in terms of the ratio between data throughput and hardware complexity. Finally, the proposed method offers decoding performance similar to a Min-Sum decoder with 50 iterations.
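
    To illustrate the representation a stochastic decoder operates on, the sketch below encodes probabilities as Bernoulli bitstreams and shows a parity-check node as a bitwise XOR; the multiple-stream and deterministic-shuffler machinery of the paper is not reproduced, and the stream length is an arbitrary assumption.

        import random

        def to_stream(p, length=4096):
            """Encode probability p as a Bernoulli bitstream with P(bit=1) = p."""
            return [1 if random.random() < p else 0 for _ in range(length)]

        def check_node(sa, sb):
            """Stochastic parity-check node: the bitwise XOR of two independent
            streams has '1'-density p_a*(1-p_b) + p_b*(1-p_a)."""
            return [a ^ b for a, b in zip(sa, sb)]

        def density(stream):
            """Recover the represented probability as the stream's '1'-density."""
            return sum(stream) / len(stream)

        # density(check_node(to_stream(0.2), to_stream(0.7))) is close to
        # 0.2*0.3 + 0.7*0.8 = 0.62, up to the Monte-Carlo noise of the streams.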

    Calcul sur architecture non fiable (Computing on Unreliable Architectures)

    Although circuits could in theory be fabricated error-free by costly worst-case design methodologies, they remain susceptible to transient faults caused by radiation, temperature sensitivity, and other effects. Conversely, an error-resilient design relieves the manufacturing process from the variability issue and thereby saves material cost. Since variability and transient upsets worsen as new fabrication processes emerge and feature sizes shrink, the need for robust design is pressing. This thesis addresses the problem of designing on unreliable circuits. The main contributions are fourfold. First, a fault-tolerant method offering fast error correction at low redundancy cost is presented. Second, we introduce a judicious two-dimensional criterion for estimating the reliability and hardware efficiency of a circuit. Third, a general-purpose model is proposed that offers low-redundancy error resilience for contemporary logic systems as well as future nanoelectronic architectures. Finally, a decoder tolerant to internal transient faults is designed.

    Cross-Layer Optimization for Power-Efficient and Robust Digital Circuits and Systems

    With increasing demand for digital services, performance and power efficiency become vital requirements for digital circuits and systems. However, the enabling CMOS technology scaling has been facing significant challenges from device uncertainties, such as process, voltage, and temperature variations. To ensure system reliability, worst-case corner assumptions are usually made at each design level. However, the over-pessimistic worst-case margin leads to unnecessary power waste and performance loss as high as 2.2x. Since optimizations are traditionally confined to each specific level, those safe margins can hardly be properly exploited. To tackle this challenge, this Ph.D. thesis proposes a cross-layer optimization for digital signal processing circuits and systems, to achieve a global balance between power consumption and output quality. To conclude, the traditional over-pessimistic worst-case approach leads to huge power waste. In contrast, the adaptive voltage scaling approach saves power (25% for the CORDIC application) by providing a just-needed supply voltage. The power saving is maximized (46% for CORDIC) when a more aggressive voltage over-scaling scheme is applied. The sparsely occurring circuit errors produced by aggressive voltage over-scaling are mitigated by higher-level error-resilient designs. For functions like FFT and CORDIC, smart error-mitigation schemes are proposed to enhance reliability against soft errors and timing errors, respectively. Applications like massive MIMO systems are robust against lower-level errors thanks to their intrinsically redundant antennas, which makes it possible to embrace digital hardware that trades quality for power savings.

    Survey of FPGA applications in the period 2000 – 2015 (Technical Report)

    Romoth J, Porrmann M, Rückert U. Survey of FPGA applications in the period 2000 – 2015 (Technical Report); 2017. Since their introduction, FPGAs have appeared in more and more fields of application. Their key advantage is the combination of software-like flexibility with performance otherwise reserved for hardware. Nevertheless, every application field imposes particular requirements on the computational architecture used. This paper provides an overview of the different topics FPGAs have been used for in the last 15 years of research, and why they have been chosen over other processing units such as CPUs.