149 research outputs found

    Memory Reliability Enhancement against Multiple Cell Upsets Using Decimal Matrix Code for 32-Bit Data

    An important issue in the reliability of memories exposed to radiation environments is transient multiple cell upsets (MCUs). Many improved packaging techniques are available to protect memory data from radiation-induced transients, but a given package protects against only a limited range of radiation. Today's devices are exposed to a very wide range of environmental radiation owing to their increasing use in wireless communication, so additional data-preservation techniques are preferred for validating data before it is processed. Some of these techniques store encoded data in the memory; these are error correction codes (ECCs). An ECC that requires fewer redundant bits and a minimal delay overhead for data correction is always preferred. This paper presents an FPGA-based implementation of a memory error detection and correction code that uses a simple decimal-addition algorithm to encode the data to be stored in memory. Decoding for error detection and correction is based on the Hamming code. The technique uses a divide-symbol concept that partitions the linear data into groups of symbols; the symbol length is inversely proportional to the delay overhead of the code.
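
    To make the encoding step concrete, the Python sketch below illustrates the divide-symbol and decimal-addition ideas for a 32-bit word. It is a minimal illustration under assumed parameters (4-bit symbols arranged as a 2 x 4 matrix) rather than the paper's exact design, and it only detects mismatches; the published scheme additionally locates and corrects errors by combining the horizontal (decimal) and vertical (XOR) syndromes.

        # Minimal sketch of a decimal-matrix-style encoder for a 32-bit word.
        # Symbol width, matrix shape, and bit layout are assumptions for illustration.
        SYMBOL_BITS = 4          # assumed symbol width; the abstract notes it trades off against delay
        SYMBOLS_PER_ROW = 4      # 32 bits -> 8 symbols -> 2 rows of 4

        def split_symbols(word32):
            """Divide-symbol step: cut a 32-bit word into 4-bit symbols."""
            return [(word32 >> (SYMBOL_BITS * i)) & 0xF for i in range(32 // SYMBOL_BITS)]

        def encode(word32):
            """Redundancy: per-row decimal (integer) sums plus column-wise XOR parities."""
            syms = split_symbols(word32)
            rows = [syms[i:i + SYMBOLS_PER_ROW] for i in range(0, len(syms), SYMBOLS_PER_ROW)]
            row_sums = [sum(r) for r in rows]                                       # decimal addition
            col_parity = [rows[0][c] ^ rows[1][c] for c in range(SYMBOLS_PER_ROW)]  # vertical XOR
            return row_sums, col_parity

        def check(word32, row_sums, col_parity):
            """Recompute the redundancy and flag any mismatch (detection only in this sketch)."""
            return encode(word32) == (row_sums, col_parity)

        stored = 0xDEADBEEF
        redundancy = encode(stored)
        corrupted = stored ^ (0b11 << 5)        # a 2-bit, MCU-style upset inside one symbol
        assert check(stored, *redundancy)
        assert not check(corrupted, *redundancy)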

    Exploration and Analysis of Combinations of Hamming Codes in 32-bit Memories

    Reducing the threshold voltage of electronic devices dramatically increases their sensitivity to electromagnetic radiation, raising the probability that the content of memory cells will change. Designers mitigate such failures with techniques like Error Correction Codes (ECCs) to maintain information integrity. Although there are several studies of ECC usage in memories for space applications, there is still no consensus on which type of ECC to choose or how to organize it in memory. This work analyzes several configurations of Hamming codes applied to 32-bit memories intended for space applications. It considers three Hamming codes, Ham(31,26), Ham(15,11), and Ham(7,4), as well as combinations of these codes, evaluated against 36 error patterns ranging from one to four bit-flips. The experimental results show that the Ham(31,26) configuration, with five redundancy bits, achieved the highest single-error correction rate, almost 97%, with double, triple, and quadruple error correction rates of 78.7%, 63.4%, and 31.4%, respectively. In contrast, a configuration of four Ham(7,4) codes, which uses twelve redundancy bits, corrects only 87.5% of single errors.
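
    As a concrete reference for the component codes combined above, the Python sketch below implements a textbook Ham(7,4) encoder and single-error corrector. The bit ordering and the example data are illustrative choices and not taken from the paper.

        # Textbook Ham(7,4): 4 data bits, 3 parity bits, corrects any single bit-flip.
        def ham74_encode(d):
            """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p4, d2, d3, d4]."""
            d1, d2, d3, d4 = d
            p1 = d1 ^ d2 ^ d4
            p2 = d1 ^ d3 ^ d4
            p4 = d2 ^ d3 ^ d4
            return [p1, p2, d1, p4, d2, d3, d4]

        def ham74_correct(c):
            """Recompute parities; a nonzero syndrome gives the 1-based position of a single flip."""
            c = c[:]
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
            syndrome = s1 + (s2 << 1) + (s4 << 2)
            if syndrome:
                c[syndrome - 1] ^= 1           # valid for single errors only
            return c

        word = ham74_encode([1, 0, 1, 1])
        upset = word[:]
        upset[5] ^= 1                          # single bit-flip
        assert ham74_correct(upset) == word    # recovered

    Larger members of the family, such as Ham(31,26), follow the same syndrome principle with more data bits per redundancy bit, which is why that configuration needs only five check bits for 26 data bits.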

    Design and Analysis of an Adjacent Multi-bit Error Correcting Code for Nanoscale SRAMs

    Increasing static random access memory (SRAM) bitcell density is a major driving force for semiconductor technology scaling. The industry-standard 2x reduction in SRAM bitcell area per technology node has led to a proliferation of memory-intensive applications, as greater memory system capacity can be realized per unit area. Coupled with this increasing capacity is an increasing SRAM system-level soft error rate (SER). Soft errors, caused by galactic radiation and radioactive chip packaging material, corrupt a bitcell’s data-state and are a potential cause of catastrophic system failures. Further, reductions in device geometries, design rules, and sensitive-node capacitances increase the probability of multiple adjacent bitcells being upset per particle strike to over 30% of the total SER below the 45 nm process node. Traditionally, these upsets have been addressed using a simple error correction code (ECC) combined with word interleaving. With continued scaling, however, errors beyond this setup begin to emerge. Although more powerful ECCs exist, they come at increased overhead in terms of area and latency. Additionally, interleaving adds complexity to the system and may not always be feasible for a given architecture. In this thesis, a new class of ECC targeted at adjacent multi-bit upsets (MBUs) is proposed and analyzed. These codes present a tradeoff between the currently popular single-error-correcting, double-error-detecting (SEC-DED) ECCs used in SRAMs (which are unable to correct MBUs) and the more robust multi-bit ECC schemes used for MBU reliability. The proposed codes are evaluated and compared against other ECCs using a custom test suite and multi-bit error channel model developed in Matlab, as well as Verilog hardware description language (HDL) implementations synthesized with Synopsys Design Compiler and a commercial 65 nm bulk CMOS standard cell library. Simulation results show that for the same check-bit overhead as a conventional 64-data-bit SEC-DED code, the proposed scheme provides a corrected-SER approximately equal to the Bose-Chaudhuri-Hocquenghem (BCH) double-error-correcting (DEC) code and a 4.38x improvement over the SEC-DED code in the same error channel. Meanwhile, for 3 additional check-bits (still 3 fewer than the BCH DEC code), a triple-adjacent-error-correcting version of the proposed code provides a 2.35x improvement in corrected-SER over the BCH DEC code with 90.9% less ECC circuit area and 17.4% less error correction delay. For further verification, a 0.4-1.0 V 75 kb single-cycle SRAM macro protected with a programmable, up-to-3-adjacent-bit-correcting version of the proposed ECC has been fabricated in a commercial 28 nm bulk CMOS process. The SRAM macro has undergone neutron irradiation testing at the TRIUMF Neutron Irradiation Facility in Vancouver, Canada. Measurement results show a 189x improvement in SER over an unprotected memory with no ECC enabled and a 5x improvement over a traditional single-error-correction (SEC) code at 0.5 V using 1-way interleaving for the same number of check-bits, which is comparable with the 4.38x improvement observed in simulation. Measurement results also confirm an average active energy of 0.015 fJ/bit at 0.4 V and an average 80 mV reduction in VDDMIN across eight packaged chips when the ECC is enabled. Both the SRAM macro and the ECC circuit were designed for dynamic voltage and frequency scaling for both nominal and low-voltage applications using a full-custom circuit design flow.
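
    The Python sketch below illustrates the baseline scheme the thesis builds on: word interleaving, where the bits of several logical words are physically interleaved so that an adjacent multi-bit upset is split across different words and a per-word SEC or SEC-DED code sees at most one error each. The 4-way interleave degree and the 8-bit toy word size are assumptions for illustration only.

        # Toy model of bit interleaving across DEGREE logical words in one physical row.
        DEGREE = 4      # interleave factor (assumed)
        WORD_BITS = 8   # per-word size in this toy example (assumed)

        def interleave(words):
            """Physical row layout: w0[0], w1[0], w2[0], w3[0], w0[1], w1[1], ..."""
            return [words[w][b] for b in range(WORD_BITS) for w in range(DEGREE)]

        def deinterleave(row):
            return [[row[b * DEGREE + w] for b in range(WORD_BITS)] for w in range(DEGREE)]

        words = [[(w + b) & 1 for b in range(WORD_BITS)] for w in range(DEGREE)]
        row = interleave(words)

        for i in (9, 10, 11):                  # a strike flipping 3 physically adjacent cells...
            row[i] ^= 1

        # ...shows up as at most one bit-flip per logical word, which SEC(-DED) can correct.
        errors_per_word = [sum(a != b for a, b in zip(orig, got))
                           for orig, got in zip(words, deinterleave(row))]
        assert max(errors_per_word) == 1

    The proposed adjacent-MBU codes aim for a similar effect within a single non-interleaved word, avoiding the layout complexity that the interleaving step adds.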

    Cross-layer reliability evaluation, moving from the hardware architecture to the system level: A CLERECO EU project overview

    Advanced computing systems realized in forthcoming technologies hold the promise of a significant increase in computational capabilities. However, the same path that is leading technology toward these remarkable achievements is also making electronic devices increasingly unreliable. Developing new methods to evaluate the reliability of these systems at an early design stage has the potential to save costs, produce optimized designs, and have a positive impact on product time-to-market. The CLERECO European FP7 research project addresses early reliability evaluation with a cross-layer approach that spans computing disciplines, computing system layers, and computing market segments. The fundamental objective of the project is to investigate in depth a methodology for assessing system reliability early in the design cycle of future systems of the emerging computing continuum. This paper presents a general overview of the CLERECO project, focusing on the main tools and models being developed, which could be of interest to the research community and to engineering practice.

    Radiation Hardened by Design Methodologies for Soft-Error Mitigated Digital Architectures

    Digital architectures for data encryption, processing, clock synthesis, data transfer, etc. are susceptible to radiation-induced soft errors due to charge collection in complementary metal oxide semiconductor (CMOS) integrated circuits (ICs). Radiation hardening by design (RHBD) techniques such as double modular redundancy (DMR) and triple modular redundancy (TMR) are used for error detection and correction, respectively, in such architectures. Multiple-node charge collection (MNCC) causes domain crossing errors (DCEs), which can render the redundancy ineffectual. This dissertation describes techniques to ensure DCE mitigation with statistical confidence for various designs. Both sequential and combinational logic are separated using these custom and computer-aided design (CAD) methodologies. Radiation vulnerability and design overhead are studied on VLSI sub-systems, including an advanced encryption standard (AES) block that is DCE-mitigated using module-level coarse separation in a 90-nm process with 99.999% DCE mitigation. A radiation-hardened microprocessor (HERMES2) is implemented in both 90-nm and 55-nm technologies with an interleaved separation methodology achieving 99.99% DCE mitigation, while providing 4.9% increased cell density, 28.5% reduced routing, and 5.6% reduced power dissipation over the module-fences implementation. A DMR register file (RF) is implemented in a 55-nm process and used in the HERMES2 microprocessor. The custom-designed RF array and the APR-designed decoders are explored with a focus on design cycle time. Quality of results (QOR) is studied from a power, performance, area, and reliability (PPAR) perspective to ascertain the improvement over other design techniques. A radiation-hardened all-digital multiplying pulsed digital delay line (DDL) is designed for double data rate (DDR2/3) applications for data-eye centering during high-speed off-chip data transfer. The effects of noise, radiation particle strikes, and statistical variation on the designed DDL are studied in detail. The design achieves best-in-class 22.4 ps peak-to-peak jitter over a 100-850 MHz range at 14 pJ/cycle energy consumption. Vulnerability of the non-hardened design is characterized, and portions of the redundant DDL are separated in custom and auto-place-and-route (APR) flows. Thus, a range of designs for mission-critical applications are implemented using the methodologies proposed in this work, and their potential PPAR benefits are explored in detail.
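
    For readers less familiar with the redundancy schemes named above, the short Python sketch below models a bitwise 2-of-3 TMR majority vote and a DMR comparison, and shows why a domain crossing error (one strike upsetting two redundant copies) defeats the vote; this is the failure mode the separation methodologies target. The word width and error masks are arbitrary illustrative values.

        def tmr_vote(a, b, c):
            """Bitwise 2-of-3 majority over three redundant copies of a word (TMR correction)."""
            return (a & b) | (a & c) | (b & c)

        def dmr_detect(a, b):
            """DMR flags disagreement but cannot tell which copy is correct (detection only)."""
            return a != b

        golden = 0b1011_0110
        hit = golden ^ 0b0100                                  # one bit upset in a copy
        assert tmr_vote(hit, golden, golden) == golden         # single-copy upset: voted out
        assert tmr_vote(hit, hit, golden) != golden            # DCE: two copies hit, vote fails
        assert dmr_detect(golden, hit)                         # DMR: mismatch detected only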

    Boolean Weightless Neural Network Architectures

    A collection of hardware weightless Boolean elements has been developed. These form fundamental building blocks with particular pertinence to the field of weightless neural networks, and they have also been shown to have merit in their own right for the design of robust architectures. A major element of this work is a collection of weightless Boolean sum-and-threshold techniques, including an implementation of L-max, also known as N-point thresholding. These elements have been applied to design a Boolean weightless hardware version of Austin’s ADAM neural network. ADAM is further enhanced by the addition of a new learning paradigm, non-Hebbian learning, which concentrates on the association of ‘dis-similarity’, on the premise that this is as important as areas of similarity. Image processing using hardware weightless neural networks is investigated through simulation of digital filters using a Type 1 Neuroram neuro-filter. Simulations have been performed in MATLAB to compare the results with a conventional median filter, and Type 1 Neuroram has been tested on an extended collection of noise types. The importance of the threshold has been examined, along with the effect of cascading both types of filter. This research has led to the development of several novel weightless hardware elements applicable to image processing. These patented elements include a weightless thermocoder and two weightless median filters; these novel, robust, high-speed weightless filters have been compared with conventional median filters. The robustness of these architectures has been investigated under accelerated ground-based neutron radiation simulating the atmospheric radiation spectrum experienced at commercial avionic altitudes. A trial investigating the resilience of weightless hardware Boolean elements in comparison to standard weighted arithmetic logic is detailed, examining the effects on the operation of the function when implemented on hardware experiencing single event effects induced by high-energy neutron bombardment. Further weightless Boolean elements are detailed which contribute to the development of a weightless implementation of the traditionally weighted self-ordered map.
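
    To give a software picture of the sum-and-threshold and L-max (N-point thresholding) building blocks mentioned above, the Python sketch below models both operations. The input sizes, threshold, and L value are assumed for illustration and do not reflect the thesis's hardware parameters.

        def sum_and_threshold(bits, threshold):
            """Weightless sum-and-threshold: fire when at least `threshold` Boolean inputs are set."""
            return 1 if sum(bits) >= threshold else 0

        def l_max(responses, l):
            """L-max / N-point thresholding: set the outputs holding the L largest responses."""
            cutoff = sorted(responses, reverse=True)[l - 1]
            return [1 if r >= cutoff else 0 for r in responses]

        assert sum_and_threshold([1, 0, 1, 1, 0], threshold=3) == 1
        assert l_max([3, 7, 1, 7, 2], l=2) == [0, 1, 0, 1, 0]

    Note that ties at the cutoff value can set more than L outputs in this simple model.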

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design comprised 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other featured presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data system performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.

    NASA Tech Briefs, December 1988

    This month's technical section includes forecasts for 1989 and beyond by NASA experts in the following fields: Integrated Circuits; Communications; Computational Fluid Dynamics; Ceramics; Image Processing; Sensors; Dynamic Power; Superconductivity; Artificial Intelligence; and Flow Cytometry. The quotes provide a brief overview of emerging trends and describe inventions and innovations being developed by NASA, other government agencies, and private industry that could make a significant impact in coming years. A second bonus feature in this month's issue is the expanded subject index that begins on page 98. The index contains cross-referenced listings for all technical briefs appearing in NASA Tech Briefs during 1988.

    Product assurance technology for custom LSI/VLSI electronics

    The technology for obtaining custom integrated circuits from CMOS-bulk silicon foundries using a universal set of layout rules is presented. The technical efforts were guided by the requirement to develop a 3-micron CMOS test chip for the Combined Release and Radiation Effects Satellite (CRRES). This chip contains both analog and digital circuits. The development employed all the elements required to obtain custom circuits from silicon foundries, including circuit design, foundry interfacing, circuit test, and circuit qualification.