
    A Radiation-Hardened CMOS Full-Adder Based on Layout Selective Transistor Duplication

    Single event transients (SETs) have become increasingly problematic for modern CMOS circuits due to the continuous scaling of feature sizes and higher operating frequencies. In safety-critical or radiation-exposed applications especially, circuits must be designed using hardening techniques. In this brief, we present a new radiation-hardened-by-design full-adder cell in 45-nm technology. The proposed design is hardened against transient errors by selective duplication of sensitive transistors, based on a comprehensive radiation-sensitivity analysis. Experimental results show a 62% reduction in the SET sensitivity of the proposed design with respect to the unhardened one. Moreover, compared with state-of-the-art techniques applied to the unhardened full-adder cell, the proposed hardening technique improves performance and power overhead and incurs zero area overhead.
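
    As a rough illustration of the selection step, a sketch along the following lines could rank transistors by a simulated SET-sensitivity score and duplicate only those above a threshold. The data model and the scores are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Transistor:
    name: str
    sensitivity: float  # fraction of simulated strikes causing an SET (hypothetical)

def select_for_duplication(transistors, threshold=0.5):
    """Return the transistors to duplicate, ranked by sensitivity."""
    hardened = [t for t in transistors if t.sensitivity >= threshold]
    return sorted(hardened, key=lambda t: t.sensitivity, reverse=True)

# Invented sensitivity scores for four transistors of a toy cell
cell = [Transistor("MN1", 0.72), Transistor("MP1", 0.11),
        Transistor("MN2", 0.64), Transistor("MP2", 0.35)]
for t in select_for_duplication(cell):
    print(f"duplicate {t.name} (sensitivity {t.sensitivity:.2f})")
```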

    Analog Defect Injection and Fault Simulation Techniques: A Systematic Literature Review

    Since the last century, the exponential growth of the semiconductor industry has led to the creation of tiny and complex integrated circuits, e.g., sensors, actuators, and smart power. Innovative techniques are needed to ensure the correct functionality of analog devices that are ubiquitous in every smart system. The ISO 26262 standard for functional safety in the automotive context specifies that fault injection is necessary to validate all electronic devices. For decades, standardization of defect modeling and injection mainly focused on digital circuits and, to a lesser extent, on analog ones. An initial attempt is being made with the IEEE P2427 draft standard, which has started to give a structured and formal organization to the analog testing field. Various methods have been proposed in the literature to speed up the fault simulation of the defect universe for an analog circuit. A more limited number of papers seek to reduce the overall simulation time by reducing the number of defects to be simulated. This literature survey describes the state-of-the-art of analog defect injection and fault simulation methods. The survey is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodological flow, allowing for a systematic and complete literature survey. Each selected paper has been categorized and presented to provide an overview of all the available approaches. In addition, the limitations of the various approaches are discussed by showing possible future directions.
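
    One defect-reduction idea in this area is to collapse defects that are indistinguishable at the chosen simulation level and simulate a single representative per equivalence class. The sketch below illustrates the idea on a hypothetical flat defect list; the data model is invented and does not follow the IEEE P2427 format.

```python
from collections import defaultdict

# Hypothetical flat defect list: (node, defect_type, parameter)
defects = [
    ("R3", "short", 1e-3), ("R3", "short", 5e-3),   # equivalent at this level
    ("R3", "open", None), ("C1", "short", 2e-3),
]

# Group defects considered equivalent at the simulation level
classes = defaultdict(list)
for node, kind, value in defects:
    classes[(node, kind)].append((node, kind, value))

# Simulate one representative per class instead of the full universe
representatives = [group[0] for group in classes.values()]
print(f"{len(defects)} defects collapsed to {len(representatives)} simulations")
```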

    Statistical Reliability Estimation of Microprocessor-Based Systems

    What is the probability that the execution state of a given microprocessor running a given application is correct, in a certain working environment with a given soft-error rate? Trying to answer this question using fault injection can be very expensive and time consuming. This paper proposes the baseline for a new methodology, based on microprocessor error probability profiling, that aims at estimating fault injection results without the need for a typical fault injection setup. The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in the presence of a soft error, and a static and very fast analysis of the control and data flow of the target software application to compute its probability of success. The presented work goes beyond the dependability evaluation problem; it also has the potential to become the backbone for new tools able to help engineers choose the best hardware and software architecture to structurally maximize the probability of a correct execution of the target software.
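
    A minimal sketch of the two-step estimation, with invented per-instruction probabilities and an independence assumption that the paper's methodology would refine:

```python
# Step 1 (invented numbers): per-instruction probabilities of correct
# completion in the presence of a soft error, from a one-time
# fault-injection characterization of the microprocessor.
instr_success = {"add": 0.9999, "load": 0.9995, "store": 0.9996, "branch": 0.9993}

# Step 2 (invented numbers): instruction mix of the target application,
# obtained from a static analysis of its control and data flow.
app_profile = {"add": 120_000, "load": 45_000, "store": 30_000, "branch": 25_000}

# Combine: multiply per-instruction probabilities, weighted by execution
# counts, assuming errors affect instructions independently.
p_success = 1.0
for instr, count in app_profile.items():
    p_success *= instr_success[instr] ** count

print(f"estimated probability of a correct run: {p_success:.3e}")
```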

    On the Evaluation of SEEs on Open-Source Embedded Static RAMs

    Static RAM modules are widely adopted in high-performance systems. Memories resilient to Single Event Effects (SEEs) are required in many embedded systems used in automotive and aerospace applications to increase their overall resiliency against SEEs. Current SEE-resilient SRAM modules are obtained by applying radiation-hardened-by-design solutions, which lead to elevated area overhead and make it difficult to tune the resiliency to the radiation particle profile. To overcome these limitations, we propose a methodology for the analysis and mitigation of embedded SRAMs generated by the OpenRAM memory compiler. A technology-oriented radiation analysis tool is presented to model the interaction of charged radiation particles with the SRAM layout and identify the sensitive transistors of the SRAM memory. A selective duplication of the sensitive transistors has been applied to the 6T-SRAM cell at the layout level. The designed cell is included in the OpenRAM compiler and used to generate a mitigated 8 Kb SRAM bank. We evaluated the SEE sensitivity by comparative simulation-based radiation analysis, observing a reduction of more than 6 times with respect to the original 6T-SRAM cell for the SEE sensitivity to high-energy heavy ions, with negligible degradation of operating margins and power consumption, and an area overhead of less than ~4%.
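
    Comparative evaluations of this kind are commonly expressed as an event cross-section, sigma = N_errors / (fluence x bits). The sketch below illustrates the metric with invented counts chosen to match the reported ~6x improvement; it is not the paper's data.

```python
def cross_section_per_bit(n_errors, fluence_cm2, n_bits):
    """SEU cross-section per bit: errors / (fluence * bits), in cm^2/bit."""
    return n_errors / (fluence_cm2 * n_bits)

fluence = 1e7              # simulated particles per cm^2 (invented)
bits = 8 * 1024            # the 8 Kb bank
sigma_orig = cross_section_per_bit(600, fluence, bits)   # invented counts
sigma_hard = cross_section_per_bit(95, fluence, bits)
print(f"sigma original: {sigma_orig:.2e} cm^2/bit")
print(f"sigma hardened: {sigma_hard:.2e} cm^2/bit")
print(f"improvement: {sigma_orig / sigma_hard:.1f}x")    # ~6.3x
```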

    Cross-layer Soft Error Analysis and Mitigation at Nanoscale Technologies

    This thesis addresses the challenge of soft error modeling and mitigation at nanoscale technology nodes and pushes the state-of-the-art forward by proposing novel modeling, analysis, and mitigation techniques. The proposed soft error sensitivity analysis platform accurately models both error generation and propagation, starting from technology-dependent device-level simulations all the way up to workload-dependent application-level analysis.

    Digital Design Techniques for Dependable High Performance Computing

    As today's process technologies continue to scale down, circuits in nanoscale VLSI technologies become increasingly vulnerable to radiation-induced soft errors. The reduction of node capacitance and supply voltages, coupled with increasingly denser chips, is raising soft error rates and making them an important design issue. This research work is focused on the development of design techniques for high-reliability modern VLSI technologies, focusing mainly on radiation-induced Single Event Transients (SETs). In this work, we evaluate the complete life-cycle of the SET pulse, from generation to mitigation. A new simulation tool, Rad-Ray, has been developed to simulate and model the passage of heavy ions through the silicon of modern integrated circuits and predict the transient voltage pulse, taking into account the physical description of the design. An analysis and mitigation tool has been developed to evaluate the propagation of the predicted SET pulses within the circuit and apply a selective mitigation technique to the sensitive nodes of the circuit. The analysis and mitigation tools have been applied to many industrial projects as well as the EUCLID space mission project, including more than ten modules. The obtained results demonstrate the effectiveness of the proposed tools.
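
    Compact models of the strike current are often used to seed such electrical simulations; a common choice is the double-exponential pulse, shown below as a generic illustration rather than as Rad-Ray's internal model. All parameter values are invented.

```python
import math

def set_current(t, q_coll=50e-15, tau_rise=5e-12, tau_fall=150e-12):
    """I(t) = Q/(tau_f - tau_r) * (exp(-t/tau_f) - exp(-t/tau_r)) for t >= 0."""
    if t < 0:
        return 0.0
    return q_coll / (tau_fall - tau_rise) * (
        math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

# Sample the pulse on a 1 ps grid and report its peak (invented parameters)
peak = max(set_current(i * 1e-12) for i in range(500))
print(f"peak strike current: {peak * 1e6:.0f} uA")
```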

    Autonomous fault emulation: a new FPGA-based acceleration system for hardness evaluation

    The appearance of nanometer technologies has produced a significant increase in integrated circuit sensitivity to radiation, making the occurrence of soft errors much more frequent, not only in applications working in harsh environments, like aerospace circuits, but also in applications working at the Earth's surface. Therefore, hardened circuits are currently demanded in many applications where fault tolerance was not a concern until very recently. To this purpose, efficient hardness evaluation solutions are required to deal with the increasing size and complexity of modern VLSI circuits. In this paper, a very fast and cost-effective solution for SEU sensitivity evaluation is presented. The proposed approach uses FPGA emulation in an autonomous manner to fully exploit the FPGA emulation speed. Three different techniques to implement it are proposed and analyzed. Experimental results show that the proposed Autonomous Emulation approach can reach execution rates higher than one million faults per second, providing a performance improvement of two orders of magnitude with respect to previous approaches. These rates make it possible to consider very large fault injection campaigns that were not feasible in the past. This work was supported by the Directorate of Research of the Madrid Community Government, Spain (Code 07/0052/2003 2), and by the European Commission and the Spanish Government under the MEDEA+ Project (PARACHUTE-2A701) and the PROFIT Project (CIRCE-FIT-330100-2005-60).
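
    A minimal software analogue of the autonomous-emulation loop is sketched below: a golden execution runs once, then every injected bit-flip is classified with no host interaction inside the campaign. The toy shift-register "design" and all names are illustrative, not the paper's implementation.

```python
import random

def run(inputs, flip=None):
    """Toy 8-bit shift register; flip = (cycle, bit) injects a single SEU."""
    state = 0
    for cycle, x in enumerate(inputs):
        state = ((state << 1) | (x & 1)) & 0xFF
        if flip is not None and flip[0] == cycle:
            state ^= 1 << flip[1]
        # a flip in a bit that is later shifted out never reaches the output
    return state

random.seed(0)
inputs = [random.randrange(2) for _ in range(64)]
golden = run(inputs)

# Exhaustive campaign: every cycle x every bit, classified against golden
outcomes = {"silent": 0, "failure": 0}
for cycle in range(len(inputs)):
    for bit in range(8):
        faulty = run(inputs, flip=(cycle, bit))
        outcomes["failure" if faulty != golden else "silent"] += 1
print(outcomes)  # most flips are masked by later shifts; late ones fail
```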

    Digital design techniques for dependable High-Performance Computing

    The abstract is provided in the attachment.

    In-Circuit Mitigation Approach of Single Event Transients for 45nm Flip-Flops

    Nowadays, radiation-induced Single Event Transients (SETs) are a leading cause of critical errors in nanometric CMOS integrated circuits. In this work, we propose a workflow for analyzing and mitigating radiation-induced transient errors in nanometric CMOS integrated circuits. The analysis phase starts with the Rad-Ray tool, developed to mimic the passage of radiation particles through the silicon of the cells and identify the features of the generated transient pulses. The tool is integrated with an electrical simulator to evaluate the dynamic behavior of the transient pulses inserted and propagated in the circuit. A tunable mitigation solution is proposed: a filtering block is inserted before the storage element and tuned to the duration and amplitude of the expected transient pulse identified in the analysis phase. Experimental results are obtained by applying the proposed approach to the 45 nm flip-flop available in the FreePDK design kit; comparing the dynamic error rate of the original flip-flop with that of the mitigated one shows a reduction of sensitivity up to 56% with respect to the original version, with negligible performance degradation.
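
    A behavioral sketch of the tuning idea: a temporal filter in front of the flip-flop rejects pulses shorter than its delay, so the delay is chosen from the pulse widths predicted in the analysis phase. The model and the numbers below are illustrative only.

```python
def survives_filter(pulse_width_ps, filter_delay_ps):
    """A guard-gate style temporal filter passes only pulses wider than its delay."""
    return pulse_width_ps > filter_delay_ps

predicted_pulses = [35, 60, 110, 180]   # ps, as if from the analysis phase (invented)
filter_delay = 120                      # ps, tuned to cover most predicted pulses
latched = [w for w in predicted_pulses if survives_filter(w, filter_delay)]
print(f"{len(latched)} of {len(predicted_pulses)} pulses can still be latched")
```

    The trade-off is explicit in this form: a larger filter delay suppresses more of the predicted pulses but adds propagation delay on the data path, which is why the tuning is driven by the pulse features extracted in the analysis phase.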

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
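
    As a worked example of importance sampling, the rare-event probability P(X > 6) for a standard normal X can be estimated by drawing from a proposal shifted into the failure region and reweighting by the likelihood ratio; plain Monte Carlo at this sample size could not resolve an event this rare. The setup below is a generic textbook instance, not one of the paper's case studies.

```python
import math
import random

def estimate_tail(n=100_000, threshold=6.0, shift=6.0):
    """Estimate P(X > threshold), X ~ N(0,1), by sampling from N(shift, 1)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(shift, 1.0)              # draw from the proposal
        if x > threshold:
            # likelihood ratio f(x)/g(x) of target over proposal densities
            total += math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
    return total / n

random.seed(0)
print(f"P(X > 6) ~= {estimate_tail():.3e}")   # true value is about 9.9e-10
```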