
    Modeling the Impact of Process Variation on Resistive Bridge Defects

    Recent research has shown that tests generated without taking process variation into account may lead to a loss of test quality. At present there is no efficient device-level modeling technique that captures the effect of process variation on resistive bridges. This paper presents a fast and accurate technique to model the effect of process variation on resistive bridge defects. The proposed model is implemented in two stages: first, it employs an accurate transistor model (BSIM4) to calculate the critical resistance of a bridge; second, the effect of process variation is incorporated into this model through three transistor parameters: gate length (L), threshold voltage (Vth), and effective mobility (μeff), each of which follows a Gaussian distribution. Experiments are conducted on a 65-nm gate library (for illustration purposes), and results show that, compared with HSPICE, the proposed modeling technique is on average more than 7 times faster, with a worst-case error in bridge critical resistance of 0.8%.
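
    The two-stage idea above lends itself to a simple Monte Carlo illustration. The Python sketch below samples the three Gaussian parameters the abstract names (L, Vth, μeff) and propagates them through a square-law drain-current model standing in for BSIM4; all nominal values, sigmas, and the first-order relation between drive strength and critical bridge resistance are illustrative assumptions, not the paper's model.

```python
# Hypothetical sketch: Monte Carlo sampling of the three Gaussian transistor
# parameters named in the abstract (L, Vth, ueff). The square-law drain-current
# model used here is a stand-in for BSIM4, and all nominal values and sigmas
# are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Illustrative 65-nm nominal values and spreads (assumed).
L    = rng.normal(65e-9,  2e-9,  N)   # gate length [m]
Vth  = rng.normal(0.35,   0.02,  N)   # threshold voltage [V]
ueff = rng.normal(0.030,  0.002, N)   # effective mobility [m^2/Vs]

VDD, W, Cox = 1.2, 130e-9, 1.8e-2     # supply, width, oxide cap/area (assumed)

# Simplified saturation drive current (square-law stand-in for BSIM4).
Idrive = 0.5 * ueff * Cox * (W / L) * (VDD - Vth) ** 2

# A bridge is detected while its resistance keeps the victim node past the
# logic threshold; to first order the critical resistance scales inversely
# with driver strength (illustrative relation, not the paper's model).
R_crit = (VDD / 2) / Idrive

print(f"critical resistance: mean={R_crit.mean():.0f} ohm, "
      f"std={R_crit.std():.0f} ohm")
```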

    Testing microelectronic biofluidic systems

    According to the 2005 International Technology Roadmap for Semiconductors, the integration of emerging nondigital CMOS technologies will require radically different test methods, posing a major challenge for designers and test engineers. One such technology is microelectronic fluidic (MEF) arrays, which have rapidly gained importance in many biological, pharmaceutical, and industrial applications. The advantages of these systems, such as operation speed, use of very small amounts of liquid, on-board droplet detection, signal conditioning, and extensive digital signal processing, make them very promising. However, testable design of these devices in a mass-production environment is still in its infancy, hampering their low-cost introduction to the market. This article describes analog and digital MEF design and testing methods.

    Determining DfT Hardware by VHDL-AMS Fault Simulation for Biological Micro-Electronic Fluidic Arrays

    Interest in microelectronic fluidic arrays for biomedical applications, such as DNA determination, is rapidly increasing. In order to evaluate these systems in terms of required Design-for-Test structures, fault simulations in both the fluidic and electronic domains are necessary. VHDL-AMS can be used successfully for this purpose. This paper presents a highly testable architecture for a DNA bio-sensing array, its basic sensing concept, its fluidic modeling, and a sensitivity analysis. The overall VHDL-AMS fault simulation of the system is also shown.
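
    As a rough illustration of what such a fault simulation does, the Python sketch below (a stand-in for the paper's VHDL-AMS models) injects simple catastrophic and parametric faults into a behavioral sensor-cell model and counts how many a threshold test detects; the fault list, tolerance band, and sensor model are all assumed for illustration.

```python
# Minimal fault-simulation sketch in Python (a stand-in for the paper's
# VHDL-AMS models): inject simple faults into a behavioral sensor-cell model
# and count how many a threshold test detects. Fault list, thresholds, and
# the sensor model are illustrative assumptions.
GOOD_OUTPUT = 1.0          # nominal sensor reading (assumed)
TOLERANCE = 0.1            # pass/fail band of the test (assumed)

def sensor(gain=1.0, offset=0.0, stuck=None):
    """Behavioral model of one sensing cell."""
    return stuck if stuck is not None else gain * GOOD_OUTPUT + offset

# Hypothetical fault list: (name, parameter overrides).
faults = [
    ("stuck_low",    {"stuck": 0.0}),
    ("stuck_high",   {"stuck": 2.0}),
    ("gain_drift",   {"gain": 0.8}),
    ("small_offset", {"offset": 0.05}),   # likely escapes the test
]

detected = 0
for name, overrides in faults:
    reading = sensor(**overrides)
    caught = abs(reading - GOOD_OUTPUT) > TOLERANCE
    detected += caught
    print(f"{name:12s}: reading={reading:.2f} detected={caught}")

print(f"fault coverage: {detected}/{len(faults)}")
```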

    Experimental analysis of computer system dependability

    This paper reviews an area that has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
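
    Of the statistical techniques the survey introduces, importance sampling is the one most easily shown in a few lines. The Python sketch below estimates a rare failure probability P(X > t) for a standard normal X by sampling from a proposal shifted into the failure region and reweighting by the density ratio; the threshold and sample budget are illustrative only.

```python
# Illustrative importance-sampling example for the technique the survey
# introduces: estimating a rare failure probability P(X > t) for a standard
# normal X by sampling from a shifted proposal and reweighting. Numbers are
# for demonstration only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
t, N = 5.0, 100_000                    # rare threshold, sample budget

# Naive Monte Carlo: almost never sees the event at this budget.
naive = (rng.standard_normal(N) > t).mean()

# Importance sampling: draw from N(t, 1), weight by density ratio p(x)/q(x).
x = rng.normal(t, 1.0, N)
w = norm.pdf(x) / norm.pdf(x, loc=t)
is_est = np.mean(w * (x > t))

print(f"exact      = {norm.sf(t):.3e}")
print(f"naive MC   = {naive:.3e}")
print(f"importance = {is_est:.3e}")
```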

    The Art of Fault Injection

    Classical Greek philosophers considered the foremost virtues to be temperance, justice, courage, and prudence. In this paper we relate these cardinal virtues to the correct methodological approaches that researchers should follow when setting up a fault injection experiment. With this work we try to understand where the "straightforward pathway" lies, in order to highlight the common methodological errors that deeply influence the coherency and meaningfulness of fault injection experiments. Fault injection is like an art, where the success of the experiments depends on a very delicate balance between modeling, creativity, statistics, and patience.

    Influence of parasitic capacitance variations on 65 nm and 32 nm predictive technology model SRAM core-cells

    The continuous improvement of CMOS technology allows the realization of digital circuits, and in particular static random access memories, that contain an impressive number of transistors compared with previous technologies. The use of new production processes introduces a set of parasitic effects that gain more and more importance as the technology scales down. In particular, even small variations of parasitic capacitances in CMOS devices are expected to become an additional source of faulty behavior in future technologies. This paper analyzes and compares the effect of parasitic capacitance variations in an SRAM memory circuit realized with the 65 nm and 32 nm predictive technology models.
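
    A back-of-the-envelope sketch of this kind of analysis: vary a cell's parasitic node capacitance as a Gaussian and observe how the relative spread of a simple RC discharge delay grows from one technology node to the next. The two parameter sets below loosely stand in for the 65 nm and 32 nm nodes; all values are assumed, not taken from the predictive technology models.

```python
# Back-of-the-envelope sketch of the kind of analysis the abstract describes:
# vary a cell's parasitic node capacitance (Gaussian) and see how the spread
# of a simple RC discharge delay grows as nominal values shrink. All values
# are assumed, not taken from the predictive technology models.
import numpy as np

rng = np.random.default_rng(2)
N = 50_000

nodes = {
    # name: (nominal parasitic cap [F], relative sigma, driver resistance [ohm])
    "65nm": (2.0e-15, 0.05, 8e3),
    "32nm": (0.9e-15, 0.10, 12e3),   # smaller cap, larger relative variation
}

for name, (c_nom, rel_sigma, r_drv) in nodes.items():
    c = rng.normal(c_nom, rel_sigma * c_nom, N)
    delay = 0.69 * r_drv * c          # RC delay to the 50% point
    spread = 100 * delay.std() / delay.mean()
    print(f"{name}: mean delay={delay.mean()*1e12:.2f} ps, spread={spread:.1f}%")
```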

    An experiment in software reliability

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements, using n-version programming for error detection and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.
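
    The n-version error-detection scheme mentioned above is straightforward to sketch: run several independently written implementations on the same input and flag a failure whenever a version disagrees with the majority vote. The three toy "versions" below, one seeded with a fault, are illustrative assumptions.

```python
# Sketch of n-version error detection: run independently written
# implementations on the same input and flag a failure whenever a version
# disagrees with the majority. The three toy "versions" below (one seeded
# with a fault) are illustrative assumptions.
from collections import Counter

def v1(x): return x * x
def v2(x): return x ** 2
def v3(x): return x * x if x < 100 else x * x + 1   # injected fault

def n_version_run(x, versions=(v1, v2, v3)):
    outputs = [v(x) for v in versions]
    majority, _ = Counter(outputs).most_common(1)[0]
    disagreeing = [i for i, out in enumerate(outputs) if out != majority]
    return majority, disagreeing

for x in (3, 150):
    result, errors = n_version_run(x)
    print(f"input={x}: voted result={result}, versions in error={errors}")
```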