2,287 research outputs found

    A Self Learning based Diagnosis of Faulty Configurable Logic Blocks (CLBs) in Field Programmable Gate Arrays (FPGA) Using Reconfiguration

    In many areas of digital system design, Field Programmable Gate Arrays (FPGAs) play an important role. Their main advantages are that they are programmable and that faults can be easily repaired once the faulty locations are identified. The location and identification of faults in FPGAs, however, has not yet been explored in depth. A methodology for the testing and diagnosis of faults in FPGAs is presented, based on automatic circuit reconfiguration. The proposed method imposes no hardware overhead. It can also be used in fault-tolerant systems, in which a good functional circuit can still be mapped to an FPGA with faulty elements, as long as the fault sites are known. The logic synthesis software assigns the Configurable Logic Block (CLB) resources without system designer intervention; it is nevertheless advantageous for the designer to understand certain CLB details, including the varying capabilities of the look-up tables (LUTs), the physical direction of carry propagation, and the number and distribution of the available flip-flops. The FPGA considered here consists of 25 CLBs, each assigned an application. The inputs for each CLB are applied from a file, and a separate fault file records which CLBs are faulty. Faulty CLBs are replaced by spare CLBs; finally, the faulty CLBs are corrected with proper inputs and the modified bits are displayed. Efficiency is therefore not reduced, and reconfiguration is achieved without replacing the faulty components. The FPGA can tolerate not only single faults but also multiple faults. Power analysis results for fault-free, stuck-at-1, and stuck-at-0 faults in digital circuits validate the point that faulty circuits dissipate, and hence draw, more power.
    Keywords: Configurable Logic Block (CLB), Power Dissipation, Fault Tolerance, Fault Diagnosis, Faults, Full Adder (FA)
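    The abstract only outlines the spare-replacement step, so the following is a minimal sketch of how remapping applications off faulty CLBs onto spares might look; the function name, the dictionary-based assignment table, and the fault/spare inputs are all invented for illustration, as the paper's actual file formats are not given.

```python
# Hypothetical sketch of spare-CLB remapping as described above.
def remap_faulty_clbs(assignments, faulty, spares):
    """Move applications off faulty CLBs onto spare ones.

    assignments: dict mapping CLB id -> application name
    faulty:      set of CLB ids reported in the fault file
    spares:      list of unused CLB ids available as replacements
    """
    spares = list(spares)
    for clb in sorted(faulty):
        if clb not in assignments:
            continue  # fault in an unused block: nothing to move
        if not spares:
            raise RuntimeError("no spare CLBs left; fault not tolerable")
        replacement = spares.pop()
        assignments[replacement] = assignments.pop(clb)
    return assignments

# Example: a 25-CLB array with two faulty blocks and two spare blocks.
mapping = {i: f"app{i}" for i in range(23)}      # CLBs 0..22 in use
print(remap_faulty_clbs(mapping, faulty={3, 7}, spares=[23, 24]))
```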

    A Discrete Event System approach to On-line Testing of digital circuits with measurement limitation

    In the present era of complex systems such as avionics, industrial processes, and electronic circuits, on-the-fly or on-line fault detection is becoming necessary to provide uninterrupted service. Measurement-limitation-based fault detection schemes are applied to a wide range of systems because sensors cannot be deployed at all the locations from which measurements are required. This paper focuses on On-Line Testing (OLT) of faults in digital electronic circuits under measurement limitation, using the theory of discrete event systems. Most of the techniques in the literature on OLT of digital circuits have emphasized keeping the scheme non-intrusive, with low area overhead, high fault coverage, low detection latency, etc. However, minimizing the tap points (i.e., measurement limitation) of the circuit under test (CUT) by the on-line tester was not considered. Minimizing tap points reduces the load on the CUT and thereby reduces the area overhead of the tester; however, it can compromise fault coverage and detection latency. This work studies the effect of minimizing tap points on fault coverage, detection latency, and area overhead. Results on ISCAS89 benchmark circuits illustrate that measurement limitation has minimal impact on fault coverage and detection latency but reduces the area overhead of the tester. Further, it was found that for a given detection latency and fault coverage, the area overhead of the proposed scheme is lower than that of similar schemes reported in the literature.
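    The abstract does not detail the discrete-event-system scheme itself, but the tap-point trade-off it studies can be shown on a toy stuck-at fault simulator; the netlist, gate table, and all names below are invented, and the numbers are not from the paper.

```python
# Toy illustration: restricting which nets the tester can observe ("tap
# points") can lower stuck-at fault coverage when faults are logically
# masked on the way to the remaining observable net.
import itertools

# Toy netlist: net -> (gate, input nets); primary inputs are a and b.
NETS = {"d": ("AND", ("a", "b")),
        "e": ("OR",  ("a", "b")),
        "f": ("AND", ("d", "e"))}
GATES = {"AND": lambda u, v: u & v, "OR": lambda u, v: u | v}

def evaluate(inputs, stuck=None):
    vals = dict(inputs)
    for net, (gate, ins) in NETS.items():
        vals[net] = GATES[gate](*(vals[i] for i in ins))
        if stuck is not None and net == stuck[0]:
            vals[net] = stuck[1]              # inject the stuck-at fault
    return vals

def coverage(taps):
    faults = [(net, v) for net in NETS for v in (0, 1)]
    detected = 0
    for fault in faults:
        for bits in itertools.product((0, 1), repeat=2):
            inp = dict(zip("ab", bits))
            if any(evaluate(inp)[t] != evaluate(inp, fault)[t] for t in taps):
                detected += 1
                break
    return detected / len(faults)

print("all nets tapped:", coverage({"d", "e", "f"}))   # full observability
print("only f tapped  :", coverage({"f"}))             # measurement-limited
```

    Here "e stuck-at-1" is only excited when both inputs are 0, at which point the AND gate driving f masks it, so observing f alone drops coverage from 1.0 to about 0.83.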

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
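    Of the statistical techniques the survey introduces, importance sampling is the most directly algorithmic; below is a minimal sketch of the idea, assuming an exponential lifetime model with invented parameters (none of which come from the paper): rare failure events are sampled from a biased distribution and re-weighted by the likelihood ratio.

```python
# Importance-sampling sketch: estimate the rare probability P(X > T)
# for X ~ Exp(LAM), where plain Monte Carlo almost never sees the event.
import math
import random

random.seed(0)
LAM, T, N = 1.0, 12.0, 100_000        # true P(X > T) = exp(-12) ~ 6.1e-6

# Plain Monte Carlo baseline at the same sample size.
plain = sum(random.expovariate(LAM) > T for _ in range(N)) / N

# Importance sampling: draw from a heavier-tailed Exp(LAM_Q) and weight
# each hit by the likelihood ratio p(x)/q(x).
LAM_Q = 1.0 / T
est = 0.0
for _ in range(N):
    x = random.expovariate(LAM_Q)
    if x > T:
        est += (LAM * math.exp(-LAM * x)) / (LAM_Q * math.exp(-LAM_Q * x))
est /= N

print(f"exact     : {math.exp(-LAM * T):.3e}")
print(f"plain MC  : {plain:.3e}")      # typically 0 at this sample size
print(f"importance: {est:.3e}")
```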

    Investigation into voltage and process variation-aware manufacturing test

    Increasing integration and complexity in IC design creates challenges for manufacturing test. This thesis studies how process and supply voltage variation influence defect behaviour, to determine the impact on manufacturing test cost and quality. The focus is on logic testing of static CMOS designs with respect to two important defect types in deep submicron CMOS: resistive bridges and full opens.

    The first part of the thesis addresses testing for resistive bridge defects in designs with multiple supply voltage settings. To enable analysis, a fault simulator is developed using a supply voltage-aware model of bridge defect behaviour. The analysis shows that, because defect behaviour is supply voltage-dependent, high defect coverage requires testing at more than one supply voltage setting. A low-cost and effective test method is presented, consisting of multi-voltage test generation that achieves high defect coverage, together with test-set size reduction that does not compromise defect coverage. Experiments on synthesised benchmarks with realistic bridge locations validate the proposed method.

    The second part focuses on the behaviour of full open defects under supply voltage variation. The aim is to determine the appropriate supply voltage to use when testing. Two models are considered for the behaviour of full open defects, with and without the influence of gate tunnelling leakage. The supply voltage-dependent behaviour of full open defects is analysed to determine whether testing at more than one supply voltage is required to detect all full open defects. Experiments on synthesised benchmarks, using an extended version of the fault simulator mentioned above, measure the quantitative impact of supply voltage variation on defect coverage.

    The final part studies the impact of process variation on the behaviour of bridge defects. Detailed analysis using synthesised ISCAS benchmarks and a realistic bridge model shows that process variation leads to additional faults. If process variation is not considered in test generation, the test will fail to detect some of these faults, leading to test escapes. A novel metric to quantify the impact of process variation on test quality is employed in the development of a new test generation tool, which achieves high bridge defect coverage. The method achieves a user-specified test quality with test sets that are smaller than those generated without considering process variation.
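    Why bridge detectability depends on supply voltage can be shown with a toy model (this is not the thesis's simulator; the driver resistance and the crude switching-threshold formula below are invented): a resistive bridge to ground forms a divider with the pull-up driver, while a gate's switching threshold has a fixed transistor-threshold component, so the same bridge can be read as a fault at one Vdd and pass at another.

```python
# Toy Vdd-dependence of a resistive bridge on a driven-high net.
R_DRIVER = 1_000.0                      # assumed pull-up on-resistance, ohms

def node_voltage(vdd, r_bridge):
    # Vdd -> pull-up (R_DRIVER) -> node -> bridge (r_bridge) -> GND
    return vdd * r_bridge / (R_DRIVER + r_bridge)

def read_as_wrong_value(vdd, r_bridge):
    vth = 0.3 + 0.35 * vdd              # crude threshold model, volts
    return node_voltage(vdd, r_bridge) < vth   # logic 1 misread as 0

for r in (800.0, 2_000.0, 5_000.0):
    flags = {vdd: read_as_wrong_value(vdd, r) for vdd in (0.9, 1.2)}
    print(f"R_bridge = {r:6.0f} ohm -> faulty reading at Vdd: {flags}")
```

    With these invented numbers, the 800-ohm bridge is caught at both voltages, the 2000-ohm bridge only at 0.9 V, and the 5000-ohm bridge at neither, which is the motivation for multi-voltage test generation.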

    Quiescent current testing of CMOS data converters

    Power supply quiescent current (IDDQ) testing has been very effective in detecting physical defects such as opens, shorts, and bridging defects in VLSI circuits designed in CMOS processes. However, in sub-micron VLSI circuits, IDDQ is masked by the increased subthreshold (leakage) current of MOSFETs, reducing the efficiency of IDDQ testing. In this work, an attempt has been made to perform robust IDDQ testing in the presence of increased leakage current by suitably modifying some of the test methods normally used in industry. Digital CMOS integrated circuits have been tested successfully for physical defects using IDDQ-based methods. However, testing of analog circuits remains a problem due to variation in design from one specific application to another; the increased leakage current further complicates not only the design but also the testing. Mixed-signal integrated circuits such as data converters are even more difficult to test because both analog and digital functions are built on the same substrate. We have re-examined IDDQ-based methods of testing digital CMOS VLSI circuits and added features to minimize the influence of leakage current. We have designed built-in current sensors (BICS) for on-chip testing of analog and mixed-signal integrated circuits. We have also combined quiescent current testing with oscillation and transient current test techniques to map a large number of manufacturing defects on a chip. In testing, we have used a simple method of injecting faults simulating manufacturing defects, invented in our VLSI research group. We present the design and testing of analog and mixed-signal integrated circuits with on-chip BICS, including an operational amplifier, a 12-bit charge-scaling digital-to-analog converter (DAC), a 12-bit recycling-architecture analog-to-digital converter (ADC), and an operational amplifier with floating-gate inputs. The designed circuits are fabricated in 0.5 μm and 1.5 μm n-well CMOS processes and tested. Experimentally observed results from the fabricated devices are compared with SPICE simulations using MOS level 3 and BSIM3.1 model parameters for the 1.5 μm and 0.5 μm n-well CMOS technologies, respectively. We have also explored the possibility of using noise in VLSI circuits for testing defects and present the method we have developed.
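    The pass/fail logic of leakage-tolerant IDDQ screening can be sketched as follows; the readings, limits, and function names are hypothetical, and the differential check shown is the generic delta-IDDQ-style idea used in industry, not necessarily the specific modification developed in this thesis.

```python
# With large but uniform background leakage, a single absolute IDDQ limit
# can pass a defective die; comparing readings across test vectors
# (a current-signature difference) exposes the vector that activates a
# defect even when every absolute reading stays under the limit.

def absolute_iddq_test(readings_ua, limit_ua):
    return max(readings_ua) <= limit_ua        # pass if under the limit

def delta_iddq_test(readings_ua, max_step_ua):
    steps = [abs(b - a) for a, b in zip(readings_ua, readings_ua[1:])]
    return max(steps) <= max_step_ua           # pass if readings are stable

good_die   = [52.1, 52.4, 51.9, 52.3]          # high but uniform leakage, uA
faulty_die = [52.0, 52.2, 78.5, 52.1]          # one vector activates a short

for name, die in (("good", good_die), ("faulty", faulty_die)):
    print(name, "abs:", absolute_iddq_test(die, 100.0),
          "delta:", delta_iddq_test(die, 5.0))
```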

    Reliable Low-Latency and Low-Complexity Viterbi Architectures Benchmarked on ASIC and FPGA

    The Viterbi algorithm is commonly applied in a number of sensitive usage models, including decoding the convolutional codes used in communications such as satellite communication, cellular relay, and wireless local area networks. Moreover, the algorithm has been applied to automatic speech recognition and storage devices. In this thesis, efficient error detection schemes for architectures based on low-latency, low-complexity Viterbi decoders are presented. The merit of the proposed schemes is that reliability requirements, overhead tolerance, and performance degradation limits are embedded in the structures and can be adapted accordingly. We also present three variants of recomputing with encoded operands, together with modifications to detect both transient and permanent faults, coupled with signature-based schemes. The instrumented decoder architecture has been subjected to extensive error detection assessments through simulations, as well as application-specific integrated circuit (ASIC, 32 nm library) and field-programmable gate array (FPGA, Xilinx Virtex-6 family) implementations for benchmarking. The proposed fine-grained approaches can be utilized based on reliability objectives and the tolerable degradation of performance and implementation metrics.
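    As a generic illustration of recomputing with encoded operands (not the thesis's exact coding), the sketch below applies the idea to a single add-compare-select (ACS) step of a Viterbi decoder: both min and addition commute with scaling, so redoing the step on doubled operands must reproduce twice the result, and any mismatch flags a fault.

```python
def acs(m0, m1, b0, b1):
    """Add-compare-select: survivor metric of two candidate paths."""
    return min(m0 + b0, m1 + b1)

def acs_checked(m0, m1, b0, b1, fault_injector=None):
    primary = acs(m0, m1, b0, b1)
    # Recompute with operands encoded as x -> 2x; min and + commute with
    # scaling, so the decoded result must equal the primary result.
    encoded = acs(2 * m0, 2 * m1, 2 * b0, 2 * b1)
    if fault_injector:
        primary = fault_injector(primary)   # model a transient upset
    if encoded != 2 * primary:
        raise RuntimeError("ACS mismatch: fault detected")
    return primary

print(acs_checked(10, 12, 3, 0))                            # fault-free: 12
try:
    acs_checked(10, 12, 3, 0, fault_injector=lambda r: r ^ 4)
except RuntimeError as e:
    print(e)
```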

    LSI/VLSI design for testability analysis and general approach

    The incorporation of testability characteristics into large-scale digital designs is not only necessary for effective device testing but also pertinent to the enhancement of device reliability. There are at least three major design-for-testability (DFT) techniques, namely self-checking, LSSD, and partitioning, each of which can be incorporated into a logic design to achieve a specific set of testability and reliability requirements. Detailed analyses of the design theory, implementation, fault coverage, hardware requirements, application limitations, etc. of each of these techniques are presented.
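    As a rough, textbook-style illustration of the scan idea underlying LSSD (the specific level-sensitive latch structure is not reproduced here, and the circuit and all names below are invented), a scan chain lets the tester shift a pattern into the flip-flops serially, apply one functional capture cycle, and shift the response back out for comparison.

```python
def scan_cycle(chain, pattern, combinational):
    """Shift `pattern` into the scan chain, capture once, shift out."""
    for bit in pattern:                   # shift in, one bit per clock
        chain = [bit] + chain[:-1]
    captured = combinational(chain)       # one functional capture cycle
    response = []
    for _ in captured:                    # shift the response back out
        response.append(captured[-1])
        captured = [0] + captured[:-1]
    return response

# Toy circuit under test: next state = pairwise XOR of neighbouring FFs.
cut = lambda s: [s[i] ^ s[(i + 1) % len(s)] for i in range(len(s))]
print(scan_cycle([0, 0, 0, 0], [1, 0, 1, 1], cut))
```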

    Lightweight Architectures for Reliable and Fault Detection Simon and Speck Cryptographic Algorithms on FPGA

    The widespread use of sensitive and constrained applications necessitates lightweight (low-power and low-area) algorithms developed for constrained nano-devices. However, nearly all such algorithms are optimized for platform-based performance and may not be useful for diverse and flexible applications. The National Security Agency (NSA) has proposed two relatively recent families of lightweight ciphers, Simon and Speck, designed as efficient ciphers on both hardware and software platforms. This paper proposes concurrent error detection schemes to provide reliable architectures for these two families of lightweight block ciphers. To the best of our knowledge, research on analyzing the reliability of these algorithms and providing fault diagnosis approaches has not been undertaken to date. The main aim of the proposed reliable architectures is to provide high error coverage while maintaining acceptable area and power consumption overheads. To achieve this, we propose a variant of recomputing with encoded operands. These low-complexity schemes are suited for low-resource applications such as sensitive, constrained implantable and wearable medical devices. We perform fault simulations for the proposed architectures by developing a fault model framework. The architectures are simulated and analyzed on recent field-programmable gate array (FPGA) platforms, and it is shown that the proposed schemes provide high error coverage. The proposed low-complexity concurrent error detection schemes are a step forward towards more reliable architectures for the Simon and Speck algorithms in lightweight, secure applications.
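    The core idea can be sketched on a Simon round (the published schemes are more refined than this, and the check below, recomputing with rotated operands, is one standard encoding choice rather than necessarily the paper's): Simon's round function uses only AND, XOR, and rotations, all of which commute with a global word rotation, so redoing the round on rotated inputs and un-rotating must reproduce the result.

```python
N = 16                                   # word size for Simon 32/64
MASK = (1 << N) - 1

def rol(x, r):
    return ((x << r) | (x >> (N - r))) & MASK

def simon_round(x, y, k):
    # Simon round: f(x) = (S1(x) & S8(x)) ^ S2(x); swap halves afterwards.
    fx = (rol(x, 1) & rol(x, 8)) ^ rol(x, 2)
    return y ^ fx ^ k, x

def simon_round_checked(x, y, k, r=5, flip=0):
    x2, y2 = simon_round(x, y, k)
    x2 ^= flip                                  # optional injected fault
    # Recompute with every operand rotated by r, then compare after
    # rotating the primary result by the same amount.
    rx2, ry2 = simon_round(rol(x, r), rol(y, r), rol(k, r))
    if (rol(x2, r), rol(y2, r)) != (rx2, ry2):
        raise RuntimeError("Simon round mismatch: fault detected")
    return x2, y2

print(simon_round_checked(0x1234, 0xABCD, 0x5A5A))           # fault-free
try:
    simon_round_checked(0x1234, 0xABCD, 0x5A5A, flip=0x0100)
except RuntimeError as e:
    print(e)
```

    Speck's modular addition does not commute with rotation in the same way, which is one reason error detection schemes for the two cipher families tend to differ in their encoding choices.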

    Radiation Induced Fault Detection, Diagnosis, and Characterization of Field Programmable Gate Arrays

    The development of Field Programmable Gate Arrays (FPGAs) has been a great achievement in the world of microelectronics. One of these devices can be programmed to do just about anything, replacing the need for thousands of individual specialized devices. Despite their great versatility, FPGAs are still extremely vulnerable to radiation, both from cosmic rays in space and from adversaries on the ground. Extensive research has been conducted to examine how radiation disrupts different types of FPGAs; the results show, unfortunately, that newer FPGAs built on smaller process technology are even more susceptible to radiation damage than older ones. This research incorporates and enhances current methods of radiation detection. The design consists of 15 sensor networks, each with 29 sensors. The sensors are simple inverters, but they can detect flipped bits and delay errors caused by radiation. Analyzers process the output of each sensor to determine whether its value agrees with what is expected. This information is fed to a reporter that creates an easy-to-read output describing which network the fault is in, what type of fault is present, how many faults are in the network, how long they have been there, and the percent slowdown if it is a delay issue. Each network reports its fault data to the computer screen in real time. The design still needs some improvement, but once those improvements are made and tested, the system can be combined with FPGA reconfiguration methods that automatically place application logic away from failing regions of the FPGA. This system has great potential to become a valuable tool in fault mitigation.
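    The classify-and-report step described above might look like the sketch below; the sample format, nominal delay, and all names are hypothetical, since the thesis's actual interfaces are not given. A wrong logic value is treated as a flipped bit, while a correct value arriving late is treated as radiation-induced delay degradation.

```python
NOMINAL_DELAY_NS = 1.0                  # assumed nominal inverter delay

def classify(sample):
    value, expected, delay_ns = sample
    if value != expected:
        return "bit-flip"
    if delay_ns > NOMINAL_DELAY_NS:
        slowdown = 100.0 * (delay_ns - NOMINAL_DELAY_NS) / NOMINAL_DELAY_NS
        return f"delay (+{slowdown:.0f}% slowdown)"
    return "ok"

def report(networks):
    for net_id, sensors in networks.items():
        faults = [(i, kind) for i, s in enumerate(sensors)
                  if (kind := classify(s)) != "ok"]
        if faults:
            print(f"network {net_id}: {len(faults)} faulty sensor(s)")
            for idx, kind in faults:
                print(f"  sensor {idx}: {kind}")

# Two of the 15 networks shown; each sensor is (value, expected, delay_ns).
report({0: [(1, 1, 1.0), (0, 1, 1.0), (1, 1, 1.6)],
        1: [(0, 0, 1.0)] * 3})
```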