137 research outputs found

    Fault Isolation with ‘X’ Filter for Bogus Signals and Intensive Scan Cell Sequence Validation

    There are several challenges in silicon data collection using Design-for-Test (DFT). Bogus signals, which carry an 'X' value in simulation and arise from complex logic synthesis and floating power-up states, can mislead the fault isolation process with invalid failing conditions. In addition, scan cells within the scan chain architecture can show mismatched values between simulation data and silicon data due to a non-ideal mapping file passed down from the design team. It is therefore important to develop an integrated tool that filters all bogus signals online and validates the correlation between silicon and simulation data with a minimum coverage of 90%. Data from an actual Intel 6th-generation microprocessor, Skylake, built on 14 nm process technology, is imported to ensure the applicability of this thesis to the current industry market. The necessary tools are developed throughout the thesis: a "Differentiate and Display" feature to ease data analysis, an AND-logic operation to filter bogus signals, and an XOR-logic operation to handle the inverted characteristic of signals. Results show that the developed integrated bogus-signal filter is successful and that the validation tool achieves a minimum coverage of 96.5%. An actual failure analysis case from industry is imported, and the outcomes with and without the developed tools are compared. Without the tools, the optical test result from the sample is inconclusive; with them, a short-circuit defect between a via and a metal line is found. It is concluded that the thesis has achieved all the objectives set.
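
    As an illustration of the filtering logic described above, here is a minimal Python sketch, assuming per-cell values are encoded as '0'/'1'/'X' characters; all names and data are illustrative, not the thesis's actual tool:

        # Sketch of the two core operations: an AND-style filter that drops
        # bogus 'X' values, and an XOR-style comparison that tolerates
        # signals with inverted polarity.

        def x_filter(sim_bits, si_bits):
            # Keep only positions where simulation carries a known (non-'X') value.
            return [(s, h) for s, h in zip(sim_bits, si_bits) if s != 'X']

        def correlation(sim_bits, si_bits, inverted=False):
            # Fraction of filtered positions where silicon agrees with simulation.
            pairs = x_filter(sim_bits, si_bits)
            if not pairs:
                return 0.0
            hits = sum((s != h) == inverted for s, h in pairs)
            return hits / len(pairs)

        sim = list("1X010011")        # simulation data with one bogus 'X'
        sil = list("10010011")        # silicon data
        print(correlation(sim, sil))  # 1.0: full correlation on valid cells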

    Fault and Defect Tolerant Computer Architectures: Reliable Computing With Unreliable Devices

    This research addresses the design of a reliable computer from unreliable device technologies. A system architecture is developed for a fault and defect tolerant (FDT) computer. Trade-offs between different techniques are studied, and yield and hardware cost models are developed. Fault and defect tolerant designs are created for the processor and the cache memory. Simulation results for the content-addressable memory (CAM)-based cache show 90% yield with device failure probabilities of 3×10⁻⁶, three orders of magnitude better than non-fault-tolerant caches of the same size. The entire processor achieves 70% yield with device failure probabilities exceeding 10⁻⁶. The required hardware redundancy is approximately 15 times that of a non-fault-tolerant design. While larger than current FT designs, this architecture allows the use of devices much more likely to fail than silicon CMOS. As part of model development, an improved model is derived for NAND multiplexing. The model is the first accurate model for small and medium amounts of redundancy. Previous models are extended to account for dependence between the inputs and produce more accurate results.
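
    The yield figures above suggest a simple redundancy/probability model. The following Python sketch uses illustrative parameters, not the thesis's actual equations, to show why raw million-device blocks need redundancy at these failure rates:

        # Illustrative yield model: a block of n devices works only if every
        # device works; replicating the block r times means the design
        # survives if at least one copy is fully functional.

        def block_yield(n, p_fail):
            return (1.0 - p_fail) ** n

        def redundant_yield(n, p_fail, copies):
            return 1.0 - (1.0 - block_yield(n, p_fail)) ** copies

        p = 3e-6  # device failure probability quoted in the abstract
        print(block_yield(1_000_000, p))          # ~0.05 without redundancy
        print(redundant_yield(1_000_000, p, 15))  # ~0.54 with 15 copies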

    Development of test and diagnosis techniques for mesh-based hierarchical FPGAs

    The evolution trend of shrinking feature sizes and increasing complexity in modern electronics is being slowed by physical limits that generate numerous imperfections and defects during fabrication or over the projected lifetime of the chip. Field Programmable Gate Arrays (FPGAs) are used in complex digital systems mainly for their reconfigurability and shorter time-to-market. To maintain high reliability in such systems, FPGAs should be tested thoroughly for defects. FPGA architecture optimization for area saving and better signal routability is an ongoing process that directly impacts overall FPGA testability, and hence reliability. This thesis presents a complete strategy for test and diagnosis of manufacturing defects in mesh-based FPGAs containing a novel multilevel interconnect topology that promises better area and routability. The efficiency of the proposed test schemes is analyzed in terms of test cost, fault coverage, and diagnostic resolution.
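
    For reference, the efficiency metrics named at the end of the abstract can be written down compactly. A Python sketch with illustrative definitions follows; the thesis's exact formulations may differ:

        def fault_coverage(detected, total_faults):
            # Fraction of modeled manufacturing defects detected by the test set.
            return detected / total_faults

        def diagnostic_resolution(candidate_sets):
            # Average number of candidate faults per failing signature;
            # 1.0 means every failure is pinned to a unique fault.
            return sum(len(c) for c in candidate_sets) / len(candidate_sets)

        print(fault_coverage(960, 1000))                  # 0.96
        print(diagnostic_resolution([[3], [1, 7], [4]]))  # ~1.33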

    inSense: A Variation and Fault Tolerant Architecture for Nanoscale Devices

    Transistor technology scaling has been the driving force in improving the size, speed, and power consumption of digital systems. As devices approach atomic size, however, their reliability and performance are increasingly compromised due to reduced noise margins, difficulties in fabrication, and emergent nano-scale phenomena. Scaled CMOS devices, in particular, suffer from process variations such as random dopant fluctuation (RDF) and line edge roughness (LER), transistor degradation mechanisms such as negative-bias temperature instability (NBTI) and hot-carrier injection (HCI), and increased sensitivity to single event upsets (SEUs). Consequently, future devices may exhibit reduced performance, diminished lifetimes, and poor reliability. This research proposes a variation and fault tolerant architecture, the inSense architecture, as a circuit-level solution to the problems induced by the aforementioned phenomena. The inSense architecture entails augmenting circuits with introspective and sensory capabilities which are able to dynamically detect and compensate for process variations, transistor degradation, and soft errors. This approach creates "smart" circuits able to function despite the use of unreliable devices and is applicable to current CMOS technology as well as next-generation devices using new materials and structures. Furthermore, this work presents an automated prototype implementation of the inSense architecture targeted to CMOS devices, which is evaluated via implementation in ISCAS '85 benchmark circuits. The automated prototype implementation is functionally verified and characterized: it is found that error detection capability (with error windows from ≈30–400 ps) can be added for less than 2% area overhead for circuits of non-trivial complexity. Single event transient (SET) detection capability (configurable with target set-points) is found to be functional, although it generally tracks the standard DMR implementation with respect to overheads.
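
    Since the abstract benchmarks against DMR, here is a minimal Python sketch of DMR-style error detection, the baseline being compared against; the inSense sensors themselves are hardware and are not modeled here:

        def dmr_check(primary, replica):
            # Dual modular redundancy: flag an error whenever the two
            # copies of the logic disagree on an output.
            return primary != replica

        # The third output pair models a single-event upset in one copy.
        for a, b in [(1, 1), (0, 0), (1, 0)]:
            print("error detected" if dmr_check(a, b) else "ok")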

    Optimal Unknown Bit Filtering for Test Response Masking

    This paper presents a new X-masking scheme for response compaction. It filters all X states from the test response so that no unknown values reach the response compactor. Experimental results show that the scheme requires less control data while maintaining the same observability.
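
    A minimal Python sketch of the masking step, with illustrative encoding since the paper's control-data format is not reproduced here: each response bit is gated by a mask bit so unknown values never reach an XOR-tree compactor.

        from functools import reduce
        from operator import xor

        def compact(response, mask):
            # Gate each scan-out bit with its mask bit (AND), forcing 'X'
            # positions to 0, then fold through an XOR tree.
            masked = [int(b) if m and b != 'X' else 0
                      for b, m in zip(response, mask)]
            return reduce(xor, masked, 0)

        response = [1, 'X', 0, 1, 'X', 1]
        mask     = [1,  0,  1, 1,  0,  1]  # 0 wherever an X is expected
        print(compact(response, mask))     # deterministic signature: 1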

    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features which are routinely employed in current high-density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault-tolerance for improved reliability. A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault-tolerance; it is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. Consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and is tested as it performs its function. Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that faulty units are discarded. This raises the question: what is the optimum size of a unit? A mathematical model linking yield and reliability is therefore developed to answer this question and to study the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability: it is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
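
    As a flavour of the yield model discussed above, a small Python sketch with illustrative parameters only: a circuit partitioned into units with spares yields whenever no more than the spare count of units is defective.

        from math import comb

        def chip_yield(n_units, spares, d):
            # P(at most `spares` of the n_units + spares units are defective),
            # assuming an independent unit defect probability d.
            total = n_units + spares
            return sum(comb(total, k) * d**k * (1 - d)**(total - k)
                       for k in range(spares + 1))

        for s in (0, 2, 4):
            print(s, round(chip_yield(100, s, 0.01), 3))
        # Yield climbs steeply with the first few spares, which is why the
        # optimum-unit-size question raised in the abstract matters.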

    Design of a Scan Chain for Side Channel Attacks on AES Cryptosystem for Improved Security

    Scan chain-based attacks are side-channel attacks targeting one of the most significant features of hardware test circuitry. Design for Testability (DfT) integrates testability components into a hardware design, but this creates a side channel for cryptanalysis, leaving crypto devices vulnerable to scan-based attacks. The Advanced Encryption Standard (AES), announced by the U.S. Government, is widely regarded as the most powerful and secure symmetric encryption algorithm, yet on-chip implementations of private-key algorithms such as AES have faced scan-based side-channel attacks. With the aim of protecting data for secure communication, a new hybrid pipelined AES algorithm with enhanced security features is implemented. This paper proposes testing an AES core with unpredictable response compaction and bit-level masking throughout the scan chain process, using a masking-based bit-level scan flip-flop as a scan-protection solution for secure testing. The experimental results show that the best security is provided by the randomized insertion of masked scan flip-flops along the scan chain, with minimal design difficulty and power overhead and negligible delay. The proposed technique outperforms the state-of-the-art LUT-based S-box and the composite sub-byte transformation model in throughput rate by 2 times and 15 times, respectively. Security, measured via the avalanche effect, has been increased to up to 95 per cent for the sub-pipelined model, with reduced computational complexity. The proposed sub-pipelined S-box utilizing a composite field arithmetic scheme also achieves 7 per cent better area effectiveness and 2.5 times lower hardware complexity compared to the LUT-based model.
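
    A Python sketch of the bit-level masking idea follows. It is illustrative only: the paper's masked scan flip-flop is a hardware design, and this merely models its effect on scan-out data; the seed and names are invented.

        import random

        def masked_scan_out(chain_bits, key_seed):
            # XOR each scan-out bit with a keyed mask bit so an attacker
            # observing the chain cannot read raw AES round-register state.
            rng = random.Random(key_seed)  # stand-in for the on-chip mask source
            return [b ^ rng.getrandbits(1) for b in chain_bits]

        chain = [1, 0, 1, 1, 0, 0, 1, 0]                # raw scan-chain contents
        print(masked_scan_out(chain, key_seed=0xBEEF))  # bits seen by the tester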

    Statistical Reliability Estimation of Microprocessor-Based Systems

    What is the probability that the execution state of a given microprocessor running a given application is correct, in a certain working environment with a given soft-error rate? Trying to answer this question using fault injection can be very expensive and time-consuming. This paper proposes the baseline for a new methodology, based on microprocessor error probability profiling, that aims at estimating fault injection results without the need for a typical fault injection setup. The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in the presence of a soft error, and a static and very fast analysis of the control and data flow of the target software application to compute its probability of success. The presented work goes beyond the dependability evaluation problem; it also has the potential to become the backbone for new tools able to help engineers choose the best hardware and software architecture to structurally maximize the probability of a correct execution of the target software.
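
    The methodology lends itself to a compact Python sketch. Below, per-opcode success probabilities (assumed values standing in for the one-time fault-injection characterization) are folded over a static instruction profile of the application, under an independence assumption across instructions:

        from math import prod

        # Assumed one-time characterization: P(instruction executes
        # correctly in the presence of a soft error), per opcode.
        p_success = {"add": 0.999995, "mul": 0.999990,
                     "load": 0.999980, "branch": 0.999985}

        def program_success(instr_counts):
            # Static estimate: multiply per-instruction success probabilities
            # over the application's dynamic instruction counts.
            return prod(p_success[op] ** n for op, n in instr_counts.items())

        app = {"add": 120_000, "mul": 30_000, "load": 80_000, "branch": 40_000}
        print(program_success(app))  # estimated P(correct execution)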

    A Comprehensive Test and Diagnostic Strategy for TCAMs

    Content addressable memories (CAMs) are gaining popularity in computer networking. Testing costs for CAMs are extremely high owing to their unique configuration. In this thesis, a fault analysis is carried out on an industrial ternary CAM (TCAM) design, and search path test algorithms are designed. The proposed algorithms are able to test the TCAM array, the multiple-match resolver (MMR), and the match address encoder (MAE). The tests represent a 6x decrease in test complexity compared to existing algorithms, while dramatically improving fault coverage.
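
    For context, a minimal Python model of the ternary search behaviour the proposed algorithms exercise; this is illustrative, as the industrial design's MMR and MAE are hardware blocks:

        def entry_matches(entry, key):
            # 'X' in a stored entry matches either input bit.
            return all(e in ('X', k) for e, k in zip(entry, key))

        def tcam_search(entries, key):
            # Return the lowest matching address, as a multiple-match
            # resolver (MMR) feeding the match address encoder (MAE) would.
            for addr, entry in enumerate(entries):
                if entry_matches(entry, key):
                    return addr
            return None

        tcam = ["10X1", "1XX0", "0110"]
        print(tcam_search(tcam, "1011"))  # 0: first entry matches via don't-cares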

    Fault simulation for structural testing of analogue integrated circuits

    In this thesis the ANTICS analogue fault simulation software is described, which provides a statistical approach to fault simulation for accurate analogue IC test evaluation. The traditional figure of fault coverage is replaced by the average probability of fault detection. This is later refined by considering the probability of fault occurrence to generate a more realistic, weighted test metric. Two techniques to reduce fault simulation time are described, both of which show large reductions in simulation time with little loss of accuracy. The final section of the thesis presents an accurate comparison of three test techniques and an evaluation of dynamic supply current monitoring. An increase in fault detection for dynamic supply current monitoring is obtained by removing the DC component of the supply current prior to measurement.
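
    The weighted metric can be stated in a few lines. A Python sketch with illustrative numbers:

        def weighted_detection(faults):
            # faults: (p_occurrence, p_detection) pairs. Replaces plain fault
            # coverage with occurrence-weighted average detection probability.
            total = sum(p_occ for p_occ, _ in faults)
            return sum(p_occ * p_det for p_occ, p_det in faults) / total

        faults = [(0.50, 0.99),   # common fault, easily detected
                  (0.30, 0.80),
                  (0.20, 0.40)]   # rarer fault, hard to detect
        print(weighted_detection(faults))  # 0.815 vs. unweighted mean 0.73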