    Power Droop Reduction In Logic BIST By Scan Chain Reordering

    Significant peak power (PP), and thus power droop (PD), during test is a serious concern for modern, complex ICs. The PD arising during the application of test vectors may delay signal transitions in the circuit under test. This effect may be erroneously recognized as the presence of a delay fault, generating a false test fail and thus increasing yield loss. Several solutions have been proposed in the literature to reduce PD during the test of combinational ICs, while fewer approaches exist for sequential ICs. In this paper, we propose a novel approach to reduce peak power and power droop during the test of sequential circuits with scan-based Logic BIST. In particular, our approach reduces the switching activity of the scan chains between consecutive capture cycles. This is achieved through an original generation and arrangement of the test vectors. The proposed approach has a very low impact on fault coverage and test time.
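    The core idea, reducing scan-chain switching between consecutive capture cycles through how test vectors are generated and arranged, can be illustrated with a small sketch. This is not the paper's algorithm, only a generic nearest-neighbor ordering over hypothetical vectors that minimizes bit flips between consecutive vectors:

```python
# Minimal sketch (not the paper's exact method): greedily order a set of
# pre-generated test vectors so that consecutive vectors differ in as few
# scan-chain bits as possible, reducing switching activity between
# consecutive capture cycles.

def hamming(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def order_vectors(vectors: list[str]) -> list[str]:
    """Nearest-neighbor ordering: repeatedly append the unused vector
    closest (in Hamming distance) to the last one chosen."""
    remaining = list(vectors)
    ordered = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(ordered[-1], v))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

if __name__ == "__main__":
    vecs = ["1010", "1000", "0111", "1110", "0101"]  # made-up vectors
    ordered = order_vectors(vecs)
    transitions = sum(hamming(a, b) for a, b in zip(ordered, ordered[1:]))
    print(ordered, "total bit transitions:", transitions)
```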

    Integration of a Digital Built-in Self-Test for On-Chip Memories

    The ability to test on-chip circuitry is essential to today's ASIC implementations. However, providing functional tests and verification for on-chip (embedded) memories poses many challenges to the designer. A built-in self-test block co-existing with the Design Under Test (DUT) is therefore crucial to provide comprehensive, efficient, and robust testing features. The target DUT of this thesis project is the state-of-the-art Ultra Low Power (ULP) dual-port SRAM designed in the ASIC group of the EIT department at Lund University. The thesis starts from system RTL modeling and verification from an earlier project, and then goes through the ASIC design phase in the 28 nm FD-SOI technology from ST-Microelectronics. All scripts used during the ASIC design phase are developed in TCL. The design is implemented with multiple power domains (using the CPF approach and introducing level-shifters at crossing points between domains) and multiple clock sources, so that various measurements can be performed with high reliability on different flavours of a dual-port SRAM.

    This design dramatically reduces the complexity of verifying and measuring integrated memories. The digital integrated circuit (IC) is developed as an application-specific IC (ASIC) for functional verification of integrated memories and for measuring them in different aspects, such as power consumption. The design is automated and can easily be reconfigured in terms of the actions and data required for testing on-chip memories. In other words, it automates and optimizes the generation of what data is stored at which memory locations, as well as how that data is treated and interpreted later on. For instance, it refreshes and delivers different operation modes and working patterns to the entire test system in order to fully exercise the integrated memories; this automation is directed by the stimuli supplied to the chip. The pattern generation for these stimuli is likewise automated in MATLAB.

    Due to constant advancements in chip manufacturing technology, more devices are squeezed into the same silicon area. Monitoring the additional internal signals introduced by this increased circuit complexity requires more dedicated input/output ports (the physical interface between the chip's internal signals and the outside world), which makes future chip bonding and testing difficult and time-consuming. Additionally, memories usually require more pins than other circuit blocks, so handling this large pin count must also be taken into account. A few techniques are therefore adopted in this system to help designers deal with all of these issues. Once the ASIC chip has been fabricated and bonded, the on-chip memories can be tested directly on a printed circuit board in a simple and flexible way: once a test instruction is loaded into the chip, the system updates its settings and generates the internal configurations (parameters) so that all the different operations, modes, and instructions related to memory testing are processed automatically.
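    The thesis automates stimulus pattern generation in MATLAB; as a rough illustration of the kind of memory-test sequence such a generator produces, here is a hypothetical Python sketch that emits a classic March C- operation sequence (addresses, operations, and data) for an N-word memory. March C- is a standard memory-BIST pattern, not necessarily the one used in the thesis:

```python
# Illustrative sketch only: emit a March C- style sequence of
# (address, operation, value) steps for a memory of n_words words.
# The six March elements are:
#   up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0)

def march_c_minus(n_words: int):
    """Yield (address, 'r'/'w', bit) tuples for the March C- algorithm."""
    up = range(n_words)
    down = range(n_words - 1, -1, -1)
    for a in up:                                  # up(w0)
        yield (a, "w", 0)
    for a in up:                                  # up(r0, w1)
        yield (a, "r", 0); yield (a, "w", 1)
    for a in up:                                  # up(r1, w0)
        yield (a, "r", 1); yield (a, "w", 0)
    for a in down:                                # down(r0, w1)
        yield (a, "r", 0); yield (a, "w", 1)
    for a in down:                                # down(r1, w0)
        yield (a, "r", 1); yield (a, "w", 0)
    for a in down:                                # down(r0)
        yield (a, "r", 0)

if __name__ == "__main__":
    for step in march_c_minus(4):
        print(step)
```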

    Built-In Self-Test (BIST) for Multi-Threshold NULL Convention Logic (MTNCL) Circuits

    This dissertation proposes a Built-In Self-Test (BIST) hardware implementation for Multi-Threshold NULL Convention Logic (MTNCL) circuits. Two different methods are proposed: an area-optimized topology that requires minimal area overhead, and a test-performance-optimized topology that utilizes parallelism and internal hardware to reduce the overall test time through additional controllability points. Furthermore, an automated software flow is proposed to insert, simulate, and analyze an input MTNCL netlist to obtain a desired fault coverage, where possible, through iterative digital and fault simulations. The proposed automated flow can produce both area-optimized and test-performance-optimized BIST circuits, along with scripts for digital and fault simulation using commercial software, which can be used for further manual verification or adjustment if desired.
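    The iterate-until-coverage idea behind the automated flow can be sketched in miniature. The toy below is not the dissertation's tool flow (which drives commercial digital and fault simulators); it merely fault-simulates stuck-at faults on a made-up two-gate netlist and keeps adding random patterns until a target coverage is reached:

```python
# Toy stuck-at fault simulation: netlist is c = a AND b, d = a XOR c.
# Faults are each net stuck-at-0 and stuck-at-1; a fault counts as
# detected when a pattern makes the faulty output differ from the good one.
import random

NETS = ["a", "b", "c", "d"]

def evaluate(a, b, fault=None):
    """Simulate the circuit, optionally forcing one net to a stuck value."""
    v = {"a": a, "b": b}
    if fault and fault[0] in v:
        v[fault[0]] = fault[1]
    v["c"] = v["a"] & v["b"]
    if fault and fault[0] == "c":
        v["c"] = fault[1]
    v["d"] = v["a"] ^ v["c"]
    if fault and fault[0] == "d":
        v["d"] = fault[1]
    return v["d"]

faults = [(n, s) for n in NETS for s in (0, 1)]
detected, patterns = set(), []
random.seed(0)
# Add random patterns until 90% stuck-at coverage or a pattern budget runs out.
while len(detected) / len(faults) < 0.9 and len(patterns) < 16:
    p = (random.randint(0, 1), random.randint(0, 1))
    patterns.append(p)
    for f in faults:
        if evaluate(*p, fault=f) != evaluate(*p):
            detected.add(f)
print(f"{len(patterns)} patterns -> coverage {len(detected)/len(faults):.0%}")
```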

    LOT: Logic Optimization with Testability - new transformations for logic synthesis

    A new approach to optimizing multilevel logic circuits is introduced. Given a multilevel circuit, the synthesis method optimizes its area while simultaneously enhancing its random-pattern testability. The method is based on structural transformations at the gate level. New transformations involving EX-OR gates as well as Reed–Muller expansions have been introduced into the synthesis of multilevel circuits. The method is augmented with transformations that specifically enhance random-pattern testability while reducing area. Testability enhancement is an integral part of our synthesis methodology. Experimental results show that the proposed methodology not only achieves lower area than other similar tools, but also better testability than available testability-enhancement tools such as tstfx. Specifically, for the ISCAS-85 benchmark circuits, the EX-OR gate-based transformations successfully contributed toward generating smaller circuits than those produced by other state-of-the-art logic optimization tools.
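    As a concrete illustration of the Reed–Muller expansions such EX-OR-based transformations rely on, the sketch below (a standard textbook transform, not the paper's tool) converts a truth table into its positive-polarity Reed–Muller coefficients, i.e., the AND/EX-OR form, using the usual butterfly transform:

```python
# Positive-polarity Reed-Muller expansion over GF(2): coefficient i
# multiplies the product of the input variables selected by the set
# bits of i; the function is the XOR of all terms with coefficient 1.

def reed_muller_coeffs(truth_table: list[int]) -> list[int]:
    """Butterfly (Moebius) transform of a truth table of length 2^n."""
    c = list(truth_table)
    n = len(c)
    step = 1
    while step < n:
        for i in range(n):
            if i & step:
                c[i] ^= c[i ^ step]
        step <<= 1
    return c

if __name__ == "__main__":
    # f(a, b) = a OR b, truth table indexed by (b << 1) | a:
    print(reed_muller_coeffs([0, 1, 1, 1]))
    # -> [0, 1, 1, 1], i.e. f = a XOR b XOR ab
```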

    inSense: A Variation and Fault Tolerant Architecture for Nanoscale Devices

    Transistor technology scaling has been the driving force in improving the size, speed, and power consumption of digital systems. As devices approach atomic size, however, their reliability and performance are increasingly compromised due to reduced noise margins, difficulties in fabrication, and emergent nano-scale phenomena. Scaled CMOS devices, in particular, suffer from process variations such as random dopant fluctuation (RDF) and line edge roughness (LER), transistor degradation mechanisms such as negative-bias temperature instability (NBTI) and hot-carrier injection (HCI), and increased sensitivity to single event upsets (SEUs). Consequently, future devices may exhibit reduced performance, diminished lifetimes, and poor reliability. This research proposes a variation and fault tolerant architecture, the inSense architecture, as a circuit-level solution to the problems induced by the aforementioned phenomena. The inSense architecture entails augmenting circuits with introspective and sensory capabilities which are able to dynamically detect and compensate for process variations, transistor degradation, and soft errors. This approach creates "smart" circuits able to function despite the use of unreliable devices and is applicable to current CMOS technology as well as next-generation devices using new materials and structures. Furthermore, this work presents an automated prototype implementation of the inSense architecture targeted to CMOS devices, which is evaluated via implementation in the ISCAS '85 benchmark circuits. The automated prototype implementation is functionally verified and characterized: it is found that error detection capability (with error windows from ≈30–400 ps) can be added for less than 2% area overhead for circuits of non-trivial complexity. Single event transient (SET) detection capability (configurable with target set-points) is found to be functional, although it generally tracks the standard DMR implementation with respect to overheads.
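    As a loose numerical model of the error-window mechanism described above (not the actual inSense sensor circuitry), the sketch below flags a path whose transition, delayed by variation or aging, arrives within a configurable detection window before the capture edge; the clock period and delay values are made up for illustration:

```python
# Toy timing-window check: a transition arriving within `window_ps` of the
# capture edge (or after it) is flagged, mimicking a transition detector
# with a configurable error window.

def violates(path_delay_ps: float, clock_ps: float, window_ps: float) -> bool:
    """True if the transition lands inside the detection window."""
    return path_delay_ps > clock_ps - window_ps

if __name__ == "__main__":
    clock = 1000.0  # hypothetical 1 GHz capture period, in ps
    window = 400.0  # upper end of the ~30-400 ps range quoted above
    for delay in (550.0, 620.0, 700.0):  # made-up degraded path delays
        status = "flagged" if violates(delay, clock, window) else "ok"
        print(f"path delay {delay} ps -> {status}")
```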

    Advances in Architectures and Tools for FPGAs and their Impact on the Design of Complex Systems for Particle Physics

    The continual improvement of semiconductor technology has provided rapid advancements in device frequency and density. Designers of electronics systems for high-energy physics (HEP) have benefited from these advancements, transitioning many designs from fixed-function ASICs to more flexible FPGA-based platforms. Today’s FPGA devices provide significantly more resources than those available during the initial Large Hadron Collider design phase. To take advantage of the capabilities of future FPGAs in the next generation of HEP experiments, designers must not only anticipate further improvements in FPGA hardware, but also adopt design tools and methodologies that can scale along with that hardware. In this paper, we outline the major trends in FPGA hardware, describe the design challenges these trends will present to developers of HEP electronics, and discuss a range of techniques that can be adopted to overcome these challenges.

    Low-Cost and High-Reduction Approaches for Power Droop during Launch-On-Shift Scan-Based Logic BIST

    During at-speed test of high-performance sequential ICs using scan-based Logic BIST, the IC activity factor (AF) induced by the applied test vectors is significantly higher than that experienced during in-field operation. Consequently, power droop (PD) may take place during both the shift and capture phases, slowing down signal transitions in the circuit under test (CUT). At capture, this phenomenon is likely to be erroneously attributed to delay faults. As a result, a false test fail may be generated, with a consequent increase in yield loss. In this paper, we propose two approaches to reduce the PD generated at capture during at-speed test of sequential circuits with scan-based Logic BIST using the Launch-On-Shift scheme. Both approaches increase the correlation between adjacent bits of the scan chains with respect to conventional scan-based LBIST. This way, the AF of the scan chains at capture is reduced; consequently, the AF of the CUT at capture, and thus the PD at capture, is also reduced compared to conventional scan-based LBIST. The first approach, hereinafter referred to as the Low-Cost Approach (LCA), enables a 50% reduction in the worst-case magnitude of PD during conventional logic BIST. It requires a small cost in terms of area overhead (approximately 1.5% on average), and it does not increase the number of test vectors over conventional scan-based LBIST to achieve the same Fault Coverage (FC). Moreover, compared to three recent alternative solutions, LCA features a comparable AF in the scan chains at capture, while requiring lower test time and area overhead. The second approach, hereinafter referred to as the High-Reduction Approach (HRA), enables scalable PD reductions at capture of up to 87%, with limited additional costs in terms of area overhead and number of required test vectors for a given target FC over our LCA approach. In particular, compared to two of the three recent alternative solutions mentioned above, HRA enables a significantly lower AF in the scan chains during the application of test vectors, while requiring either a comparable area overhead or a significantly lower test time. Compared to the remaining alternative solution, HRA enables a similar AF in the scan chains at capture (approximately 90% lower than conventional scan-based LBIST), while requiring a significantly lower test time (on average, approximately 4.87 times fewer test vectors) and comparable area overhead (approximately 1.9% on average).
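    The mechanism both approaches share, raising the correlation between adjacent scan-chain bits to cut the activity factor at capture, can be checked numerically. The sketch below is not the paper's hardware, just a Monte Carlo comparison of the adjacent-bit activity factor for uncorrelated random fill versus fill whose bits repeat their neighbor with an assumed probability of 0.9:

```python
# Compare the scan-chain activity factor (fraction of adjacent bit pairs
# that differ) for uncorrelated random vectors versus vectors with forced
# adjacent-bit correlation. Chain length and repeat probability are made up.
import random

def activity_factor(vec: list[int]) -> float:
    """Fraction of adjacent scan-cell pairs holding opposite values."""
    diffs = sum(a != b for a, b in zip(vec, vec[1:]))
    return diffs / (len(vec) - 1)

def random_vector(n: int) -> list[int]:
    return [random.randint(0, 1) for _ in range(n)]

def correlated_vector(n: int, p_repeat: float = 0.9) -> list[int]:
    """Each bit repeats its neighbor with probability p_repeat."""
    v = [random.randint(0, 1)]
    for _ in range(n - 1):
        v.append(v[-1] if random.random() < p_repeat else 1 - v[-1])
    return v

if __name__ == "__main__":
    random.seed(1)
    n, trials = 1000, 100
    af_rand = sum(activity_factor(random_vector(n)) for _ in range(trials)) / trials
    af_corr = sum(activity_factor(correlated_vector(n)) for _ in range(trials)) / trials
    # Expect roughly 0.50 vs 0.10: higher adjacent-bit correlation cuts the AF.
    print(f"random AF ~ {af_rand:.2f}, correlated AF ~ {af_corr:.2f}")
```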