16 research outputs found

    New Techniques for Reliability Characterization of Electronic Circuits

    Integrated electronic systems are increasingly used in a wide range of applications and environments, from critical missions to low-cost consumer products. Information processing has been thoroughly integrated into everyday objects and activities, in the so-called ubiquitous computing paradigm. This wide distribution is caused mainly by the miniaturization of semiconductor devices (transistor channel length scaling from 180 nm in 1999 to 22 nm in 2012), which allows integrating a complete system on a single chip (SoC). However, there are many difficult challenges associated with continued cost reduction, size reduction, improved performance and improved power efficiency. One of these challenges is the reliability of these electronic systems. Important research efforts are aimed at improving the reliability of semiconductors. Manufacturing processes, intrinsic aging phenomena of components and environmental stress may cause internal defects and damage during the lifetime of a system, possibly causing misbehaviours or failures. In order to guarantee product quality and consumer satisfaction, it is necessary not only to discover faults as soon as possible in the manufacturing process, but also to continuously check for their absence throughout the product's lifetime.

    Today's systems have become increasingly complex to design and build, while the demand for reliability and cost-effective development continues. Reliability is one of the most important attributes in all these systems, including aerospace applications, real-time control, medical care, defence equipment, transportation, communication, entertainment products, agriculture, energy and environmental systems. Growing international competition has increased the need for all designers, managers, practitioners, scientists and engineers to ensure a high level of reliability of their products before release and during mission time, at the lowest cost. The interest in reliability has been growing in recent years, and this trend will continue during the next decade and beyond.

    With testers being expensive pieces of equipment and the cost of transistors continuously decreasing, it makes sense to use some of these low-cost transistors to replace the costly test tools whenever possible. The first low-cost approach we can think of is using the devices themselves to implement their own test. This is the underlying motivation of functional Software-Based Self-Test (SBST): a fast, powerful microprocessor, which has plenty of resources, can certainly help in its own testing procedure. While it has the outstanding advantages of enabling at-speed testing, zero area overhead and actually exercising the device's operation, this approach also has some drawbacks. Even if SBST is essentially suitable for online testing (and sometimes it is the only possible approach), it requires some dedicated system memory for the functional test data, which can grow very large. Moreover, some faults are functionally untestable, i.e., they cannot be detected exclusively by running software routines. For this reason, a combination of functional and structural test approaches is common practice. A second natural approach to low-cost testing is Design for Test (DfT): adding some extra (cheap) on-chip area specifically in charge of performing and managing tests. The DfT path started long ago, but it is still a key element of the 2012 International Technology Roadmap for Semiconductors (ITRS) [1] test roadmap.

    Different sorts of DfT enable the use of low-cost testers, contribute to the full checking of a device, and may also be reused for online testing purposes. Logic and memory Built-In Self-Test (BIST) schemes are common practice. Analogue DfT, even if not as advanced as its digital counterpart, is also an interesting strategy, especially when the analogue or mixed-signal device is integrated in a wider digital system such as a SoC. Finally, there are some fields where the use of external (and generally expensive) testers is mandatory. Diagnosis is one of the cases in which Automatic Test Equipment (ATE) is needed to store the huge amount of retrieved data and to drive the cyclic nature of the diagnosis procedure. In particular, even if memories are commonly tested by means of internal BIST structures, their diagnosis demands the use of a tester. Another interesting and blooming field is that of mixed energy-domain devices such as Micro Electro Mechanical Systems (MEMS). MEMS require unique testing apparatus applying both electrical and physical stimuli: movement, pressure, magnetic fields. Additionally, they not only need to be exhaustively tested but, in most cases, also calibrated.

    The work described in this thesis falls within the low-cost testing domain. Strategies for new and/or improved SBST, DfT and ATE mechanisms are proposed, implemented and evaluated. They deal mainly with memories, processors and mixed-signal devices (analogue-to-digital converters are our target devices) embedded in Systems-on-a-Chip, where standard communication protocols and wrappers are used to communicate with the device under test.
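
As a minimal illustration of the SBST principle summarized above (a sketch, not material from the thesis itself), the following C fragment shows how a processor can exercise one of its own functional units, here the ALU, with deterministic operands and compact the responses into a signature compared against a golden value; the pattern set, the compaction scheme and GOLDEN_SIGNATURE are assumptions made for the example.

```c
#include <stdint.h>

/* Hypothetical golden signature, computed once on a known-good device
 * (the value below is a placeholder for illustration only). */
#define GOLDEN_SIGNATURE 0xDEADBEEFu

/* Simple rotate-and-XOR compaction: fold each test response into a
 * 32-bit signature so that only one word has to be stored and checked. */
static uint32_t compact(uint32_t sig, uint32_t response)
{
    sig ^= response;
    return (sig << 1) | (sig >> 31);   /* rotate left by one bit */
}

/* SBST routine for the ALU: the CPU applies deterministic operand pairs
 * to its own arithmetic/logic instructions at speed and compacts the
 * results. Returns 0 on pass, nonzero on signature mismatch. */
int sbst_alu_test(void)
{
    /* Operand patterns chosen to toggle all bit positions; a real test
     * program would derive them from the gate-level fault list. */
    static const uint32_t patterns[] = {
        0x00000000u, 0xFFFFFFFFu, 0xAAAAAAAAu, 0x55555555u,
        0x0000FFFFu, 0xFFFF0000u, 0x00FF00FFu, 0xFF00FF00u
    };
    const unsigned n = sizeof patterns / sizeof patterns[0];
    uint32_t sig = 0;

    for (unsigned i = 0; i < n; i++) {
        for (unsigned j = 0; j < n; j++) {
            volatile uint32_t a = patterns[i], b = patterns[j];
            sig = compact(sig, a + b);   /* adder */
            sig = compact(sig, a & b);   /* AND   */
            sig = compact(sig, a ^ b);   /* XOR   */
            sig = compact(sig, a | b);   /* OR    */
        }
    }
    return sig != GOLDEN_SIGNATURE;
}
```

Such a routine needs nothing beyond the small amount of memory holding the test code and the golden signature, which is why the same skeleton can also be scheduled periodically for online testing.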

    An adaptive low-cost tester architecture supporting embedded memory volume diagnosis

    This paper describes the working principle and an implementation of a low-cost tester architecture supporting volume test and diagnosis of built-in self-test (BIST)-assisted embedded memory cores. The described tester architecture autonomously executes a diagnosis-oriented test program, adapting the stimuli at run time based on the collected test results. In order to effectively allow the tester architecture to interact with the devices under test with an acceptable time overhead, the approach exploits a special hardware module to manage the diagnostic process. Embedded static RAMs equipped with diagnostic BISTs and IEEE 1500 wrappers were selected as a case study; experimental results show the feasibility of the approach when a field-programmable gate array is available on the tester, and its effectiveness in terms of diagnosis time and required tester memory with respect to traditional testers executing diagnosis procedures by means of software running on the host computer.
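
A possible shape of such an adaptive diagnosis loop is sketched in C below. The wrapper/BIST accessors (bist_run, bist_failed, bist_failing_address, bist_failing_bitmap, log_failure) and the algorithm identifiers are hypothetical names introduced for illustration, not the interface used in the paper.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical accessors to the diagnostic BIST through the IEEE 1500
 * wrapper; names and semantics are assumptions made for this sketch. */
extern void     bist_run(uint32_t algorithm_id, uint32_t data_background);
extern bool     bist_failed(void);
extern uint32_t bist_failing_address(void);
extern uint32_t bist_failing_bitmap(void);
extern void     log_failure(uint32_t addr, uint32_t bitmap, uint32_t background);

/* Assumed identifiers for the available diagnostic March algorithms. */
enum { MARCH_FULL, MARCH_FOLLOWUP };

/* Adaptive diagnosis loop: start from a generic test and, only when a
 * failure is observed, select follow-up stimuli based on the collected
 * results, so that little data is exchanged with the host computer. */
void diagnose_memory(void)
{
    static const uint32_t backgrounds[] = { 0x0000u, 0xFFFFu, 0xAAAAu, 0x5555u };
    const unsigned n = sizeof backgrounds / sizeof backgrounds[0];

    bist_run(MARCH_FULL, backgrounds[0]);
    if (!bist_failed())
        return;   /* memory is fault-free: no further interaction needed */

    log_failure(bist_failing_address(), bist_failing_bitmap(), backgrounds[0]);

    /* Re-run targeted algorithms with different data backgrounds to
     * discriminate the fault type (stuck-at, transition, coupling, ...)
     * and to refine the failure bitmap returned to the host. */
    for (unsigned i = 1; i < n; i++) {
        bist_run(MARCH_FOLLOWUP, backgrounds[i]);
        if (bist_failed())
            log_failure(bist_failing_address(), bist_failing_bitmap(),
                        backgrounds[i]);
    }
}
```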

    SW-Based Transparent In-Field Memory Testing

    With continuous technology scaling, both quality and reliability are becoming major concerns for ICs due to extreme variations, non-ideal voltage scaling, etc. (not to mention the business pressure leading to shorter time to market). A one-time factory manufacturing test is not sufficient anymore, and in-field testing (e.g., periodically, at power-on, during idle times) is becoming mandatory. Due to the strict constraints of in-field test, transparent BIST is extremely attractive, since it minimizes test invasiveness. This paper presents a cheap, high-quality and practical SW-based transparent in-field test approach for memories within a system. Instead of using hardware BIST, the proposed scheme re-uses the CPU to perform in-field testing for all memories within the system. All quality metrics of the proposed solution (such as defect coverage, test time and code size) are analyzed. Case studies using the ARM instruction set architecture are provided to demonstrate the applicability of the solution. With the proposed approach no hardware BIST is necessary and speed-related faults are tackled, while results show that the test time complexity of the SW-based transparent tests is the same as that of the standard hardware BIST test. Moreover, the data previously present in the memory is not corrupted, at the cost of, on average, only a 30% increase in test program size with respect to non-transparent SW-based tests.
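
To give an idea of how a transparent SW-based memory test avoids corrupting the memory content, the following C sketch applies a March-like element that uses each word's current content (and its complement) as test data and restores it afterwards. The signature-based checking shown here is a simplified assumption made for the example, not the exact algorithm of the paper.

```c
#include <stddef.h>
#include <stdint.h>

/* Simple rotate-and-XOR signature used to compact read values;
 * the compaction scheme is an assumption of this sketch. */
static uint32_t sig_update(uint32_t sig, uint32_t value)
{
    sig ^= value;
    return (sig << 1) | (sig >> 31);
}

/* Transparent SW-based test of a memory region: every word is exercised
 * with its current content and the complement, and the original data is
 * restored, so the test can run in-field without corrupting the system
 * state. A "predicted" signature is built from a first read pass and an
 * "observed" one from the later reads; on fault-free memory they are
 * equal by construction. Returns 0 on pass. */
int transparent_mem_test(volatile uint32_t *base, size_t words)
{
    uint32_t predicted = 0, observed = 0;

    /* Pass 1: read each word and predict the values of the later reads. */
    for (size_t i = 0; i < words; i++) {
        uint32_t x = base[i];
        predicted = sig_update(predicted, ~x);  /* expected after w(~x) */
        predicted = sig_update(predicted, x);   /* expected after w(x)  */
    }

    /* Pass 2: March-like element using the content itself as data:
     * r(x), w(~x), r(~x), w(x), r(x) -- the original data is restored. */
    for (size_t i = 0; i < words; i++) {
        uint32_t x = base[i];
        base[i] = ~x;
        observed = sig_update(observed, base[i]);
        base[i] = x;
        observed = sig_update(observed, base[i]);
    }

    return observed != predicted;
}
```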

    A SBST strategy to test microprocessors' branch target buffer

    A Branch Target Buffer (BTB) is a mechanism supporting speculative execution in order to overcome the performance penalty caused by branch instructions in pipelined microprocessors. Since the BTB is an intrinsically fault-tolerant unit, it is hard to achieve good fault coverage resorting to plain functional testing methods. In this paper we analyze the causes of low functional testability and propose some techniques able to effectively face these issues. In particular, we describe a strategy to perform SBST on fully associative BTB units. The unit's general structure is analyzed, a suitable test program is proposed and the strategy to observe the test responses is explained. The feasibility and effectiveness of the proposed approach are shown on a MIPS-like processor.
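
The C sketch below conveys only the flavour of such a test program: the actual strategy must be written at the assembly level, so that branch instructions sit at addresses mapping onto the desired BTB entries, and the observation mechanism assumed here (a cycle counter compared against a golden count) is a placeholder, as are read_cycle_counter, BTB_ENTRIES and GOLDEN_CYCLES.

```c
#include <stdint.h>

/* Hypothetical cycle/performance-counter access; the observation
 * mechanism actually used in the paper is abstracted behind this call. */
extern uint32_t read_cycle_counter(void);

#define BTB_ENTRIES   64        /* assumed BTB size                   */
#define GOLDEN_CYCLES 12345u    /* placeholder fault-free cycle count */

/* SBST-style exercise of the branch speculation logic: a run of
 * data-dependent conditional branches fills the BTB, and the same code
 * is executed again so that correct predictions shorten its execution.
 * A fault in the BTB alters the number of mispredictions and therefore
 * the measured cycle count. Returns 0 on pass. */
int sbst_btb_test(void)
{
    volatile uint32_t sink = 0;
    uint32_t start = read_cycle_counter();

    for (int pass = 0; pass < 2; pass++) {       /* pass 0 trains the BTB */
        for (int i = 0; i < BTB_ENTRIES; i++) {
            if (sink < 0x80000000u)              /* conditional branch #1 */
                sink += (uint32_t)i;
            else
                sink -= (uint32_t)i;
            if ((sink & 1u) == 0u)               /* conditional branch #2 */
                sink ^= 0x5Au;
        }
    }

    uint32_t cycles = read_cycle_counter() - start;

    /* A deviation from the golden count reveals misbehaviour in the
     * speculation logic, possibly caused by BTB faults. */
    return cycles != GOLDEN_CYCLES;
}
```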