
    Dynamic scan chains: a novel architecture to lower the cost of VLSI test

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 61-64). Rapid advances in the semiconductor industry have led to smaller and cheaper integrated circuit (IC) components. As designs become larger and more complex, more test data is required to test them, resulting in longer test application times and therefore a higher cost of testing each chip. This thesis describes an architecture, named Dynamic Scan, that reduces this cost by reducing the test data volume and, consequently, the test application time. The Dynamic Scan architecture partitions the scan chains of an IC design into several segments separated by multiplexers. The multiplexers allow a particular segment to be bypassed or included during test application on the automatic test equipment. Optimality criteria for partitioning scan chains into segments, as well as a partitioning algorithm based on these criteria, are also introduced. According to our experimental results, Dynamic Scan provides almost a factor-of-five reduction in test data volume and test application time; theoretical results reach as much as a factor-of-ten reduction compared to classical scan methodologies. By Nodari S. Sitchinava. M.Eng.
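    A minimal Python sketch of the segment-bypass idea follows; the segment sizes, the single include/bypass mask per pattern, and the cycle-count model are hypothetical illustrations, not details taken from the thesis.

        # Illustrative model of a "dynamic scan" chain: the chain is split into
        # segments, and per-segment multiplexers either include a segment in the
        # shift path or bypass it. Segment sizes and the control scheme are
        # made up; they only illustrate the test-data/time reduction.

        def shift_cycles(segment_lengths, include_mask):
            """Cycles needed to load one pattern when bypassed segments are skipped."""
            return sum(n for n, used in zip(segment_lengths, include_mask) if used)

        segments = [120, 80, 200, 100]          # flip-flop counts per segment (hypothetical)
        full_chain = shift_cycles(segments, [True] * len(segments))

        # Suppose a given pattern only needs to control/observe segments 0 and 2.
        partial = shift_cycles(segments, [True, False, True, False])

        print(f"full scan load: {full_chain} cycles, dynamic scan load: {partial} cycles")
        print(f"reduction: {full_chain / partial:.1f}x for this pattern")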

    High Quality Delay Testing Scheme for a Self-Timed Microprocessor

    The popularity of the Internet and the huge amount of data transferred between devices nowadays require very powerful servers that consume a great deal of power. Since higher power consumption means higher costs for companies, demand for power-efficient processors is increasing. One way to increase the power efficiency of processors is to adapt the processing speed and chip activity to the required computation load. Self-timed or asynchronous processors are one of the solutions that apply this principle of activity on demand. However, their unconventional design methodology introduces several challenges in terms of testability and design automation. This work focuses on developing a high-quality delay test for a specific self-timed processor architecture called the AnARM. The proposed delay test focuses on catching effective small-delay defects (SDDs) in the AnARM by taking advantage of built-in configurable delay lines. Such defects are known to escape one of the most commonly used delay fault models (the transition delay fault model). This work mainly targets effective SDDs that can escape transition delay fault testing yet are large enough to fail the circuit under normal operating conditions. At the same time, catching very small delay defects is avoided, when possible, to avoid falsely failing functional chips. To build the high-quality delay test, this work first develops an SDD test quality metric that is better suited to circuits with adaptable speeds. It then builds a delay test optimizer that adapts the built-in delay line speeds to a preexisting at-speed pattern set to create a high-quality SDD test. This work presents a novel SDD test quality metric called the weighted slack percentage (WeSPer), along with a new SDD testing model (named the ideal SDD test model). WeSPer is built to be a flexible metric capable of adapting to the availability of information about the circuit under test and the test environment. Since the AnARM can use multiple test speeds, the WeSPer computation takes special care in assessing the effect of test frequency changes on test quality. In particular, care is taken to avoid overtesting the circuit, since overtesting would cause circuits under test to fail due to defects that are too small to affect their functionality in their present state. A computation framework is built to compute WeSPer and compare it with other existing metrics in the literature over a large set of process-voltage-temperature points. Simulations are done on a selected set of known benchmark circuits synthesized in the 28nm FD-SOI technology from STMicroelectronics.
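    The exact WeSPer definition is given in the thesis; the Python sketch below only illustrates the general idea of a slack-weighted delay-test quality metric, with a hypothetical weighting function and made-up fault data.

        # Hypothetical sketch of a slack-weighted delay-test quality metric in the
        # spirit of WeSPer: detected faults are weighted by the slack of the path
        # through which they are tested, so detections along low-slack (timing-
        # critical) paths count more. The actual WeSPer formula may differ.

        def weighted_slack_percentage(faults, clock_period):
            """faults: list of (tested_slack_ns, detected) pairs for one pattern set."""
            def weight(slack):
                # Smaller slack -> larger weight; a fault tested with zero slack gets
                # full weight, one tested with slack == clock_period gets none.
                return max(0.0, 1.0 - slack / clock_period)

            total = sum(weight(s) for s, _ in faults)
            caught = sum(weight(s) for s, detected in faults if detected)
            return 100.0 * caught / total if total else 0.0

        faults = [(0.1, True), (0.4, True), (0.9, False), (1.2, True)]  # slack in ns (made up)
        print(f"quality at 1.5 ns period: {weighted_slack_percentage(faults, 1.5):.1f}%")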

    New techniques for functional testing of microprocessor based systems

    Electronic devices may be affected by failures, for example due to physical defects. These defects may be introduced during the manufacturing process, as well as during the normal operating life of the device due to aging. Detecting all these defects is not a trivial task, especially in complex systems such as processor cores. Safety-critical applications, however, do not tolerate failures, which is why such devices must be tested to guarantee correct behavior at all times. Moreover, testing is a key parameter for assessing the quality of a manufactured product. Consolidated testing techniques are based on special Design for Testability (DfT) features added to the original design to improve test effectiveness. Design, integration, and usage of the available DfT for testing purposes are fully supported by commercial EDA tools, hence approaches based on DfT are the standard solutions adopted by silicon vendors for testing their devices. Tests exploiting the available DfT, such as scan chains, manipulate the internal state of the system in ways that differ from the normal functional mode, passing through configurations that are functionally unreachable. Alternative solutions that do not violate the functional mode are defined as functional tests. In microprocessor-based systems, functional testing techniques include software-based self-test (SBST), i.e., a piece of software (referred to as a test program) that is uploaded into the system's available memory and executed, with the purpose of exciting a specific part of the system and observing the effects of possible defects affecting it. SBST has been widely studied by the research community for years, but its adoption by industry is quite recent. My research activities have mainly focused on the industrial perspective of SBST. The problem of providing an effective development flow and guidelines for integrating SBST into the available operating systems has been tackled, and results have been provided on microprocessor-based systems for the automotive domain. New algorithms have also been introduced with respect to state-of-the-art approaches, which can be systematically implemented to enrich SBST suites of test programs for modern microprocessor-based systems. The proposed development flow and algorithms are currently being employed in real electronic control units for automotive products. Moreover, a special hardware infrastructure purposely embedded in modern devices for interconnecting the numerous on-board instruments has also been a focus of my research. This solution is known as reconfigurable scan networks (RSNs), and its practical adoption is growing fast as new standards have been created. Test and diagnosis methodologies have been proposed targeting specific RSN features, aimed at checking whether the reconfigurability of such networks has been corrupted by defects and, if so, at identifying the defective elements of the network. The contribution of my work in this field has also been included in the first suite of public-domain benchmark networks.
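    As a rough illustration of the SBST concept, the Python sketch below mimics a test program that exercises an arithmetic/logic unit with deterministic operands and compacts the results into a signature; a real test program would be written in the target processor's assembly, and the operand set and signature scheme used here are hypothetical.

        # Minimal sketch of the software-based self-test (SBST) idea: a test
        # program exercises a processor unit (integer add/xor/rotate standing in
        # for the ALU) with deterministic operands and compacts the results into
        # a signature compared against a precomputed golden value.

        MASK = 0xFFFFFFFF

        def alu_self_test(operands):
            signature = 0
            for a, b in operands:
                signature = (signature + ((a + b) & MASK)) & MASK
                signature = (signature ^ ((a ^ b) & MASK)) & MASK
                signature = ((signature << 1) | (signature >> 31)) & MASK  # rotate to mix results
            return signature

        operands = [(0xAAAAAAAA, 0x55555555), (0xFFFFFFFF, 0x00000001), (0x12345678, 0x0F0F0F0F)]
        GOLDEN = alu_self_test(operands)   # in practice, captured once on a known-good device
        assert alu_self_test(operands) == GOLDEN, "signature mismatch -> possible defect"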

    Design-for-Test and Test Optimization Techniques for TSV-based 3D Stacked ICs

    <p>As integrated circuits (ICs) continue to scale to smaller dimensions, long interconnects</p><p>have become the dominant contributor to circuit delay and a significant component of</p><p>power consumption. In order to reduce the length of these interconnects, 3D integration</p><p>and 3D stacked ICs (3D SICs) are active areas of research in both academia and industry.</p><p>3D SICs not only have the potential to reduce average interconnect length and alleviate</p><p>many of the problems caused by long global interconnects, but they can offer greater design</p><p>flexibility over 2D ICs, significant reductions in power consumption and footprint in</p><p>an era of mobile applications, increased on-chip data bandwidth through delay reduction,</p><p>and improved heterogeneous integration.</p><p>Compared to 2D ICs, the manufacture and test of 3D ICs is significantly more complex.</p><p>Through-silicon vias (TSVs), which constitute the dense vertical interconnects in a</p><p>die stack, are a source of additional and unique defects not seen before in ICs. At the same</p><p>time, testing these TSVs, especially before die stacking, is recognized as a major challenge.</p><p>The testing of a 3D stack is constrained by limited test access, test pin availability,</p><p>power, and thermal constraints. Therefore, efficient and optimized test architectures are</p><p>needed to ensure that pre-bond, partial, and complete stack testing are not prohibitively</p><p>expensive.</p><p>Methods of testing TSVs prior to bonding continue to be a difficult problem due to test</p><p>access and testability issues. Although some built-in self-test (BIST) techniques have been</p><p>proposed, these techniques have numerous drawbacks that render them impractical. In this dissertation, a low-cost test architecture is introduced to enable pre-bond TSV test through</p><p>TSV probing. This has the benefit of not needing large analog test components on the die,</p><p>which is a significant drawback of many BIST architectures. Coupled with an optimization</p><p>method described in this dissertation to create parallel test groups for TSVs, test time for</p><p>pre-bond TSV tests can be significantly reduced. The pre-bond probing methodology is</p><p>expanded upon to allow for pre-bond scan test as well, to enable both pre-bond TSV and</p><p>structural test to bring pre-bond known-good-die (KGD) test under a single test paradigm.</p><p>The addition of boundary registers on functional TSV paths required for pre-bond</p><p>probing results in an increase in delay on inter-die functional paths. This cost of test</p><p>architecture insertion can be a significant drawback, especially considering that one benefit</p><p>of 3D integration is that critical paths can be partitioned between dies to reduce their delay.</p><p>This dissertation derives a retiming flow that is used to recover the additional delay added</p><p>to TSV paths by test cell insertion.</p><p>Reducing the cost of test for 3D-SICs is crucial considering that more tests are necessary</p><p>during 3D-SIC manufacturing. To reduce test cost, the test architecture and test</p><p>scheduling for the stack must be optimized to reduce test time across all necessary test</p><p>insertions. This dissertation examines three paradigms for 3D integration - hard dies, firm</p><p>dies, and soft dies, that give varying degrees of control over 2D test architectures on each</p><p>die while optimizing the 3D test architecture. 
Integer linear programming models are developed</p><p>to provide an optimal 3D test architecture and test schedule for the dies in the 3D</p><p>stack considering any or all post-bond test insertions. Results show that the ILP models</p><p>outperform other optimization methods across a range of 3D benchmark circuits.</p><p>In summary, this dissertation targets testing and design-for-test (DFT) of 3D SICs.</p><p>The proposed techniques enable pre-bond TSV and structural test while maintaining a</p><p>relatively low test cost. Future work will continue to enable testing of 3D SICs to move</p><p>industry closer to realizing the true potential of 3D integration.</p>Dissertatio
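    The dissertation's optimization is formulated as integer linear programs; the toy Python sketch below only illustrates the flavor of the underlying resource-allocation problem (splitting a fixed test-pin budget among dies to minimize a serial test schedule), with made-up test-data volumes and a simplified test-time model.

        # Toy version of the test-resource allocation problem solved with ILP in
        # the dissertation: split a fixed number of stack-level test pins among
        # the dies so that the total (serial) post-bond test time is minimized.
        # Die test-data volumes, the pin budget, and the serial-schedule
        # assumption are all illustrative.

        from itertools import product

        def test_time(volume_bits, width):
            # Wider test access shortens a die's test roughly in proportion.
            return volume_bits / width

        def best_allocation(die_volumes, total_width):
            best = None
            for widths in product(range(1, total_width + 1), repeat=len(die_volumes)):
                if sum(widths) > total_width:
                    continue
                t = sum(test_time(v, w) for v, w in zip(die_volumes, widths))
                if best is None or t < best[0]:
                    best = (t, widths)
            return best

        volumes = [4.0e6, 2.5e6, 1.0e6]    # test data per die, in bits (made up)
        time, widths = best_allocation(volumes, 16)
        print(f"widths per die: {widths}, total test time ~ {time:.0f} cycles")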

    Concurrent Online Testing for Many Core Systems-on-Chips

    Shrinking transistor sizes have introduced new challenges and opportunities for system-on-chip (SoC) design and reliability. Smaller transistors are more susceptible to early lifetime failure and electronic wear-out, greatly reducing their reliable lifetimes. However, smaller transistors will also allow SoCs to contain hundreds of processing cores and other infrastructure components, with the potential for increased reliability through massive structural redundancy. Concurrent online testing (COLT) can provide sufficient reliability and availability to systems with this redundancy. COLT manages the process of testing a subset of processing cores while the rest of the system remains operational. This can be considered a temporary, graceful degradation of system performance that increases reliability while maintaining availability. In this dissertation, techniques to assist COLT are proposed and analyzed. These techniques focus on two major aspects of COLT feasibility: recovery time and test delivery costs. To reduce the time between failure and recovery, and thereby increase system availability, an anomaly-based test triggering unit (ATTU) is proposed to initiate COLT when anomalous network behavior is detected. Previous COLT techniques have relied on initiating tests periodically; however, the testing period is based on a device's mean time between failures (MTBF), and calculating MTBF is exceedingly difficult and imprecise. To address the test delivery costs associated with COLT, a distributed test vector storage (DTVS) technique is proposed to eliminate the dependency of test delivery costs on core location. Previous COLT techniques have relied on a single location to store test vectors, and it has been demonstrated that centralized storage of tests scales poorly as the number of cores per SoC grows. Assuming that the SoC organizes its processing cores in a regular topology, DTVS uses an interleaving technique to optimally distribute the test vectors across the entire chip. DTVS is analyzed both empirically and analytically, and a testing protocol using DTVS is described. COLT is only feasible if the applications running concurrently are largely unaffected; the effect of COLT on application execution time is therefore also measured in this dissertation, and an application-aware COLT protocol is proposed and analyzed. Application interference is greatly reduced through this technique.
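    The Python sketch below illustrates the intuition behind DTVS on a regular mesh: interleaving test vectors across the cores removes the dependence of average delivery distance on core location. The mesh size, vector count, and round-robin placement are illustrative and not the protocol proposed in the dissertation.

        # Compare average test-vector delivery distance for a centralized store
        # at one corner versus vectors interleaved round-robin over an N x N mesh.

        def manhattan(a, b):
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        def avg_delivery_hops(mesh, vectors, storage_of):
            cores = [(x, y) for x in range(mesh) for y in range(mesh)]
            total = sum(manhattan(core, storage_of(v, mesh))
                        for core in cores for v in range(vectors))
            return total / (len(cores) * vectors)

        MESH, VECTORS = 8, 64
        central = avg_delivery_hops(MESH, VECTORS, lambda v, m: (0, 0))
        interleaved = avg_delivery_hops(MESH, VECTORS,
                                        lambda v, m: (v % m, (v // m) % m))  # round-robin placement
        print(f"average hops, centralized: {central:.2f}  interleaved: {interleaved:.2f}")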

    Reliable Design of Three-Dimensional Integrated Circuits


    Improvement of hardware reliability with aging monitors


    Fault Detection Methodology for Caches in Reliable Modern VLSI Microprocessors based on Instruction Set Architectures

    This PhD thesis introduces a low-cost fault detection methodology for small embedded cache memories that is based on modern Instruction Set Architectures and is applied with Software-Based Self-Test (SBST) routines. The proposed methodology applies March tests through software to detect both storage faults, when applied to caches that comprise Static Random Access Memories (SRAM) only, e.g. L1 caches, and comparison faults, when applied to caches that also comprise Content Addressable Memories (CAM), e.g. fully associative Translation Lookaside Buffers (TLBs). The methodology can be applied to all three cache associativity organizations (direct mapped, set-associative, and fully associative) and does not depend on the cache write policy. It leverages existing powerful mechanisms of modern ISAs by utilizing instructions that we call, in this PhD thesis, Direct Cache Access (DCA) instructions, and it exploits the native performance monitoring hardware and the trap handling mechanisms available in modern microprocessors. Furthermore, the proposed methodology applies March compare operations when needed (for CAM arrays) and verifies the test result with a compact response to comply with periodic on-line testing needs. Finally, a multithreaded optimization of the proposed methodology that targets multithreaded, multicore architectures is also presented in this thesis.
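    As an illustration of applying a March algorithm through software, the Python sketch below runs March C- over a plain array standing in for a cache data array; a real implementation would issue the ISA's direct cache access instructions and compact the result into a short response.

        # Software March test over a small memory array. The March C- element
        # sequence is standard; the Python list is only a stand-in for the cache
        # data array accessed through DCA-style instructions.

        def march_c_minus(mem, zero=0x00000000, one=0xFFFFFFFF):
            def element(order, expect, write):
                for i in order:
                    if mem[i] != expect:     # read and verify
                        return False
                    mem[i] = write           # write complementary data
                return True

            n = len(mem)
            for i in range(n):               # M0: up (w0)
                mem[i] = zero
            return (element(range(n), zero, one)                  # M1: up   (r0, w1)
                    and element(range(n), one, zero)              # M2: up   (r1, w0)
                    and element(reversed(range(n)), zero, one)    # M3: down (r0, w1)
                    and element(reversed(range(n)), one, zero)    # M4: down (r1, w0)
                    and element(range(n), zero, zero))            # M5:      (r0)

        cache_words = [0] * 256
        print("cache data array passes March C-:", march_c_minus(cache_words))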

    Multi-level simulation of nano-electronic digital circuits on GPUs

    Simulation of circuits and faults is an essential part of design and test validation for contemporary nano-electronic digital integrated CMOS circuits. Shrinking technology processes with smaller feature sizes and strict performance and reliability requirements demand not only detailed validation of the functional properties of a design, but also accurate validation of non-functional aspects, including timing behavior. However, due to the rising complexity of circuit behavior and the steady growth of designs in transistor count, timing-accurate simulation of current designs requires computational effort that can only be handled through proper abstraction and a high degree of parallelization. This work presents a simulation model for scalable and accurate timing simulation of digital circuits on data-parallel graphics processing unit (GPU) accelerators. By providing compact modeling and data structures, as well as by exploiting multiple dimensions of parallelism, the simulation model enables not only fast and timing-accurate simulation at logic level, but also massively parallel simulation with switch-level accuracy. The model facilitates extensions for fast and efficient fault simulation of small delay faults at logic level, as well as of first-order parametric and parasitic faults at switch level. With the parallelization on GPUs, detailed and scalable simulation is enabled that is applicable even to multi-million-gate designs. In this way, comprehensive analyses of realistic timing-related faults in the presence of process and parameter variations are enabled for the first time. Additional simulation efficiency is achieved by merging the presented methods into a unified simulation model that combines the unique advantages of the different levels of abstraction in a mixed-abstraction multi-level simulation flow to reach even higher speedups. Experimental results show that the implemented parallel approach achieves unprecedented simulation throughput as well as high speedup compared to conventional timing simulators. The underlying model scales to multi-million-gate designs and gives detailed insights into the timing behavior of digital CMOS circuits, thereby enabling large-scale applications that aid even highly complex design and test validation tasks.
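    A small CPU-side sketch of the pattern-parallel dimension exploited by such GPU simulators is shown below: every gate is evaluated over a whole array of input patterns at once, the way a GPU kernel evaluates one pattern per thread. NumPy stands in for the GPU here, and the tiny two-valued zero-delay netlist is illustrative rather than the thesis's timing-accurate logic/switch-level model.

        # Pattern-parallel evaluation of a tiny combinational netlist: each gate
        # is applied to an array of 100,000 patterns in one vectorized operation.

        import numpy as np

        rng = np.random.default_rng(0)
        patterns = rng.integers(0, 2, size=(4, 100_000), dtype=np.uint8)  # 4 inputs x 100k patterns
        a, b, c, d = patterns

        # Gates evaluated in topological order over full pattern arrays.
        n1 = a & b          # AND gate
        n2 = c | d          # OR gate
        n3 = n1 ^ n2        # XOR gate
        out = n3 ^ 1        # inverter

        print("output value counts over all patterns:", np.bincount(out))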