
    Effective Launch-to-Capture Power Reduction for LOS Scheme with Adjacent-Probability-Based X-Filling

    It has become necessary to reduce power during LSI testing. In particular, during at-speed testing, excessive power consumed during the Launch-To-Capture (LTC) cycle causes serious issues that may lead to the overkill of defect-free logic ICs. Many successful test generation approaches to reduce IR-drop and/or power supply noise during LTC for the launch-off-capture (LOC) scheme have been proposed, and several X-filling techniques have proven especially effective. For X-filling in the launch-off-shift (LOS) scheme, however, adjacent-fill (originally proposed for shift-in power reduction) is still used frequently. In this work, we propose a novel X-filling technique for the LOS scheme, called Adjacent-Probability-based X-Filling (AP-fill), which reduces LTC power more than adjacent-fill. We incorporate AP-fill into a post-ATPG test modification flow consisting of test relaxation and X-filling, to avoid fault coverage loss and test vector count inflation. Experimental results for the larger ITC'99 circuits show that the proposed AP-fill technique achieves a higher power reduction ratio than 0-fill, 1-fill, and adjacent-fill. 2011 Asian Test Symposium, 20-23 November 2011, New Delhi, India.
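    The fill rules being compared here are simple enough to sketch. Below is a minimal Python illustration of classic adjacent-fill (each X repeats the nearest previously specified bit, the usual shift-power heuristic) next to a generic probability-guided fill; the latter is only an illustrative stand-in for the flavor of AP-fill, not the paper's actual cost function, and `p_one` (per-bit signal probabilities) is an assumed input.

```python
# Minimal sketch of two X-filling strategies for a scan vector whose
# unspecified bits are marked 'X'. adjacent_fill repeats the nearest
# previously specified value (the classic shift-power heuristic);
# probability_fill is an illustrative probability-guided variant, NOT
# the paper's exact AP-fill cost function.

def adjacent_fill(vector: str) -> str:
    bits = list(vector)
    last = '0'                      # default when the vector starts with Xs
    for i, b in enumerate(bits):
        if b == 'X':
            bits[i] = last          # copy the most recent specified bit
        else:
            last = b
    return ''.join(bits)

def probability_fill(vector: str, p_one: list) -> str:
    # Fill each X with the value its bit position is more likely to
    # hold, given per-bit signal probabilities (assumed to come from a
    # separate probability analysis).
    return ''.join(('1' if p_one[i] >= 0.5 else '0') if b == 'X' else b
                   for i, b in enumerate(vector))

if __name__ == "__main__":
    print(adjacent_fill("1XX0XX1"))   # -> 1110001
    print(probability_fill("1XX0XX1", [0.5, 0.9, 0.1, 0.5, 0.2, 0.8, 0.5]))  # -> 1100011
```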

    A survey of scan-capture power reduction techniques

    With the advent of sub-nanometer geometries, integrated circuits (ICs) must be checked for newer defects. While scan-based architectures help detect these defects using newer fault models, test data inflates, increasing test time and test cost. An automatic test pattern generator (ATPG) exercises multiple fault sites simultaneously to reduce test data, which causes elevated switching activity during the capture cycle. The switching activity results in an IR drop exceeding the device under test (DUT) specification. An increase in IR drop leads to failing patterns and may cause good DUTs to fail the test. The problem is severe during at-speed scan testing, which uses a functional rated clock with a high frequency for the capture operation. Researchers have proposed several techniques to reduce capture power, using various methods including the reduction of switching activity. This paper reviews the recently proposed techniques. The principle, algorithm, and architecture used in each are discussed, along with key advantages and limitations. In addition, it provides a classification of the techniques based on the method used and its application. The goal is to present a survey of the techniques and prepare a platform for future development in capture power reduction during scan testing.

    Efficient Path Delay Test Generation with Boolean Satisfiability

    This dissertation focuses on improving the accuracy and efficiency of path delay test generation using a Boolean satisfiability (SAT) solver. As part of this research, one of the most commonly used SAT solvers, MiniSat, was integrated into the path delay test generator CodGen. A mixed structural-functional approach was implemented in CodGen, where the longest paths are detected using the K Longest Paths Per Gate (KLPG) algorithm and path justification and dynamic compaction are handled by the SAT solver. Advanced techniques were implemented in CodGen to further speed up SAT-based path delay test generation using knowledge of the circuit structure. SAT solvers are inherently unaware of circuit structure, and significant speedup can be achieved by providing structural information about the circuit to the solver. The advanced techniques explored include: Dynamic SAT Solving (DSS), Circuit Observability Don't Care (Cir-ODC), SAT-based static learning, dynamic learnt clause management, and Approximate Circuit Observability Don't Care (ACODC). Both ISCAS 89 and ITC 99 benchmarks as well as industrial circuits were used to demonstrate that the performance of CodGen was significantly improved with MiniSat and the use of circuit structure.
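    The translation step such SAT-based generators build on is standard: each gate of the netlist becomes a handful of CNF clauses that a solver such as MiniSat can consume. A minimal sketch (helper names are ours, not CodGen's) of the Tseitin-style encoding of a 2-input AND gate, plus a tiny DIMACS emitter:

```python
# Minimal sketch of the Tseitin-style CNF encoding of a 2-input AND
# gate, the kind of translation a SAT-based test generator performs
# before handing justification obligations to a solver like MiniSat.
# Variable numbering and helper names are illustrative, not CodGen's.

def and_gate_clauses(a: int, b: int, y: int) -> list:
    # y <-> (a AND b) expands to exactly three clauses:
    return [
        [-a, -b, y],   # a & b  -> y
        [a, -y],       # y -> a
        [b, -y],       # y -> b
    ]

def to_dimacs(clauses: list, num_vars: int) -> str:
    # Standard DIMACS text so the clauses can be fed to any SAT solver.
    lines = [f"p cnf {num_vars} {len(clauses)}"]
    lines += [" ".join(map(str, c)) + " 0" for c in clauses]
    return "\n".join(lines)

if __name__ == "__main__":
    cnf = and_gate_clauses(1, 2, 3)
    cnf.append([3])    # unit clause: justify the gate output y = 1
    print(to_dimacs(cnf, 3))
```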

    Test and Diagnosis of Integrated Circuits

    The ever-increasing growth of the semiconductor market results in increasing complexity of digital circuits. Smaller, faster, cheaper, and lower power consumption are the main challenges in the semiconductor industry. The reduction of transistor size and the latest packaging technologies (e.g., System-On-a-Chip, System-In-Package, Through-Silicon-Via 3D Integrated Circuits) allow the semiconductor industry to meet these challenges. Although producing such advanced circuits can benefit users, the manufacturing process is becoming finer and denser, making chips more prone to defects. The work presented in the HDR manuscript addresses the challenges of test and diagnosis of integrated circuits. It covers:
    - power-aware test;
    - test of low-power devices;
    - fault diagnosis of digital circuits.

    Aging Monitoring Methodology for Built-In Self-Test Applications

    Master's dissertation, Electrical and Electronic Engineering, Instituto Superior de Engenharia, Universidade do Algarve, 2013.

    The high integration level achieved, as well as the complexity and performance enhancements of new nanometer technologies, makes IC (Integrated Circuit) products very difficult to test. Moreover, long-term operation brings cumulative aging degradation: new processes and materials lead to emerging defect phenomena, and the consequence is products with increased variability in their behaviour, more susceptible to delay faults, and with a reduced expected lifecycle. The main objectives of this thesis are twofold, as explained in the following. First, a new software tool is presented to generate HDL (Hardware Description Language) code for BIST (Built-In Self-Test) structures targeting delay faults, and to insert the new self-test functionality into generic sequential CMOS circuits. The BIST methodology implements a scan-based BIST approach, using a new BIST controller to implement the Launch-On-Shift (LOS) and Launch-On-Capture (LOC) delay-fault techniques. Second, it is shown that multi-VDD tests in circuits with BIST infrastructures can detect gross delay faults during in-field operation, and can consequently be used as an aging sensor methodology during the circuits' lifecycle. A discrete set of multi-VDD BIST sessions generates a Voltage Signature Collection (VSC), and the presence of a delay fault (or a physical defect) modifies the VSC, providing the aging sensor capability. The proposed Design for Testability (DFT) method and tool are demonstrated with extensive SPICE simulation using three ITC'99 benchmark circuits.

    The work began with a study of the state of the art in this area. Chapter 2 presents the main DfT (Design for Testability) techniques available to and used by industry, namely Scan Path (SP), BIST, and the scan techniques for delay faults, LOS and LOC. Chapter 3, completing the state of the art, studies the phenomena that cause aging in digital circuits, namely NBTI (Negative Bias Temperature Instability), considered the most relevant factor in integrated circuit aging (especially in nanometer technologies).

    Work on the first objective started by defining the behavior of the BIST structures and how they interconnect. The behavior was described block by block in behavioral VHDL at the RTL (Register Transfer Level) and validated by simulation in ModelSim. The behavioral description was then synthesized with Synopsys tools, in collaboration with INESC-ID in Lisbon (a partner institution in this research), yielding a gate-level netlist stored in the Verilog hardware description language. Two descriptions of the BIST circuitry were thus obtained: a behavioral one in VHDL and a structural one in Verilog (the structural Verilog description later enables aging simulation and analysis). The new BIST structure is based on the classic BIST model but changes the generation of test vectors and the control and application of those vectors to the circuit, with the goal of increasing fault detection and enabling delay-fault testing. It comprises three LFSRs (Linear Feedback Shift Registers): one generates the pseudo-random vectors for the circuit's primary inputs, another generates the vectors for the scan path input, and the last acts as a counter controlling the number of bits shifted into the scan path. The controller was specifically designed to control a test-per-scan strategy (that is, a test based on the circuit's existing scan path) and has a state encoding that implements the LOS and LOC delay-fault test strategies. In the output section of the new BIST model, compaction follows the same principle as the traditional model, in this case using a MISR (Multiple Input Signature Register).

    Still under the first objective, the BISTGen tool was developed to automate the generation of the BIST structures above, in both description types, and to insert them automatically into a circuit under test (CUT). The software handles two kinds of circuit information, a behavioral description in VHDL and a structural description in Verilog, and outputs the hardware description with all blocks of the structure in a single file, in one language (Verilog or VHDL) chosen beforehand by the user. For the LFSRs and the MISR, the program lets the user choose linear or modular LFSRs (also known as Fibonacci or Galois) and automatically selects from a database the feedback taps that define the LFSR's primitive polynomial. A database structure also manages the names and the number of inputs and outputs of the CUT, simplifying the renaming the user may have to perform; making the CUT's input and output names known to the program is important because those names may come in any language or dialect and may not match the usual standard names. Regarding the two input languages, in the VHDL case the final file after BIST insertion always has a similar structure, whatever the input file, varying only with the CUT's hardware. In the Verilog case, the program offers two forms of output: a file organized in modules, similar to the VHDL case, or, on request, a unified file without sub-modules, so that the final file contains a single structure, easing simulation and aging analysis in the following steps.

    For the second objective, building on previous work on methodologies for detecting delay faults in circuits with BIST, a test methodology was defined to assess, during the circuits' useful life, how they are aging: an aging monitoring methodology for circuits with BIST. A fundamental requirement for this objective is the ability to predict how the circuit will age. For this inherently subjective task, AgingCalc, a tool developed at ISE-UAlg in a previous master's thesis, was used. The tool starts with the user defining the operating probabilities of the circuit's primary inputs (the probability of each input being '0' or '1'); this is the subjective step in aging analysis, since it is impossible to predict how a circuit will actually be used. From these operating probabilities, the program uses the circuit structure to compute, first, the probabilities of the circuit's nodes being '0' or '1', and second, the probability of each PMOS transistor being on with its channel under stress (a negative VGS and an electric field across the gate dielectric). Using formulas from the literature that model the transistor threshold voltage (Vth) under NBTI-induced aging, the program computes, for each aging year or time step considered, the Vth shift of each PMOS transistor from the previously defined probabilities and operating conditions, obtaining a new Vth for each transistor (the likely values for the aged transistors). The program then instantiates the HSPICE simulator to simulate the circuit's logic gates with a description containing the computed Vth values; this yields the delay of each gate for each aging year considered and, from those, a prediction of the aging of each combinational path in the circuit. Although the aging prediction is subjective, since it depends on a predicted operating profile, different operating probabilities can be defined to establish likely bounds for the aging of each path. With a tool that predicts how the circuit will age, the circuit structure can be modified to introduce the delay faults produced by NBTI aging over the years of operation (modeled through the Vth of the PMOS transistors).

    Chapter 5 then shows that multi-VDD tests in circuits with BIST infrastructures can detect gross delay faults during in-field operation, in some cases identifying aging-induced variations on short paths, so these tests can be used as an aging sensor methodology during the circuits' lifetime. A discrete set of multi-VDD BIST sessions generates a Voltage Signature Collection (VSC), and the presence of a delay fault (or a physical defect) modifies the VSC, providing the aging sensor behavior. Concretely, the supply voltage is lowered within a given range, and the circuit is submitted to successive BIST sessions at each voltage value until it returns a signature different from the expected one. This simulation procedure is carried out for ages of up to 20 years, not necessarily in one-year increments. In reality, circuits in their first years of life do not, statistically, age enough for this effect to cause failures; the failures that can accelerate the aging process are related to significant manufacturing defects that are nevertheless not enough to make the circuit fail at the beginning of its life cycle, becoming effective only after some time in use. The proposed DfT methods and tools are demonstrated with extensive VHDL and SPICE simulations using benchmark circuits.
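    The multi-VDD sweep at the heart of the second objective reduces to a small loop, sketched below in Python under assumed infrastructure: `run_bist_session()` stands in for a real BIST run (in silicon or SPICE), and the golden signatures are hypothetical values.

```python
# Illustrative sketch of the multi-VDD BIST session sweep described
# above: run a BIST session at each supply voltage, record the MISR
# signature, and find the highest voltage whose signature deviates
# from the expected one. run_bist_session() and the golden signatures
# are hypothetical stand-ins for silicon/SPICE infrastructure.

def collect_voltage_signatures(vdd_values, run_bist_session):
    # One BIST session per VDD value -> the Voltage Signature Collection.
    return {vdd: run_bist_session(vdd) for vdd in vdd_values}

def first_failing_vdd(vsc, golden):
    # Sweep from highest to lowest VDD; the first voltage whose
    # signature deviates from the expected one marks the onset of a
    # (gross) delay fault -- the aging-sensor reading.
    for vdd in sorted(vsc, reverse=True):
        if vsc[vdd] != golden[vdd]:
            return vdd
    return None

if __name__ == "__main__":
    golden = {1.2: 0xBEEF, 1.1: 0xBEEF, 1.0: 0xBEEF, 0.9: 0xBEEF}
    # A slow (aged) path starts failing once VDD drops to 1.0 V:
    aged = lambda vdd: 0xBEEF if vdd > 1.0 else 0xDEAD
    vsc = collect_voltage_signatures(golden, aged)
    print(first_failing_vdd(vsc, golden))   # -> 1.0
```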

    Innovative Techniques for Testing and Diagnosing SoCs

    We rely upon the continued functioning of many electronic devices for our everyday welfare; they usually embed integrated circuits that keep getting cheaper and smaller with improved features. Nowadays, microelectronics can integrate a working computer with CPU, memories, and even GPUs on a single die, namely a System-on-Chip (SoC). SoCs are also employed in automotive safety-critical applications, where they need to be tested thoroughly to comply with reliability standards, in particular the ISO 26262 functional safety standard for road vehicles. The goal of this PhD thesis is to improve SoC reliability by proposing innovative techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals, and GPUs. The approaches proposed in this thesis, in the order in which they appear, are as follows:

    1. Embedded Memory Diagnosis: Memories are dense and complex circuits that are susceptible to design and manufacturing errors, so it is important to understand fault occurrence in the memory array. In practice, the logical and physical array representations differ because of design optimizations, notably scrambling. This part proposes an accurate memory diagnosis, presenting a software tool able to analyze test results, unscramble the memory array, map failing syndromes to cell locations, elaborate cumulative analyses, and formulate a final fault model hypothesis. Several SRAM failing syndromes, gathered on an industrial automotive 32-bit SoC developed by STMicroelectronics, were analyzed as case studies. The tool displayed the defects virtually, and the results were confirmed by photographs taken through a microscope.

    2. Functional Test Pattern Generation: The key to a successful test is the pattern applied to the device. Patterns can be structural or functional; the former usually rely on embedded test modules targeting manufacturing errors and are only effective before shipping the component to the client, while the latter can be applied in mission mode with minimal impact on performance but are penalized by high generation time. Functional test patterns can, however, serve different goals in functional mission mode. Part III of this PhD thesis proposes three functional test pattern generation methods for CPU cores embedded in SoCs, targeting different test purposes:
    a. Functional stress patterns, suitable for maximizing functional stress during operational-life tests and burn-in screening, for an optimal device reliability characterization;
    b. Functional power-hungry patterns, suitable for determining functional peak power, used to strictly limit the power of structural patterns during manufacturing tests, thus reducing premature device overkill while delivering high test coverage;
    c. Software-Based Self-Test (SBST) patterns, which combine the potential of structural patterns with functional ones and can be executed periodically in mission mode. In addition, an external hardware module communicating with the devised SBST was proposed; it increases fault coverage by 3% by testing critical hardly functionally testable faults not covered by conventional SBST patterns.
    An automatic functional test pattern generation framework exploiting an evolutionary algorithm that maximizes metrics related to stress, power, and fault coverage was employed in the above approaches to quickly generate the desired patterns. The approaches were evaluated on two industrial cases developed by STMicroelectronics: an 8051-based SoC and a 32-bit Power Architecture SoC. Results show that generation time was reduced by up to 75% compared with older methodologies while significantly increasing the desired metrics.

    3. Fault Injection in GPGPUs: Fault injection mechanisms in semiconductor devices are suitable for generating structural patterns, testing and activating mitigation techniques, and validating robust hardware and software applications. GPGPUs are known for fast parallel computation and are used in high-performance computing and advanced driver assistance, where reliability is the key point. Moreover, GPGPU manufacturers do not release design description code, to protect content secrecy, so commercial fault injectors relying on a GPGPU model are unfeasible, leaving radiation tests as the only resource available, and they are costly. In the last part of this thesis, we propose a software-implemented fault injector able to inject bit-flips into the memory elements of a real GPGPU. It exploits a software debugger tool and uses the C-CUDA grammar to choose fault spots and apply bit-flip operations to program variables. The goal is to validate robust parallel algorithms by studying fault propagation or activating the redundancy mechanisms they may embed. The effectiveness of the tool was evaluated on two robust applications: redundant parallel matrix multiplication and a floating-point Fast Fourier Transform.
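    The core operation of such a software-implemented injector is a single bit-flip in a stored value. A minimal, self-contained Python sketch of that operation (the thesis performs it through a debugger attached to a real GPGPU; this shows only the arithmetic):

```python
# Minimal sketch of the core operation of a software-implemented fault
# injector: flip one chosen bit of a program variable and let the
# application continue, to study fault propagation or exercise
# redundancy mechanisms. Shown here for integers and IEEE-754 doubles.

import struct

def flip_bit_int(value: int, bit: int) -> int:
    # XOR with a one-hot mask toggles exactly the selected bit.
    return value ^ (1 << bit)

def flip_bit_float(value: float, bit: int) -> float:
    # Reinterpret the float as its 64-bit pattern, flip, reinterpret back.
    (pattern,) = struct.unpack("<Q", struct.pack("<d", value))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", pattern ^ (1 << bit)))
    return flipped

if __name__ == "__main__":
    print(flip_bit_int(5, 1))        # 5 ^ 0b10 = 7
    print(flip_bit_float(1.0, 52))   # exponent LSB flip: 1.0 -> 0.5
```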

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. Due to the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature size, power density grows geometrically with technology scaling. Additionally, power dissipation inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than its power dissipation during the normal functional phase of operation. As a result, the currents that flow in the power grid during the testing phase are much higher than what the power grid is designed for (the functional phase of operation), and during at-speed testing the supply grid experiences unacceptable IR-drop, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during the functional phase of operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable.

    In the nanometer regime, process parameter variations have become a major problem. Due to the variation in signalling delays caused by these variations, it is important to perform at-speed testing even for stuck faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, previously addressed in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent in the context of at-speed testing of stuck faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck faults is concerned, while some techniques have been proposed for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), none exist for sequential circuits. This thesis addresses this open problem.

    We propose algorithms for minimizing peak switching activity during at-speed testing of stuck faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to BTSP is novel and proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don't cares in the test set increases. As a result, test vector ordering for an arbitrary filling of don't-care bits is insufficient to produce an effective reduction in switching activity during testing of large circuits. Since don't cares dominate the test sets of larger circuits, don't-care filling plays a crucial role in reducing switching activity during testing. Taking this into consideration, we propose an algorithm, XStat, which performs test vector ordering while preserving the don't-care bits in the test vectors, after which the don't cares are filled in an intelligent fashion to minimize input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly reduces peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching-activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses dynamic programming to calculate a lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling don't cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching-activity and is the best known in the literature for this problem. The proof of optimality of DP-fill in minimizing peak input-switching-activity is also provided in this thesis.
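    The role of don't-care filling in this problem is easy to illustrate. The sketch below greedily fills each X to match the previous vector, minimizing the Hamming distance (and hence input transitions) between consecutive vectors; it is a simplified stand-in for intuition only, not the thesis's XStat or DP-fill, and it carries no optimality guarantee.

```python
# Simplified sketch of don't-care filling to limit input switching:
# fill each X so the bit matches the previous vector's bit, which
# minimizes the Hamming distance between consecutive test vectors.
# A plain greedy pass, NOT the thesis's DP-fill (which couples a
# dynamic-programming lower bound with the fill to guarantee that the
# peak is optimal).

def fill_against_previous(prev: str, cur: str) -> str:
    return ''.join(p if c == 'X' else c for p, c in zip(prev, cur))

def fill_sequence(vectors: list) -> list:
    filled = [vectors[0].replace('X', '0')]   # arbitrary seed fill
    for v in vectors[1:]:
        filled.append(fill_against_previous(filled[-1], v))
    return filled

def peak_transitions(vectors: list) -> int:
    # Peak number of input bits toggling between consecutive vectors.
    return max(sum(a != b for a, b in zip(u, v))
               for u, v in zip(vectors, vectors[1:]))

if __name__ == "__main__":
    seq = ["1X0X", "X10X", "0XX1"]
    filled = fill_sequence(seq)
    print(filled, peak_transitions(filled))   # ['1000', '1100', '0101'] 2
```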

    High Quality Compact Delay Test Generation

    Delay testing is used to detect timing defects and ensure that a circuit meets its timing specifications. The growing need for delay testing is a result of advances in deep submicron (DSM) semiconductor technology and increasing clock frequencies. Small delay defects that were previously benign now produce delay faults, due to reduced timing margins. This research focuses on the development of new test methods for small delay defects, within the limits of affordable test generation cost and pattern count. First, a new dynamic compaction algorithm is proposed to generate compacted test sets for the K longest paths per gate (KLPG) in combinational circuits or scan-based sequential circuits. This algorithm uses a greedy approach to compact paths with non-conflicting necessary assignments together during test generation. Second, to make this dynamic compaction approach practical for industrial use, a recursive learning algorithm has been implemented to identify more necessary assignments for each path, so that path-to-test-pattern matching using necessary assignments is more accurate. Third, a realistic low-cost fault coverage metric targeting both global and local delay faults has been developed. The metric suggests the test strategy of generating a different number of longest paths for each line in the circuit while maintaining high fault coverage; the number of paths and the type of test depend on the timing slack of the paths under this metric. Experimental results for ISCAS89 benchmark circuits and three industrial circuits show that the pattern count of KLPG can be significantly reduced using the proposed methods. The pattern count is comparable to that of a transition fault test, while achieving higher test quality. Finally, the proposed ATPG methodology has been applied to an industrial quad-core microprocessor; FMAX testing has been done on many devices, and silicon data has shown the benefit of the KLPG test.
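    The greedy compaction idea can be sketched compactly: two test cubes can share one pattern exactly when their necessary assignments never conflict. The Python below shows a first-fit version of that check and merge; the cube representation and merge policy are illustrative, not KLPG/CodGen internals.

```python
# Sketch of the greedy idea behind dynamic compaction: two partially
# specified test cubes can share one pattern iff their necessary
# assignments never conflict (no position where one needs 0 and the
# other needs 1). First-fit merging, for illustration only.

def compatible(c1: str, c2: str) -> bool:
    # Compatible when every position agrees or at least one side is X.
    return all(a == b or 'X' in (a, b) for a, b in zip(c1, c2))

def merge(c1: str, c2: str) -> str:
    # Keep the specified bit wherever either cube specifies one.
    return ''.join(b if a == 'X' else a for a, b in zip(c1, c2))

def compact(cubes: list) -> list:
    patterns = []
    for cube in cubes:                    # greedy first-fit merge
        for i, p in enumerate(patterns):
            if compatible(p, cube):
                patterns[i] = merge(p, cube)
                break
        else:
            patterns.append(cube)
    return patterns

if __name__ == "__main__":
    # Three path tests, two of which fit into one pattern:
    print(compact(["1X0X", "X10X", "0XX1"]))   # -> ['110X', '0XX1']
```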

    Scan Test Coverage Improvement via Automatic Test Pattern Generation (ATPG) Tool Configuration

    Scan test coverage improvement through automatic test pattern generation (ATPG) tool configuration was investigated. Improving test coverage is essential for detecting manufacturing defects in the semiconductor industry, so that high-quality products can be supplied to consumers. The ATPG tool used was Mentor Graphics Tessent TestKompress (version 2014.1). The study was done by setting up experiments that utilized and modified ATPG commands and switches, observing the test coverage improvement in the statistical reports produced during pattern generation, and discussing the results. Modifying the ATPG commands can be expected to yield some improvement in test coverage. The scan test patterns generated were stuck-at test patterns. Based on the experiments, the different coverage readings were compared and the most effective ATPG method and flow were determined. The most optimized flow gave an improvement of 0.91% in test coverage, which is acceptable given that the method requires no design change. The test patterns generated were converted and run on automatic test equipment (ATE) to observe their performance on real silicon. Improving test coverage with the ATPG tool rather than design-based methods is important as a faster workaround for back-end engineers to deliver high-quality test content within short product development schedules.

    A Novel Scheme to Reduce Power Supply Noise for High-Quality At-Speed Scan Testing

    High-quality at-speed scan testing, characterized by a high capability to detect small delay defects, is indispensable for achieving high delay test quality in DSM circuits. However, such testing is susceptible to yield loss due to excessive power supply noise caused by high launch-induced switching activity. This paper addresses this serious problem with a novel and practical post-ATPG X-filling scheme, featuring (1) a test relaxation method, called path keeping X-identification, which finds don't-care bits in a fully-specified transition delay test set while preserving its delay test quality by keeping the longest paths originally sensitized for fault detection, and (2) an X-filling method, called justification-probability-based fill (JP-fill), which is both effective and scalable for reducing launch-induced switching activity. This scheme can be easily implemented in any ATPG flow to effectively reduce power supply noise, without any impact on delay test quality, test data volume, area overhead, or circuit timing. 2007 IEEE International Test Conference, 21-26 October 2007, Santa Clara, CA, USA.
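    For intuition, the launch-induced switching that such X-filling schemes target is often estimated as weighted switching activity (WSA) between the launch and capture states of the scan flip-flops. A tiny sketch under a common convention (each toggling flip-flop weighted by 1 + its fanout; the weighting is our assumption, not necessarily the paper's):

```python
# Back-of-the-envelope estimate of launch-induced switching: the
# weighted switching activity (WSA) between the launch and capture
# states, each toggling flip-flop weighted by 1 + fanout. The weights
# are a common convention, assumed here for illustration.

def wsa(launch: str, capture: str, fanout: list) -> int:
    return sum(1 + f for a, b, f in zip(launch, capture, fanout) if a != b)

if __name__ == "__main__":
    # Bits 1 and 2 toggle: (1 + 1) + (1 + 3) = 6
    print(wsa("1010", "1100", [2, 1, 3, 0]))
```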