7 research outputs found

    Test Pattern Generation Using LFSR with Reseeding Scheme for BIST Designs

    ABSTRACT: In this paper we present an LFSR reseeding scheme for BIST. A time-to-market-efficient algorithm is introduced for selecting reseeding points in the test sequence. The algorithm targets complete fault coverage and minimization of the test length. Functional broadside tests avoid over-testing by ensuring that a circuit traverses only reachable states during the functional clock cycles of a test
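The reseeding idea can be sketched in a few lines: run an LFSR normally, but at precomputed points in the test sequence overwrite its state with a new seed. The 4-bit LFSR, its taps, and the reseed point below are illustrative assumptions, not the paper's selection algorithm.

```python
# Minimal sketch of an LFSR test pattern generator with reseeding.
# Taps, width, and the reseed map are invented for illustration.

def lfsr_step(state, taps, width):
    """Advance a Fibonacci LFSR one step; `taps` are the bit
    positions XORed together to form the feedback bit."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << width) - 1)

def generate_patterns(seed, taps, width, length, reseeds):
    """Emit `length` patterns, replacing the LFSR state with a new
    seed at each reseeding point (index -> new seed)."""
    patterns, state = [], seed
    for i in range(length):
        if i in reseeds:          # reseeding point selected offline
            state = reseeds[i]
        patterns.append(state)
        state = lfsr_step(state, taps, width)
    return patterns

pats = generate_patterns(seed=0b1001, taps=(3, 2), width=4,
                         length=8, reseeds={4: 0b0111})
print(pats)  # -> [9, 3, 6, 13, 7, 15, 14, 12]
```

In a real scheme the reseed indices and seed values would be chosen by the paper's algorithm to hit the remaining undetected faults with minimal test length.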

    REALIZATION OF LOW TRANSITION BASED PRPG FOR POWER OPTIMIZED APPLICATIONS

    This paper proposes a low-power pseudo-random test pattern generator, which produces the test patterns used to exercise the circuit under test (CUT) for fault detection. The power consumption of the CUT is determined by the switching activity of its internal logic, which depends on the randomness of the applied stimulus: reduced correlation between successive stimulus vectors greatly increases power consumption. A modified conventional linear feedback shift register is implemented to reduce the power of the CUT, generating the patterns with little additional hardware. The main intention of producing intermediate patterns is to reduce the transition activity at the primary inputs (PIs), which reduces the switching activity inside the CUT, so that power consumption is reduced without large hardware overhead
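The intermediate-pattern idea can be illustrated with a toy model: between two successive LFSR vectors, insert a hybrid pattern that takes half its bits from each neighbour, so no single clock cycle flips more than about half the inputs. The vectors and the half-and-half split below are illustrative assumptions, not the paper's exact generator.

```python
# Sketch of low-transition pattern insertion. Switching activity is
# approximated by the Hamming distance between successive vectors.

def hamming(a, b):
    """Number of bit positions that toggle between two vectors."""
    return bin(a ^ b).count("1")

def insert_intermediate(vectors, width):
    """Between each pair v1, v2 insert a hybrid pattern made of the
    upper half of v1 joined with the lower half of v2."""
    half = width // 2
    lo_mask = (1 << half) - 1
    hi_mask = ((1 << width) - 1) ^ lo_mask
    out = []
    for v1, v2 in zip(vectors, vectors[1:]):
        out.append(v1)
        out.append((v1 & hi_mask) | (v2 & lo_mask))
    out.append(vectors[-1])
    return out

seq = [0b11110000, 0b00001111]          # worst case: 8 bits toggle
low = insert_intermediate(seq, 8)
print(low)                               # -> [240, 255, 15]
print([hamming(a, b) for a, b in zip(low, low[1:])])  # -> [4, 4]
```

The single 8-bit transition is split into two 4-bit transitions, halving the peak switching activity at the PIs at the cost of one extra pattern.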

    Multi-Cycle at Speed Test

    In this research, we focus on the development of an algorithm that generates a minimal number of patterns for path delay test of integrated circuits using the multi-cycle at-speed test. We test the circuits in functional mode, where multiple functional cycles follow the test pattern scan-in operation. This approach increases the delay correlation between the scan and functional tests, due to more functionally realistic power supply noise. We use multiple at-speed cycles to compact K-longest-paths-per-gate tests, which reduces the number of scan patterns. After a path is generated, we try to place it in the first pattern in the pattern pool. If the path does not fit due to conflicts, we attempt to place it in later functional cycles. This compaction approach retains the greedy nature of the original dynamic compaction algorithm: it stops as soon as the path fits into a pattern. If the path cannot be compacted into any functional cycle of any pattern in the pool, we generate a new pattern. In this method, each path delay test is compared to the at-speed patterns in the pool. The challenge is that the at-speed delay test in a given at-speed cycle must have its necessary value assignments set up in previous (preamble) cycles and have the captured results propagated to a scan cell in the later (coda) cycles. For instance, if we consider three at-speed (capture) cycles after the scan-in operation, and we need to place a fault in the first capture cycle, then we must generate it with two propagation cycles. In this case, we consider these propagation cycles as coda cycles, so the algorithm attempts to select the most observable path through them. Likewise, if we place the path test in the second capture cycle, we need one preamble cycle and one coda cycle, and if we place it in the third capture cycle, we require two preamble cycles and no coda cycles
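The greedy placement loop described above can be sketched with a toy conflict model, where each path test is a set of required signal assignments and two tests conflict if they need different values on the same signal in the same cycle. The data model and conflict rule are illustrative assumptions, not the paper's ATPG machinery.

```python
# Sketch of greedy pattern-pool compaction: place each path test
# into the first compatible cycle of the first compatible pattern,
# else open a new pattern.

def place_paths(paths, cycles_per_pattern):
    """Each path is a dict of required signal assignments; a pattern
    holds one assignment slot per functional (capture) cycle."""
    pool = []  # each pattern: list of assignment dicts, one per cycle
    for path in paths:
        placed = False
        for pattern in pool:
            for cyc in range(cycles_per_pattern):
                slot = pattern[cyc]
                # compatible iff no signal needs two different values
                if all(slot.get(s, v) == v for s, v in path.items()):
                    slot.update(path)
                    placed = True
                    break
            if placed:
                break          # greedy: stop at the first fit
        if not placed:
            pattern = [dict() for _ in range(cycles_per_pattern)]
            pattern[0].update(path)
            pool.append(pattern)
    return pool

paths = [{"a": 1}, {"a": 0}, {"b": 1}, {"a": 0, "b": 0}]
pool = place_paths(paths, cycles_per_pattern=3)
print(len(pool))   # all four tests fit into one 3-cycle pattern
```

Conflicting tests that would each need a fresh scan pattern in a single-cycle scheme can share one pattern here by landing in different capture cycles, which is the source of the pattern-count reduction.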

    Observability Driven Path Generation for Delay Test

    This research describes an approach for path generation using an observability metric for delay test. K Longest Paths Per Gate (KLPG) tests are generated for sequential circuits. A transition launched from a scan flip-flop (SFF) is captured into another SFF during at-speed clock cycles, that is, clock cycles at the rated design speed. The generated path is a ‘longest path’ suitable for delay test. The path generation algorithm then uses the observability of the fan-out gates in the consecutive, lower-speed clock cycles, known as coda cycles, to generate paths ending at an SFF that captures the transition from the at-speed cycles. For a given clocking scheme defined by the number of coda cycles, if the final flip-flop is not scan-enabled, the path generation algorithm attempts to generate a different path, ending at an SFF located in a different branch of the circuit fan-out, indicated by lower observability. The paths generated over multiple cycles are sequentially justified using Boolean satisfiability. The observability metric optimizes path generation in the coda cycles by always attempting to grow the path through the branch with the best observability and never generating a path that ends at a non-scan flip-flop. The algorithm has been developed in C++. The experiments have been performed on an Intel Core i7 machine with 64 GB RAM. Various ISCAS benchmark circuits have been used with various KLPG configurations for code evaluation. Combinations of K values [1, 2, 3, 4, 5] and numbers of coda cycles [1, 2, 3] have been used to characterize the implementation. A sublinear rise in run time has been observed with increasing K values. The total number of tested paths rises with K and falls with the number of coda cycles, due to the increasing number of constraints on the path, particularly the fixed inputs
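The coda-cycle branch choice can be sketched on a toy netlist: at each fan-out, grow the path through the branch with the best (lowest-cost) observability, and backtrack if the endpoint is not a scan flip-flop. The netlist, the SCOAP-style observability costs, and the cell names below are all invented for illustration.

```python
# Toy observability-driven path growth with backtracking.

netlist = {                  # gate -> fan-out branches
    "g1": ["g2", "g3"],
    "g2": ["ff_nonscan"],
    "g3": ["g4"],
    "g4": ["ff_scan"],
}
observability = {"g2": 1, "g3": 2, "g4": 1}  # lower = easier to observe
scan_cells = {"ff_scan"}

def grow_path(start):
    """Follow best-observability branches first; reject any path
    whose endpoint is not a scan cell."""
    def dfs(node, path):
        branches = netlist.get(node, [])
        if not branches:                     # endpoint reached
            return path if node in scan_cells else None
        for nxt in sorted(branches, key=lambda n: observability.get(n, 0)):
            result = dfs(nxt, path + [nxt])
            if result:
                return result
        return None
    return dfs(start, [start])

print(grow_path("g1"))   # -> ['g1', 'g3', 'g4', 'ff_scan']
```

The preferred branch g2 dead-ends at a non-scan cell, so the search falls back to the lower-observability branch g3, which still reaches an SFF, mirroring the fallback behavior described in the abstract.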

    Improved Path Recovery in Pseudo Functional Path Delay Test Using Extended Value Algebra

    Scan-based delay test achieves high fault coverage due to its improved controllability and observability. This is particularly important for our K Longest Paths Per Gate (KLPG) test approach, which has additional necessary assignments on the paths. At the same time, some percentage of the flip-flops in the circuit are not scanned, increasing the difficulty of test generation. In particular, there is no direct control over the outputs of those non-scan cells. All non-scan cells that cannot be initialized are considered “uncontrollable” in the test generation process. They behave like “black boxes” and thus may block a potential path propagation, resulting in path delay test coverage loss. It is common for the timing-critical paths in a circuit to pass through nodes influenced by the non-scan cells. In our work, we have extended traditional Boolean algebra by including the “uncontrolled” state as a legal logic state, so that we can improve path coverage. Many path-pruning decisions can be made much earlier, and many of the paths lost to uncontrollable non-scan cells can be recovered, increasing path coverage and potentially reducing average CPU time per path. We have extended the existing traditional algebra to an 11-value algebra: Zero (stable), One (stable), Unknown, Uncontrollable, Rise, Fall, Zero/Uncontrollable, One/Uncontrollable, Unknown/Uncontrollable, Rise/Uncontrollable, and Fall/Uncontrollable. The logic descriptions for the NOT, AND, NAND, OR, NOR, XOR, XNOR, PI, Buff, Mux, TSL, TSH, TSLI, TSHI, TIE1 and TIE0 cells in the ISCAS89 benchmark circuits have been extended to the 11-value truth table. With 10% non-scan flip-flops, improved path delay fault coverage has been observed in comparison to the traditional algebra. The greater the number of long paths we want to test, the greater the path recovery advantage we achieve using our algebra.
Along with improved path recovery, we have been able to test a greater number of transition fault sites. In most cases, the average CPU time per path is also lower with the 11-value algebra. The number of tested paths increased by an average of 1.9x for robust tests and 2.2x for non-robust tests, for K=5 (the five longest rising and five longest falling transition paths through each line in the circuit), using the 11-value algebra in contrast to the traditional algebra. The transition fault coverage increased by an average of 70%. The improvement grew with higher K values. The CPU time using the extended algebra increased by an average of 20%, so the CPU time per path decreased by an average of 40%. In future work, the extended algebra can achieve better test coverage for memory-intensive circuits, circuits with logic black boxes, third-party IPs, and analog units
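The flavour of the extended algebra can be shown for a 2-input AND gate over a subset of the 11 values: 0, 1, X (unknown), U (uncontrollable), and the combined value 1/U. The string encoding and this particular subset are illustrative assumptions; the paper's full truth tables cover all 11 values and 16 cell types.

```python
# Sketch of an AND gate over a subset of the 11-value algebra.
# A controlling 0 dominates, so a path through the other input can
# be pruned early even when that input is uncontrollable.

def and11(a, b):
    """AND over {'0', '1', 'X', 'U', '1U'} ('1U' = one-or-uncontrollable)."""
    if a == "0" or b == "0":
        return "0"            # controlling value: safe early pruning
    if "X" in (a, b):
        return "X"            # unknown dominates the remaining cases
    if a == "1" and b == "1":
        return "1"
    vals = {a, b}
    # 1 AND 1U stays 'one or uncontrollable'; any other U mix is U
    return "1U" if vals <= {"1", "1U"} else "U"

print(and11("0", "U"))    # -> 0   (decision possible despite U input)
print(and11("1", "U"))    # -> U   (output tracks the uncontrollable cell)
print(and11("1", "1U"))   # -> 1U
```

The first call is the payoff: with plain Boolean algebra the U input would be a black box, whereas here the controlling 0 resolves the gate immediately and the pruning decision is made earlier.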

    Constraint extraction for pseudo-functional scan-based delay testing

    Recent research results have shown that traditional structural testing for delay and crosstalk faults may result in over-testing, due to the non-trivial number of such faults that are untestable in the functional mode but testable in the test mode. This paper presents a pseudo-functional test methodology that attempts to minimize the over-testing problem of scan-based circuits for delay faults. The first pattern of a two-pattern test is still delivered by scan in the test mode, but the pattern is generated such that it does not violate the functional constraints extracted from the functional logic. In this paper, we use a SAT solver to extract a set of functional constraints consisting of illegal states and internal signal correlations. Along with the functional-justification (also called broadside) test application scheme, the functional constraints are imposed on a commercial delay-fault ATPG tool to generate pseudo-functional delay tests. The experimental results indicate that the percentage of untestable delay faults is non-trivial for many circuits, supporting the hypothesis of the over-testing problem in delay testing. The results also indicate the effectiveness of the proposed constraint extraction method.
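The paper extracts illegal states with a SAT solver; as a tiny stand-in, the sketch below enumerates the reachable states of a toy two-flip-flop next-state function and reports the unreachable (illegal) states that a functionally valid scan-in pattern must avoid. The next-state function is invented for illustration.

```python
# Illegal-state extraction by brute-force reachability on a toy
# sequential circuit (2 flip-flops, 1 primary input). Real circuits
# need a SAT-based formulation, as in the paper.

def next_state(s, x):
    """Toy next-state function: q1 copies q0; q0 latches high once
    q0 or the input x is 1."""
    q1, q0 = s
    return (q0, q0 | x)

def reachable(reset_state):
    """All states reachable from reset under any input sequence."""
    seen, frontier = set(), [reset_state]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        for x in (0, 1):
            frontier.append(next_state(s, x))
    return seen

all_states = {(a, b) for a in (0, 1) for b in (0, 1)}
illegal = all_states - reachable((0, 0))
print(sorted(illegal))   # -> [(1, 0)]: q1=1 with q0=0 is unreachable
```

Scanning in the state (1, 0) would therefore exercise the circuit in a mode it can never reach functionally; excluding such states from ATPG is exactly what keeps the pseudo-functional test from over-testing.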

    Méthodologie de vérification automatique basée sur l'utilisation des tests structurels de transition avec insertion de registres à balayage

    Over the past decades, the evolution of technology has continually introduced new challenges in the verification of integrated circuits (ICs). Industry estimates that functional verification takes about 50% to 70% of a project's total effort, and despite the budgets and effort invested in verification, the results obtained are not satisfactory. Simulation-based verification, also called dynamic verification, is the most widely used technique in functional verification. However, this type of verification has clearly failed to keep pace with the growth in design complexity. Innovative solutions are therefore required, given the competition on products and services and the relentless pressure of time-to-market. Several techniques have been developed to overcome the challenges of dynamic verification, ranging from fully manual techniques to more advanced ones. Manual and semi-manual techniques cannot be used for complex designs, and the more advanced approaches commonly used in industry require special skills and considerable effort to achieve good verification productivity. On the test side, in contrast, the use of approaches based on fault models and on design-for-test (DFT) concepts has led to the development of efficient automatic test pattern generation (ATPG) tools. The resulting test infrastructure has greatly helped the test community solve many problems. In this thesis, we focus primarily on the productivity of the verification process, in particular the verification of sequential circuits.
We propose a new methodology that explores the combination of test and verification, specifically the use of structural transition tests in the simulation-based RTL verification process. This methodology aims to reduce the time and effort required to verify a circuit and to improve the resulting coverage, yielding significant improvements in verification quality and productivity. The basis of the proposed methodology is the intuition (which became an observation) that what is hard to test (a "hard fault") is probably hard to verify (a "dark corner"). The goal is to leverage efficient test tools such as ATPG tools, and DFT techniques such as scan register insertion, to efficiently simulate the design's functionality with minimal time and effort. Based on these concepts, we developed an automated RTL verification environment composed of three core tools: 1) a constraint extractor that identifies the design's functional constraints, 2) a testbench generator, and 3) an error detector based on high observability. Experimental results show the effectiveness of the proposed verification method. The code and error coverage obtained by simulating with the proposed environment are equal to, and most often higher than, those obtained with other known verification approaches. In addition to the coverage improvements, there is a remarkable reduction in the effort and time required to verify the designs