
    Transition Faults and Transition Path Delay Faults: Test Generation, Path Selection, and Built-In Generation of Functional Broadside Tests

    As the clock frequency and complexity of digital integrated circuits increase rapidly, delay testing is indispensable to guarantee the correct timing behavior of the circuits. In this dissertation, we describe methods developed for three aspects of delay testing in scan-based circuits: test generation, path selection, and built-in test generation. We first describe a deterministic broadside test generation procedure for a path delay fault model named the transition path delay fault model, which captures both large and small delay defects. Under this fault model, a path delay fault is detected only if all the individual transition faults along the path are detected by the same test. To reduce the complexity of test generation, sub-procedures with low complexity are applied before a complete branch-and-bound procedure. Next, we describe a method based on static timing analysis to select critical paths for test generation. Logic conditions that are necessary for detecting a path delay fault are considered to refine the accuracy of static timing analysis, using input necessary assignments. Input necessary assignments are input values that must be assigned to detect a fault. The method calculates more accurate path delays, selects paths that are critical during test application, and identifies undetectable path delay faults. These two methods are applicable to off-line test generation. For large circuits with high complexity and frequency, built-in test generation is a cost-effective method for delay testing. For a circuit that is embedded in a larger design, we developed a method for built-in generation of functional broadside tests to avoid excessive power dissipation during test application and the overtesting of delay faults, taking the functional constraints on the primary input sequences of the circuit into consideration. Functional broadside tests are scan-based two-pattern tests for delay faults that create functional operation conditions during test application. To avoid the potential fault coverage loss due to the exclusive use of functional broadside tests, we also developed an optional DFT method based on state holding to improve fault coverage. The developed method achieves high delay fault coverage for benchmark circuits using simple hardware.
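
    A minimal sketch of the detection condition stated above: a transition path delay fault is detected by a two-pattern test only if every individual transition fault along the path is detected by that same test. This is not the dissertation's branch-and-bound procedure; the path descriptions and fault-simulation results below are hypothetical placeholders.

        # Hedged sketch: which tests detect a whole transition path delay fault,
        # given (hypothetical) fault-simulation results for individual transition faults.
        def detected_path_faults(paths, detected_by_test):
            """paths: {path_name: set of transition faults along the path}
            detected_by_test: {test_id: set of transition faults that test detects}
            Returns {path_name: set of tests detecting the path delay fault}."""
            return {name: {t for t, dets in detected_by_test.items() if faults <= dets}
                    for name, faults in paths.items()}

        # Made-up example: path P0 runs through three transition faults.
        paths = {"P0": {"g1 slow-to-rise", "g3 slow-to-fall", "g7 slow-to-rise"}}
        sims = {"t0": {"g1 slow-to-rise", "g3 slow-to-fall"},
                "t1": {"g1 slow-to-rise", "g3 slow-to-fall", "g7 slow-to-rise"}}
        print(detected_path_faults(paths, sims))   # {'P0': {'t1'}}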

    Low-Capture-Power Test Generation for Scan-Based At-Speed Testing

    Scan-based at-speed testing is a key technology to guarantee timing-related test quality in the deep submicron era. However, its applicability is being severely challenged since significant yield loss may occur from circuit malfunction due to excessive IR drop caused by high power dissipation when a test response is captured. This paper addresses this critical problem with a novel low-capture-power X-filling method of assigning 0's and 1's to unspecified (X) bits in a test cube obtained during ATPG. This method reduces the circuit switching activity in capture mode and can be easily incorporated into any test generation flow to achieve capture power reduction without any area, timing, or fault coverage impact. Test vectors generated with this practical method greatly improve the applicability of scan-based at-speed testing by reducing the risk of test yield loss. (IEEE International Conference on Test, 2005, 8 November 2005, Austin, TX, US)
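
    The abstract does not state the filling rule itself, so the following is only a hedged illustration of greedy low-capture-power X-filling: unspecified (X) bits are assigned one at a time, keeping the value that produces fewer launch-to-capture transitions in the scan cells. The toy_capture function stands in for combinational-logic simulation of the circuit under test; it is not the paper's method.

        # Hedged sketch of greedy capture-power-aware X-filling (not the paper's exact algorithm).
        def _complete(cube, default=0):
            # temporarily resolve remaining X bits (None) so the toy circuit can be simulated
            return [default if b is None else b for b in cube]

        def capture_toggles(cube, capture_response):
            full = _complete(cube)
            resp = capture_response(full)
            return sum(1 for a, b in zip(full, resp) if a != b)

        def greedy_x_fill(cube, capture_response):
            cube = list(cube)
            for i, bit in enumerate(cube):
                if bit is None:                              # unspecified (X) bit
                    costs = {}
                    for v in (0, 1):
                        cube[i] = v
                        costs[v] = capture_toggles(cube, capture_response)
                    cube[i] = min(costs, key=costs.get)      # keep the lower-toggle value
            return cube

        # Toy "circuit": each scan cell captures the AND of itself and its right neighbour.
        def toy_capture(state):
            return [state[i] & state[(i + 1) % len(state)] for i in range(len(state))]

        print(greedy_x_fill([1, None, 0, None, 1], toy_capture))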

    DSpot: Test Amplification for Automatic Assessment of Computational Diversity

    Context: Computational diversity, i.e., the presence of a set of programs that all perform compatible services but that exhibit behavioral differences under certain conditions, is essential for fault tolerance and security. Objective: We aim at proposing an approach for automatically assessing the presence of computational diversity. In this work, computationally diverse variants are defined as (i) sharing the same API, (ii) behaving the same according to an input-output based specification (a test suite) and (iii) exhibiting observable differences when they run outside the specified input space. Method: Our technique relies on test amplification. We propose source code transformations on test cases to explore the input domain and systematically sense the observation domain. We quantify computational diversity as the dissimilarity between observations on inputs that are outside the specified domain. Results: We run our experiments on 472 variants of 7 classes from open-source, large, and thoroughly tested Java classes. Our test amplification multiplies the number of input points in the test suite by ten and is effective at detecting software diversity. Conclusion: The key insights of this study are: the systematic exploration of the observable output space of a class provides new insights about its degree of encapsulation; the behavioral diversity that we observe originates from areas of the code that are characterized by their flexibility (caching, checking, formatting, etc.).
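
    DSpot itself amplifies Java test suites; the Python sketch below only mimics the idea on a toy example: perturb the literals of the original inputs to leave the specified input space, run two hypothetical variants that agree on the original tests, and score their dissimilarity on the amplified inputs. All names and values here are invented for illustration.

        import random

        def variant_a(x):                 # two hypothetical variants of the same "service"
            return abs(x) % 10

        def variant_b(x):
            if x >= 100:                  # diverges outside the specified input space
                return 0
            return abs(x) % 10

        specified_inputs = [0, 5, 42, -7]     # inputs exercised by the original test suite

        def amplify(inputs, per_input=50, spread=1000):
            # input-space exploration: perturb each original literal
            amplified = list(inputs)
            for x in inputs:
                amplified += [x + random.randint(-spread, spread) for _ in range(per_input)]
            return amplified

        def dissimilarity(inputs):
            observations = [(variant_a(x), variant_b(x)) for x in inputs]
            return sum(a != b for a, b in observations) / len(observations)

        random.seed(0)
        print("specified inputs:", dissimilarity(specified_inputs))            # 0.0
        print("amplified inputs:", dissimilarity(amplify(specified_inputs)))   # > 0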

    Finding and understanding bugs in C compilers

    Compilers should be correct. To improve the quality of C compilers, we created Csmith, a randomized test-case generation tool, and spent three years using it to find compiler bugs. During this period we reported more than 325 previously unknown bugs to compiler developers. Every compiler we tested was found to crash and also to silently generate wrong code when presented with valid input. In this paper we present our compiler-testing tool and the results of our bug-hunting study. Our first contribution is to advance the state of the art in compiler testing. Unlike previous tools, Csmith generates programs that cover a large subset of C while avoiding the undefined and unspecified behaviors that would destroy its ability to automatically find wrong-code bugs. Our second contribution is a collection of qualitative and quantitative results about the bugs we have found in open-source C compilers.
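
    Wrong-code bugs of the kind described are typically exposed by differential testing: the same generated program is compiled by several compilers (or at several optimization levels) and the printed checksums are compared. The harness below is a simplified, hedged illustration, not the authors' scripts; the Csmith runtime include path is a placeholder, and gcc and clang are assumed to be on PATH.

        import subprocess

        COMPILERS = ["gcc", "clang"]
        OPT_LEVELS = ["-O0", "-O2"]
        SOURCE = "test.c"                                  # a Csmith-generated program

        def build_and_run(cc, opt):
            exe = "./a_{}_{}".format(cc, opt.lstrip("-"))
            cmd = [cc, opt, "-I", "/path/to/csmith/runtime", SOURCE, "-o", exe]  # placeholder include path
            if subprocess.run(cmd).returncode != 0:
                return ("compile-failure", None)
            try:
                run = subprocess.run([exe], capture_output=True, text=True, timeout=10)
            except subprocess.TimeoutExpired:
                return ("timeout", None)
            return ("ok", run.stdout.strip())              # Csmith programs print a checksum

        results = {(cc, opt): build_and_run(cc, opt)
                   for cc in COMPILERS for opt in OPT_LEVELS}
        checksums = {out for status, out in results.values() if status == "ok"}
        if len(checksums) > 1:
            print("checksums disagree -- possible wrong-code bug:", results)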

    Constraint solving over multi-valued logics - application to digital circuits

    Due to usage conditions, hazardous environments or intentional causes, physical and virtual systems are subject to faults in their components, which may affect their overall behaviour. In a ‘black-box’ agent modelled by a set of propositional logic rules, in which just a subset of components is externally visible, such faults may only be recognised by examining some output function of the agent. A (fault-free) model of the agent’s system provides the expected output given some input. If the real output differs from that predicted output, then the system is faulty. However, some faults may only become apparent in the system output when appropriate inputs are given. A number of problems regarding both testing and diagnosis thus arise, such as testing a fault, testing the whole system, finding possible faults and differentiating them to locate the correct one. The corresponding optimisation problems of finding solutions that require minimum resources are also very relevant in industry, as is minimal diagnosis. In this dissertation we use a well-established set of benchmark circuits to address such diagnosis-related problems and propose and develop models with different logics that we formalise and generalise as much as possible. We also prove that all techniques generalise to agents and to multiple faults. The developed multi-valued logics extend the usual Boolean logic (suitable for fault-free models) by encoding values with some dependency (usually on faults). Such logics thus allow modelling an arbitrary number of diagnostic theories. Each problem is subsequently solved with CLP solvers that we implement and discuss, together with a new efficient search technique that we present. We compare our results with other approaches such as SAT (which require substantial duplication of circuits), showing the effectiveness of constraints over multi-valued logics, and also the adequacy of a general set constraint solver (with special inferences over set functions such as cardinality) on other problems. In addition, for an optimisation problem, we integrate local search with a constructive approach (branch-and-bound) using a variety of logics to improve an existing efficient tool based on SAT and ILP.
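
    As a toy illustration of values that depend on a fault hypothesis (in the spirit of multi-valued logics such as the D-calculus), each signal below carries a (good, faulty) pair, and a test for a fault is any input assignment whose output pair differs. This is exhaustive enumeration over a made-up three-input circuit, not the dissertation's CLP models.

        from itertools import product

        def AND(a, b): return (a[0] & b[0], a[1] & b[1])
        def OR(a, b):  return (a[0] | b[0], a[1] | b[1])
        def NOT(a):    return (1 - a[0], 1 - a[1])

        def circuit(x1, x2, x3, fault_site=None, stuck=0):
            # example circuit: y = (x1 AND x2) OR (NOT x3)
            def lit(v, name):
                # a literal is a (good value, faulty value) pair; the fault is injected
                # only into the faulty copy of the named input
                return (v, stuck) if name == fault_site else (v, v)
            return OR(AND(lit(x1, "x1"), lit(x2, "x2")), NOT(lit(x3, "x3")))

        # inputs whose good and faulty outputs differ detect "x1 stuck-at-0"
        tests = [bits for bits in product((0, 1), repeat=3)
                 if circuit(*bits, fault_site="x1", stuck=0)[0]
                    != circuit(*bits, fault_site="x1", stuck=0)[1]]
        print(tests)   # [(1, 1, 1)]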

    A survey of scan-capture power reduction techniques

    With the advent of sub-nanometer geometries, integrated circuits (ICs) are required to be checked for newer defects. While scan-based architectures help detect these defects using newer fault models, test data inflation happens, increasing test time and test cost. An automatic test pattern generator (ATPG) exercises multiple fault sites simultaneously to reduce test data, which causes elevated switching activity during the capture cycle. The switching activity results in an IR drop exceeding the device under test (DUT) specification. An increase in IR drop leads to failure of the patterns and may cause good DUTs to fail the test. The problem is severe during at-speed scan testing, which uses a high-frequency functional-rate clock for the capture operation. Researchers have proposed several techniques to reduce capture power, using various methods including the reduction of switching activity. This paper reviews the recently proposed techniques. The principle, algorithm, and architecture used in them are discussed, along with key advantages and limitations. In addition, it provides a classification of the techniques based on the method used and its application. The goal is to present a survey of the techniques and prepare a platform for future development in capture power reduction during scan testing.
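
    Capture power in this literature is commonly quantified by weighted switching activity (WSA) over the capture cycle; the sketch below shows that metric on placeholder node values and fanout weights, not figures from any surveyed technique.

        def wsa(launch_values, capture_values, fanout):
            """Weighted switching activity: each toggling node contributes 1 + its fanout."""
            return sum(1 + fanout.get(node, 0)
                       for node, v in launch_values.items()
                       if v != capture_values.get(node, v))

        launch  = {"a": 0, "b": 1, "c": 1, "d": 0}      # node values before the capture pulse
        capture = {"a": 1, "b": 1, "c": 0, "d": 0}      # node values after capture
        fanouts = {"a": 2, "b": 1, "c": 3, "d": 1}
        print(wsa(launch, capture, fanouts))            # (1+2) + (1+3) = 7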

    Towards a Packet Classification Benchmark

    Packet classification is the enabling technology for next generation network services and often the primary bottleneck in high-performance routers. Due to the importance and complexity of the problem, a myriad of algorithms and resulting implementations exist. The performance and capacity of many algorithms and classification devices, including TCAMs, depend upon properties of the filter set and query patterns. Unlike microprocessors in the field of computer architecture, there are no standard performance evaluation tools or techniques available to evaluate packet classification algorithms and products. Network service providers are reluctant to distribute copies of real filter databases for security and confidentiality reasons, hence realistic test vectors are a scarce commodity. The small subset of the research community who obtain real databases either limit performance evaluation to the small sample space or employ ad hoc methods of modifying those databases. We present a tool for creating synthetic filter databases that retain characteristics of a seed database and provide systematic mechanisms for varying the number and composition of the filters. We propose a benchmarking methodology based on this tool that provides a mechanism for evaluating packet classification performance on a uniform scale. We seek to initiate a broader discussion within the community that will result in a standard packet classification benchmark.
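
    The authors' tool is not reproduced here; the hedged sketch below only illustrates the basic idea of estimating per-field distributions from a seed filter set and sampling a larger synthetic set with similar characteristics. Real generators also preserve joint structure between fields, which this independent-sampling toy ignores; all seed values are made up.

        import random
        from collections import Counter

        # made-up seed filter set (protocol, destination port range, source prefix length)
        seed_filters = [
            {"proto": "tcp", "dst_port": (80, 80),   "src_prefix_len": 24},
            {"proto": "tcp", "dst_port": (443, 443), "src_prefix_len": 24},
            {"proto": "udp", "dst_port": (0, 65535), "src_prefix_len": 16},
        ]

        def field_distribution(filters, field):
            counts = Counter(f[field] for f in filters)
            total = sum(counts.values())
            return [(value, n / total) for value, n in counts.items()]

        def sample(dist):
            values, weights = zip(*dist)
            return random.choices(values, weights=weights, k=1)[0]

        def synthesize(filters, size):
            # sample each field independently from its empirical seed distribution
            dists = {field: field_distribution(filters, field) for field in filters[0]}
            return [{field: sample(dist) for field, dist in dists.items()}
                    for _ in range(size)]

        random.seed(1)
        for f in synthesize(seed_filters, 5):
            print(f)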

    Putting Teeth into Open Architectures: Infrastructure for Reducing the Need for Retesting

    The Navy is currently implementing the open-architecture framework for developing joint interoperable systems that adapt and exploit open-system design principles and architectures. This raises concerns about how to practically achieve dependability in software-intensive systems with many possible configurations when: 1) the actual configuration of the system is subject to frequent and possibly rapid change, and 2) the environment of typical reusable subsystems is variable and unpredictable. Our preliminary investigations indicate that current methods for achieving dependability in open architectures are insufficient. Conventional methods for testing are suited for stovepipe systems and depend strongly on the assumptions that the environment of a typical system is fixed and known in detail to the quality-assurance team at test and evaluation time. This paper outlines new approaches to quality assurance and testing that are better suited for providing affordable reliability in open architectures, and explains some of the additional technical features that an Open Architecture must have in order to become a Dependable Open Architecture.