
    A combined tree growing technique for block-test scheduling under power constraints

    A tree growing technique is used here together with classical scheduling algorithms to improve test concurrency under assigned power dissipation limits. First, the problem of unequal-length block-test scheduling under power dissipation constraints is modeled as a tree growing problem. Then a combination of list and force-directed scheduling algorithms is adapted to tackle it. The goal of this approach is to rapidly achieve a test scheduling solution with a near-optimal test application time. This is initially achieved with the list approach. The power dissipation distribution of this solution is then balanced using a force-directed global priority function, a distribution-graph based global priority function. A constant additive model is employed for power dissipation analysis and estimation. Based on test scheduling examples, the efficiency of this approach is discussed in comparison with other approaches.
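
    As a rough illustration of the list-scheduling step under a power limit, the sketch below greedily starts the longest ready block tests whenever the summed power of the concurrently running tests stays within the budget (a constant additive power model, as in the abstract). The block names, test lengths and power figures are invented for illustration, and the force-directed balancing phase is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class BlockTest:
    name: str
    length: int    # test application time (e.g. clock cycles)
    power: float   # estimated power dissipation (constant additive model)

def list_schedule(tests, power_limit):
    """Greedy list scheduling under a power constraint: longer tests get
    higher priority, and a test starts as soon as the summed power of all
    concurrently running tests stays within the limit."""
    assert all(t.power <= power_limit for t in tests), "a single test exceeds the budget"
    pending = sorted(tests, key=lambda t: t.length, reverse=True)
    running = []    # (finish_time, test)
    schedule = {}   # test name -> start time
    time = 0
    while pending or running:
        # retire tests that have finished by the current time
        running = [(f, t) for f, t in running if f > time]
        used_power = sum(t.power for _, t in running)
        # start as many pending tests as the power budget allows
        for t in list(pending):
            if used_power + t.power <= power_limit:
                schedule[t.name] = time
                running.append((time + t.length, t))
                used_power += t.power
                pending.remove(t)
        # advance to the next completion time
        if running:
            time = min(f for f, _ in running)
    total_time = max(schedule[t.name] + t.length for t in tests)
    return schedule, total_time

if __name__ == "__main__":
    tests = [BlockTest("RAM", 120, 4.0), BlockTest("ALU", 80, 3.0),
             BlockTest("ROM", 60, 2.5), BlockTest("MUL", 40, 3.5)]
    print(list_schedule(tests, power_limit=7.0))
```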

    Test schedules for VLSI circuits having built-in test hardware

    Numerous built-in test techniques exist for testing structures within a VLSI chip. In general these techniques deal with a repeated application of the following steps: (1) generate a test vector; (2) transmit it to the structure being tested; (3) process the test through the structure; (4) obtain the response from the structure; and (5) process the response. These steps constitute a test schema. Because these steps must be repeated for each test vector, steps in processing one test vector can overlap those used in processing another vector. The manner of overlapping this testing process leads to the concept of a test schedule. In this paper we first present a model for built-in test techniques and for describing test schemas and schedules. We introduce the new concept of an I-path, which is used to transfer data from one place in a circuit to another without modifying the data. Finally, results are presented describing how to create test schedules that minimize the total testing time. Lower bounds on the minimal test time are also derived.
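
    The benefit of overlapping the per-vector steps can be sketched with a simple pipelined timing model: if consecutive vectors may be offset by the duration of the slowest step, total test time drops from n times the schema length to roughly one schema length plus (n - 1) times the slowest step. The step durations below are illustrative assumptions, and this model is only a sketch, not the paper's schedules or lower bounds.

```python
def sequential_time(step_times, n_vectors):
    """Total test time when each vector completes all five schema steps
    before the next vector starts (no overlap)."""
    return n_vectors * sum(step_times)

def pipelined_time(step_times, n_vectors):
    """Total test time when successive vectors are overlapped like a
    pipeline: a new vector is issued every max(step_times) cycles, since
    the slowest step limits the initiation rate."""
    return sum(step_times) + (n_vectors - 1) * max(step_times)

if __name__ == "__main__":
    # illustrative durations for: generate, transmit, apply, capture, process
    steps = [2, 3, 5, 3, 2]
    print(sequential_time(steps, 1000))   # 15000
    print(pipelined_time(steps, 1000))    # 15 + 999 * 5 = 5010
```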

    New techniques for functional testing of microprocessor based systems

    Electronic devices may be affected by failures, for example due to physical defects. These defects may be introduced during the manufacturing process, as well as during the normal operating life of the device due to aging. How to detect all these defects is not a trivial task, especially in complex systems such as processor cores. Nevertheless, safety-critical applications do not tolerate failures; this is why testing such devices is needed to guarantee correct behavior at any time. Moreover, testing is a key parameter for assessing the quality of a manufactured product. Consolidated testing techniques are based on special Design for Testability (DfT) features added to the original design to facilitate test effectiveness. Design, integration, and usage of the available DfT for testing purposes are fully supported by commercial EDA tools, hence approaches based on DfT are the standard solutions adopted by silicon vendors for testing their devices. Tests exploiting the available DfT, such as scan chains, manipulate the internal state of the system in a way that differs from the normal functional mode, passing through configurations that are unreachable during normal operation. Alternative solutions that do not violate the functional mode are defined as functional tests. In microprocessor based systems, functional testing techniques include software-based self-test (SBST), i.e., a piece of software (referred to as a test program) which is uploaded into the system's available memory and executed, with the purpose of exciting a specific part of the system and observing the effects of possible defects affecting it. SBST has been widely studied by the research community for years, but its adoption by industry is quite recent. My research activities have been mainly focused on the industrial perspective of SBST. The problem of providing an effective development flow and guidelines for integrating SBST in the available operating systems has been tackled, and results have been provided on microprocessor based systems for the automotive domain. Remarkably, new algorithms have also been introduced with respect to state-of-the-art approaches, which can be systematically implemented to enrich SBST suites of test programs for modern microprocessor based systems. The proposed development flow and algorithms are currently being employed in real electronic control units for automotive products. Moreover, a special hardware infrastructure purposely embedded in modern devices for interconnecting the numerous on-board instruments has also been part of my research. This solution is known as reconfigurable scan networks (RSNs), and its practical adoption is growing fast as new standards have been created. Test and diagnosis methodologies have been proposed targeting specific RSN features, aimed at checking whether the reconfigurability of such networks has been corrupted by defects and, if so, at identifying the defective elements of the network. The contribution of my work in this field has also been included in the first suite of public-domain benchmark networks.

    Energy-Efficient and Reliable Computing in Dark Silicon Era

    Dark silicon denotes the phenomenon that, due to thermal and power constraints, the fraction of transistors that can operate at full frequency is decreasing in each technology generation. Moore’s law and Dennard scaling complemented each other for five decades, delivering commensurate exponential performance gains via single-core and later multi-core designs. However, recalculating Dennard scaling for recent small technology nodes shows that the current multi-core growth demands exponentially increasing thermal design power to achieve a linear performance increase. This process hits a power wall, which raises the amount of dark or dim silicon on future multi/many-core chips more and more. Furthermore, the increasing number of transistors on a single chip, their susceptibility to internal defects, and aging phenomena, which are exacerbated by high thermal density, make monitoring and managing chip reliability before and after activation a necessity. The proposed approaches and experimental investigations in this thesis focus on two main tracks: 1) power awareness and 2) reliability awareness in the dark silicon era; later these two tracks are combined. In the first track, the main goal is to increase the returns in terms of the most important features in chip design, such as performance and throughput, while the maximum power limit is honored. In fact, we show that by managing power in the presence of dark silicon, all the traditional benefits of proceeding along Moore’s law can still be achieved in the dark silicon era, although to a lesser extent. In the reliability-awareness track, we show that dark silicon can be considered an opportunity to be exploited for different benefits, namely lifetime extension and online testing. We discuss how dark silicon can be exploited to guarantee that the system lifetime stays above a certain target value and, furthermore, how it can be exploited to apply low-cost, non-intrusive online testing on the cores. After the demonstration of power and reliability awareness in the presence of dark silicon, two approaches are discussed as case studies in which power and reliability awareness are combined. The first approach demonstrates how chip reliability can be used as a supplementary metric for power-reliability management, while the second provides a trade-off between workload performance and system reliability by simultaneously honoring the given power budget and target reliability.
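
    A minimal sketch of the power-awareness idea: given a per-core power estimate and a chip-level budget, only a fraction of the cores can be active at once, and rotating which cores are powered in each epoch spreads wear-out across the chip, hinting at how dark silicon can be traded for lifetime. All numbers and the uniform per-core power assumption are illustrative, not taken from the thesis.

```python
def active_core_count(core_power, chip_budget, total_cores):
    """How many cores fit into the chip power budget; the rest stay dark."""
    active = min(total_cores, int(chip_budget // core_power))
    dark_fraction = 1.0 - active / total_cores
    return active, dark_fraction

def rotate_active_set(total_cores, active, epoch):
    """Rotate which cores are powered in each epoch so that wear-out
    (aging) is spread over all cores instead of always stressing the
    same subset -- one simple way to trade dark silicon for lifetime."""
    start = (epoch * active) % total_cores
    return [(start + i) % total_cores for i in range(active)]

if __name__ == "__main__":
    active, dark = active_core_count(core_power=2.5, chip_budget=80.0, total_cores=64)
    print(active, f"{dark:.0%} dark")              # 32 cores active, 50% dark
    print(rotate_active_set(64, active, epoch=1))  # cores 32..63 in the next epoch
```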

    FASTER: Facilitating Analysis and Synthesis Technologies for Effective Reconfiguration

    The FASTER (Facilitating Analysis and Synthesis Technologies for Effective Reconfiguration) EU FP7 project aims to ease the design and implementation of dynamically changing hardware systems. Our motivation stems from the promise reconfigurable systems hold for achieving high performance and extending product functionality and lifetime via the addition of new features that operate at hardware speed. However, designing a changing hardware system is both challenging and time-consuming. FASTER facilitates the use of reconfigurable technology by providing a complete methodology that enables designers to easily specify, analyze, implement and verify applications on platforms with general-purpose processors and acceleration modules implemented in the latest reconfigurable technology. Our tool-chain supports both coarse- and fine-grain FPGA reconfiguration, while during execution a flexible run-time system manages the reconfigurable resources. We target three applications from different domains. We explore the way each application benefits from reconfiguration, and then we assess them and the FASTER tools in terms of performance, area consumption and accuracy of analysis.

    A unified method for assembling global test schedules

    In order to make a register transfer structure testable, it is usually divided into functional blocks that can be tested independently by various test methods. The test patterns are shifted in or generated autonomously at the inputs of each block. The test responses of a block are compacted or observed at its output register. In this paper a unified method for assembling all the single tests into a global schedule is presented. It is compatible with a variety of different test methods. The described scheduling procedures reduce the overall test time and minimize the number of internal registers that have to be made directly observable.
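
    A generic sketch of session-based test assembly (not the specific procedure of the paper): block tests that share no resources are packed into the same session, so the overall test time becomes the sum of the longest test in each session. The block names, lengths and conflict pairs below are invented for illustration.

```python
def assemble_sessions(tests, conflicts):
    """Greedily pack block tests into test sessions.

    `tests` maps a test name to its length; `conflicts` holds frozenset
    pairs of tests that share a resource (for instance a register used as
    pattern source by one test and as response monitor by another) and so
    cannot run concurrently.  Tests in one session run in parallel, so the
    overall test time is the sum of the longest test in each session."""
    sessions = []
    for name in sorted(tests, key=tests.get, reverse=True):  # longest first
        for session in sessions:
            if all(frozenset((name, other)) not in conflicts for other in session):
                session.append(name)
                break
        else:
            sessions.append([name])
    total_time = sum(max(tests[n] for n in s) for s in sessions)
    return sessions, total_time

if __name__ == "__main__":
    tests = {"RAM": 100, "ALU": 70, "ROM": 50, "PLA": 40}
    conflicts = {frozenset(("RAM", "ROM")), frozenset(("ALU", "PLA"))}
    print(assemble_sessions(tests, conflicts))
    # ([['RAM', 'ALU'], ['ROM', 'PLA']], 150)
```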