16,387 research outputs found

    An Efficient Algorithm to Selectively Gate Scan Cells for Capture Power Reduction

    Get PDF
    Power dissipation in full-scan testing has recently become a major challenge for test engineers. Beyond shift power reduction, excessive switching activity during the capture operation may lead to circuit malfunction and yield loss. This paper proposes a new algorithm that applies a clock gating technique to a subset of the scan cells to prevent the internal circuit from making unnecessary transitions. The scan cells are divided into several exclusive scan groups, and for each test vector only a portion of the groups is activated to store the test response in each capture cycle. The proposed method reduces capture power dissipation without affecting fault coverage or test time. Experimental results on the ISCAS'89 benchmark circuits show capture power reductions of up to 55% over the test sequence.
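    The grouping idea in this abstract lends itself to a small illustration. The sketch below (Python; the names partition_scan_cells and capture_toggles, the group size, and the response values are invented for the example) counts capture-cycle toggles when only selected scan groups receive the capture clock, which is the quantity the proposed method targets. It is not the authors' algorithm for deciding which groups to activate.

    # Illustrative sketch: selective capture-clock gating over exclusive scan groups.
    from typing import List, Set

    def partition_scan_cells(num_cells: int, group_size: int) -> List[List[int]]:
        """Split scan cell indices into exclusive (non-overlapping) groups."""
        return [list(range(i, min(i + group_size, num_cells)))
                for i in range(0, num_cells, group_size)]

    def capture_toggles(before: List[int], after: List[int],
                        active_cells: Set[int]) -> int:
        """Count scan cells that toggle during capture; gated cells hold state."""
        return sum(1 for i, (b, a) in enumerate(zip(before, after))
                   if i in active_cells and b != a)

    # Toy example: 8 scan cells, 4 groups of 2; only groups 0 and 2 capture.
    groups = partition_scan_cells(8, 2)
    active = {c for g in (0, 2) for c in groups[g]}
    shifted_in = [0, 1, 1, 0, 1, 0, 0, 1]   # scan state after shift-in
    response   = [1, 1, 0, 0, 0, 0, 1, 1]   # combinational test response

    ungated = capture_toggles(shifted_in, response, set(range(8)))
    gated   = capture_toggles(shifted_in, response, active)
    print(f"capture toggles: ungated={ungated}, gated={gated}")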

    Timing Measurement Platform for Arbitrary Black-Box Circuits Based on Transition Probability

    No full text

    EffiTest: Efficient Delay Test and Statistical Prediction for Configuring Post-silicon Tunable Buffers

    Full text link
    At nanometer manufacturing technology nodes, process variations significantly affect circuit performance. To combat them, post-silicon clock tuning buffers can be deployed to balance timing budgets of critical paths for each individual chip after manufacturing. The challenge of this method is that path delays should be measured for each chip to configure the tuning buffers properly. Current methods for this delay measurement rely on path-wise frequency stepping. This strategy, however, requires too much time from expensive testers. In this paper, we propose an efficient delay test framework (EffiTest) to solve the post-silicon testing problem by aligning path delays using the already-existing tuning buffers in the circuit. In addition, we only test representative paths and the delays of other paths are estimated by statistical delay prediction. Experimental results demonstrate that the proposed method can reduce the number of frequency stepping iterations by more than 94% with only a slight yield loss.
    Comment: ACM/IEEE Design Automation Conference (DAC), June 201
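    As a rough illustration of replacing path-wise frequency stepping with buffer-based alignment, the sketch below binary-searches the largest tuning-buffer setting at which a path still passes at a fixed test clock and infers the path delay from it. The tester model pass_fail, the step size STEP_PS, and the setting range are assumptions made for this example, not details taken from EffiTest, and the statistical prediction step for non-representative paths is omitted.

    # Illustrative sketch: delay measurement via post-silicon tunable buffer settings.
    STEP_PS = 5  # hypothetical tuning-buffer step size in picoseconds

    def pass_fail(true_delay_ps: float, added_delay_ps: float,
                  clock_ps: float) -> bool:
        """Tester observation: does the path still meet timing with extra delay?"""
        return true_delay_ps + added_delay_ps <= clock_ps

    def measure_delay(true_delay_ps: float, clock_ps: float,
                      max_setting: int = 64) -> float:
        """Binary-search the largest buffer setting that still passes; the path
        delay is then roughly clock_ps minus the added delay at that setting."""
        lo, hi = 0, max_setting
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if pass_fail(true_delay_ps, mid * STEP_PS, clock_ps):
                lo = mid
            else:
                hi = mid - 1
        return clock_ps - lo * STEP_PS  # estimate, quantized to STEP_PS

    print(measure_delay(true_delay_ps=812.0, clock_ps=1000.0))  # ~815 ps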

    Minimizing Test Power in SRAM through Reduction of Pre-charge Activity

    No full text
    In this paper, we analyze the test power of SRAM memories and demonstrate that the full functional pre-charge activity is not necessary during test mode because of the predictable addressing sequence. We exploit this observation to minimize power dissipation during test by eliminating the unnecessary power consumption associated with the pre-charge activity. This is achieved through a modified pre-charge control circuitry, exploiting the first degree of freedom of March tests, which allows choosing a specific addressing sequence. The efficiency of the proposed solution is validated through extensive SPICE simulations.
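    A toy calculation can make the potential saving concrete. The sketch below (the array dimensions and the one-pre-charge-per-access model are illustrative assumptions, not the paper's circuit) compares pre-charge events when every column is pre-charged each cycle, as in functional mode, against pre-charging only the column that the known March addressing order will access next.

    # Illustrative sketch: pre-charge activity under a predictable March addressing order.
    def precharge_events(num_rows: int, num_cols: int, selective: bool) -> int:
        """Count bit-line pre-charge events over one March element
        (one access per address, addresses visited in a known order)."""
        accesses = num_rows * num_cols
        if selective:
            return accesses            # pre-charge only the addressed column
        return accesses * num_cols     # functional mode: pre-charge every column

    rows, cols = 256, 64               # assumed toy array size
    full = precharge_events(rows, cols, selective=False)
    sel  = precharge_events(rows, cols, selective=True)
    print(f"pre-charge events: functional={full}, selective={sel}, "
          f"reduction={100 * (1 - sel / full):.1f}%")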

    Space programs summary no. 37-63, volume 1 for the period 1 March - 30 April 1970. Flight projects

    Get PDF
    Status report for Mariner Mars 1971, Mariner Venus-Mercury 1973, and Viking Orbiter 1975

    Extensible sparse functional arrays with circuit parallelism

    Get PDF
    A longstanding open question in algorithms and data structures is the time and space complexity of pure functional arrays. Imperative arrays provide update and lookup operations that require constant time in the RAM theoretical model, but it is conjectured that there does not exist a RAM algorithm that achieves the same complexity for functional arrays, unless restrictions are placed on the operations. The main result of this paper is an algorithm that does achieve optimal unit time and space complexity for update and lookup on functional arrays. This algorithm does not run on a RAM, but instead it exploits the massive parallelism inherent in digital circuits. The algorithm also provides unit time operations that support storage management, as well as sparse and extensible arrays. The main idea behind the algorithm is to replace a RAM memory by a tree circuit that is more powerful than the RAM yet has the same asymptotic complexity in time (gate delays) and size (number of components). The algorithm uses an array representation that allows elements to be shared between many arrays with only a small constant factor penalty in space and time. This system exemplifies circuit parallelism, which exploits very large numbers of transistors per chip in order to speed up key algorithms. Extensible Sparse Functional Arrays (ESFA) can be used with both functional and imperative programming languages. The system comprises a set of algorithms and a circuit specification, and it has been implemented on a GPGPU with good performance.
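    To make the sharing idea concrete, here is a small persistent-array sketch using path copying over a binary tree: an update rebuilds only one root-to-leaf path and shares the remaining nodes with earlier versions. On a RAM this costs O(log n) per operation; the point of the paper is a tree circuit that performs the equivalent update and lookup in unit time by operating on all tree levels in parallel. The class and function names are illustrative, not the ESFA interface.

    # Illustrative sketch: persistent (functional) array via path copying.
    from typing import Optional

    class Node:
        def __init__(self, left=None, right=None, value=None):
            self.left, self.right, self.value = left, right, value

    def lookup(node: Optional[Node], index: int, depth: int):
        if node is None:
            return None
        if depth == 0:
            return node.value
        bit = (index >> (depth - 1)) & 1
        return lookup(node.right if bit else node.left, index, depth - 1)

    def update(node: Optional[Node], index: int, depth: int, value) -> Node:
        """Return a new version; untouched subtrees are shared, not copied."""
        if depth == 0:
            return Node(value=value)
        node = node or Node()
        bit = (index >> (depth - 1)) & 1
        if bit:
            return Node(node.left, update(node.right, index, depth - 1, value))
        return Node(update(node.left, index, depth - 1, value), node.right)

    DEPTH = 4                          # sparse array over indices 0..15
    v1 = update(None, 3, DEPTH, "a")
    v2 = update(v1, 3, DEPTH, "b")     # new version; v1 is unchanged
    print(lookup(v1, 3, DEPTH), lookup(v2, 3, DEPTH))  # a b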