15,453 research outputs found

    LOT: Logic Optimization with Testability - new transformations for logic synthesis

    A new approach to optimizing multilevel logic circuits is introduced. Given a multilevel circuit, the synthesis method optimizes its area while simultaneously enhancing its random-pattern testability. The method is based on structural transformations at the gate level. New transformations involving EX-OR gates as well as Reed–Muller expansions have been introduced into the synthesis of multilevel circuits. This method is augmented with transformations that specifically enhance random-pattern testability while reducing area. Testability enhancement is an integral part of our synthesis methodology. Experimental results show that the proposed methodology not only achieves lower area than other similar tools but also better testability than available testability-enhancement tools such as tstfx. Specifically, for the ISCAS-85 benchmark circuits, the EX-OR-gate-based transformations were observed to produce smaller circuits than other state-of-the-art logic optimization tools.
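    To make the Reed–Muller idea concrete, here is a minimal Python sketch (illustrative only, not the paper's algorithm) of the positive-polarity Reed–Muller expansion, which rewrites a truth table as an XOR sum of AND terms, the kind of EX-OR-based form the transformations above exploit.

    # Minimal sketch: positive-polarity Reed-Muller (PPRM) expansion of a
    # Boolean function given as a truth table. Illustrative only.
    def reed_muller_coeffs(truth_table):
        """Fast Moebius (butterfly) transform over GF(2): truth table ->
        PPRM coefficients. c[i] == 1 means the AND of the variables whose
        bits are set in i appears in the XOR-sum-of-products form."""
        c = list(truth_table)
        n = len(c).bit_length() - 1
        for k in range(n):
            for i in range(len(c)):
                if i & (1 << k):
                    c[i] ^= c[i ^ (1 << k)]
        return c

    # Example: OR(a, b) has truth table [0, 1, 1, 1] (bit 0 of the index
    # is a, bit 1 is b) and expands to a XOR b XOR ab, i.e. [0, 1, 1, 1].
    print(reed_muller_coeffs([0, 1, 1, 1]))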

    CTX: A Clock-Gating-Based Test Relaxation and X-Filling Scheme for Reducing Yield Loss Risk in At-Speed Scan Testing

    At-speed scan testing is susceptible to yield loss risk due to power supply noise caused by excessive launch switching activity. This paper proposes a novel two-stage scheme, CTX (Clock-Gating-Based Test Relaxation and X-Filling), for reducing switching activity when the test stimulus is launched. Test relaxation and X-filling are conducted (1) to make as many FFs as possible inactive by disabling the corresponding clock-control signals of the clock-gating circuitry in Stage-1 (Clock-Disabling), and (2) to make as many of the remaining active FFs as possible have equal input and output values in Stage-2 (FF-Silencing). CTX effectively reduces launch switching activity, and thus yield loss risk, even with a small number of don't-care (X) bits, as in test compression, without any impact on test data volume, fault coverage, performance, or circuit design.
    2008 17th Asian Test Symposium (ATS 2008), 24-27 November 2008, Sapporo, Japan
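    As a rough illustration of the two stages, the toy Python sketch below disables fully-X clock-gating groups and then fills the remaining X bits so each still-active FF reloads its own output value. The data model and all names are invented for illustration; the actual CTX algorithms operate on real scan and clock-gating netlists.

    # Toy model: each FF has a test-cube input bit in {'0', '1', 'X'}, a
    # launch-time output value, and a clock-gating group.
    def ctx_fill(ffs, gating_groups):
        # Stage-1 (Clock-Disabling): a group whose FFs all have X input
        # bits can have its clock gated off, so none of them toggle.
        disabled = {g for g, members in gating_groups.items()
                    if all(ffs[m]['in'] == 'X' for m in members)}
        # Stage-2 (FF-Silencing): fill every remaining X with the FF's
        # current output, so the launch loads the same value (no toggle).
        for ff in ffs.values():
            if ff['in'] == 'X' and ff['group'] not in disabled:
                ff['in'] = str(ff['out'])
        return disabled, ffs

    ffs = {'f1': {'in': 'X', 'out': 0, 'group': 'g1'},
           'f2': {'in': 'X', 'out': 1, 'group': 'g1'},
           'f3': {'in': 'X', 'out': 0, 'group': 'g2'},
           'f4': {'in': '1', 'out': 1, 'group': 'g2'}}
    print(ctx_fill(ffs, {'g1': ['f1', 'f2'], 'g2': ['f3', 'f4']}))
    # g1 is clock-disabled outright; f3's X is filled to '0' to silence it.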

    A cell outage management framework for dense heterogeneous networks

    In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes, a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage of a BS in one plane has to be compensated by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of the data and control planes. The COD algorithm for control cells leverages the relatively large number of UEs in a control cell to gather large-scale minimization-of-drive-test (MDT) report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly detection algorithms within the control COD: k-nearest-neighbor- and local-outlier-factor-based detectors. For data cell COD, on the other hand, we propose a heuristic Grey-prediction-based approach that can work with the small number of UEs in a data cell, exploiting the fact that the control BS manages UE-data BS connectivity and receives periodic updates of the reference signal received power (RSRP) statistic between the UEs and the data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm applicable to both planes. The COC solution uses an actor-critic-based reinforcement learning algorithm that optimizes the capacity and coverage of the identified outage zone in a plane by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. Simulation results show that the proposed framework can detect both data and control cell outages and compensate for them reliably.
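    For the control-plane detector, the abstract names k-NN- and LOF-based anomaly detection over MDT reports. The sketch below shows the LOF variant on synthetic data using scikit-learn; the feature layout (RSRP/SINR pairs) and all numbers are invented for illustration, only the LOF technique comes from the text.

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.default_rng(0)
    # Each row is one UE report: (RSRP in dBm, SINR in dB). Healthy reports
    # cluster; an outage appears as a burst of abnormally weak measurements.
    healthy = rng.normal(loc=[-85.0, 15.0], scale=[4.0, 2.0], size=(200, 2))
    outage = rng.normal(loc=[-120.0, -5.0], scale=[3.0, 2.0], size=(10, 2))
    reports = np.vstack([healthy, outage])

    lof = LocalOutlierFactor(n_neighbors=20)
    labels = lof.fit_predict(reports)        # -1 marks an outlier report
    print(f"{(labels == -1).sum()} of {len(reports)} reports flagged")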

    A survey of scan-capture power reduction techniques

    With the advent of sub-nanometer geometries, integrated circuits (ICs) must be checked for newer defects. While scan-based architectures help detect these defects using newer fault models, the volume of test data inflates, increasing test time and test cost. An automatic test pattern generator (ATPG) exercises multiple fault sites simultaneously to reduce test data, which causes elevated switching activity during the capture cycle. This switching activity can result in an IR drop exceeding the specification of the device under test (DUT); excessive IR drop leads to pattern failures and may cause good DUTs to fail the test. The problem is severe during at-speed scan testing, which uses a high-frequency, functional-rated clock for the capture operation. Researchers have proposed several techniques to reduce capture power, most of which work by reducing switching activity. This paper reviews the recently proposed techniques, discussing the principle, algorithm, and architecture used in each, along with key advantages and limitations. In addition, it classifies the techniques by method and application. The goal is to survey the techniques and prepare a platform for future development in capture power reduction during scan testing.
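    As a point of reference for what these techniques minimize, here is a hedged sketch of the underlying metric: capture switching activity, reduced here to the Hamming distance between launch and capture flip-flop states. Real flows weight each toggle, for example by fan-out; this toy metric and its names are illustrative only.

    def capture_toggles(launch_state: str, capture_state: str) -> int:
        """Count flip-flops that change value between the launch and
        capture clock edges; ignores per-node toggle weights."""
        assert len(launch_state) == len(capture_state)
        return sum(a != b for a, b in zip(launch_state, capture_state))

    print(capture_toggles("10110100", "10011100"))  # 2 toggling FFs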

    VLSI Implementation of Deep Neural Network Using Integral Stochastic Computing

    The hardware implementation of deep neural networks (DNNs) has recently received tremendous attention: many applications in fact require high-speed operations that suit a hardware implementation. However, numerous elements and complex interconnections are usually required, leading to a large area occupation and copious power consumption. Stochastic computing has shown promising results for low-power, area-efficient hardware implementations, even though existing stochastic algorithms require long streams that cause long latencies. In this paper, we propose an integer form of stochastic computation and introduce some elementary circuits. We then propose an efficient implementation of a DNN based on integral stochastic computing. The proposed architecture has been implemented on a Virtex-7 FPGA, resulting in 45% and 62% average reductions in area and latency, respectively, compared to the best architecture reported in the literature. We also synthesize the circuits in a 65 nm CMOS technology and show that the proposed integral stochastic architecture yields up to a 21% reduction in energy consumption compared to the binary radix implementation at the same misclassification rate. Due to the fault-tolerant nature of stochastic architectures, we also consider a quasi-synchronous implementation, which yields a 33% reduction in energy consumption w.r.t. the binary radix implementation without any compromise in performance.
    Comment: 11 pages, 12 figures
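    A hedged sketch of the integer stochastic representation the paper builds on (stream length, range m, and all names are illustrative): a value s in [0, m] travels as a stream of small integers, each the sum of m Bernoulli bits, and two streams add exactly by integer addition, avoiding the scaled MUX adder of conventional stochastic computing.

    import random

    def integral_stream(s, m, length):
        """Encode s in [0, m] as `length` integers, each the sum of m
        independent bits that are 1 with probability s / m."""
        p = s / m
        return [sum(random.random() < p for _ in range(m))
                for _ in range(length)]

    def decode(stream):
        return sum(stream) / len(stream)

    random.seed(1)
    a = integral_stream(0.8, m=2, length=4096)   # value 0.8, range 2
    b = integral_stream(1.5, m=2, length=4096)   # value 1.5, range 2
    c = [x + y for x, y in zip(a, b)]            # exact add; range is now 4
    print(round(decode(c), 2))                   # close to 2.3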

    Fault Models for Quantum Mechanical Switching Networks

    The difference between faults and errors is that, unlike faults, errors can be corrected using control codes. In classical test and verification, one develops a test set that separates a correct circuit from a circuit containing any considered fault. Classical faults are modelled at the logical level by fault models that act on classical states. The stuck fault model, in which a faulty lead is thought of as being connected to a power rail or to ground, is the one most typically considered. A classical test set complete for the stuck fault model propagates both binary basis states, 0 and 1, through every node in a network; such a set exercises all circuit nodes, verifies the function of many gates, and is known to detect many physical faults. It is natural to ask whether any of the known classical methods can be adapted to test quantum circuits. Of course, classical fault models do not capture all the logical failures found in quantum circuits, so the first obstacle faced when borrowing methods from classical test is developing a set of realistic quantum-logical fault models. Developing fault models that abstract the test problem away from the device level motivated our study. Several results are established. First, we describe typical modes of failure present in the physical design of quantum circuits. From these we develop fault models for quantum binary circuits that enable testing at the logical level. The application of these fault models is shown by adapting the classical test-set generation technique of constructing a fault table to generate quantum test sets. A test set developed using this method is shown to detect each of the considered faults.
    Comment: (almost) Forgotten rewrite from 200
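    The fault-table technique carried over from classical test is easy to sketch. The toy Python below builds a stuck-at fault table for the two-gate circuit y = (a AND b) OR c and greedily selects a covering test set; the circuit and all names are invented for illustration, not taken from the paper.

    from itertools import product

    def evaluate(inputs, fault=None):
        """Evaluate y = (a AND b) OR c; `fault` is an optional stuck-at
        pair ('net', value) forcing that net to the given value."""
        def val(net, raw):
            return fault[1] if fault and fault[0] == net else raw
        a, b, c = (val(n, inputs[n]) for n in 'abc')
        n1 = val('n1', a & b)
        return val('y', n1 | c)

    faults = [(net, v) for net in ('a', 'b', 'c', 'n1', 'y') for v in (0, 1)]
    patterns = [dict(zip('abc', bits)) for bits in product((0, 1), repeat=3)]

    # Fault table: pattern index -> faults whose output differs from the
    # fault-free circuit under that pattern.
    table = {i: {f for f in faults if evaluate(p, f) != evaluate(p)}
             for i, p in enumerate(patterns)}

    # Greedy cover: repeatedly pick the pattern detecting the most
    # still-undetected faults (every fault in this circuit is detectable).
    uncovered, tests = set(faults), []
    while uncovered:
        best = max(table, key=lambda i: len(table[i] & uncovered))
        tests.append(patterns[best])
        uncovered -= table[best]
    print(tests)   # a small test set covering all single stuck-at faults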