
    REDUCING POWER DURING MANUFACTURING TEST USING DIFFERENT ARCHITECTURES

    Power consumption during manufacturing test can be several times higher than in functional mode. Excessive test power can cause IR drop, overheating, and early aging of the chips. This dissertation introduces three architectures that reduce test power both in the general case and in specific scenarios, including field test. In the first architecture, the scan chains are divided into several segments. Each segment has a control bit that enables capture only when the current pattern can detect new faults on that segment; otherwise the segment is disabled to reduce capture power. The control bits are grouped into one or more dedicated control chains. To eliminate the extra pin(s) required to shift data into those control chains, and the significant post-processing the first architecture requires, a second architecture stitches the control bits into the chains they control as embedded enable capture bits (EECBs) placed between the segments. This allows an ATPG software tool to automatically generate the appropriate EECB values for each pattern while maintaining fault coverage, and it also works in the presence of an on-chip decompressor. The third architecture focuses primarily on the self-test of a device in a 3D stacked IC when an FPGA already present in the stack can be programmed as a tester. We show that the energy expended during test is significantly less than would be required using low-power patterns fed by an on-chip decompressor for the same very short scan chains.
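    As a toy illustration of the first architecture's segment-gating idea (not code from the dissertation; all names below are invented), the following Python sketch models scan segments whose capture is gated by a per-segment control bit, counting capture toggles as a rough proxy for capture power.

```python
# Illustrative sketch, assuming a simple segment/control-bit model: only
# segments expected to detect new faults for a pattern perform capture;
# disabled segments keep their contents, saving capture power.

from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    cells: List[int]                 # current scan-cell values
    capture_enable: bool = False     # control bit for this segment

    def capture(self, response: List[int]) -> int:
        """Load the functional response if enabled; return the toggle count."""
        if not self.capture_enable:
            return 0                 # gated-off segment: no capture toggles
        toggles = sum(a != b for a, b in zip(self.cells, response))
        self.cells = list(response)
        return toggles

def apply_pattern(segments: List[Segment], responses: List[List[int]]) -> int:
    """Capture per-segment responses; total toggles approximate capture power."""
    return sum(seg.capture(resp) for seg, resp in zip(segments, responses))

# Example: only segment 0 targets new faults for this pattern.
segs = [Segment([0, 0, 0], capture_enable=True), Segment([1, 1, 1])]
print(apply_pattern(segs, [[1, 0, 1], [0, 0, 0]]))  # second segment is gated off
```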

    FCSCAN: An Efficient Multiscan-based Test Compression Technique for Test Cost Reduction


    Quantifiable Assurance: From IPs to Platforms

    Hardware vulnerabilities are generally considered more difficult to fix than software ones because they persist after fabrication. Thus, it is crucial to assess security and fix vulnerabilities at earlier design phases, such as the Register Transfer Level (RTL) and gate level. The focus of existing security assessment techniques is mainly twofold. First, they check the security of Intellectual Property (IP) blocks separately. Second, they assess security against individual threats, assuming the threats are orthogonal. We argue that IP-level security assessment is not sufficient. Eventually, the IPs are placed in a platform, such as a system-on-chip (SoC), where each IP is surrounded by other IPs connected through glue logic and shared/private buses. Hence, we must develop a methodology to assess platform-level security by considering both the IP-level security and the impact of the additional parameters introduced during platform integration. Another important factor is that the threats are not always orthogonal: improving security against one threat may affect security against others. Hence, to build a secure platform, we must first answer the following questions: What additional parameters are introduced during platform integration? How do we define and characterize the impact of these parameters on security? How do the mitigation techniques for one threat impact others? This paper aims to answer these important questions and proposes techniques for quantifiable assurance by quantitatively estimating and measuring the security of a platform at the pre-silicon stages. We also touch upon the term security optimization and present the challenges for future research directions.
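    The paper's actual metrics are not reproduced here, but a deliberately simplified sketch can show what a quantitative platform-level estimate might look like; the per-IP scores, integration penalties, and cross-threat interaction term below are all assumed for illustration only.

```python
# Hypothetical scoring sketch, not the paper's methodology: combine IP-level
# security estimates with platform-integration penalties and a cross-threat
# interaction term. All numbers and names are invented.

ip_scores = {                # per-IP, per-threat security in [0, 1] (assumed)
    "cpu": {"fault_injection": 0.9, "side_channel": 0.7},
    "aes": {"fault_injection": 0.6, "side_channel": 0.8},
}
integration_penalty = {"fault_injection": 0.05, "side_channel": 0.10}  # glue logic, shared bus
interaction = {("fault_injection", "side_channel"): -0.05}  # mitigating one may weaken the other

def platform_score(threat: str) -> float:
    base = min(s[threat] for s in ip_scores.values())   # weakest-link assumption
    score = base - integration_penalty[threat]
    for (a, b), delta in interaction.items():
        if threat in (a, b):
            score += delta
    return max(0.0, min(1.0, score))

for t in ("fault_injection", "side_channel"):
    print(t, round(platform_score(t), 2))
```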

    Survey of VLSI Test Data Compression Methods

    Test data compression has been an emerging need in the VLSI field, and hence a hot research topic, for the last decade, yet there remains substantial scope for further reducing test data volume. This reduction may be lossy for the output-side test data but must be lossless for the input-side test data. This paper summarizes the different methods applied for lossless compression of input-side test data, from simple code-based methods to combined/hybrid methods. The goal is to survey the current methodologies for test data compression and provide a platform for further development in this avenue.
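    As a concrete instance of the simple code-based methods such a survey starts from, the sketch below run-length encodes the 0-runs of an input-side test stream; the codeword format is simplified for illustration and is not any one published scheme.

```python
# A minimal sketch of one classic code-based idea: encode a fully specified
# input test stream as the lengths of its 0-runs, each terminated by a '1'.

def rle_encode(bits: str) -> list[int]:
    """Encode a bitstream as the lengths of its 0-runs."""
    runs, count = [], 0
    for b in bits:
        if b == "0":
            count += 1
        else:
            runs.append(count)   # run of `count` zeros followed by a one
            count = 0
    if count:
        runs.append(count)       # trailing zeros without a terminating one
    return runs

def rle_decode(runs: list[int], total: int) -> str:
    out = "".join("0" * r + "1" for r in runs)
    return out[:total]           # trim the padding terminator if needed

stream = "0000100000010001"
runs = rle_encode(stream)
assert rle_decode(runs, len(stream)) == stream
print(runs)                      # [4, 6, 3]
```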

    Testing the Trustworthiness of IC Testing: An Oracle-less Attack on IC Camouflaging

    Testing integrated circuits (ICs) is essential to ensure their quality; the test is meant to prevent defective and out-of-spec ICs from entering the supply chain. Testing is conducted by comparing the observed IC outputs with the expected responses for a set of test patterns generated using automatic test pattern generation (ATPG) algorithms. Existing test-pattern generation algorithms aim to achieve higher fault coverage at lower test cost. In an attempt to reduce the size of the test data, these algorithms reveal maximal information about the internal circuit structure: they sensitize internal nets to the outputs as much as possible, unintentionally leaking the secrets embedded in the circuit as well. In this paper, we present HackTest, an attack that extracts secret information from the test data, even when the test data does not explicitly contain the secret. HackTest can break existing intellectual property (IP) protection techniques, such as camouflaging, within two minutes for our benchmarks using only the camouflaged layout and the test data. HackTest applies to all existing camouflaged gate-selection techniques and succeeds even in the presence of state-of-the-art test infrastructure, i.e., test data compression circuits. Our attack shows that IC test data generation algorithms must be reinforced with security. We also discuss potential countermeasures to prevent HackTest.
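    The following toy example (not the actual HackTest algorithm, which works on full layouts) illustrates the underlying principle: when test patterns sensitize a camouflaged gate's inputs and output to observable points, an attacker can rule out every candidate gate function that is inconsistent with the observed responses. The circuit and patterns are invented.

```python
# Toy illustration of the attack principle: the test data alone resolves a
# camouflaged gate by eliminating candidate functions that contradict the
# observed input/output pairs. All values below are invented.

from operator import and_, or_, xor

CANDIDATES = {"AND": and_, "OR": or_, "XOR": xor}

# Suppose the camouflaged gate's inputs and output are observable in the
# test data for these (a, b) -> out pairs taken from the pattern set.
observed = [((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

consistent = [
    name for name, fn in CANDIDATES.items()
    if all(fn(a, b) == out for (a, b), out in observed)
]
print(consistent)   # ['XOR'] -- the test data uniquely resolves the gate
```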

    An innovative two-stage data compression scheme using adaptive block merging technique

    Test data volume has increased enormously owing to the rising on-chip complexity of integrated circuits, which further increases test data transportation time and tester memory requirements, while non-correlated test bits aggravate test power. This paper presents a two-stage block-merging-based test data minimization scheme that reduces test bits, test time, and test power. The test data is partitioned into blocks of fixed size, which are compressed using a two-stage encoding technique. In stage one, successive compatible blocks are merged so that a single representative block is retained. In stage two, the retained block is further encoded based on which of ten different subcases holds between the two sub-blocks formed by splitting it in half. Non-compatible blocks are likewise split into two sub-blocks and, where possible, encoded using fewer bits. A decompression architecture to retrieve the original test data is presented. Simulation results for different ISCAS′89 benchmark circuits demonstrate its effectiveness in achieving better compression.
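    A minimal sketch of the stage-one merging step follows, under assumed details (block size, run encoding) that may differ from the paper's exact scheme: runs of successive compatible blocks, where 'X' marks a don't-care bit, collapse into one representative block plus a run length.

```python
# Stage-one block merging, sketched under assumed conventions: two blocks are
# compatible when they agree on every fully specified bit position; a run of
# compatible blocks is replaced by its merged representative and a count.

def compatible(a: str, b: str) -> bool:
    return all(x == "X" or y == "X" or x == y for x, y in zip(a, b))

def merge(a: str, b: str) -> str:
    return "".join(y if x == "X" else x for x, y in zip(a, b))

def stage_one(blocks: list[str]) -> list[tuple[str, int]]:
    """Return (representative, run_length) pairs for runs of compatible blocks."""
    out = []
    rep, run = blocks[0], 1
    for blk in blocks[1:]:
        if compatible(rep, blk):
            rep, run = merge(rep, blk), run + 1
        else:
            out.append((rep, run))
            rep, run = blk, 1
    out.append((rep, run))
    return out

blocks = ["1XX0", "1X10", "X110", "0000"]   # 4-bit blocks from the test set
print(stage_one(blocks))  # [('1110', 3), ('0000', 1)] -- 3 blocks share one code
```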

    Packet Compression in GPU Architectures

    A graphics processing unit (GPU) supports many operations in parallel by executing them on groups of thread units known as warps, i.e., multiple threads running the same instruction. Each time a miss occurs in the private cache of a Streaming Multiprocessor (SM), the request is migrated over the network to the shared L2 cache and then down to a Memory Controller (MC) to supply the memory block. The interconnect delay becomes a bottleneck due to the large number of requests from different SMs and the many replies from the MCs. Compression can be used to mitigate the performance bottleneck caused by this large volume of data. In this work, I apply various compression algorithms and propose a new compression scheme, Data Segment Matching (DSM). I apply approximation to floating-point elements to improve compressibility and develop a prediction model to identify the number of approximation bits. Evaluations using a cycle-accurate simulator show that applying the proposed scheme to packet compression in the interconnection network improves Instructions per Cycle (IPC) by 12% on average across various benchmarks, with a compressibility of 50% on integer benchmarks and 35% on floating-point benchmarks.
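    The floating-point approximation idea can be sketched as follows (the number of kept mantissa bits is an assumed parameter here, standing in for the thesis's prediction model): zeroing the low mantissa bits makes nearby float32 values bit-identical, so more payload segments match and compress.

```python
# Sketch of mantissa truncation for compressibility: nearby float32 values
# collapse onto one bit pattern once their low mantissa bits are cleared.
# `keep_bits` is an assumed parameter, not the paper's predicted value.

import struct

def truncate_mantissa(x: float, keep_bits: int) -> float:
    """Keep only the top `keep_bits` of the 23-bit float32 mantissa."""
    raw = struct.unpack("<I", struct.pack("<f", x))[0]
    mask = ~((1 << (23 - keep_bits)) - 1) & 0xFFFFFFFF
    return struct.unpack("<f", struct.pack("<I", raw & mask))[0]

payload = [3.14159, 3.14160, 3.14161, 2.71828]
approx = [truncate_mantissa(v, keep_bits=10) for v in payload]
print(approx)                                        # first three now coincide
print(len({struct.pack("<f", v) for v in approx}))   # 2 distinct segments left
```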

    VirtualScan: a new compressed scan technology for test cost reduction

    This work describes the VirtualScan technology for scan test cost reduction. Scan chains in a VirtualScan circuit are split into shorter ones, and the gap between external scan ports and internal scan chains is bridged with a broadcaster and a compactor. Test patterns for a VirtualScan circuit are generated directly by one-pass VirtualScan ATPG, which supports multi-capture clocking and maximum test compaction. In addition, VirtualScan ATPG avoids unknown-value and aliasing effects algorithmically without adding any additional circuitry. The VirtualScan technology has achieved successful tape-outs of industrial chips and has proven to be an efficient, easy-to-implement solution for scan test cost reduction. (2004 International Conference on Test, 26-28 October 2004, Charlotte, NC, USA)
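    The broadcaster/compactor pair can be pictured with a deliberately tiny sketch (the actual VirtualScan networks are more elaborate and not public): two external scan-in ports fan out to four short internal chains, and an XOR tree compacts four chain outputs back onto two scan-out ports.

```python
# Conceptual sketch only, with an assumed spreading function: a broadcaster
# maps few external scan-in pins onto many short internal chains, and an
# XOR-tree compactor maps many chain outputs back onto few scan-out pins.

def broadcaster(ext_in: list[int]) -> list[int]:
    """Fan 2 external scan-in bits out to 4 internal chain inputs."""
    a, b = ext_in
    return [a, b, a ^ b, a]          # one assumed spreading function

def compactor(chain_out: list[int]) -> list[int]:
    """Compact 4 internal chain outputs onto 2 external scan-out bits."""
    c0, c1, c2, c3 = chain_out
    return [c0 ^ c1, c2 ^ c3]        # XOR-tree space compaction

# One shift cycle: external pins drive 4 chains; responses are compacted.
print(broadcaster([1, 0]))           # [1, 0, 1, 1]
print(compactor([1, 0, 1, 1]))       # [1, 0]
```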