
    Cost Model-Driven Test Resource Partitioning for SoCs

    The increasing complexity of modern SoCs and rising quality expectations are making the cost of test a significant fraction of the manufacturing cost. The main factors contributing to the cost of test are the required number of tester pins, the test application time, the tester memory requirements, and the area overhead of the test resources. These factors contribute with different weights, depending on the cost model of each product. Several methods have been proposed to optimize each of these factors; however, none of them allows an objective function derived from the actual cost model of each product. In this paper, we propose a cost model-driven test resource allocation and scheduling method that minimizes the cost of test.
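    A minimal sketch of what such a cost-model-driven objective could look like (the data structures, weights, and selection loop below are illustrative assumptions, not the authors' formulation): each candidate resource allocation is scored by a weighted sum of the four cost factors named in the abstract, with the weights supplied by the product's cost model.

```python
# Hypothetical sketch: scoring candidate test-resource allocations against a
# product-specific cost model (all names and weights are illustrative).
from dataclasses import dataclass

@dataclass
class Allocation:
    tester_pins: int          # ATE pins required
    test_time_ms: float       # test application time
    tester_mem_bits: int      # ATE vector memory needed
    area_overhead_um2: float  # silicon area of the test resources

@dataclass
class CostModel:
    # Weights translating each factor into a monetary cost per unit.
    cost_per_pin: float
    cost_per_ms: float
    cost_per_bit: float
    cost_per_um2: float

def test_cost(a: Allocation, m: CostModel) -> float:
    """Objective function derived from the product's cost model."""
    return (m.cost_per_pin * a.tester_pins
            + m.cost_per_ms * a.test_time_ms
            + m.cost_per_bit * a.tester_mem_bits
            + m.cost_per_um2 * a.area_overhead_um2)

def pick_best(candidates: list[Allocation], m: CostModel) -> Allocation:
    # Exhaustive selection over enumerated candidates; the paper couples this
    # kind of objective with partitioning and scheduling, not brute force.
    return min(candidates, key=lambda a: test_cost(a, m))
```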

    AN IMPLEMENTATION THAT FACILITATES ANTICIPATORY TEST FORECAST FOR IM-CHIPS

    Large industrial multicore system-on-chip (SoC) designs with embedded deterministic test-based test data compression pose significant challenges to the channel management scheme, flow, and tools. This paper presents several techniques used to resolve problems that surface when applying scan bandwidth management to such designs, and introduces several test logic architectures that facilitate preemptive test scheduling for these circuits. The same solutions allow efficient handling of physical constraints in realistic applications. Finally, state-of-the-art SoC test scheduling algorithms are enhanced accordingly by making provisions for: 1) establishing time-effective test configurations; 2) optimization of SoC pin partitions; 3) allocation of core-level channels according to scan data volume; and 4) more flexible core-wise use of automatic test equipment channel resources. An in-depth case study is presented herein, with a number of experiments allowing the reader to explore trade-offs between different architectures and test-related factors.
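    One of these ingredients, allocating core-level channels according to scan data volume, can be illustrated with a small sketch (the proportional rule, names, and figures are assumptions for illustration, not the paper's algorithm):

```python
# Illustrative sketch: allocate ATE channels to cores in proportion to their
# scan data volume (SDV), then estimate the resulting per-session scan time.

def allocate_channels(sdv_per_core: dict[str, int], total_channels: int,
                      min_per_core: int = 1) -> dict[str, int]:
    total_sdv = sum(sdv_per_core.values())
    alloc = {c: max(min_per_core, round(total_channels * sdv / total_sdv))
             for c, sdv in sdv_per_core.items()}
    # Trim if rounding overshoots the available chip-level pins.
    while sum(alloc.values()) > total_channels:
        widest = max(alloc, key=lambda c: alloc[c])
        alloc[widest] -= 1
    return alloc

def scan_time(sdv_per_core, alloc):
    # Cores are tested concurrently: the session lasts as long as the slowest core.
    return max(sdv / alloc[c] for c, sdv in sdv_per_core.items())

if __name__ == "__main__":
    sdv = {"coreA": 6_000_000, "coreB": 2_000_000, "coreC": 1_000_000}
    ch = allocate_channels(sdv, total_channels=16)
    print(ch, scan_time(sdv, ch))
```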

    The optimal sequence compression

    This paper presents the optimal compression for sequences with undefined values. Suppose a Boolean sequence $v \in V$ of length $N$ has $(N-m)$ undefined and $m$ defined positions. The code length cannot be less than $m$ in the general case; otherwise at least two sequences would share the same code. We present a coding algorithm that generates codes of almost $m$ bits, i.e., almost equal to the lower bound. The paper also presents the decoding circuit. The circuit has low complexity, which depends on the inverse density of defined values $D(v) = \frac{N}{m}$. The decoding circuit includes RAM and random logic and performs sequential decoding. The total RAM size is proportional to $\log(D(v))$, and the number of random logic cells is proportional to $\log\log(D(v)) \cdot (\log\log\log(D(v)))^2$. So the decoding circuit remains small even for sequences with a very low density of defined values. The decoder complexity does not depend on the sequence length at all.
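    The $m$-bit lower bound follows because the $2^m$ possible assignments of the defined bits must all receive distinct codes. A toy illustration, under the simplifying assumption that the decoder already knows which positions are defined, reaches exactly $m$ bits; the paper's decoder is far more economical (RAM proportional to $\log D$), so this is not their construction:

```python
# Toy illustration of the m-bit bound: if the defined-position mask is known
# to the decoder, storing only the m defined bits suffices.

def encode(seq):
    """seq: list of 0, 1, or None (undefined). Returns the m defined bits."""
    return [b for b in seq if b is not None]

def decode(code, defined_positions, length, fill=0):
    """Rebuild a length-N sequence that matches every defined position."""
    out = [fill] * length
    for pos, bit in zip(defined_positions, code):
        out[pos] = bit
    return out

seq = [None, 1, None, None, 0, 1, None, 0]        # N = 8, m = 4
code = encode(seq)                                 # 4 bits: [1, 0, 1, 0]
positions = [i for i, b in enumerate(seq) if b is not None]
assert decode(code, positions, len(seq))[1] == 1   # defined bits preserved
```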

    Optimal Unknown Bit Filtering for Test Response Masking

    This paper presents a new X-masking scheme for response compaction. It filters all X states from the test response, so that no unknown values reach the response compactor. Experimental results show that the scheme adds little control data while maintaining the same observability. (Conference held 4-7 November 2012, New Taipei, Taiwan.)
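    The general idea of X-masking ahead of a space compactor can be sketched as follows; the per-chain, per-cycle mask control shown here is a deliberate simplification, since the paper's point is achieving this filtering with little added control data:

```python
# Generic X-masking sketch: gate off scan-out bits that carry unknown (X)
# values before they reach an XOR space compactor, so no X corrupts the
# compacted output.

X = None  # marker for an unknown response bit

def compact_cycle(response_bits, mask_bits):
    """XOR-compact one scan-out cycle; masked (0) positions contribute nothing."""
    acc = 0
    for bit, keep in zip(response_bits, mask_bits):
        if keep:
            assert bit is not X, "mask must cover every X bit"
            acc ^= bit
        # keep == 0: the bit is blocked by the AND gate before the compactor
    return acc

cycle = [1, X, 0, 1]          # responses from 4 scan chains in one shift cycle
mask  = [1, 0, 1, 1]          # 0 = filter this chain's bit in this cycle
print(compact_cycle(cycle, mask))   # -> 0  (1 ^ 0 ^ 1)
```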

    A TEST-LOGIC SCHEME FOR LARGE SCALE INDUSTRIAL STRATEGY USING CHANNELS

    ATE channel bandwidth management for SoC designs can play a vital role in increasing test data compression without any visible effect on test application time. Many SoC-based test schemes proposed to date use dedicated instrumentation, including test access mechanisms (TAMs) and test wrappers. The assumption is that cores within the SoC are mostly heterogeneous modules, or wrapped testable units, each with its own EDT-based compression logic, which is subsequently interfaced with the ATE through a number of channels. Bandwidth management mitigates the dependence of core channels on the number of available chip-level pins, enables automatic scheduling of tests by making it transparent to the users, and considerably improves test planning at the chip level. This paradigm clearly requires efficient schemes minimizing the overall test application time while taking physical constraints, in particular SoC pin allocations, into account. It appears, however, that the number of test configurations, and therefore the amount of control data one has to deploy and transfer between the ATE and the DSR registers, may visibly impact test scheduling and the resultant test time. Using scan data volume (SDV) figures in designing a DSR is valuable in itself, particularly when all SoC cores have their ATPG patterns ready. Still, the precise pattern count for every core may not always be available at the DSR design stage. The proposed solutions include methods used to deliver control data and test scheduling algorithms minimizing the overall test application time.
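    As a back-of-the-envelope sketch of the trade-off raised above (the linear cost model, names, and figures are assumptions, not the paper's), each additional test configuration requires its control data to be transferred to the DSR registers, so the scan-time savings of finer-grained configurations must outweigh that control-data overhead:

```python
# Hypothetical sketch: total test time as scan cycles plus control-data upload
# cycles, one upload per test configuration. Figures are made up.

def total_test_time(config_scan_cycles, control_bits_per_config,
                    ate_clock_mhz=200.0):
    control_cycles = len(config_scan_cycles) * control_bits_per_config
    return (sum(config_scan_cycles) + control_cycles) / (ate_clock_mhz * 1e6)

# One coarse configuration vs. three tailored ones:
print(total_test_time([9_000_000], control_bits_per_config=4_096))
print(total_test_time([3_500_000, 2_500_000, 2_000_000],
                      control_bits_per_config=4_096))
```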

    Scan Test Coverage Improvement Via Automatic Test Pattern Generation (ATPG) Tool Configuration

    The improvement of scan test coverage through automatic test pattern generation (ATPG) tool configuration was investigated. Improving test coverage is essential for detecting manufacturing defects in the semiconductor industry so that high-quality products can be supplied to consumers. The ATPG tool used was Mentor Graphics Tessent TestKompress (version 2014.1). The study was conducted by setting up a series of experiments that applied and modified ATPG commands and switches, observing the test coverage improvement from the statistical reports produced during pattern generation, and discussing the results. Modifying the ATPG commands can be expected to yield some improvement in test coverage. The scan test patterns generated were stuck-at test patterns. Based on the experiments, the different coverage readings were compared and the most optimized ATPG method and flow were determined. The most optimized flow gave an improvement of 0.91% in test coverage, which is acceptable since the method does not involve a design change. The generated test patterns were converted and run on automatic test equipment (ATE) to observe their performance on real silicon. Improving test coverage with the ATPG tool rather than a design-based method is valuable as a faster workaround for back-end engineers to deliver high-quality test content within short product development schedules.
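    The coverage readings being compared follow the usual ATPG definitions. As a rough sketch (exact fault-class accounting differs between tools, so the formulas and figures below are illustrative assumptions, not the study's data), test coverage excludes untestable faults from the denominator, which is why it sits above fault coverage:

```python
# Sketch of commonly used coverage metrics; numbers are invented purely to
# show an improvement on the ~0.9% scale discussed above.

def fault_coverage(detected, posdet, total, posdet_credit=0.5):
    return 100.0 * (detected + posdet_credit * posdet) / total

def test_coverage(detected, posdet, total, untestable, posdet_credit=0.5):
    # Untestable faults (e.g. unused, tied, blocked) are removed from the
    # denominator, so test coverage >= fault coverage.
    return 100.0 * (detected + posdet_credit * posdet) / (total - untestable)

before = test_coverage(detected=951_000, posdet=400, total=1_000_000,
                       untestable=30_000)
after  = test_coverage(detected=959_800, posdet=400, total=1_000_000,
                       untestable=30_000)
print(f"improvement: {after - before:.2f}%")   # illustrative numbers only
```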

    PROGRAMMABLE GENERATOR PRODUCING VIRTUALLY ARBITRARY TEST PATTERNS

    This paper describes a low-power (LP) generator capable of producing pseudorandom test patterns with desired toggling levels and an improved fault coverage gradient compared with the best-to-date built-in self-test (BIST)-based pseudorandom test pattern generators. It comprises a linear finite state machine driving an appropriate phase shifter, and it comes with a number of features allowing the device to produce binary sequences with preselected toggling (PRESTO) activity. We introduce a method to automatically select several controls of the generator, offering simple and precise tuning. The same scheme is subsequently used to deterministically guide the generator toward test sequences with improved fault-coverage-to-pattern-count ratios. In addition, this paper proposes an LP test compression method that allows shaping the test power envelope in a fully predictable, accurate, and flexible fashion by adapting the PRESTO-based logic BIST (LBIST) infrastructure. The proposed hybrid scheme efficiently combines test compression with LBIST, where both techniques can work synergistically to deliver high-quality tests. Experimental results obtained for industrial designs illustrate the feasibility of the proposed test schemes and are reported herein.
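    A toy sketch of the preselected-toggling idea, under heavy simplification (no ring generator, phase shifter, or deterministic control encoding, and a software RNG standing in for the weighted hold-enable logic): an LFSR feeds hold cells that re-sample only a programmable fraction of the time, so the switching activity driven into the scan chains is tunable.

```python
import random

# Simplified preselected-toggling pattern source: not the PRESTO architecture,
# just an illustration of trading randomness for reduced shift power.

def lfsr_bits(state, taps=(31, 21, 1, 0), width=32):
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
        yield state & 1

def low_toggle_stream(seed, toggle_rate, n, rng_seed=1):
    rng = random.Random(rng_seed)
    src = lfsr_bits(seed)
    held, out = 0, []
    for _ in range(n):
        fresh = next(src)
        if rng.random() < toggle_rate:   # hold cell re-samples the LFSR
            held = fresh
        out.append(held)                 # otherwise the previous value repeats
    return out

def toggling(bits):
    return sum(a != b for a, b in zip(bits, bits[1:])) / (len(bits) - 1)

print(toggling(low_toggle_stream(0xACE1, toggle_rate=0.25, n=10_000)))  # ~0.12
print(toggling(low_toggle_stream(0xACE1, toggle_rate=1.00, n=10_000)))  # ~0.50
```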

    Hybrid Diagnosis Model To Determine Fault Isolation For Scan Chain Failure Analysis On 22nm Fabrication Process

    With the rapid growth of Very Large Scale Integration (VLSI) in complex designs, there is high demand for Design for Testability (DFT). Extensive study has shown that Scan-based testing achieves good test coverage at lower cost and with smaller die area, and it is widely used in the industry. Scan chain fault diagnosis plays an important role because, with the implementation of Scan-based testing, it is reported that 10%-30% of defects in a Scan-based design occur within the Scan chain itself. Currently, there are three main types of stand-alone diagnosis models available: software-based diagnosis, tester-based diagnosis, and hardware-based diagnosis, each with its own disadvantages and limitations. In this project, the author proposes a hybrid Scan chain failure analysis technique that uses the proposed software-based diagnosis to obtain a list of possible failing suspect Scan cells, followed by the proposed tester-based diagnosis to further isolate the fault to a single failing device suspect. This hybrid diagnosis algorithm ensures that Scan chain faults such as stuck-at and transition faults can be root-caused in less time and with low complexity for both solid and marginal failures. Four case studies were successfully carried out to evaluate the proposed hybrid diagnosis algorithm on a 22nm fabrication process Device Under Test (DUT) System-on-Chip (SoC) product, where fault isolation narrowed the failure down to a single failing device suspect in all four case studies, a 100% fault isolation success rate.
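    A simplified sketch of what the software-based stage can look like for an unload-side stuck-at fault (the corruption model, names, and data are illustrative assumptions, not the thesis' algorithm): each candidate cell position is simulated and kept as a suspect only if it reproduces the observed tester response.

```python
# Simplified Scan chain diagnosis sketch: rank candidate Scan cells for a
# stuck-at fault by simulating the unload corruption each candidate would
# cause and matching it against the tester data. Load-path corruption and
# compression are ignored, so this only illustrates the idea.

def simulate_unload(captured, fault_pos, stuck_value):
    """Cell 0 is next to scan-out; bits from cells >= fault_pos must shift
    through the faulty cell and are forced to stuck_value."""
    return [stuck_value if i >= fault_pos else b
            for i, b in enumerate(captured)]

def suspect_cells(captured, observed, stuck_value):
    return [k for k in range(len(captured))
            if simulate_unload(captured, k, stuck_value) == observed]

captured = [1, 0, 1, 1, 0, 1, 0, 1]      # expected (good-machine) capture
observed = [1, 0, 1, 0, 0, 0, 0, 0]      # what the ATE actually unloaded
print(suspect_cells(captured, observed, stuck_value=0))   # -> [3]
```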