Testability considerations for implementing an embedded memory subsystem
There are a number of testability considerations in VLSI design, but test coverage, test time, accuracy of test patterns, and correctness of design information for DFD (design for debug) are the most important ones in designs with embedded memories. The goal of DFT (design-for-test) is to achieve zero defects. When it comes to the memory subsystem in SOCs (systems on chips), many flavors of memory BIST (built-in self-test) can achieve high test coverage within a memory, but often no proper attention is given to the memory interface logic (shadow logic). Functional testing and BIST are the most prevalent tests for this logic, but functional testing is impractical for complicated SOC designs. As a result, industry has widely adopted at-speed scan testing to detect delay-induced defects. Compared with functional testing, scan-based testing for delay faults reduces overall pattern generation complexity and cost by enhancing both the controllability and observability of flip-flops. However, without proper modeling of the memories, Xs (unknown values) propagate from them during scan test. Moreover, when the design has on-chip test compression logic, the number of ATPG patterns increases significantly due to these Xs. In this dissertation, a register-based testing method and X-prevention logic are presented to tackle these problems.
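To illustrate the X problem this targets, here is a minimal sketch assuming a three-valued (0/1/X) signal model; the gate functions and the test-mode forcing gate are illustrative stand-ins, not the actual X-prevention logic from the dissertation.

```python
# Minimal three-valued (0/1/X) simulation sketch: an X from an unmodeled
# memory output corrupts downstream capture unless it is blocked by
# X-prevention logic. All names here are illustrative.

X = "X"  # unknown value

def and3(a, b):
    """Three-valued AND: 0 dominates; otherwise X propagates."""
    if a == 0 or b == 0:
        return 0
    if a == X or b == X:
        return X
    return 1

def or3(a, b):
    """Three-valued OR: 1 dominates; otherwise X propagates."""
    if a == 1 or b == 1:
        return 1
    if a == X or b == X:
        return X
    return 0

def shadow_logic(mem_q, func_in):
    """Memory interface (shadow) logic driving a capture flop."""
    return or3(and3(mem_q, func_in), 0)

# Without X prevention: the memory output is unknown during scan test,
# so the captured value is X and compressed signatures are corrupted.
print(shadow_logic(X, 1))  # -> 'X'

# With X prevention: a test-mode gate forces a known 0 at the memory
# output, so the capture is deterministic.
def x_prevented(mem_q, test_mode):
    return 0 if test_mode else mem_q

print(shadow_logic(x_prevented(X, test_mode=True), 1))  # -> 0
```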
An important design stage for scan-based testing with memory subsystems is the step that creates a gate-level model and verifies against it; the flow needs to provide a robust ATPG netlist model. Most industry-standard CAD tools used to analyze fault coverage and generate test vectors require gate-level models. However, custom embedded memories are typically designed using a transistor-level flow, so an abstraction step is needed to generate the gate-level models, which must be equivalent to the actual (transistor-level) design. The contribution of this research is a framework to verify that the gate-level representation of a custom design is equivalent to its transistor-level design.
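As a flavor of the kind of check such a framework performs, here is a minimal sketch that exhaustively compares two toy combinational models; both models are hypothetical stand-ins, and a real flow would use SAT- or BDD-based equivalence checking on the extracted netlists rather than enumeration.

```python
# Illustrative equivalence check between a gate-level abstraction and a
# behavioral reference standing in for the transistor-level design.
from itertools import product

def gate_level_model(we, d, q_prev):
    """Gate-level abstraction of a 1-bit latch-like memory element."""
    return (we and d) or (not we and q_prev)

def reference_model(we, d, q_prev):
    """Behavioral reference: write when enabled, else hold."""
    return d if we else q_prev

def check_equivalence(model_a, model_b, n_inputs):
    """Exhaustively compare two combinational models over all inputs."""
    for vec in product([False, True], repeat=n_inputs):
        if bool(model_a(*vec)) != bool(model_b(*vec)):
            return False, vec  # counterexample found
    return True, None

ok, cex = check_equivalence(gate_level_model, reference_model, 3)
print("equivalent" if ok else f"mismatch at {cex}")
```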
The pattern count for at-speed testing is much larger than for basic stuck-at fault testing, so reducing test time and data volume is important. In this dissertation, a new scan reordering method is introduced to reduce test data while yielding an optimal routing solution. Finally, drawing on the in-depth understanding of embedded memories and flows developed during the study of custom memory DFT, a custom embedded memory bit mapping method using a symbolic simulator is presented in the last chapter to achieve high yield for memories.
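To give a flavor of what bit mapping means in practice, a minimal sketch follows; the row/column scrambling here (the COL_MUX factor and interleave formula) is a hypothetical example, since the real mapping is exactly what the symbolic simulation of the actual array recovers.

```python
# Sketch of memory bit mapping: translating a failing logical address
# and bit position into a physical (row, column) array location for
# yield analysis. The scrambling below is hypothetical.

COL_MUX = 4  # hypothetical column-multiplexing factor

def logical_to_physical(address, bit):
    row = address // COL_MUX                   # consecutive addresses share a row
    col = bit * COL_MUX + (address % COL_MUX)  # bits interleaved across the mux
    return row, col

# A tester failure-log entry (address, failing bit) maps to coordinates:
print(logical_to_physical(address=13, bit=5))  # -> (3, 21)
```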
Test Slice Difference Technique for Low-Transition Test Data Compression
REDUCING POWER DURING MANUFACTURING TEST USING DIFFERENT ARCHITECTURES
Power during manufacturing test can be several times higher than power consumption in functional mode. Excessive power during test can cause IR drop, over-heating, and early aging of the chips. In this dissertation, three different architectures have been introduced to reduce test power in general cases as well as in certain scenarios, including field test.
In the first architecture, scan chains are divided into several segments. Each segment has a control bit that enables capture only when the current pattern detects new faults in that segment; otherwise, the segment is disabled to reduce capture power. The control bits are grouped together into one or more control chains, as sketched below.
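Here is a toy model of this per-segment capture gating, assuming ATPG can report which faults each pattern would detect in each segment; the fault bookkeeping is illustrative, not the actual flow.

```python
# Toy model of segmented capture: a segment's enable bit is 1 only if
# the pattern detects faults in it that are not yet covered. Disabled
# segments hold state and save capture power.

def control_bits(pattern_detects, detected):
    """One enable bit per segment; update the covered-fault set as we go."""
    bits = []
    for seg_faults in pattern_detects:
        useful = seg_faults - detected  # faults this segment would newly detect
        bits.append(1 if useful else 0)
        detected |= useful
    return bits

detected = set()
# Faults each pattern could detect, per segment (3 segments):
patterns = [
    [{"f1", "f2"}, set(), {"f3"}],
    [{"f1"}, {"f4"}, set()],  # f1 already covered -> segment 0 disabled
]
for p in patterns:
    print(control_bits(p, detected))
# -> [1, 0, 1] then [0, 1, 0]: only segments with new detections capture.
```

In the second architecture described next, these same bits would be stitched into the scan chains themselves as EECBs rather than shifted through separate control chains.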
To address the extra pin(s) required to shift data into the control chain(s) and the significant post-processing required by the first architecture, we explored a second architecture. The second architecture stitches the control bits into the chains they control as EECBs (embedded enable capture bits) in between the segments. This allows an ATPG software tool to automatically generate the appropriate EECB values for each pattern to maintain the fault coverage. This also works in the presence of an on-chip decompressor.
The last architecture focuses primarily on the self-test of a device in a 3D stacked IC when an existing FPGA in the stack can be programmed as a tester. We show that the energy expended during test is significantly less than would be required using low-power patterns fed by an on-chip decompressor for the same very short scan chains.
A Static Time Analysis of 1-bit to 32-page SCA architecture for Logic Test
This research proposes a static timing analysis of the 32-page single-cycle-access (SCA) architecture for logic test. The timing of each and every path of the logic test is analyzed; that is, setup and hold timings are calculated. The architecture also eliminates the peak power consumption problem of conventional shift-based scan chains and reduces activity during shift and capture cycles using a clock-gating technique. This leads to more realistic circuit behavior during at-speed tests and enables the complete test to run at much higher frequencies, equal or close to that of functional mode. It is shown that fewer test cycles are needed compared to other published solutions. The test cycle count per net, based on a simple test pattern generator algorithm without test pattern compression, is below 1 for larger designs and is independent of the design size. The structure also provides additional on-chip debugging signal visibility for each register. The method is backward compatible with full-scan designs, and existing test pattern generators and simulators can be used with a minor enhancement. It is shown how to combine the proposed solution with built-in self-test (BIST) and massively parallel scan chains. The results are observed on a Xilinx XC3s1600e-5fgg48.
GraphMineSuite: Enabling High-Performance and Programmable Graph Mining Algorithms with Set Algebra
We propose GraphMineSuite (GMS): the first benchmarking suite for graph
mining that facilitates evaluating and constructing high-performance graph
mining algorithms. First, GMS comes with a benchmark specification based on
extensive literature review, prescribing representative problems, algorithms,
and datasets. Second, GMS offers a carefully designed software platform for
seamless testing of different fine-grained elements of graph mining algorithms,
such as graph representations or algorithm subroutines. The platform includes
parallel implementations of more than 40 considered baselines, and it
facilitates developing complex and fast mining algorithms. High modularity is
possible by harnessing set algebra operations such as set intersection and
difference, which enables breaking complex graph mining algorithms into simple
building blocks that can be separately experimented with. GMS is supported with
a broad concurrency analysis for portability in performance insights, and a
novel performance metric to assess the throughput of graph mining algorithms,
enabling more insightful evaluation. As use cases, we harness GMS to rapidly
redesign and accelerate state-of-the-art baselines of core graph mining
problems: degeneracy reordering (by up to >2x), maximal clique listing (by up
to >9x), k-clique listing (by 1.1x), and subgraph isomorphism (by up to 2.5x),
also obtaining better theoretical performance bounds.
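As a flavor of the set-algebra decomposition GMS advocates, here is a minimal triangle-counting sketch built from neighborhood intersections; it uses plain Python sets where GMS would let you swap in sorted arrays or bitmaps, and it is not the GMS API itself.

```python
# Triangle counting expressed as set algebra: the intersection of two
# adjacent vertices' neighborhoods yields the vertices closing a
# triangle. The set representation is the swappable building block.

def triangles(adj):
    """adj: dict mapping vertex -> set of neighbors (undirected graph)."""
    count = 0
    for u in adj:
        for v in adj[u]:
            if v > u:  # visit each edge once
                # closing vertices w with w > v, to avoid duplicates
                count += sum(1 for w in adj[u] & adj[v] if w > v)
    return count

# A 4-clique contains exactly 4 triangles:
K4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(triangles(K4))  # -> 4
```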
Book of Abstracts of the Sixth SIAM Workshop on Combinatorial Scientific Computing
Book of Abstracts of CSC14, edited by Bora Uçar. The Sixth SIAM Workshop on Combinatorial Scientific Computing, CSC14, was organized at the Ecole Normale Supérieure de Lyon, France, on 21st to 23rd July, 2014. This two-and-a-half-day event marked the sixth in a series that started ten years ago in San Francisco, USA. The CSC14 workshop's focus was on combinatorial mathematics and algorithms in high-performance computing, broadly interpreted. The workshop featured three invited talks, 27 contributed talks, and eight poster presentations. All three invited talks focused on two interesting fields of research: randomized algorithms for numerical linear algebra and network analysis. The contributed talks and posters targeted modeling, analysis, bisection, clustering, and partitioning of graphs, applied in the context of networks, sparse matrix factorizations, iterative solvers, fast multipole methods, automatic differentiation, high-performance computing, and linear programming. The workshop was held at the premises of the LIP laboratory of ENS Lyon and was generously supported by the LABEX MILYON (ANR-10-LABX-0070, Université de Lyon, within the program ''Investissements d'Avenir'' ANR-11-IDEX-0007 operated by the French National Research Agency), and by SIAM.