555 research outputs found

    A programmable BIST architecture for clusters of Multiple-Port SRAMs

    This paper presents a BIST architecture, based on a single microprogrammable BIST processor and a set of memory wrappers, designed to simplify the test of a system containing many distributed multi-port SRAMs of different sizes (number of bits, number of words), access protocols (asynchronous, synchronous), and timing.
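
    As a rough C sketch of the concept (the element encoding and every name below are hypothetical, not the paper's actual microcode format), a march algorithm such as MATS+ can be captured as a short microprogram that one shared BIST processor interprets against each wrapped memory:

        #include <stdint.h>
        #include <stdio.h>

        #define MEM_WORDS 16

        typedef struct { char op; uint8_t val; } mop;        /* 'r' or 'w' */
        typedef struct { int up; int nops; mop ops[2]; } element;

        /* MATS+ encoded as three march elements (hypothetical format):
           {up(w0)}; {up(r0,w1)}; {down(r1,w0)} */
        static const element mats_plus[] = {
            { 1, 1, { {'w', 0} } },
            { 1, 2, { {'r', 0}, {'w', 1} } },
            { 0, 2, { {'r', 1}, {'w', 0} } },
        };

        /* Interpret the microprogram against one memory; a real BIST
           processor would drive the memory's wrapper instead of an array. */
        static int run_march(uint8_t *mem, int words,
                             const element *prog, int nelem)
        {
            int errors = 0;
            for (int e = 0; e < nelem; e++)
                for (int i = 0; i < words; i++) {
                    int a = prog[e].up ? i : words - 1 - i;
                    for (int k = 0; k < prog[e].nops; k++) {
                        const mop *m = &prog[e].ops[k];
                        if (m->op == 'w')            mem[a] = m->val;
                        else if (mem[a] != m->val)   errors++;
                    }
                }
            return errors;
        }

        int main(void)
        {
            uint8_t mem[MEM_WORDS] = {0};
            printf("mismatches: %d\n", run_march(mem, MEM_WORDS, mats_plus, 3));
            return 0;
        }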

    Infrastructures and Algorithms for Testable and Dependable Systems-on-a-Chip

    Every new node of semiconductor technologies provides further miniaturization and higher performance, increasing the number of advanced functions that electronic products can offer. Silicon area is now so cheap that industries can integrate in a single chip, usually referred to as a System-on-Chip (SoC), all the components and functions that historically were placed on a hardware board. Although such advanced functionality benefits users, the manufacturing process is becoming finer and denser, making chips more susceptible to defects. Today's very deep-submicron semiconductor technologies (0.13 micron and below) have reached susceptibility levels that put conventional semiconductor manufacturing at an impasse. Being able to rapidly develop, manufacture, test, diagnose, and verify such complex new chips and products is crucial for the continued success of our economy at large. This trend is expected to continue for at least the next ten years, making possible the design and production of 100-million-transistor chips. To speed up the research, the National Technology Roadmap for Semiconductors identified in 1997 a number of major hurdles to be overcome, some of which are related to test and dependability. Test is one of the most critical tasks in the semiconductor production process, where Integrated Circuits (ICs) are tested several times, from wafer probing to the end of production test. Test is not only necessary to assure fault-free devices; it also plays a key role in analyzing defects in the manufacturing process. This last point is highly relevant, since increasing time-to-market pressure on semiconductor fabrication often forces foundries to start volume production on a given semiconductor technology node before reaching the defect densities, and hence yield levels, traditionally obtained at that stage. The feedback derived from test is the only way to analyze and isolate many of the defects in today's processes and to increase process yield. With the increasing need for high-quality electronic products, at each new physical assembly level, such as board and system assembly, test is used for debugging, diagnosing, and repairing the sub-assemblies in their new environment. Similarly, increasing reliability, availability, and serviceability requirements lead users of high-end products to perform periodic tests in the field throughout the full life cycle. To allow advancement in each of these scaling trends, fundamental changes are expected to emerge in the different disciplines of Integrated Circuit realization, such as IC design, packaging, and silicon processing. These changes have a direct impact on test methods, tools, and equipment: conventional test equipment and methodologies will be inadequate to assure high quality levels. Specialized on-chip blocks dedicated to test, usually referred to as Infrastructure IP (Intellectual Property), need to be developed and included in new complex designs to assure that new chips can be adequately tested, diagnosed, measured, debugged, and sometimes even repaired. In this thesis, the scaling trends in designing new complex SoCs are analyzed one at a time, observing their implications on test and identifying the key hurdles and challenges to be addressed. The remainder of the thesis presents possible solutions. It is not sufficient to address just one of the challenges; all must be met at the same time to fulfill the market requirements.

    An effective distributed BIST architecture for RAMs

    The present paper proposes a solution to the problem of testing a system containing many distributed memories of different sizes. The proposed solution relies on the development of a BIST architecture characterized by a single BIST processor, implemented as a microprogrammable machine able to execute different test algorithms; a wrapper for each SRAM, including standard memory BIST modules; and an interface block to manage communication between the SRAM and the BIST processor. Both area overhead and routing costs are minimized, and a scan-based approach provides full diagnostic capability for any faults detected in the memories under test.
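
    To illustrate the wrapper-plus-processor split and the scan-based diagnosis, here is a minimal C model (entirely hypothetical, not the paper's implementation) in which each wrapper latches its first failing access for later scan-out:

        #include <stdint.h>
        #include <stddef.h>
        #include <stdio.h>

        #define WORDS 8

        /* Hypothetical per-SRAM wrapper: latches the first failing access
           so the BIST processor can later shift it out via the scan chain. */
        typedef struct {
            uint8_t *mem;
            size_t   words;
            int      failed;            /* sticky fail flag */
            size_t   fail_addr;         /* captured diagnostic data */
            uint8_t  expected, observed;
        } wrapper;

        static void read_compare(wrapper *w, size_t addr, uint8_t exp)
        {
            uint8_t got = w->mem[addr];
            if (got != exp && !w->failed) {
                w->failed    = 1;       /* keep only the first failure */
                w->fail_addr = addr;
                w->expected  = exp;
                w->observed  = got;
            }
        }

        int main(void)
        {
            uint8_t ram0[WORDS] = {0}, ram1[WORDS] = {0};
            wrapper ws[2] = { { ram0, WORDS, 0, 0, 0, 0 },
                              { ram1, WORDS, 0, 0, 0, 0 } };
            ram1[3] = 0xFF;                        /* inject a stuck cell */

            for (int m = 0; m < 2; m++)            /* shared "BIST processor" */
                for (size_t a = 0; a < WORDS; a++)
                    read_compare(&ws[m], a, 0);    /* expect background 0 */

            for (int m = 0; m < 2; m++)            /* emulate the scan-out */
                if (ws[m].failed)
                    printf("mem %d: addr %zu exp %02x got %02x\n",
                           m, ws[m].fail_addr, ws[m].expected, ws[m].observed);
            return 0;
        }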

    A Comprehensive Test and Diagnostic Strategy for TCAMs

    Content addressable memories (CAMs) are gaining popularity in computer networks. Testing costs of CAMs are extremely high owing to their unique configuration. In this thesis, a fault analysis is carried out on an industrial ternary CAM (TCAM) design, and search-path test algorithms are designed. The proposed algorithms are able to test the TCAM array, the multiple-match resolver (MMR), and the match address encoder (MAE). The tests represent a 6x decrease in test complexity compared to existing algorithms, while dramatically improving fault coverage.
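
    For readers unfamiliar with TCAM behavior, the following C model sketches what the array, MMR, and MAE jointly compute: every entry matches the key under a per-entry care mask, and the lowest matching address wins. The encoding and names are illustrative, not the thesis's design:

        #include <stdint.h>
        #include <stdio.h>

        #define ENTRIES 4

        /* A TCAM entry: mask bit 1 = care, 0 = don't-care (toy model). */
        typedef struct { uint16_t value, mask; } tcam_entry;

        /* In hardware all entries compare in parallel; the multiple-match
           resolver and match address encoder reduce the hit vector to the
           lowest matching address. Returns -1 on no match. */
        static int tcam_search(const tcam_entry *t, int n, uint16_t key)
        {
            for (int i = 0; i < n; i++)            /* priority order */
                if (((key ^ t[i].value) & t[i].mask) == 0)
                    return i;
            return -1;
        }

        int main(void)
        {
            tcam_entry table[ENTRIES] = {
                { 0x1200, 0xFF00 },   /* matches any 0x12xx */
                { 0x1234, 0xFFFF },   /* exact match, shadowed by entry 0 */
                { 0x0000, 0x0000 },   /* all don't-care: always matches */
                { 0xBEEF, 0xFFFF },
            };
            printf("%d\n", tcam_search(table, ENTRIES, 0x1234)); /* -> 0 */
            printf("%d\n", tcam_search(table, ENTRIES, 0xBEEF)); /* -> 2 */
            return 0;
        }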

    Dependability Assessment of NAND Flash-memory for Mission-critical Applications

    NAND flash memory devices are well established in the consumer market. However, the architectures adopted in the consumer market are not necessarily suitable for mission-critical applications such as space. USB flash drives, digital cameras, and MP3 players typically store "less significant" data that do not change frequently (e.g., MP3s, pictures); therefore, in spite of NAND flash's drawbacks, only modest complexity is usually needed in the logic of commercial flash drives. Mission-critical applications, on the other hand, have different reliability requirements from commercial scenarios, and they usually operate in a hostile environment (e.g., space) that worsens all of these issues. We aim at providing practical, valuable guidelines, comparisons, and tradeoffs among the many dimensions of fault-tolerance methodologies for NAND flash applied to critical environments. We hope that these guidelines will be useful for our ongoing research and for all interested readers.
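
    One of the dimensions such guidelines must weigh is error-correcting-code strength versus controller complexity. As a toy illustration only (production NAND controllers typically use BCH or LDPC codes over much larger blocks), a Hamming(7,4) encoder/decoder in C shows how a single radiation-induced bit flip is corrected:

        #include <stdint.h>
        #include <stdio.h>

        /* Encode a 4-bit nibble into a 7-bit Hamming(7,4) codeword;
           bit i of the result is codeword position i+1 (1-based). */
        static uint8_t ham74_encode(uint8_t d)
        {
            uint8_t d1 = d & 1, d2 = (d >> 1) & 1,
                    d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
            uint8_t p1 = d1 ^ d2 ^ d4;
            uint8_t p2 = d1 ^ d3 ^ d4;
            uint8_t p3 = d2 ^ d3 ^ d4;
            return p1 | p2 << 1 | d1 << 2 | p3 << 3 |
                   d2 << 4 | d3 << 5 | d4 << 6;
        }

        /* Decode, correcting any single-bit error: the syndrome is the
           1-based position of the flipped bit (0 means no error). */
        static uint8_t ham74_decode(uint8_t c)
        {
            uint8_t b[8];
            for (int i = 1; i <= 7; i++) b[i] = (c >> (i - 1)) & 1;
            uint8_t s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
            uint8_t s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
            uint8_t s3 = b[4] ^ b[5] ^ b[6] ^ b[7];
            int syndrome = s1 | s2 << 1 | s3 << 2;
            if (syndrome) b[syndrome] ^= 1;        /* correct the flip */
            return b[3] | b[5] << 1 | b[6] << 2 | b[7] << 3;
        }

        int main(void)
        {
            for (uint8_t d = 0; d < 16; d++)
                for (int e = 0; e < 7; e++) {      /* inject each 1-bit error */
                    uint8_t c = ham74_encode(d) ^ (uint8_t)(1u << e);
                    if (ham74_decode(c) != d) { puts("FAIL"); return 1; }
                }
            puts("all single-bit errors corrected");
            return 0;
        }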

    Efficiently Hardening SGX Enclaves against Memory Access Pattern Attacks via Dynamic Program Partitioning

    Intel SGX is known to be vulnerable to a class of practical attacks exploiting memory access pattern side channels, notably page-fault attacks and cache timing attacks. A promising hardening scheme is to wrap applications in hardware transactions, enabled by Intel TSX, that return control to the software upon unexpected cache misses and interruptions, so that existing side-channel attacks exploiting these micro-architectural events can be detected and mitigated. However, existing hardening schemes scale only to small-data computation, with a typical working set smaller than one or a few times (e.g., 88 times) the size of a CPU data cache. This work tackles the data scalability and performance efficiency of security hardening schemes for Intel SGX enclaves against memory-access-pattern side channels. The key insight is that the size of TSX transactions in the target computation is critical, both performance- and security-wise. Unlike existing designs, this work dynamically partitions target computations to enlarge transactions while avoiding aborts, leading to lower performance overhead and improved side-channel security. We materialize the dynamic partitioning scheme and build a C++ library to monitor and model cache utilization at runtime. We further build a data analytics system using the library and implement various external oblivious algorithms. Performance evaluation shows that our work can effectively increase transaction size and reduce execution time by up to two orders of magnitude compared with state-of-the-art solutions.
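
    A minimal C sketch of the transaction-size tradeoff, using the Intel RTM intrinsics from immintrin.h (compile with -mrtm on TSX-capable hardware; the partitioning policy and helper names are invented for illustration and are not the paper's actual scheme):

        #include <immintrin.h>   /* _xbegin/_xend RTM intrinsics */
        #include <stdio.h>

        #define N 4096

        /* Process items [lo, lo+len) inside one hardware transaction.
           Returns 1 on commit, 0 on abort; a real scheme would also
           inspect the abort code to flag suspected side-channel events. */
        static int run_partition(int *data, int lo, int len)
        {
            unsigned status = _xbegin();
            if (status == _XBEGIN_STARTED) {
                for (int i = lo; i < lo + len; i++)
                    data[i] += 1;              /* the protected computation */
                _xend();
                return 1;
            }
            /* Abort path: a cache miss, capacity overflow, or interrupt
               returned control here before any effect became visible. */
            return 0;
        }

        int main(void)
        {
            static int data[N];
            int lo = 0, len = N;               /* start with one big txn */
            while (lo < N) {
                if (run_partition(data, lo, len)) {
                    lo += len;                 /* committed: try enlarging */
                    len = (N - lo < len * 2) ? N - lo : len * 2;
                } else if (len > 1) {
                    len /= 2;                  /* aborted: shrink and retry */
                } else {
                    data[lo]++;                /* unprotected fallback step */
                    lo++;                      /* (a secure scheme would not) */
                }
            }
            printf("done: data[0]=%d\n", data[0]);
            return 0;
        }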

    PPM Reduction on Embedded Memories in System on Chip

    This paper summarizes advanced test patterns designed to target dynamic and time-related faults caused by new defect mechanisms in deep-submicron memory technologies. These tests are industrially evaluated, together with the traditional tests, at Design of Systems on Silicon (DS2) in Spain in order to (a) validate the fault models used and (b) investigate the added value of the new tests and their impact on the PPM level. The preliminary silicon results are presented and analyzed. They validate some of the new dynamic fault models and show the importance of considering dynamic faults for high outgoing product quality.
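
    As a flavor of what distinguishes such patterns from traditional static tests, the C fragment below applies back-to-back operations per address, the kind of sequence needed to sensitize dynamic faults; it is an illustrative element only, not the pattern set evaluated at DS2:

        #include <stdint.h>
        #include <stdio.h>

        #define WORDS 16

        /* Static tests read a cell "cold"; dynamic faults need consecutive
           accesses to the same cell. This element applies w,r,r per address
           so each read is sensitized by the immediately preceding access. */
        static int wrr_element(uint8_t *mem, int words, uint8_t v)
        {
            int errors = 0;
            for (int a = 0; a < words; a++) {
                mem[a] = v;                    /* w(v) */
                if (mem[a] != v) errors++;     /* r(v), back-to-back read 1 */
                if (mem[a] != v) errors++;     /* r(v), back-to-back read 2 */
            }
            return errors;
        }

        int main(void)
        {
            uint8_t mem[WORDS] = {0};
            int e = wrr_element(mem, WORDS, 0) + wrr_element(mem, WORDS, 1);
            printf("errors: %d\n", e);
            return 0;
        }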

    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach for reducing the probability of AFTA failure due to common-mode faults is described. Analytical models for AFTA performance, reliability, availability, life-cycle cost, weight, power, and volume are developed. An approach is presented for using the VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
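
    As a hint of what such analytical reliability models look like, the C snippet below evaluates the textbook triplex-redundancy formula R_tmr = 3R^2 - 2R^3 with per-lane reliability R = e^(-lambda*t); the failure rate is assumed, and this is not AFTA's actual model:

        #include <math.h>    /* link with -lm */
        #include <stdio.h>

        /* A triplex channel with perfect voting survives while at least
           2 of 3 lanes work, hence R_tmr = 3R^2 - 2R^3 (toy model). */
        int main(void)
        {
            double lambda = 1e-4;          /* failures/hour (assumed) */
            for (int h = 0; h <= 1000; h += 250) {
                double r    = exp(-lambda * h);
                double rtmr = 3 * r * r - 2 * r * r * r;
                printf("t=%4dh  simplex=%.6f  triplex=%.6f\n", h, r, rtmr);
            }
            return 0;
        }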

    Technology and layout-related testing of static random-access memories

    Static random-access memories (SRAMs) exhibit faults that are electrical in nature. Functional and electrical testing are performed to diagnose faulty operation. These tests are usually designed from simple fault models that describe the chip interface behavior without a thorough analysis of the chip layout and technology. However, there are certain technology- and layout-related defects that are internal to the chip and are mostly time-dependent in nature. The resulting failures may or may not seriously degrade the input/output interface behavior. They may show up as electrical faults (such as a slow access fault) and/or functional faults (such as a pattern sensitive fault). However, these faults cannot be described properly with functional fault models, because those models do not take timing into account. Also, electrical fault models that describe merely the input/output interface behavior are inadequate to characterize every possible defect in the basic SRAM cell. Examples of faults produced by these defects are (a) static data loss and (b) abnormally high currents drawn from the power supply. Generating tests for such faults often requires a thorough understanding and analysis of the circuit technology and layout. In this article, we examine ways to characterize and test such faults, dividing them into two categories depending on the type of SRAM they affect: silicon SRAMs and GaAs SRAMs.
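
    As an example of a layout-aware test that purely functional models miss, the C sketch below implements the classic recipe for detecting static data loss: write a background, wait long enough for a defective cell to leak its value, then read back. The delay and all names are placeholders, not values from the article:

        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>   /* sleep(); POSIX */

        #define WORDS 16

        /* Data-retention test: real retention tests derive the wait time
           from the technology's leakage behavior, not a fixed constant. */
        static int retention_test(volatile uint8_t *mem, int words,
                                  uint8_t bg, unsigned delay_s)
        {
            int errors = 0;
            for (int a = 0; a < words; a++) mem[a] = bg;  /* write background */
            sleep(delay_s);                               /* wait phase */
            for (int a = 0; a < words; a++)               /* read & compare */
                if (mem[a] != bg) errors++;
            return errors;
        }

        int main(void)
        {
            static volatile uint8_t mem[WORDS];
            int e = retention_test(mem, WORDS, 0x55, 1) +
                    retention_test(mem, WORDS, 0xAA, 1);
            printf("retention errors: %d\n", e);
            return 0;
        }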