
    A design for testability study on a high performance automatic gain control circuit.

    A comprehensive testability study on a commercial automatic gain control circuit is presented, which aims to identify design for testability (DfT) modifications that both reduce production test cost and improve test quality. A fault simulation strategy based on layout-extracted faults has been used to support the study. The paper proposes a number of DfT modifications at the layout, schematic and system levels, together with testability guidelines that may well have generic applicability. Proposals for using the modifications to achieve partial self-test are made, and estimates of the achieved fault coverage and quality levels are presented.
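
    For context on how fault coverage estimates translate into quality levels, the defect level is commonly approximated with the Williams-Brown relation (a standard DfT result, not a formula taken from this abstract):

```latex
DL = 1 - Y^{(1 - FC)}
```

    where Y is the process yield and FC the fault coverage; for instance, Y = 0.9 and FC = 0.95 give a defect level of roughly 0.5%.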

    FPGA ARCHITECTURE AND VERIFICATION OF BUILT IN SELF-TEST (BIST) FOR 32-BIT ADDER/SUBTRACTER USING DE0-NANO FPGA AND ANALOG DISCOVERY 2 HARDWARE

    The integrated circuit (IC) is an integral part of everyday modern technology, and its application is very attractive to hardware and software design engineers because of its versatility, integration, power consumption, cost, and board-area reduction. ICs are available in various types such as Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), System on Chip (SoC) architectures, Digital Signal Processors (DSP), microcontrollers (μC), and many more. With technology demand focused on faster, lower-power, more efficient IC applications, design engineers face tremendous challenges in developing and testing integrated circuits that guarantee functionality, high fault coverage, and reliability, as transistor technology shrinks to the point where manufacturing defects affect yield, which in turn increases the cost of the part. The competitive IC market is pressuring IC manufacturers to develop and market ICs with a relatively quick turnaround, which in turn requires design and verification engineers to develop an integrated self-test structure that ensures a fault-free, high-quality product is delivered to the market. 70-80% of IC design effort is spent on verification and testing to ensure high quality and reliability for the end user. To test complex and sophisticated IC designs, verification engineers must produce laborious and costly test fixtures, which affect the cost of the part on the competitive market. To avoid passing on increased part cost due to yield and test time to the end user, and to keep up with the competitive market, many IC design engineers are moving away from complex external test fixtures and are focusing on integrating Built-in Self-Test (BIST) or Design for Test (DFT) techniques into ICs, which reduces time to market while still guaranteeing high coverage for the product. The BIST architecture, as well as the application of the IC, must be understood before developing the IC. The architecture of FPGAs is elaborated in this paper, followed by several BIST techniques and applications of those BIST structures relative to FPGA, SoC, analog-to-digital (ADC), or digital-to-analog converters (DAC) integrated on ICs. The paper concludes with verification of BIST for a 32-bit adder/subtracter designed in Quartus II software, using the Analog Discovery 2 module as stimulus and the DE0-NANO FPGA board for verification.
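
    As a rough illustration of the BIST principle described above (not the Quartus II/DE0-NANO design itself), the following Python sketch models an LFSR test-pattern generator feeding a 32-bit adder/subtracter and a MISR-style signature register; the taps, seed, and pattern count are illustrative assumptions.

```python
# Behavioral sketch of a BIST loop for a 32-bit adder/subtracter:
# an LFSR supplies pseudo-random operands, a MISR compacts the responses,
# and the final signature is compared against a golden value computed
# from a known-good reference model. Taps and seeds are illustrative.

MASK32 = 0xFFFFFFFF

def lfsr32(state):
    """One step of a 32-bit Fibonacci LFSR (taps 32, 22, 2, 1 - illustrative)."""
    bit = ((state >> 31) ^ (state >> 21) ^ (state >> 1) ^ state) & 1
    return ((state << 1) | bit) & MASK32

def misr32(signature, response):
    """Compact one 32-bit response into the running signature (illustrative)."""
    return (lfsr32(signature) ^ response) & MASK32

def adder_subtracter(a, b, subtract):
    """Reference model of the circuit under test."""
    return (a - b if subtract else a + b) & MASK32

def run_bist(num_patterns=1024, seed=0xACE1ACE1):
    state, signature = seed, 0
    for i in range(num_patterns):
        state = lfsr32(state); a = state
        state = lfsr32(state); b = state
        response = adder_subtracter(a, b, subtract=(i & 1))
        signature = misr32(signature, response)
    return signature

if __name__ == "__main__":
    golden = run_bist()  # signature recorded once from the fault-free model
    print(f"golden signature: {golden:#010x}")
```

    In a hardware BIST, the same comparison is done on-chip: the MISR content after the last pattern is checked against the stored golden signature to produce a pass/fail flag.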

    IEEE Standard 1500 Compliance Verification for Embedded Cores

    Core-based design and reuse are the two key elements for efficient system-on-chip (SoC) development. Unfortunately, they also introduce new challenges in SoC testing, such as core test reuse and the need for a common test infrastructure working with cores originating from different vendors. The IEEE 1500 Standard for Embedded Core Testing addresses these issues by proposing a flexible hardware test wrapper architecture for embedded cores, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several intellectual property providers have already announced IEEE Standard 1500 compliance in both existing and future design blocks. In this paper, we address the problem of guaranteeing the compliance of a wrapper architecture and its CTL description to the IEEE Standard 1500. This step is mandatory to fully trust the wrapper functionalities in applying the test sequences to the core. We present a systematic methodology to build a verification framework for IEEE Standard 1500 compliant cores, allowing core providers and/or integrators to verify the compliance of their products (sold or purchased) to the standard.

    Automating the IEEE std. 1500 compliance verification for embedded cores

    The IEEE 1500 standard for embedded core testing proposes a very effective solution for testing modern systems-on-chip (SoC). It proposes a flexible hardware test wrapper architecture, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several IP providers have already announced compliance in both existing and future design blocks. In this paper we address the challenge of guaranteeing the compliance of a wrapper architecture and its CTL description to the IEEE std. 1500. This is a mandatory step to fully trust the wrapper functionalities in applying the test sequences to the core. The proposed solution aims at implementing a verification framework allowing core providers and/or integrators to automatically verify the compliance of their products (sold or purchased) to the standard.
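
    As an illustration of what one automated compliance check might look like, the Python sketch below verifies a mandatory IEEE 1500 behavior on a toy wrapper model: under the WS_BYPASS instruction, serial data must reach WSO through the single-bit bypass register (WBY), i.e. delayed by exactly one shift cycle. The WrapperModel class and its methods are hypothetical stand-ins, not part of the framework described above.

```python
# Sketch of one automated compliance check: under the mandatory WS_BYPASS
# instruction, the wrapper must place a single-bit bypass register (WBY)
# between its serial input (WSI) and serial output (WSO), so serial data
# appears on WSO delayed by exactly one shift cycle.
# WrapperModel is a hypothetical stand-in for a simulation model that a
# real framework would derive from the core's CTL description.

import random

class WrapperModel:
    """Toy behavioral wrapper: only the bypass path is modelled here."""
    def __init__(self):
        self.instruction = None
        self.wby = 0                      # 1-bit wrapper bypass register

    def load_instruction(self, name):
        self.instruction = name

    def shift(self, wsi_bit):
        """One WRCK shift cycle: returns the bit presented on WSO."""
        assert self.instruction == "WS_BYPASS"
        wso_bit, self.wby = self.wby, wsi_bit
        return wso_bit

def check_ws_bypass(wrapper, num_bits=64, seed=0):
    random.seed(seed)
    stream = [random.randint(0, 1) for _ in range(num_bits)]
    wrapper.load_instruction("WS_BYPASS")
    observed = [wrapper.shift(b) for b in stream]
    # WSO must reproduce WSI delayed by one cycle.
    return observed[1:] == stream[:-1]

if __name__ == "__main__":
    print("WS_BYPASS check passed:", check_ws_bypass(WrapperModel()))
```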

    Innovative Techniques for Testing and Diagnosing SoCs

    We rely upon the continued functioning of many electronic devices for our everyday welfare, usually embedding integrated circuits that are becoming ever cheaper and smaller with improved features. Nowadays, microelectronics can integrate a working computer with CPU, memories, and even GPUs on a single die, namely a System-on-Chip (SoC). SoCs are also employed in automotive safety-critical applications, but need to be tested thoroughly to comply with reliability standards, in particular ISO 26262 on functional safety for road vehicles. The goal of this PhD thesis is to improve SoC reliability by proposing innovative techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals, and GPUs. The proposed approaches, in the sequence in which they appear in this thesis, are described as follows: 1. Embedded Memory Diagnosis: Memories are dense and complex circuits which are susceptible to design and manufacturing errors. Hence, it is important to understand the fault occurrence in the memory array. In practice, the logical and physical array representations differ due to an optimized design which adds enhancements to the device, namely scrambling. This part proposes an accurate memory diagnosis by presenting a software tool able to analyze test results, unscramble the memory array, map failing syndromes to cell locations, perform cumulative analyses, and formulate a final fault model hypothesis. Several SRAM memory failing syndromes were analyzed as case studies, gathered on an industrial automotive 32-bit SoC developed by STMicroelectronics. The tool displayed defects virtually, and results were confirmed by real photos taken with a microscope. 2. Functional Test Pattern Generation: The key to a successful test is the pattern applied to the device. Patterns can be structural or functional; the former usually benefit from embedded test modules targeting manufacturing errors and are only effective before shipping the component to the client. The latter, on the other hand, can be applied during mission mode with minimal impact on performance, but are penalized by high generation time. However, functional test patterns may benefit from having different goals in functional mission mode. Part III of this PhD thesis proposes three different functional test pattern generation methods for CPU cores embedded in SoCs, targeting different test purposes, described as follows: a. Functional Stress Patterns: suitable for optimizing functional stress during Operational-life Tests and Burn-in Screening for an optimal device reliability characterization. b. Functional Power-Hungry Patterns: suitable for determining functional peak power, used to strictly limit the power of structural patterns during manufacturing tests, thus reducing premature device over-kill while delivering high test coverage. c. Software-Based Self-Test Patterns: combine the potential of structural patterns with functional ones, allowing periodic execution during mission. In addition, an external hardware module communicating with a devised SBST was proposed. It helps increase fault coverage by 3% by testing critical Hardly Functionally Testable Faults not covered by conventional SBST patterns. An automatic functional test pattern generation flow, exploiting an evolutionary algorithm that maximizes metrics related to stress, power, and fault coverage, was employed in the above-mentioned approaches to quickly generate the desired patterns. The approaches were evaluated on two industrial cases developed by STMicroelectronics: an 8051-based SoC and a 32-bit Power Architecture SoC. Results show that generation time was reduced by up to 75% in comparison to older methodologies, while significantly increasing the desired metrics. 3. Fault Injection in GPGPUs: Fault injection mechanisms in semiconductor devices are suitable for generating structural patterns, testing and activating mitigation techniques, and validating robust hardware and software applications. GPGPUs are known for fast parallel computation used in high performance computing and advanced driver assistance, where reliability is the key point. Moreover, GPGPU manufacturers do not provide the design description code due to content secrecy. Therefore, commercial fault injectors based on a GPGPU model are unfeasible, making radiation tests the only available resource, but these are costly. In the last part of this thesis, we propose a software-implemented fault injector able to inject bit-flips into memory elements of a real GPGPU. It exploits a software debugger tool and the C-CUDA grammar to determine fault spots and apply bit-flip operations to program variables. The goal is to validate robust parallel algorithms by studying fault propagation or activating the redundancy mechanisms they possibly embed. The effectiveness of the tool was evaluated on two robust applications: redundant parallel matrix multiplication and floating-point Fast Fourier Transform.
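
    As a simplified, host-side illustration of the fault-injection idea described above (the actual injector drives a debugger on a real GPGPU), the Python sketch below flips one bit of one operand of a matrix multiplication and measures how the error propagates to the result; NumPy, the integer matrices, and the injection point are assumptions made purely for the example.

```python
# Host-side illustration of software-implemented bit-flip injection used to
# study fault propagation: one bit of one operand element is flipped and the
# faulty result is compared element-wise against the fault-free (golden)
# result. The real injector described above drives a debugger on a physical
# GPGPU; everything here is a simplifying assumption for illustration only.

import numpy as np

def inject_bit_flip(matrix, row, col, bit):
    """Flip one bit of one element (integer matrices keep the example exact)."""
    faulty = matrix.copy()
    faulty[row, col] ^= (1 << bit)
    return faulty

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 100, size=(8, 8))
    b = rng.integers(0, 100, size=(8, 8))

    golden = a @ b                                       # fault-free reference
    a_faulty = inject_bit_flip(a, row=2, col=3, bit=4)   # corrupt one operand bit
    faulty = a_faulty @ b

    corrupted = np.count_nonzero(golden != faulty)
    print(f"output elements corrupted by one bit-flip: {corrupted} / {golden.size}")
    # A single flipped bit in one element of 'a' typically corrupts the whole
    # corresponding row of the product: this is the kind of propagation that a
    # redundant parallel algorithm must be able to detect or mask.
```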

    Miniature mobile sensor platforms for condition monitoring of structures

    In this paper, a wireless, multisensor inspection system for nondestructive evaluation (NDE) of materials is described. The sensor configuration enables two inspection modes: magnetic (flux leakage and eddy current) and noncontact ultrasound. Each is designed to function in a complementary manner, maximizing the potential for detection of both surface and internal defects. Particular emphasis is placed on the generic architecture of a novel, intelligent sensor platform, and on its positioning on the structure under test. The sensor units are capable of wireless communication with a remote host computer, which controls manipulation and data interpretation. Results are presented in the form of automatic scans with different NDE sensors in a series of experiments on thin-plate structures. To highlight the advantage of utilizing multiple inspection modalities, data fusion approaches are employed to combine data collected by complementary sensor systems. Fusion of the data is shown to demonstrate the potential for improved inspection reliability.
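
    As a minimal sketch of one possible fusion step (the abstract does not specify the exact rule used), the Python snippet below combines co-registered defect-likelihood profiles from the magnetic and ultrasound channels with a noisy-OR rule, so that evidence from either modality raises the fused defect probability; all values and names are illustrative assumptions.

```python
# Minimal data-fusion sketch: two co-registered defect likelihood profiles
# (one from the magnetic scan, one from the ultrasound scan) are combined
# with a noisy-OR rule. A surface-breaking defect seen mostly by the magnetic
# sensor and an internal defect seen mostly by ultrasound are both flagged
# in the fused result. Arrays and the 0.5 threshold are illustrative.

import numpy as np

def fuse_noisy_or(p_magnetic, p_ultrasound):
    """P(defect) assuming the two detectors provide independent evidence."""
    return 1.0 - (1.0 - p_magnetic) * (1.0 - p_ultrasound)

if __name__ == "__main__":
    # Toy 1-D scan line along the plate.
    p_mag = np.array([0.05, 0.80, 0.10, 0.05, 0.15, 0.05])  # magnetic channel
    p_ut  = np.array([0.05, 0.20, 0.10, 0.05, 0.85, 0.05])  # ultrasound channel

    fused = fuse_noisy_or(p_mag, p_ut)
    print("fused probabilities:", np.round(fused, 2))
    print("positions flagged as defective:", np.flatnonzero(fused > 0.5))
```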

    A Hardware Security Solution against Scan-Based Attacks

    Scan-based Design for Test (DfT) schemes have been widely used to achieve high fault coverage for integrated circuits. The scan technique provides full access to the internal nodes of the device under test, to control them or observe their response to input test vectors. While such comprehensive access is highly desirable for testing, it is not acceptable for secure chips, as it is subject to exploitation by various attacks. In this work, new methods are presented to protect the security of critical information against scan-based attacks. In the proposed methods, access via the scan chain to the circuit containing secret information is severely limited in order to reduce the risk of a security breach. To ensure the testability of the circuit, a built-in self-test which utilizes an LFSR as the test pattern generator (TPG) is proposed. The proposed schemes can be used as a countermeasure against side-channel attacks with a low area overhead compared to the existing solutions in the literature.
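
    To give a flavor of this family of countermeasures, the toy Python model below illustrates one widely known option, clearing secret registers on the transition into test mode so the scan chain never exposes the key. This is a generic illustration, not the specific scheme proposed in this work, and all names and register widths are assumptions.

```python
# Generic illustration (not the authors' circuit) of restricting scan access
# to secret state: in this toy model, switching to test mode clears the key
# register before any scan-out is allowed, so the scan chain can still be
# exercised for testing without ever exposing the secret value.

SECRET_KEY = 0xDEADBEEF          # value that must never leave via the scan chain

class SecureScanCell:
    def __init__(self):
        self.mode = "functional"
        self.key_reg = SECRET_KEY          # holds the secret during normal operation

    def enter_test_mode(self):
        # Countermeasure: wipe secret state on the functional -> test transition.
        self.key_reg = 0
        self.mode = "test"

    def scan_out(self):
        """Scan access is only granted in test mode, after the wipe."""
        assert self.mode == "test", "scan access blocked in functional mode"
        return self.key_reg

if __name__ == "__main__":
    cell = SecureScanCell()
    cell.enter_test_mode()
    print("value observable on scan-out:", hex(cell.scan_out()))  # 0x0, not the secret
```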

    New techniques for functional testing of microprocessor based systems

    Electronic devices may be affected by failures, for example due to physical defects. These defects may be introduced during the manufacturing process, as well as during the normal operating life of the device due to aging. Detecting all these defects is not a trivial task, especially in complex systems such as processor cores. Nevertheless, safety-critical applications do not tolerate failures, which is why testing such devices is needed to guarantee correct behavior at any time. Moreover, testing is a key parameter for assessing the quality of a manufactured product. Consolidated testing techniques are based on special Design for Testability (DfT) features added to the original design to facilitate test effectiveness. Design, integration, and usage of the available DfT for testing purposes are fully supported by commercial EDA tools, hence approaches based on DfT are the standard solutions adopted by silicon vendors for testing their devices. Tests exploiting the available DfT, such as scan chains, manipulate the internal state of the system differently from the normal functional mode, passing through unreachable configurations. Alternative solutions that do not violate the functional mode are defined as functional tests. In microprocessor-based systems, functional testing techniques include software-based self-test (SBST), i.e., a piece of software (referred to as a test program) which is uploaded into the system's available memory and executed, with the purpose of exciting a specific part of the system and observing the effects of possible defects affecting it. SBST has been widely studied by the research community for years, but its adoption by industry is quite recent. My research activities have mainly focused on the industrial perspective of SBST. The problem of providing an effective development flow and guidelines for integrating SBST into the available operating systems has been tackled, and results are provided on microprocessor-based systems for the automotive domain. Remarkably, new algorithms have also been introduced with respect to state-of-the-art approaches, which can be systematically implemented to enrich SBST suites of test programs for modern microprocessor-based systems. The proposed development flow and algorithms are currently employed in real electronic control units for automotive products. Moreover, a special hardware infrastructure purposely embedded in modern devices for interconnecting the numerous on-board instruments has also been of interest in my research. This solution is known as reconfigurable scan networks (RSNs), and its practical adoption is growing fast as new standards have been created. Test and diagnosis methodologies have been proposed targeting specific RSN features, aimed at checking whether the reconfigurability of such networks has been corrupted by defects and, in that case, at identifying the defective elements of the network. The contribution of my work in this field has also been included in the first suite of public-domain benchmark networks.
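
    As a structural illustration of an SBST routine (real SBST programs are written in the target CPU's assembly so that specific units such as the ALU or multiplier are exercised), the Python sketch below mirrors only the overall shape: execute deterministic operations, fold the results into a signature, and compare against a golden value recorded on a fault-free device. The operands and the folding function are illustrative assumptions.

```python
# Structural sketch of a software-based self-test (SBST) routine:
# run deterministic operations, compact the results into a signature,
# and compare against a precomputed golden value. In a real deployment
# this runs periodically in mission mode on the target microprocessor.

MASK32 = 0xFFFFFFFF

def fold(signature, value):
    """Cheap software signature compaction (rotate-and-xor)."""
    signature = ((signature << 1) | (signature >> 31)) & MASK32
    return (signature ^ value) & MASK32

def sbst_arithmetic_block():
    """Exercise add/sub/mul/shift paths with deterministic operands."""
    sig = 0
    operands = [(0xAAAAAAAA, 0x55555555), (0xFFFFFFFF, 0x00000001),
                (0x12345678, 0x9ABCDEF0)]
    for a, b in operands:
        sig = fold(sig, (a + b) & MASK32)
        sig = fold(sig, (a - b) & MASK32)
        sig = fold(sig, (a * b) & MASK32)
        sig = fold(sig, (a << 3) & MASK32)
    return sig

if __name__ == "__main__":
    golden = sbst_arithmetic_block()   # recorded once on a fault-free device
    result = sbst_arithmetic_block()   # executed periodically in the field
    print("self-test passed:", result == golden)
```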

    Scanamorphos: a map-making software for Herschel and similar scanning bolometer arrays

    Scanamorphos is one of the public software packages available to post-process scan observations performed with the Herschel photometer arrays. This post-processing mainly consists of subtracting the total low-frequency noise (both its thermal and non-thermal components), masking high-frequency artefacts such as cosmic ray hits, and projecting the data onto a map. Although it was developed for Herschel, it is also applicable, with minimal adjustment, to scan observations made with some other imaging arrays subject to low-frequency noise, provided they entail sufficient redundancy; it was successfully applied to P-Artemis, an instrument operating on the APEX telescope. Contrary to matrix-inversion software and high-pass filters, Scanamorphos does not assume any particular noise model and does not apply any Fourier-space filtering to the data, but is an empirical tool using purely the redundancy built into the observations -- taking advantage of the fact that each portion of the sky is sampled multiple times by multiple bolometers. It is an interactive software package in the sense that the user may optionally visualize and control results at each intermediate step, but the processing is fully automated. This paper describes the principles and algorithm of Scanamorphos and presents several examples of application. Comment: This is the final version as accepted by PASP (on July 27, 2013). A copy with much better-quality figures is available on http://www2.iap.fr/users/roussel/herschel
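
    The redundancy principle can be illustrated with a toy 1-D example: because each sky pixel is sampled many times by several bolometers, the deviation of each sample from the current map estimate traces that bolometer's low-frequency drift, which can be smoothed, subtracted, and the map re-projected. The Python sketch below is only a schematic of this idea (one iteration, sinusoidal drift, random pointing), not the Scanamorphos algorithm itself.

```python
# Toy illustration of redundancy-based drift removal before map projection.
# Each bolometer timeline is the true sky value at the pointed pixel plus a
# slow drift; comparing samples to a first naive map estimates the drift,
# which is smoothed, subtracted, and the map is rebuilt.

import numpy as np

rng = np.random.default_rng(1)
n_bolometers, n_samples, n_pixels = 4, 400, 40

sky = rng.uniform(0.0, 10.0, n_pixels)                       # "true" sky map
hits = rng.integers(0, n_pixels, (n_bolometers, n_samples))  # pointing per sample
t = np.linspace(0.0, 3.0, n_samples)
drift = 2.0 * np.sin(t + np.arange(n_bolometers)[:, None])   # slow, 1/f-like drift
data = sky[hits] + drift                                     # recorded timelines

def project(timelines):
    """Average all samples falling into each pixel (naive map-making)."""
    num, cnt = np.zeros(n_pixels), np.zeros(n_pixels)
    np.add.at(num, hits.ravel(), timelines.ravel())
    np.add.at(cnt, hits.ravel(), 1.0)
    return num / np.maximum(cnt, 1.0)

def smooth(x, width=51):
    """Running mean with proper edge normalisation."""
    k = np.ones(width)
    return np.convolve(x, k, "same") / np.convolve(np.ones_like(x), k, "same")

naive_map = project(data)
residual = data - naive_map[hits]             # redundancy: sample minus map value
drift_est = np.array([smooth(r) for r in residual])
clean_map = project(data - drift_est)

def rms_error(m):
    d = m - sky
    d -= d.mean()                             # maps are defined up to an offset
    return float(np.sqrt(np.mean(d ** 2)))

print("rms map error, naive  :", round(rms_error(naive_map), 3))
print("rms map error, cleaned:", round(rms_error(clean_map), 3))
```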