
    IEEE Standard 1500 Compliance Verification for Embedded Cores

    Core-based design and reuse are the two key elements of efficient system-on-chip (SoC) development. Unfortunately, they also introduce new challenges in SoC testing, such as core test reuse and the need for a common test infrastructure that works with cores originating from different vendors. The IEEE 1500 Standard for Embedded Core Testing addresses these issues by proposing a flexible hardware test wrapper architecture for embedded cores, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several intellectual property providers have already announced IEEE Standard 1500 compliance in both existing and future design blocks. In this paper, we address the problem of guaranteeing the compliance of a wrapper architecture and its CTL description with the IEEE Standard 1500. This step is mandatory to fully trust the wrapper functionalities in applying the test sequences to the core. We present a systematic methodology for building a verification framework for IEEE Standard 1500 compliant cores, allowing core providers and/or integrators to verify the compliance of their products (sold or purchased) with the standard.
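The kind of compliance checking described above can be sketched at a very small scale. The snippet below is an illustrative simplification, not the paper's framework: the wrapper is represented as a plain dictionary rather than a parsed CTL description, and the terminal list is only a subset of the wrapper serial port signals the standard defines.

```python
# Illustrative IEEE 1500 compliance check (simplified sketch).
# Wrapper description is a plain dict, not real CTL; the terminal
# list is a subset of the standard's wrapper serial port signals.

MANDATORY_INSTRUCTIONS = {"WS_BYPASS", "WS_EXTEST"}
OPTIONAL_INSTRUCTIONS = {"WS_INTEST", "WP_EXTEST", "WP_INTEST", "WS_SAFE"}
WSP_TERMINALS = ("WSI", "WSO", "WRCK", "WRSTN", "SelectWIR")

def check_compliance(wrapper):
    """Return a list of human-readable compliance violations."""
    violations = []
    declared = set(wrapper.get("instructions", []))
    # Every mandatory instruction must be declared.
    for instr in sorted(MANDATORY_INSTRUCTIONS - declared):
        violations.append(f"missing mandatory instruction {instr}")
    # Flag instructions outside the known vocabulary.
    for instr in sorted(declared - MANDATORY_INSTRUCTIONS - OPTIONAL_INSTRUCTIONS):
        violations.append(f"unrecognised instruction {instr}")
    # The wrapper serial port terminals must all be present.
    for port in WSP_TERMINALS:
        if port not in wrapper.get("ports", []):
            violations.append(f"missing WSP terminal {port}")
    return violations
```

A real framework would of course derive the instruction set and port list from the CTL description and check behavioural properties as well; this only shows the structural side of the check.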

    SoC Test: Trends and Recent Standards

    The well-known approaching test cost crisis, in which semiconductor test costs begin to approach or exceed manufacturing costs, has led test engineers to apply new solutions to the problem of testing System-on-Chip (SoC) designs containing multiple IP (Intellectual Property) cores. While it is not yet possible to apply generic test architectures to an IP core within a SoC, the emergence of a number of similar approaches, and the release of new industry standards such as IEEE 1500 and IEEE 1450.6, may begin to change this situation. This paper looks at these standards and at some techniques currently used by SoC test engineers. An extensive reference list is included, reflecting the purpose of this publication as a review paper.

    Modeling and Automatic Generation of FPGA-Based Test Instruments for Structural Printed Circuit Board Testing

    New IC package types such as BGAs reduce optical and mechanical test access and cause problems for test systems when testing the connections between ICs on printed circuit boards. This leads to reduced test coverage and rising test costs. For FPGAs in particular, methods are lacking in which the test system automatically adapts itself to the conditions of the board under test. This thesis addresses the problem of FPGA-based testing. The presented concept uses only the existing resources of the FPGA to implement test algorithms in its programmable logic and therefore does not increase the production costs of the printed circuit board. The FPGA's resources are available exclusively for testing during the test phase. Starting from the state of the art in non-invasive electrical methods for printed circuit board testing, current approaches and methods are compared, and from their strengths and weaknesses a detailed set of goals for this thesis is derived. A method for generating so-called test instruments for FPGA-based testing is presented. It moves the execution of test algorithms into the FPGA and delivers test coverage and test speed comparable to or better than established techniques, without requiring any manual intervention during generation. Alongside the test system architecture, a modeling approach for the ICs involved in the interconnect tests is presented. The architecture allows test algorithms to be executed either in software on a soft-core processor or directly in hardware as dedicated logic in a so-called co-processor. With this method, each IC can be modeled separately, independently of its later implementation and of the concrete conditions of the unit under test. The generation of all required software and hardware components, as well as their integration into a test instrument, is fully automatic. The core of this work is the modeling and generation of embedded test instruments based on the presented test system architecture, with a focus on the timing-correct control of the ICs involved in the interconnect tests, without prescribing a concrete implementation. Experiments cover the generation of test instruments for various ICs. The results demonstrate the performance of the proposed method for the automatic generation of FPGA-based test instruments and show a significant speed-up of FPGA-based interconnect tests.
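Board-level interconnect tests of the kind discussed here are commonly built on counting-sequence patterns. As a hedged illustration of the underlying idea (not the thesis's generator), the sketch below derives the classic counting-sequence patterns for a set of nets and diagnoses shorts and opens from the captured responses.

```python
import math
from collections import defaultdict

def counting_patterns(nets):
    """Assign each net a unique nonzero binary ID and derive the
    counting-sequence interconnect test: one pattern per ID bit,
    driving that bit of every net's ID in parallel."""
    bits = math.ceil(math.log2(len(nets) + 1))  # IDs 1..N need this many bits
    ids = {net: i + 1 for i, net in enumerate(nets)}
    patterns = [{net: (ids[net] >> b) & 1 for net in nets} for b in range(bits)]
    return ids, patterns

def diagnose(ids, responses):
    """Compare each net's received bit sequence with its driven ID:
    identical sequences on different nets indicate a short, while a
    sequence deviating from the ID indicates an open or stuck fault."""
    received = defaultdict(list)
    for net, seq in responses.items():
        received[tuple(seq)].append(net)
    faults = []
    for seq, group in received.items():
        value = sum(bit << b for b, bit in enumerate(seq))
        if len(group) > 1:
            faults.append(("short", sorted(group)))
        elif value != ids[group[0]]:
            faults.append(("open/stuck", group))
    return faults
```

For three nets this yields two patterns; a wired-AND short between two nets shows up as both nets capturing the same (ANDed) sequence.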

    Methodology and Ecosystem for the Design of a Complex Network ASIC

    The performance of HPC systems has risen steadily. While the 10 Petaflop/s barrier was broken in 2011, the next large step into the exascale era is expected sometime between 2018 and 2020. The EXTOLL project will be an integral part of this venture. Originally designed as an FPGA-based research project, it will make the transition to an ASIC to improve its already excellent performance even further. This transition poses many challenges, which are presented in this thesis. Nowadays it is not enough to look only at single components in a system. EXTOLL is part of a complex ecosystem which must be optimized as a whole, since everything is tightly interwoven and disregarding some aspects can cause the whole system either to work with limited performance or even to fail. This thesis examines four different aspects of the design hierarchy and proposes efficient solutions or improvements for each of them. First, it looks at the design implementation and the differences between FPGA and ASIC design. It introduces a methodology to equip all on-chip memory with ECC logic automatically, without the user's input and in a transparent way, so that the underlying code that uses the memory does not have to be changed. In the next step the floorplanning process is analyzed, and an iterative solution is worked out based on the physical and logical constraints of the EXTOLL design. In addition, a workflow for collaborative design is presented that allows multiple users to work on the design concurrently. The third part concentrates on the high-speed signal path from the chip to the connector and how it is affected by technological limitations. All constraints are analyzed, and a package layout for the EXTOLL chip is proposed as the optimal solution. The last part develops a cost model for wafer- and package-level test and raises technological concerns that will affect the testing methodology. To run testing internally, it proposes the development of a stand-alone test platform able to test packaged EXTOLL chips in every aspect.
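The wafer- versus package-level test trade-off mentioned at the end can be captured by a simple expected-cost model. The sketch below is a hypothetical illustration, not the thesis's actual model: wafer test adds a probe cost per die, but avoids spending packaging and final-test cost on dies that are already bad.

```python
def cost_per_good_chip(n_dies, die_yield, pkg_yield,
                       c_wtest, c_pkg, c_ptest, wafer_test=True):
    """Expected total test-and-packaging cost per good packaged chip.

    With wafer test, every die pays the probe cost c_wtest, but only
    the fraction die_yield that passes is packaged and final-tested.
    Without it, every die pays c_pkg + c_ptest. All parameter names
    and values are illustrative, not measured figures."""
    if wafer_test:
        total = n_dies * c_wtest + n_dies * die_yield * (c_pkg + c_ptest)
    else:
        total = n_dies * (c_pkg + c_ptest)
    good = n_dies * die_yield * pkg_yield  # chips that survive both stages
    return total / good
```

With a die yield of 50%, a $1 probe cost, and $12 of packaging plus final test, wafer test roughly halves the cost per good chip in this toy model; as yield approaches 100%, skipping wafer test becomes the cheaper option.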

    A Case Study of Hierarchical Diagnosis for Core-Based SoC

    In this paper, a silicon debug case study is presented in the context of a hierarchical diagnosis flow for core-based SoC. We discuss (1) how to design a simple core wrapper that supports at-speed test, (2) how to map failures collected at the chip level to the core level, and (3) how to perform failure analysis and silicon debug under the guidance of diagnosis results. Terminology and Introduction: The terminology used in this paper is briefly discussed below. SoC: designs that integrate a complete system onto one chip are called System-on-a-Chip (SoC) designs. Core: in SoC designs, the design process involves an IC that is often made up of large pre-defined and pre-verified reusable building blocks or intellectual property (IP) blocks, such as digital logic, processors, memories, and analog and mixed-signal circuits. These building blocks are called cores or embedded cores. Core Wrapper Design: The IEEE 1500 core wrapper [8] is organized as follows. (1) The Wrapper Serial Port (WSP) has a set of serial terminals that can be sourced from chip-level pins or from an embedded controller such as an IEEE 1149.1 (JTAG) controller. The WSP is used to load and unload instructions and data into and out of the IEEE 1500 registers. In addition to the wrapper serial input (WSI) and wrapper serial output (WSO) terminals shown i
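Mapping chip-level failures to the core level, as in step (2), is essentially an offset translation along the concatenated scan chain. A minimal sketch of that idea, assuming a flat concatenation of per-core chains (the layout format here is hypothetical, not the paper's):

```python
def map_failures_to_cores(chain_layout, failing_bits):
    """Translate chip-level failing scan positions into core-local ones.

    chain_layout: ordered list of (core_name, chain_length) describing
    how the per-core scan chains are concatenated at chip level.
    failing_bits: chip-level bit positions that miscompared.
    Returns {core_name: [local bit positions]}."""
    spans, offset = [], 0
    for core, length in chain_layout:
        spans.append((core, offset, offset + length))
        offset += length
    mapping = {}
    for bit in failing_bits:
        for core, lo, hi in spans:
            if lo <= bit < hi:
                mapping.setdefault(core, []).append(bit - lo)
                break
    return mapping
```

A production flow additionally has to undo any chain reordering, inversion, and compression done by the wrapper and DFT logic; this sketch covers only the offset arithmetic.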

    A Methodology for Generative Spelling Correction via Natural Spelling Errors Emulation across Multiple Domains and Languages

    Modern large language models demonstrate impressive capabilities in text generation and generalization. However, they often struggle with text editing tasks, particularly correcting spelling errors and mistypings. In this paper, we present a methodology for generative spelling correction (SC), which was tested on English and Russian and can potentially be extended to any language with minor changes. Our research mainly focuses on exploring natural spelling errors and mistypings in texts and studying the ways those errors can be emulated in correct sentences to effectively enrich generative models' pre-training procedure. We investigate the impact of such emulations and the models' abilities across different text domains. In this work, we investigate two spelling corruption techniques: (1) the first mimics human behavior when making a mistake by leveraging statistics of errors from a particular dataset, and (2) the second adds the most common spelling errors, keyboard miss-clicks, and some heuristics within the texts. We conducted experiments employing various corruption strategies, model architectures, and sizes at the pre-training and fine-tuning stages, and evaluated the models using single-domain and multi-domain test sets. As a practical outcome of our work, we introduce SAGE (Spell checking via Augmentation and Generative distribution Emulation), a library for automatic generative SC that includes a family of pre-trained generative models and built-in augmentation algorithms. Comment: to appear in EACL 202
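The second corruption technique, keyboard miss-clicks plus simple heuristics, can be illustrated with a tiny emulator. This is a hypothetical sketch in the spirit of the paper, not SAGE's actual augmentation code; the neighbour map is a hand-picked QWERTY excerpt.

```python
import random

# Hypothetical QWERTY neighbour map (excerpt), used to emulate
# miss-click substitutions; not taken from the SAGE library.
NEIGHBOURS = {"a": "qwsz", "e": "wsdr", "o": "iklp", "s": "awedxz", "t": "rfgy"}

def corrupt(text, rate=0.1, seed=0):
    """Inject keyboard-style errors into clean text at the given rate.

    Per eligible character: miss-click substitution (50%), doubled
    letter (25%), or dropped letter (25%). Seeded for reproducibility."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.lower() in NEIGHBOURS and rng.random() < rate:
            kind = rng.random()
            if kind < 0.5:
                out.append(rng.choice(NEIGHBOURS[ch.lower()]))  # miss-click
            elif kind < 0.75:
                out.append(ch + ch)                             # doubled letter
            # else: letter dropped entirely
        else:
            out.append(ch)
    return "".join(out)
```

Pairs of (corrupted, original) sentences produced this way can then serve as training data for a generative corrector, which is the enrichment idea the abstract describes.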

    NASA/ESA CV-990 Spacelab simulation. Appendix B: Experiment development and performance

    Eight experiments flown on the CV-990 airborne laboratory during the NASA/ESA joint Spacelab simulation mission are described in terms of their physical arrangement in the aircraft, their scientific objectives, developmental considerations dictated by mission requirements, checkout, integration into the aircraft, and the inflight operation and performance of the experiments

    The Development of Microfluidic and Plasmonic Devices for Terahertz Frequencies

    The wealth of opportunities associated with the terahertz (THz) region of the electromagnetic spectrum has only recently, thanks to advances in technology, begun to be fully recognised and exploited. The advent of terahertz time-domain spectroscopy (THz-TDS) has led to a wide spectrum of research spanning chemical, biological and physical systems. However, the relative immaturity of THz techniques results in a variety of inherent problems which limit the potential applications. Because the wavelength of THz radiation is comparable to the length scales associated with modern microfabrication techniques, such technology can be exploited to help find solutions to these problems. This thesis seeks to address one of these problems, namely the strong absorption associated with liquid water in the THz region. A simple design idea, that if the optical path length through a fluidic sample were reduced, strong signals could be detected after direct transmission, resulted in a micromachined fluidic cell being devised. The design, fabrication and testing of a microfluidic device inherently transparent to THz radiation, and designed for use in a standard THz-TDS arrangement, is presented. A range of samples, including primary alcohol-water mixtures, commercial whiskies and organic materials, is analysed, which, when used in conjunction with data extraction algorithms, allows accurate dielectric information to be obtained. Further exploitation of micromachining techniques is presented, where a variety of structures seeking to initiate and utilise a class of surface electromagnetic wave known as surface plasmon polaritons (SPPs) are realised. By flanking a single sub-wavelength aperture with sub-wavelength periodic corrugations, extraordinary optical transmission (EOT) can be observed. This technique allows smaller apertures to be used for THz near-field imaging applications, with a view to increasing spatial resolution. The first demonstration of THz near-field imaging using sub-wavelength plasmonic apertures in conjunction with a THz quantum cascade laser source is presented. Detailed investigations into EOT for the case of two-dimensional sub-wavelength aperture arrays are documented. A qualitative time-of-flight model describing the transmission properties of these structures is presented, resulting from systematic investigations into a variety of geometrical effects. This model has allowed sharp resonances to be engineered in the frequency domain. A hybrid device featuring a combination of sub-wavelength periodic apertures and corrugations is also investigated. Such a structure is not known to have been described previously in the literature, either in the optical or THz domains. The device demonstrates unparalleled transmission efficiencies, termed 'super' EOT. Finally, a device combining the microfluidic technology with the highly resonant SPP structures is presented. This device seeks to exploit the innate dependence of SPPs on the metal-dielectric interface, for use as a sensor. By introducing a range of fluids into the device, the change in the metal-dielectric interface induces a change in the frequency response of the resonant structure. The magnitude of the observed frequency shift can be related back to the dielectric properties of the fluid. This result shows how microfabrication techniques can be successfully exploited to create devices for THz applications, providing solutions to the inherent problems associated with this part of the electromagnetic spectrum.
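The dependence of the EOT resonance on the dielectric at the metal surface, which the sensing device exploits, can be estimated with a textbook first-order relation: at normal incidence on a periodic structure whose metal behaves as a near-perfect conductor at THz frequencies, transmission peaks near the frequency where the grating period matches one surface wavelength. The sketch below uses that relation with hypothetical period and index values, not the thesis's measured devices.

```python
# First-order estimate of the EOT resonance of a periodic aperture
# structure at normal incidence, assuming a perfect-conductor metal
# bounded by a dielectric of refractive index n_d: f ~ order * c / (p * n_d).
C = 299_792_458.0  # speed of light in vacuum, m/s

def eot_resonance_hz(period_m, n_dielectric, order=1):
    """Approximate resonance frequency of the order-th transmission peak."""
    return order * C / (period_m * n_dielectric)

# Illustrative numbers: a 300 um period in air resonates near 1 THz;
# filling the surface with a higher-index fluid red-shifts the peak,
# which is the sensing mechanism described in the abstract.
shift = eot_resonance_hz(300e-6, 1.0) - eot_resonance_hz(300e-6, 1.33)
```

Replacing air (n = 1) with a water-like fluid (n ≈ 1.33 used here purely as an illustration) shifts the estimated peak down by roughly a quarter of its frequency, consistent with the frequency-shift sensing principle.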
