
    Infrastructures and Algorithms for Testable and Dependable Systems-on-a-Chip

    Every new node of semiconductor technology provides further miniaturization and higher performance, increasing the number of advanced functions that electronic products can offer. Silicon area is now so cheap that industries can integrate into a single chip, usually referred to as a System-on-Chip (SoC), all the components and functions that historically were placed on a hardware board. Although adding such advanced functionality can benefit users, the manufacturing process is becoming finer and denser, making chips more susceptible to defects. Today’s very deep-submicron semiconductor technologies (0.13 micron and below) have reached susceptibility levels that put conventional semiconductor manufacturing at an impasse. Being able to rapidly develop, manufacture, test, diagnose and verify such complex new chips and products is crucial for the continued success of our economy at large. This trend is expected to continue for at least the next ten years, making possible the design and production of 100-million-transistor chips. To speed up the research, the National Technology Roadmap for Semiconductors identified in 1997 a number of major hurdles to be overcome, some of which are related to test and dependability. Test is one of the most critical tasks in the semiconductor production process: Integrated Circuits (ICs) are tested several times, from wafer probing to the end of production test. Test is not only necessary to assure fault-free devices; it also plays a key role in analyzing defects in the manufacturing process. This last point is highly relevant because increasing time-to-market pressure on semiconductor fabrication often forces foundries to start volume production on a given semiconductor technology node before reaching the defect densities, and hence yield levels, traditionally obtained at that stage. The feedback derived from test is the only way to analyze and isolate many of the defects in today’s processes and to increase process yield. With the increasing need for high-quality electronic products, test is used at each new physical assembly level, such as board and system assembly, for debugging, diagnosing and repairing the sub-assemblies in their new environment. Similarly, increasing reliability, availability and serviceability requirements lead users of high-end products to perform periodic tests in the field throughout the full life cycle. To allow advancements in each of the above scaling trends, fundamental changes are expected to emerge in the different disciplines of Integrated Circuit (IC) realization, such as IC design, packaging and silicon process. These changes have a direct impact on test methods, tools and equipment: conventional test equipment and methodologies will be inadequate to assure high quality levels. On-chip specialized blocks dedicated to test, usually referred to as Infrastructure IP (Intellectual Property), need to be developed and included in new complex designs to assure that new chips can be adequately tested, diagnosed, measured, debugged and even, in some cases, repaired. In this thesis, some of the scaling trends in designing new complex SoCs are analyzed one at a time, observing their implications on test and identifying the key hurdles and challenges to be addressed. The remainder of the thesis presents possible solutions. It is not sufficient to address just one of the challenges; all must be met at the same time to fulfill the market requirements.

    Test and Diagnosis of Integrated Circuits

    The ever-increasing growth of the semiconductor market results in an increasing complexity of digital circuits. Smaller, faster, cheaper and lower-power devices are the main challenges in the semiconductor industry. The reduction of transistor size and the latest packaging technologies (e.g., System-on-a-Chip, System-in-Package, Through-Silicon-Via 3D Integrated Circuits) allow the semiconductor industry to meet these challenges. Although producing such advanced circuits can benefit users, the manufacturing process is becoming finer and denser, making chips more prone to defects. The work presented in this HDR manuscript addresses the challenges of test and diagnosis of integrated circuits. It covers power-aware test, test of low-power devices, and fault diagnosis of digital circuits.

    Recent Trends and Perspectives on Defect-Oriented Testing

    Electronics employed in modern safety-critical systems require rigorous qualification during the manufacturing process and in the field, to prevent fault effects from manifesting themselves as critical failures during mission operations. Traditional fault models are no longer sufficient to guarantee the required quality levels for chips used in mission-critical applications. The research community and industry have been investigating new test approaches, such as device-aware test, cell-aware test, path-delay test, and even test methodologies based on the analysis of manufacturing data, to move quality targets from defective parts per million (DPPM) to defective parts per billion (DPPB). This special session presents four contributions, from academic researchers and industry professionals, aimed at enabling better chip quality. We present results on various activities towards this objective, including device-aware test, software-based self-test, and memory test.

    Voltage stacking for near/sub-threshold operation


    Development of a bridge fault extractor tool

    Bridge fault extractors are tools that analyze chip layouts and produce a realistic list of bridging faults within each chip. FedEx, previously developed at Texas A&M University, extracts all two-node intralayer bridges of any given chip layout and optionally extracts all two-node interlayer bridges. The goal of this thesis was to further develop this tool. The primary goal was to speed it up so that it could handle large industrial designs in a reasonable amount of time. A second goal was to develop a graphical user interface (GUI) for the tool, which aids in visualizing the bridge faults across the chip more effectively. The final aim of this thesis was to analyze the FedEx output to understand the nature of the defects, such as the variation of critical area (the area in which the presence of a defect can cause a fault) as a function of layer and defect size.
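
    As a rough illustration of the critical-area notion mentioned above (not taken from the thesis), consider two parallel wires of length L separated by spacing s: a circular defect of diameter x can only bridge them if x > s, and the strip of defect-centre positions that causes the short has width (x - s). A minimal Python sketch under this simple parallel-wire model, with hypothetical dimensions:

        def bridge_critical_area(defect_diameter, wire_length, spacing):
            """Critical area for a two-node bridge between two parallel wires.

            A circular defect of the given diameter shorts the wires only if it is
            wider than the spacing between them; the strip of centre positions that
            causes the short has width (defect_diameter - spacing) and length
            wire_length. All dimensions are in the same unit (e.g. microns).
            """
            if defect_diameter <= spacing:
                return 0.0
            return (defect_diameter - spacing) * wire_length

        # Hypothetical example: 100 um long wires spaced 0.2 um apart, growing defect sizes.
        for d in (0.1, 0.3, 0.5, 1.0):
            area = bridge_critical_area(d, wire_length=100.0, spacing=0.2)
            print(f"defect {d:.1f} um -> critical area {area:.1f} um^2")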

    Variation Analysis, Fault Modeling and Yield Improvement of Emerging Spintronic Memories


    DART: Dependable VLSI Test Architecture and Its Implementation

    Although many electronic safety-related systems require very high reliability, it is becoming harder and harder to achieve because of delay-related failures caused by decreased noise margins. This paper describes a technology named DART and its implementation. DART repeatedly measures the maximum delay of a circuit and the amount of degradation in the field and thereby confirms the remaining delay margin of the circuit. A system employing DART is informed of a significant reduction of delay margin in advance of a failure and is able to repair it at an appropriate time. DART also provides a technique to improve test coverage using a rotating test, and a technique to account for the test environment, such as temperature or voltage, using novel ring-oscillator-based monitors. The authors applied the proposed technology to an industrial design and confirmed its effectiveness and availability with reasonable resources. 2012 IEEE International Test Conference, 5-8 November 2012, Anaheim, CA, USA
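
    A minimal sketch, not DART's actual implementation, of the kind of in-field delay-margin check described above: periodic delay measurements (e.g. from a ring-oscillator monitor) are extrapolated and compared against the clock period so that maintenance can be flagged before the margin is exhausted. All names, values and the linear-degradation assumption are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class DelaySample:
            time_hours: float    # elapsed field-operation time
            max_delay_ns: float  # measured maximum path delay

        def margin_alarm(samples, clock_period_ns, guard_band_ns, horizon_hours):
            """Return True if the delay margin is predicted to fall below the guard
            band within the given horizon, using a simple linear degradation fit."""
            if len(samples) < 2:
                return False
            first, last = samples[0], samples[-1]
            # Degradation rate from the first and last measurements (a deliberate simplification).
            rate = (last.max_delay_ns - first.max_delay_ns) / (last.time_hours - first.time_hours)
            predicted = last.max_delay_ns + rate * horizon_hours
            return (clock_period_ns - predicted) < guard_band_ns

        # Hypothetical use: 10 ns clock, require 1 ns of margin over the next 1000 hours.
        history = [DelaySample(0, 8.2), DelaySample(500, 8.35), DelaySample(1000, 8.5)]
        print(margin_alarm(history, clock_period_ns=10.0, guard_band_ns=1.0, horizon_hours=1000))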

    L2C2: Last-level compressed-contents non-volatile cache and a procedure to forecast performance and lifetime

    Several emerging non-volatile (NV) memory technologies are rising as interesting alternatives for building the Last-Level Cache (LLC). Their advantages, compared to SRAM memory, are higher density and lower static power, but write operations wear out the bitcells to the point of eventually losing their storage capacity. In this context, this paper presents a novel LLC organization designed to extend the lifetime of the NV data array, together with a procedure to forecast in detail the capacity and performance of such an NV-LLC over its lifetime. From a methodological point of view, although different approaches are used in the literature to analyze the degradation of an NV-LLC, none of them allows its temporal evolution to be studied in detail. In this sense, this work proposes a forecasting procedure that combines detailed simulation and prediction, allowing an accurate analysis of the impact of different cache control policies and mechanisms (replacement, wear-leveling, compression, etc.) on the temporal evolution of the indices of interest, such as the effective capacity of the NV-LLC or the system IPC. We also introduce L2C2, an LLC design intended for implementation in NV memory technology that combines fault tolerance, compression, and internal write wear-leveling for the first time. Compression is not used to store more blocks and increase the hit rate, but to reduce the write rate and increase the lifetime during which the cache sustains near-peak performance. In addition, to support byte loss without a performance drop, L2C2 inherently allows N redundant bytes to be added to each cache entry. Thus, L2C2+N, the endurance-scaled version of L2C2, allows the cost of redundant capacity to be balanced against the benefit of longer lifetime. As a use case, we have implemented the L2C2 cache with STT-RAM technology. It has affordable hardware overheads, compared to a baseline NV-LLC without compression, in terms of area, latency and energy consumption, and it extends by a factor of 6 to 37 the time until 50% of the effective capacity has degraded, depending on the variability of the manufacturing process. Compared to L2C2, L2C2+6, which adds 6 bytes of redundant capacity per entry (a 9.1% storage overhead), extends by a factor of 1.4 to 4.3 the time during which the system maintains its initial peak performance.
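
    A minimal sketch, not the paper's implementation, of the redundant-byte idea described above: each cache entry carries a few spare bytes, and a block can still be stored at its full size as long as the number of worn-out byte positions in the entry does not exceed the number of spares. All names and sizes below are hypothetical.

        class NVCacheEntry:
            """Toy model of one NV-LLC entry with a few redundant (spare) bytes."""

            def __init__(self, data_bytes, spare_bytes):
                self.data_bytes = data_bytes     # nominal entry size, e.g. 64
                self.spare_bytes = spare_bytes   # redundant bytes, e.g. 6 (as in L2C2+6)
                self.failed_positions = set()    # byte offsets whose bitcells wore out

            def report_failure(self, offset):
                """Mark a byte position as worn out (detected, e.g., on a failed write)."""
                self.failed_positions.add(offset)

            def usable_bytes(self):
                """Bytes still available once failed positions are covered by spares."""
                uncovered = max(0, len(self.failed_positions) - self.spare_bytes)
                return self.data_bytes - uncovered

            def fits(self, compressed_size):
                """Can a (possibly compressed) block of this size still be stored here?"""
                return compressed_size <= self.usable_bytes()

        # Hypothetical use: a 64-byte entry with 6 spares absorbs several byte failures
        # while still accepting an uncompressed 64-byte block.
        entry = NVCacheEntry(data_bytes=64, spare_bytes=6)
        for off in range(5):
            entry.report_failure(off)
        print(entry.fits(64))  # True: the 5 failed bytes are covered by the spares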

    Practical Lightweight Security: Physical Unclonable Functions and the Internet of Things

    In this work, we examine whether Physical Unclonable Functions (PUFs) can act as lightweight security mechanisms for practical applications in the context of the Internet of Things (IoT). In order to do so, we first discuss what PUFs are, and note that memory-based PUFs seem to fit best within the framework of the IoT. Then, we consider a number of relevant memory-based PUF designs and their properties, and evaluate their ability to provide security in nominal and adverse conditions. Finally, we present and assess a number of practical PUF-based security protocols for IoT devices and networks, in order to confirm that memory-based PUFs can indeed constitute adequate security mechanisms for the IoT, in a practical and lightweight fashion. More specifically, we first consider what may constitute a PUF, and we redefine PUFs as inanimate physical objects whose characteristics can be exploited in order to obtain a behaviour similar to a highly distinguishable (i.e., “(quite) unique”) mathematical function. We note that PUFs share many characteristics with biometrics, the main difference being that PUFs are based on the characteristics of inanimate objects, while biometrics are based on the characteristics of humans and other living creatures. We also note that it cannot really be proven that PUFs are unique per instance, but they should be considered to be so, insofar as (human) biometrics are also considered to be unique per instance. We then proceed to discuss the role of PUFs as security mechanisms for the IoT, and we determine that memory-based PUFs are particularly suited for this function. We observe that the IoT nowadays consists of heterogeneous devices connected over diverse networks, including both high-end and resource-constrained devices. Therefore, it is essential that a security solution for the IoT is not only effective, but also highly scalable, flexible, lightweight, and cost-efficient, in order to be considered practical. To this end, we note that PUFs have been proposed as security mechanisms for the IoT in the related work, but the practicality of the relevant security mechanisms has not been sufficiently studied. We therefore examine a number of memory-based PUFs that are implemented using Commercial Off-The-Shelf (COTS) components, and assess their potential to serve as acceptable security mechanisms in the context of the IoT, not only in terms of effectiveness and cost, but also under both nominal and adverse conditions, such as ambient temperature and supply voltage variations, as well as in the presence of (ionising) radiation. In this way, we can determine whether memory-based PUFs are truly suitable for the various application areas of the IoT, which may even involve particularly adverse environments, e.g., IoT applications involving space modules and operations. Finally, we also explore the potential of memory-based PUFs to serve as adequate security mechanisms for the IoT in practice, by presenting and analysing a number of cryptographic protocols based on these PUFs. In particular, we study how memory-based PUFs can be used for key generation, as well as device identification and authentication, their role as security mechanisms for current and next-generation IoT devices and networks, and their potential for applications in the space segment of the IoT and in other adverse environments. Additionally, this work discusses how memory-based PUFs can be utilised for the implementation of lightweight reconfigurable PUFs that allow for advanced security applications. In this way, we are able to confirm that memory-based PUFs can indeed provide flexible, scalable, and efficient security solutions for the IoT, in a practical, lightweight, and inexpensive manner.
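
    As a rough illustration, not a protocol from this work, of how a memory-based PUF can support device identification: the sketch below enrolls an SRAM power-up pattern as a reference fingerprint and later accepts a device if a fresh reading lies within a Hamming-distance threshold, tolerating the noisy bits a real SRAM PUF exhibits. The names and the threshold are hypothetical; a deployed scheme would add a fuzzy extractor or error-correcting code.

        import os

        def hamming_distance(a: bytes, b: bytes) -> int:
            """Number of differing bits between two equal-length byte strings."""
            return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

        def enroll(powerup_reading: bytes) -> bytes:
            """Store the device's reference fingerprint (its SRAM power-up pattern)."""
            return powerup_reading

        def authenticate(reference: bytes, fresh_reading: bytes, max_error_rate: float = 0.10) -> bool:
            """Accept the device if the fresh power-up pattern is close enough to the
            enrolled one; memory-based PUF responses are noisy, so exact equality is
            not required."""
            threshold = int(len(reference) * 8 * max_error_rate)
            return hamming_distance(reference, fresh_reading) <= threshold

        # Hypothetical demo with simulated readings (a real system reads uninitialised SRAM).
        reference = enroll(os.urandom(32))               # enrolled power-up pattern, 256 bits
        noisy = bytearray(reference)
        noisy[0] ^= 0x01                                 # flip one bit to mimic PUF noise
        print(authenticate(reference, bytes(noisy)))     # True
        print(authenticate(reference, os.urandom(32)))   # almost certainly False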

    Characterization of Interconnection Delays in FPGAs Due to Single Event Upsets and Mitigation

    The relentless demand for electronic components with ever-shrinking feature sizes has raised new challenges over the years. Among them, more advanced memory and microprocessor semiconductors are being used in avionic systems, and these devices exhibit substantial susceptibility to cosmic radiation. One of the main implications of cosmic rays, first observed in orbiting satellites, is the single-event effect (SEE). Atmospheric radiation raises several concerns regarding the safety and reliability of avionics equipment, particularly for systems based on field-programmable gate arrays (FPGAs). SRAM-based FPGAs are an attractive solution for implementing complex avionic systems, but radiation experiments have shown that they are very susceptible to a particular type of SEE, the single-event upset (SEU). An SEU is the change of state of a bistable element (i.e., a bit-flip) caused by an energetic ion, proton or neutron; the effect is non-destructive and can be corrected by rewriting the affected part of the configuration SRAM. Potential delay changes (DCs) caused by SEUs in the routing configuration memory have recently been confirmed, yet this sensitivity of SRAM-based FPGAs had not previously been addressed in the literature. SEU-induced DCs can affect the functionality of logic circuits by disturbing the timing of critical paths. The objective of this thesis is the characterization of DCs in SRAM-based FPGAs due to transient ionizing radiation. The experimentally observed DCs are presented and circuit-level models of those DCs are proposed. The circuits involved in delay propagation are reverse-engineered by precisely modeling the internal blocks of the FPGA and running simulations; the results reveal the root causes of the DCs and are in good agreement with the experimental delay measurements. The proposed circuit-level models are, to the best of our knowledge, the first work to confirm and explain SEU-induced combinational delay changes in FPGAs. The second part of this thesis investigates the design of a delay monitor circuit for DC detection. This monitor detects delay changes on critical sections of the circuit, experimentally revealed cumulative DCs on FPGA interconnects, and helps prevent SEU-induced timing failures without resorting to triple modular redundancy (TMR), thereby minimizing system downtime. A method is also proposed to decrease the clock frequency after a DC is detected, without interrupting the running process.
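
    As a purely illustrative sketch, not the thesis design, the logic below mimics the mitigation policy described above: whenever the monitored interconnect delay grows past the available timing slack, the clock frequency is stepped down so operation continues without stopping the application. All values and names are hypothetical.

        def required_frequency_mhz(monitored_delay_ns, guard_band_ns, max_freq_mhz):
            """Highest clock frequency (MHz) that still leaves the guard band of slack
            for the monitored path, capped at the nominal maximum frequency."""
            min_period_ns = monitored_delay_ns + guard_band_ns
            return min(max_freq_mhz, 1000.0 / min_period_ns)

        # Hypothetical scenario: nominal 100 MHz system, 1 ns guard band, and a path
        # whose delay grows as SEUs accumulate in the routing configuration memory.
        clock_mhz = 100.0
        for measured_delay_ns in (8.0, 8.6, 9.4, 10.5):   # cumulative delay changes
            new_clock = required_frequency_mhz(measured_delay_ns, 1.0, 100.0)
            if new_clock < clock_mhz:
                print(f"delay {measured_delay_ns} ns -> scale clock {clock_mhz:.1f} -> {new_clock:.1f} MHz")
                clock_mhz = new_clock
            else:
                print(f"delay {measured_delay_ns} ns -> keep clock at {clock_mhz:.1f} MHz")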