1,051 research outputs found

    Restoring Reliability in Fault Tolerant Reconfigurable Systems

    Get PDF
    The new generations of SRAM-based FPGA devices, built on nanometer technology, are the preferred choice for the implementation of reconfigurable computing platforms. However, smaller technological scales increase their vulnerability to manufacturing imperfections and hence to the occurrence of electromigration. Moreover, the large internal RAM (for configuration purposes or as embedded memory blocks) makes them more prone to soft errors. The incorporation of self-reconfiguration capabilities in recent FPGAs, allied to the use of soft and hard microprocessor cores, facilitates the offset of these vulnerabilities by enabling the development of self-restoring fault tolerant reconfigurable systems. In the methodology presented in this paper, the embedded microprocessor is also responsible for the implementation of online self-test-and-repair strategies, based on modular redundancy and on self-reconfiguration. The detection of faults, caused by soft or hard errors, may be followed by repairing actions, depending on the fault type. This approach leads to smoother system degradation, extending its lifetime and improving its reliability.
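
    As an informal illustration of how such a self-test-and-repair loop can classify and react to faults, the sketch below combines majority voting over redundant module outputs with a simple repair decision. It is only a conceptual stand-in for the paper's methodology: the voting granularity, the repair actions (configuration scrubbing versus module relocation), and all names are assumptions made for the example.

```python
# Conceptual sketch (not the paper's implementation) of modular redundancy with
# a repair decision: a first mismatch is treated as a soft error and repaired by
# rewriting (scrubbing) the replica's configuration; repeated mismatches are
# treated as a hard fault and the module is relocated to spare fabric.
from collections import Counter

def vote(outputs):
    """Majority vote over replicated module outputs; returns (value, dissenters)."""
    value, _ = Counter(outputs).most_common(1)[0]
    dissenters = [i for i, out in enumerate(outputs) if out != value]
    return value, dissenters

def self_test_and_repair(outputs, fault_history):
    value, dissenters = vote(outputs)
    actions = {}
    for replica in dissenters:
        fault_history[replica] = fault_history.get(replica, 0) + 1
        # Assumption: one mismatch -> soft error (scrub); repeated -> hard fault (relocate).
        actions[replica] = "scrub" if fault_history[replica] == 1 else "relocate"
    return value, actions

history = {}
print(self_test_and_repair([0xA5, 0xA5, 0x25], history))   # replica 2 disagrees once
print(self_test_and_repair([0xA5, 0xA5, 0x25], history))   # repeated mismatch -> relocate
```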

    AI/ML Algorithms and Applications in VLSI Design and Technology

    Full text link
    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual; thus, time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort for understanding and processing the data within and across different abstraction levels via automated learning algorithms. It, in turn, improves the IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced in the past towards VLSI design and manufacturing. Moreover, we discuss the scope of AI/ML applications in the future at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations
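
    As a small, generic illustration of the kind of ML usage surveyed here (not an example from the paper), the sketch below trains a surrogate regression model that predicts a post-layout quantity, such as path delay, from pre-layout features. The feature set, the synthetic data, and the choice of a random forest are all assumptions made for the example.

```python
# Hypothetical surrogate model: predict post-layout path delay from pre-layout
# features (logic depth, fanout, wirelength estimate, congestion score).
# All data here is synthetic; in practice the labels would come from sign-off STA.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((1000, 4))                       # invented per-path features
y = (2.0 * X[:, 0] + 1.5 * X[:, 1] * X[:, 2]    # nonlinear synthetic "delay"
     + 0.3 * X[:, 3] ** 2 + 0.05 * rng.standard_normal(1000))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:800], y[:800])                     # train on 800 paths
print("held-out R^2:", round(model.score(X[800:], y[800:]), 3))
```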

    Nanoelectronic Design Based on a CNT Nano-Architecture

    Get PDF

    Fault modelling and accelerated simulation of integrated circuits manufacturing defects under process variation

    No full text
    As the silicon manufacturing process scales to and beyond the 65-nm node, process variation can no longer be ignored. The impact of process variation on integrated circuit performance and power has received significant research attention. Variation-aware test, on the other hand, is a relatively new research area that is currently receiving attention worldwide. Research has shown that testing without considering process variation may lead to a loss of test quality. Fault modelling and simulation serve as a backbone of manufacturing test. This thesis is concerned with developing efficient fault modelling techniques and simulation methodologies that take into account the effect of process variation on manufacturing defects, with particular emphasis on resistive bridges and resistive opens.

    The first contribution of this thesis addresses the long computation time required to generate logic faults of resistive bridges under process variation, by developing a fast and accurate technique to model the logic fault behaviour of resistive bridges. The new technique employs two efficient voltage calculation algorithms to calculate the logic threshold voltage of driven gates and the critical resistance of a fault site, enabling the computation of bridge logic faults without using SPICE. Simulation results show that the technique is fast (on average 53 times faster) and accurate (worst-case error of 2.64%) when compared with HSPICE.

    The second contribution analyses the complexity of delay fault simulation of resistive bridges in order to reduce its computation time under process variation. An accelerated delay fault simulation methodology for resistive bridges is developed, employing a three-step strategy to speed up the calculation of the transient gate output voltage, which is needed to compute delay faults accurately. Simulation results show that the methodology is on average 17.4 times faster, with 5.2% error in accuracy, when compared with HSPICE.

    The final contribution presents an accelerated simulation methodology for resistive opens to address the long simulation time of delay fault simulation under process variation. The methodology uses two efficient algorithms to accelerate the computation of the transient gate output voltage and the timing-critical resistance of an open fault site. Simulation results show that the methodology is on average up to 52 times faster than HSPICE, with 4.2% error in accuracy.
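
    The voltage-divider view behind bridge fault modelling can be made concrete with a deliberately simplified sketch: one bridged net is pulled to VDD through an effective driver resistance, the other to ground, and the bridge resistance sits between them; whether a driven gate misreads the node depends on its logic threshold voltage, which yields a critical resistance. This is only a toy linear model to illustrate the concept; the thesis's algorithms, which account for nonlinear transistor behaviour and process variation, are not reproduced here.

```python
# Toy linearized bridge model (illustrative only): find the smallest bridge
# resistance at which the net that should be logic 1 is still read correctly
# by a driven gate with logic threshold v_th.
VDD = 1.0

def bridged_node_voltages(r_up, r_down, r_b):
    """Voltages on the high-driven and low-driven nets of a bridged pair."""
    i = VDD / (r_up + r_b + r_down)     # current through the resistive divider
    return VDD - i * r_up, i * r_down   # (net driven to 1, net driven to 0)

def critical_resistance(r_up, r_down, v_th, step=1.0, r_max=20000.0):
    """Linear search for the critical bridge resistance (coarse, for clarity)."""
    r_b = 0.0
    while r_b <= r_max:
        v_high, _ = bridged_node_voltages(r_up, r_down, r_b)
        if v_high >= v_th:              # below this resistance, a logic fault appears
            return r_b
        r_b += step
    return None

print(critical_resistance(r_up=2000.0, r_down=1000.0, v_th=0.5))  # -> 1000.0
```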

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    Get PDF
    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. Due to the continuous shrinking of complementary metal-oxide-semiconductor (CMOS) transistor feature sizes, power density grows geometrically with technology scaling. Additionally, power dissipation inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than its power dissipation during the normal functional phase of operation. As a consequence, the currents that flow in the power grid during the testing phase are much higher than what the power grid is designed for (the functional phase of operation). As a result, the supply grid experiences unacceptable IR-drop during at-speed testing, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during the functional phase of operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable.

    In the nanometer regime, process parameter variation has become a major problem. Because of the resulting variation in signal delays, it is important to perform at-speed testing even for stuck faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, previously addressed for at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent for at-speed testing of stuck faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck faults is concerned, while some techniques have been proposed for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), there are no such techniques for sequential circuits. This thesis addresses this open problem.

    We propose algorithms for minimizing peak switching activity during at-speed testing of stuck faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to the BTSP is novel and proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don’t cares in the test set increases. As a result, test vector ordering with an arbitrary filling of don’t-care bits is insufficient to produce an effective reduction in switching activity during testing of large circuits. Since don’t cares dominate the test sets for larger circuits, don’t-care filling plays a crucial role in reducing switching activity during testing. Taking this into consideration, we propose an algorithm, XStat, which performs test vector ordering while preserving don’t-care bits in the test vectors; the don’t cares are then filled in an intelligent fashion to minimize input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly minimizes peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses dynamic programming to calculate a lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling don’t cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching activity and is the best known in the literature for this problem. The proof of optimality of DP-fill in minimizing peak input-switching activity is also provided in this thesis.
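
    To make the cost model concrete: under a given vector order, the peak input switching activity is the largest Hamming distance between consecutively applied vectors, and a don't-care bit filled with the corresponding bit of the previous vector contributes no transition. The sketch below implements only this basic idea as a greedy illustration; it is not the XStat or DP-fill algorithm, and the example vectors are invented.

```python
# Greedy illustration of don't-care filling against the previous vector and of
# measuring peak input switching activity as the maximum Hamming distance
# between consecutive vectors (not the thesis's XStat/DP-fill algorithms).

def fill_dont_cares(ordered_vectors):
    """Fill 'X' bits with the previously applied vector's bit to avoid transitions."""
    filled, prev = [], None
    for vec in ordered_vectors:
        bits = [(prev[i] if prev is not None else '0') if b == 'X' else b
                for i, b in enumerate(vec)]
        prev = ''.join(bits)
        filled.append(prev)
    return filled

def peak_switching(filled):
    """Peak Hamming distance between consecutive filled vectors."""
    return max(sum(a != b for a, b in zip(v, w)) for v, w in zip(filled, filled[1:]))

vectors = ["1X0X", "10X1", "X011"]
filled = fill_dont_cares(vectors)
print(filled, peak_switching(filled))   # ['1000', '1001', '1011'] 1
```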

    Studying the effects of intermittent faults on a microcontroller

    Full text link
    As CMOS technology scales to the nanometer range, designers have to deal with a growing number and variety of fault types. In particular, intermittent faults are expected to be an important issue in modern VLSI circuits. The complexity of manufacturing processes, producing residues and parameter variations, together with special aging mechanisms, may increase the presence of such faults. This work presents a case study of the impact of intermittent faults on the behavior of a commercial microcontroller. In order to carry out an exhaustive reliability assessment, the methodology used relies on a VHDL-based fault injection technique. In this way, a set of intermittent fault models at the logic and register transfer abstraction levels has been generated and injected into the VHDL model of the system. From the simulation traces, the occurrences of failures and latent errors have been logged. The impact of intermittent faults has also been compared with that obtained when injecting transient and permanent faults. Finally, some injection experiments have been reproduced in a RISC microprocessor and compared with those of the microcontroller.
    This work has been funded by the Spanish Government under Research Project TIN2009-13825.
    Gil Tomás, D. A.; Gracia-Morán, J.; Baraza Calvo, J. C.; Saiz-Adalid, L.; Gil Vicente, P. J. (2012). Studying the effects of intermittent faults on a microcontroller. Microelectronics Reliability, 52(11): 2837-2846. https://doi.org/10.1016/j.microrel.2012.06.004
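
    For readers unfamiliar with intermittent fault models, the sketch below shows, in Python rather than VHDL, the kind of behaviour such a model captures: a bit is forced to a value during short, randomly spaced bursts and behaves correctly otherwise. The burst parameters and the stuck-at style of corruption are assumptions for the example, not the specific models used in the paper.

```python
# Illustrative intermittent "stuck-at" injector: during randomly started bursts
# of a few cycles, one bit of each traced word is forced to a value; outside
# bursts the trace is untouched. Parameters are invented for the example.
import random

random.seed(1)

def inject_intermittent_stuck_at(trace, bit, value, burst_len=3, p_start=0.1):
    corrupted, remaining = [], 0
    for word in trace:
        if remaining == 0 and random.random() < p_start:
            remaining = burst_len                             # a fault burst begins
        if remaining > 0:
            word = (word & ~(1 << bit)) | (value << bit)      # force the chosen bit
            remaining -= 1
        corrupted.append(word)
    return corrupted

golden = list(range(16))
faulty = inject_intermittent_stuck_at(golden, bit=0, value=1)
print([i for i, (g, f) in enumerate(zip(golden, faulty)) if g != f])  # affected cycles
```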

    Interfacing single photons and single quantum dots with photonic nanostructures

    Full text link
    Photonic nanostructures provide means of tailoring the interaction between light and matter, and the past decade has witnessed tremendous experimental and theoretical progress in this subject. In particular, the combination with semiconductor quantum dots has proven successful. This manuscript reviews quantum optics with excitons in single quantum dots embedded in photonic nanostructures. The ability to engineer the light-matter interaction strength in integrated photonic nanostructures enables a range of fundamental quantum-electrodynamics experiments on, e.g., spontaneous-emission control, modified Lamb shifts, and enhanced dipole-dipole interaction. Furthermore, highly efficient single-photon sources and giant photon nonlinearities may be implemented, with immediate applications for photonic quantum-information processing. The review summarizes the general theoretical framework of photon emission, including the role of dephasing processes, and applies it to photonic nanostructures of current interest, such as photonic-crystal cavities and waveguides, dielectric nanowires, and plasmonic waveguides. The introduced concepts are generally applicable in quantum nanophotonics and apply to a large extent also to other quantum emitters, such as molecules, nitrogen-vacancy centers, or atoms. Finally, the progress and future prospects of applications in quantum-information processing are considered. Comment: Updated version resubmitted to Reviews of Modern Physics.
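
    A textbook expression that quantifies the "light-matter interaction strength" engineered by such cavities (not specific to this review, and valid for an emitter on resonance at the cavity field maximum) is the Purcell factor, the ratio of the cavity-enhanced to the free-space spontaneous emission rate:

```latex
% Purcell enhancement of spontaneous emission for a resonant emitter at the
% field antinode: Q is the cavity quality factor, V the mode volume, and
% lambda/n the emission wavelength in the medium.
F_P \;=\; \frac{\Gamma_{\mathrm{cav}}}{\Gamma_{0}}
      \;=\; \frac{3}{4\pi^{2}}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V}
```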

    Quantitative optical microscopy of colloids : The legacy of Jean Perrin

    Get PDF
    When the discontinuous structure of matter was yet an intriguing hypothesis, Jean Perrin performed a set of elegant and pioneering experiments that marked the birth of what today we consider quantitative optical microscopy. Picking up the baton from Perrin, today microscopists face incredible challenges, aiming to extract quantitative information from the increasingly content-rich and complex images made available by modern microscopy techniques. Here, I provide an overview of these challenges and describe the solutions adopted to succeed in this complex task when investigating colloidal systems or systems in which colloidal particles are embedded as microrheological probes
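
    As a small illustration of the quantitative analysis this tradition began with, the sketch below computes the time-averaged mean squared displacement (MSD) of a tracked colloidal particle and extracts a diffusion coefficient from MSD(t) = 4 D t for two-dimensional tracking, the same quantity Perrin's Brownian-motion measurements relied on. The synthetic trajectory and all parameter values are invented for the example.

```python
# Illustrative MSD analysis on a synthetic 2D Brownian trajectory: the slope of
# MSD versus lag time gives 4*D in two dimensions. Values are invented.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, D_true = 0.05, 5000, 0.4                  # s, frames, um^2/s
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, 2))
traj = np.cumsum(steps, axis=0)                        # tracked positions (um)

def msd(traj, max_lag):
    """Time-averaged MSD for lag times 1..max_lag frames."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

lags = np.arange(1, 51) * dt
D_est = np.polyfit(lags, msd(traj, 50), 1)[0] / 4.0    # slope / 4 in 2D
print(f"estimated D = {D_est:.3f} um^2/s (true value {D_true})")
```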

    Techniques for Improving Security and Trustworthiness of Integrated Circuits

    Get PDF
    The integrated circuit (IC) development process is becoming increasingly vulnerable to malicious activities because untrusted parties may be involved in the IC development flow. Four typical problems affect the security and trustworthiness of ICs used in military, financial, transportation, or other critical systems: (i) Malicious inclusions and alterations, known as hardware Trojans, can be inserted into a design by modifying it during GDSII development and fabrication. Hardware Trojans in ICs may cause malfunctions, lower the reliability of ICs, leak confidential information to adversaries, or even destroy the system under specifically designed conditions. (ii) The number of circuit-related counterfeiting incidents reported by component manufacturers has increased significantly over the past few years, with recycled ICs contributing the largest percentage of the total reported counterfeiting incidents. Since these recycled ICs have already been used in the field, their performance and reliability have been degraded by aging effects and the harsh recycling process. (iii) Reverse engineering (RE) is the process of extracting a circuit’s gate-level netlist and/or inferring its functionality. RE threatens a design because attackers can steal and pirate it (IP piracy), identify the device technology, or facilitate other hardware attacks. (iv) Traditional tools for uniquely identifying devices are vulnerable to non-invasive or invasive physical attacks. Securing the ID/key is of utmost importance, since leakage of even a single device ID/key could be exploited by an adversary to hack other devices or produce pirated devices.
    In this work, we have developed a series of design and test methodologies to deal with these four challenging issues and thus enhance the security, trustworthiness and reliability of ICs. The techniques proposed in this thesis include: a path delay fingerprinting technique for the detection of hardware Trojans, recycled ICs, and other types of counterfeit ICs, including remarked, overproduced, and cloned ICs with their unique identifiers; a Built-In Self-Authentication (BISA) technique to prevent hardware Trojan insertion by untrusted fabrication facilities; an efficient and secure split manufacturing technique via Obfuscated Built-In Self-Authentication (OBISA) to prevent reverse engineering by untrusted fabrication facilities; and a novel bit selection approach for obtaining the most reliable bits for an SRAM-based physical unclonable function (PUF) across environmental conditions and silicon aging effects.
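
    As one concrete example of the last item, the sketch below shows the basic idea behind reliable-bit selection for an SRAM PUF: power-up values are read many times (in practice across voltage, temperature, and aging corners), and only cells that never flip are kept for ID/key generation. The stability model and data are assumptions for illustration; the thesis's selection approach is not reproduced here.

```python
# Illustrative reliable-bit selection for an SRAM PUF: keep only cells whose
# power-up value is identical across all repeated readings. The per-cell bias
# model and the data are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_reads = 1024, 50

bias = rng.uniform(0.0, 1.0, n_cells)             # per-cell tendency to power up as 1
readings = rng.random((n_reads, n_cells)) < bias  # simulated power-up snapshots

ones_fraction = readings.mean(axis=0)
stable = (ones_fraction == 0.0) | (ones_fraction == 1.0)   # cells that never flipped
print(f"{int(stable.sum())} of {n_cells} cells selected as reliable PUF bits")
```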
