25 research outputs found

    Identification and Rejuvenation of NBTI-Critical Logic Paths in Nanoscale Circuits

    The Negative Bias Temperature Instability (NBTI) phenomenon is widely regarded as one of the main reliability concerns in nanoscale circuits. It increases the threshold voltage of pMOS transistors and thus slows down signal propagation along logic paths between flip-flops. NBTI may cause intermittent faults and, ultimately, permanent functional failure of the circuit. In this paper, we propose an innovative NBTI mitigation approach that rejuvenates the nanoscale logic along NBTI-critical paths. The method is based on hierarchical identification of NBTI-critical paths and the generation of rejuvenation stimuli using an Evolutionary Algorithm. A new, fast, yet accurate model for computing NBTI-induced delays at the gate level is developed, based on intensive SPICE simulations of individual gates. The generated rejuvenation stimuli drive into the recovery phase those pMOS transistors that are most critical to the NBTI-induced path delay. The rejuvenation procedure is intended to be applied to the circuit periodically, as an execution overhead. Experimental results on a set of designs demonstrate a reduction of NBTI-induced delays by up to a factor of two, with an execution overhead of 0.1% or less. The proposed approach is aimed at extending the reliable lifetime of nanoelectronics.
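
    As a rough illustration of the kind of gate-level model involved, the sketch below combines two textbook approximations: a power-law NBTI threshold-voltage shift and an alpha-power-law gate delay. All constants are illustrative placeholders, not the SPICE-calibrated values of the paper's model.

```python
# Hedged sketch: gate-level NBTI delay estimation from the common
# power-law ageing model (delta_Vth ~ A * t^n) and the alpha-power
# delay model. All constants are illustrative, not the paper's
# SPICE-calibrated values.
import math

VDD = 1.0          # supply voltage (V)
VTH0 = 0.3         # fresh pMOS threshold voltage (V)
ALPHA = 1.3        # alpha-power-law exponent
A_NBTI = 0.004     # technology-dependent prefactor (illustrative)
N_NBTI = 0.16      # time exponent, ~1/6 in reaction-diffusion theory

def delta_vth(stress_time_s: float, duty_cycle: float) -> float:
    """NBTI threshold-voltage shift after a given stress time.

    duty_cycle is the fraction of time the pMOS input keeps the
    transistor under negative bias stress; recovery during the
    remaining time is crudely folded into the effective stress time.
    """
    return A_NBTI * (duty_cycle * stress_time_s) ** N_NBTI

def gate_delay(vth: float, d0: float = 10e-12) -> float:
    """Alpha-power-law gate delay: d ~ Vdd / (Vdd - Vth)^alpha."""
    return d0 * VDD / (VDD - vth) ** ALPHA

# Path delay after 3 years, comparing a heavily stressed gate
# (duty cycle 0.9) with one periodically driven into recovery
# (duty cycle 0.5), as rejuvenation stimuli aim to do.
t = 3 * 365 * 24 * 3600
fresh = gate_delay(VTH0)
for duty in (0.9, 0.5):
    aged = gate_delay(VTH0 + delta_vth(t, duty))
    print(f"duty {duty}: delay degradation {100 * (aged / fresh - 1):.1f}%")
```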

    inSense: A Variation and Fault Tolerant Architecture for Nanoscale Devices

    Transistor technology scaling has been the driving force in improving the size, speed, and power consumption of digital systems. As devices approach atomic size, however, their reliability and performance are increasingly compromised due to reduced noise margins, difficulties in fabrication, and emergent nano-scale phenomena. Scaled CMOS devices, in particular, suffer from process variations such as random dopant fluctuation (RDF) and line edge roughness (LER), transistor degradation mechanisms such as negative-bias temperature instability (NBTI) and hot-carrier injection (HCI), and increased sensitivity to single event upsets (SEUs). Consequently, future devices may exhibit reduced performance, diminished lifetimes, and poor reliability. This research proposes a variation and fault tolerant architecture, the inSense architecture, as a circuit-level solution to the problems induced by the aforementioned phenomena. The inSense architecture entails augmenting circuits with introspective and sensory capabilities which are able to dynamically detect and compensate for process variations, transistor degradation, and soft errors. This approach creates "smart" circuits able to function despite the use of unreliable devices and is applicable to current CMOS technology as well as next-generation devices using new materials and structures. Furthermore, this work presents an automated prototype implementation of the inSense architecture targeted to CMOS devices, which is evaluated via implementation in ISCAS '85 benchmark circuits. The automated prototype implementation is functionally verified and characterized: it is found that error detection capability (with error windows from approximately 30-400 ps) can be added for less than 2% area overhead for circuits of non-trivial complexity. Single event transient (SET) detection capability (configurable with target set-points) is found to be functional, although it generally tracks the standard DMR implementation with respect to overheads.
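
    The abstract does not detail the inSense sensor design; purely as an illustration of window-based timing-error detection in general (in the spirit of shadow-latch schemes such as Razor), the following sketch flags data transitions that land inside a configurable error window after the clock edge. All numbers are hypothetical.

```python
# Hedged sketch of window-based timing-error detection: sample a
# signal at the clock edge and again after an error window; a
# mismatch means the signal was still transitioning. This is a
# generic shadow-sampling idea, not the inSense circuit itself.
import random

def detect_late_transition(arrival_ps: float, clock_ps: float,
                           window_ps: float) -> bool:
    """True if the data transition lands inside the error window
    (clock_ps, clock_ps + window_ps], i.e. the main sample and the
    delayed shadow sample would disagree."""
    return clock_ps < arrival_ps <= clock_ps + window_ps

random.seed(1)
CLOCK_PS, WINDOW_PS = 500.0, 200.0   # illustrative numbers
# Nominal path delay 450 ps with an ageing/variation spread of +-120 ps.
arrivals = [450 + random.uniform(-120, 120) for _ in range(10_000)]
flagged = sum(detect_late_transition(a, CLOCK_PS, WINDOW_PS)
              for a in arrivals)
print(f"late transitions flagged: {flagged / len(arrivals):.1%}")
```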

    Efficient Estimation of Reliability Metrics for Circuits in Deca-Nanometer Nodes

    In modern integrated circuit (IC) technologies, with the continued downscaling of device dimensions, various degradation modes constitute major reliability concerns. Bias Temperature Instability (BTI) is a representative example: it poses a significant reliability threat in Field-Effect Transistor (FET) technologies and has been known for more than 30 years. The first model that tried to explain this phenomenon, developed nearly 30 years ago, was based on Reaction-Diffusion (RD) theory. Recently, an atomistic model has been proposed that enables the modeling of BTI in modern technologies. Among existing tools for simulating BTI degradation, those based on the atomistic theory are computationally prohibitive when it comes to simulating complex circuits consisting of a large number of devices, while tools based on the RD model are unable to accurately capture the BTI-induced degradation, especially in devices with small dimensions. This thesis presents a novel simulation framework that is efficient yet highly accurate. A subset of an embedded Static Random Access Memory (SRAM) is used for verification purposes. The functional yield of the circuit over three years of operation is estimated, along with other reliability metrics such as defects per million (DPM), mean time to failure (MTTF), and failures in time (FIT rate). Finally, the interplay between these metrics is discussed and efficient computation methods are proposed for each one.
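
    For reference, the reliability metrics named above have standard definitions; the sketch below converts a failure fraction into DPM, FIT, and MTTF under an assumed exponential (constant-rate) failure model. The input failure fraction is a made-up placeholder, not a result from the thesis.

```python
# Hedged sketch of the reliability metrics named in the abstract,
# computed from a per-unit failure probability over three years.
# The 0.001 failure fraction is illustrative; the thesis derives
# such numbers from atomistic simulations of an SRAM path.
import math

HOURS_3Y = 3 * 365 * 24

def metrics_from_failure_fraction(p_fail_3y: float):
    """Translate a 3-year failure fraction into DPM, FIT, and MTTF,
    assuming an exponential (constant-hazard-rate) failure model."""
    # Hazard rate lambda from P(fail by t) = 1 - exp(-lambda * t)
    lam = -math.log(1.0 - p_fail_3y) / HOURS_3Y   # failures per hour
    dpm = 1e6 * p_fail_3y                         # defects per million
    fit = lam * 1e9                               # failures per 1e9 device-hours
    mttf_h = 1.0 / lam                            # mean time to failure (hours)
    return dpm, fit, mttf_h

dpm, fit, mttf = metrics_from_failure_fraction(0.001)
print(f"DPM: {dpm:.0f}, FIT: {fit:.1f}, MTTF: {mttf:.3e} hours")
```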

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and therefore time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Variability-Aware Circuit Performance Optimisation Through Digital Reconfiguration

    This thesis proposes optimisation methods for improving the performance of circuits implemented on a custom reconfigurable hardware platform with knowledge of intrinsic variations, through the use of digital reconfiguration. With the continuing trend of transistor shrinking, stochastic variations become first-order effects, posing a significant challenge for device reliability. Traditional device models tend to be too conservative, as the margins are greatly increased to account for these variations. Variation-aware optimisation methods are then required to reduce the performance spread caused by these substrate variations. The Programmable Analogue and Digital Array (PAnDA) is a reconfigurable hardware platform which combines the traditional architecture of a Field Programmable Gate Array (FPGA) with the concept of configurable transistor widths, and is used in this thesis as a platform on which variability-aware circuits can be implemented. A model of the PAnDA architecture is designed to allow for rapid prototyping of devices, making the study of the effects of intrinsic variability on circuit performance – which requires expensive statistical simulations – feasible. This is achieved by importing statistically-enhanced transistor performance data from RandomSPICE simulations into a model of the PAnDA architecture implemented in hardware. Digital reconfiguration is then used to explore the hardware resources available for performance optimisation. A bio-inspired optimisation algorithm is used to explore the large solution space more efficiently. Results from test circuits suggest that variation-aware optimisation can provide a significant reduction in the spread of the distribution of performance across various instances of circuits, as well as an increase in the performance of each. Even if transistor geometry flexibility is not available, as is the case in traditional architectures, it is still possible to make use of the substrate variations to reduce spread and increase performance by means of function relocation.
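
    As a minimal sketch of the kind of bio-inspired search described, the following toy genetic algorithm optimises a bitstring of configuration options against a stand-in fitness function; on the real platform, fitness would come from hardware measurements with RandomSPICE-derived variation data. Everything here (weights, rates, population sizes) is illustrative.

```python
# Hedged sketch of a bio-inspired (genetic) search over a bitstring
# of configuration options, e.g. transistor-width selections. The
# fitness function is a stand-in: each bit gets a fixed random
# contribution, mimicking device-to-device variation.
import random

random.seed(0)
N_BITS, POP, GENS = 16, 30, 40
WEIGHTS = [random.gauss(1.0, 0.3) for _ in range(N_BITS)]

def fitness(cfg):
    """Stand-in score: sum of contributions of the enabled options."""
    return sum(w for w, b in zip(WEIGHTS, cfg) if b)

def mutate(cfg, rate=0.05):
    """Flip each bit independently with the given probability."""
    return [b ^ (random.random() < rate) for b in cfg]

def crossover(a, b):
    """Single-point crossover of two parent configurations."""
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                      # keep the best half
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print("best configuration:", "".join(map(str, best)),
      f"fitness {fitness(best):.2f}")
```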

    Thermal Management for Dependable On-Chip Systems

    This thesis addresses the dependability issues in on-chip systems from a thermal perspective. This includes an explanation and analysis of models that show the relationship between dependability and temperature. Additionally, multiple novel methods for on-chip thermal management are introduced, aiming to optimize thermal properties. The methods are analysed through simulation and through infrared thermal camera measurements.
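
    One standard model linking temperature to dependability is Arrhenius lifetime scaling; the sketch below shows how an assumed activation energy translates junction temperature into expected lifetime. The constants are illustrative, not values from the thesis.

```python
# Hedged sketch of Arrhenius lifetime scaling, the textbook form of
# the temperature/dependability relationship. Activation energy and
# baseline MTTF are illustrative placeholders.
import math

K_B = 8.617e-5        # Boltzmann constant (eV/K)
EA = 0.7              # activation energy (eV), illustrative
MTTF_REF = 1e6        # MTTF at the reference temperature (hours)
T_REF = 273.15 + 55   # reference junction temperature (K)

def mttf_at(temp_c: float) -> float:
    """Arrhenius scaling: MTTF(T) = MTTF_ref * exp((Ea/k) * (1/T - 1/T_ref))."""
    t_k = 273.15 + temp_c
    return MTTF_REF * math.exp((EA / K_B) * (1.0 / t_k - 1.0 / T_REF))

for t in (55, 75, 95):
    print(f"{t:3d} C: MTTF ~ {mttf_at(t):.2e} hours")
```

    With these numbers, each 20 °C rise cuts the expected lifetime by roughly a factor of four, which is why thermal management directly buys dependability.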

    Simulation of charge-trapping in nano-scale MOSFETs in the presence of random-dopants-induced variability

    The growing variability of electrical characteristics is a major issue associated with the continuous downscaling of contemporary bulk MOSFETs. In addition, the operating conditions brought about by these same scaling trends have pushed MOSFET degradation mechanisms such as Bias Temperature Instability (BTI) to the forefront as a critical reliability threat. This thesis investigates the impact of this ageing phenomenon, in conjunction with device variability, on key MOSFET electrical parameters. A three-dimensional drift-diffusion approximation is adopted as the simulation approach in this work, with random dopant fluctuations—the dominant source of statistical variability—included in the simulations. The testbed device is a realistic 35 nm physical gate length n-channel conventional bulk MOSFET. 1000 microscopically different implementations of the transistor are simulated and subjected to charge-trapping at the oxide interface. The statistical simulations reveal relatively rare but very large threshold voltage shifts, with magnitudes over 3 times larger than predicted by the conventional theoretical approach. The physical origin of this effect is investigated in terms of the electrostatic influences of the random dopants and trapped charges on the channel electron concentration. Simulations with progressively increased trapped charge densities—emulating the characteristic condition of BTI degradation—result in further broadening of the threshold voltage distribution. Weak correlations of the order of 10⁻² are found between the pre-degradation threshold voltage and post-degradation threshold voltage shift distributions. The importance of accounting for random dopant fluctuations in the simulations is emphasised in order to obtain qualitative agreement between simulation results and published experimental measurements. Finally, the information gained from these device-level physical simulations is integrated into statistical compact models, making it available to circuit designers.
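
    To make the statistical analysis concrete, here is a hedged sketch that draws per-device fresh threshold voltages and heavy-tailed trap-induced shifts from stand-in distributions, then computes their Pearson correlation; with independent draws the correlation comes out small by construction, merely illustrating the kind of weak-correlation measurement reported. The distributions are not the thesis's drift-diffusion results.

```python
# Hedged sketch: per-device fresh Vth (random-dopant variability)
# and trap-induced Vth shifts from illustrative distributions,
# followed by the correlation measurement the thesis performs on
# its 1000-device ensemble.
import random, statistics

random.seed(42)
N = 1000                               # microscopically different devices
vth0 = [random.gauss(0.30, 0.03) for _ in range(N)]
# Trap-induced shifts: mostly small, with a heavy tail standing in
# for the "relatively rare but very large" shifts in the thesis.
dvth = [abs(random.gauss(0.0, 0.01)) + random.expovariate(1 / 0.002)
        for _ in range(N)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

print(f"correlation(Vth0, dVth) = {pearson(vth0, dvth):.3f}")
print(f"max shift / mean shift  = {max(dvth) / statistics.fmean(dvth):.1f}")
```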

    Solid State Circuits Technologies

    The evolution of solid-state circuit technology has a long history within a relatively short period of time. This technology has led to the modern information society, connecting people and tools, creating a large market, and enabling many types of products and applications. Solid-state circuit technology continues to evolve through breakthroughs and improvements every year. This book is devoted to reviewing and presenting novel approaches to some of the main issues involved in this exciting and vigorous technology. The book is composed of 22 chapters, written by authors from 30 different institutions located in 12 different countries throughout the Americas, Asia, and Europe, reflecting the wide international contribution to the book. The broad range of subjects presented offers a general overview of the main issues in modern solid-state circuit technology, while also offering an in-depth analysis of specific subjects for specialists. We believe the book is of great scientific and educational value for many readers. I am profoundly indebted to all of those involved in the work. First and foremost, I would like to acknowledge and thank the authors, who worked hard and generously agreed to share their results and knowledge. Second, I would like to express my gratitude to the Intech team that invited me to edit the book and gave me their full support, making for a fruitful experience while working together to compile this book.

    Efficient Evaluation of Probability and Reliability with Digital Integrated Circuits

    As complementary metal–oxide–semiconductor (CMOS) devices shrink to the nanoscale, digital integrated circuits (ICs) become more susceptible to various environmental parameters, such as temperature, supply voltage, wiring, noise, and fabrication process variations. This reduces circuit operational reliability (i.e., the probability that a circuit or component performs its intended logic function). Signal probability (the probability that a digital signal produces logic 1) is another factor that characterises a circuit's dynamic behavior and power dissipation. Research shows that signal probability and reliability within ICs may interact with each other in a complicated way. Generally speaking, as signal probability changes due to input probability variations, so does the signal reliability, and vice versa. This motivates the simultaneous evaluation of both for digital ICs towards their performance improvement. However, this evaluation can be challenging, especially for large-scale circuits, due to signal correlations caused by reconvergent fanouts within circuits. Of the two existing evaluation approaches, numerical and analytical, the former gives a high level of accuracy at the cost of expensive computation, while the latter does exactly the opposite. This thesis provides a hybrid solution that takes advantage of both numerical and analytical methods to achieve fast and accurate evaluation of signal probability and reliability for ICs (including both combinational and sequential circuits). First, we develop a categorization-based analytical model for combinational circuits to deal with a variety of signal correlations. For strongly correlated or independent cases, analytical solutions are applied for accurate results. For cases with moderate correlation strength, we use local bitstream simulations for fast estimation. Our simulation results show that the proposed method is hundreds of times faster than Monte-Carlo (MC) simulation, while keeping almost the same level of accuracy. We then extend the above method to sequential circuits (with a finite-state-machine model) for probability and reliability evaluation. Since sequential circuits can be viewed as an unfolded network of combinational logic, our focus is on how both probability and reliability converge to a final stable state over a certain number of cycles/iterations. To improve the efficiency of this convergence process, we propose a two-step-convergence (TSC) model instead of traditional step-size-based convergence. Simulation results show that the proposed method speeds up the process by around 30% on average compared to the traditional method while maintaining a high level of accuracy. Finally, we study the impact of device aging on circuit reliability. After years of operation, CMOS (especially PMOS) devices experience an increase in their threshold voltage, a phenomenon called Negative Bias Temperature Instability (NBTI). This aging effect increases gate delays and thus signal arrival times, making circuits temporally unreliable. Threshold voltage changes may also reduce the probability that transistors perform their intended logical operations, making them spatially less reliable. Our investigation focuses on evaluating overall reliability at the circuit level by considering both its spatial aspect (solely the correctness of signal logic values) and its temporal aspect (whether signals arrive in time to be sampled). This would help circuit designers predict circuit lifetime. Simulations on benchmark circuits show that the reliability degradation rate due to the aging effect ranges from 1.5% to 8.2% over a one-year period, depending on the specific circuit.
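
    As a minimal illustration of gate-level propagation of the two quantities discussed, the sketch below pushes signal probability and reliability through AND gates under an independence assumption and a von Neumann gate-error model with flip probability eps; the thesis's contribution is precisely the handling of the correlations (reconvergent fanouts) that this naive propagation ignores. All values are illustrative.

```python
# Hedged sketch of joint signal-probability and reliability
# propagation through a gate netlist. Inputs are assumed independent
# (the correlated cases are what the thesis's categorization-based
# model handles), and each gate flips its output with probability
# EPS (von Neumann error model).
EPS = 0.01   # per-gate failure probability (illustrative)

def and_gate(pa, ra, pb, rb):
    """Return (signal probability, reliability) of an AND output,
    given independent inputs with probabilities pa, pb and
    reliabilities ra, rb."""
    p_ideal = pa * pb                       # P(output = 1), fault-free
    # Symmetric flip model: the observed output is 1 either because
    # the ideal output is 1 and the gate works, or it is 0 and flips.
    p_out = p_ideal * (1.0 - EPS) + (1.0 - p_ideal) * EPS
    # First-order reliability: both inputs correct and gate working
    # (ignores error masking, so this is a lower bound).
    r_out = ra * rb * (1.0 - EPS)
    return p_out, r_out

# Two-level circuit y = (a AND b) AND c, with error-free primary
# inputs of signal probability 0.5 each.
p_ab, r_ab = and_gate(0.5, 1.0, 0.5, 1.0)
p_y, r_y = and_gate(p_ab, r_ab, 0.5, 1.0)
print(f"P(y=1) = {p_y:.4f}, reliability(y) = {r_y:.4f}")
```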

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, the focus of this book is to address the different reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, soft errors, etc. Provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.