
    Low-power emerging memristive designs towards secure hardware systems for applications in internet of things

    Emerging memristive devices offer enormous advantages for applications such as non-volatile memories and in-memory computing (IMC), and there is rising interest in using memristive technologies for security applications in the era of the internet of things (IoT). This review article presents low-power design techniques based on emerging memristive technology for hardware security primitives and systems aimed at secure hardware for IoT. By reviewing the state of the art in three highlighted memristive application areas, i.e. memristive non-volatile memory, memristive reconfigurable logic computing, and memristive artificial intelligence computing, their application-level impacts on novel implementations of secret key generation, crypto functions, and machine learning attacks are explored, respectively. For low-power security applications in IoT, it is essential to understand how best to realize cryptographic circuitry using memristive circuits, to assess the implications of memristive crypto implementations on security, and to develop novel computing paradigms that will enhance their security. This review article aims to help researchers explore security solutions, analyze new possible threats, and develop corresponding protections for secure hardware systems based on low-cost memristive circuit designs.
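
As a rough illustration of the secret-key-generation direction the review surveys, the sketch below derives a key from pairwise comparisons of memristive cell resistances, in the spirit of a weak-PUF construction. The cell count, the log-normal resistance spread, and the hash-based key condensation are illustrative assumptions, not a scheme taken from the article.

```python
# Illustrative sketch only: device-binding key derived from memristive cell
# variability. All parameters (cell count, resistance spread, hash step) are
# hypothetical stand-ins for the techniques surveyed in the review.
import hashlib
import random

def read_cell_resistance(cell_id, seed=42):
    """Hypothetical per-cell high-resistance-state value with log-normal spread (~100 kOhm nominal)."""
    rng = random.Random(seed * 100003 + cell_id)
    return rng.lognormvariate(mu=11.5, sigma=0.25)

def generate_key_bits(n_cells=256):
    """Pair neighbouring cells and compare their resistances to produce one raw bit per pair."""
    bits = []
    for i in range(0, n_cells, 2):
        bits.append(1 if read_cell_resistance(i) > read_cell_resistance(i + 1) else 0)
    return bits

def derive_key(bits):
    """Condense the raw response into a key; a hash stands in for a real fuzzy extractor."""
    raw = bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8))
    return hashlib.sha256(raw).hexdigest()

if __name__ == "__main__":
    print("derived key:", derive_key(generate_key_bits()))
```

Pairwise comparison is used here only because it makes the bit-extraction step explicit; real memristive key generators also need error correction (e.g., a fuzzy extractor) against read noise and resistance drift.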

    Multilevel Modeling, Formal Analysis, and Characterization of Single Event Transients Propagation in Digital Systems

    The exponential growth in the number of transistors per chip has brought tremendous progress in the performance and functionality of semiconductor devices, associated with reduced physical dimensions and higher speed. Electronic devices used in a wide range of applications such as personal entertainment systems, the automotive industry, medical electronic systems, and the financial sector have changed the way we live. However, recent studies reveal that further downscaling of the transistor size at nano-scale technology nodes leads to major challenges.
Reliability (i.e., the ability to provide the intended functionality) is one of them: a system designed in nano-scale nodes is expected to experience more failures in its lifetime than if it were designed at a larger technology node. Such failures can lead to serious consequences ranging from financial losses to loss of human life. Soft errors induced by radiation, which were initially considered a rather exotic failure mechanism causing anomalies in satellites, have become one of the most challenging issues impacting the reliability of modern microelectronic systems, including devices at terrestrial altitudes. For instance, in the medical industry, soft errors have been responsible for the failure and recall of many implantable cardiac pacemakers. Depending on the affected transistor in the design, a particle strike can manifest as a bit flip in a state element (i.e., a Single Event Upset (SEU)) or temporarily change the output of a combinational gate (i.e., a Single Event Transient (SET)). SEUs have been widely studied over the last three decades, as they were considered to be the main source of soft errors. However, recent experiments show that with further technology downscaling, the contribution of SETs to the overall soft error rate is remarkable and, in high-frequency systems, may exceed that of SEUs [1], [2]. In order to minimize the impact of soft errors, the effect of SETs needs to be modeled, predicted, and mitigated. However, despite considerable progress toward efficient methodologies for the functional verification of digital designs, advances in non-functional verification (e.g., soft error analysis) have been lagging. This is because the modeling and analysis of non-functional properties related to SETs is very challenging, owing to the random nature of these faults and the difficulty of modeling the variation in their characteristics as they propagate. Moreover, many details about the design structure and the SET characteristics may not be available at high abstraction levels. Thus, in high-level analysis, many assumptions about SET behavior are usually made, which impacts the accuracy of the generated results. Consequently, the low-cost detection of soft errors due to SETs is very challenging and requires more sophisticated techniques.
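
To make the soft-error chain described above concrete, here is a minimal Monte Carlo sketch that injects single-bit SETs into a toy three-gate netlist and estimates how often one is both logically propagated to the output and captured within a latching window. The netlist, the fixed latching-window probability, and the uniform input distribution are simplifying assumptions, not the thesis's formal multilevel models.

```python
# Minimal Monte Carlo sketch of SET masking analysis on a toy combinational
# netlist; the gate list and latching model are illustrative assumptions.
import random

# Netlist: node name -> (function, input names); primary inputs are 'a'..'d'.
NETLIST = {
    "n1": (lambda x, y: x & y, ("a", "b")),
    "n2": (lambda x, y: x | y, ("c", "d")),
    "out": (lambda x, y: x ^ y, ("n1", "n2")),
}

def evaluate(inputs, flipped_node=None):
    """Evaluate the netlist; optionally invert one internal node to model an SET."""
    values = dict(inputs)
    for node, (fn, operands) in NETLIST.items():   # insertion order is topological here
        v = fn(*(values[o] for o in operands))
        values[node] = v ^ 1 if node == flipped_node else v
    return values["out"]

def set_propagation_probability(trials=100_000, latch_window_prob=0.3):
    rng = random.Random(1)
    observed = 0
    for _ in range(trials):
        inputs = {k: rng.randint(0, 1) for k in "abcd"}
        node = rng.choice(list(NETLIST))
        golden = evaluate(inputs)
        faulty = evaluate(inputs, flipped_node=node)
        # An SET becomes a soft error only if it survives logical masking and is
        # captured by a flip-flop (latching-window masking, modeled as a constant).
        if faulty != golden and rng.random() < latch_window_prob:
            observed += 1
    return observed / trials

if __name__ == "__main__":
    print(f"estimated soft-error probability per SET: {set_propagation_probability():.3f}")
```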

    Reliability-aware and energy-efficient system level design for networks-on-chip

    With CMOS technology aggressively scaling into the ultra-deep sub-micron (UDSM) regime and application complexity growing rapidly in recent years, processors today are being driven to integrate multiple cores on a chip. Such chip multiprocessor (CMP) architectures offer unprecedented levels of computing performance for highly parallel emerging applications in the era of digital convergence. However, a major challenge facing the designers of these emerging multicore architectures is the increased likelihood of failure due to the rise in transient, permanent, and intermittent faults caused by a variety of factors that are becoming more prevalent with technology scaling. On-chip interconnect architectures are particularly susceptible to faults that can corrupt transmitted data or prevent it from reaching its destination. Reliability concerns in UDSM nodes have in part contributed to the shift from traditional bus-based communication fabrics to network-on-chip (NoC) architectures, which provide better scalability, performance, and utilization than buses. In this thesis, to overcome potential faults in NoCs, the research began by exploring fault-tolerant routing algorithms. Under the constraint of deadlock freedom, we make use of the inherent redundancy in NoCs due to multiple paths between packet sources and sinks, and propose different fault-tolerant routing schemes that achieve much better fault tolerance than traditional routing schemes. The proposed schemes also use replication opportunistically to optimize the balance between energy overhead and arrival rate. As 3D integrated circuit (3D-IC) technology with wafer-to-wafer bonding has recently been proposed as a promising candidate for future CMPs, we also propose a fault-tolerant routing scheme for 3D NoCs that outperforms existing popular routing schemes in terms of energy consumption, performance, and reliability. To quantify reliability and provide different levels of intelligent protection, we propose, for the first time, the network vulnerability factor (NVF) metric to characterize the vulnerability of NoC components to faults. NVF determines the probability that a fault in a NoC component manifests as an error in the final program output of the CMP system. With NVF-aware partial protection for NoC components, almost 50% of the energy cost can be saved compared to the traditional approach of comprehensively protecting all NoC components. Lastly, we focus on the problem of fault-tolerant NoC design, which involves many NP-hard sub-problems such as core mapping, fault-tolerant routing, and fault-tolerant router configuration. We propose a novel design-time (RESYN) and a hybrid design- and run-time (HEFT) synthesis framework to trade off energy consumption and reliability in the NoC fabric at the system level for CMPs. Together, our research in fault-tolerant NoC routing, reliability modeling, and reliability-aware NoC synthesis substantially enhances NoC reliability and energy efficiency beyond what is possible with traditional approaches and state-of-the-art strategies from prior work.
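
A minimal sketch of how an NVF-style metric could be estimated by fault injection is given below: each NoC component's NVF is the fraction of injected faults that end up corrupting the final program output, and components are then ranked to guide partial protection. The component names, campaign sizes, and counts are hypothetical; the thesis's actual NVF methodology may differ in detail.

```python
# Hedged illustration of an NVF-style vulnerability estimate from fault-injection
# campaigns; all names and numbers are invented for the example.
from dataclasses import dataclass

@dataclass
class InjectionCampaign:
    component: str       # e.g. a router buffer, crossbar, or link (hypothetical labels)
    injected: int        # faults injected into this component
    output_errors: int   # runs whose final program output was corrupted

def network_vulnerability_factor(campaign: InjectionCampaign) -> float:
    """Fraction of injected faults that manifest as errors in the final program output."""
    return campaign.output_errors / campaign.injected

def protection_priority(campaigns):
    """Rank components by NVF so that partial protection covers the most vulnerable first."""
    return sorted(campaigns, key=network_vulnerability_factor, reverse=True)

if __name__ == "__main__":
    campaigns = [
        InjectionCampaign("virtual-channel buffer", injected=10_000, output_errors=1_800),
        InjectionCampaign("routing-computation unit", injected=10_000, output_errors=650),
        InjectionCampaign("crossbar", injected=10_000, output_errors=300),
    ]
    for c in protection_priority(campaigns):
        print(f"{c.component:28s} NVF = {network_vulnerability_factor(c):.3f}")
```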

    Timing speculation and adaptive reliable overclocking techniques for aggressive computer systems

    Computers have changed our lives beyond our own imagination over the past several decades. Continued and progressive advances in VLSI technology and numerous micro-architectural innovations have played a key role in the design of spectacular low-cost, high-performance computing systems that have become omnipresent in today's technology-driven world. Performance and dependability have become key concerns as these ubiquitous computing machines continue to drive our everyday life. Every application has unique demands, as applications run in diverse operating environments. Dependable, aggressive, and adaptive systems improve efficiency in terms of speed, reliability, and energy consumption. Traditional computing systems run at a fixed clock frequency, which is determined by taking into account the worst-case timing paths, operating conditions, and process variations. Timing-speculation-based reliable overclocking advocates going beyond worst-case limits to achieve the best performance while not avoiding, but detecting and correcting, a modest number of timing errors. The success of this design methodology relies on the fact that timing-critical paths are rarely exercised in a design, and typical execution happens much faster than the timing requirements dictated by worst-case design methodology. Better-than-worst-case design is advocated by several recent research pursuits, which exploit dependability techniques to enhance computer system performance. In this dissertation, we address different aspects of timing-speculation-based adaptive reliable overclocking schemes and evaluate their role in the design of low-cost, high-performance, energy-efficient, and dependable systems. We identify various control knobs in the design that can be tuned to meet different design targets. As part of this research, we extend the SPRIT3E (Superscalar PeRformance Improvement Through Tolerating Timing Errors) framework and characterize the extent of application-dependent performance acceleration achievable in superscalar processors by scrutinizing the various parameters that impact operation beyond worst-case limits. We study the limitations imposed by short-path constraints on our technique and present ways to exploit them to maximize performance gains. We analyze the sensitivity of our technique's adaptiveness by exploring the hardware requirements for dynamic overclocking schemes. Experimental analysis based on SPEC2000 benchmarks running on a SimpleScalar Alpha processor simulator, augmented with error-rate data obtained from hardware simulations of a superscalar processor, is presented. Even though reliable overclocking guarantees functional correctness, it leads to higher power consumption. As a consequence, reliable overclocking without considering on-chip temperatures will bring down the lifetime reliability of the chip. In this thesis, we analyze how reliable overclocking impacts the on-chip temperature of a microprocessor and evaluate the effects of overheating, due to such reliable dynamic frequency tuning mechanisms, on the lifetime reliability of these systems. We then evaluate the effect of thermal throttling, a technique that clamps the on-chip temperature below a predefined value, on system performance and reliability. Our study shows that a reliably overclocked system with dynamic thermal management achieves a 25% performance improvement while lasting 14 years when operated below 353 K.
Over the past five decades, technology scaling, as predicted by Moore's law, has been the bedrock of semiconductor technology evolution. The continued downscaling of CMOS technology to deep sub-micron gate lengths has been the primary reason for its dominance in today's omnipresent silicon microchips. Even though the transition to the next technology node is indispensable, the initial cost and time associated with doing so present a non-level playing field for competitors in the semiconductor business. As part of this thesis, we evaluate the capability of speculative reliable overclocking mechanisms to maximize performance at a given technology level. We evaluate its competitiveness against technology scaling in terms of performance, power consumption, energy, and energy-delay product. We present a comprehensive comparison for integer and floating-point SPEC2000 benchmarks running on a simulated Alpha processor at three different technology nodes in normal and enhanced modes. Our results suggest that adopting reliable overclocking strategies can help skip a technology node altogether, or remain competitive in the market while porting to the next technology node. Reliability has become a serious concern as systems embrace nanometer technologies. In this dissertation, we propose a novel fault-tolerant aggressive system that combines soft-error protection and timing-error tolerance. We replicate both the pipeline registers and the pipeline-stage combinational logic. The replicated logic receives its inputs from the primary pipeline registers while writing its output to the replicated pipeline registers. The organization of redundancy in the proposed Conjoined Pipeline system supports overclocking, provides concurrent error detection and recovery for soft errors, intermittent faults, and timing errors, and flags permanent silicon defects. The fast recovery process requires no checkpointing and takes three cycles. Back-annotated post-layout gate-level timing simulations in 45 nm technology, of a conjoined two-stage arithmetic pipeline and a conjoined five-stage DLX pipeline processor with forwarding logic, show that our approach, even under a severe fault-injection campaign, achieves near 100% fault coverage and an average performance improvement of about 20% when dynamically overclocked.
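
The control loop below sketches the adaptive part of such a scheme under stated assumptions: frequency is raised while the observed timing-error rate stays under a budget, backed off when errors accumulate, and throttled when the on-chip temperature approaches the 353 K clamp mentioned above. The step sizes, error budget, and frequency range are invented for illustration and are not the dissertation's actual controller.

```python
# Illustrative adaptive reliable-overclocking loop with thermal throttling.
# Only the 353 K clamp comes from the abstract; everything else is assumed.
def adaptive_overclock(samples, f_nominal=2.0, f_max=3.0,
                       error_budget=0.01, t_limit=353.0):
    """samples: iterable of (timing_error_rate, chip_temperature_kelvin) observations."""
    freq = f_nominal
    trace = []
    for error_rate, temp in samples:
        if temp >= t_limit:
            freq = max(f_nominal, freq - 0.10)   # thermal throttling: clamp temperature
        elif error_rate > error_budget:
            freq = max(f_nominal, freq - 0.05)   # too many timing errors: back off
        else:
            freq = min(f_max, freq + 0.025)      # headroom available: overclock further
        trace.append(freq)
    return trace

if __name__ == "__main__":
    observations = [(0.001, 330), (0.002, 340), (0.004, 349),
                    (0.020, 351), (0.003, 354), (0.002, 348)]
    print(["%.3f" % f for f in adaptive_overclock(observations)])
```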

    Development of programmable logic circuits resistant to single-event upsets in submicron CMOS technology

    The electronics associated with the particle detectors of the Large Hadron Collider (LHC), under construction at CERN, will operate in a very harsh radiation environment. Most of the microelectronics components developed for the first generation of LHC experiments were designed with very precise, experiment-specific goals and are hardly adaptable to other applications. Commercial Off-The-Shelf (COTS) components cannot be used in the vicinity of the particle collisions because of their poor radiation tolerance. This thesis is a contribution to the effort to cover the need for radiation-tolerant, SEU-robust programmable components for application in High Energy Physics (HEP) experiments. Two components are under development: a Programmable Logic Device (PLD) and a Field-Programmable Gate Array (FPGA). The PLD is a fuse-based, 10-input, 8-I/O general-architecture device in 0.25 micron CMOS technology. The FPGA under development is instead a 32x32 logic-block array, equivalent to ~25k gates, in 0.13 micron CMOS. This work also focused on the design of an SEU-robust register in both of these technologies. The SEU-robust register is employed as a user data flip-flop in both the FPGA and PLD designs, and also as a configuration cell in the FPGA design.
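
The SEU-robust register itself is a transistor-level cell, but its behavioral intent can be sketched as redundant storage plus majority voting, which masks any single node upset. The triplicated model below is an illustrative stand-in, not the actual cell used in the PLD and FPGA designs.

```python
# Behavioral illustration only: redundant storage with majority voting masks a
# single upset. The real SEU-robust register is a circuit-level design.
import random

class VotedRegister:
    def __init__(self, value=0):
        self.copies = [value, value, value]   # redundant storage nodes

    def write(self, value):
        self.copies = [value, value, value]

    def upset(self, which):
        """Model an SEU flipping one of the redundant nodes."""
        self.copies[which] ^= 1

    def read(self):
        return 1 if sum(self.copies) >= 2 else 0   # majority vote masks one flipped copy

if __name__ == "__main__":
    rng = random.Random(0)
    reg = VotedRegister()
    reg.write(1)
    reg.upset(rng.randrange(3))   # single upset somewhere in the cell
    assert reg.read() == 1        # stored value survives the upset
    print("stored value after one SEU:", reg.read())
```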

    Soft-Error Resilience Framework For Reliable and Energy-Efficient CMOS Logic and Spintronic Memory Architectures

    The revolution in chip manufacturing processes spanning five decades has proliferated high-performance and energy-efficient nano-electronic devices across all aspects of daily life. In recent years, CMOS technology scaling has realized billions of transistors within large-scale VLSI chips to elevate performance. However, these advancements have also continually augmented the impact of Single-Event Transient (SET) and Single-Event Upset (SEU) occurrences, which precipitate a range of Soft-Error (SE) dependability issues. Consequently, soft-error mitigation techniques have become essential to improve systems' reliability. Herein, first, we propose optimized soft-error resilience designs to improve the robustness of sub-micron computing systems. The proposed approaches deliver energy efficiency and tolerate double/multiple errors simultaneously while incurring acceptable speed degradation compared to prior work. Secondly, the impact of Process Variation (PV) in the Near-Threshold Voltage (NTV) region on redundancy-based SE-mitigation approaches for High-Performance Computing (HPC) systems is investigated to highlight the approach that realizes favorable attributes, such as reduced critical-datapath delay variation and low speed degradation. Finally, spin-based devices have recently been widely used to design Non-Volatile (NV) elements such as NV latches and flip-flops, which can be leveraged in normally-off computing architectures for Internet-of-Things (IoT) and energy-harvesting-powered applications. Thus, in the last portion of this dissertation, we design and evaluate soft-error-resilient NV latching circuits that achieve attractive features, such as low energy consumption, high computing performance, and superior soft-error tolerance, i.e., the ability to concurrently tolerate Multiple Node Upsets (MNUs), to potentially become a mainstream solution for aerospace and avionic nanoelectronics. Together, these objectives cooperate to increase the energy efficiency and soft-error resilience of larger-scale emerging NV latching circuits within iso-energy constraints. In summary, addressing these reliability concerns is paramount to the successful deployment of future reliable and energy-efficient CMOS logic and spintronic memory architectures with deeply-scaled devices operating at low voltages.

    High power microwave interference effects on analog and digital circuits in IC's

    Microwave or electromagnetic interference (EMI) can couple into electronic circuits and systems, intentionally from high power microwave (HPM) sources or unintentionally due to proximity to general electromagnetic (EM) environments, and cause "soft" reversible upsets and "hard" irreversible failures. As the scaling-down of device feature size and bias voltage progresses, circuits and systems become more susceptible to interference, so even low-power interference can disrupt their operation. Furthermore, it is reported that even electronic systems under a high level of shielding can be upset by intentional electromagnetic interference (IEMI), which has been drawing a great deal of concern from both the civil and military communities, yet little has been done in terms of systematic study of these effects on integrated circuits and devices. We have investigated the effects of high power microwave interference at three levels: (a) on fundamental single MOSFET devices, (b) on basic CMOS IC inverters and cascaded inverters, and (c) on a representative large IC timer circuit for automotive applications. We have studied and identified the static and dynamic operating parameters most vulnerable to device upsets. Fundamental upset mechanisms in MOSFETs and CMOS inverters, and their relation to the characteristics of the microwave interference (power, frequency, pulse width, and period) and to device properties such as size, mobility, dopant concentration, and contact resistance, were investigated. Critical upsets in n-channel MOSFET devices, resulting in loss of amplifier characteristics, were identified for power levels above 10 dBm in the frequency range between 1 and 20 GHz. We have found that microwave-interference-induced excess charges are responsible for the upsets. Upsets in the static operation of CMOS inverters, such as degraded noise margins and output voltages, increased power dissipation, and bit-flip errors, were identified using a load-line characteristic analysis. We developed a parameter extraction method that can predict the dynamic operation of inverters under microwave interference from DC load-line characteristics. Using this method, the effects of microwave interference on propagation delays, output voltage swings, and output currents, as well as their relation to device scaling, were investigated. Two new critical hard-error sources in MOSFETs and CMOS inverters, related to power dissipation and power-budget disruption, were found. An EMI-hardened design for digital circuits has been proposed to mitigate the stress on the devices, the contacts, and the interconnects. We found important new bit-flip and latch-up errors under pulsed microwave interference, which demonstrated that the excess-charge effects are due to electron-hole pair generation under microwave interference. We propose a theory of excess-charge effects and obtain good agreement between our excess-charge model and our experimental results. Further work is proposed to address the vulnerabilities of integrated circuits.
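
As a toy illustration of the static upset criterion discussed above, the snippet below computes inverter noise margins from idealized transfer-curve corner voltages and flags a bit flip when the coupled interference amplitude exceeds the smaller margin. The voltage values are hypothetical, and the paper's own analysis relies on measured load-line characteristics rather than this simplified check.

```python
# Rough static bit-flip check for a CMOS inverter under interference; all
# corner voltages below are hypothetical for a nominal 1.2 V inverter.
def inverter_noise_margins(vil=0.45, vih=0.75, vol=0.05, voh=1.15):
    """Static noise margins from idealized VTC corner voltages."""
    nm_low = vil - vol     # margin protecting a logic-0 input
    nm_high = voh - vih    # margin protecting a logic-1 input
    return nm_low, nm_high

def upset_expected(interference_amplitude_v):
    """Flag a static bit flip when coupled interference exceeds the smaller noise margin."""
    nm_low, nm_high = inverter_noise_margins()
    return interference_amplitude_v > min(nm_low, nm_high)

if __name__ == "__main__":
    for amp in (0.1, 0.3, 0.5):
        print(f"interference {amp:.1f} V -> static upset expected: {upset_expected(amp)}")
```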

    Study of Nanoscale CMOS Device and Circuit Reliability

    The development of semiconductor technology has led to significant scaling of transistor dimensions: the transistor gate length has dropped to tens of nanometers and the gate oxide thickness to about 1 nm. In the next several years, deep sub-micron devices will dominate the semiconductor industry because of their high transistor density and the corresponding performance enhancement. For these devices, reliability issues are the primary concern for commercialization. The major reliability issues caused by voltage and/or temperature stress are gate oxide breakdown (BD), hot carrier effects (HCs), and negative bias temperature instability (NBTI). They become even more important for nanoscale CMOS devices because of the high electric fields that result from small device sizes and the high temperatures that result from high transistor densities and high-speed operation. This dissertation focuses on the study of voltage- and temperature-stress-induced reliability issues in nanoscale CMOS devices and circuits. The physical mechanisms of BD, HCs, and NBTI are presented. A practical and accurate equivalent circuit model for nanoscale devices is employed to simulate RF performance degradation at the circuit level. Parameter measurement and model extraction are addressed. Furthermore, a methodology is developed to predict HC, TDDB, and NBTI effects on RF circuits built with nanoscale CMOS, providing guidance for reliability considerations in RF circuit design. BD, HC, and NBTI effects on digital gates and on RF building blocks implemented with nanoscale devices, namely the low-noise amplifier, oscillator, mixer, and power amplifier, are investigated systematically. The contributions of this dissertation include: a thorough study of the reliability issues caused by voltage and/or temperature stress on nanoscale devices, from the device level to the circuit level; the first exploration of a more realistic voltage-stress case, high-frequency (900 MHz) dynamic stress, compared with traditional DC stress; a simple and practical analytical method, based on normalized parameter degradations in device models, to predict RF performance degradation due to voltage stress in nanoscale devices and RF circuits, giving designers a quick way to evaluate performance degradation; measurement and model-extraction techniques, specific to nanoscale MOSFETs with ultra-thin, ultra-leaky gate oxides, addressed and employed for model establishment; and the use of existing computer-aided design tools (Cadence, Agilent ADS) with the developed models to evaluate performance degradation due to voltage and/or temperature stress by simulation, a potential way for industry to save tens of millions of dollars annually in testing costs. The world now stands at the threshold of the age of nanotechnology, which scientists and engineers have been exploring for years. Reliability is the first challenge for the commercialization of nanoscale CMOS devices, which will be further downscaled to a few tens of nanometers and below. Reliability is no longer a post-design evaluation but a pre-design consideration. The results of this dissertation, from the device level to the circuit level, provide not only insight into how voltage and/or temperature stress affects performance, but also methods and guidance for designers to achieve more reliable circuits with nanoscale MOSFETs in the future.
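
The "normalized parameter degradation" idea can be illustrated with a first-order sketch: scale a key small-signal parameter of the device model (here transconductance) by its measured fractional degradation after stress and re-evaluate a circuit figure of merit such as LNA gain. The gm-proportional gain model and the numbers below are assumptions for illustration, not results from the dissertation.

```python
# First-order sketch: re-evaluate an RF figure of merit from a normalized
# device-parameter degradation. All numbers are hypothetical.
import math

def degraded_lna_gain_db(gain_fresh_db, gm_degradation_fraction):
    """Assume voltage gain scales with gm, so gain(dB) shifts by 20*log10(1 - d)."""
    return gain_fresh_db + 20 * math.log10(1 - gm_degradation_fraction)

if __name__ == "__main__":
    fresh_gain = 15.0  # dB, hypothetical fresh LNA gain
    for stress_hours, d_gm in [(10, 0.02), (100, 0.05), (1000, 0.10)]:
        g = degraded_lna_gain_db(fresh_gain, d_gm)
        print(f"after {stress_hours:>5d} h stress: gm down {d_gm:.0%}, gain = {g:.2f} dB")
```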

    Cross-layer reliability evaluation, moving from the hardware architecture to the system level: A CLERECO EU project overview

    Advanced computing systems realized in forthcoming technologies hold the promise of a significant increase in computational capabilities. However, the same path that is leading technologies toward these remarkable achievements is also making electronic devices increasingly unreliable. Developing new methods to evaluate the reliability of these systems at an early design stage has the potential to save costs, produce optimized designs, and have a positive impact on product time-to-market. The CLERECO European FP7 research project addresses early reliability evaluation with a cross-layer approach across different computing disciplines, across computing system layers, and across computing market segments. The fundamental objective of the project is to investigate in depth a methodology to assess system reliability early in the design cycle of the future systems of the emerging computing continuum. This paper presents a general overview of the CLERECO project, focusing on the main tools and models under development that could be of interest to the research community and to engineering practice.
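
In the spirit of the cross-layer evaluation CLERECO targets, the sketch below combines a raw technology-level fault rate with hardware-architecture and software vulnerability factors to produce an early system-level failure-rate estimate. The component names, FIT values, and derating factors are invented for illustration and are not CLERECO's actual models.

```python
# Illustrative cross-layer reliability roll-up; every value here is a
# placeholder, not a CLERECO result.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    raw_fit: float      # raw failures in time (per 1e9 device-hours) at the technology level
    hw_avf: float       # fraction of raw faults visible at the hardware-architecture level
    sw_masking: float   # fraction of architecture-level errors masked by software

def system_fit(components):
    """Sum per-component contributions after derating through the layers."""
    return sum(c.raw_fit * c.hw_avf * (1 - c.sw_masking) for c in components)

if __name__ == "__main__":
    design = [
        Component("register file", raw_fit=120.0, hw_avf=0.35, sw_masking=0.40),
        Component("L1 cache (ECC)", raw_fit=300.0, hw_avf=0.05, sw_masking=0.20),
        Component("ALU logic",      raw_fit=60.0,  hw_avf=0.15, sw_masking=0.55),
    ]
    print(f"estimated system-level FIT: {system_fit(design):.1f}")
```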

    Program Annual Technology Report: Cosmic Origins Program Office

    What is the Cosmic Origins (COR) Program? From ancient times, humans have looked up at the night sky and wondered: Are we alone? How did the universe come to be? How does the universe work? COR focuses on the second question. Scientists investigating this broad theme seek to understand the origin and evolution of the universe from the Big Bang to the present day, determining how the expanding universe grew into a grand cosmic web of dark matter enmeshed with galaxies and pristine gas, forming, merging, and evolving over time. COR also seeks to understand how stars and planets form from clouds in these galaxies to create the heavy elements that are essential to life, starting with the first generation of stars to seed the universe and continuing through the birth and eventual death of all subsequent generations of stars. The COR Program's purview includes the majority of the field known as astronomy.