111 research outputs found

    Fiabilisation de convertisseurs analogique-numérique à modulation Sigma-Delta

    Get PDF
    This thesis concentrates on reliability-aware methodology development, simulation-based reliability analysis, and failure prediction for 65 nm CMOS analog and mixed-signal (AMS) ICs. Sigma-Delta modulators serve as the object of the reliability study at the system level. A hierarchical statistical approach is proposed to analyse the performance of Sigma-Delta modulators under ageing effects and process variations, and statistical methods are combined into this analysis flow. This thesis work addressed the reliability problems of integrated circuits in 65 nm CMOS technology, in particular design for reliability, reliability simulation, and reliability improvement. The dominant ageing mechanisms, HCI and NBTI, as well as process variation were studied and evaluated quantitatively at the circuit level and at the system level. These methods were applied to Sigma-Delta modulators in order to determine the reliability of this widely used type of component.
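
    A minimal sketch can illustrate what a hierarchical statistical reliability analysis of this kind looks like: transistor-level threshold-voltage (Vth) shifts from ageing and process variation are sampled and propagated through a behavioural performance model. The power-law drift, the SNDR map, and every constant below are hypothetical placeholders, not values from the thesis.

```python
# Monte Carlo sketch: device-level Vth shifts -> system-level Sigma-Delta
# metric. All parameters are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(seed=1)

N_MC = 10_000        # Monte Carlo samples
VTH_NOM = 0.45       # nominal threshold voltage (V), assumed
SIGMA_PROC = 0.015   # process-variation sigma on Vth (V), assumed

def aged_vth_shift(t_years, a=8e-3, n=0.2):
    """Power-law NBTI/HCI-style Vth drift (V); a and n are assumed."""
    return a * t_years ** n

def sndr_db(vth):
    """Toy map from device Vth to modulator SNDR (dB); stands in for a
    real behavioural Sigma-Delta model."""
    return 85.0 - 120.0 * np.abs(vth - VTH_NOM)

t = 10.0  # years of operation
vth = VTH_NOM + rng.normal(0.0, SIGMA_PROC, N_MC) + aged_vth_shift(t)
sndr = sndr_db(vth)

# Parametric "failure" = SNDR below an assumed 80 dB specification
print(f"mean SNDR after {t:.0f} y: {sndr.mean():.1f} dB, "
      f"parametric yield: {np.mean(sndr > 80.0):.1%}")
```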

    Fiabilisation de Convertisseurs Analogique-Numérique à Modulation Sigma-Delta

    Get PDF
    Due to the continuous scaling down of CMOS technology, system-on-chip (SoC) reliability has become important at sub-90 nm CMOS nodes. Integrated circuits and systems applied to aerospace, avionics, vehicle transport, and biomedicine are highly sensitive to reliability problems such as ageing mechanisms and parametric process variations. Novel SoCs with new materials and architectures of high complexity further elevate reliability into a critical aspect of process integration. For instance, random and systematic defects as well as parametric process variations have a large influence on the quality and yield of manufactured ICs right after production. During an IC's usage time, time-dependent ageing mechanisms such as negative bias temperature instability (NBTI) and hot carrier injection (HCI) can significantly degrade its performance. IC reliability is defined as the ability of an integrated circuit or system to maintain its parameters over a given period under defined conditions. The ITRS 2011 reports consider reliability a critical aspect of process integration. Consequently, innovative reliability-aware methodologies are required to guarantee SoC functionality and reliability in nanometre-scale CMOS technologies. This allows the development of methodologies that are independent of the design and of the CMOS technology while being specialised for reliability.
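
    As context for the ageing mechanisms named above, a commonly quoted empirical form for NBTI-induced threshold-voltage drift (of the reaction-diffusion family, and not necessarily the exact model fitted in this thesis) is:

```latex
% Common empirical NBTI drift model; parameter values are
% technology-dependent and are not taken from the thesis.
\Delta V_{th}(t) = A \, V_{gs}^{\gamma} \, e^{-E_a / kT} \, t^{n}
% A: technology prefactor, V_gs: gate stress voltage,
% E_a: activation energy, k: Boltzmann constant, T: temperature,
% n: time exponent (often quoted around 0.1-0.25)
```

    The small time exponent n makes the degradation fast early in life and slow thereafter, which is why long-term reliability is usually extrapolated from accelerated stress tests rather than measured directly.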

    AI/ML Algorithms and Applications in VLSI Design and Technology

    Full text link
    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
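
    As a concrete, if simplified, instance of the kind of automated approach such surveys cover, the sketch below trains a regressor to predict a post-layout quantity (here, a hypothetical path delay) from early design features, so that slow signoff analysis needs to run less often. The features, labels, and data are synthetic stand-ins, not taken from the paper.

```python
# Sketch of an ML surrogate for a slow VLSI analysis step; data synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical early features: fanout, wirelength estimate, drive, slew
X = rng.uniform(0, 1, size=(2000, 4))
# Synthetic "ground truth" delay with an interaction term plus noise
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out paths: {model.score(X_te, y_te):.3f}")
```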

    On variability and reliability of poly-Si thin-film transistors

    Get PDF
    In contrast to conventional bulk-silicon technology, polysilicon (poly-Si) thin-film transistors (TFTs) can be implemented on flexible substrates and can have a low process temperature. These attributes make poly-Si TFT technology attractive for new applications such as flexible displays, biosensors, and smart clothing. However, due to the random nature of grain boundaries (GBs) in poly-Si film and self-heating-enhanced negative bias temperature instability (NBTI), the variability and reliability of poly-Si TFTs are the main obstacles that impede their application in high-performance circuits. The primary focus of this dissertation is to develop new design methodologies and modeling techniques for facilitating new applications of poly-Si TFT technology. To that end, a physical model is first presented to characterize the GB-induced transistor threshold voltage (Vth) variations, considering not only the number but also the position and orientation of each GB in 3-D space. The fast computation time of the proposed model makes it suitable for evaluating GB-induced transistor Vth variation in the early design phase. Furthermore, a self-consistent electro-thermal model is proposed that considers the effects of device geometry, substrate material, and stress conditions on NBTI. With the proposed modeling methodology, the significant impacts of device geometry, substrate, and supply voltage on NBTI in poly-Si TFTs are shown. From a circuit design perspective, a voltage-programming pixel circuit is developed for active-matrix organic light-emitting diode (AMOLED) displays that compensates for the shift of Vth and mobility in driver TFTs as well as for supply voltage degradation. In addition, a self-repair design methodology is proposed to compensate for GB-induced variations in liquid crystal displays (LCDs) and AMOLED displays. Based on the simulation results, the proposed circuit can decrease the required supply voltage by 20% without performance or yield degradation. In the final section of this dissertation, an optimization methodology for circuit-level reliability tests is explored. To effectively predict circuit lifetime, accelerated aging (i.e., elevated voltage and temperature) is commonly applied in circuit-level reliability tests such as constant voltage stress (CVS) and ramp voltage stress (RVS) tests. However, under accelerated aging, a shift of the dominant degradation mechanism may occur, leading to incorrect lifetime prediction. To address this issue, we propose a technique to determine the proper stress range for accelerated aging tests.
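
    A minimal sketch in the spirit of the GB statistics described above: a Poisson-distributed number of grain boundaries crosses each channel, and each boundary's Vth contribution is weighted by its position and orientation. The functional forms and constants below are illustrative assumptions, not the dissertation's calibrated model.

```python
# Monte Carlo of GB-induced Vth variation; all parameters assumed.
import numpy as np

rng = np.random.default_rng(42)

L_CH = 1.0           # normalised channel length
MEAN_GBS = 3.0       # expected GBs per channel (Poisson), assumed
DVTH_PER_GB = 40e-3  # maximum Vth contribution of one GB (V), assumed

def sample_vth_shift(n_samples=20_000):
    shifts = np.zeros(n_samples)
    n_gbs = rng.poisson(MEAN_GBS, n_samples)
    for i, k in enumerate(n_gbs):
        if k == 0:
            continue
        pos = rng.uniform(0.0, L_CH, k)         # position along the channel
        theta = rng.uniform(0.0, np.pi / 2, k)  # orientation vs. current flow
        # GBs near mid-channel and perpendicular to the current matter most
        w = np.sin(theta) * (1.0 - np.abs(pos - L_CH / 2) / (L_CH / 2))
        shifts[i] = DVTH_PER_GB * w.sum()
    return shifts

dvth = sample_vth_shift()
print(f"GB-induced Vth shift: mean {dvth.mean()*1e3:.1f} mV, "
      f"sigma {dvth.std()*1e3:.1f} mV")
```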

    Cross-Layer Resiliency Modeling and Optimization: A Device to Circuit Approach

    Get PDF
    The never-ending demand for higher performance and lower power consumption pushes the VLSI industry to scale technology further down. However, further downscaling of technology at the nanoscale leads to major challenges. Reduced reliability is one of them, arising from multiple sources, e.g., runtime variations, process variation, and transient errors. The objective of this thesis is to tackle unreliability with a cross-layer approach from the device level up to the circuit level.

    Reliability-aware memory design using advanced reconfiguration mechanisms

    Get PDF
    Fast and complex data memory systems have become a necessity in the computational units of today's integrated circuits. These memory systems are integrated in the form of large embedded memories for data manipulation and storage. This goal has been achieved by the aggressive scaling of transistor dimensions down to a few nanometers (nm); such progress, though, comes with a drawback, making it critical to obtain high chip yields. Process variability, due to manufacturing imperfections, along with temporal ageing, mainly induced by higher electric fields and temperatures, are two of the more significant threats that can no longer be ignored in nanoscale embedded memory circuits and that can have a high impact on their robustness. Static Random Access Memory (SRAM) is one of the most used embedded memories; it is generally implemented with the smallest device dimensions, and its robustness is therefore highly important in the nanometer design paradigm. Reliable operation needs to be considered and achieved both in the cell and in the architectural design of the SRAM array. Recently, as designs approach the near/below-10 nm generations, novel non-FET devices such as memristors are attracting high attention as possible candidates to replace conventional memory technologies. In spite of their favorable characteristics, such as low power and high scalability, they also suffer from reliability challenges, such as process variability and endurance degradation, which need to be mitigated at the device and architectural levels. This thesis work tackles such reliability concerns in memories by utilizing advanced reconfiguration techniques. For both SRAM arrays and memristive crossbar memories, novel reconfiguration strategies are considered and analyzed that can extend the memory lifetime, with a significant lifetime improvement observed in both cases. These techniques include monitoring circuits that check the reliability status of the memory units, and architectural implementations that reconfigure the memory system to a more reliable configuration before a fail happens.
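
    To make the monitor-and-reconfigure idea concrete, the toy sketch below has per-bank ageing monitors report a wear score, and a controller that remaps the most heavily used logical blocks onto the healthiest physical banks before any bank reaches end of life. The wear model, traffic model, and policy are assumptions for illustration, not the thesis's circuits.

```python
# Toy wear-aware remapping policy for a banked memory; all models assumed.
import numpy as np

rng = np.random.default_rng(7)
N_BANKS = 8

wear = np.zeros(N_BANKS)                       # scores from monitor circuits
mapping = np.arange(N_BANKS)                   # logical -> physical bank map
access_rate = rng.dirichlet(np.ones(N_BANKS))  # skewed logical traffic

for epoch in range(50):
    wear[mapping] += 0.05 * access_rate  # traffic wears the mapped bank
    order = np.argsort(-access_rate)     # logical banks, hottest first
    healthy = np.argsort(wear)           # physical banks, least worn first
    mapping[order] = healthy             # remap before any bank fails

static_ratio = access_rate.max() / access_rate.min()
print(f"wear imbalance: {wear.max()/wear.min():.2f}x with remapping "
      f"vs {static_ratio:.2f}x with a static mapping")
```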

    Characterisation and mitigation of long-term degradation effects in programmable logic

    No full text
    Reliability has always been an issue in silicon device engineering, but until now it has been managed by the carefully tuned fabrication process. In the future, the underlying physical limitations of silicon-based electronics, plus the practical challenges of manufacturing with such complexity at such a small scale, will lead to a crunch point where transistor-level reliability must be forfeited to continue achieving better productivity. Field-programmable gate arrays (FPGAs) are built on state-of-the-art silicon processes, but it has been recognised for some time that their distinctive characteristics put them in a favourable position over application-specific integrated circuits in the face of the reliability challenge. The literature shows how a regular structure, interchangeable resources, and an ability to reconfigure can all be exploited to detect, locate, and overcome degradation and keep an FPGA application running. To fully exploit these characteristics, a better understanding is needed of the behavioural changes seen under ageing in the resources that make up an FPGA. Modelling is an attractive approach to this, and in this thesis the causes and effects of three important degradation mechanisms are explored. All are shown to have an adverse effect on FPGA operation, but their characteristics present novel opportunities for ageing mitigation. Any modelling exercise is built on assumptions, and so an empirical method is developed for investigating ageing on hardware with an accelerated-life test. Here, experiments show that timing degradation due to negative-bias temperature instability is the dominant process in the technology considered. Building on simulated and experimental results, this work also demonstrates a variety of methods for increasing the lifetime of FPGA lookup tables. The pre-emptive measure of wear-levelling is investigated in particular detail, and it is shown by experiment how different reconfiguration algorithms can result in a significant reduction in the rate of degradation.
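
    A small sketch of the pre-emptive wear-levelling idea investigated here: the same function is rotated among k alternative LUT placements so that each physical site is stressed only 1/k of the time. The power-law degradation model and its constants are assumed, standing in for measured NBTI timing degradation, and are not fitted to the thesis's data.

```python
# Wear-levelling by rotating a function among k LUT placements; model assumed.
def worst_case_slowdown(total_years, k, a=0.02, n=0.2):
    """Relative delay increase of the most-aged site among k rotated
    placements, under an assumed power law in accumulated stress time."""
    stress_time = total_years / k  # each site stressed 1/k of the time
    return a * stress_time ** n

for k in (1, 2, 4, 8):
    print(f"{k} rotated placements -> "
          f"{worst_case_slowdown(10.0, k):.4f} relative delay increase")
```

    Because the assumed degradation is sublinear in stress time, spreading stress yields diminishing returns as k grows, which is one reason the choice of reconfiguration algorithm matters.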

    Design for prognostics and security in field programmable gate arrays (FPGAs).

    Get PDF
    There is an evolutionary progression of Field Programmable Gate Arrays (FPGAs) toward more complex and higher power density architectures such as Systems-on-Chip (SoC) and Adaptive Compute Acceleration Platforms (ACAP). Primarily, this is attributable to continual transistor miniaturisation and more innovative and efficient IC manufacturing processes. Concurrently, the ageing impact of the Bias Temperature Instability (BTI) degradation mechanism has become more pronounced; it can weaken the reliability of VLSI devices, FPGAs in particular, owing to their run-time reconfigurability. At the same time, the vulnerability of FPGAs to device-level attacks is growing in an increasingly hostile cyber and hardware threat environment, as the weakened reliability realm opens the door for rogue elements to intervene. The insertion of highly stealthy and malicious circuitry, called hardware Trojans, into FPGAs is one such malicious intervention. While such attacks adversely affect the security of these devices, they also substantially undermine their reliability. Hitherto, security and reliability have been treated as two separate entities impacting FPGA health. This has resulted in fragmented solutions that do not reflect the true state of FPGA operational and functional readiness, thereby making these devices even more prone to hardware attacks; the recent Spectre and Meltdown vulnerabilities are key examples. This research addresses these concerns by adopting an integrated approach and investigating FPGA security and reliability as two inter-dependent entities, with an additional dimension of health estimation/prognostics. The design and implementation of a small-footprint frequency and threshold-voltage-shift detection sensor, a novel hardware Trojan, and an online transistor dynamic scaling circuit present a viable FPGA security scheme that helps build a strong microarchitectural-level defence against unscrupulous hardware attacks. Augmented with an efficient kernel-based learning technique for FPGA health estimation/prognostics, the integrated solution proves to be more dependable and trustworthy than the prevalent disjointed approach.
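
    The kernel-based health estimation can be pictured with the following sketch, which fits a kernel ridge regressor mapping an assumed ring-oscillator frequency-shift reading to remaining useful life (RUL). The sensor model, ageing law, hyperparameters, and labels are synthetic placeholders, not the thesis's data or method details.

```python
# Kernel-based RUL prognostics from a frequency-shift sensor; data synthetic.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
age = rng.uniform(0, 10, 300)                 # years of operation
freq_shift = 0.8 * age ** 0.25 + rng.normal(0, 0.05, 300)  # % shift, assumed
rul = 10.0 - age                              # remaining-useful-life label

X = freq_shift.reshape(-1, 1)
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=5.0).fit(X, rul)

new_reading = np.array([[0.8 * 5.0 ** 0.25]])  # a device at roughly 5 years
print(f"estimated RUL: {model.predict(new_reading)[0]:.1f} years")
```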

    Design, Fabrication, and Run-time Strategies for Hardware-Assisted Security

    Get PDF
    Today, electronic computing devices are critically involved in our daily lives, basic infrastructure, and national defense systems. With the growing number of threats against them, hardware-based security features offer the best chance for building secure and trustworthy cyber systems. In this dissertation, we investigate ways of making hardware-based security into a reality with primary focus on two areas: Hardware Trojan Detection and Physically Unclonable Functions (PUFs). Hardware Trojans are malicious modifications made to original IC designs or layouts that can jeopardize the integrity of hardware and software platforms. Since most modern systems critically depend on ICs, detection of hardware Trojans has garnered significant interest in academia, industry, as well as governmental agencies. The majority of existing detection schemes focus on test-time because of the limited hardware resources available at run-time. In this dissertation, we explore innovative run-time solutions that utilize on-chip thermal sensor measurements and fundamental estimation/detection theory to expose changes in IC power/thermal profile caused by Trojan activation. The proposed solutions are low overhead and also generalizable to many other sensing modalities and problem instances. Simulation results using state-of-the-art tools on publicly available Trojan benchmarks verify that our approaches can detect Trojans quickly and with few false positives. Physically Unclonable Functions (PUFs) are circuits that rely on IC fabrication variations to generate unique signatures for various security applications such as IC authentication, anti-counterfeiting, cryptographic key generation, and tamper resistance. While the existence of variations has been well exploited in PUF design, knowledge of exactly how variations come into existence has largely been ignored. Yet, for several decades the Design-for-Manufacturability (DFM) community has actually investigated the fundamental sources of these variations. Furthermore, since manufacturing variations are often harmful to IC yield, the existing DFM tools have been geared towards suppressing them (counter-intuitive for PUFs). In this dissertation, we make several improvements over current state-of-the-art work in PUFs. First, our approaches exploit existing DFM models to improve PUFs at physical layout and mask generation levels. Second, our proposed algorithms reverse the role of standard DFM tools and extend them towards improving PUF quality without harming non-PUF portions of the IC. Finally, since our approaches occur after design and before fabrication, they are applicable to all types of PUFs and have little overhead in terms of area, power, etc. The innovative and unconventional techniques presented in this dissertation should act as important building blocks for future work in cyber security
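
    A minimal sketch of the run-time detection-theory idea described above: on-chip thermal sensor readings are compared against a golden (Trojan-free) thermal profile, and the chip is flagged when the residual energy exceeds a chi-square threshold. The sensor count, noise level, and golden profile are assumptions for illustration, not the dissertation's estimators.

```python
# Golden-model residual test for thermal Trojan detection; model assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
N_SENSORS, SIGMA = 16, 0.2  # sensors and noise std-dev in deg C, assumed
golden = 55.0 + 5.0 * rng.random(N_SENSORS)  # expected Trojan-free profile

def trojan_detected(measured, alpha=1e-3):
    """Chi-square test on normalised residuals vs. the golden profile."""
    t_stat = np.sum(((measured - golden) / SIGMA) ** 2)
    return t_stat > stats.chi2.ppf(1 - alpha, df=N_SENSORS)

clean = golden + rng.normal(0, SIGMA, N_SENSORS)
infected = clean.copy()
infected[4] += 1.5  # localised heating from an activated Trojan
print(trojan_detected(clean), trojan_detected(infected))
```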