66 research outputs found

    BTI impact on logical gates in nano-scale CMOS technology


    Cross-Layer Resiliency Modeling and Optimization: A Device to Circuit Approach

    The never-ending demand for higher performance and lower power consumption pushes the VLSI industry to scale technology further down. However, further downscaling at the nanoscale leads to major challenges. Reduced reliability is one of them, arising from multiple sources, e.g., runtime variations, process variation, and transient errors. The objective of this thesis is to tackle unreliability with a cross-layer approach, from the device up to the circuit level.

    Impact of Bias Temperature Instability on Soft Error Susceptibility

    In this paper, we address the issue of analyzing the effects of aging mechanisms on ICs' soft error (SE) susceptibility. In particular, we consider bias temperature instability (BTI), namely negative BTI in pMOS transistors and positive BTI in nMOS transistors, which are recognized as the most critical aging mechanisms reducing the reliability of ICs. We show that BTI significantly reduces the critical charge of nodes of combinational circuits during in-field operation, thus increasing the SE susceptibility of the whole IC. We then propose a time-dependent model for SE susceptibility evaluation, enabling the use of adaptive SE hardening approaches based on the IC's lifetime.
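    A hedged illustration of the relations such a time-dependent susceptibility model can build on; the power-law form, exponent range, and linear critical-charge sensitivity below are common textbook assumptions, not the model proposed in the paper:

```latex
% Generic BTI power-law threshold shift and first-order critical-charge sensitivity
\Delta V_{th}(t) \approx A\,t^{\,n}, \quad n \approx 0.1\text{--}0.2,
\qquad
Q_{crit}(t) \approx Q_{crit}(0) - k\,\Delta V_{th}(t)
```

    Here $k$ lumps together the node capacitance and the restoring-drive sensitivity, and would have to be fitted per node and per technology.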

    Reliable Power Gating with NBTI Aging Benefits

    In this paper, we show that Negative Bias Temperature Instability (NBTI) aging of sleep transistors (STs), together with its detrimental effect on circuit performance and lifetime, presents considerable benefits for power-gated circuits. Indeed, it reduces static power due to leakage current and increases ST switch efficiency, making power gating more efficient and effective over time. The magnitude of these aging benefits depends on operating and environmental conditions. By means of HSPICE simulations, considering a 32nm CMOS technology, we demonstrate that static power may reduce by more than 80% over 10 years of operation. The static power decrease over time due to NBTI aging is also proven experimentally, using a test chip manufactured in a TSMC 65nm technology. We propose an ST design strategy for reliable power gating, in order to harvest the benefits offered by NBTI aging. It relies on designing STs with a suitably lower Vth than the standard power-switching fabric. This can be achieved either by re-designing the STs with the identified Vth value, or by applying a proper forward body bias to the available power-switching fabrics. Through HSPICE simulations, we show lifetime extension of up to 21.4X and average static power reduction of up to 16.3% compared to the standard ST design approach, without additional area overhead. Finally, we show lifetime extension and several performance-cost trade-offs when a target maximum lifetime is considered.
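    The leakage benefit follows from the exponential dependence of sub-threshold current on the threshold voltage. The sketch below is a minimal illustration of that mechanism, assuming a generic power-law NBTI shift and illustrative constants; it is not the paper's calibrated 32nm/65nm model.

```python
# Hedged sketch: generic power-law NBTI Vth shift combined with the textbook
# exponential dependence of sub-threshold leakage on Vth, showing qualitatively
# why sleep-transistor aging reduces static power over time. All constants are
# illustrative assumptions, not values from the paper.
import math

N_SLOPE = 1.3        # sub-threshold slope factor (assumed)
V_THERMAL = 0.026    # thermal voltage at ~300 K [V]
DVTH_1Y = 0.030      # assumed NBTI |Vth| shift after 1 year of stress [V]
TIME_EXP = 0.16      # power-law time exponent (commonly reported ~0.1-0.2)

def delta_vth(years: float) -> float:
    """Generic power-law NBTI shift, normalized to the assumed 1-year value."""
    return DVTH_1Y * years ** TIME_EXP

def leakage_ratio(years: float) -> float:
    """Sub-threshold leakage relative to the fresh (unaged) sleep transistor."""
    return math.exp(-delta_vth(years) / (N_SLOPE * V_THERMAL))

if __name__ == "__main__":
    for years in (1, 5, 10):
        print(f"{years:>2} y: dVth = {1e3 * delta_vth(years):4.1f} mV, "
              f"static power ~ {100 * leakage_ratio(years):4.1f}% of fresh value")
```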

    Performance sensor for CMOS memory cells

    Nowadays almost everything contains a small electronic component, and that component in turn needs a memory to store its instructions. Among the various types of memories, Complementary Metal Oxide Semiconductor (CMOS) memories are the most widely used in integrated circuits and, as technology keeps scaling down, performance and reliability problems become a constant concern. Effects such as BTI (Bias Temperature Instability), TDDB (Time Dependent Dielectric Breakdown), HCI (Hot Carrier Injection) and EM (Electromigration) gradually degrade the physical parameters of the field-effect transistors (MOSFETs) over time, changing their electrical properties. BTI comprises PBTI (Positive BTI), which mostly affects NMOS transistors, and NBTI (Negative BTI), which mostly affects PMOS transistors. While NBTI is dominant for technologies down to 32 nanometers, for smaller nodes both effects are equally important. However, other variations may also compromise correct circuit operation, such as process (P), voltage (V) and temperature (T) variations, collectively referred to, together with aging, as PVTA (Process, Voltage, Temperature and Aging) variations. Random Access Memory (RAM) cells, in particular static (SRAM) and dynamic (DRAM) memories, rely on precise read and write times and are therefore highly exposed to the aging of their components; the resulting performance loss leads to slower transitions, hence slower reads and writes, and eventually to read and write errors. In addition, the Static Noise Margin (SNM) decreases, further compromising memory reliability. Aging of CMOS memories therefore translates into memory errors over time, which is undesirable, especially in critical systems where a memory error or failure can put essential systems at risk (for example, in a safety system an error may trigger a set of unwanted actions). Solutions for monitoring memory errors have been presented in the literature, such as the On-Chip Aging Sensor (OCAS), which detects NBTI-induced aging in an SRAM. This sensor, however, has some limitations: it applies only to a set of SRAM cells connected to a bit line, cannot be applied individually to other memory cells such as a DRAM, and does not cover the PBTI effect. Another previously presented solution, the Aging Sensor for CMOS Memory Cells, improves on OCAS but still has limitations, namely a strong dependence on synchronization with the memory and no support for calibrating the system during operation. The work presented in this dissertation solves many of the problems of these previous works.

    Specifically, a performance sensor for memories is presented that recognizes when the memory may be on the verge of failing due to factors that affect its performance in write and read operations; in other words, it signals failures predictively. The sensor is divided into three main parts, described below. The Transition Detector works as a converter of the transitions on the memory bit line for the sensor, creating pulses whose duration is proportional to the duration of the bit line transition: a fast transition results in short pulses and a slow transition results in long pulses. This block has two configurations for SRAMs, one for SRAM bit lines initialized to VDD and another for SRAM bit lines initialized to VDD/2, plus a third configuration for the case where the detector is applied to a DRAM. The transition detector is based on a set of unbalanced inverters (i.e., with different drive strengths for the N and P transistors in the inverter), creating N-type inverters (N transistor more conductive than the P) and P-type inverters (P transistor more conductive than the N) that respond differently to 1-to-0 and 0-to-1 transitions. These differences are crucial to create the final pulse that feeds the Pulse Detector. This second block of the sensor charges a capacitor to a voltage proportional to the time the bit line took to transition. Here lies a new and important feature compared with existing solutions: the sensor can be calibrated. A set of transistors is used to charge the capacitor during the pulse generated by the transition detector, allowing the charging resistance of the capacitor to be increased or decreased, so that the capacitor holds a higher or lower voltage (proportional to the bit line transition time) to be used in the subsequent comparison. The third main block of the sensor is essentially a comparator, which compares the voltage stored in the capacitor with a reference voltage available in the sensor and defined at design time. The comparator identifies which of the two voltages is higher (the capacitor voltage, proportional to the bit line transition time, or the reference voltage) and drives it to VDD while the lower one is pulled to VSS, thereby signaling whether the evaluated transition should be considered an error. The whole process is controlled by a finite state machine with three states. The first state, Reset, sets every node of the circuit to the voltages required to start operation. The second state, Sample, waits for a bit line transition to be evaluated by the sensor, after which the controller advances to the third state, Compare, where the comparator is enabled and the result of the comparison is driven to the output. Thus, if a transition on the bit line is too slow, which indicates an error, this is signaled externally by activating the output signal; if no error is detected in the transitions, the output signal is not activated.

    The sensor can operate online, i.e., the memory circuit does not need to be taken out of normal operation to be tested. In addition, it can be used internally in the memory, as a local sensor (monitoring the real memory cells), or externally, as a global sensor, if it is set to monitor a dummy memory cell.

    Among the several types of memories, Complementary Metal Oxide Semiconductor (CMOS) memories are the most used in integrated circuits and, as technology advances and scales further down, performance and reliability become constant concerns. Effects such as BTI (Bias Temperature Instability), both positive (PBTI) and negative (NBTI), TDDB (Time Dependent Dielectric Breakdown), HCI (Hot Carrier Injection), EM (Electromigration), etc., are aging effects that contribute to the cumulative degradation of the transistors. Moreover, other parametric variations may also jeopardize the proper functioning of circuits and reduce their performance, such as process variations (P), power-supply voltage variations (V) and temperature variations (T), referred to generically, together with aging, as PVTA (Process, Voltage, Temperature and Aging) variations. The sensor proposed in this work aims to signal these problems so that the user knows when the memory operation may be compromised. The sensor is made up of three important parts, the Transition Detector, the Pulse Detector and the Comparator, creating a sensor that converts a bit line transition generated by a memory operation (read or write) into a pulse and a voltage, which can be compared with a reference voltage available in the sensor. If the reference voltage is higher than the voltage proportional to the bit line transition time, the sensor output is not activated; but if the bit line transition time is long enough to generate a voltage higher than the reference voltage, the sensor output signals a predictive error, denoting that memory performance is in a critical state that may lead to an error if corrective measures are not taken. One important feature of this sensor topology is that it can be calibrated during operation, by controlling the sensor's sensitivity to the bit line transition. Another important feature is that it can be applied locally, to monitor the online operation of the memory, or globally, by monitoring a dummy memory in pre-defined conditions. Moreover, it can be applied to SRAM or DRAM, being the first online sensor available for DRAM memories.
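    A behavioral sketch of the decision flow described above (Transition Detector, Pulse Detector, Comparator, sequenced by the Reset/Sample/Compare controller). It is an assumption-level illustration, not a circuit model; the linear charge approximation and all numbers are invented for the example.

```python
# Hedged behavioral sketch of the sensor decision flow: a slow bit-line
# transition charges the capacitor above the reference and is flagged as a
# predictive error. Names and constants are illustrative assumptions.
from enum import Enum, auto

class State(Enum):
    RESET = auto()    # put all nodes at their start-up voltages
    SAMPLE = auto()   # wait for a bit-line transition
    COMPARE = auto()  # enable the comparator and drive the result out

def capacitor_voltage(pulse_width_ns: float, gain_v_per_ns: float) -> float:
    """Pulse Detector: stored voltage grows with the bit-line transition time;
    gain_v_per_ns stands in for the calibratable charging path."""
    return pulse_width_ns * gain_v_per_ns

def evaluate_transition(transition_time_ns: float,
                        gain_v_per_ns: float = 0.05,
                        v_ref: float = 0.4) -> tuple[list[State], bool]:
    """Run one Reset -> Sample -> Compare cycle; True flags a predictive error."""
    trace = [State.RESET, State.SAMPLE, State.COMPARE]
    v_cap = capacitor_voltage(transition_time_ns, gain_v_per_ns)
    return trace, v_cap > v_ref

if __name__ == "__main__":
    for t_ns in (4.0, 12.0):   # fast vs. degraded (slow) bit-line transition
        _, error = evaluate_transition(t_ns)
        print(f"transition {t_ns:>5.1f} ns -> predictive error: {error}")
```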

    DESIGN AND TEST OF DIGITAL CIRCUITS AND SYSTEMS USING CMOS AND EMERGING RESISTIVE DEVICES

    The memristor is an emerging nano-device. Low-power operation, high density, scalability, non-volatility, and compatibility with CMOS technology have made it a promising technology for memory, Boolean implementation, computing, and logic systems. This dissertation focuses on testing and design of such applications. In particular, we investigate the testing of memristor-based memories, the design of memristive implementations of Boolean functions, and the reliability and design of neuromorphic computing systems such as neural networks. In addition, we show how to modify threshold logic gates to implement more functions. Although the memristor is a promising emerging technology, it is prone to defects due to uncertainties in nanoscale fabrication. Fast March tests that benefit from fast write operations are proposed in Chapter 2. The test application time is reduced significantly while simultaneously reducing the average test energy per cell. Experimental evaluation in 45 nm technology shows a speed-up of approximately 70% with a decrease in energy of approximately 40%. DfT schemes are proposed to implement the new test methods. In Chapter 3, an Integer Linear Programming-based framework to identify current-mode threshold logic functions is presented. It is shown that threshold logic functions can be implemented in CMOS-based current-mode logic with a reduced transistor count when the input weights are not restricted to be integers. Experimental results show that many more functions can be implemented with a predetermined hardware overhead, and that the hardware requirement of a large percentage of existing threshold functions is reduced compared to the traditional CMOS-based threshold logic implementation. In Chapter 4, a new method to implement threshold logic functions using memristors is presented. This method exploits the wide resistance range of the memristor, which is used to define different weight values, and significantly reduces the transistor count. The proposed approach implements many more functions as threshold logic gates compared to existing implementations. Experimental results in 45 nm technology show that the proposed memristive approach implements threshold logic gates with less area and power consumption. Finally, Chapter 5 focuses on current-based designs for neural networks. CMOS aging impacts the total synaptic current, and this in turn impacts accuracy. Chapter 5 introduces an enhanced memristive crossbar array (MCA)-based analog neural network architecture to improve reliability under the aging effect. A built-in current-based calibration circuit is introduced to restore the total synaptic current. The calibration circuit is a current sensor that receives the ideal reference current for a non-aged column and restores the reduced sensed current at each column to the ideal value. Experimental results show that the proposed approach restores the currents with less than 1% error, and the area overhead is negligible.
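    For reference, a threshold logic function of the kind targeted in Chapters 3 and 4 computes a weighted sum of its inputs and compares it against a threshold. The sketch below shows that evaluation in plain Python; in the memristive implementation the weights would be realized by memristor conductances, and the concrete weights used here are only illustrative.

```python
# Minimal sketch of a threshold logic gate: output 1 when the weighted sum of
# the binary inputs reaches the threshold. Weights and threshold are assumed
# example values, not taken from the dissertation.
from typing import Sequence

def threshold_gate(inputs: Sequence[int], weights: Sequence[float],
                   threshold: float) -> int:
    """Evaluate f(x) = [ sum_i w_i * x_i >= T ]."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

if __name__ == "__main__":
    # A 3-input majority gate is the classic example: weights (1, 1, 1), T = 2.
    for x in ((0, 0, 1), (0, 1, 1), (1, 1, 1)):
        print(x, "->", threshold_gate(x, (1, 1, 1), 2))
```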

    Miniaturized Transistors, Volume II

    In this book, we aim to address the ever-advancing progress in microelectronic device scaling. Complementary Metal-Oxide-Semiconductor (CMOS) devices continue to be miniaturized, irrespective of the seeming physical limitations, helped by advancing fabrication techniques. We observe that miniaturization does not always refer to the latest technology node for digital transistors. Rather, by applying novel materials and device geometries, a significant reduction in the size of microelectronic devices can be achieved for a broad set of applications. The achievements made in the scaling of devices for applications beyond digital logic (e.g., high power, optoelectronics, and sensors) are taking the forefront in microelectronic miniaturization. Furthermore, all these achievements are assisted by improvements in the simulation and modeling of the involved materials and device structures. In particular, process and device technology computer-aided design (TCAD) has become indispensable in the design cycle of novel devices and technologies. It is our sincere hope that the results provided in this Special Issue prove useful to scientists and engineers who find themselves at the forefront of this rapidly evolving and broadening field. Now, more than ever, it is essential to look for solutions to find the next disruptive technologies that will allow for transistor miniaturization well beyond silicon's physical limits and the current state of the art. This requires a broad attack, including studies of novel and innovative designs as well as emerging materials, which are becoming more application-specific than ever before.

    Degradation in FPGAs: Monitoring, Modeling and Mitigation

    This dissertation targets transistor aging degradation as well as the associated thermal challenges in FPGAs (aging depends exponentially on chip temperature). The main objectives are to perform experimentation, analysis, and device-level model abstraction for modeling the degradation in FPGAs, then to monitor the FPGA to keep track of aging rates, and ultimately to propose an aging-aware FPGA design flow to mitigate the aging.
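    The exponential temperature dependence mentioned above is commonly captured with an Arrhenius-type acceleration factor; the form below is a standard assumption used for illustration, not necessarily the abstraction adopted in the dissertation:

```latex
% Arrhenius-type temperature acceleration of aging
AF(T) = \exp\!\left[\frac{E_a}{k_B}\left(\frac{1}{T_{\mathrm{ref}}} - \frac{1}{T}\right)\right]
```

    where $E_a$ is an activation energy, $k_B$ is Boltzmann's constant, and $T$, $T_{\mathrm{ref}}$ are absolute temperatures.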

    Reliability-aware memory design using advanced reconfiguration mechanisms

    Fast and complex data memory systems have become a necessity in modern computational units in today's integrated circuits. These memory systems are integrated in the form of large embedded memories for data manipulation and storage. This has been achieved by aggressively scaling transistor dimensions down to a few nanometers (nm); such progress, however, comes with a drawback, making it critical to obtain high yields of the chips. Process variability, due to manufacturing imperfections, along with temporal aging, mainly induced by higher electric fields and temperature, are two of the most significant threats that can no longer be ignored in nano-scale embedded memory circuits, and can have a high impact on their robustness. Static Random Access Memory (SRAM) is one of the most used embedded memories; it is generally implemented with the smallest device dimensions, and therefore its robustness is highly important in the nanometer design paradigm. Its reliable operation needs to be considered and achieved both at the cell level and in the architectural design of SRAM arrays. Recently, as design generations approach or go below 10nm, novel non-FET devices such as memristors are attracting high attention as possible candidates to replace conventional memory technologies. In spite of their favorable characteristics, such as low power and high scalability, they also suffer from reliability challenges, such as process variability and endurance degradation, which need to be mitigated at the device and architectural levels. This thesis tackles these reliability concerns in memories by utilizing advanced reconfiguration techniques. Novel reconfiguration strategies that can extend memory lifetime are considered and analyzed for both SRAM arrays and memristive crossbar memories. These techniques include monitoring circuits to check the reliability status of the memory units, and architectural implementations to reconfigure the memory system to a more reliable configuration before a failure happens.

    Memory system design in today's integrated circuits continually strives for faster and more complex systems, which have become a necessity for modern computational units. These memory systems are integrated as embedded memory for better data manipulation and storage. This goal has been achieved thanks to the aggressive scaling of transistor dimensions, which is reaching the nanometric range. Such progress, however, has brought the drawback of lower reliability, since it has become very difficult to obtain high chip yields. Process variability, due to manufacturing imperfections, together with device degradation, mainly induced by high electric fields and high temperatures, are two of the most relevant threats that can no longer be ignored in embedded memory circuits, and they can have a high impact on their final robustness. Static Random Access Memory (SRAM) is one of the most widely used memory cells today. These cells are generally implemented with the smallest device dimensions, which makes the study of their robustness highly relevant in the current nanometer-range design paradigm. The reliability of their operation needs to be considered and achieved both at the memory cell level and in the design of complex architectures based on SRAM cells. Nowadays, with the design of systems based on 10nm devices, novel non-FET devices such as memristors are attracting great attention as possible candidates to replace current conventional memory technologies. Despite their favorable characteristics, such as low power consumption and high scalability, they also suffer from significant reliability challenges, such as process variability and endurance degradation, which need to be mitigated both at the device level and at the architectural level. This doctoral thesis therefore addresses these memory reliability problems through the use of advanced reconfiguration techniques. Novel reconfiguration strategies have proven valid both for SRAM-based memories and for memristive crossbars, with a significant lifetime improvement observed in both cases. These techniques include monitoring circuits to check the reliability of the memory units, and architectural implementations aimed at reconfiguring the memory system to a more reliable configuration before a failure occurs.
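    A purely illustrative sketch of the monitor-and-reconfigure idea described above: spare resources replace memory units whose monitored reliability margin drops below a guard band, before an actual failure. The guard band, the margin metric, and all names are assumptions, not the mechanisms proposed in the thesis.

```python
# Hedged sketch: swap in a spare for every memory unit whose monitored
# reliability margin has fallen below a guard band, before it actually fails.
from dataclasses import dataclass

GUARD_BAND = 0.2   # reconfigure while this much margin is still left (assumed)

@dataclass
class MemoryUnit:
    name: str
    margin: float      # normalized reliability margin reported by a monitor

def reconfigure(units: list[MemoryUnit],
                spares: list[MemoryUnit]) -> list[MemoryUnit]:
    """Return the unit list with degraded units replaced by available spares."""
    healthy = []
    for unit in units:
        if unit.margin < GUARD_BAND and spares:
            healthy.append(spares.pop())   # remap this unit's role to a spare
        else:
            healthy.append(unit)
    return healthy

if __name__ == "__main__":
    banks = [MemoryUnit("bank0", 0.8), MemoryUnit("bank1", 0.1)]
    spares = [MemoryUnit("spare0", 1.0)]
    print([u.name for u in reconfigure(banks, spares)])  # ['bank0', 'spare0']
```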

    Characterization of self-heating effects and assessment of its impact on reliability in FinFET technology

    The systematically growing power (heat) dissipation in CMOS transistors with each successive technology node is reaching levels that could impact their reliable operation. The emergence of technologies such as bulk/SOI FinFETs has dramatically confined the heat in the device channel due to their vertical geometry, and this is expected to be further exacerbated with gate-all-around transistors. This work studies heat generation in the channel of semiconductor devices and measures its dissipation by means of wafer-level characterization and predictive thermal simulation. The experimental work is based on several existing device thermometry techniques, to which additional layout improvements are made, in state-of-the-art bulk FinFET and SOI FinFET 14nm technology nodes. The sensors produce excellent matching results, which are confirmed through TCAD thermal simulation; differences between sensor types are quantified, and error bars on the measurements are established. The lateral heat transport measurements determine that heat from the source is mostly dissipated within a distance of 1µm and 1.5µm in bulk FinFET and SOI FinFET, respectively. Heat additivity is confirmed, highlighting that the whole system needs to be considered when performing thermal analysis. Furthermore, self-heating is investigated for different layout densities by varying the number of fins and fingers per active region (RX). The fin thermal resistance is measured at different ambient temperatures, showing a variation of up to 70% between -40°C and 175°C. The Si fin therefore has a dominant effect on heat transport, and its varying thermal conductivity should be taken into account. The effect of ambient temperature on self-heating measurements is confirmed by supplying heat through the thermal chuck and through adjacent heater devices. The motivation for this work is the continuous evolution of transistor geometry and the use of exotic materials, which in recent technology nodes have made heat removal more challenging, posing reliability and performance concerns. Therefore, this work studies the impact of self-heating on reliability testing under DC conditions as well as under realistic CMOS logic operating (AC) conditions. Front-end-of-line (FEOL) reliability mechanisms, such as hot carrier injection (HCI) and non-uniform time-dependent dielectric breakdown (TDDB), are studied to show that self-heating effects can impact measurement results, and recommendations are given on how to mitigate them. By performing HCI stress at moderate bias conditions, this dissertation shows that laborious heat-subtraction techniques are no longer necessary. Self-heating is also studied under more realistic device switching conditions by utilizing ring oscillators with several densities and stage counts, showing that self-heating is considerably lower than under constant-voltage stress conditions and that the resulting degradation is not distinguishable.
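    A back-of-the-envelope sketch of the heat-additivity picture discussed above: the temperature rise at a device is approximated as the superposition of power times thermal-resistance contributions from the device itself and from its neighbors. All values are illustrative assumptions, not measured data from this work.

```python
# Hedged sketch of heat additivity: total temperature rise approximated as the
# sum of P * R_th contributions. Numbers are invented for illustration only.
def temperature_rise(contributions: list[tuple[float, float]]) -> float:
    """Each contribution is (power_in_watts, thermal_resistance_in_K_per_W)."""
    return sum(p * r_th for p, r_th in contributions)

if __name__ == "__main__":
    self_heating = (0.5e-3, 40e3)   # the device itself: 0.5 mW, 40 kK/W (assumed)
    neighbour    = (0.5e-3, 5e3)    # an adjacent heater: weaker thermal coupling
    print(f"dT = {temperature_rise([self_heating, neighbour]):.1f} K")
```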