
    Techniques for Aging, Soft Errors and Temperature to Increase the Reliability of Embedded On-Chip Systems

    This thesis investigates the challenge of providing an abstracted, yet sufficiently accurate, reliability estimation for embedded on-chip systems. It also proposes new techniques to increase the reliability of register files within processors against aging effects and soft errors, and introduces a novel thermal measurement setup that captures clear infrared images of modern multi-core processors.

    Degradation Models and Optimizations for CMOS Circuits

    Guaranteeing the reliability of CMOS circuits is currently one of the greatest challenges in chip and circuit design. With the end of Dennard scaling, every new semiconductor technology generation increases the electric fields inside the transistors. These stronger fields stimulate degradation phenomena (transistor aging, self-heating, noise, etc.), leading to ever more severe degradation of the transistors. With each new technology generation, transistors therefore suffer increasingly strong deterioration of their electrical parameters. To preserve circuit functionality and reliability, it becomes essential to determine precisely how the weakened transistors affect the circuit. The two most important consequences of this degradation are slower switching and increased power consumption. If they are left unaccounted for, the slower switching speed can lead to timing violations (i.e., the circuit cannot complete its computation before the next operation begins) and compromise the circuit's functionality (erroneous outputs, corrupted data, etc.).

To account for this degradation of transistor parameters over time, safety margins (guard bands) are introduced. For example, the circuit's clock period is artificially lengthened to tolerate slower switching and thus avoid errors. This, however, comes at the cost of performance, since a longer clock period means a lower clock frequency. Determining the right guard band is crucial: if it is set too small, the circuit fails; if it is set too large, performance is sacrificed unnecessarily. Currently, industry relies on worst-case reliability estimation (maximally aged circuit, maximum operating temperature at minimum voltage, worst-case fabrication, etc.). This worst-case assumption guarantees that the chip (or integrated circuit) remains functional under all operating conditions that can occur, and it also permits many simplifications: the actual operating temperature, for instance, need not be determined, since the worst possible (very high) operating temperature can simply be assumed. Unfortunately, this established worst-case practice (experimental or simulation-based) can no longer be sustained. It imposes such harsh operating conditions (maximum temperature, etc.) and requirements (e.g., 25 years of operation) that the transistors, exposed to ever stronger electric fields, suffer enormous degradation: the combination of high temperature, high voltage, and the increasing fields of each generation steadily intensifies the degradation phenomena. The guard band determined under the worst case is therefore enormously pessimistic and far too large, and this degree of pessimism causes substantial performance losses that are unnecessary and hence avoidable.

While military circuits, for example, must operate for 25 years under harsh conditions, consumer electronics run at lower temperatures and only need to remain functional for the duration of a two-year warranty. For the latter, the guard bands can therefore be made much smaller, reclaiming performance that was previously given up in the name of reliability. This work aims to provide tailored guard bands for the individual application scenarios of a circuit. For demanding environments such as space applications (where repair is impossible), the worst case remains relevant. In most applications, however, operating conditions are less harsh (cooling systems, for example, ensure lower temperatures). Here, guard bands can be tailored and determined per application, so that degradation is tolerated exactly and reliability is preserved at minimal cost (performance, etc.). Unfortunately, today's standard design tools are poorly equipped for this application-specific guard-band determination. This work aims to enable standard design tools to perform such reliability estimation for arbitrary circuits under arbitrary operating conditions. To this end, we present our research contributions as four steps toward application-specific guard bands.

Step 1 improves the modeling of the degradation phenomena (transistor aging, self-heating, noise, etc.). Its goal is a comprehensive, unified model of these phenomena. Using defect modeling from materials science, the underlying physical processes are modeled so that their interactions are captured (e.g., phenomenon A can accelerate phenomenon B), yielding a single model that covers several phenomena simultaneously. Recently discovered phenomena are modeled and considered as well. In sum, this enables accurate degradation modeling of transistors while simultaneously accounting for all essential phenomena.

Step 2 accelerates these degradation models from several minutes per transistor (the physicists' models target accuracy, not performance) to a few milliseconds per transistor. The contributions of this dissertation speed up the models many times over, first by simplifying the computations as far as possible (e.g., only the peak degradation values are required, not every value over time) and then by exploiting the parallelism of today's computer hardware. Both approaches increase evaluation speed without affecting the accuracy of the computation.

Step 3 integrates these accelerated degradation models into the standard tools. The standard tools currently consider only best-case, typical, and worst-case standard cells (digital) or transistors (analog). These three cell/transistor types are characterized by the foundry (semiconductor manufacturer) through elaborate experiments. Because only these three types are characterized, the tools perform no reliability estimation for a specific application (temperature, voltage, activity). Simulations with degradation models make application-specific estimation possible, but this capability first has to be integrated; this integration is one of the contributions of this dissertation.

Step 4 accelerates the standard tools themselves. Digital circuit designs that are not based on standard cells, as well as complex analog circuits, currently cannot be evaluated with analog circuit simulators: their performance does not suffice for simulations of this scale. This dissertation presents techniques to accelerate these tools so that such large circuits can be simulated.

These research contributions, each spanning several publications, enable standard tools to determine the guard band for customer-specific application scenarios. For a given circuit lifetime, temperature, voltage, and activity (switching behavior induced by the software applications), the effects of transistor degradation can be evaluated, and the required guard band, neither under- nor overestimated, can be determined. This application-specific guard band guarantees the reliability and functionality of the circuit for exactly that application at minimal performance cost.
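
The guard-band argument above lends itself to a small illustration. The following Python sketch evaluates a simplified power-law aging model with Arrhenius temperature acceleration, vectorized over an entire netlist at once in the spirit of Step 2 (peak values only, data-parallel evaluation). All constants and scenarios are hypothetical placeholders, not the unified physics-based models the dissertation develops.

```python
# Sketch: application-specific guard-band estimation with a simplified
# power-law aging model. All parameters are illustrative placeholders.
import numpy as np

K_BOLTZMANN = 8.617e-5  # eV/K

def peak_delta_vth(t_years, temp_k, vdd, a=0.005, ea=0.1, n=0.2):
    """Peak threshold-voltage shift (V) after t_years of stress.

    Power law in time with Arrhenius temperature acceleration -- a common
    first-order approximation of BTI-style aging. Only the peak value is
    computed (no full time evolution), mirroring the Step 2 simplification.
    """
    t_sec = t_years * 365.0 * 24.0 * 3600.0
    return a * vdd * np.exp(-ea / (K_BOLTZMANN * temp_k)) * t_sec**n

# Vectorized over a whole netlist at once (Step 2's parallelism): one
# array operation instead of one minutes-long evaluation per transistor.
vdd = np.full(1_000_000, 0.8)                  # per-transistor stress voltage

worst_case = peak_delta_vth(25.0, 398.0, vdd)  # 125 C, 25-year requirement
consumer   = peak_delta_vth(2.0, 318.0, vdd)   # 45 C, 2-year warranty

# A smaller worst-case shift permits a proportionally smaller timing guard band.
print(f"worst-case dVth: {worst_case.max():.4f} V")
print(f"consumer   dVth: {consumer.max():.4f} V")
```

Even in this toy model, the consumer scenario yields a far smaller threshold shift than the worst case, which is exactly the headroom an application-specific guard band reclaims.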

    A novel deep submicron bulk planar sizing strategy for low energy subthreshold standard cell libraries

    Funding was provided by the Engineering and Physical Sciences Research Council (EPSRC) and Arm Ltd in the form of grants and studentships. This work investigates bulk planar deep submicron semiconductor physics in an attempt to improve standard cell libraries aimed at operation in the subthreshold regime and in Ultra Wide Dynamic Voltage Scaling schemes. The current state of research in the field is examined, with particular emphasis on how subthreshold physical effects degrade robustness, variability and performance. How prevalent these physical effects are in a commercial 65nm library is then investigated by extensive modeling of a BSIM4.5 compact model. Three distinct sizing strategies emerge; cells of each strategy are laid out, and post-layout parasitically extracted models are simulated to determine the advantages and disadvantages of each. Full custom ring oscillators are designed and manufactured. Measured results reveal a close correlation with the simulated results, with frequency improvements of up to 2.75X/2.43X observed for RVT/LVT devices respectively. The experiment provides the first silicon evidence of the improvement capability of the Inverse Narrow Width Effect over a wide supply voltage range, as well as of a mechanism of additional temperature stability in the subthreshold regime. A novel sizing strategy is proposed and pursued to determine whether it can produce a superior complex circuit design using a commercial digital synthesis flow. Two 128-bit AES cores are synthesized from the novel sizing strategy and compared against a third AES core synthesized from a state-of-the-art subthreshold standard cell library used by ARM. Results show improvements in energy-per-cycle of up to 27.3% and frequency improvements of up to 10.25X. The novel subthreshold sizing strategy proves superior over a temperature range of 0 °C to 85 °C, with a nominal (20 °C) improvement in energy-per-cycle of 24% and a frequency improvement of 8.65X. A comparison to prior art is then performed, and valid cases are presented where the proposed sizing strategy would be a candidate to produce superior subthreshold circuits.
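
For context, the sizing trade-offs discussed here stem from the textbook first-order subthreshold conduction equation (the work itself relies on full BSIM4.5 compact models rather than this approximation):

```latex
I_{D} \;=\; I_{0}\,\frac{W}{L}\,
      \exp\!\left(\frac{V_{GS}-V_{th}}{n\,V_{T}}\right)
      \left(1 - e^{-V_{DS}/V_{T}}\right),
\qquad V_{T} = \frac{kT}{q}
```

Because drain current depends exponentially on V_th, small threshold-voltage shifts from process variation or narrow-width effects translate into large spreads in delay and energy, which is why the choice of transistor sizing strategy matters so much in this regime.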

    Aging sensor for CMOS memory cells (Sensor de envelhecimento para células de memória CMOS)

    Master's dissertation, Engineering and Technology, Instituto Superior de Engenharia, Universidade do Algarve, 2016. Complementary Metal Oxide Semiconductor (CMOS) memories occupy a significant share of integrated-circuit area and, as fabrication technologies scale to ever smaller dimensions, performance and reliability problems arise. Effects such as BTI (Bias Temperature Instability), TDDB (Time Dependent Dielectric Breakdown), HCI (Hot Carrier Injection), and EM (Electromigration) degrade the physical parameters of field-effect transistors (MOSFETs), changing their electrical properties over time. BTI can be subdivided into NBTI (Negative BTI) and PBTI (Positive BTI). NBTI dominates the degradation and aging of CMOS transistors, affecting PMOS transistors, while PBTI is especially relevant to the degradation of NMOS transistors. The degradation caused by these effects manifests itself as an increase in the magnitude of the threshold voltage |Vth| over time. This transistor degradation is referred to as aging; its effects are cumulative and have a large impact on circuit performance, particularly when other parametric variations occur. Such additional parametric variations include process (P), voltage (V), and temperature (T) variations, referred to collectively, together with aging, as PVTA (Process, Voltage, Temperature and Aging). Random access memory (RAM) cells, in particular static (SRAM) and dynamic (DRAM) memories, have precise read and write times. As memory cells age, due to the degradation of the MOSFETs' properties, the performance of the cells degrades as well. This performance degradation results from slow transitions, caused by aged MOSFETs that switch later than new ones. In memories, slow transitions can translate into slower reads and writes, as well as changes in the memory's storage capability. This property can be expressed through the static noise margin (SNM). The SNM shrinks as the MOSFETs age; when the SNM is low, the cell loses its storage capability and becomes more vulnerable to noise sources. The SNM is therefore a figure that allows benchmarking and comparing a memory's characteristics under aging or other parametric variations. The aging of CMOS memories thus translates into errors over time, which is undesirable, especially in critical systems. The work presented in this dissertation aims to develop an aging and performance sensor for CMOS memories that detects, and signals externally, the aging of SRAM memory cells through constant monitoring of their performance. The aging and performance sensor is connected to the memory cell's bit line and actively monitors the memory's read and write operations.
The aging sensor consists of two blocks: a transition detector and a pulse detector. The transition detector comprises eight inverters and an XOR gate built from pass gates. The inverters have different P/N transistor size ratios, so they switch at different voltage levels. When inverters with different switching voltages are driven by the same input signal and connected to an XOR gate, an output pulse is generated whenever the bit line toggles. The pulse duration is therefore proportional to the transition time of the input signal, which in this case stems from the memory's read and write operations. As aging occurs and transitions slow down, the pulses last longer than those generated in a new SRAM. The generated pulses pass through a delay element, which delays and then inverts them, guaranteeing that their duration is sufficient for detection. The resulting pulse feeds the next block of the aging and performance sensor, a pulse-detector circuit. The pulse detector implements a CMOS NOR controlled by a clock signal and by the inverted pulses. When both NOR inputs are '0', the output is '1', thereby creating a detection window. The sensor is adjusted in each implementation so that, in a new memory cell, the inverted pulses are aligned in time with the clock pulses. This adjustment is made at design time, as a function of the cell's required operating frequency, both by sizing the delay element (tuning its delay) and by setting the clock period. As the circuits age and transistor transitions become slower, the pulse duration increases and eventually enters the detection window, raising a flag at the sensor output. Thus, if unstable read or write operations occur, i.e., operations whose execution times exceed expectations or whose logic levels are degraded, the aging and performance sensor outputs '1', signaling critical performance for that operation; otherwise the output is '0', indicating that no performance error was observed in the read and write operations. The sensor's transistors are sized according to the implementation; for instance, the selected transistor models, supply voltages, or the number of memory cells connected to the bit line influence the sensor's prior sizing, since both the memory's performance and the sensor's behavior depend on the operating conditions. Previously proposed solutions in the literature, notably the on-chip aging sensor OCAS, can detect NBTI-induced aging in an SRAM. However, OCAS applies only to a set of SRAM cells connected to a bit line; it cannot be applied individually to other memory cells such as a DRAM, and it does not cover PBTI.
Another existing solution, the Scout flip-flop used in ASIC (Application Specific Integrated Circuit) applications in synchronous digital circuits, also acts as a local performance sensor and responds predictively when monitoring delay faults, likewise relying on detection windows. That solution was not designed to monitor read and write operations in SRAM and DRAM memories; nevertheless, in its mode of operation it is the closest to the solution proposed in this work, since it too is based on flagging delayed signals. In this dissertation, SPICE (Simulation Program with Integrated Circuit Emphasis) simulations are used to validate and test the aging and performance sensor. The case study to which the sensor is applied is a six-transistor CMOS SRAM, together with its peripheral circuits, namely the sense amplifier and the precharge-and-equalization circuit, developed in 65nm and 22nm CMOS technology using the Berkeley Predictive Technology Models (PTM) for the MOSFETs. The sensor is developed and tested at 65nm and 22nm with the PTM models, allowing the proposed aging and performance sensor to be characterized and also showing how aging degrades the SRAM's read and write operations, as well as its storage capability and robustness to noise. Finally, the simulations presented prove that the aging and performance sensor developed in this master's thesis successfully monitors the performance and aging of SRAM memory circuits, overcoming the challenges of previously available solutions for memory aging. It was verified that, in the presence of aging causing a degradation of 10% or more, the sensor effectively detects the performance degradation and signals the errors. Its use in DRAM memories, although possible, was not tested in this dissertation and is left for future work.
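
The detection principle described above can be condensed into a few lines of behavioral code. This Python sketch only mimics the timing relationship (the pulse width tracks the bit-line transition time, and a flag is raised once the pulse outlasts the clock-defined window); the timing numbers are assumed for illustration, and the real sensor is of course a transistor-level circuit.

```python
# Behavioral sketch of the pulse-based detection window. Timing numbers
# are assumed for illustration; the real sensor is a transistor-level design.

def sensor_flags(transition_ps: float, window_ps: float = 110.0) -> bool:
    """Return True when the pulse produced by the transition detector
    outlasts the clock-defined detection window.

    The inverter/XOR front end emits a pulse whose width tracks the
    bit-line transition time, so aged (slower) transitions widen it.
    """
    pulse_ps = transition_ps            # pulse width ~ transition time
    return pulse_ps > window_ps

fresh_cell_ps = 100.0                   # assumed transition time, new cell
aged_cell_ps = fresh_cell_ps * 1.10     # a 10% slowdown from BTI aging

assert not sensor_flags(fresh_cell_ps)  # aligned with the window: no flag
assert sensor_flags(aged_cell_ps)       # >=10% degradation is signaled
```

The design-time adjustment described in the abstract corresponds here to choosing `window_ps` so that a fresh cell sits just outside the detection window.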

    Energy and Reliability in Future NOC Interconnected CMPS

    In this dissertation, I explore energy and reliability in future NoC (Network-on-Chip) interconnected CMPs (chip multiprocessors), as they have become a first-order constraint in future CMP design. In the first part, we target the root cause of network energy consumption through techniques that reduce link- and router-level switching activity. We specifically focus on memory subsystem traffic, as it comprises the bulk of NoC load in a CMP. By transmitting only the flits that contain words predicted to be useful by a novel spatial locality predictor, our scheme seeks to reduce network activity. We aim to further lower NoC energy consumption through microarchitectural mechanisms that inhibit datapath switching activity caused by unused words in individual flits. Using simulation-based performance studies and detailed energy models based on synthesized router designs and different link wire types, we show that (a) the prediction mechanism achieves very high accuracy, with an average rate of false-unused prediction of just 2.5%; (b) the combined NoC energy savings enabled by the predictor and microarchitectural support are 36% on average and up to 57% in the best case; and (c) there is no system performance penalty as a result of this technique. In the second part, we present a method for dynamic voltage/frequency scaling of networks-on-chip and last-level caches in CMP designs, where the shared resources form a single voltage/frequency domain. We develop a new monitoring and control technique and validate it by running PARSEC benchmarks through full-system simulations. These techniques reduce energy-delay product by 46% compared to state-of-the-art prior work. In the third part, we develop critical path models for HCI- and NBTI-induced wear under stress caused by realistic workload conditions, and apply them to the interconnect microarchitecture. A key finding from this modeling is that, counter to prevailing wisdom, wearout in the CMP on-chip interconnect is correlated with a lack of load observed in the NoC routers, rather than with high load. We then develop a novel wearout-decelerating scheme in which routers under low load have their wearout-sensitive components exercised without significantly impacting the router's cycle time, pipeline depth, area, or power consumption. We subsequently show that the proposed design yields a 13.8∼65× increase in CMP lifetime.
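
To make the first contribution concrete, here is a minimal table-based sketch of the idea of predicting which words of a cache block will actually be used, so that only those flits are transmitted. The PC-indexed table and its organization are illustrative assumptions, not the dissertation's actual predictor design.

```python
# Illustrative sketch of a spatial-locality ("used word") predictor:
# transmit only the flits holding words predicted useful.
from collections import defaultdict

WORDS_PER_BLOCK = 8
ALL_WORDS = (1 << WORDS_PER_BLOCK) - 1

class UsedWordPredictor:
    def __init__(self):
        # Indexed by requesting PC; default to "all words useful" so an
        # untrained entry never omits data. A real design also needs a
        # recovery fetch for the rare false-unused predictions.
        self.table = defaultdict(lambda: ALL_WORDS)

    def predict(self, pc: int) -> int:
        """Bitmask of words to transmit for a block fetched at this PC."""
        return self.table[pc]

    def train(self, pc: int, used_mask: int) -> None:
        """Record which words were actually touched before eviction."""
        self.table[pc] = used_mask

pred = UsedWordPredictor()
pred.train(pc=0x400, used_mask=0b00000011)  # only the first two words used
mask = pred.predict(pc=0x400)
flits_sent = bin(mask).count("1")           # 2 of 8 data words cross the NoC
print(flits_sent)
```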

    Reliability in the face of variability in nanometer embedded memories

    In this thesis, we have investigated the impact of parametric variations on the behaviour of one performance-critical processor structure: embedded memories. As variations manifest as a spread in power and performance, as a first step we propose a novel modeling methodology that helps evaluate the impact of circuit-level optimizations on architecture-level design choices. Choices made at the design stage ensure that conflicting requirements from higher levels are decoupled. We then complement such design-time optimizations with a runtime mechanism that takes advantage of adaptive body-biasing to lower power whilst improving performance in the presence of variability. Our proposal uses novel, fully digital variation-tracking hardware based on embedded DRAM (eDRAM) cells to monitor run-time changes in cache latency and leakage. A special fine-grain body-bias generator uses the measurements to generate the optimal body bias needed to meet the required yield targets. A novel variation-tolerant and soft-error-hardened eDRAM cell is also proposed as an alternative candidate for replacing existing SRAM-based designs in latency-critical memory structures. In the ultra-low-power domain, where reliable operation is limited by the minimum voltage of operation (Vddmin), we analyse the impact of failures on cache functional margin and functional yield. Towards this end, we have developed a fully automated tool (INFORMER) capable of estimating memory-wide metrics such as power, performance and yield accurately and rapidly. Using the developed tool, we then evaluate the effectiveness of a new class of hybrid techniques in improving cache yield through failure prevention and correction. Having a holistic perspective of memory-wide metrics helps us arrive at design choices optimized simultaneously for the multiple metrics needed to maintain lifetime requirements.
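
As a rough illustration of the kind of estimate a memory-wide tool produces, the following Monte Carlo sketch computes functional yield under random threshold-voltage variation. The margin model and all parameters are invented for illustration and do not reflect INFORMER's internals.

```python
# Monte Carlo sketch of functional-yield estimation under Vth variation.
# Toy margin model; parameters are assumptions, not INFORMER's models.
import random

def cell_fails(vddmin: float, vth_sigma: float) -> bool:
    """A cell fails when its random threshold-voltage shift consumes
    the entire read margin available at the operating voltage."""
    dvth = random.gauss(0.0, vth_sigma)
    margin = (vddmin - 0.5) - abs(dvth)   # assumed nominal margin model
    return margin < 0.0

def functional_yield(cells: int, vddmin: float, vth_sigma: float,
                     trials: int = 500) -> float:
    """Fraction of simulated memories in which every cell still works."""
    good = sum(
        all(not cell_fails(vddmin, vth_sigma) for _ in range(cells))
        for _ in range(trials)
    )
    return good / trials

# Lowering Vddmin shrinks the margin, and yield drops sharply.
for vddmin in (0.8, 0.75, 0.7):
    print(vddmin, functional_yield(cells=1024, vddmin=vddmin, vth_sigma=0.06))
```

Even this toy model reproduces the qualitative behaviour the thesis targets: yield stays near 1 with comfortable margin and collapses rapidly as Vddmin approaches the failure region, which is why failure prevention and correction techniques pay off.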

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It introduces the most prominent reliability concerns from today's points of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, soft errors, etc. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are pro-actively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.

    Miniaturized Transistors, Volume II

    In this book, we aim to address the ever-advancing progress in microelectronic device scaling. Complementary Metal-Oxide-Semiconductor (CMOS) devices continue to undergo miniaturization, irrespective of the seeming physical limitations, helped by advancing fabrication techniques. We observe that miniaturization does not always refer to the latest technology node for digital transistors. Rather, by applying novel materials and device geometries, a significant reduction in the size of microelectronic devices can be achieved for a broad set of applications. The achievements made in the scaling of devices for applications beyond digital logic (e.g., high power, optoelectronics, and sensors) are taking the forefront in microelectronic miniaturization. Furthermore, all these achievements are assisted by improvements in the simulation and modeling of the involved materials and device structures. In particular, process and device technology computer-aided design (TCAD) has become indispensable in the design cycle of novel devices and technologies. It is our sincere hope that the results provided in this Special Issue prove useful to scientists and engineers who find themselves at the forefront of this rapidly evolving and broadening field. Now, more than ever, it is essential to look for the next disruptive technologies that will allow transistor miniaturization well beyond silicon's physical limits and the current state of the art. This requires a broad attack, including studies of novel and innovative designs as well as emerging materials, which are becoming more application-specific than ever before.

    A design methodology for robust, energy-efficient, application-aware memory systems

    Memory design is a crucial component of VLSI system design from the area, power and performance perspectives. To meet increasingly challenging system specifications, architecture-, circuit- and device-level innovations are required for existing memory technologies, and emerging memory solutions are widely explored to cater to strict budgets. This thesis presents design methodologies for custom memory design with the objective of power-performance benefits for specific applications. Taking STT-RAM (spin-transfer torque random access memory) as an example of an emerging memory candidate, the design space is explored to find the optimal-energy design solution. A thorough thermal reliability study is performed to estimate detection reliability challenges, and circuit solutions are proposed to ensure reliable operation. Adoption of the application-specific optimal-energy solution is shown to yield considerable energy benefits in a read-heavy application called MBC (memory-based computing). Circuit-level customizations are studied for volatile SRAM (static random access memory), providing an improved energy-delay product (EDP) for the same MBC application. Memory design also has to be aware of upcoming challenges not only from the nature of the application but also from the packaging front. Taking 3D die-folding as an example, the SRAM performance shift under die-folding is illustrated. Overall, the thesis demonstrates how knowledge of the system and packaging can help in achieving power-efficient and high-performance memory design.
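
Energy-delay product, the figure of merit used above, is simply the product of energy per operation and delay. The toy comparison below uses invented numbers purely to show how two design points are ranked; they are not results from the thesis.

```python
# Energy-delay product (EDP) as a ranking metric for memory design points.
# The numbers below are invented for illustration, not thesis results.

def edp(energy_pj: float, delay_ns: float) -> float:
    """Energy-delay product: lower is better for energy-efficient designs."""
    return energy_pj * delay_ns

baseline   = edp(energy_pj=12.0, delay_ns=2.0)   # hypothetical stock SRAM
customized = edp(energy_pj=10.0, delay_ns=1.8)   # hypothetical MBC-tuned SRAM

improvement = 1.0 - customized / baseline
print(f"EDP improvement: {improvement:.0%}")     # 25% in this toy example
```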

    Journal of Telecommunications and Information Technology, 2007, nr 2

    Quarterly.