221 research outputs found

    Function Implementation in a Multi-Gate Junctionless FET Structure

    Dissertation advisor: Mostafizur Rahman. Dissertation (Ph.D.), Department of Computer Science and Electrical Engineering and Department of Physics and Astronomy, University of Missouri--Kansas City, 2023. Includes bibliographical references (pages 95-117). This dissertation explores designing and implementing a multi-gate junctionless field-effect transistor (JLFET) structure and its potential applications beyond conventional devices. The JLFET is a promising alternative to conventional transistors due to its simplified fabrication process and improved electrical characteristics. However, previous research has focused primarily on the device's performance at the individual transistor level, neglecting its potential for implementing complex functions. This dissertation fills this research gap by investigating the function implementation capabilities of the JLFET structure and proposing novel circuit designs based on this technology. The first part of this dissertation presents a comprehensive review of the existing literature on JLFETs, including their fabrication techniques, operating principles, and performance metrics. It highlights the advantages of JLFETs over traditional metal-oxide-semiconductor field-effect transistors (MOSFETs) and discusses the challenges associated with their implementation. Additionally, the review explores the limitations of conventional transistor technologies, emphasizing the need to explore alternative device architectures. Building upon this theoretical foundation, the dissertation presents a detailed analysis of the multi-gate JLFET structure and its potential for realizing advanced functions. The study explores the impact of different design parameters, such as channel length, gate oxide thickness, and doping profiles, on device performance. It investigates the trade-offs between power consumption, speed, and noise immunity, and proposes design guidelines for optimizing the function implementation capabilities of the JLFET. To demonstrate the practical applicability of the JLFET structure, this dissertation introduces several novel circuit designs based on this technology. These designs leverage the unique characteristics of the JLFET, such as its steep subthreshold slope and improved on/off current ratio, to implement complex functions efficiently. The proposed circuits include arithmetic units, memory cells, and digital logic gates. Detailed simulations and analyses are conducted to evaluate their performance, power consumption, and scalability. Furthermore, this dissertation explores the potential of the JLFET structure for emerging technologies, such as neuromorphic computing and bioelectronics. It investigates how the JLFET can be employed to realize energy-efficient and biocompatible devices for applications in artificial intelligence and biomedical engineering. The study also examines the compatibility of the JLFET with various materials and substrates, as well as its integration with other functional components. In conclusion, this dissertation contributes to the field of nanoelectronics by providing a comprehensive investigation into the function implementation capabilities of the multi-gate JLFET structure. It highlights the potential of this device beyond its individual transistor performance and proposes novel circuit designs based on this technology.
The findings of this research pave the way for the development of advanced electronic systems that are more energy-efficient, faster, and compatible with emerging applications in diverse fields.
    Contents: Introduction -- Literature review -- Crosstalk principle -- Experiment of crosstalk -- Device architecture -- Simulation & results -- Conclusion
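The two figures of merit the abstract highlights, the steep subthreshold slope and the on/off current ratio, can both be read off a transfer (Id-Vg) sweep. The Python sketch below is purely illustrative, using synthetic data with an assumed ~65 mV/decade behaviour; it is not code or data from the dissertation.

```python
import numpy as np

def subthreshold_swing(vgs, i_d):
    """Estimate subthreshold swing in mV/decade from an Id-Vg sweep."""
    slopes = np.diff(vgs) / np.diff(np.log10(i_d))  # dVgs / d(log10 Id), volts per decade
    return 1e3 * np.min(np.abs(slopes))

def on_off_ratio(i_d):
    """On/off current ratio over the swept gate-voltage range."""
    return np.max(i_d) / np.min(i_d)

# Synthetic transfer curve assuming ~65 mV/decade behaviour (illustrative only)
vgs = np.linspace(0.0, 0.4, 41)
i_d = 1e-12 * 10 ** (vgs / 0.065)

print(f"SS ≈ {subthreshold_swing(vgs, i_d):.1f} mV/decade")
print(f"Ion/Ioff ≈ {on_off_ratio(i_d):.1e}")
```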

    A novel deep submicron bulk planar sizing strategy for low energy subthreshold standard cell libraries

    Funding was provided by the Engineering and Physical Sciences Research Council (EPSRC) and Arm Ltd in the form of grants and studentships. This work investigates bulk planar deep submicron semiconductor physics in an attempt to improve standard cell libraries aimed at operation in the subthreshold regime and in Ultra Wide Dynamic Voltage Scaling schemes. The current state of research in the field is examined, with particular emphasis on how subthreshold physical effects degrade robustness, variability and performance. How prevalent these physical effects are in a commercial 65nm library is then investigated by extensive modeling of a BSIM4.5 compact model. Three distinct sizing strategies emerge; cells of each strategy are laid out and post-layout parasitically extracted models are simulated to determine the advantages and disadvantages of each. Full custom ring oscillators are designed and manufactured. Measured results reveal a close correlation with the simulated results, with frequency improvements of up to 2.75X/2.43X observed for RVT/LVT devices respectively. The experiment provides the first silicon evidence of the improvement capability of the Inverse Narrow Width Effect over a wide supply voltage range, as well as a mechanism of additional temperature stability in the subthreshold regime. A novel sizing strategy is proposed and pursued to determine whether it is able to produce a superior complex circuit design using a commercial digital synthesis flow. Two 128-bit AES cores are synthesized from the novel sizing strategy and compared against a third AES core synthesized from a state-of-the-art subthreshold standard cell library used by ARM. Results show improvements in energy-per-cycle of up to 27.3% and frequency improvements of up to 10.25X. The novel subthreshold sizing strategy proves superior over a temperature range of 0 °C to 85 °C, with a nominal (20 °C) improvement in energy-per-cycle of 24% and a frequency improvement of 8.65X. A comparison to prior art is then performed. Valid cases are presented where the proposed sizing strategy would be a candidate to produce superior subthreshold circuits.
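The reason sizing in the subthreshold regime is so sensitive to variability is visible in the textbook exponential current model: a threshold-voltage shift of a few tens of millivolts changes the drive current by large factors. The sketch below uses generic, assumed parameter values (I0, n, Vth); it is not taken from the thesis or from the BSIM4.5 model it employs.

```python
import numpy as np

KT_Q = 0.0259   # thermal voltage at ~300 K, in volts
N    = 1.4      # subthreshold slope factor (assumed)
I0   = 1e-7     # process-dependent prefactor in amperes (assumed)

def id_subthreshold(vgs, vth, w_over_l):
    """Textbook exponential subthreshold drain-current model."""
    return I0 * w_over_l * np.exp((vgs - vth) / (N * KT_Q))

# How a +/-30 mV threshold shift moves the drive current at Vdd = 0.3 V
vdd, vth_nom, w_over_l = 0.3, 0.35, 4.0
for dvth in (-0.03, 0.0, +0.03):
    i_on = id_subthreshold(vdd, vth_nom + dvth, w_over_l)
    print(f"dVth = {dvth * 1e3:+5.0f} mV  ->  Id = {i_on:.2e} A")
```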

    Design Automation and Application for Emerging Reconfigurable Nanotechnologies

    In the last few decades, two major phenomena have revolutionized the electronic industry – the ever-increasing dependence on electronic circuits and Complementary Metal Oxide Semiconductor (CMOS) downscaling. These two phenomena have complemented each other: while electronics in general have demanded more computations per functional unit, CMOS downscaling has aptly supported such needs. However, while the computational demand is still rising exponentially, CMOS downscaling is reaching its physical limits. Hence, the need to explore viable emerging nanotechnologies is more imperative than ever. This thesis focuses on streamlining the existing design automation techniques for a class of emerging reconfigurable nanotechnologies. Transistors based on this technology exhibit duality in conduction, i.e. they can be configured dynamically either as a p-type or an n-type device on the application of an external bias. Owing to this dynamic reconfiguration, these transistors are also referred to as Reconfigurable Field-Effect Transistors (RFETs). Exploring and developing new technologies, just like CMOS, requires tackling two main challenges – first, the design automation flow has to be modified to enable tailor-made circuit designs; second, possible application opportunities should be explored where such technologies can outperform the existing CMOS technologies. This thesis targets the above two objectives for emerging reconfigurable nanotechnologies by proposing approaches for enabling an Electronic Design Automation (EDA) flow for circuits based on RFETs and exploring hardware security as an application that exploits the transistor-level dynamic reconfiguration offered by this technology. This thesis explains the bottom-up approach adopted to propose a logic synthesis flow by identifying new logic gates and circuit design paradigms that can particularly exploit the dynamic reconfiguration offered by these novel nanotechnologies. This led to the subsequent need to find a natural Boolean logic abstraction for emerging reconfigurable nanotechnologies, as it is shown that the existing abstraction of negative unate logic for CMOS technologies is sub-optimal for RFETs-based circuits. In this direction, it has been shown that duality in Boolean logic is a natural abstraction for this technology and can truly represent the duality in conduction offered by individual transistors. Finding this abstraction paved the way for defining suitable primitives and proposing various algorithms for logic synthesis and technology mapping. The next step is to explore a compatible physical synthesis flow for emerging reconfigurable nanotechnologies. For silicon nanowire-based RFETs, .lef and .lib files have been provided, enabling an end-to-end flow to generate a .GDSII file for circuits exclusively based on RFETs. Additionally, new approaches have been explored to improve placement and routing for circuits based on reconfigurable nanotechnologies. It has been demonstrated how these approaches led to superior results as compared to the native flow meant for CMOS. Lastly, the unique property of transistor-level reconfiguration offered by RFETs is utilized to implement efficient Intellectual Property (IP) protection schemes against adversarial attacks. The ability to control the conduction of individual transistors is arguably one of the most impactful features of this technology and fits naturally into the paradigm of security measures.
Prior security schemes based on CMOS technology often come with large overheads in terms of area, power, and delay. In contrast, RFETs-based hardware security measures such as logic locking, split manufacturing, etc., proposed in this thesis, demonstrate affordable security solutions with low overheads. Overall, this thesis lays a strong foundation for the two main objectives – design automation, and hardware security as an application – to push emerging reconfigurable nanotechnologies toward commercial integration. Additionally, the contributions of this thesis are made available under open-source licenses so as to foster new research directions and collaborations.
Contents: Abstract, List of Figures, List of Tables
1 Introduction 1.1 What are emerging reconfigurable nanotechnologies? 1.2 Why does this technology look so promising? 1.3 Electronics Design Automation 1.4 The game of see-saw: key challenges vs benefits for emerging reconfigurable nanotechnologies 1.4.1 Abstracting ambipolarity in logic gate designs 1.4.2 Enabling electronic design automation for RFETs 1.4.3 Enhanced functionality: a suitable fit for hardware security applications 1.5 Research questions 1.6 Entire RFET-centric EDA Flow 1.7 Key Contributions and Thesis Organization
2 Preliminaries 2.1 Reconfigurable Nanotechnology 2.1.1 1D devices 2.1.2 2D devices 2.1.3 Factors favoring circuit-flexibility 2.2 Feasibility aspects of RFET technology 2.3 Logic Synthesis Preliminaries 2.3.1 Circuit Model 2.3.2 Boolean Algebra 2.3.3 Monotone Function and the property of Unateness 2.3.4 Logic Representations
3 Exploring Circuit Design Topologies for RFETs 3.1 Contributions 3.2 Organization 3.3 Related Works 3.4 Exploring design topologies for combinational circuits: functionality-enhanced logic gates 3.4.1 List of Combinational Functionality-Enhanced Logic Gates based on RFETs 3.4.2 Estimation of gate delay using the logical effort theory 3.5 Invariable design of Inverters 3.6 Sequential Circuits 3.6.1 Dual edge-triggered TSPC-based D-flip flop 3.6.2 Exploiting RFET’s ambipolarity for metastability 3.7 Evaluations 3.7.1 Evaluation of combinational logic gates 3.7.2 Novel design of 1-bit ALU 3.7.3 Comparison of the sequential circuit with an equivalent CMOS-based design 3.8 Concluding remarks
4 Standard Cells and Technology Mapping 4.1 Contributions 4.2 Organization 4.3 Related Work 4.4 Standard cells based on RFETs 4.4.1 Interchangeable Pull-Up and Pull-Down Networks 4.4.2 Reconfigurable Truth-Table 4.5 Distilling standard cells 4.6 HOF-based Technology Mapping Flow for RFETs-based circuits 4.6.1 Area adjustments through inverter sharings 4.6.2 Technology Mapping Flow 4.6.3 Realizing Parameters For The Generic Library 4.6.4 Defining RFETs-based Genlib for HOF-based mapping 4.7 Experiments 4.7.1 Experiment 1: Distilling standard-cells from a benchmark suite 4.7.2 Experiment 2A: HOF-based mapping 4.7.3 Experiment 2B: Using the distilled standard-cells during mapping 4.8 Concluding Remarks
5 Logic Synthesis with XOR-Majority Graphs 5.1 Contributions 5.2 Organization 5.3 Motivation 5.4 Background and Preliminaries 5.4.1 Terminologies 5.4.2 Self-duality in NPN classes 5.4.3 Majority logic synthesis 5.4.4 Earlier work on XMG 5.4.5 Classification of Boolean functions 5.5 Preserving Self-Duality 5.5.1 During logic synthesis 5.5.2 During versatile technology mapping 5.6 Advanced Logic synthesis techniques 5.6.1 XMG resubstitution 5.6.2 Exact XMG rewriting 5.7 Logic representation-agnostic Mapping 5.7.1 Versatile Mapper 5.7.2 Support of supergates 5.8 Creating Self-dual Benchmarks 5.9 Experiments 5.9.1 XMG-based Flow 5.9.2 Experimental Setup 5.9.3 Synthetic self-dual benchmarks 5.9.4 Cryptographic benchmark suite 5.10 Concluding remarks and future research directions
6 Physical synthesis flow and liberty generation 6.1 Contributions 6.2 Organization 6.3 Background and Related Work 6.3.1 Related Works 6.3.2 Motivation 6.4 Silicon Nanowire Reconfigurable Transistors 6.5 Layouts for Logic Gates 6.5.1 Layouts for Static Functional Logic Gates 6.5.2 Layout for Reconfigurable Logic Gate 6.6 Table Model for Silicon Nanowire RFETs 6.7 Exploring Approaches for Physical Synthesis 6.7.1 Using the Standard Place & Route Flow 6.7.2 Open-source Flow 6.7.3 Concept of Driver Cells 6.7.4 Native Approach 6.7.5 Island-based Approach 6.7.6 Utilization Factor 6.7.7 Placement of the Island on the Chip 6.8 Experiments 6.8.1 Preliminary comparison with CMOS technology 6.8.2 Evaluating different physical synthesis approaches 6.9 Results and discussions 6.9.1 Parameters Which Affect The Area 6.9.2 Use of Germanium Nanowires Channels 6.10 Concluding Remarks
7 Polymorphic Primitives for Hardware Security 7.1 Contributions 7.2 Organization 7.3 The Shift To Explore Emerging Technologies For Security 7.4 Background 7.4.1 IP protection schemes 7.4.2 Preliminaries 7.5 Security Promises 7.5.1 RFETs for logic locking (transistor-level locking) 7.5.2 RFETs for split manufacturing 7.6 Security Vulnerabilities 7.6.1 Realization of short-circuit and open-circuit scenarios in an RFET-based inverter 7.6.2 Circuit evaluation on sub-circuits 7.6.3 Reliability concerns: A consequence of short-circuit scenario 7.6.4 Implication of the proposed security vulnerability 7.7 Analytical Evaluation 7.7.1 Investigating the security promises 7.7.2 Investigating the security vulnerabilities 7.8 Concluding remarks and future research directions
8 Conclusion 8.1 Concluding Remarks 8.2 Directions for Future Work
Appendices A Distilling standard-cells B RFETs-based Genlib C Layout Extraction File (.lef) for Silicon Nanowire-based RFET D Liberty (.lib) file for Silicon Nanowire-based RFET
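A concrete way to see the self-duality property that underpins the XOR-Majority-Graph chapters: a Boolean function f is self-dual when f(x) equals the complement of f applied to the complemented inputs. The short Python check below is an illustrative sketch (the example functions are generic, not from the thesis); note that MAJ and XOR, the XMG primitives, are both self-dual, while plain AND is not.

```python
from itertools import product

def is_self_dual(f, n):
    """f is self-dual iff f(x1..xn) == NOT f(NOT x1 .. NOT xn) for every input."""
    for bits in product((0, 1), repeat=n):
        complemented = tuple(1 - b for b in bits)
        if f(*bits) == f(*complemented):   # equal outputs violate self-duality
            return False
    return True

maj3 = lambda a, b, c: (a & b) | (b & c) | (a & c)   # 3-input majority
xor3 = lambda a, b, c: a ^ b ^ c                     # 3-input parity
and3 = lambda a, b, c: a & b & c

print(is_self_dual(maj3, 3))  # True  -- MAJ is an XMG primitive
print(is_self_dual(xor3, 3))  # True  -- odd-arity XOR is self-dual as well
print(is_self_dual(and3, 3))  # False -- plain AND is not self-dual
```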

    Timing speculation and adaptive reliable overclocking techniques for aggressive computer systems

    Computers have changed our lives beyond our own imagination in the past several decades. The continued and progressive advancements in VLSI technology and numerous micro-architectural innovations have played a key role in the design of spectacular low-cost high performance computing systems that have become omnipresent in today's technology-driven world. Performance and dependability have become key concerns as these ubiquitous computing machines continue to drive our everyday life. Every application has unique demands, as applications run in diverse operating environments. Dependable, aggressive and adaptive systems improve efficiency in terms of speed, reliability and energy consumption. Traditional computing systems run at a fixed clock frequency, which is determined by taking into account the worst-case timing paths, operating conditions, and process variations. Timing speculation based reliable overclocking advocates going beyond worst-case limits to achieve best performance while not avoiding, but detecting and correcting a modest number of timing errors. The success of this design methodology relies on the fact that timing critical paths are rarely exercised in a design, and typical execution happens much faster than the timing requirements dictated by worst-case design methodology. Better-than-worst-case design methodology is advocated by several recent research pursuits, which exploit dependability techniques to enhance computer system performance. In this dissertation, we address different aspects of timing speculation based adaptive reliable overclocking schemes, and evaluate their role in the design of low-cost, high performance, energy efficient and dependable systems. We visualize various control knobs in the design that can be favorably controlled to ensure different design targets. As part of this research, we extend the SPRIT3E, or Superscalar PeRformance Improvement Through Tolerating Timing Errors, framework, and characterize the extent of application dependent performance acceleration achievable in superscalar processors by scrutinizing the various parameters that impact the operation beyond worst-case limits. We study the limitations imposed by short-path constraints on our technique, and present ways to exploit them to maximize performance gains. We analyze the sensitivity of our technique's adaptiveness by exploring the necessary hardware requirements for dynamic overclocking schemes. Experimental analysis based on SPEC2000 benchmarks running on a SimpleScalar Alpha processor simulator, augmented with error rate data obtained from hardware simulations of a superscalar processor, is presented. Even though reliable overclocking guarantees functional correctness, it leads to higher power consumption. As a consequence, reliable overclocking without considering on-chip temperatures will bring down the lifetime reliability of the chip. In this thesis, we analyze how reliable overclocking impacts the on-chip temperature of a microprocessor and evaluate the effects of overheating, due to such reliable dynamic frequency tuning mechanisms, on the lifetime reliability of these systems. We then evaluate the effect of performing thermal throttling, a technique that clamps the on-chip temperature below a predefined value, on system performance and reliability. Our study shows that a reliably overclocked system with dynamic thermal management achieves 25% performance improvement, while lasting for 14 years when operated within 353 K.
Over the past five decades, technology scaling, as predicted by Moore's law, has been the bedrock of semiconductor technology evolution. The continued downscaling of CMOS technology to deep sub-micron gate lengths has been the primary reason for its dominance in today's omnipresent silicon microchips. Even as the transition to the next technology node is indispensable, the initial cost and time associated in doing so presents a non-level playing field for the competitors in the semiconductor business. As part of this thesis, we evaluate the capability of speculative reliable overclocking mechanisms to maximize performance at a given technology level. We evaluate its competitiveness when compared to technology scaling, in terms of performance, power consumption, energy and energy delay product. We present a comprehensive comparison for integer and floating point SPEC2000 benchmarks running on a simulated Alpha processor at three different technology nodes in normal and enhanced modes. Our results suggest that adopting reliable overclocking strategies will help skip a technology node altogether, or be competitive in the market, while porting to the next technology node. Reliability has become a serious concern as systems embrace nanometer technologies. In this dissertation, we propose a novel fault tolerant aggressive system that combines soft error protection and timing error tolerance. We replicate both the pipeline registers and the pipeline stage combinational logic. The replicated logic receives its inputs from the primary pipeline registers while writing its output to the replicated pipeline registers. The organization of redundancy in the proposed Conjoined Pipeline system supports overclocking, provides concurrent error detection and recovery capability for soft errors, intermittent faults and timing errors, and flags permanent silicon defects. The fast recovery process requires no checkpointing and takes three cycles. Back annotated post-layout gate-level timing simulations, using 45nm technology, of a conjoined two-stage arithmetic pipeline and a conjoined five-stage DLX pipeline processor, with forwarding logic, show that our approach, even under a severe fault injection campaign, achieves near 100% fault coverage and an average performance improvement of about 20%, when dynamically overclocked.
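The core trade-off behind timing-speculative overclocking can be captured with a first-order throughput model: clock faster than the worst case and pay a recovery penalty for the rare cycles that miss timing. The sketch below is an illustrative back-of-the-envelope calculation with assumed numbers, not the SPRIT3E framework or its simulation results; the three-cycle recovery figure simply mirrors the Conjoined Pipeline description above.

```python
def overclock_speedup(f_base_ghz, f_oc_ghz, error_rate, recovery_cycles):
    """First-order throughput gain of reliable overclocking with error recovery.

    error_rate      : fraction of cycles that trigger a timing error
    recovery_cycles : penalty cycles paid for each detected error
    """
    cycles_per_useful_cycle = 1.0 + error_rate * recovery_cycles
    return (f_oc_ghz / f_base_ghz) / cycles_per_useful_cycle

# Illustrative numbers: a 1.5 GHz worst-case clock pushed to 2.0 GHz,
# 1% of cycles hitting a timing error, three-cycle recovery
print(f"net speedup ≈ {overclock_speedup(1.5, 2.0, 0.01, 3):.2f}x")
```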

    Side Channel Information Leakage: Design and Implementation of Hardware Countermeasure

    Deployment of Dynamic Differential Logics (DDL) appears to be a promising choice for providing resistance against leakage of side channel information. However, the resistance provided by these logics is too costly for widespread area-constrained applications. Implementation of a secure DDL-based countermeasure also requires a complex layout methodology for balancing the load at the differential outputs. This thesis, unlike previous logic level approaches, presents a novel exploitation of static and single-ended logic for designing the side channel countermeasure. The proposed technique is used in the implementation of a protected crypto core consisting of the AES “AddRoundKey” and “SubByte” transformations. The test chip including the protected and unprotected crypto cores is fabricated in 180nm CMOS technology. A correlation analysis on the unprotected core reveals the key at the output of the combinational networks and the registers. The quality of the measurements is further improved by introducing an enhanced data capturing method that inserts a minimum power consuming input as a reference vector. In comparison, no key-related information is leaked from the protected core even with an order of magnitude increase in the number of averaged traces. For the first time, fabricated chip results are used to validate a new logic level side channel countermeasure that offers lower area and reduced circuit design complexity compared to the DDL-based countermeasures. This thesis also provides insight into the side channel vulnerability of cryptosystems in sub-90nm CMOS technology nodes. In particular, data dependency of leakage power is analyzed. The number of traces to disclose the key is seen to decrease by 35% from 90nm to 45nm CMOS technology nodes. Analysis shows that the temperature dependency of the subthreshold leakage has an important role in increasing the ability to attack future nanoscale crypto cores. For the first time, the effectiveness of a circuit-based leakage reduction technique is examined for side channel security. This investigation demonstrates that high threshold voltage transistor assignment improves resistance against information leakage. The analysis initiated in this thesis is crucial for rolling out the guidelines of side channel security for the next generation of cryptosystems.
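The "correlation analysis" used to reveal the key from the unprotected core is, in essence, correlation power analysis (CPA): correlate a hypothetical leakage (for example, the Hamming weight of the SubByte output under each key guess) with the measured traces. The sketch below is a minimal, self-contained illustration on synthetic traces; it uses a stand-in byte permutation instead of the real AES S-box and is not the measurement flow used on the fabricated chip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the AES S-box: any fixed byte permutation illustrates the mechanics
SBOX = rng.permutation(256)

def hamming_weight(x):
    return bin(int(x)).count("1")

def cpa_recover_key_byte(plaintexts, traces):
    """Return the key-byte guess whose hypothetical leakage best correlates with the traces."""
    best_guess, best_corr = None, -1.0
    for guess in range(256):
        hyp = np.array([hamming_weight(SBOX[pt ^ guess]) for pt in plaintexts])
        corr = abs(np.corrcoef(hyp, traces)[0, 1])
        if corr > best_corr:
            best_guess, best_corr = guess, corr
    return best_guess, best_corr

# Synthetic "power traces": leakage under the true key plus Gaussian measurement noise
true_key = 0x3C
plaintexts = rng.integers(0, 256, size=2000)
leakage = np.array([hamming_weight(SBOX[pt ^ true_key]) for pt in plaintexts])
traces = leakage + rng.normal(0.0, 2.0, size=leakage.size)

guess, corr = cpa_recover_key_byte(plaintexts, traces)
print(f"recovered key byte 0x{guess:02X} (|r| = {corr:.2f}), true key 0x{true_key:02X}")
```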

    Defect Induced Aging and Breakdown in High-k Dielectrics

    High-k dielectrics have been employed in metal-oxide-semiconductor field-effect transistors (MOSFETs) since the 45 nm technology node. In the MOSFET industry, Moore's law projects that the feature size of MOSFETs halves roughly every 18 months. Such scaling has not only approached the physical limits of manufacturing but has also raised reliability issues in MOSFETs. After the incorporation of HfO2-based high-k dielectrics, the stacked-oxide gate insulator faces challenging reliability issues due to the vulnerable HfO2 layer, the ultra-thin interfacial SiO2 layer, and the imperfect interface between SiO2 and HfO2. Bias temperature instability (BTI), hot carrier injection (HCI), stress-induced leakage current (SILC), and time-dependent dielectric breakdown (TDDB) are the four most prominent reliability challenges impacting the lifetime of chips in use. In order to fully understand the origins that could potentially challenge the reliability of MOSFETs, defect-induced aging and breakdown of the high-k dielectrics are thoroughly investigated here. BTI aging is found to be related to charging effects from bulk oxide traps and the generation of interface traps associated with Si-H bonds. Constant-voltage-stress (CVS) and ramped-voltage-stress (RVS) induced dielectric breakdown studies have been performed. The breakdown process is regarded as being related to oxygen vacancy generation triggered by hot hole injection from the anode. Post-breakdown conduction studies in RRAM devices have shown irreversible characteristics of the dielectrics, although the resistance could be switched back into a high-resistance state. Doctoral dissertation, Electrical Engineering.

    Predicting power scalability in a reconfigurable platform

    This thesis focuses on the evolution of digital hardware systems. A reconfigurable platform is proposed and analysed based on thin-body, fully-depleted silicon-on-insulator Schottky-barrier transistors with metal gates and silicide source/drain (TBFDSBSOI). These offer the potential for simplified processing that will allow them to reach ultimate nanoscale gate dimensions. Technology CAD was used to show that the threshold voltage in TBFDSBSOI devices will be controllable by gate potentials that scale down with the channel dimensions while remaining within appropriate gate reliability limits. SPICE simulations determined that the magnitude of the threshold shift predicted by TCAD software would be sufficient to control the logic configuration of a simple, regular array of these TBFDSBSOI transistors as well as to constrain its overall subthreshold power growth. Using these devices, a reconfigurable platform is proposed based on a regular 6-input, 6-output NOR LUT block in which the logic and configuration functions of the array are mapped onto separate gates of the double-gate device. A new analytic model of the relationship between power (P), area (A) and performance (T) has been developed based on a simple VLSI complexity metric of the form AT^σ = constant. As σ defines the performance “return” gained as a result of an increase in area, it also represents a bound on the architectural options available in power-scalable digital systems. This analytic model was used to determine that simple computing functions mapped to the reconfigurable platform will exhibit continuous power-area-performance scaling behavior. A number of simple arithmetic circuits were mapped to the array and their delay and subthreshold leakage analysed over a representative range of supply and threshold voltages, thus determining a worst-case range for the device/circuit-level parameters of the model. Finally, an architectural simulation was built in VHDL-AMS. The frequency scaling described by σ, combined with the device/circuit-level parameters, predicts the overall power and performance scaling of parallel architectures mapped to the array.
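The frequency "return" implied by the AT^σ = constant metric follows directly from the algebra: holding the product constant, delay scales as A^(-1/σ), so the achievable frequency scales as A^(1/σ). The tiny sketch below simply evaluates that relationship for a few assumed values of σ; the σ values actually extracted in the thesis are not reproduced here.

```python
def frequency_gain(area_ratio, sigma):
    """Frequency return implied by A * T**sigma = constant.

    If area grows by `area_ratio`, delay must satisfy T2/T1 = area_ratio**(-1/sigma),
    so the achievable frequency scales as f2/f1 = area_ratio**(1/sigma).
    """
    return area_ratio ** (1.0 / sigma)

# Assumed sigma values purely for illustration; sigma is design- and mapping-specific
for sigma in (1.0, 2.0, 3.0):
    print(f"sigma = {sigma}: 4x area -> {frequency_gain(4.0, sigma):.2f}x frequency")
```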

    VLSI Design

    This book provides some recent advances in the design of nanometer VLSI chips. The selected chapters present open problems and challenges, with topics ranging from design tools, new post-silicon devices, GPU-based parallel computing, and emerging 3D integration to antenna design. The book consists of two parts, with chapters such as: VLSI design for multi-sensor smart systems on a chip, Three-dimensional integrated circuits design for thousand-core processors, Parallel symbolic analysis of large analog circuits on GPU platforms, Algorithms for CAD tools VLSI design, A multilevel memetic algorithm for large SAT-encoded problems, etc.

    Degradation Models and Optimizations for CMOS Circuits

    Ensuring the reliability of CMOS circuits is currently one of the greatest challenges in chip and circuit design. With the end of Dennard scaling, every new generation of semiconductor technology increases the electric fields inside the transistors. These stronger electric fields stimulate degradation phenomena (transistor aging, self-heating, noise, etc.), leading to ever stronger degradation of the transistors. As a result, with every new technology generation the transistors suffer increasingly severe deterioration of their electrical parameters. To preserve the functionality and reliability of a circuit, it therefore becomes essential to determine precisely how the weakened transistors affect the circuit. The two most important effects of this degradation are slower switching and increased power consumption. If these effects are not accounted for, the slower switching speed can lead to timing violations (i.e., the circuit cannot complete its computation before the next operation begins) and impair the functionality of the circuit (erroneous outputs, corrupted data, etc.). To account for this deterioration of transistor parameters over time, safety margins (guardbands) are introduced. For example, the clock period of the circuit is artificially lengthened to tolerate slower switching behavior and thus avoid errors. However, this comes at the cost of performance, since a longer clock period means a lower clock frequency. Determining the right guardband is crucial: if it is chosen too small, the circuit will fail; if it is chosen too large, performance is sacrificed unnecessarily. Currently, industry relies on worst-case assumptions for reliability estimation (maximally aged circuit, maximum operating temperature at minimum voltage, worst-case process corner, etc.). This worst-case assumption guarantees that the chip (or integrated circuit) remains functional under all operating conditions that may occur. Moreover, considering the worst case allows many simplifications; for example, the actual operating temperature does not have to be determined, and the worst possible (very high) operating temperature can simply be assumed. Unfortunately, this established practice of worst-case analysis (experimental or simulation-based) can no longer be sustained. It implies such harsh operating conditions (maximum temperature, etc.) and requirements (e.g., 25 years of operation) that the transistors, exposed to ever stronger electric fields, suffer enormous degradation, because the combination of high temperature, high voltage, and the increasing electric fields of each generation makes the degradation phenomena grow steadily. This means that a guardband determined under worst-case assumptions is enormously pessimistic and therefore far too large. This degree of pessimism leads to substantial performance losses that are unnecessary and hence avoidable.
While military circuits, for example, must operate for 25 years under harsh conditions, consumer electronics are operated at lower temperatures and only need to maintain their functionality for the duration of a two-year warranty. For the latter, the guardbands can therefore be made significantly smaller, recovering much of the performance previously given up in the name of reliability. This work aims to provide tailored guardbands for the individual application scenarios of a circuit. For demanding environments such as space applications (where repair is impossible), the worst case remains relevant. In most applications, however, operating conditions are less harsh (e.g., cooling systems keep temperatures lower). Here, guardbands can be tailored and determined in an application-specific way, so that degradation is tolerated exactly and reliability is preserved at minimal cost (performance, etc.). Unfortunately, today's standard design tools are not well equipped for this application-specific determination of guardbands. This work aims to enable standard design tools to meet this need for reliability estimation for arbitrary circuits under arbitrary operating conditions. To this end, we present our research contributions as four steps on the way to application-specific guardbands. Step 1 improves the modeling of the degradation phenomena (transistor aging, self-heating, noise, etc.). The goal of Step 1 is to create a comprehensive, unified model for the degradation phenomena. By using defect modeling from materials science, the underlying physical processes of the degradation phenomena are modeled in order to capture their interactions (e.g., phenomenon A can accelerate phenomenon B) and to create a unified model for simultaneously modeling different phenomena. Furthermore, recently discovered phenomena are also modeled and taken into account. In sum, this enables accurate degradation modeling of transistors while simultaneously considering all essential phenomena. Step 2 accelerates these degradation models from several minutes per transistor (physicists' models aim for accuracy rather than performance) to a few milliseconds per transistor. The research contributions of this dissertation speed up the models many times over by first simplifying the computations as far as possible (e.g., only the peak values of degradation are required, not all values over time) and then exploiting the parallelism of today's computer hardware. Both approaches increase the evaluation speed without affecting the accuracy of the computation. In Step 3, these accelerated degradation models are integrated into the standard tools. The standard tools currently consider only best-case, typical, and worst-case standard cells (digital) or transistors (analog). These three types of cells/transistors are determined by the foundry (semiconductor manufacturer) through elaborate experiments.
Since only these three types are characterized, the tools do not perform reliability estimation for a specific application (temperature, voltage, activity). Simulations with degradation models make such application-specific estimation possible, but this capability first has to be integrated, and this integration is one of the contributions of this dissertation. Step 4 accelerates the standard tools. Digital circuit designs that are not based on standard cells, as well as complex analog circuits, currently cannot be evaluated with analog circuit simulators; their performance is insufficient for such extensive simulations. This dissertation presents techniques to accelerate these tools so that such extensive circuits can be simulated. These research contributions, each spanning several publications, enable standard tools to determine the guardband for customer-specific application scenarios. For a given circuit lifetime, temperature, voltage, and activity (switching behavior driven by software applications), the effects of transistor degradation can be evaluated, and thus the required guardband (neither under- nor overestimated) can be determined. This application-specific guardband guarantees the reliability and functionality of the circuit for exactly this application with minimal performance loss.
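To make the idea of an application-specific guardband concrete, the sketch below combines a textbook power-law BTI aging model with a linear delay sensitivity to the threshold-voltage shift. All parameter values are assumed for illustration and are not taken from the dissertation's models or tools; a full flow would additionally fold in temperature, voltage, and activity, which is exactly what the integrated degradation models of Steps 1-3 provide.

```python
def required_guardband(t_years, a_mv, n, delay_sens_ps_per_mv, t_clk_fresh_ps):
    """Guardband from a power-law BTI aging model, dVth(t) = a_mv * t**n (mV, t in years).

    Path delay is assumed to grow linearly with the threshold-voltage shift.
    All parameters here are illustrative assumptions, not fitted values.
    """
    dvth_mv = a_mv * (t_years ** n)
    extra_delay_ps = delay_sens_ps_per_mv * dvth_mv
    guardband = extra_delay_ps / t_clk_fresh_ps
    return guardband, t_clk_fresh_ps * (1.0 + guardband)

# Two-year consumer scenario vs. a 25-year worst-case assumption (assumed parameters)
for years, label in ((2, "2-year consumer use"), (25, "25-year worst case")):
    gb, t_clk = required_guardband(years, a_mv=30.0, n=0.2,
                                   delay_sens_ps_per_mv=1.0, t_clk_fresh_ps=800.0)
    print(f"{label:<20s}: guardband {gb * 100:4.1f} %, clock period {t_clk:6.1f} ps")
```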
