18 research outputs found

    Analog design for manufacturability: lithography-aware analog layout retargeting

    Get PDF
    As transistor sizes shrink in advanced nanometer technologies, lithography effects have become a dominant contributor to integrated circuit (IC) yield degradation. Random manufacturing variations, such as photolithographic (spot) defects, may cause fatal functional failures, while systematic process variations, such as dose fluctuation and defocus, can distort wafer patterns and in turn ruin circuit performance. This dissertation focuses on yield optimization at the circuit design stage, so-called design for manufacturability (DFM), for analog ICs, an area that traditional DFM solutions have not yet sufficiently addressed. Building on a graph-based analog layout retargeting framework, the dissertation alleviates photolithographic defects and lithography process variations through geometrical layout manipulation operations, including wire widening, wire shifting, process variation band (PV-band) shifting, and optical proximity correction (OPC). The ultimate objective of this research is to develop efficient algorithms and methodologies that achieve lithography-robust analog IC layout design without degrading circuit performance.
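
    To make one of the geometrical operations named above concrete, here is a minimal, hypothetical sketch (not the dissertation's actual algorithm): widening a horizontal wire toward a lithography-friendly target width while respecting a minimum spacing to overlapping neighbor shapes. The Wire representation and widen_wire helper are illustrative assumptions.

        # Hypothetical sketch of lithography-aware wire widening on
        # axis-aligned rectangles; illustrates the operation, not the
        # dissertation's retargeting framework.
        from dataclasses import dataclass

        @dataclass
        class Wire:
            x1: float  # left edge of the horizontal segment's bounding box
            y1: float  # bottom edge
            x2: float  # right edge
            y2: float  # top edge

        def widen_wire(wire, target_width, neighbors, min_spacing):
            # Allowed vertical band, given spacing to overlapping neighbors.
            lo_limit, hi_limit = float("-inf"), float("inf")
            for nb in neighbors:
                if nb.x2 <= wire.x1 or nb.x1 >= wire.x2:
                    continue  # no horizontal overlap -> no spacing constraint
                if nb.y2 <= wire.y1:      # neighbor below the wire
                    lo_limit = max(lo_limit, nb.y2 + min_spacing)
                elif nb.y1 >= wire.y2:    # neighbor above the wire
                    hi_limit = min(hi_limit, nb.y1 - min_spacing)
            grow = max(0.0, target_width - (wire.y2 - wire.y1)) / 2.0
            return Wire(wire.x1, max(wire.y1 - grow, lo_limit),
                        wire.x2, min(wire.y2 + grow, hi_limit))

        # Usage: a neighbor above caps the growth before the target width.
        print(widen_wire(Wire(0.0, 0.0, 10.0, 0.1), 0.3,
                         [Wire(0.0, 0.25, 10.0, 0.35)], 0.1))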

    Analog layout design automation: ILP-based analog routers

    Get PDF
    The shrinking design window and high parasitic sensitivity of advanced technologies impose special challenges on analog and radio frequency (RF) integrated circuit design. In this thesis, we propose a new methodology to address this deficiency based on integer linear programming (ILP), without compromising the ability to handle special constraints in analog routing problems. Distinct from conventional methods, our algorithm uses adaptive resolutions for different routing regions: a higher-resolution routing grid is employed for more congested regions, whereas a lower-resolution grid is adopted for less crowded ones. Moreover, we strengthen its interconnect width control so that electrical nets are routed according to analog constraints while proper interconnect widths simultaneously address acute interconnect parasitics, mismatch minimization, and electromigration effects. In addition, to tackle performance degradation due to layout-dependent effects (LDEs) and to take advantage of optical proximity correction (OPC) for resolution enhancement in subwavelength lithography, we also propose an innovative LDE-aware analog layout migration scheme equipped with our routing methodology. The LDE constraints are first identified with the aid of a special sensitivity analysis and then satisfied during the layout migration process. Afterwards, the electrical nets are routed by an extended OPC-inclusive ILP-based analog router to improve final layout image fidelity while respecting routability and analog constraints. The experimental results demonstrate the effectiveness and efficiency of our proposed methods in terms of both circuit performance and image quality compared to previous works.
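
    As an illustration of the general idea only, the following hypothetical sketch routes a single two-pin net on a small uniform grid with the PuLP ILP solver: binary variables select grid edges under degree constraints, and minimizing total edge cost yields a shortest connected route. The thesis's formulation (adaptive grid resolutions, width control, analog constraints) is far richer; here every edge has unit cost, where an adaptive-resolution router would weight edges by region congestion.

        # Hypothetical minimal ILP routing sketch using PuLP; not the
        # thesis's formulation.
        import pulp

        W, H = 4, 4  # toy uniform routing grid
        nodes = [(x, y) for x in range(W) for y in range(H)]
        edges = [((x, y), (x + 1, y)) for x in range(W - 1) for y in range(H)] + \
                [((x, y), (x, y + 1)) for x in range(W) for y in range(H - 1)]

        use = {e: pulp.LpVariable(f"u_{e[0][0]}_{e[0][1]}_{e[1][0]}_{e[1][1]}",
                                  cat="Binary") for e in edges}
        onpath = {n: pulp.LpVariable(f"p_{n[0]}_{n[1]}", cat="Binary")
                  for n in nodes}

        source, sink = (0, 0), (3, 3)  # the two pins of the net
        prob = pulp.LpProblem("route_one_net", pulp.LpMinimize)
        # Unit edge costs; congestion- or parasitic-aware weights would go here.
        prob += pulp.lpSum(use.values())
        for n in nodes:
            deg = pulp.lpSum(use[e] for e in edges if n in e)
            # Pins have degree 1; any other node is either unused (degree 0)
            # or passed through (degree 2). Minimization removes stray cycles.
            prob += (deg == 1) if n in (source, sink) else (deg == 2 * onpath[n])
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print("wirelength:", int(pulp.value(prob.objective)))
        print("route:", [e for e in edges if use[e].value() > 0.5])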

    Integrated Circuits Parasitic Capacitance Extraction Using Machine Learning and its Application to Layout Optimization

    Get PDF
    The impact of parasitic elements on overall circuit performance keeps increasing from one technology generation to the next. In advanced process nodes, parasitic effects dominate the overall circuit performance. As a result, the accuracy requirements of parasitic extraction processes have increased significantly, especially for parasitic capacitance extraction. Existing parasitic capacitance extraction tools face many challenges in coping with the new accuracy requirements set by semiconductor foundries (< 5% error). Although field-solver methods can meet such requirements, they are very slow and have limited capacity. The alternative, rule-based parasitic capacitance extraction, is faster and has high capacity; however, it cannot consistently provide good accuracy, as it uses a pre-characterized library of capacitance formulas covering a limited number of layout patterns. The new accuracy requirements also add challenges to existing parasitic-aware routing optimization methods, which optimize layouts using simplified parasitic models. This dissertation provides new solutions for interconnect parasitic capacitance extraction and parasitic-aware routing optimization that cope with the accuracy requirements of advanced process nodes, as follows. First, machine learning compact models are developed for rule-based extractors to efficiently predict the parasitic capacitances of cross-section layout patterns. The developed models mitigate the problems of the pre-characterized library approach: each compact model extracts the parasitic capacitances of cross-sections of arbitrarily distributed metal polygons belonging to a specific set of metal layers (i.e., a layer combination), so the number of covered layout patterns increases significantly. Second, machine learning compact models are developed to predict the parasitic capacitances of middle-end-of-line (MEOL) layers around FinFETs and MOSFETs. Each compact model extracts the parasitic capacitances of 3D MEOL patterns of a specific device type regardless of its metal polygon distribution, so the developed MEOL models can replace field solvers for extracting MEOL patterns. Third, a novel accuracy-based hybrid parasitic capacitance extraction method is developed. The proposed hybrid flow divides a layout into windows and extracts the parasitic capacitances of each window using one of three methods: 1) rule-based; 2) a novel deep-neural-network-based method; and 3) a field solver. The hybrid methodology uses neural-network classifiers to determine an appropriate extraction method for each window. The deep-neural-network-based method provides an intermediate level of accuracy and speed between the rule-based and field-solver methods; this intermediate level is needed because a hybrid flow using only rule-based and field-solver methods would fall back to the field solver most of the time whenever high accuracy is required. Eventually, a parasitic-aware layout routing optimization and analysis methodology is implemented based on incremental parasitic extraction and a fast optimization methodology. Unlike existing flows, which provide no mechanism to analyze the impact of modifying layout geometries on circuit performance, the proposed methodology provides novel sensitivity circuit models to analyze signal integrity in layout routes. These circuit models are based on an accurate matrix circuit representation, a cost function, and accurate parasitic sensitivity extraction. They identify critical parasitic elements, along with the corresponding layout geometries, in a given route, and they measure the sensitivity of the route's performance to those geometries very quickly. Moreover, the proposed methodology uses a nonlinear programming technique, together with the proposed circuit models, to optimize problematic routes with pre-determined degrees of freedom. Furthermore, it uses a novel incremental parasitic extraction method to efficiently extract the parasitic elements of modified geometries; the incremental extraction is used as part of the routing optimization process to improve optimization runtime and accuracy.
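
    A toy sketch of the compact-model idea, under loudly labeled assumptions: the features chosen here (wire width, spacing, pattern density) and the synthetic "ground truth" are placeholders of my own, whereas a real flow would train on field-solver-labeled cross-section patterns and validate against the < 5% foundry target.

        # Hypothetical ML compact model for coupling capacitance; synthetic
        # data stands in for field-solver-labeled layout cross-sections.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        # Toy features per cross-section window: wire width, spacing, density.
        X = rng.uniform([0.02, 0.02, 0.1], [0.2, 0.2, 0.9], size=(2000, 3))
        # Placeholder label: capacitance grows with width and density and
        # shrinks with spacing (plus noise); real labels come from a solver.
        y = X[:, 0] * X[:, 2] / X[:, 1] + rng.normal(0.0, 0.005, size=2000)

        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0),
        ).fit(X, y)
        rel_err = np.abs(model.predict(X) - y) / np.abs(y)
        print(f"median relative error: {np.median(rel_err):.2%}")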

    AI/ML Algorithms and Applications in VLSI Design and Technology

    Full text link
    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms; this, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews AI/ML automated approaches previously applied to VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Technology Independent Synthesis of CMOS Operational Amplifiers

    Get PDF
    Analog circuit design does not enjoy as much automation as its digital counterpart. Analog sizing is inherently knowledge-intensive and requires accurate modeling of the devices' various parametric effects. Besides, the set of constraints in a typical analog design problem is large and involves complex tradeoffs. For these reasons, the task of modeling an analog design problem in a form viable for automation is much more tedious than for digital design. Consequently, analog blocks are still handcrafted intuitively and often become a bottleneck in integrated circuit design, thereby increasing the time to market. In this work, we address the problem of automatically solving an analog circuit design problem. Specifically, we propose methods to automate the transistor-level sizing of OpAmps. Given the specifications and the netlist of the OpAmp, our methodology produces a design that has the accuracy of the BSIM models used for simulation and the advantage of a quick design time. The approach is based on generating an initial first-order design and then refining it. In essence, the refinement is a simulated-annealing scheme that uses (i) localized simulations and (ii) a convex optimization scheme (COS). The optimal set of input variables for the localized simulations is selected using techniques from Design of Experiments (DOE). To formulate the design problem as a COS problem, we use monomial circuit models fitted from simulation data. These models accurately predict the performance of the circuit in the proximity of the initial guess. The models can also be used to gain valuable insight into the behavior of the circuit and to understand the interrelations between the different performance constraints. A software framework implementing this methodology has been coded in the Cadence SKILL language. The methodology can be applied to design different OpAmp topologies across different technologies; in other words, the framework is both technology-independent and topology-independent. In addition, we develop a scheme to empirically model small-signal parameters such as 'gm' and 'gds' of CMOS transistors. The monomial device models are reusable for a given technology and can be used to formulate the OpAmp design problem as a COS problem. The efficacy of the framework has been demonstrated by automatically designing different OpAmp topologies across different technologies: we designed a two-stage OpAmp and a telescopic OpAmp in the TSMC025 and AMI016 technologies. Our results show a significant (10–15%) improvement in the performance of both OpAmps in both technologies. While the methodology has shown encouraging results in the sub-micrometer regime, the effectiveness of the tool has yet to be investigated in deep-submicron technologies.
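
    The monomial-model fitting step can be illustrated with a short, self-contained sketch: fitting y = c * x1^a1 * x2^a2 to simulation-like samples reduces to linear least squares in log space, which is what makes a convex (geometric-programming-style) formulation possible. The data below is synthetic, not from the thesis.

        # Hypothetical monomial model fit via log-log least squares.
        import numpy as np

        rng = np.random.default_rng(1)
        # Toy "simulation" samples, e.g., x1 = W/L ratio, x2 = bias current.
        X = rng.uniform(1.0, 10.0, size=(200, 2))
        # Hidden ground truth y = 3 * x1^0.5 * x2^-1, with mild noise.
        y = 3.0 * X[:, 0] ** 0.5 * X[:, 1] ** -1.0 * rng.lognormal(0.0, 0.02, 200)

        # log y = log c + a1*log x1 + a2*log x2  ->  ordinary least squares.
        A = np.column_stack([np.ones(len(X)), np.log(X)])
        coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
        c, a1, a2 = np.exp(coef[0]), coef[1], coef[2]
        print(f"fitted monomial: y ~= {c:.2f} * x1^{a1:.2f} * x2^{a2:.2f}")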

    Algorithms and methodologies for interconnect reliability analysis of integrated circuits

    Get PDF
    The phenomenal progress of computing devices has been made possible largely by the semiconductor industry's sustained efforts in innovating techniques for extremely large-scale integration. Indeed, today's gigantically integrated circuits contain billions of interconnects, which enable the transistors to talk to each other, all within a space of a few mm². Such aggressively downscaled components (transistors and interconnects) silently suffer from increasing electric fields and from impurities/defects introduced during manufacturing. Compounded by gigahertz switching, the challenges of reliability and design integrity remain very much alive for chip designers, with electromigration (EM) being the foremost interconnect reliability challenge. Traditionally, EM containment revolves around EM guidelines generated at the single-component level, whose non-compliance means that the component fails. Failure usually refers to deformation due to EM, manifested as a resistance increase, which is unacceptable from a circuit performance point of view. Subsequent aspects deal with correct-by-construction design of the chip, followed by signoff verification of EM reliability. Interestingly, chip designs today have reached a dilemma point: a reduced margin between the actual and reliably allowed current densities, set against comparatively scarce system failures. Consequently, this research focuses on improved algorithms and methodologies for interconnect reliability analysis, enabling accurate and design-specific interpretation of EM events. In the first part, we present a new methodology for logic-IP (cell) internal EM verification, an inadequately attended area in the literature. Our SPICE-correlated model helps evaluate cell lifetime under any arbitrary reliability specification without generating additional data, unlike traditional approaches. The model is apt for today's fabless ecosystem, where there is a) increasing reuse of standard cells optimized for one market condition in another (e.g., wireless to automotive), and b) increasing third-party content on the chip requiring rigorous sign-off. We present results from a 28nm production setup, demonstrating significant relaxation of violations and the flexibility to allow runtime-level reliability retargeting. Subsequently, we focus on the important aspect of connecting individual component-level failures to system failure. We note that existing EM methodologies are based on a serial reliability assumption, which deems the entire system to fail as soon as the first component in the system fails. With a highly redundant circuit topology, that of a clock grid, in perspective, we present algorithms for EM assessment that allow us to incorporate and quantify the benefit of system redundancies. With the skew metric of the clock grid as the failure criterion, we demonstrate that unless such incorporations are made, chip lifetimes are underestimated by over 2x. This component-to-system reliability bridge is further extended through an approach based on extreme order statistics, wherein we demonstrate that system failures can be approximated by an asymptotic kth-component failure model that would otherwise require costly Monte Carlo simulations. Using this approach, we can efficiently predict a system-criterion-based time to failure within existing EDA frameworks. The last part of the research incorporates into EM analysis the impact of global/local process variation on current densities, as well as fundamental physical factors. Through a Hermite polynomial chaos based approach, we arrive at novel variation-aware current density models, which demonstrate significant margins (>30%) in EM lifetime compared with the traditional worst-case approach. The above research problems have been motivated by the author's decade-long industrial experience with reliability issues in SoCs, first at Texas Instruments and later at Qualcomm.
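
    A minimal Monte Carlo sketch (with purely illustrative parameters, not the dissertation's models) of the component-to-system gap discussed above: under the serial assumption the system dies at the first component failure, while a redundancy-aware criterion that tolerates k-1 failures yields a visibly longer lifetime, in line with the reported >2x underestimation.

        # Hypothetical Monte Carlo contrast: serial (weakest-link) vs k-th
        # component failure criterion for a redundant interconnect system.
        import numpy as np

        rng = np.random.default_rng(2)
        n_trials, n_components, k = 5000, 500, 10
        # Lognormal component lifetimes are a standard EM assumption; the
        # 10-year median and sigma = 0.5 are illustrative only.
        t = rng.lognormal(mean=np.log(10.0), sigma=0.5,
                          size=(n_trials, n_components))

        serial_ttf = t.min(axis=1)              # first failure kills the system
        kth_ttf = np.sort(t, axis=1)[:, k - 1]  # system tolerates k-1 failures
        print(f"serial mean TTF:    {serial_ttf.mean():.2f} years")
        print(f"k-th ({k}) mean TTF: {kth_ttf.mean():.2f} years "
              f"({kth_ttf.mean() / serial_ttf.mean():.1f}x longer)")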

    The Habitable Exoplanet Observatory (HabEx) Mission Concept Study Final Report

    Get PDF
    The Habitable Exoplanet Observatory, or HabEx, has been designed to be the Great Observatory of the 2030s. For the first time in human history, technologies have matured sufficiently to enable an affordable space-based telescope mission capable of discovering and characterizing Earthlike planets orbiting nearby bright sunlike stars, in order to search for signs of habitability and biosignatures. Such a mission can also be equipped with instrumentation that will enable broad and exciting general astrophysics and planetary science not possible from current or planned facilities. HabEx is a space telescope with unique imaging and multi-object spectroscopic capabilities at wavelengths ranging from the ultraviolet (UV) to the near-IR. These capabilities allow for a broad suite of compelling science that cuts across the entire NASA astrophysics portfolio. HabEx has three primary science goals: (1) seek out nearby worlds and explore their habitability; (2) map out nearby planetary systems and understand the diversity of the worlds they contain; (3) enable new explorations of astrophysical systems, from our own solar system to external galaxies, by extending our reach in the UV through near-IR. This Great Observatory science will be selected through a competed GO program and will account for about 50% of the HabEx primary mission. The preferred HabEx architecture is a 4m, monolithic, off-axis telescope that is diffraction-limited at 0.4 microns and sits in an L2 orbit. HabEx employs two starlight suppression systems, a coronagraph and a starshade, each with its own dedicated instrument.
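
    As a quick plausibility check (not taken from the report), the quoted diffraction limit follows from the Rayleigh criterion for a 4 m aperture at 0.4 microns:

        # Back-of-envelope diffraction limit for the HabEx aperture.
        import math

        lam, D = 0.4e-6, 4.0               # wavelength (m), aperture (m)
        theta = 1.22 * lam / D             # Rayleigh criterion, radians
        mas = math.degrees(theta) * 3.6e6  # degrees -> milliarcseconds
        print(f"diffraction limit ~ {theta:.2e} rad ~ {mas:.0f} mas")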

    Understanding Quantum Technologies 2022

    Full text link
    Understanding Quantum Technologies 2022 is a Creative Commons ebook that provides a unique 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction and quantum computing energetics), quantum computing hardware (all qubit types, including the quantum annealing and quantum simulation paradigms: history, science, research, implementation, and vendors), quantum enabling technologies (cryogenics, control electronics, photonics, component fabs, raw materials), quantum computing algorithms, software development tools and use cases, unconventional computing (potential alternatives to quantum and classical computing), quantum telecommunications and cryptography, quantum sensing, quantum technologies around the world, the societal impact of quantum technologies, and even quantum fake sciences. The main audience comprises computer science engineers, developers, and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, particularly quantum computing. This version (1132 pages, 920 figures, Letter format) is an extensive update to the edition published in October 2021.

    Study of a very high dynamic range analog-to-digital converter based on superconducting logic gates

    Get PDF
    RSFQ (Rapid Single Flux Quantum) superconducting logic is suited to processing very high-speed digital data with very low power dissipation and with performance well beyond what CMOS technology is expected to offer in the coming decades. The RSFQ circuit technology presently developed at CEA-G, based on superconducting niobium nitride (NbN), relies on internally shunted NbN/Ta_{X}N/NbN Josephson junctions with a high critical current density and a maximum switching frequency close to 1 THz at 10 K, as required by ultra-fast RSFQ electronics. The purpose of this work is to apply the NbN technology to an ADC (Analog-to-Digital Converter) for space telecommunications. A sigma-delta ADC architecture was studied, oversampling at a 200 GHz clock frequency a signal with a 500 MHz bandwidth modulated on a 30 GHz carrier. In particular, a clock, a comparator, and several other logic gates were studied at 200 GHz, as well as a third-order band-pass sigma-delta modulator whose SNR and SFDR performance should, after optimization, satisfy the specifications. The complexity of the decimation filter architecture was analyzed; some basic components of the filter, such as frequency dividers and shift registers, were studied and designed, and finally some possible methods to test the modulator were proposed. The implementation of NbN circuits in a multi-level technology was addressed, including the complete fabrication of two wafers; owing to technology problems clarified afterwards, these two lots could not reach the ADC logic-gate test. However, the RSFQ gate margins were determined through the characterization of junctions, SQUIDs, and microwave filters (resonators). Finally, we compare circuits based on self-shunted NbN junctions with similar circuits from a Nb foundry technology using externally shunted Nb/AlO_{X}/Nb junctions operating at 4 K; this comparative study shows all the advantages offered by the NbN technology.
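
    To illustrate the sigma-delta principle named above, here is a deliberately simplified first-order low-pass modulator sketch; the thesis designs a third-order band-pass modulator clocked at 200 GHz, which this toy model does not reproduce.

        # Hypothetical first-order sigma-delta modulator: oversampling plus
        # 1-bit quantization with noise shaping, then crude decimation.
        import numpy as np

        n, f_sig = 8192, 1.0 / 256.0  # samples; input freq as clock fraction
        x = 0.5 * np.sin(2 * np.pi * f_sig * np.arange(n))

        v, y = 0.0, np.empty(n)       # integrator state, 1-bit output stream
        for i in range(n):
            y[i] = 1.0 if v >= 0 else -1.0  # 1-bit quantizer
            v += x[i] - y[i]                # integrate the quantization error
        # A 64-tap moving average roughly recovers the input, since
        # first-order noise shaping pushes quantization noise to high
        # frequencies where the average suppresses it.
        dec = np.convolve(y, np.ones(64) / 64.0, mode="same")
        print(f"rms error after averaging: {np.sqrt(np.mean((dec - x) ** 2)):.3f}")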