
    Advanced analog layout design automation in compliance with density uniformity

    To fabricate a reliable integrated circuit chip, foundries follow specific design rules and layout processing techniques. One parameter that affects circuit performance and final electronic product quality is the thickness variation of each semiconductor layer within the fabricated chips. The thickness depends closely on the density of geometric features on that layer, so to ensure consistent thickness, foundries normally have to tightly control the distribution of feature density on each layer through post-processing operations. This research investigates methods of controlling the feature density distribution on the different layers of an analog layout during layout migration, whether from an old technology to a new one or to updated design specifications within the same technology. We aim to achieve density-uniformity-aware layout retargeting that facilitates the manufacturing process in advanced technologies. This brings density control forward to the design stage, where designers can evaluate the effects of applying density uniformity to their drafted layouts; otherwise, these adjustments are usually made by the foundries at the final manufacturing stage without considering circuit performance. Layout modification for density uniformity includes changing component positions and sizes, which may induce crosstalk noise caused by extra parasitic capacitance. To control this effect, we have also investigated and proposed a simple yet accurate analytic method for modeling parasitic capacitance on multi-layer VLSI chips. Supported by this capacitance modeling research, a unique methodology for density-uniformity-aware analog layout retargeting with parasitic capacitance control is presented. The proposed operations include rearranging layout geometry positions, modifying interconnect sizes, and inserting extra dummy fill to enhance layout density uniformity, all coordinated holistically by a linear programming optimization scheme. The experimental results demonstrate the efficacy of the proposed methodology compared to popular digital solutions in terms of minimal density variation and accurate parasitic capacitance control.
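
    The abstract does not spell out the optimization formulation, so the following is only a minimal sketch of how one ingredient, dummy-fill insertion for density uniformity, can be posed as a linear program, in the spirit of the holistic LP coordination mentioned above. The window densities, density bound, and objective weights are hypothetical; the thesis's actual LP, which also rearranges geometry and resizes interconnect, is necessarily richer.

        # Minimal sketch: choose extra dummy fill f_j per layout window so that
        # every window's density is pulled toward the mean, while capping the
        # maximum density. All data below are hypothetical illustrations.
        import numpy as np
        from scipy.optimize import linprog

        a = np.array([0.20, 0.55, 0.35, 0.70])  # existing feature density per window
        rho_max = 0.80                          # hypothetical upper density bound
        m = len(a)
        abar = a.mean()

        # Decision vector x = [f_1..f_m, t]: fill per window, plus a spread
        # bound t with |density_j - mean density| <= t for all windows j.
        eps = 1e-3  # small penalty on total fill, so uniformity dominates
        c = np.concatenate([np.full(m, eps), [1.0]])

        A_ub, b_ub = [], []
        for j in range(m):
            # (a_j + f_j) - mean density <= t
            row = np.full(m + 1, -1.0 / m); row[j] += 1.0; row[m] = -1.0
            A_ub.append(row); b_ub.append(abar - a[j])
            # mean density - (a_j + f_j) <= t
            row = np.full(m + 1, 1.0 / m); row[j] -= 1.0; row[m] = -1.0
            A_ub.append(row); b_ub.append(a[j] - abar)
            # hard cap: a_j + f_j <= rho_max
            row = np.zeros(m + 1); row[j] = 1.0
            A_ub.append(row); b_ub.append(rho_max - a[j])

        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (m + 1))
        f, t = res.x[:m], res.x[m]
        print("fill per window:", np.round(f, 3), "| density spread bound:", round(t, 4))

    On this toy instance the LP fills the sparser windows up to the densest one (spread bound t driven to zero) while the small fill penalty keeps the total inserted dummy area minimal.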

    The Habitable Exoplanet Observatory (HabEx) Mission Concept Study Final Report

    The Habitable Exoplanet Observatory, or HabEx, has been designed to be the Great Observatory of the 2030s. For the first time in human history, technologies have matured sufficiently to enable an affordable space-based telescope mission capable of discovering and characterizing Earthlike planets orbiting nearby bright sunlike stars in order to search for signs of habitability and biosignatures. Such a mission can also be equipped with instrumentation that will enable broad and exciting general astrophysics and planetary science not possible from current or planned facilities. HabEx is a space telescope with unique imaging and multi-object spectroscopic capabilities at wavelengths ranging from the ultraviolet (UV) to the near-IR. These capabilities allow for a broad suite of compelling science that cuts across the entire NASA astrophysics portfolio. HabEx has three primary science goals: (1) seek out nearby worlds and explore their habitability; (2) map out nearby planetary systems and understand the diversity of the worlds they contain; (3) enable new explorations of astrophysical systems, from our own solar system to external galaxies, by extending our reach in the UV through near-IR. This Great Observatory science will be selected through a competed GO program and will account for about 50% of the HabEx primary mission. The preferred HabEx architecture is a 4 m, monolithic, off-axis telescope that is diffraction-limited at 0.4 microns and is in an L2 orbit. HabEx employs two starlight suppression systems, a coronagraph and a starshade, each with its own dedicated instrument. More information about HabEx can be found at https://www.jpl.nasa.gov/habex
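
    As a point of reference (this arithmetic is ours, not the report's), the quoted 4 m aperture, diffraction-limited at 0.4 microns, implies a Rayleigh-criterion angular resolution of roughly 25 milliarcseconds:

        \theta \approx 1.22\,\frac{\lambda}{D} = 1.22 \times \frac{0.4\times10^{-6}\ \mathrm{m}}{4\ \mathrm{m}} \approx 1.2\times10^{-7}\ \mathrm{rad} \approx 25\ \mathrm{mas}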

    Algorithms and methodologies for interconnect reliability analysis of integrated circuits

    The phenomenal progress of computing devices has been made possible largely by the semiconductor industry's sustained efforts in innovating techniques for extremely large-scale integration. Indeed, today's gigantic integrated circuits contain billions of interconnects, which enable the transistors to talk to each other, all in a space of a few mm2. Such aggressively downscaled components (transistors and interconnects) silently suffer from increasing electric fields and from impurities and defects introduced during manufacturing. Compounded by gigahertz switching, the challenges of reliability and design integrity remain very much alive for chip designers, with electromigration (EM) being the foremost interconnect reliability challenge. Traditionally, EM containment revolves around EM guidelines generated at the single-component level, whose non-compliance means that the component fails. Failure usually refers to deformation due to EM, manifested as a resistance increase, which is unacceptable from a circuit performance point of view. Subsequent aspects deal with correct-by-construction design of the chip, followed by sign-off verification of EM reliability. Interestingly, chip designs today have reached a dilemma: the margin between actual and reliably allowed current densities keeps shrinking, yet system failures remain comparatively scarce. Consequently, this research focuses on improved algorithms and methodologies for interconnect reliability analysis, enabling accurate and design-specific interpretation of EM events. In the first part, we present a new methodology for logic-IP (cell) internal EM verification, an inadequately attended area in the literature. Our SPICE-correlated model helps in evaluating the cell lifetime under any arbitrary reliability specification, without generating additional data, unlike the traditional approaches. The model suits today's fabless ecosystem, where there is (a) increasing reuse of standard cells optimized for one market condition in another (e.g., wireless to automotive), as well as (b) increasing third-party content on the chip requiring rigorous sign-off. We present results from a 28nm production setup, demonstrating significant relaxation of violations and the flexibility to allow runtime reliability retargeting. Subsequently, we focus on the important aspect of connecting individual component-level failures to system failure. We note that existing EM methodologies are based on a serial reliability assumption, which deems the entire system to fail as soon as the first component in the system fails. Taking a highly redundant circuit topology, that of a clock grid, as our case study, we present algorithms for EM assessment that allow us to incorporate and quantify the benefit of system redundancies. With the clock-grid skew metric as the failure criterion, we demonstrate that unless such redundancies are incorporated, chip lifetimes are underestimated by over 2x. This component-to-system reliability bridge is further extended through an approach based on extreme order statistics, wherein we demonstrate that system failures can be approximated by an asymptotic kth-component failure model, avoiding otherwise costly Monte Carlo simulations. Using such an approach, we can efficiently predict a system-criterion-based time to failure within existing EDA frameworks. The last part of the research relates to incorporating the impact of global and local process variation on current densities, as well as fundamental physical factors, in EM analysis. Through a Hermite polynomial chaos based approach, we arrive at novel variation-aware current density models, which demonstrate significant margins (>30%) in EM lifetime compared with the traditional worst-case approach. The above research problems have been motivated by the author's decade of industrial experience with reliability issues in SoCs, first at Texas Instruments and later at Qualcomm.
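
    To make the serial-versus-redundant contrast above concrete, here is a minimal Monte Carlo sketch comparing a first-failure (serial) system criterion with a kth-component-failure criterion, using lognormally distributed component lifetimes, a common EM lifetime model. The component count, distribution parameters, and k are hypothetical, and the thesis's extreme-order-statistics model is analytical rather than simulated.

        # Minimal sketch: system time-to-failure (TTF) under a serial
        # (first-failure) assumption vs. a redundancy-aware criterion where
        # the system survives until the k-th component fails. All numbers
        # below are hypothetical illustrations.
        import numpy as np

        rng = np.random.default_rng(0)
        n_comp = 1000                   # hypothetical EM-critical segments
        k = 25                          # hypothetical failures tolerated
        mu, sigma = np.log(10.0), 0.7   # lognormal lifetimes, median 10 (a.u.)

        samples = rng.lognormal(mu, sigma, size=(10_000, n_comp))
        samples.sort(axis=1)            # order statistics per trial

        ttf_serial = samples[:, 0]      # first component failure is fatal
        ttf_kth = samples[:, k - 1]     # k-th component failure is fatal

        print(f"mean TTF, serial criterion   : {ttf_serial.mean():.2f}")
        print(f"mean TTF, k-th failure (k={k}): {ttf_kth.mean():.2f}")
        print(f"lifetime underestimation     : {ttf_kth.mean() / ttf_serial.mean():.1f}x")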

    Multimodal Computational Attention for Scene Understanding

    Robotic systems have limited computational capacities. Hence, computational attention models are important for focusing resources on specific stimuli and allowing for complex cognitive processing. For this purpose, we developed auditory and visual attention models that enable robotic platforms to efficiently explore and analyze natural scenes. To allow for attention guidance in human-robot interaction, we use machine learning to integrate the influence of verbal and non-verbal social signals into our models.
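
    The abstract gives no model details, so the following is a purely illustrative sketch of one common way multimodal attention can be realized: per-modality saliency maps normalized to a common scale and fused with weights that would, in practice, be learned from data. The maps, the weights, and the linear fusion rule are assumptions, not the paper's models.

        # Illustrative multimodal attention fusion: normalize per-modality
        # saliency maps over a shared spatial grid, combine them with
        # (nominally learned) weights, and attend to the peak. Hypothetical.
        import numpy as np

        rng = np.random.default_rng(1)

        def normalize(m):
            # scale a saliency map to [0, 1] so modalities are comparable
            m = m - m.min()
            return m / m.max() if m.max() > 0 else m

        visual_saliency = normalize(rng.random((48, 64)))
        auditory_saliency = normalize(rng.random((48, 64)))
        social_cue = normalize(rng.random((48, 64)))  # e.g., gaze/pointing prior

        # weights that would, in a real system, be learned from interaction data
        w = {"visual": 0.5, "auditory": 0.3, "social": 0.2}
        fused = (w["visual"] * visual_saliency
                 + w["auditory"] * auditory_saliency
                 + w["social"] * social_cue)

        y, x = np.unravel_index(np.argmax(fused), fused.shape)
        print(f"attention target at grid cell ({y}, {x})")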

    System design of the Pioneer Venus spacecraft. Volume 2: Science

    The objectives of the low-cost Pioneer Venus space probe program are discussed. The space mission and science requirements are analyzed. The subjects considered are as follows: (1) the multiprobe mission, (2) the orbiter mission, (3) science payload accommodations, and (4) orbiter spacecraft experimental interface specifications. Tables of data are provided to show the science allocations for large and small probes. Illustrations of the systems and components of various probe configurations are included.

    Understanding Quantum Technologies 2022

    Understanding Quantum Technologies 2022 is a Creative Commons ebook that provides a unique 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction and quantum computing energetics), quantum computing hardware (all qubit types, including the quantum annealing and quantum simulation paradigms: history, science, research, implementation and vendors), quantum enabling technologies (cryogenics, control electronics, photonics, component fabs, raw materials), quantum computing algorithms, software development tools and use cases, unconventional computing (potential alternatives to quantum and classical computing), quantum telecommunications and cryptography, quantum sensing, quantum technologies around the world, the societal impact of quantum technologies, and even quantum fake sciences. The main audience comprises computer science engineers, developers and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, particularly quantum computing. This version is an extensive update to the 2021 edition published in October 2021.

    Etude d'un convertisseur analogique-numérique à très grande dynamique à base de portes logiques supraconductrices

    RSFQ (Rapid Single Flux Quantum) superconducting logic is suitable for processing very high speed digital data with very low power dissipation and with performance well beyond what should be possible with CMOS technology in the coming decades. The RSFQ circuit technology based on superconducting niobium nitride (NbN), presently developed at CEA-G, involves NbN/Ta_{X}N/NbN internally shunted Josephson junctions with high critical current density and a maximum switching frequency close to 1 THz at 10 K, as required by ultra-fast RSFQ electronics. The purpose of this work is to apply the NbN technology to an ADC (Analog-to-Digital Converter) for space telecommunications. An ADC with a sigma-delta architecture was studied, oversampling at a 200 GHz clock frequency a signal with a bandwidth of 500 MHz modulated onto a 30 GHz carrier. In particular, a clock, a comparator and several other logic gates were studied at 200 GHz, as well as a third-order band-pass sigma-delta modulator whose SNR and SFDR performance should, after optimization, satisfy the specifications. The complexity of the decimation filter architecture was analyzed. Some basic components of the filter, such as frequency dividers and shift registers, were studied and designed, and finally some possible methods to test the modulator were proposed. The implementation of NbN circuits in a multi-level technology was treated, including the complete fabrication of two wafer lots. Owing to technology problems clarified afterwards, these two lots could not reach the ADC logic gate test. However, the RSFQ gate margins were determined through the characterization of junctions, SQUIDs and microwave filters (resonators). Finally, we compare circuits based on self-shunted NbN junctions operating at 9 K with similar circuits in a Nb foundry technology using externally shunted Nb/AlO_{X}/Nb junctions operating at 4 K. This comparative study shows all the advantages offered by the NbN technology.
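
    To illustrate the oversampling and noise-shaping principle behind the ADC above, here is a toy simulation of a first-order low-pass sigma-delta modulator with the same oversampling ratio, fs/(2*BW) = 200 GHz / (2 x 500 MHz) = 200. The thesis's modulator is a third-order band-pass design around a 30 GHz carrier; this sketch only demonstrates the mechanism, not that architecture.

        # Toy first-order 1-bit sigma-delta modulator (normalized clock fs = 1)
        # with an FFT-based in-band SNR estimate. Illustrative only.
        import numpy as np

        osr = 200                  # oversampling ratio, as in the ADC above
        n = 1 << 16                # number of clock samples
        f_in = 0.5 / osr / 8       # in-band test tone, well below the band edge
        x = 0.5 * np.sin(2 * np.pi * f_in * np.arange(n))

        # first-order loop: integrate the error, quantize to +/-1, feed back
        v, integ, y = 0.0, 0.0, np.empty(n)
        for i in range(n):
            integ += x[i] - v
            v = 1.0 if integ >= 0 else -1.0
            y[i] = v

        # estimate in-band SNR from the windowed output spectrum
        win = np.hanning(n)
        spec = np.abs(np.fft.rfft(y * win)) ** 2
        freqs = np.fft.rfftfreq(n, d=1.0)
        band = freqs <= 0.5 / osr              # signal band after decimation
        sig_bin = np.argmin(np.abs(freqs - f_in))
        sig = spec[max(sig_bin - 2, 0):sig_bin + 3].sum()
        noise = spec[band].sum() - sig
        print(f"in-band SNR ~= {10 * np.log10(sig / noise):.1f} dB")

    Raising the loop order (as in the thesis's third-order modulator) shapes more quantization noise out of the band, which is why the third-order design can meet SNR and SFDR targets at this oversampling ratio.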

    Structure-function studies on Clostridium botulinum neurotoxins


    Actor & Avatar: A Scientific and Artistic Catalog

    What kind of relationship do we have with artificial beings (avatars, puppets, robots, etc.)? What does it mean to mirror ourselves in them, to perform them, or to play trial identity games with them? Actor & Avatar addresses these questions from artistic and scholarly angles. Contributions on the making of "technical others" and philosophical reflections on artificial alterity are flanked by neuroscientific studies on different ways of perceiving living persons and artificial counterparts. The contributors have achieved a successful artistic-scientific collaboration with extensive visual material.