24 research outputs found

    On the suitability and development of layout templates for analog layout reuse and layout-aware synthesis

    Get PDF
    Accelerating the synthesis of increasingly complex analog integrated circuits is key to bridging the widening gap between what we can integrate and what we can design while meeting ever-tightening time-to-market constraints. It is a well-known fact in the semiconductor industry that such a goal can only be attained by means of adequate CAD methodologies, techniques, and accompanying tools. This is particularly important in analog physical synthesis (a.k.a. layout generation), where large sensitivities of the circuit performances to the many subtle details of layout implementation (device matching, loading and coupling effects, reliability, and area features are of utmost importance to analog designers) render complete automation a truly challenging task. To approach the problem, two directions have been traditionally considered, knowledge-based and optimization-based, both with their own pros and cons. Besides, recently reported solutions oriented to speed up the overall design flow by means of reuse-based practices or by cutting off time-consuming, error-prone spins between electrical and layout synthesis (a technique known as layout-aware synthesis) rely on an outstandingly rapid yet efficient layout generation method. This paper analyses the suitability of procedural layout generation based on templates (a knowledge-based approach) by examining the requirements that both layout reuse and layout-aware solutions impose, and how layout templates meet them. The ability to capture the know-how of experienced layout designers and the turnaround times for layout instancing are considered the main comparative aspects in relation to other layout generation approaches. A discussion on the benefit-cost trade-off of using layout templates is also included. In addition to this analysis, the paper delves deeper into systematic techniques to develop fully reusable layout templates for analog circuits, either for a change of the circuit sizing (i.e., layout retargeting) or a change of the fabrication process (i.e., layout migration). Several examples implemented with Cadence's Virtuoso tool suite are provided as a demonstration of the paper's contributions. Ministerio de Educación y Ciencia TEC2004-0175
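
    To make the template idea concrete, the following minimal Python sketch mimics what a procedural layout template does: it freezes an expert's relative arrangement once and re-emits absolute geometry for any sizing, so retargeting only changes the parameters fed in. All names and design-rule numbers here (Rect, diff_pair_template, the offsets) are hypothetical illustrations; the paper's actual templates are built with Cadence's Virtuoso tool suite, not Python.

```python
# Minimal sketch of a procedural layout template: the template captures a
# fixed relative arrangement (the layout expert's know-how) and re-emits
# geometry for any device sizing. All names/values are illustrative only.
from dataclasses import dataclass

@dataclass
class Rect:
    layer: str
    x: float      # lower-left corner, microns
    y: float
    w: float      # width, microns
    h: float      # height, microns

def diff_pair_template(w_gate: float, l_gate: float, spacing: float = 0.5):
    """Instantiate a two-transistor differential pair from one parameter set.

    Retargeting (new sizing) or migration (new process rules) only changes
    the numbers fed in; the matching-aware structure is fixed by the template.
    """
    rects = []
    for i in range(2):                        # two matched devices, side by side
        x0 = i * (w_gate + spacing)
        rects.append(Rect("active", x0, 0.0, w_gate, l_gate + 0.4))
        rects.append(Rect("poly", x0 - 0.1, 0.2, w_gate + 0.2, l_gate))
    return rects

if __name__ == "__main__":
    for r in diff_pair_template(w_gate=2.0, l_gate=0.18):
        print(r)
```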

    Analog design for manufacturability: lithography-aware analog layout retargeting

    Get PDF
    As transistor sizes shrink in advanced nanometer technologies, lithography effects have become a dominant contributor to integrated circuit (IC) yield degradation. Random manufacturing variations, such as photolithographic (spot) defects, may cause fatal functional failures, while systematic process variations, such as dose fluctuation and defocus, can distort wafer patterns and in turn ruin circuit performance. This dissertation focuses on yield optimization at the circuit design stage, so-called design for manufacturability (DFM), for analog ICs, which traditional DFM solutions have not yet sufficiently addressed. On top of a graph-based analog layout retargeting framework, photolithographic defects and lithography process variations are alleviated by geometrical layout manipulation operations including wire widening, wire shifting, process variation band (PV-band) shifting, and optical proximity correction (OPC). The ultimate objective of this research is to develop efficient algorithms and methodologies to achieve lithography-robust analog IC layout design without circuit performance degradation.
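
    As a rough illustration of the graph-based manipulation such retargeting frameworks build on, the sketch below encodes layout edges as graph nodes and minimum-spacing design rules as weighted arcs, so a longest-path pass re-places wires after an operation such as wire widening. Node names and rule values are invented for the example; they are not taken from the dissertation.

```python
# Hedged sketch of graph-based layout retargeting: layout edges become graph
# nodes, spacing rules become weighted arcs, and a longest-path pass over the
# constraint graph assigns legal coordinates. Widening a wire (e.g., to
# suppress a lithography hotspot) just raises the weight between its edges.
from collections import defaultdict

def longest_path_positions(arcs, order):
    """arcs: list of (u, v, min_separation); order: topological node order."""
    pos = defaultdict(float)
    succ = defaultdict(list)
    for u, v, d in arcs:
        succ[u].append((v, d))
    for u in order:
        for v, d in succ[u]:
            pos[v] = max(pos[v], pos[u] + d)
    return dict(pos)

# wire A occupies [A_left, A_right]; wire B sits to its right
arcs = [
    ("A_left", "A_right", 0.10),   # nominal wire width 0.10 um
    ("A_right", "B_left", 0.12),   # minimum spacing rule
    ("B_left", "B_right", 0.10),
]
order = ["A_left", "A_right", "B_left", "B_right"]
print(longest_path_positions(arcs, order))

# widen wire A for lithography robustness and re-solve; B shifts legally
arcs[0] = ("A_left", "A_right", 0.16)
print(longest_path_positions(arcs, order))
```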

    Technology Independent Synthesis of CMOS Operational Amplifiers

    Get PDF
    Analog circuit design does not enjoy as much automation as its digital counterpart. Analog sizing is inherently knowledge-intensive and requires accurate modeling of the different parametric effects of the devices. Besides, the set of constraints in a typical analog design problem is large, involving complex tradeoffs. For these reasons, modeling an analog design problem in a form viable for automation is much more tedious than for digital design. Consequently, analog blocks are still handcrafted intuitively and often become a bottleneck in integrated circuit design, thereby increasing the time to market. In this work, we address the automation of analog circuit design. Specifically, we propose methods to automate the transistor-level sizing of OpAmps. Given the specifications and the netlist of the OpAmp, our methodology produces a design that has the accuracy of the BSIM models used for simulation and the advantage of a quick design time. The approach is based on generating an initial first-order design and then refining it. In principle, the refining approach is a simulated-annealing scheme that uses (i) localized simulations and (ii) a convex optimization scheme (COS). The optimal set of input variables for localized simulations has been selected using techniques from Design of Experiments (DOE). To formulate the design problem as a COS problem, we have used monomial circuit models fitted from simulation data. These models accurately predict the performance of the circuit in the proximity of the initial guess. The models can also be used to gain valuable insight into the behavior of the circuit and to understand the interrelations between the different performance constraints. A software framework that implements this methodology has been coded in Cadence's SKILL language. The methodology can be applied to design different OpAmp topologies across different technologies; in other words, the framework is both technology-independent and topology-independent. In addition, we develop a scheme to empirically model small-signal parameters such as 'gm' and 'gds' of CMOS transistors. The monomial device models are reusable for a given technology and can be used to formulate the OpAmp design problem as a COS problem. The efficacy of the framework has been demonstrated by automatically designing different OpAmp topologies across different technologies. We designed a two-stage OpAmp and a telescopic OpAmp in TSMC025 and AMI016 technologies. Our results show significant (10–15%) improvement in the performance of both OpAmps in both technologies. While the methodology has shown encouraging results in the sub-micrometer regime, the effectiveness of the tool has yet to be investigated in deep-sub-micron technologies.
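
    The monomial-fitting step lends itself to a compact illustration: a monomial y = c·x1^a1···xn^an is linear in log space, so ordinary least squares recovers the coefficient and exponents from simulation samples, which is what lets the sizing problem be cast as a convex (geometric-programming-style) problem. The sketch below uses synthetic data and a toy gm expression; it is a sketch of the fitting idea only, not the dissertation's BSIM-backed flow.

```python
# Fit a monomial model gm ~ c * W^a1 * L^a2 * Ib^a3 from sampled data.
# Monomials are linear in log space, so a least-squares solve suffices.
import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(1, 10, 200)           # transistor width samples (illustrative)
L = rng.uniform(0.2, 1, 200)          # length samples
Ib = rng.uniform(1e-5, 1e-4, 200)     # bias current samples
gm = 2e-3 * np.sqrt(W / L * Ib / 1e-5)   # toy "simulator" ground truth

# log-transform: log gm = log c + a1*log W + a2*log L + a3*log Ib
A = np.column_stack([np.ones_like(W), np.log(W), np.log(L), np.log(Ib)])
coef, *_ = np.linalg.lstsq(A, np.log(gm), rcond=None)
c, a = np.exp(coef[0]), coef[1:]
print(f"gm ≈ {c:.3e} * W^{a[0]:.2f} * L^{a[1]:.2f} * Ib^{a[2]:.2f}")
# exponents come back ~ (0.5, -0.5, 0.5), matching the toy ground truth
```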

    Behavioral Modeling of Mixed-Mode Integrated Circuits

    Get PDF
    This work is partially supported by CONACyT through the grant for the sabbatical stay of the first author at the University of California at Riverside during 2009-2010. The authors acknowledge the support from the UC-MEXUS-CONACYT collaboration grant CN-09-310; by Promep México under the project UATLX-PTC-088; and by the Consejería de Innovación, Ciencia y Empresa, Junta de Andalucía, Spain, under project number TIC-2532. The third author thanks the support of the JAE-Doc program of CSIC, co-funded by FSE. Peer Reviewed

    Integrated Circuits Parasitic Capacitance Extraction Using Machine Learning and its Application to Layout Optimization

    Get PDF
    The impact of parasitic elements on overall circuit performance keeps increasing from one technology generation to the next. In advanced process nodes, parasitic effects dominate the overall circuit performance. As a result, the accuracy requirements of parasitic extraction processes have increased significantly, especially for parasitic capacitance extraction. Existing parasitic capacitance extraction tools face many challenges in coping with the new accuracy requirements set by semiconductor foundries (< 5% error). Although field-solver methods can meet such requirements, they are very slow and have limited capacity. The other alternative is rule-based parasitic capacitance extraction, which is faster and has high capacity; however, it cannot consistently provide good accuracy, as it uses a pre-characterized library of capacitance formulas that covers a limited number of layout patterns. On the other hand, the new parasitic extraction accuracy requirements also add challenges to existing parasitic-aware routing optimization methods, where simplified parasitic models are used to optimize layouts. This dissertation provides new solutions for interconnect parasitic capacitance extraction and parasitic-aware routing optimization methodologies in order to cope with the new accuracy requirements of advanced process nodes, as follows. First, machine learning compact models are developed in rule-based extractors to predict parasitic capacitances of cross-section layout patterns efficiently. The developed models mitigate the problems of the pre-characterized library approach, where each compact model is designed to extract parasitic capacitances of cross-sections of arbitrarily distributed metal polygons that belong to a specific set of metal layers (i.e., layer combination) efficiently. Therefore, the number of covered layout patterns increases significantly. Second, machine learning compact models are developed to predict parasitic capacitances of middle-end-of-line (MEOL) layers around FINFETs and MOSFETs. Each compact model extracts parasitic capacitances of 3D MEOL patterns of a specific device type regardless of its metal polygon distribution. Therefore, the developed MEOL models can replace field-solvers in extracting MEOL patterns. Third, a novel accuracy-based hybrid parasitic capacitance extraction method is developed. The proposed hybrid flow divides a layout into windows and extracts the parasitic capacitances of each window using one of three parasitic capacitance extraction methods: 1) rule-based; 2) a novel deep-neural-networks-based method; and 3) field-solver methods. This hybrid methodology uses neural-network classifiers to determine an appropriate extraction method for each window. Moreover, as an intermediate parasitic capacitance extraction method between rule-based and field-solver methods, a novel deep-neural-networks-based extraction method is developed. This intermediate level of accuracy and speed is needed because using only rule-based and field-solver methods (for hybrid extraction) results in using the field-solver most of the time whenever high accuracy is required. Finally, a parasitic-aware layout routing optimization and analysis methodology is implemented based on an incremental parasitic extraction and a fast optimization methodology.
Unlike existing flows, which do not provide a mechanism to analyze the impact of modifying layout geometries on circuit performance, the proposed methodology provides novel sensitivity circuit models to analyze the integrity of signals in layout routes. These circuit models are based on an accurate matrix circuit representation, a cost function, and an accurate parasitic sensitivity extraction. The circuit models identify critical parasitic elements, along with the corresponding layout geometries, in a given route, and they measure the sensitivity of the route's performance to those geometries very quickly. Moreover, the proposed methodology uses a nonlinear programming technique to optimize problematic routes with pre-determined degrees of freedom using the proposed circuit models. Furthermore, it uses a novel incremental parasitic extraction method to extract parasitic elements of modified geometries efficiently, where the incremental extraction is used as part of the routing optimization process to improve the optimization runtime and increase the optimization accuracy.
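
    As a simplified stand-in for such sensitivity models, the sketch below computes the Elmore delay of a wire split into segments and a finite-difference sensitivity of delay to each segment's width, the kind of signal a route optimizer can use to rank which geometry to modify first. The resistance and capacitance coefficients are invented for illustration; the dissertation's models use full matrix representations of extracted parasitics, not this toy.

```python
# Toy sensitivity analysis for a routed wire: Elmore delay of a segmented
# RC line, plus a finite-difference estimate of dDelay/dWidth per segment.
def elmore_delay(widths, length=100.0, r_sq=0.1, c_area=2e-4, c_load=5e-3):
    """Elmore delay T = sum_i r_i * C_downstream(i). Widening a segment
    lowers its resistance but raises its capacitance (the core trade-off)."""
    n = len(widths)
    seg = length / n
    rs = [r_sq * seg / w for w in widths]     # R = sheet_res * len / width
    cs = [c_area * seg * w for w in widths]   # C grows with plate area
    delay = 0.0
    for i in range(n):
        # resistance r_i charges half its own cap plus all downstream cap
        c_down = cs[i] / 2 + sum(cs[i + 1:]) + c_load
        delay += rs[i] * c_down
    return delay

widths = [0.1, 0.1, 0.1, 0.1]
base = elmore_delay(widths)
for i in range(len(widths)):
    trial = list(widths)
    trial[i] += 0.01                          # perturb one segment's width
    sens = (elmore_delay(trial) - base) / 0.01
    print(f"segment {i}: dT/dW ≈ {sens:.4g}")   # rank geometries by impact
```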

    AI/ML Algorithms and Applications in VLSI Design and Technology

    Full text link
    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual, and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort for understanding and processing the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Layout Automation in Analog IC Design with Formalized and Non-Formalized Expert Knowledge

    Get PDF
    After more than three decades of electronic design automation, most layouts for analog integrated circuits are still handcrafted in a laborious manual fashion today. In contrast to the highly automated synthesis tools in the digital domain (coping with the quantitative difficulty of packing more and more components onto a single chip, a desire well known as More Moore), analog layout automation struggles with the many diverse and heavily correlated functional requirements that turn the analog design problem into a More than Moore challenge. Facing this qualitative complexity, seasoned layout engineers rely on their comprehensive expert knowledge to consider all design constraints that uncompromisingly need to be satisfied. This usually involves both formally specified and non-formally communicated pieces of expert knowledge, which entails an explicit and an implicit consideration of design constraints, respectively. Existing automation approaches can basically be divided into optimization algorithms (where constraint consideration occurs explicitly) and procedural generators (where constraints can only be taken into account implicitly). As investigated in this thesis, these two automation strategies follow two fundamentally different paradigms, denoted as top-down automation and bottom-up automation. The major trait of top-down automation is that it requires a thorough formalization of the problem to enable self-intelligent solution finding, whereas a bottom-up automatism (controlled by parameters) merely reproduces solutions that have been preconceived by a layout expert in advance. Since the strengths of one paradigm may compensate for the weaknesses of the other, it is assumed that a combination of both paradigms (called bottom-up meets top-down) has much more potential to tackle the analog design problem in its entirety than either optimization-based or generator-based approaches alone. Against this background, the thesis at hand presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), an interdisciplinary methodology addressing the design problem with a decentralized multi-agent system. Its basic principle, similar to the roundup of a sheep herd, is to let responsive mobile layout modules (implemented as context-aware procedural generators) interact with each other inside a user-defined layout zone. Each module is allowed to autonomously move, rotate, and deform itself, while a supervising control organ successively tightens the layout zone to steer the interaction towards increasingly compact (and constraint-compliant) layout arrangements. Considering various principles of self-organization and incorporating ideas from existing decentralized systems, SWARM is able to evoke the phenomenon of emergence: although each module only has a limited viewpoint and selfishly pursues its personal objectives, remarkable overall solutions can emerge on the global scale. Several examples exhibit this emergent behavior in SWARM, and it is particularly interesting that even optimal solutions can arise from the module interaction. Further examples demonstrate SWARM's suitability for floorplanning purposes and its application to practical place-and-route problems. The latter illustrates how the interacting modules take care of their respective design requirements implicitly (i.e., bottom-up) while simultaneously respecting high-level constraints (such as the layout outline imposed top-down by the supervising control organ).
Experimental results show that SWARM can outperform optimization algorithms and procedural generators both in terms of layout quality and design productivity. From an academic point of view, SWARM's grand achievement is to tap fertile virgin soil for future work on novel bottom-up meets top-down automatisms. These may one day be the key to closing the automation gap in analog layout design.
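
    A toy rendition of the SWARM principle follows, with invented module sizes, push forces, and shrink schedule: square modules behave as selfish agents that only resolve their own overlaps, while a supervising loop tightens the zone, and a compact arrangement emerges without any global optimizer. This is a sketch of the decentralized idea only, not the thesis's context-aware generator modules.

```python
# Toy decentralized placement in the spirit of SWARM: each module acts only
# on local overlap information while the "control organ" shrinks the zone.
import random

random.seed(1)
modules = [{"x": random.uniform(0, 10), "y": random.uniform(0, 10), "s": 1.5}
           for _ in range(6)]
zone = 10.0

def overlap_push(a, b):
    """Small separating push if two square modules overlap, else zero."""
    dx, dy = a["x"] - b["x"], a["y"] - b["y"]
    gap = (a["s"] + b["s"]) / 2
    if abs(dx) < gap and abs(dy) < gap:          # squares overlap
        return (0.1 if dx >= 0 else -0.1), (0.1 if dy >= 0 else -0.1)
    return 0.0, 0.0

for step in range(200):
    zone = max(4.0, zone - 0.03)                 # control organ tightens zone
    for m in modules:                            # each agent acts selfishly
        fx = fy = 0.0
        for other in modules:
            if other is not m:
                px, py = overlap_push(m, other)
                fx += px
                fy += py
        # move, but stay inside the current zone boundary
        m["x"] = min(max(m["x"] + fx, m["s"] / 2), zone - m["s"] / 2)
        m["y"] = min(max(m["y"] + fy, m["s"] / 2), zone - m["s"] / 2)

print(f"final zone {zone:.1f} x {zone:.1f}")
for m in modules:
    print(f"module at ({m['x']:.2f}, {m['y']:.2f})")
```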

    Null convention logic circuits for asynchronous computer architecture

    Get PDF
    For most of its history, computer architecture has been able to benefit from rapid scaling in semiconductor technology, resulting in continuous improvements to CPU design. During that period, synchronous logic has dominated because of its inherent ease of design and abundant tools. However, with the scaling of semiconductor processes into deep sub-micron and then to nano-scale dimensions, computer architecture is hitting a number of roadblocks such as high power and increased process variability. Asynchronous techniques can potentially offer many advantages over conventional synchronous design, including average-case rather than worst-case performance, robustness in the face of process and operating-point variability, and the ready availability of high-performance, fine-grained pipeline architectures. Of the many alternative approaches to asynchronous design, Null Convention Logic (NCL) has the advantage that its quasi-delay-insensitive behavior makes it relatively easy to build complex circuits without the need for exhaustive timing analysis. This thesis examines the characteristics of an NCL-based asynchronous RISC-V CPU and analyses the problems with applying NCL to CPU design. While a number of university and industry groups have previously developed small 8-bit microprocessor architectures using NCL techniques, it is still unclear whether these offer any real advantages over conventional synchronous design. A key objective of this work has been to analyse the impact of larger word widths and more complex architectures on NCL CPU implementations. The research commenced by re-evaluating existing techniques for implementing NCL on programmable devices such as FPGAs. The little work that has been undertaken previously on FPGA implementations of asynchronous logic has been inconclusive and seems to indicate that asynchronous systems cannot be easily implemented in these devices. However, most of this work related to an alternative technique called bundled data, which is not well suited to FPGA implementation because of the difficulty in controlling and matching delays in a 'bundle' of signals. On the other hand, this thesis clearly shows that such applications are not only possible with NCL, but that there are some distinct advantages in being able to prototype complex asynchronous systems in a field-programmable technology such as the FPGA. A large part of the value of NCL derives from its architectural-level behavior, inherent pipelining, and optimization opportunities such as the merging of register and combinational logic functions. In this work, a number of NCL multiplier architectures have been analyzed to reveal the performance trade-offs between various non-pipelined, 1D, and 2D organizations. Two-dimensional pipelining can easily be applied to regular architectures such as array multipliers in a way that is both high-performance and area-efficient. It was found that 2D pipelining for small networks such as multipliers is around 260% faster than the equivalent non-pipelined design. However, the design uses 265% more transistors, so the methodology is mainly of benefit where performance is strongly favored over area. A pipelined 32-bit × 32-bit signed Baugh-Wooley multiplier with Wallace-tree carry-save adders (CSA), which is representative of a real design used for CPUs and DSPs, was used to further explore this concept, as it is faster and has fewer pipeline stages than the normal array multiplier using ripple-carry adders (RCA).
It was found that 1D pipelining with ripple-carry chains is an efficient implementation option but becomes less so for larger multipliers, due to the completion logic, whose delay depends largely on the number of bits involved in the completion network. The average-case performance of ripple-carry adders was explored using random input vectors, and it was observed to offer little advantage on the smaller multiplier blocks, but this particular timing characteristic of asynchronous design styles becomes increasingly important as word size grows. Finally, this research has resulted in the development of the first 32-bit asynchronous RISC-V CPU core. Called the Redback RISC, the architecture is a structure of pipeline rings composed of computational oscillations linked with flow-completeness relationships. It has been written using NELL, a commercial description/synthesis tool that outputs standard Verilog. The Redback has been analysed and compared to two approximately equivalent industry-standard 32-bit synchronous RISC-V cores (PicoRV32 and Rocket) that are already fabricated and used in industry. While the NCL implementation is larger than both commercial cores, it has similar performance and lower power compared to the PicoRV32. The implementation results were also compared against an existing NCL design tool flow (UNCLE), which showed how much the results of these implementation strategies differ. The Redback RISC achieved a similar level of throughput, with 43% better power and 34% better energy, compared to one of the synchronous cores under the same benchmark test and test conditions, such as input supply voltage. However, it was shown that area is the biggest drawback for NCL CPU design: the core is roughly 2.5× larger than the synchronous designs. On the other hand, its area is still 2.9× smaller than previous designs using UNCLE tools. The area penalty is largely due to the unavoidable translation into a dual-rail topology when using the standard NCL cell library.
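
    The dual-rail convention and completion detection that give NCL its quasi-delay-insensitivity can be sketched in a few lines. Below, a TH22 threshold gate models the set/hold/reset hysteresis that NCL cells are built from, and a completion check acknowledges only once every dual-rail signal has left NULL. This is an illustrative Python simulation of the encoding, not the thesis's NELL/Verilog flow.

```python
# Dual-rail NCL encoding: a value is DATA0=(0,1), DATA1=(1,0) or NULL=(0,0).
# Circuits alternate NULL and DATA wavefronts; completion detection tells the
# previous stage when every signal has arrived, so no clock is needed.
NULL, DATA0, DATA1 = (0, 0), (0, 1), (1, 0)

class TH22:
    """2-of-2 threshold gate: set when both inputs are high, reset when both
    are low, otherwise hold the previous output (the hysteresis NCL needs)."""
    def __init__(self):
        self.out = 0

    def eval(self, a, b):
        if a and b:
            self.out = 1          # threshold met: assert
        elif not a and not b:
            self.out = 0          # full NULL: reset
        return self.out           # partial input: hold (delay-insensitive)

def complete(signals):
    """Completion detection: True once every dual-rail signal carries DATA."""
    return all(s != NULL for s in signals)

# a NULL wavefront followed by a DATA wavefront through completion detection
inputs = [NULL, NULL]
print("all-DATA?", complete(inputs))   # False: still in the NULL phase
inputs = [DATA1, DATA0]
print("all-DATA?", complete(inputs))   # True: safe to acknowledge

g = TH22()
print(g.eval(1, 0))  # 0: holds reset state until both rails assert
print(g.eval(1, 1))  # 1: threshold met
print(g.eval(1, 0))  # 1: hysteresis holds output during partial deassert
print(g.eval(0, 0))  # 0: full NULL resets the gate
```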

    Understanding Quantum Technologies 2022

    Full text link
    Understanding Quantum Technologies 2022 is a creative-commons ebook that provides a unique 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction and quantum computing energetics), quantum computing hardware (all qubit types, including quantum annealing and quantum simulation paradigms: history, science, research, implementation, and vendors), quantum enabling technologies (cryogenics, control electronics, photonics, component fabs, raw materials), quantum computing algorithms, software development tools and use cases, unconventional computing (potential alternatives to quantum and classical computing), quantum telecommunications and cryptography, quantum sensing, quantum technologies around the world, the societal impact of quantum technologies, and even quantum fake sciences. The main audience is computer science engineers, developers, and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, particularly quantum computing. This version is an extensive update to the 2021 edition published in October 2021. Comment: 1132 pages, 920 figures, Letter format