47 research outputs found

    Geometrically-constrained, parasitic-aware synthesis of analog ICs

    To speed up the design process of analog ICs, iterations between different design stages should be avoided as much as possible. In particular, spins between electrical and physical synthesis should be reduced, since they are very time-consuming: if circuit performance, including layout-induced degradations, proves unacceptable, a re-design cycle must be entered and the electrical synthesis, the physical synthesis, or both have to be repeated. Moreover, if geometric optimization (e.g., area minimization) is undertaken after electrical synthesis, it can become another source of unexpected performance degradation because of the impact of the geometric variables (e.g., transistor folds) on the device and routing parasitics. This awkward scenario is caused by the complete separation between electrical and physical synthesis, the design practice commonly followed so far. Parasitic-aware synthesis, which adds parasitic estimates to the circuit netlist directly during electrical synthesis, has been proposed as a solution. While most reported contributions either tackle parasitic-aware synthesis without paying special attention to geometric optimization or approach both issues only partially, this paper addresses the problem in a unified way. In what has been called layout-aware electrical synthesis, a simulation-based optimization algorithm explores the design space with the geometric variables constrained to meet certain user-defined goals, which provides reliable estimates of layout-induced parasitics at each iteration and, thereby, an accurate evaluation of the circuit's ultimate performance. This technique, demonstrated here through several design examples, requires knowing layout details beforehand; to facilitate this, procedural layout generation is used as the physical synthesis approach due to its speed and its ability to capture analog layout know-how. Ministerio de Educación y Ciencia TEC2004-0175
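    As a rough illustration of the loop described above (not the paper's tool), the following Python sketch treats a geometric variable (transistor folds) as part of the search space, derives a parasitic estimate from it at every iteration, and accepts a candidate sizing only if the parasitic-degraded performance still meets the specs. All device and performance models here are invented stand-ins for a real simulator and extractor.

```python
# Minimal sketch of a layout-aware synthesis loop: geometric variables (here,
# transistor folds) are part of the search space, parasitics are estimated
# from them at every iteration, and a candidate is only kept if the
# parasitic-degraded performance still meets the specs.
# All models below are hypothetical stand-ins for a real simulator/extractor.
import random

def estimate_parasitics(width_um, folds):
    """Toy parasitic model: folding shortens the diffusion strips, trading
    drain/source capacitance against extra routing capacitance."""
    cdiff_ff = 0.8 * width_um / folds          # junction cap per finger group
    crouting_ff = 0.15 * folds                 # wiring added by each fold
    return cdiff_ff + crouting_ff

def evaluate(width_um, folds):
    """Toy performance model: bandwidth improves with width but is degraded
    by the layout-induced capacitance estimated above."""
    cload_ff = 50.0 + estimate_parasitics(width_um, folds)
    bandwidth_mhz = 2000.0 * width_um / cload_ff
    area_um2 = width_um * (1.0 + 0.2 * folds)  # folds cost some extra area
    return bandwidth_mhz, area_um2

def layout_aware_search(bw_spec_mhz=100.0, area_max_um2=40.0, iters=2000):
    best = None
    for _ in range(iters):
        w = random.uniform(1.0, 20.0)          # electrical variable
        folds = random.randint(1, 8)           # geometric variable (constrained)
        bw, area = evaluate(w, folds)
        if bw >= bw_spec_mhz and area <= area_max_um2:
            if best is None or area < best[3]:
                best = (w, folds, bw, area)
    return best

print(layout_aware_search())
```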

    On the suitability and development of layout templates for analog layout reuse and layout-aware synthesis

    Accelerating the synthesis of increasingly complex analog integrated circuits is key to bridging the widening gap between what we can integrate and what we can design while meeting ever-tightening time-to-market constraints. It is a well-known fact in the semiconductor industry that this goal can only be attained by means of adequate CAD methodologies, techniques, and accompanying tools. This is particularly important in analog physical synthesis (a.k.a. layout generation), where the large sensitivity of circuit performance to the many subtle details of layout implementation (device matching, loading and coupling effects, reliability, and area are of utmost importance to analog designers) renders complete automation a truly challenging task. Two directions have traditionally been followed to approach the problem, knowledge-based and optimization-based, each with its own pros and cons. Besides, recently reported solutions oriented to speeding up the overall design flow by means of reuse-based practices, or by cutting off the time-consuming, error-prone spins between electrical and layout synthesis (a technique known as layout-aware synthesis), rely on an outstandingly rapid yet efficient layout generation method. This paper analyses the suitability of procedural layout generation based on templates (a knowledge-based approach) by examining the requirements that both layout reuse and layout-aware solutions impose and how layout templates meet them. The ability to capture the know-how of experienced layout designers and the turnaround times for layout instancing are considered the main comparative aspects in relation to other layout generation approaches. A discussion of the benefit-cost trade-off of using layout templates is also included. In addition to this analysis, the paper delves deeper into systematic techniques to develop fully reusable layout templates for analog circuits, either for a change of the circuit sizing (i.e., layout retargeting) or a change of the fabrication process (i.e., layout migration). Several examples implemented with Cadence's Virtuoso tool suite are provided as demonstration of the paper's contributions. Ministerio de Educación y Ciencia TEC2004-0175
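    The Python sketch below illustrates what a procedural layout template amounts to in spirit: the placement recipe (an A-B-B-A common-centroid pattern for a differential pair) encodes expert know-how, while the device sizes and process pitch are parameters, so retargeting or migration is a re-execution with new arguments. The geometry, layer names, and rules are illustrative, not taken from any real PDK or from the paper.

```python
# A minimal sketch of a procedural layout template: the placement "recipe"
# (an A-B-B-A common-centroid differential pair) is fixed expert knowledge,
# while device dimensions are parameters, so the same template can be
# re-executed for a new sizing (retargeting) or new design rules (migration).
# Names and rules are illustrative, not from any real PDK.
from dataclasses import dataclass

@dataclass
class Rect:
    layer: str
    x: float
    y: float
    w: float
    h: float

def diff_pair_template(finger_w, finger_l, gate_pitch, rows=1):
    """Emit gate rectangles in an ABBA pattern; A and B are the two matched
    transistors of the pair."""
    pattern = ["A", "B", "B", "A"]
    rects = []
    for row in range(rows):
        for col, dev in enumerate(pattern):
            rects.append(Rect(layer=f"poly_{dev}",
                              x=col * gate_pitch,
                              y=row * (finger_w + 2.0),
                              w=finger_l,
                              h=finger_w))
    return rects

# Retargeting: the same template is simply re-run with the new sizing and the
# (possibly different) process pitch.
old_layout = diff_pair_template(finger_w=4.0, finger_l=0.35, gate_pitch=1.2)
new_layout = diff_pair_template(finger_w=6.0, finger_l=0.18, gate_pitch=0.8)
print(len(old_layout), len(new_layout))
```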

    Automatic Synthesis of VLSI Layout for CMOS Continuous-Time Filters

    Automatic synthesis of digital VLSI layout has been available for many years. It has become a necessary part of the design industry as the window of time from conception to production shrinks with ever-increasing competition. However, automatic synthesis of analog VLSI layout remains rare. With digital circuits, there is often room for signal drift: a signal can drift within a range before hitting the threshold that triggers a change in logic state. For the most part, parasitic capacitances hinder the timing margins of the signal, but not its functionality; the logic functionality is protected by the inherent noise immunity of digital circuits. With analog circuits, however, there is little room for signal drift. Parasitics directly influence signal integrity and the functionality of the circuit. The underlying problem that automatic VLSI layout programs face is how to minimize this influence. This thesis describes a software tool written to show that the minimization of parasitic influence is possible for the automatic layout of continuous-time filters using transconductance-capacitor (Gm-C) methods.
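    A hedged numerical sketch of why this minimization matters for Gm-C filters: routing capacitance on an integrator node adds directly to the intended integrating capacitor and shifts the filter pole, so the layout tool must keep such wires short. The per-unit-length capacitance and node values below are assumptions for illustration only.

```python
# Hedged sketch (not the thesis tool): estimate the routing capacitance added
# to a Gm-C integrator node and the resulting relative pole shift, since that
# parasitic adds directly to the intended integrating capacitor.
# Constants are illustrative, not process data.
def wire_cap_ff(length_um, cap_per_um_ff=0.2):
    return cap_per_um_ff * length_um

def pole_shift(c_int_pf, routed_length_um):
    """Relative shift of an integrator pole (~gm / C) caused by wire cap."""
    c_par_pf = wire_cap_ff(routed_length_um) / 1000.0
    return c_par_pf / (c_int_pf + c_par_pf)

# A 500 um route on a 1 pF integrator node moves the pole by roughly 9%.
print(f"{pole_shift(c_int_pf=1.0, routed_length_um=500.0):.2%}")
```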

    A Pre-Search Assisted ILP Approach to Analog Integrated Circuit Routing

    The routing of analog integrated circuits (ICs) has long been a challenge due to the numerous constraints (such as symmetry and topology matching) that matter for overall circuit performance. Existing automatic analog IC routing algorithms can be broadly categorized into two approaches: a sequential approach, which heuristically routes one net after another, and constructive ILP (Integer Linear Programming). The former is usually fast but may miss opportunities to find good solutions; constructive ILP provides optimal solutions but can be very time-consuming. We propose a simple yet efficient method that combines the advantages of both. First, sequential routing is performed to obtain a set of candidate routing paths for each net. Then, an ILP is solved to commit each net to exactly one of its candidate routes. Experiments on two op-amp designs show that the post-layout performance (such as gain and phase margin) of our method is close to that of manual design. Our method also outperforms a previous work on automated analog IC routing.
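    The commit step lends itself to a compact ILP formulation. The sketch below (assuming the PuLP package, which is not necessarily what the authors used) selects exactly one pre-searched candidate path per net, forbids pairs of candidates that share a routing edge, and minimizes total wirelength over toy data.

```python
# Minimal sketch of the "commit" step: a sequential router has already
# produced a few candidate paths per net; an ILP then picks exactly one
# candidate per net, forbidding conflicting pairs, minimizing wirelength.
# The nets, paths, and conflicts are invented toy data.
from itertools import combinations
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# candidate paths: net -> list of (path_id, wirelength, set of grid edges used)
candidates = {
    "net_a": [("a0", 12, {(0, 1), (1, 2)}), ("a1", 15, {(0, 3), (3, 2)})],
    "net_b": [("b0", 10, {(1, 2), (2, 4)}), ("b1", 14, {(5, 6), (6, 4)})],
}

prob = LpProblem("candidate_selection", LpMinimize)
x = {(n, pid): LpVariable(f"x_{n}_{pid}", cat=LpBinary)
     for n, paths in candidates.items() for pid, _, _ in paths}

# objective: total wirelength of the chosen paths
prob += lpSum(x[(n, pid)] * wl
              for n, paths in candidates.items() for pid, wl, _ in paths)

# exactly one candidate per net
for n, paths in candidates.items():
    prob += lpSum(x[(n, pid)] for pid, _, _ in paths) == 1

# no two selected candidates of different nets may share a routing edge
for (n1, paths1), (n2, paths2) in combinations(candidates.items(), 2):
    for pid1, _, e1 in paths1:
        for pid2, _, e2 in paths2:
            if e1 & e2:
                prob += x[(n1, pid1)] + x[(n2, pid2)] <= 1

prob.solve()
chosen = [key for key, var in x.items() if var.varValue == 1]
print(chosen)   # e.g. [('net_a', 'a1'), ('net_b', 'b0')]
```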

    Topics: a contribution to analog design automation


    Advanced analog layout design automation in compliance with density uniformity

    To fabricate a reliable integrated circuit chip, foundries follow specific design rules and layout processing techniques. One of the parameters that affect circuit performance and final product quality is the thickness variation of each semiconductor layer within the fabricated chips. The thickness is closely dependent on the density of geometric features on that layer. Therefore, to ensure consistent thickness, foundries normally have to strictly control the distribution of feature density on each layer using post-processing operations. This research investigates methods of controlling the feature density distribution on the different layers of an analog layout during layout migration from an old technology to a new one, or to updated design specifications in the same technology. We aim to achieve density-uniformity-aware layout retargeting that facilitates the manufacturing process in advanced technologies. This brings the advantage to the design stage, allowing designers to evaluate the effects of applying density uniformity to their drafted layouts, a step otherwise usually performed by the foundries at the final manufacturing stage without considering circuit performance. Layout modification for density uniformity includes component position changes and size modification, which may induce crosstalk noise caused by extra parasitic capacitance. To effectively control this effect, we have also investigated and proposed a simple yet accurate analytic method to model the parasitic capacitance on multi-layer VLSI chips. Supported by this capacitance modeling, a methodology for density-uniformity-aware analog layout retargeting with parasitic capacitance control is presented. The proposed operations include layout geometry position rearrangement, interconnect size modification, and extra dummy fill insertion for enhancing layout density uniformity. All of these operations are holistically coordinated by a linear programming optimization scheme. The experimental results demonstrate the efficacy of the proposed methodology compared to the popular digital solutions in terms of minimum density variation and precise parasitic capacitance control.
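    The following sketch illustrates the density-uniformity bookkeeping such a flow relies on: the layer is divided into windows, the feature density of each window is computed, and under-dense windows are assigned the dummy-fill area needed to reach a target. The window size and target density are assumptions, not foundry rules, and the real methodology coordinates this with placement and sizing moves through linear programming.

```python
# Illustrative sketch of the density check that drives dummy fill insertion:
# the layout layer is divided into windows, the feature density of each window
# is computed, and windows below the target receive just enough dummy area to
# reach it. Numbers and the uniform target are assumptions, not foundry rules.
def window_densities(rects, chip_w, chip_h, win):
    """rects: list of (x, y, w, h) on one layer; returns density per window."""
    cols, rows = int(chip_w // win), int(chip_h // win)
    area = [[0.0] * cols for _ in range(rows)]
    for x, y, w, h in rects:
        for r in range(rows):
            for c in range(cols):
                # overlap of the rectangle with window (c, r)
                ox = max(0.0, min(x + w, (c + 1) * win) - max(x, c * win))
                oy = max(0.0, min(y + h, (r + 1) * win) - max(y, r * win))
                area[r][c] += ox * oy
    return [[a / (win * win) for a in row] for row in area]

def dummy_fill_needed(density, win, target=0.35):
    """Dummy area to add per window so every window reaches the target."""
    return [[max(0.0, target - d) * win * win for d in row] for row in density]

rects = [(0, 0, 20, 5), (30, 30, 10, 10)]
dens = window_densities(rects, chip_w=50, chip_h=50, win=25)
print(dens)
print(dummy_fill_needed(dens, win=25))
```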

    Layoutautomatisierung im analogen IC-Entwurf mit formalisiertem und nicht-formalisiertem Expertenwissen (Layout automation in analog IC design with formalized and non-formalized expert knowledge)

    After more than three decades of electronic design automation, most layouts for analog integrated circuits are still handcrafted in a laborious manual fashion today. In contrast to the highly automated synthesis tools of the digital domain (which cope with the quantitative difficulty of packing more and more components onto a single chip – a desire well known as More Moore), analog layout automation struggles with the many diverse and heavily correlated functional requirements that turn the analog design problem into a More than Moore challenge. Facing this qualitative complexity, seasoned layout engineers rely on their comprehensive expert knowledge to consider all design constraints that uncompromisingly need to be satisfied. This usually involves both formally specified and non-formally communicated pieces of expert knowledge, which entails explicit and implicit consideration of design constraints, respectively. Existing automation approaches can basically be divided into optimization algorithms (where constraint consideration occurs explicitly) and procedural generators (where constraints can only be taken into account implicitly). As investigated in this thesis, these two automation strategies follow two fundamentally different paradigms, denoted as top-down automation and bottom-up automation. The major trait of top-down automation is that it requires a thorough formalization of the problem to enable self-intelligent solution finding, whereas a bottom-up automatism, controlled by parameters, merely reproduces solutions that have been preconceived by a layout expert in advance. Since the strengths of one paradigm may compensate for the weaknesses of the other, it is assumed that a combination of both paradigms, called bottom-up meets top-down, has much more potential to tackle the analog design problem in its entirety than either optimization-based or generator-based approaches alone. Against this background, the thesis at hand presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), an interdisciplinary methodology addressing the design problem with a decentralized multi-agent system. Its basic principle, similar to the roundup of a sheep herd, is to let responsive mobile layout modules (implemented as context-aware procedural generators) interact with each other inside a user-defined layout zone. Each module is allowed to autonomously move, rotate, and deform itself, while a supervising control organ successively tightens the layout zone to steer the interaction towards increasingly compact (and constraint-compliant) layout arrangements. Considering various principles of self-organization and incorporating ideas from existing decentralized systems, SWARM is able to evoke the phenomenon of emergence: although each module only has a limited viewpoint and selfishly pursues its personal objectives, remarkable overall solutions can emerge on the global scale. Several examples exhibit this emergent behavior in SWARM, and it is particularly interesting that even optimal solutions can arise from the module interaction. Further examples demonstrate SWARM's suitability for floorplanning purposes and its application to practical place-and-route problems. The latter illustrates how the interacting modules take care of their respective design requirements implicitly (i.e., bottom-up) while simultaneously respecting high-level constraints (such as the layout outline imposed top-down by the supervising control organ).
Experimental results show that SWARM can outperform optimization algorithms and procedural generators in terms of both layout quality and design productivity. From an academic point of view, SWARM's grand achievement is to tap fertile virgin soil for future work on novel bottom-up meets top-down automatisms. These may one day be the key to closing the automation gap in analog layout design.
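    A loose sketch (in no way the SWARM implementation) of the bottom-up/top-down interplay described above: each module selfishly nudges itself to reduce overlap with its neighbours, while a supervising loop keeps shrinking the allowed layout zone. Geometry, step sizes, and the shrink rate are arbitrary assumptions.

```python
# Loose sketch of the bottom-up / top-down interplay: each module selfishly
# nudges itself away from overlaps with its neighbours (bottom-up), while a
# supervising control loop keeps shrinking the allowed layout zone (top-down).
# Geometry, step sizes, and the shrink rate are arbitrary assumptions.
import random

class Module:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

    def overlap(self, other):
        ox = max(0.0, min(self.x + self.w, other.x + other.w) - max(self.x, other.x))
        oy = max(0.0, min(self.y + self.h, other.y + other.h) - max(self.y, other.y))
        return ox * oy

    def react(self, others, zone):
        """Selfish local move: try a random nudge, keep it only if it reduces
        overlap and stays inside the current zone."""
        old = (self.x, self.y)
        before = sum(self.overlap(o) for o in others if o is not self)
        self.x += random.uniform(-1.0, 1.0)
        self.y += random.uniform(-1.0, 1.0)
        inside = (0 <= self.x and 0 <= self.y and
                  self.x + self.w <= zone and self.y + self.h <= zone)
        after = sum(self.overlap(o) for o in others if o is not self)
        if not inside or after > before:
            self.x, self.y = old   # reject the move

modules = [Module(random.uniform(0, 40), random.uniform(0, 40), 8, 6)
           for _ in range(6)]
zone = 50.0
for step in range(500):              # supervising control organ
    for m in modules:                # decentralized module interaction
        m.react(modules, zone)
    zone = max(30.0, zone * 0.999)   # successively tighten the layout zone

total_overlap = sum(a.overlap(b)
                    for i, a in enumerate(modules) for b in modules[i + 1:])
print(f"final zone {zone:.1f}, total overlap {total_overlap:.1f}")
```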

    Time-domain optimization of amplifiers based on distributed genetic algorithms

    Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Electrical and Computer Engineering. The work presented in this thesis addresses the task of circuit optimization, helping the designer face the high-performance and high-efficiency circuit demands of the market and of technology evolution. A novel framework is introduced, based on time-domain analysis, genetic algorithm optimization, and distributed processing. The time-domain optimization methodology is based on the step response of the amplifier. The main advantage of this new time-domain methodology is that, when a given settling error is reached within the desired settling time, it is automatically guaranteed that the amplifier has enough open-loop gain (AOL), output swing (OS), slew rate (SR), closed-loop bandwidth, and closed-loop stability. This simplification of the circuit's evaluation helps the optimization process to converge faster. The method used to calculate the step response expression of the circuit is based on the inverse Laplace transform applied symbolically to the transfer function multiplied by 1/s (which represents the unit input step). Furthermore, it may be applied to transfer functions with an unlimited number of zeros/poles, without approximations, in order to preserve accuracy. Thus, complex circuits with several design/optimization degrees of freedom can also be considered. The expression of the step response, in the proposed methodology, is based on the DC bias operating point of the devices of the circuit. For this, complex and accurate device models (e.g., BSIM3v3) are integrated. During the optimization process, the time-domain evaluation of the amplifier is used by the genetic algorithm in the classification of the individuals. The time-domain evaluator is integrated into the developed optimization platform as an independent library, coded in the C programming language. Genetic algorithms have proven to be a good approach for optimization since they are flexible and independent of the optimization objective. Different levels of abstraction, either system level or circuit level, can be optimized. Optimization of any new block is carried out by simply providing additional configuration files (e.g., the chromosome format, in text format) and the circuit library where the fitness value of each individual is computed. Distributed processing is also employed to address the increasing processing time demanded by the complex circuit analysis and the accurate models of the circuit devices. Communication between remote processing nodes is based on the Message Passing Interface (MPI). It is demonstrated that distributed processing reduced the optimization run-time by more than one order of magnitude. Platform assessment is carried out with several examples of two-stage amplifiers, which have been optimized and successfully embedded in larger systems such as data converters. A dedicated example of an inverter-based self-biased two-stage amplifier has been designed, laid out, and fabricated as a stand-alone circuit and experimentally evaluated. The measured results are a direct demonstration of the effectiveness of the proposed time-domain optimization methodology. Portuguese Foundation for Science and Technology (FCT)
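    The core step-response idea can be sketched symbolically (here with SymPy, an assumption for illustration; the thesis describes an independent evaluator coded in C): multiply the transfer function by 1/s to represent the unit step, apply the inverse Laplace transform, and check the settling error at the target settling time. The gain and pole values below are illustrative.

```python
# Sketch of the step-response evaluation: L^-1{ H(s) * 1/s } gives the unit
# step response, whose value at the target settling time yields the settling
# error. Gain and pole values are illustrative; SymPy is an assumption here.
from sympy import symbols, inverse_laplace_transform

s, t = symbols("s t", positive=True)

A0 = 1000                         # DC gain (illustrative)
p1, p2 = 10, 1000                 # pole frequencies in rad/s (illustrative)
H = A0 / ((1 + s / p1) * (1 + s / p2))    # two-pole amplifier model

# Unit step response: inverse Laplace transform of H(s) * 1/s
y = inverse_laplace_transform(H / s, s, t)

# Settling check at a chosen time: if the response is within the settling
# error band, the gain/bandwidth/stability of this simple model are implied.
t_settle = 0.5                    # seconds, illustrative
final_value = float(H.subs(s, 0))
err = abs(float(y.subs(t, t_settle)) - final_value) / final_value
print(f"settling error at t = {t_settle}s: {err:.3%}")
```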

    Commissioning Perspectives for the ATLAS Pixel Detector

    The ATLAS Pixel Detector, the innermost sub-detector of the ATLAS experiment at the Large Hadron Collider, CERN, is an 80 million channel silicon pixel tracking detector designed for high-precision charged particle tracking and secondary vertex reconstruction. It was installed in the ATLAS experiment, and commissioning for the first proton-proton collision data taking in 2008 has begun. Due to the complex layout and limited accessibility, quality assurance measurements were continuously performed during production and assembly to ensure that no problematic components were integrated. The assembly of the detector at CERN and the related quality assurance measurement results, including a comparison to previous production measurements, will be presented. In order to verify that the integrated detector, its data acquisition readout chain, the ancillary services and cooling system, as well as the detector control and data acquisition software, perform together as expected, approximately 8% of the detector system was progressively assembled as close to the final layout as possible. This so-called System Test laboratory setup was operated for several months under experiment-like environmental conditions. The interplay between different detector components was studied with a focus on the performance and tunability of the optical data transmission system. Operation and optical tuning procedures were developed and qualified for the upcoming commissioning. The front-end electronics preamplifier threshold tuning and noise performance were studied, and the noise occupancy of the detector at low sensor bias voltages was investigated. Data taking with cosmic muons was performed to test the data acquisition and trigger system as well as the offline reconstruction and analysis software. The data quality was verified with an extended version of the pixel online monitoring package, which was implemented for the ATLAS Combined Testbeam. The detector raw data of the Combined Testbeam and of the System Test cosmic run was converted for offline data analysis with the Pixel bytestream converter, which was continuously extended and adapted according to the offline analysis software needs.