
    Layout Automation in Analog IC Design with Formalized and Non-Formalized Expert Knowledge

    After more than three decades of electronic design automation, most layouts for analog integrated circuits are still handcrafted in a laborious manual fashion today. In contrast to the highly automated synthesis tools in the digital domain (which cope with the quantitative difficulty of packing more and more components onto a single chip, a desire well known as More Moore), analog layout automation struggles with the many diverse and heavily correlated functional requirements that turn the analog design problem into a More than Moore challenge. Facing this qualitative complexity, seasoned layout engineers rely on their comprehensive expert knowledge to consider all design constraints that must be satisfied without compromise. This usually involves both formally specified and non-formally communicated pieces of expert knowledge, which entails an explicit and an implicit consideration of design constraints, respectively. Existing automation approaches can be broadly divided into optimization algorithms (where constraint consideration occurs explicitly) and procedural generators (where constraints can only be taken into account implicitly). As investigated in this thesis, these two automation strategies follow two fundamentally different paradigms, denoted as top-down automation and bottom-up automation. The major trait of top-down automation is that it requires a thorough formalization of the problem to enable self-intelligent solution finding, whereas a bottom-up automatism, controlled by parameters, merely reproduces solutions that have been preconceived by a layout expert in advance. Since the strengths of one paradigm may compensate for the weaknesses of the other, it is assumed that a combination of both paradigms, called bottom-up meets top-down, has much more potential to tackle the analog design problem in its entirety than either optimization-based or generator-based approaches alone. Against this background, the thesis at hand presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), an interdisciplinary methodology addressing the design problem with a decentralized multi-agent system. Its basic principle, similar to the roundup of a sheep herd, is to let responsive mobile layout modules (implemented as context-aware procedural generators) interact with each other inside a user-defined layout zone. Each module is allowed to autonomously move, rotate and deform itself, while a supervising control organ successively tightens the layout zone to steer the interaction towards increasingly compact (and constraint-compliant) layout arrangements. Considering various principles of self-organization and incorporating ideas from existing decentralized systems, SWARM is able to evoke the phenomenon of emergence: although each module only has a limited viewpoint and selfishly pursues its personal objectives, remarkable overall solutions can emerge on the global scale. Several examples exhibit this emergent behavior in SWARM, and it is particularly interesting that even optimal solutions can arise from the module interaction. Further examples demonstrate SWARM's suitability for floorplanning purposes and its application to practical place-and-route problems. The latter illustrates how the interacting modules take care of their respective design requirements implicitly (i.e., bottom-up) while simultaneously respecting high-level constraints (such as the layout outline imposed top-down by the supervising control organ).
Experimental results show that SWARM can outperform optimization algorithms and procedural generators both in terms of layout quality and design productivity. From an academic point of view, SWARM's grand achievement is to tap fertile virgin soil for future work on novel bottom-up meets top-down automatisms. These may one day be the key to closing the automation gap in analog layout design.
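    The shrinking-zone interaction at the heart of SWARM can be caricatured in a few lines. The following toy sketch in Python is a minimal illustration of the principle, not the thesis implementation: rectangular modules selfishly push away from overlapping neighbours (bottom-up), while a supervising control loop tightens the layout zone (top-down). Only translation is modelled, and the module sizes, step widths and shrink schedule are illustrative assumptions.

```python
# Toy sketch of the SWARM principle: selfish local interaction inside a
# zone that a supervising control organ tightens step by step.
import random
from dataclasses import dataclass

@dataclass
class Module:
    x: float; y: float; w: float; h: float   # position and footprint

    def overlap_push(self, other):
        """Take a small step away from an overlapping neighbour (selfish objective)."""
        dx = (self.x + self.w / 2) - (other.x + other.w / 2)
        dy = (self.y + self.h / 2) - (other.y + other.h / 2)
        if abs(dx) < (self.w + other.w) / 2 and abs(dy) < (self.h + other.h) / 2:
            self.x += 0.1 if dx >= 0 else -0.1
            self.y += 0.1 if dy >= 0 else -0.1

    def clamp(self, zone):
        """Stay inside the current layout zone (top-down constraint)."""
        self.x = min(max(self.x, 0.0), zone - self.w)
        self.y = min(max(self.y, 0.0), zone - self.h)

zone = 20.0                                   # user-defined layout zone
mods = [Module(random.uniform(0, 15), random.uniform(0, 15), 2, 3)
        for _ in range(8)]

for step in range(400):
    for m in mods:                            # bottom-up: local interaction
        for other in mods:
            if other is not m:
                m.overlap_push(other)
        m.clamp(zone)
    if step % 20 == 19:                       # top-down: control organ
        zone = max(zone * 0.97, 8.0)          # tighten toward compactness

print(f"final zone: {zone:.2f}")
```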

    Numerical and Evolutionary Optimization 2020

    This book grew out of the 8th International Workshop on Numerical and Evolutionary Optimization (NEO) and collects papers at the intersection of the two research areas covered by the workshop: numerical optimization and evolutionary search techniques. While the focus is on the design of fast and reliable methods that span these two paradigms, the resulting techniques apply to a broad class of real-world problems, such as pattern recognition, routing, energy, production lines, prediction, and modeling, among others. This volume is intended to serve as a useful reference for mathematicians, engineers, and computer scientists exploring current issues and solutions emerging from these mathematical and computational methods and their applications. A brief illustration of the kind of hybrid the two paradigms admit follows below.
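    The following Python snippet is an illustrative sketch, not an example taken from the book: it couples an evolutionary (mu+lambda) population search with numerical Nelder-Mead refinement of each offspring on the Rastrigin benchmark; all parameter choices are arbitrary.

```python
# Hybrid of evolutionary search and numerical local refinement (memetic style).
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):                     # classic multimodal benchmark
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(1)
dim, mu, lam = 5, 10, 40
pop = rng.uniform(-5.12, 5.12, size=(mu, dim))

for gen in range(30):
    # Evolutionary step: mutate randomly chosen parents into lambda offspring.
    parents = pop[rng.integers(0, mu, size=lam)]
    offspring = parents + rng.normal(0.0, 0.3, size=(lam, dim))
    # Numerical step: locally refine each offspring with Nelder-Mead.
    refined = np.array([minimize(rastrigin, c, method="Nelder-Mead",
                                 options={"maxiter": 50}).x
                        for c in offspring])
    # (mu+lambda) selection over the union of parents and refined offspring.
    union = np.vstack([pop, refined])
    pop = union[np.argsort([rastrigin(x) for x in union])[:mu]]

print("best:", rastrigin(pop[0]).round(4), "at", pop[0].round(3))
```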

    Micromegas for the search of solar axions in CAST and low-mass WIMPs in TREX-DM

    In this work we have studied the application of Micromegas readout planes, a charge-amplification structure for gaseous detectors, to the field of rare-event detection: in the CAST experiment for the detection of solar axions, and in the TREX-DM project for the search for low-mass WIMPs. Both axions and WIMPs are excellent candidates to make up the dark matter of the universe, since they do not appear as ad hoc solutions to that problem but were proposed to solve important problems of the Standard Model of particle physics. Both axions and WIMPs would produce an extremely low event rate at very low energies. Micromegas detectors can reach very low background levels and energy thresholds below the keV, owing to the detector's granularity, radiopurity, response uniformity and high gain. Small gaseous detectors about 3 cm thick are used in the CAST experiment to detect the x-rays induced by axions. This work presents the background levels achieved by the experiment's detectors and the background-reduction techniques; finally, in the absence of a positive signal, limits on the axion-photon coupling constant are derived. A similar but larger version is intended for the detection of low-mass WIMPs in the TREX-DM project. TREX-DM aims to operate a high-pressure detector with a light target material equipped with Micromegas readout planes. The detector and its commissioning are described, together with the first characterization results and the sensitivity the experiment could reach if operated in an underground laboratory.
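    To make the limit-setting step concrete, here is a minimal sketch (not the thesis analysis) of how a counting limit translates into a coupling limit in a helioscope: the expected signal scales as the fourth power of the axion-photon coupling (g² from solar production, g² from Primakoff conversion in the magnet), so a Poisson upper limit on signal counts converts directly into a limit on the coupling. All numbers below are illustrative placeholders.

```python
# Classical Poisson upper limit on a helioscope signal, scaled to a
# coupling limit under the assumed N ~ g^4 signal scaling.
import numpy as np
from scipy.stats import poisson
from scipy.optimize import brentq

def signal_upper_limit(n_obs, b, cl=0.95):
    """Upper limit on the signal mean s, given n_obs observed counts
    and an expected background b."""
    f = lambda s: poisson.cdf(n_obs, b + s) - (1.0 - cl)
    return brentq(f, 0.0, 1000.0)

# Illustrative inputs: counts in the signal region and expected background.
n_obs, b = 4, 3.2
s95 = signal_upper_limit(n_obs, b)

# Hypothetical reference expectation: N_ref signal counts at coupling g_ref
# (in practice this comes from a detailed flux and efficiency model).
g_ref = 1e-10   # GeV^-1
N_ref = 25.0    # expected counts at g_ref
g95 = g_ref * (s95 / N_ref) ** 0.25
print(f"s95 = {s95:.2f} counts  ->  g_ag < {g95:.2e} GeV^-1 (95% CL)")
```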

    Machine learning for quantum and complex systems

    Machine learning now plays a pivotal role in our society, providing solutions to problems that were previously thought intractable. The meteoric rise of this technology can no doubt be attributed to the information age that we now live in. As data is continually amassed, more efficient and scalable methods are required to yield functional models and accurate inferences. Simultaneously, we have seen quantum technology come to the forefront of research and next-generation systems. These technologies promise secure information transfer, efficient computation and high-precision sensing, at levels unattainable by their classical counterparts. Although these technologies are powerful, they are necessarily more complicated and difficult to control. The combination of these two advances yields an opportunity for study, namely leveraging the power of machine learning to control and optimise quantum (and more generally complex) systems. The work presented in this thesis explores these avenues of investigation and demonstrates the potential success of machine learning methods in the domain of quantum and complex systems. One of the most crucial potential quantum technologies is the quantum memory. If we are to one day harness the true power of quantum key distribution for secure transmission of information, and of more general quantum computing tasks, it will almost certainly involve the use of quantum memories. We start by presenting the operation of the cold-atom workhorse: the magneto-optical trap (MOT). To use a cold atomic ensemble as a quantum memory, we are required to prepare the atoms using a specialised cooling sequence. During this we observe a stable, coherent optical emission exiting each end of the elongated ensemble. We characterise this behaviour and compare it to similar observations in previous work. Following this, we use the ensemble to implement a backward Raman memory. Using this scheme we are able to demonstrate an increased efficiency over that of previous forward-recall implementations. While we are limited by the optical depth of the system, we observe an efficiency more than double that of previous implementations. The MOT provides an easily accessible test bed for optimisation via machine learning techniques. As we require an efficient search method, we implement a new type of algorithm based on deep learning. We design this technique such that the artificial neural networks are placed in control of the online optimisation, rather than simply being used as surrogate models. We experimentally optimise the optical depth of the MOT using this method, by parametrising the time-varying compression sequence. We identify a new and unintuitive method for cooling the atomic ensemble which surpasses current methods. Following this initial implementation we make substantial improvements to the deep learning approach. This extends the approach to a far wider range of complex problems, which may contain extensive local minima and structure. We benchmark this algorithm against many conventional optimisation techniques and demonstrate a superior capability to optimise problems with high dimensionality. Finally, we apply this technique to a series of preliminary problems, namely the tuning of a single-electron transistor and second-order correlations from a quantum dot source.
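    The online-optimisation loop described above can be caricatured in a few lines. The sketch below is one plausible reading, not the thesis code: a small network is refit to all (parameters, cost) observations gathered so far, its predicted cost surface is descended to propose the next experimental setting, and a little exploration noise is added. Here `measure_cost` is a hypothetical stand-in for the experiment (e.g. returning negative optical depth), and all dimensions and hyperparameters are illustrative.

```python
# Neural-network-driven online optimisation of an experiment (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(0)
DIM, BOUNDS = 5, (0.0, 1.0)           # e.g. ramp parameters, normalised

def measure_cost(x):                  # placeholder for the real experiment
    return float(np.sum((x - 0.3) ** 2) + 0.01 * rng.standard_normal())

X = rng.uniform(*BOUNDS, size=(8, DIM))        # random initial settings
y = np.array([measure_cost(x) for x in X])

for step in range(40):
    # Refit the network to every observation gathered so far.
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                       random_state=step).fit(X, y)
    # Descend the learned cost surface from several random starts.
    starts = rng.uniform(*BOUNDS, size=(4, DIM))
    cands = [minimize(lambda x: net.predict(x[None, :])[0], s,
                      bounds=[BOUNDS] * DIM).x for s in starts]
    x_next = min(cands, key=lambda x: net.predict(x[None, :])[0])
    # Add exploration noise, then query the experiment at the new setting.
    x_next = np.clip(x_next + 0.05 * rng.standard_normal(DIM), *BOUNDS)
    X = np.vstack([X, x_next]); y = np.append(y, measure_cost(x_next))

print("best cost:", y.min(), "at", X[y.argmin()].round(3))
```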

    Social work with airport passengers

    Social work at the airport consists in offering social services to passengers. The main methodological position is that these people are under stress, which is characterized by a particular set of features in appearance and behavior. In such circumstances a passenger's actions attract attention. Only a person whom the passenger trusts can help, whether with documents or psychologically.