Mixing global and local competition in genetic optimization based design space exploration of analog circuits
Knowledge of the optimal design-space boundaries of component circuits can be extremely useful in making good subsystem-level design decisions that are aware of parasitics and other second-order circuit-level details. However, direct application of popular multi-objective genetic optimization algorithms was found to produce Pareto fronts with poor diversity on analog circuit problems. This work proposes a novel approach to controlling solution diversity by partitioning the solution space, using local competition to promote diversity and global competition to drive convergence, and controlling the proportion of the two mechanisms with a simulated-annealing-based formulation. The algorithm was applied to obtain numerical results on analog switched-capacitor integrator circuits with a wide range of tight specifications. The results were significantly better than those of traditional uncontrolled GA-based optimization methods.
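The diversity-control idea above, local competition within partitions for diversity and global competition for convergence, with the mix annealed over time, can be sketched in a few lines. This is a deliberately minimal toy on a scalarized one-dimensional objective; the partitioning scheme, cooling schedule and fitness function are illustrative assumptions, not the paper's actual formulation:

```python
import random

random.seed(0)

def fitness(x):
    # Toy scalarized bi-objective (minimum 2.0 at x = 1): stand-in for a
    # real multi-objective circuit evaluation.
    return x * x + (x - 2.0) ** 2

def evolve(pop, n_partitions=4, t0=1.0, t_end=0.01, generations=100):
    temp = t0
    cooling = (t_end / t0) ** (1.0 / generations)
    size = len(pop) // n_partitions
    for _ in range(generations):
        # Probability of *global* competition grows as temperature falls,
        # shifting the search from diversity (local) toward convergence.
        p_global = 1.0 - temp
        pop.sort()  # partition the solution space into contiguous buckets
        next_pop = []
        for i in range(len(pop)):
            if random.random() < p_global:
                # Global competition: tournament over the whole population.
                a, b = random.sample(pop, 2)
            else:
                # Local competition: tournament within one partition only.
                part = pop[(i // size) * size:(i // size + 1) * size]
                a, b = random.sample(part, 2)
            winner = a if fitness(a) < fitness(b) else b
            # Gaussian mutation whose step size shrinks with temperature.
            next_pop.append(winner + random.gauss(0.0, 0.5 * temp + 0.05))
        pop = next_pop
        temp *= cooling
    return pop

pop = [random.uniform(-10, 10) for _ in range(40)]
best = min(evolve(pop), key=fitness)
print(round(best, 1))  # should land near the scalarized optimum x = 1
```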
A Comprehensive Review of Bio-Inspired Optimization Algorithms Including Applications in Microelectronics and Nanophotonics
The application of artificial intelligence in everyday life is becoming all-pervasive and unavoidable. Within that vast field, a special place belongs to biomimetic/bio-inspired algorithms for multiparameter optimization, which find use in a large number of areas. Novel methods and advances are published at an accelerating pace; as a result, despite the many surveys and reviews in the field, they quickly become dated, and it is important to keep pace with current developments. In this review, we first consider a possible classification of bio-inspired multiparameter optimization methods, because papers dedicated to that area are relatively scarce and often contradictory. We proceed by describing in some detail the more prominent approaches, as well as those most recently published. Finally, we consider the use of biomimetic algorithms in two related wide fields, namely microelectronics (including circuit design optimization) and nanophotonics (including inverse design of structures such as photonic crystals, nanoplasmonic configurations and metamaterials). We have attempted to keep this broad survey self-contained so that it can be of use not only to scholars in the related fields, but also to all those interested in the latest developments in this attractive area.
Automatic design of analogue circuits
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Evolvable Hardware (EHW) is a promising area in electronics today. Evolutionary Algorithms (EAs), together with a circuit simulation tool or real hardware, automatically design a circuit for a given problem. The circuits evolved may have unconventional designs and be less dependent on the personal knowledge of a designer. Today, EAs are chiefly represented by Genetic Algorithms (GA), Genetic Programming (GP) and Evolution Strategies (ES). While GA is definitely the most popular tool, GP has developed rapidly in recent years and is notable for its outstanding results. To date, however, the use of ES for analogue circuit synthesis has been limited to a few applications.
This work is devoted to exploring the potential of ES to create novel analogue designs. The narrative of the thesis starts with the framework of an ES-based system generating simple circuits, such as low-pass filters. It then progresses step by step to increasingly sophisticated designs that demand more capability from the system. Finally, it describes the modernization of the system with novel techniques that enable the synthesis of complex, newly evolved multi-pin circuits.
ES proved to be a powerful means of synthesizing analogue circuits. The circuits evolved in the first part of the thesis surpass comparable earlier results obtained with other techniques in component economy, in the functioning of the evolved circuits, and in the computing power required to reach the results. The target circuits for evolution in the second half were chosen by the author to challenge the capability of the developed system. In function they belong not to the conventional analogue domain but to applications usually served by digital circuits. To solve these design tasks, the system was gradually extended to support the evolution of increasingly complex circuits.
As a final result, a state-of-the-art ES-based system has been developed that possesses a novel mutation paradigm; can create, store and reuse substructures; adapts its mutation and selection parameters and population size; employs automatic incremental evolution; and exploits parallel computing. Capable of synthesizing the most complex multi-pin analogue circuits yet evolved automatically, the system can produce circuits that are problematic for conventional design, with application domains that lie beyond the conventional domain of analogue circuits.
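As background for the ES machinery the thesis builds on, a minimal evolution strategy can be sketched in a few lines. Here a (1+λ) ES with the classic 1/5-success-rule step-size adaptation sizes a toy RC low-pass filter toward a 1 kHz cutoff; the objective, bounds and adaptation constants are illustrative assumptions, not the thesis's actual system:

```python
import math
import random

random.seed(1)

def error(params):
    # Stand-in objective (hypothetical, replacing a circuit-simulator call):
    # deviation of an RC low-pass cutoff frequency from a 1 kHz target.
    r, c = params
    fc = 1.0 / (2.0 * math.pi * r * c)
    return abs(fc - 1000.0)

def one_plus_lambda_es(lam=8, generations=200):
    parent = [1e3, 1e-6]     # initial R (ohms) and C (farads)
    sigma = 0.3              # relative mutation strength
    for _ in range(generations):
        # Generate lambda children by multiplicative Gaussian mutation.
        children = [[max(1e-12, g * (1.0 + random.gauss(0.0, sigma)))
                     for g in parent] for _ in range(lam)]
        best = min(children, key=error)
        # 1/5 success rule: widen the search on success, narrow otherwise.
        if error(best) < error(parent):
            parent, sigma = best, min(sigma * 1.1, 1.0)
        else:
            sigma = max(sigma * 0.9, 1e-3)
    return parent

r, c = one_plus_lambda_es()
print(round(1.0 / (2.0 * math.pi * r * c)))  # cutoff in Hz, near the 1 kHz target
```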
Graduate School: Course Descriptions, 1972-73
Official publication of Cornell University V.64 1972/7
Application of entropy concepts to power system state estimation
Integrated master's thesis. Electrical and Computer Engineering (Energy major). Faculdade de Engenharia, Universidade do Porto. 200
Particle Swarm Optimization
Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations. Unlike GA, however, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book presents the contributions of top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary field.
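The update rule described above, particles pulled toward their own best and the swarm's best positions with no crossover or mutation, can be sketched as a standard textbook PSO on the sphere benchmark. The inertia and acceleration coefficients are common default choices, not values from the book:

```python
import random

random.seed(42)

def sphere(x):
    # Benchmark objective: minimum 0 at the origin.
    return sum(v * v for v in x)

def pso(dim=2, swarm_size=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Initialize random positions and zero velocities.
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]            # personal best positions
    gbest = min(pbest, key=sphere)[:]      # global best position
    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity = inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
print(sphere(best))  # should be very close to 0
```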
A multilevel approach for the systematic design of radio-frequency integrated circuits.
Thesis redacted under a confidentiality agreement. In a well-established market such as telecommunications, now evolving toward 5G, there are estimated to be more than 2 billion smartphone users today. That number is astonishing on its own, but it is nothing compared to what will happen in the very near future. The next technological boom is directly connected to the emerging Internet of Things (IoT) market: by 2020, an estimated 20 billion physical devices will be connected and communicating with one another, about 4 devices for every person on the planet. This boom will open up new and interesting investment and research opportunities; indeed, close to 3 billion dollars are expected to be invested in this market alone in 2020, 50% more than in 2017. All of these IoT devices must communicate wirelessly, which makes radio-frequency (RF) circuits indispensable. The problem is that designing RF circuits in nanometer technologies is becoming extraordinarily difficult because of their growing complexity. Combined with the critical trade-offs between circuit performances, such as power consumption, chip area and chip reliability, this reduces design productivity, a serious problem given companies' strict time-to-market constraints.
It follows that one of the areas most worth focusing on today is the development of new RF circuit design methodologies that allow the designer to obtain circuits meeting very demanding specifications in a reasonable time. Given the complex relationships between RF circuit performances (for example, phase noise versus power consumption in a voltage-controlled oscillator), RF circuit design is an extremely complicated task and must be supported by electronic design automation (EDA) tools. In an ideal scenario, designers would have an EDA tool able to generate an integrated circuit (IC) automatically, something defined in the literature as a silicon compiler: the user would simply state the desired specifications for the system, and the tool would automatically produce a fabrication-ready IC design (the physical design, or layout). For complex systems such as RF circuits, however, no such tool exists. This thesis focuses precisely on developing new design methodologies that advance the state of the art and narrow the productivity gap in RF circuit design.
RF circuit design has traditionally followed a strategy based on analytical equations derived specifically for each circuit, demanding great experience from the designer: the designer devises a strategy to design the circuit manually and, after several iterations, usually gets the circuit to meet the desired specifications. Achieving optimal performance with this methodology can be very difficult, though, because the design (or search) space is enormous (tens of design variables with hundreds of different combinations). Even if the designer reaches a solution that meets all specifications, there is no guarantee that it is the best one (for example, the one that consumes the least power). Today, optimization-based techniques are being used to help the designer find optimal design regions automatically. These methodologies try to overcome the limitations of earlier ones by using algorithms capable of a wide exploration of the design space in search of optimal-performance designs: the designer chooses the circuit specifications, selects the topology, and runs an optimization that automatically returns the value of every component of the optimal circuit (for example, transistor widths and lengths). Moreover, this design-space exploration makes it possible to study the distinct, complex trade-offs between RF circuit performances.
The RF design problem, however, is much broader than sizing each component. To approach something like a silicon compiler for RF circuits, the methodology developed in this thesis must guarantee a robust design that holds up against experimental measurements, and the optimizations must complete in reasonable times so that strict time-to-market constraints can be met. To achieve this, the methodology addresses four key aspects:
1. Integrated inductors are still a bottleneck in RF circuits. The parasitics that appear at high frequencies make inductor performance very difficult to model, so new models are needed that are more accurate yet computationally efficient enough to be embedded in optimization-based methodologies.
2. Process variations strongly affect nanometer technologies, so these variations must be taken into account during optimization to obtain a robust design.
3. In manual design methodologies, layout parasitics are usually ignored in the first design phase, so when the designer moves from the schematic to the physical design, the circuit may no longer meet its specifications. These physical effects must be considered from the earliest design stages, so the developed methodology accounts for layout parasitics from the first optimization phase onward.
4. Once the methodology can generate different RF circuits automatically (low-noise amplifiers, voltage-controlled oscillators and mixers), the thesis also addresses the composition of an RF system with a multilevel approach, in which the process starts from the design of the passive components and ends by combining different circuits into a system (for example, a radio-frequency receiver).
The thesis successfully addresses the four problems described above and considerably advances the state of the art in automatic/systematic design methodologies for RF circuits. Premio Extraordinario de Doctorado U
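The optimization-based sizing philosophy described above, specifications in, optimal component values out, can be illustrated with a deliberately tiny sketch: a random-search sizing loop over a toy square-law MOS model. The model, specification values and variable bounds are illustrative assumptions standing in for a real simulator and process design kit:

```python
import math
import random

random.seed(7)

def performance(w, l, ibias):
    # Toy square-law MOS model (hypothetical stand-in for a SPICE call):
    # gain-bandwidth rises with gm = sqrt(2 * k * (W/L) * I); power with I.
    k = 200e-6                                   # assumed transconductance parameter (A/V^2)
    gm = math.sqrt(2.0 * k * (w / l) * ibias)
    gbw = gm / (2.0 * math.pi * 1e-12)           # assumed 1 pF load capacitance
    power = 1.8 * ibias                          # assumed 1.8 V supply
    return gbw, power

def size_circuit(spec_gbw=100e6, samples=5000):
    # Optimization-based sizing: sample the design space and keep the
    # lowest-power design that still meets the gain-bandwidth spec.
    best = None
    for _ in range(samples):
        w = random.uniform(1e-6, 100e-6)         # transistor width (m)
        l = random.uniform(0.1e-6, 1e-6)         # transistor length (m)
        i = random.uniform(1e-6, 1e-3)           # bias current (A)
        gbw, power = performance(w, l, i)
        if gbw >= spec_gbw and (best is None or power < best[0]):
            best = (power, w, l, i)
    return best

power, w, l, ibias = size_circuit()
print(performance(w, l, ibias)[0] >= 100e6)  # the returned design meets the spec
```

A real flow would replace the toy model with simulator calls and random search with a genetic or swarm optimizer, but the loop structure is the same.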
Coherent Receiver Arrays for Astronomy and Remote Sensing
Monolithic Millimeter-wave Integrated Circuits (MMICs) provide a level of integration that makes possible
the construction of large focal plane arrays of radio-frequency detectors—effectively the first “Radio
Cameras”—and these will revolutionize radio-frequency observations with single dishes, interferometers,
spectrometers, and spacecraft over the next two decades. The key technological advances have been
made at the Jet Propulsion Laboratory (JPL) in collaboration with the Northrop Grumman Corporation
(NGC). Although dramatic progress has been made in the last decade in several important areas, including
(i) packaging that enables large coherent detector arrays, (ii) extending the performance of amplifiers
to much higher frequencies, and (iii) reducing room-temperature noise at high frequencies, funding to
develop MMIC performance at cryo-temperatures and at frequencies below 150 GHz has dropped nearly
to zero over the last five years. This has severely hampered the advance of the field. Moreover, because
of the high visibility of < 150 GHz cryogenic detectors in astrophysics and cosmology, lack of progress in
this area has probably had a disproportionate impact on perceptions of the potential of coherent detectors
in general.
One of the prime objectives of the Keck Institute for Space Studies (KISS) is to select crucial areas of
technological development in their embryonic stages, when relatively modest funding can have a highly
significant impact by catalyzing collaborations between key institutions world-wide, supporting in-depth
studies of the current state and potential of emerging technologies, and prototyping development of key
components—all potentially leading to strong agency follow-on funding.
The KISS large program “Coherent Instrumentation for Cosmic Microwave Background Observations”
was initiated in order to investigate the scientific potential and technical feasibility of these “Radio
Cameras.” This opens up the possibility of bringing support to this embryonic area of detector development
at a critical phase during which KISS can catalyze and launch a coherent, coordinated, worldwide
effort on the development of MMIC Arrays. A number of key questions, regarding (i) the importance and
breadth of the scientific drivers, (ii) realistic limits on sensitivity, (iii) the potential of miniaturization into
receiver “modules,” and (iv) digital signal processing, needed to be studied carefully before embarking on
a major MMIC Array development effort led by Caltech/JPL/NGC and supported by KISS, in the hope
of attracting adequate subsequent government funding. For this purpose a large study was undertaken
under the sponsorship and aegis of KISS. The study began with a workshop in Pasadena on “MMIC
Array Receivers and Spectrographs” (July 21–25, 2008), immediately after an international conference
“CMB Component Separation and the Physics of Foregrounds” (July 14–18, 2008) that was organized in
conjunction with the MMIC workshop. There was then an eight-month study period, culminating in a
final “MMIC 2 Workshop” (March 23–27, 2009). These workshops were very well attended, and brought
together the major international groups and scientists in the field of coherent radio-frequency detector
arrays. A notable aspect of the workshops is that they were well attended by young scientists—there
are many graduate students and post-doctoral fellows coming into this area. The two workshops focused
both on detailed discussions of key areas of interest and on the writing of this report. They were
conducted in a spirit of full and impartial scrutiny of the pros and cons of MMICs, in order to make an
objective assessment of their potential. It serves no useful purpose to pursue lines of technology development
based on unrealistic and over-optimistic projections. This is crucially important for KISS, Caltech,
and JPL, which can only have real impact if they deliver on the promise of the technologies they develop.
A broad range of opinions was evident at the start of the first workshop, but in the end a strong consensus
was achieved on the most important questions that had emerged. This report reflects the workshop
deliberations and that consensus.
The key scientific drivers for the development of the MMIC technology are: (i) large angular-scale B-mode
polarization observations of the cosmic microwave background—here MMICs are one of two key
technologies under development at JPL, both of which are primary detectors on the recently-launched
Planck mission; (ii) large-field spectroscopic surveys of the Galaxy and nearby galaxies at high spectral
resolution, and of galaxy clusters at low resolution; (iii) wide-field imaging via deployment as focal plane
arrays on interferometers; (iv) remote sensing of the atmosphere and Earth; and (v) wide-field imaging in
planetary missions. These science drivers are discussed in the report.
The most important single outcome of the workshops, and a sine qua non of this whole program,
is that consensus was reached that it should be possible to reduce the noise of individual HEMTs or
MMICs operating at cryogenic temperatures to less than three times the quantum limit at frequencies up
to 150 GHz, by working closely with a foundry (in this case NGC) and providing rapid feedback on the
performance of the devices they are fabricating, thus enabling tests of the effects of small changes in the
design of these transistors. This kind of partnership has been very successful in the past, but can now be
focused more intensively on cryogenic performance by carrying out tests of MMIC wafers, including tests
on a cryogenic probe station. It was felt that a properly outfitted university laboratory dedicated to this
testing and optimization would be an important element in this program, which would include MMIC
designs, wafer runs, and a wide variety of tests of MMIC performance at cryogenic temperatures.
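For reference, the quantum limit mentioned above corresponds to a noise temperature T_q = hν/k_B, which at the report's 150 GHz upper frequency works out as follows:

```python
# Quantum-limit noise temperature T_q = h*nu / k_B, and three times that
# value, at the 150 GHz upper frequency discussed in the report.
h = 6.62607015e-34      # Planck constant (J s)
kB = 1.380649e-23       # Boltzmann constant (J/K)
nu = 150e9              # frequency (Hz)

Tq = h * nu / kB        # quantum-limit noise temperature
print(round(Tq, 1), round(3 * Tq, 1))  # → 7.2 21.6
```

So the consensus goal of "less than three times the quantum limit" at 150 GHz corresponds to a receiver noise temperature of roughly 22 K.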
This Study identified eight primary areas of technology development, including the one singled out
above, which must be actively pursued in order to exploit the full potential of MMIC Arrays in a timely
fashion:
1. Reduce the noise levels of individual transistors and MMICs to three times the quantum limit or
lower at cryogenic temperatures at frequencies up to 150 GHz.
2. Integrate high-performing MMICs into the building blocks of large arrays without loss of performance.
Currently factors of two in both noise and bandwidth are lost at this step.
3. Develop high performance, low mass, inexpensive feed arrays.
4. Develop robust interconnects and wiring that allow easy fabrication and integration of large arrays.
5. Develop mass production techniques suitable for arrays of differing sizes.
6. Reduce mass and power. (Requirements will differ widely with application. In the realm of planetary
instruments, this is often the most important single requirement.)
7. Develop planar orthomode transducers with low crosstalk and broad bandwidth.
8. Develop high power and high efficiency MMIC amplifiers for LO chains, etc.
Another important outcome of the two workshops was that a number of new collaborations were
forged between leading groups worldwide with the object of focusing on the development of MMIC
arrays.
Surrogate-Based Optimization and Verification of Analog and Mixed Signal Circuits
Nonlinear analog and mixed-signal (AMS) circuits are very complex and expensive to design and verify. Deeper technology scaling has made these designs susceptible to noise and process variations, a growing concern due to the degradation of circuit performance and the risk of design failures. In fact, owing to process parameters, AMS circuits such as phase-locked loops may exhibit chaotic behavior that can be confused with noisy behavior. To design and verify circuits, current industrial practice relies heavily on simulation-based verification and knowledge-based optimization techniques. However, such techniques lack the mathematical rigor needed to keep up with growing design constraints, besides being computationally intractable. Given all the aforementioned barriers, new techniques are needed to ensure that circuits are robust and optimized despite process variations and possible chaotic behavior. In this thesis, we develop a methodology for the optimization and verification of AMS circuits, advancing three frontiers in the variability-aware design flow. The first frontier is a robust circuit-sizing methodology in which a multi-level circuit optimization approach is proposed. The optimization is conducted in two phases. First, a global sizing phase, powered by a regional sensitivity analysis, quickly scouts the feasible design space and narrows the optimization search. Second, a nominal sizing step, based on space mapping between two AMS circuit models at different levels of abstraction, breaks the re-design loop without performance penalties. The second frontier concerns a scheme for verifying the dynamics of the circuit behavior (i.e., distinguishing chaotic from stochastic behavior), based on a surrogate-generation approach and a statistical proof-by-contradiction technique using a Gaussian kernel measure in the state-space domain. The last frontier focuses on quantitative verification approaches for predicting parametric yield under both single and multiple circuit performance constraints. The single-performance approach combines geometrical intertwined reachability analysis with a non-parametric statistical verification scheme, while the multiple-performance approach involves process-parameter reduction, state-space-based pattern matching, and multiple-hypothesis-testing procedures. The performance of the proposed methodology is demonstrated on several benchmark analog and mixed-signal circuits. The optimization approach greatly improves computational efficiency while locating a comparable or better design point than other approaches, and our verification methods achieve many orders of magnitude of speedup compared to existing techniques.
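The parametric-yield prediction discussed above is commonly grounded in Monte Carlo estimation over process variations; the following sketch illustrates that baseline idea (the Gaussian performance models and spec thresholds are illustrative assumptions, not the thesis's surrogate-based method, which exists precisely to avoid this brute-force cost):

```python
import random

random.seed(3)

def meets_spec(gain_db, bandwidth_hz):
    # Performance constraints (hypothetical spec values for illustration).
    return gain_db >= 38.0 and bandwidth_hz >= 0.9e6

def estimate_yield(n=20000):
    # Monte Carlo over process variations: gain and bandwidth drawn from
    # Gaussians standing in for a full process-variation + simulator model.
    passed = 0
    for _ in range(n):
        gain = random.gauss(40.0, 1.0)      # dB: nominal 40, sigma 1
        bw = random.gauss(1e6, 0.05e6)      # Hz: nominal 1 MHz, sigma 50 kHz
        if meets_spec(gain, bw):
            passed += 1
    return passed / n

print(round(estimate_yield(), 2))  # fraction of samples meeting both specs
```

With both specs set two sigmas below nominal, each constraint passes about 97.7% of the time, so the joint yield lands near 95%; surrogate-based methods aim to reach such estimates with far fewer circuit evaluations.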