4 research outputs found

    Improving Scalability of Evolutionary Robotics with Reformulation

    Creating systems that can operate autonomously in complex environments is a challenge for contemporary engineering techniques. Automatic design methods offer a promising alternative, but so far they have not been able to produce agents that outperform manual designs. One such method is evolutionary robotics. It has been shown to be a robust and versatile tool for designing robots to perform simple tasks, but more challenging tasks at present remain out of reach of the method. In this thesis I discuss and attack some of the problems underlying the scalability issues associated with the method. I present a new technique for evolving modular networks. I show that the performance of modularity-biased evolution depends heavily on the morphology of the robot's body and present a new method for co-evolving morphology and modular control. To be able to reason about the new technique I develop the reformulation framework: a general way to describe and reason about metaoptimization approaches. Within this framework I describe a new heuristic for developing metaoptimization approaches that is based on the technique for co-evolving morphology and modularity. I validate the framework by applying it to the practical task of zero-g autonomous assembly of structures with a fleet of small robots. Although this work focuses on evolutionary robotics, the methods and approaches developed within it can be applied to optimization problems in any domain.
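The abstract mentions modularity-biased evolution without detail. As a loose illustration only, the sketch below shows a minimal evolutionary loop in which an optional bonus term is added to the task fitness; every name and number in it (genome size, mutation scale, the bonus itself) is a hypothetical stand-in, not the thesis's actual method.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50,
           mutation_sd=0.1, modularity_bonus=None):
    """Minimal elitist evolutionary loop. The optional modularity_bonus
    term very loosely stands in for a bias toward modular controllers."""
    pop = [[random.random() for _ in range(genome_len)]
           for _ in range(pop_size)]

    def score(genome):
        s = fitness(genome)
        if modularity_bonus is not None:
            s += modularity_bonus(genome)
        return s

    for _ in range(generations):
        pop.sort(key=score, reverse=True)           # higher score is better
        parents = pop[: pop_size // 2]              # truncation selection
        children = [[x + random.gauss(0, mutation_sd) for x in p]
                    for p in parents]               # Gaussian mutation
        pop = parents + children                    # parents kept (elitism)
    return max(pop, key=score)

random.seed(1)
# Hypothetical task: pull every gene toward 0.5.
task = lambda g: -sum((x - 0.5) ** 2 for x in g)
best = evolve(task)
```

Passing a `modularity_bonus` callable would shift selection pressure without touching the task fitness, which is the general shape of a fitness-level bias.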

    Parallel optimization algorithms for high performance computing: application to thermal systems

    The need for optimization is present in every field of engineering. Moreover, the number of applications requiring a multidisciplinary approach in order to make progress is increasing. This leads to the need to solve complex optimization problems that exceed the capacity of the human brain or intuition. A standard way of proceeding is to use evolutionary algorithms, among which genetic algorithms hold a prominent place. These are characterized by their robustness and versatility, as well as their high computational cost and low convergence speed. Many optimization packages are available under free software licenses and are representative of the current state of the art in optimization technology. However, the ability of optimization algorithms to adapt to massively parallel computers while reaching satisfactory efficiency levels is still an open issue. Even packages suited to multilevel parallelism encounter difficulties when dealing with objective functions involving long and variable simulation times. This variability is common in Computational Fluid Dynamics and Heat Transfer (CFD & HT), nonlinear mechanics, etc., and is nowadays a dominant concern for large-scale applications. Current research on improving the performance of evolutionary algorithms is mainly focused on developing new search algorithms. Nevertheless, there is a vast body of well-performing sequential algorithms suitable for implementation on parallel computers; the gap to be covered is efficient parallelization. Moreover, advances in the research of new search algorithms and of efficient parallelization are additive, so the enhancement of current state-of-the-art optimization software can be accelerated if both fronts are tackled simultaneously.
The motivation of this Doctoral Thesis is to take a step forward towards the successful integration of Optimization and High Performance Computing capabilities, which has the potential to boost technological development by providing better designs, shortening product development times and minimizing the required resources. After a thorough study of the state of the art in mathematical optimization techniques, a generic mathematical optimization tool has been developed, with a special focus on applying the library to the field of Computational Fluid Dynamics and Heat Transfer (CFD & HT). The main shortcomings of the standard parallelization strategies available for genetic algorithms and similar population-based optimization methods have then been analyzed. Computational load imbalance has been identified as the key factor degrading the optimization algorithm's scalability (i.e. parallel efficiency) when the average makespan of a batch of individuals is greater than the average time the optimizer needs for inter-processor communication. It occurs because processors are often unable to finish the evaluation of their queues of individuals simultaneously and need to be synchronized before the next batch of individuals is created. Consequently, the computational load imbalance translates into idle time on some processors. Several load balancing algorithms have been proposed and exhaustively tested, and they are extendable to any other population-based optimization method that needs to synchronize all processors after the evaluation of each batch of individuals.
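The imbalance described above can be sketched numerically. The toy comparison below (with hypothetical evaluation times, not taken from the thesis) contrasts a static round-robin assignment of individuals to processors, which makes fast processors idle at the synchronization barrier, with greedy list scheduling, where the first processor to free up pulls the next individual from a shared queue:

```python
def makespan_static(times, n_procs):
    """Static round-robin assignment: individual i goes to processor
    i % n_procs; all processors then synchronize on the slowest queue."""
    queues = [0.0] * n_procs
    for i, t in enumerate(times):
        queues[i % n_procs] += t
    return max(queues)

def makespan_balanced(times, n_procs):
    """Greedy list scheduling: the processor that frees up first takes
    the next individual, which is how a dynamic load balancer keeps
    idle time down."""
    loads = [0.0] * n_procs
    for t in times:
        loads[loads.index(min(loads))] += t
    return max(loads)

# One expensive CFD-like evaluation among cheap ones (times are hypothetical).
times = [9, 1, 1, 1, 1, 1, 1, 1]
print(makespan_static(times, 2))    # → 12.0
print(makespan_balanced(times, 2))  # → 9.0
```

With static assignment one processor carries the 9-unit individual plus three others while its partner sits idle; dynamic balancing shrinks the batch makespan to the single longest evaluation.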
Finally, a real-world engineering application, the optimization of the refrigeration system of a power electronic device, has been presented as an illustrative example in which the proposed load balancing algorithms reduce the simulation time required by the optimization tool.

    Efficient learning methods to tune algorithm parameters

    This thesis focuses on the algorithm configuration problem. In particular, three efficient learning configurators are introduced to tune parameters offline. The first looks into metaoptimization, where the algorithm is expected to solve similar problem instances within varying computational budgets. Standard metaoptimization techniques have to be repeated whenever the available computational budget changes, as the parameters that work well for small budgets may not be suitable for larger ones. The proposed Flexible Budget method can, in a single run, identify the best parameter setting for every computational budget below a specified maximum, without compromising solution quality, and thus saves a great deal of time; this is shown experimentally. The second concerns Racing algorithms, which often do not fully utilize the available computational budget to find the best parameter setting, as they may terminate as soon as a single parameter setting remains in the race. The proposed Racing with reset overcomes this issue and at the same time adapts Racing's hyper-parameter α online. Experiments show that such adaptation enables the algorithm to achieve significantly lower failure rates than any fixed α set by the user. The third extends Racing with reset by allowing it to utilize all previously gathered information when it adapts α; it also permits Racing algorithms in general to allocate the budget intelligently in each iteration, as opposed to allocating it equally. All developed Racing algorithms are compared to two budget allocators from the Simulation Optimization literature, OCBA and CBA, and to equal allocation, to demonstrate under which conditions each performs best in terms of minimizing the probability of incorrect selection.
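As a rough illustration of the Racing idea only (not the thesis's Racing with reset, which adapts the significance level α of a proper statistical test online), the sketch below eliminates a parameter setting once its mean cost sits more than two standard errors above the current best; the configurations and cost function are hypothetical:

```python
import random
import statistics

def race(configs, evaluate, budget, min_samples=5):
    """Generic racing sketch: keep evaluating surviving parameter
    settings and drop any whose mean cost is clearly worse than the
    current best. The two-standard-error rule is a crude stand-in for
    the α-controlled statistical tests real Racing algorithms use."""
    scores = {c: [] for c in configs}
    alive = list(configs)
    used = 0
    while used < budget and len(alive) > 1:
        for c in list(alive):                 # one evaluation round
            scores[c].append(evaluate(c))
            used += 1
        if min(len(scores[c]) for c in alive) < min_samples:
            continue                          # not enough evidence yet
        means = {c: statistics.mean(scores[c]) for c in alive}
        best = min(means, key=means.get)
        for c in list(alive):
            if c == best:
                continue
            sem = statistics.stdev(scores[c]) / len(scores[c]) ** 0.5
            if means[c] - 2 * sem > means[best]:
                alive.remove(c)               # eliminated from the race
    return min(alive, key=lambda c: statistics.mean(scores[c]))

random.seed(0)
# Hypothetical tuning task: the "parameter" is itself the mean cost.
winner = race([1.0, 2.0, 3.0], lambda c: c + random.gauss(0, 0.05), budget=90)
print(winner)  # → 1.0
```

Note that this sketch stops as soon as one setting survives; Racing with reset would instead spend the remaining budget by reopening the race, which is exactly the shortcoming the abstract describes.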