
    Evolutionary approaches to optimisation in rough machining

    This thesis concerns the use of Evolutionary Computation to optimise the sequence and selection of tools and machining parameters in rough milling applications. These processes are not automated in current Computer-Aided Manufacturing (CAM) software, and this work, undertaken in collaboration with an industrial partner, aims to address this. Related research has mainly approached tool sequence optimisation using only a single tool type, and machining parameter optimisation of a single-tool sequence. In a real-world industrial setting, tools with different geometrical profiles are commonly used in combination on rough machining tasks in order to produce components with complex sculptured surfaces. This work introduces a new representation scheme and search operators to support the use of the three most commonly used tool types: end mill, ball nose and toroidal. Using these operators, single-objective metaheuristic algorithms are shown to find near-optimal solutions while surveying only a small number of tool sequences. For the first time, a multi-objective approach is taken to tool sequence optimisation. The process of ‘multi-objectivisation’ is shown to offer two benefits: escaping local optima on deceptive multimodal search spaces and providing a selection of tool sequence alternatives to a machinist. The multi-objective approach is also used to produce a varied set of near-Pareto-optimal solutions, offering different trade-offs between total machining time and total tooling costs while simultaneously optimising tool sequences and the cutting speeds of individual tools. A challenge in using computationally expensive CAM software, important for real-world machining, is the time cost of evaluations. An asynchronous parallel evolutionary optimisation system is presented that can provide a significant speed-up, even in the presence of the heterogeneous evaluation times produced by variable-length tool sequences. This system uses a distributed network of processors that could be easily and inexpensively implemented on existing commercial hardware, making it accessible to even small workshops.
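    The variable-length tool-sequence representation is the central device here. As a minimal sketch of that idea (in Python, with hypothetical tool types, diameters and operator choices; the thesis's actual operators are richer), a sequence can be encoded as a list of typed tools and varied by insertion, deletion and replacement:

```python
import random
from dataclasses import dataclass

TOOL_TYPES = ["end_mill", "ball_nose", "toroidal"]
DIAMETERS_MM = [6, 10, 16, 25]   # illustrative cutter sizes

@dataclass(frozen=True)
class Tool:
    kind: str        # one of TOOL_TYPES
    diameter: int    # cutter diameter in mm

def random_tool():
    return Tool(random.choice(TOOL_TYPES), random.choice(DIAMETERS_MM))

def mutate(sequence):
    """Vary a variable-length tool sequence with one of three operators."""
    seq = list(sequence)
    op = random.choice(["insert", "delete", "replace"])
    if op == "insert" or not seq:
        seq.insert(random.randrange(len(seq) + 1), random_tool())
    elif op == "delete" and len(seq) > 1:
        del seq[random.randrange(len(seq))]
    else:
        seq[random.randrange(len(seq))] = random_tool()
    return seq

# Example: a roughing sequence that steps down in cutter diameter.
seq = [Tool("end_mill", 25), Tool("toroidal", 16), Tool("ball_nose", 6)]
print(mutate(seq))
```

    A fitness function would then send each candidate sequence to the CAM simulator, which is exactly the expensive evaluation step that motivates the asynchronous parallel system described above.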

    Gradient boosting in automatic machine learning: feature selection and hyperparameter optimization

    The goal of automatic machine learning (AutoML) is to automate all aspects of model selection in (supervised) predictive modeling. This thesis deals with gradient boosting techniques in the context of AutoML, with a focus on gradient tree boosting and component-wise gradient boosting. Both techniques share a common methodology, but their goals are quite different. While gradient tree boosting is widely used in machine learning as a powerful prediction algorithm, the strength of component-wise gradient boosting lies in feature selection and the modeling of high-dimensional data. Extensions of component-wise gradient boosting to multidimensional prediction functions are considered as well. The challenge of hyperparameter optimization for these algorithms is discussed, focusing on Bayesian optimization and efficient early stopping strategies. A large-scale random search over the hyperparameters of several machine learning algorithms demonstrates the critical influence of hyperparameter configurations on model quality; this data can build the foundation of new AutoML and meta-learning approaches. Furthermore, advanced feature selection strategies are summarized and a new method based on shadow features (permuted variables) is introduced. Finally, an AutoML approach based on these results and best practices for feature selection and hyperparameter optimization is proposed, with the goal of simplifying and stabilizing AutoML while maintaining high prediction accuracy. It is compared to AutoML approaches that use much more complex search spaces and ensembling techniques. Four software packages for the statistical programming language R have been newly developed or extended as part of this thesis: mlrMBO, a general framework for Bayesian optimization; autoxgboost, an automatic machine learning framework that heavily utilizes gradient tree boosting; compboost, a modular framework for component-wise boosting written in C++; and gamboostLSS, a framework for component-wise boosting of generalized additive models for location, scale and shape.
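    The shadow-feature idea can be illustrated compactly. The sketch below is an assumption-laden illustration, not the thesis's exact procedure: it uses scikit-learn's GradientBoostingRegressor and a simple max-of-shadows threshold, and keeps only features whose importance exceeds that of the best permuted copy:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def shadow_feature_selection(X, y, random_state=0):
    """Keep features whose importance beats the best 'shadow' feature.

    Each shadow column is a within-column permutation of a real column,
    so any importance a shadow earns is attributable to chance alone.
    """
    rng = np.random.default_rng(random_state)
    shadows = rng.permuted(X, axis=0)       # permute each column independently
    model = GradientBoostingRegressor(random_state=random_state)
    model.fit(np.hstack([X, shadows]), y)
    imp = model.feature_importances_
    n = X.shape[1]
    threshold = imp[n:].max()               # best score achieved by pure noise
    return np.flatnonzero(imp[:n] > threshold)
```

    The returned indices are the features considered informative; everything that cannot outperform its own permuted copy is discarded.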

    Evolving hardware with genetic algorithms

    Genetic techniques are applied to the problem of electronic circuit design, with an emphasis on VLSI circuits. The goal is a tool with the performance and flexibility to attack a wide range of problems. A genetic algorithm is used to design a circuit specified by the desired input/output characteristics, and a software system is implemented to synthesize and optimize circuits using an asynchronous parallel genetic algorithm. The software is designed with object-oriented constructs in order to maintain scalability and provide for future enhancements. The system is executed on a heterogeneous network of workstations ranging from Sun Sparc Ultras to HP multiprocessors, and is tested on examples of both digital and analog CMOS VLSI circuits. Performance is measured both in the quality of the solutions and in the time taken to evolve them.
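    The core of such a system is a fitness function derived from the desired input/output characteristics. A minimal sketch for the digital case (hypothetical names; a real system would score gate- or transistor-level netlists, not Python callables) compares a candidate circuit against a target truth table:

```python
import itertools

def circuit_fitness(candidate, target_fn, n_inputs):
    """Fraction of input patterns on which the candidate circuit matches
    the specified input/output behaviour (1.0 = fully correct)."""
    matches = sum(candidate(bits) == target_fn(bits)
                  for bits in itertools.product([False, True], repeat=n_inputs))
    return matches / 2 ** n_inputs

# Example: score an imperfect candidate against a 2-input XOR specification.
target = lambda b: b[0] != b[1]
candidate = lambda b: b[0] or b[1]            # wrong only on input (1, 1)
print(circuit_fitness(candidate, target, 2))  # -> 0.75
```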

    Treasure hunt: a framework for cooperative, distributed parallel optimization

    Advisor: Prof. Dr. Daniel Weingaertner. Co-advisor: Prof. Dr. Myriam Regattieri Delgado. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defended: Curitiba, 27/05/2019. Includes references: p. 18-20. Area of concentration: Computer Science.
    Abstract: This work proposes a multilevel framework called Treasure Hunt, which is capable of distributing independent search algorithms to a large number of processing nodes. Aiming to obtain joint convergence between working nodes, Treasure Hunt proposes a driving mechanism that smoothly controls the cooperation between the multiple independent Treasure Hunt instances. The tree topology proposed by Treasure Hunt ensures quick propagation of information while providing simultaneous exploration (by parent nodes) and exploitation (by child nodes) at several levels of granularity, regardless of the number of nodes in the tree. Treasure Hunt has good fault tolerance and is partially prepared for full fault tolerance.
    As part of the methods developed during this work, an automated Iterative Partitioning method is proposed to control the balance between exploration and exploitation as the search progresses. A Convergence Stabilization Model operating in online mode is also proposed, aiming to find stopping points with a good cost/benefit trade-off for the optimization algorithms running within the Treasure Hunt instances. Experiments on classic, random and competition benchmarks of various sizes and complexities, using the search algorithms PSO, DE and CCPSO2, show that Treasure Hunt boosts the inherent characteristics of these search algorithms: it makes poorly performing algorithms comparable to well-performing ones, and enables well-performing algorithms to extend their limits to larger problems. Experiments distributing Treasure Hunt instances in a cooperative network of up to 160 processes show the robust scaling of the framework, with improved results even when a fixed wall-clock time is imposed on all distributed instances. Results show that the sampling mechanism provided by Treasure Hunt, allied to the increased cooperation between multiple evolving populations, reduces the need for large population sizes and complex search algorithms. This is especially important for real-world problems with time-consuming fitness functions. Keywords: Artificial intelligence. Optimization methods. Distributed algorithms. Convergence modeling. High dimensionality.
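    The tree topology can be sketched compactly. The toy below is illustrative only: a 1-D hill-climber stands in for PSO/DE/CCPSO2, and the driving mechanism is reduced to propagating the best solution upward. It shows parents exploring wide regions with coarse steps while children exploit narrower sub-regions with fine steps:

```python
import random

class THNode:
    """One node of a Treasure Hunt-style tree; a sketch, not the authors' code.

    Parents search coarsely over a wide interval (exploration) while each
    child searches finely inside a narrower sub-interval (exploitation).
    """
    def __init__(self, low, high, depth, branching=2, max_depth=3):
        self.low, self.high = low, high
        self.step = (high - low) / (2 ** depth)   # finer moves deeper in the tree
        self.best_x, self.best_f = None, float("inf")
        self.children = []
        if depth < max_depth:
            width = (high - low) / branching
            self.children = [THNode(low + i * width, low + (i + 1) * width,
                                    depth + 1, branching, max_depth)
                             for i in range(branching)]

    def search(self, f, iters=50):
        x = random.uniform(self.low, self.high)
        for _ in range(iters):
            cand = min(max(x + random.uniform(-self.step, self.step), self.low),
                       self.high)
            if f(cand) < f(x):
                x = cand
        self.best_x, self.best_f = x, f(x)
        # Children exploit their sub-regions; the best result propagates up.
        for child in self.children:
            child.search(f, iters)
            if child.best_f < self.best_f:
                self.best_x, self.best_f = child.best_x, child.best_f
        return self.best_x, self.best_f

root = THNode(-5.0, 5.0, depth=0)
print(root.search(lambda x: (x - 1.23) ** 2))   # converges near x = 1.23
```

    In the real framework each node would host an independent search algorithm instance on its own processor, with the driving mechanism moderating how strongly results steer the cooperating instances.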

    An Object-Oriented Programming Environment for Parallel Genetic Algorithms

    This thesis investigates an object-oriented programming environment for building parallel applications based on genetic algorithms (GAs). It describes the design of the Genetic Algorithms Manipulation Environment (GAME), which focuses on three major software development requirements: flexibility, expandability and portability. Flexibility is provided by GAME through a set of libraries containing pre-defined and parameterised components such as genetic operators and algorithms. Expandability is offered by GAME's object-oriented design, which allows applications, algorithms and genetic operators to be easily modified and adapted to satisfy diverse problems' requirements. Lastly, portability is achieved through the use of the standard C++ language, and by isolating machine and operating system dependencies in low-level modules, which are hidden from the application developer by GAME's application programming interfaces. The development of GAME is central to the Programming Environment for Applications of PArallel GENetic Algorithms project (PAPAGENA), the principal European Community (ESPRIT III) funded parallel genetic algorithms project. It has two main goals: to provide a general-purpose toolkit supporting the development and analysis of large-scale parallel genetic algorithm (PGA) applications, and to demonstrate the potential of applying evolutionary computing in diverse problem domains. The research reported in this thesis is divided into two parts: i) the analysis of GA models and the study of existing GA programming environments from an application developer's perspective; ii) the description of a general-purpose programming environment designed to help with the development of GA- and PGA-based computer programs. The studies carried out in the first part provide the necessary understanding of GAs' structure and operation to outline the requirements for the development of complex computer programs. The second part presents GAME as the result of combining development requirements, relevant features of existing environments and innovative ideas into a powerful programming environment. The system is described in terms of its abstract data structures and sub-systems that allow the representation of problems independently of any particular GA model. GAME's programming model is also presented as a general-purpose object-oriented framework for programming coarse-grained parallel applications. GAME has a modular architecture comprising five modules: the Virtual Machine, the Parallel Execution Module, the Genetic Libraries, the Monitoring Control Module, and the Graphic User Interface. GAME's genetic-oriented abstract data structures and the Virtual Machine isolate genetic operators and algorithms from low-level operations such as memory management and exception handling. The Parallel Execution Module supports GAME's object-oriented parallel programming model; it defines an application programming interface and a runtime library that allow the same parallel application, created within the environment, to run on different hardware and operating system platforms. The Genetic Libraries outline a hierarchy of components implemented as parameterised versions of standard and custom genetic operators, algorithms and applications. The Monitoring Control Module supports dynamic control and monitoring of simulations, whereas the Graphic User Interface defines a basic framework and graphic 'widgets' for displaying and entering data.
    This thesis describes the design philosophy and rationale behind these modules, covering the Virtual Machine, the Parallel Execution Module and the Genetic Libraries in more detail. The assessment discusses the system's ability to satisfy the main requirements of GA and PGA software development, as well as the features that distinguish GAME from other programming environments.
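    The flavour of such parameterised operator libraries can be suggested with a small object-oriented sketch (in Python rather than GAME's C++, with invented names): algorithms are written against an abstract operator interface, so representations and operators can be swapped without touching the algorithm code.

```python
import random
from abc import ABC, abstractmethod

class GeneticOperator(ABC):
    """Abstract operator interface, in the spirit of GAME's libraries.

    Concrete operators plug into any algorithm that talks to this
    interface, keeping algorithms independent of the representation.
    """
    @abstractmethod
    def apply(self, population, rng):
        """Transform and return a population of (genome, fitness) pairs."""

class TournamentSelection(GeneticOperator):
    def __init__(self, tournament_size=2):
        self.tournament_size = tournament_size

    def apply(self, population, rng):
        # Higher fitness wins each randomly drawn tournament.
        return [max(rng.sample(population, self.tournament_size),
                    key=lambda ind: ind[1])
                for _ in population]

# An algorithm is then just a pipeline of interchangeable operators.
pipeline = [TournamentSelection(3)]
pop = [((i,), float(i)) for i in range(10)]
for op in pipeline:
    pop = op.apply(pop, random.Random(42))
```

    A coarse-grained parallel run would then distribute whole populations, each driven by its own operator pipeline, across processors.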

    Global laminate optimization on geometrically partitioned shell structures

    A method aimed at the optimization of locally varying laminates is investigated. The structure is partitioned into geometrical sections, which are covered by global plies. A variable-length representation scheme for an evolutionary algorithm is developed; this scheme encodes the number of global plies as well as their thickness, material, and orientation. A set of genetic variation operators tailored to this particular representation is introduced, and sensitivity information assists the genetic search in the placement of reinforcements and the optimization of ply angles. The method is investigated on two benchmark applications, where it is able to find significant improvements. A case study of an airplane's side rudder illustrates the applicability of the method to typical engineering problems.
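    The encoding can be pictured as a variable-length stack of global plies, each recording which geometrical sections it covers. Below is a minimal sketch (hypothetical materials, angles and probabilities; the real scheme and its sensitivity-assisted operators are more elaborate):

```python
import random
from dataclasses import dataclass, replace

MATERIALS = ["CFRP", "GFRP"]       # illustrative material choices
ANGLES = [-45, 0, 45, 90]          # candidate ply orientations in degrees

@dataclass(frozen=True)
class GlobalPly:
    thickness_mm: float
    material: str
    angle_deg: int
    sections: frozenset            # geometrical sections covered by this ply

def random_ply(n_sections):
    covered = frozenset(s for s in range(n_sections) if random.random() < 0.5)
    return GlobalPly(round(random.uniform(0.1, 0.5), 2),
                     random.choice(MATERIALS),
                     random.choice(ANGLES),
                     covered or frozenset({0}))

def mutate_stack(stack, n_sections):
    """Variable-length variation: add a ply, drop a ply, or rotate one."""
    stack = list(stack)
    op = random.choice(["add", "drop", "rotate"])
    if op == "add" or not stack:
        stack.insert(random.randrange(len(stack) + 1), random_ply(n_sections))
    elif op == "drop" and len(stack) > 1:
        del stack[random.randrange(len(stack))]
    else:
        i = random.randrange(len(stack))
        stack[i] = replace(stack[i], angle_deg=random.choice(ANGLES))
    return stack
```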

    Cooperative Models of Particle Swarm Optimizers

    Particle Swarm Optimization (PSO) is one of the most effective optimization tools to have emerged in the last decade. Although the original aim was to simulate the behavior of a group of birds or a school of fish looking for food, it was quickly realized that the technique could be applied to optimization problems. Different directions have been taken to analyze PSO behavior as well as to improve its performance. One approach is the introduction of the concept of cooperation. This thesis focuses on studying this concept in PSO by investigating the different design decisions that influence the performance of cooperative PSO models and by introducing new approaches for information exchange. Firstly, a comprehensive survey of the cooperative PSO models proposed in the literature is compiled and a definition of what is meant by a cooperative PSO model is introduced. A taxonomy for classifying the surveyed models is given; it classifies the cooperative models based on two aspects: the approach the model uses for decomposing the problem search space, and the method used for placing particles into the different cooperating swarms. The taxonomy helps in gathering all the proposed models under one roof and in understanding the similarities and differences between them. Secondly, a number of parameters that control the performance of cooperative PSO models are identified. These parameters answer four questions: Which information to share? When to share it? Whom to share it with? And what to do with it? A complete empirical study is conducted on one of the cooperative PSO models in order to understand how performance changes under the influence of these parameters. Thirdly, a new heterogeneous cooperative PSO model is proposed, based on the exchange of probability models rather than the classical migration of particles. The model uses two swarms that combine the ideas of PSO and Estimation of Distribution Algorithms (EDAs), and is considered heterogeneous since the cooperating swarms use different approaches to sample the search space. The model is tested using different PSO models to ensure that its performance is robust against changes in the underlying population topology. The experiments show that the model is able to produce better results than its components in many cases, and it proves highly competitive when compared to a number of state-of-the-art cooperative PSO algorithms. Finally, two different versions of the PSO algorithm are applied to the FPGA placement problem. One version operates entirely in the discrete domain, the first attempt to solve this problem in that domain using a discrete PSO (DPSO); the other is implemented in the continuous domain. The PSO algorithms are applied to several well-known FPGA benchmark problems of increasing dimensionality, and the results are compared to those obtained by the academic Versatile Place and Route (VPR) placement tool, which is based on Simulated Annealing (SA). The results show that these methods are competitive for small and medium-sized problems; for larger problems, the methods provide very close results. The work also proposes different cooperative PSO approaches using the two versions, whose performances are compared to that of a single swarm.
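    The probability-model exchange can be sketched in a few lines. The toy below is a deliberately simplified stand-in for the thesis's model: one canonical PSO swarm plus a Gaussian EDA fitted to its elite, run on the sphere function. The Gaussian model is the exchanged information, and its best samples replace the PSO's worst personal bests:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n = 10, 20
# Swarm A: canonical PSO state.
pos = rng.uniform(-5, 5, (n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_f = np.array([sphere(p) for p in pos])

for it in range(200):
    gbest = pbest[pbest_f.argmin()]
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([sphere(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]

    # Swarm B (EDA): estimate a Gaussian from the elite half of swarm A ...
    elite = pbest[np.argsort(pbest_f)[: n // 2]]
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-9
    samples = rng.normal(mu, sigma, (n, dim))
    sf = np.array([sphere(s) for s in samples])
    # ... and the exchanged model's best samples replace A's worst particles.
    worst, best_s = np.argsort(pbest_f)[-3:], np.argsort(sf)[:3]
    pos[worst] = samples[best_s]
    pbest[worst], pbest_f[worst] = samples[best_s], sf[best_s]

print(pbest_f.min())   # approaches 0 on the sphere function
```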

    Parallel optimization algorithms for high performance computing: application to thermal systems

    The need for optimization is present in every field of engineering, and applications requiring a multidisciplinary approach in order to make a step forward are increasing. This leads to the need to solve complex optimization problems that exceed the capacity of the human brain or intuition. A standard way of proceeding is to use evolutionary algorithms, among which genetic algorithms hold a prominent place. These are characterized by their robustness and versatility, as well as by their high computational cost and low convergence speed. Many optimization packages are available under free software licenses and are representative of the current state of the art in optimization technology. However, the ability of optimization algorithms to adapt to massively parallel computers while reaching satisfactory efficiency levels is still an open issue. Even packages suited for multilevel parallelism encounter difficulties when dealing with objective functions involving long and variable simulation times. This variability is common in Computational Fluid Dynamics and Heat Transfer (CFD & HT), nonlinear mechanics, etc., and is nowadays a dominant concern for large-scale applications. Current research on improving the performance of evolutionary algorithms is mainly focused on developing new search algorithms. Nevertheless, there is a vast body of well-performing sequential algorithms suitable for implementation on parallel computers; the gap to be covered is efficient parallelization. Moreover, advances in the research of new search algorithms and of efficient parallelization are additive, so the enhancement of current state-of-the-art optimization software can be accelerated if both fronts are tackled simultaneously. The motivation of this Doctoral Thesis is to make a step forward towards the successful integration of Optimization and High Performance Computing capabilities, which has the potential to boost technological development by providing better designs, shortening product development times and minimizing the required resources.
    After a thorough study of the state of the art of the mathematical optimization techniques available to date, a generic mathematical optimization tool has been developed, with a special focus on applying the library to the field of Computational Fluid Dynamics and Heat Transfer (CFD & HT). The main shortcomings of the standard parallelization strategies available for genetic algorithms and similar population-based optimization methods have then been analyzed. Computational load imbalance has been identified as the key factor degrading the optimization algorithm's scalability (i.e. parallel efficiency) when the average makespan of a batch of individuals is greater than the average time required by the optimizer for inter-processor communications. It occurs because processors are often unable to finish the evaluation of their queues of individuals simultaneously and need to be synchronized before the next batch of individuals is created; the load imbalance is thus translated into idle time on some processors. Several load balancing algorithms have been proposed and exhaustively tested, and they are extendable to any other population-based optimization method that needs to synchronize all processors after the evaluation of each batch of individuals. Finally, a real-world engineering application consisting of the optimization of the refrigeration system of a power electronic device is presented as an illustrative example in which the proposed load balancing algorithms reduce the simulation time required by the optimization tool.
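    The flavour of such load balancing can be conveyed with a classic greedy heuristic. The sketch below is not one of the thesis's algorithms; it simply shows how assigning the longest (estimated) evaluations first, each to the least-loaded processor, shrinks the idle time at the synchronization barrier:

```python
import heapq

def lpt_assign(durations, n_workers):
    """Longest-processing-time-first assignment of individuals to workers.

    Individuals with the longest estimated evaluation times go first, each
    to the currently least-loaded worker, reducing idle time before the
    barrier that precedes the creation of the next batch.
    """
    heap = [(0.0, w) for w in range(n_workers)]   # (load, worker id)
    heapq.heapify(heap)
    schedule = {w: [] for w in range(n_workers)}
    for ind, t in sorted(enumerate(durations), key=lambda it: -it[1]):
        load, w = heapq.heappop(heap)
        schedule[w].append(ind)
        heapq.heappush(heap, (load + t, w))
    makespan = max(load for load, _ in heap)
    return schedule, makespan

# Variable CFD-like evaluation times across one batch of individuals.
times = [9.0, 1.0, 8.5, 1.2, 7.9, 1.1, 8.2, 0.9]
print(lpt_assign(times, 4))   # makespan near the ideal sum(times)/4 = 9.45
```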