
    Cooperative Coevolution for Non-Separable Large-Scale Black-Box Optimization: Convergence Analyses and Distributed Accelerations

    Full text link
    Given the ubiquity of non-separable optimization problems in the real world, in this paper we analyze and extend the large-scale version of the well-known cooperative coevolution (CC), a divide-and-conquer optimization framework, on non-separable functions. First, we reveal the empirical reasons why decomposition-based methods are or are not preferred in practice on some non-separable large-scale problems, which have not been clearly pointed out in many previous CC papers. Then, we formalize CC as a continuous game model via simplification, without losing its essential properties. Different from previous evolutionary game theory for CC, our new model provides a much simpler but useful viewpoint for analyzing its convergence, since only the pure Nash equilibrium concept is needed and more general fitness landscapes can be explicitly considered. Based on the convergence analyses, we propose a hierarchical decomposition strategy for better generalization, since for any single decomposition there is a risk of getting trapped in a suboptimal Nash equilibrium. Finally, we use distributed computing to accelerate it under the multi-level learning framework, which combines the fine-tuning ability of decomposition with the invariance properties of CMA-ES. Experiments on a set of high-dimensional functions validate both its search performance and its scalability (w.r.t. CPU cores) on a computing cluster with 400 CPU cores.
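
    The sketch below illustrates the basic cooperative-coevolution loop the abstract builds on: the decision vector is split into variable groups and each group is optimized in turn against a shared context solution. The solver (simple Gaussian random search), the fixed two-group decomposition, and all parameter values are illustrative assumptions, not the paper's actual method or its CMA-ES subcomponents.

```python
# Minimal sketch of a cooperative-coevolution (CC) loop: optimize one variable
# group at a time while the remaining variables are held fixed in a shared context.
import random

def cc_optimize(fitness, dim, groups, iters=200, samples=20, sigma=0.3):
    """Round-robin CC: random-search each variable group against the current context."""
    context = [random.uniform(-5, 5) for _ in range(dim)]  # best-so-far full solution
    best = fitness(context)
    for _ in range(iters):
        for group in groups:                       # optimize one subcomponent at a time
            for _ in range(samples):
                cand = list(context)
                for i in group:                    # perturb only this group's variables
                    cand[i] += random.gauss(0.0, sigma)
                f = fitness(cand)
                if f < best:                       # keep improvements in the shared context
                    best, context = f, cand
    return context, best

# Usage on a simple non-separable test function (Rosenbrock-like variable coupling).
def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

groups = [[0, 1, 2, 3], [4, 5, 6, 7]]              # a fixed 2-group decomposition
sol, val = cc_optimize(rosenbrock, dim=8, groups=groups)
print(round(val, 4))
```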

    Evolutionary computation for expensive optimization: a survey

    Get PDF
    Expensive optimization problems (EOPs) widely exist in various significant real-world applications. However, an EOP requires expensive or even unaffordable costs for evaluating candidate solutions, which makes it costly for an algorithm to find a satisfactory solution. Moreover, due to fast-growing application demands in the economy and society, such as the emergence of smart cities, the Internet of Things, and the big data era, solving EOPs more efficiently has become increasingly essential in various fields, which poses great challenges to the problem-solving ability of optimization approaches for EOPs. Among various optimization approaches, evolutionary computation (EC) is a promising global optimization tool that has been widely used for solving EOPs efficiently in the past decades. Given the fruitful advancements of EC for EOPs, it is essential to review them in order to synthesize previous research experience and provide references that aid the development of relevant research fields and real-world applications. Motivated by this, this paper aims to provide a comprehensive survey of why and how EC can solve EOPs efficiently. To this end, the paper first analyzes the total optimization cost of EC in solving EOPs. Based on this analysis, three promising research directions are identified: problem approximation and substitution, algorithm design and enhancement, and parallel and distributed computation. To the best of our knowledge, this is the first paper that outlines the possible directions for efficiently solving EOPs by analyzing the total expensive cost. Existing works are then reviewed comprehensively via a taxonomy with four parts, covering the above three research directions and real-world applications. Finally, some future research directions are also discussed. It is believed that such a survey can attract attention, encourage discussions, and stimulate new EC research ideas for solving EOPs and related real-world applications more efficiently.
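
    As a concrete illustration of the survey's first direction (problem approximation and substitution), the sketch below pre-screens candidates with a cheap surrogate and spends the expensive evaluation only on the most promising one. The 1-nearest-neighbour surrogate, the toy "expensive" function, and the budget are illustrative assumptions, not techniques or settings taken from the survey.

```python
# Minimal sketch of surrogate-assisted evolutionary search: most candidates are
# scored with a cheap surrogate, and only the best-looking one gets a real evaluation.
import math
import random

def expensive_eval(x):                      # stand-in for a costly simulation
    return sum(xi ** 2 for xi in x)

def surrogate(x, archive):                  # cheap 1-nearest-neighbour approximation
    return min(archive, key=lambda rec: math.dist(rec[0], x))[1]

dim, budget = 5, 60
archive = []                                # (solution, true fitness) pairs
for _ in range(10):                         # initial truly evaluated sample
    x = [random.uniform(-5, 5) for _ in range(dim)]
    archive.append((x, expensive_eval(x)))

while len(archive) < budget:
    parent = min(archive, key=lambda rec: rec[1])[0]
    candidates = [[xi + random.gauss(0, 0.5) for xi in parent] for _ in range(20)]
    promising = min(candidates, key=lambda c: surrogate(c, archive))
    archive.append((promising, expensive_eval(promising)))   # spend one real evaluation

best = min(archive, key=lambda rec: rec[1])
print(round(best[1], 4), "after", len(archive), "expensive evaluations")
```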

    Treasure hunt : a framework for cooperative, distributed parallel optimization

    Get PDF
    Advisor: Prof. Dr. Daniel Weingaertner. Co-advisor: Prof. Dr. Myriam Regattieri Delgado. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 27/05/2019. Includes references: p. 18-20. Area of concentration: Computer Science. Abstract: This work proposes a multilevel framework called Treasure Hunt, which is capable of distributing independent search algorithms to a large number of processing nodes. Aiming to obtain joint convergence between working nodes, Treasure Hunt proposes a driving mechanism that smoothly controls the cooperation between the multiple independent Treasure Hunt instances. The tree topology proposed by Treasure Hunt ensures quick propagation of information while providing simultaneous exploration (by parents) and exploitation (by children) at several levels of granularity, regardless of the number of nodes in the tree. Treasure Hunt has good fault tolerance and is partially prepared for full fault tolerance.
As part of the methods developed during this work, an automated Iterative Partitioning method is proposed to control the balance between exploration and exploitation as the search progresses. A Convergence Stabilization Modeling that operates in online mode is also proposed, aiming to find good cost/benefit stopping points for the optimization algorithms running within the Treasure Hunt instances. Experiments on classic, random, and competition benchmarks of various sizes and complexities, using the search algorithms PSO, DE, and CCPSO2, show that Treasure Hunt boosts the inherent characteristics of these search algorithms: it makes poorly performing algorithms comparable to good ones, and allows well-performing algorithms to extend their limits to larger problems. Experiments distributing Treasure Hunt instances over a cooperative network of up to 160 processes show the robust scaling of the framework, presenting improved results even when a fixed wall-clock time is imposed on all distributed instances. Results show that the sampling mechanism provided by Treasure Hunt, allied to the increased cooperation between multiple evolving populations, reduces the need for large population sizes and complex search algorithms. This is especially important for real-world problems with time-consuming fitness functions. Keywords: Artificial intelligence. Optimization methods. Distributed algorithms. Convergence modeling. High dimensionality.
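
    The sketch below gives a rough picture of the tree-of-instances idea described in the abstract: each node runs its own search, parents explore with a wide step size, and children exploit with a narrow step around the best point pushed down to them. The node class, step sizes, local search, and the driving rule are all illustrative assumptions, not the thesis implementation.

```python
# Rough sketch of a Treasure-Hunt-style tree: parents explore widely and
# periodically seed their children, which refine around the propagated best point.
import random

def sphere(x):
    return sum(xi ** 2 for xi in x)

class THNode:
    def __init__(self, dim, sigma, children=()):
        self.x = [random.uniform(-5, 5) for _ in range(dim)]
        self.f = sphere(self.x)
        self.sigma = sigma                 # wide for parents, narrow for children
        self.children = list(children)

    def step(self):
        cand = [xi + random.gauss(0, self.sigma) for xi in self.x]
        fc = sphere(cand)
        if fc < self.f:
            self.x, self.f = cand, fc
        for child in self.children:        # drive children toward the parent's best
            if self.f < child.f:
                child.x, child.f = list(self.x), self.f
            child.step()

root = THNode(dim=10, sigma=1.0,
              children=[THNode(10, 0.1), THNode(10, 0.1)])
for _ in range(500):
    root.step()
best = min([root] + root.children, key=lambda n: n.f)
print(round(best.f, 6))
```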

    Automated Design of Metaheuristic Algorithms: A Survey

    Full text link
    Metaheuristics have achieved great success in academia and practice because their search logic can be applied to any problem with an available solution representation, a solution quality evaluation, and certain notions of locality. Manually designing metaheuristic algorithms for a target problem is criticized for being laborious, error-prone, and requiring intensive specialized knowledge. This gives rise to increasing interest in the automated design of metaheuristic algorithms. With sufficient computing power to fully explore potential design choices, automated design could reach and even surpass human-level design and could make high-performance algorithms accessible to a much wider range of researchers and practitioners. This paper presents a broad picture of the automated design of metaheuristic algorithms by surveying the common grounds and representative techniques in terms of design space, design strategies, performance evaluation strategies, and target problems in this field.
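
    To make the design-space/design-strategy/performance-evaluation framing concrete, the sketch below runs a minimal automated design loop: candidate configurations of a small metaheuristic are sampled, each is scored on a target problem, and the best design is kept. The design space (mutation strength and restart period of a (1+1)-EA), the evaluation budget, and the target function are illustrative assumptions, not examples from the survey.

```python
# Minimal sketch of an automated-design loop: random search over a small design space,
# scoring each candidate algorithm design on a target problem.
import random

def target(x):                                   # target problem used for evaluation
    return sum((xi - 1.0) ** 2 for xi in x)

def run_design(sigma, restart_every, dim=10, evals=2000):
    """Score one design: a (1+1)-EA with the given mutation strength and restart period."""
    x = [random.uniform(-5, 5) for _ in range(dim)]
    fx, best = target(x), float("inf")
    for t in range(1, evals + 1):
        if restart_every and t % restart_every == 0:      # optional restart policy
            x = [random.uniform(-5, 5) for _ in range(dim)]
            fx = target(x)
        cand = [xi + random.gauss(0, sigma) for xi in x]
        fc = target(cand)
        if fc <= fx:
            x, fx = cand, fc
        best = min(best, fx)
    return best

design_space = {"sigma": [0.01, 0.1, 0.5, 1.0], "restart_every": [0, 200, 500]}
best_design, best_score = None, float("inf")
for _ in range(20):                              # design strategy: random sampling
    d = {k: random.choice(v) for k, v in design_space.items()}
    score = sum(run_design(**d) for _ in range(3)) / 3.0   # performance evaluation strategy
    if score < best_score:
        best_design, best_score = d, score
print(best_design, round(best_score, 4))
```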

    Hierarchical Multi-Agent Optimization for Resource Allocation in Cloud Computing

    Get PDF
    In cloud computing, an important concern is to allocate the available resources of service nodes to the requested tasks on demand while optimizing the objective function, i.e., maximizing resource utilization, payoffs, and available bandwidth. This paper proposes a hierarchical multi-agent optimization (HMAO) algorithm to maximize resource utilization and minimize bandwidth cost in cloud computing. The proposed HMAO algorithm is a combination of the genetic algorithm (GA) and a multi-agent optimization (MAO) algorithm. To maximize resource utilization, an improved GA is used to find a set of service nodes on which the requested tasks are deployed. A decentralized MAO algorithm is then presented to minimize the bandwidth cost. We study the effect of the key parameters of the HMAO algorithm with the Taguchi method and evaluate the performance results. When compared with the genetic algorithm (GA) and the fast elitist non-dominated sorting genetic algorithm (NSGA-II), the simulation results demonstrate that the HMAO algorithm is more effective than the existing solutions at solving the resource allocation problem with a large number of requested tasks. Furthermore, we compare the performance of the HMAO algorithm with a first-fit greedy approach for online resource allocation.
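
    The sketch below shows a toy version of the GA layer in such a two-level scheme: a chromosome maps each requested task to a service node, and fitness rewards packing tasks onto fewer nodes (higher utilization) while penalizing capacity violations. Node capacities, task demands, the penalty weight, and the operators are illustrative assumptions, not the HMAO paper's settings, and the bandwidth-minimizing MAO layer is not modeled.

```python
# Toy GA for task-to-node assignment: maximize utilization of the used nodes,
# penalize overloading any node's capacity.
import random

TASKS = [4, 8, 2, 6, 5, 7, 3, 9, 1, 6]     # resource demand of each requested task
NODES = [16, 16, 16, 16]                   # capacity of each service node

def fitness(assign):
    load = [0] * len(NODES)
    for task, node in enumerate(assign):
        load[node] += TASKS[task]
    used_capacity = sum(NODES[i] for i, l in enumerate(load) if l > 0)
    utilization = sum(load) / used_capacity
    overflow = sum(max(0, load[i] - NODES[i]) for i in range(len(NODES)))
    return utilization - 10.0 * overflow   # maximize utilization, punish overload

def evolve(pop_size=40, gens=100):
    pop = [[random.randrange(len(NODES)) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TASKS))
            child = a[:cut] + b[cut:]                       # one-point crossover
            if random.random() < 0.3:                       # mutation: reassign one task
                child[random.randrange(len(TASKS))] = random.randrange(len(NODES))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 3))
```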

    Deep neural networks in the cloud: Review, applications, challenges and research directions

    Get PDF
    Deep neural networks (DNNs) are currently being deployed as machine learning technology in a wide range of important real-world applications. DNNs consist of a huge number of parameters that require millions of floating-point operations (FLOPs) to be executed in both learning and prediction modes. A more effective approach is to implement DNNs in a cloud computing system equipped with centralized servers and data storage sub-systems with high-speed, high-performance computing capabilities. This paper presents an up-to-date survey on the current state of the art in deploying DNNs in cloud computing. Various DNN complexities associated with different architectures are presented and discussed alongside the necessity of using cloud computing. We also present an extensive overview of different cloud computing platforms for the deployment of DNNs and discuss them in detail. Moreover, DNN applications already deployed in cloud computing systems are reviewed to demonstrate the advantages of using cloud computing for DNNs. The paper emphasizes the challenges of deploying DNNs in cloud computing systems and provides guidance on enhancing current and new deployments. Funding: the EGIA project (KK-2022/00119) and the Consolidated Research Group MATHMODE (IT1456-22).
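
    As a back-of-the-envelope illustration of the FLOP counts that motivate cloud deployment, a fully connected layer with m inputs and n outputs costs roughly 2mn FLOPs per forward pass (one multiply and one add per weight). The layer sizes below are illustrative, not taken from the surveyed models.

```python
# Rough FLOP estimate for a small fully connected network (illustrative sizes only).
layers = [(784, 1024), (1024, 1024), (1024, 10)]   # (inputs, outputs) per dense layer

flops_per_sample = sum(2 * m * n for m, n in layers)
print(f"~{flops_per_sample / 1e6:.1f} MFLOPs per forward pass")
print(f"~{flops_per_sample * 1_000_000 / 1e12:.2f} TFLOPs for 1M predictions")
```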