15 research outputs found

    A Parallel Divide-and-Conquer based Evolutionary Algorithm for Large-scale Optimization

    Full text link
    Large-scale optimization problems that involve thousands of decision variables have arisen extensively across various industrial areas. Although evolutionary algorithms (EAs) are a powerful optimization tool for many real-world applications, they fail to solve the emerging large-scale problems both effectively and efficiently. In this paper, we propose a novel Divide-and-Conquer (DC) based EA that not only produces high-quality solutions by solving sub-problems separately, but also fully exploits the power of parallel computing by solving the sub-problems simultaneously. Existing DC-based EAs that were deemed to enjoy the same advantages as the proposed algorithm are shown to be practically incompatible with the parallel computing scheme, unless trade-offs are made that compromise solution quality.
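
    As a plain illustration of the idea (not the authors' implementation), the sketch below decomposes an additively separable objective into independent blocks and optimizes each block with a simple (1+1)-EA on its own worker process, so the sub-problems are solved simultaneously; the objective, block sizes, and EA settings are all assumed for the example.

    import random
    from multiprocessing import Pool

    def sphere(block):
        # Illustrative sub-objective for one block of variables.
        return sum(x * x for x in block)

    def one_plus_one_ea(args):
        # A minimal (1+1)-EA on a single sub-problem; each worker runs independently.
        dim, iters, seed = args
        rng = random.Random(seed)
        best = [rng.uniform(-5, 5) for _ in range(dim)]
        best_f = sphere(best)
        for _ in range(iters):
            cand = [x + rng.gauss(0, 0.1) for x in best]
            f = sphere(cand)
            if f < best_f:
                best, best_f = cand, f
        return best, best_f

    if __name__ == "__main__":
        n_blocks, block_dim = 10, 100          # 1000 variables split into 10 blocks
        jobs = [(block_dim, 2000, s) for s in range(n_blocks)]
        with Pool() as pool:                   # sub-problems solved in parallel
            results = pool.map(one_plus_one_ea, jobs)
        print("total objective:", sum(f for _, f in results))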

    Cooperative Coevolution for Non-Separable Large-Scale Black-Box Optimization: Convergence Analyses and Distributed Accelerations

    Full text link
    Given the ubiquity of non-separable optimization problems in the real world, in this paper we analyze and extend the large-scale version of the well-known cooperative coevolution (CC), a divide-and-conquer optimization framework, on non-separable functions. First, we reveal the empirical reasons why decomposition-based methods are or are not preferred in practice on some non-separable large-scale problems, which have not been clearly pointed out in many previous CC papers. Then, we formalize CC as a continuous game model via simplification, without losing its essential property. Unlike previous evolutionary game theory for CC, our new model provides a much simpler yet useful viewpoint for analyzing its convergence, since only the pure Nash equilibrium concept is needed and more general fitness landscapes can be explicitly considered. Based on the convergence analyses, we propose a hierarchical decomposition strategy for better generalization, since for any decomposition there is a risk of getting trapped in a suboptimal Nash equilibrium. Finally, we use powerful distributed computing to accelerate it under the multi-level learning framework, which combines the fine-tuning ability from decomposition with the invariance property of CMA-ES. Experiments on a set of high-dimensional functions validate both its search performance and scalability (w.r.t. CPU cores) on a computing cluster with 400 CPU cores.
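
    The core CC loop the paper builds on can be sketched as follows: variables are partitioned into blocks, and each block is optimized in turn while the rest of a shared context vector stays frozen, which is exactly the best-response dynamic that can get trapped in a suboptimal pure Nash equilibrium on non-separable functions. The coupled objective, static decomposition, and mutation step below are illustrative assumptions, not the paper's method.

    import random

    def f(x):
        # Illustrative non-separable objective: neighboring variables are coupled.
        return sum((x[i] + 0.5 * x[(i + 1) % len(x)]) ** 2 for i in range(len(x)))

    def cc_optimize(dim=20, groups=4, cycles=50, seed=0):
        rng = random.Random(seed)
        context = [rng.uniform(-5, 5) for _ in range(dim)]             # shared context vector
        blocks = [list(range(g, dim, groups)) for g in range(groups)]  # static decomposition
        for _ in range(cycles):
            for block in blocks:
                # Best-response step: optimize one block while the others stay frozen;
                # a crude (1+1) mutation stands in for a full subcomponent optimizer.
                for _ in range(20):
                    trial = context[:]
                    for i in block:
                        trial[i] += rng.gauss(0, 0.2)
                    if f(trial) < f(context):
                        context = trial
        return context, f(context)

    print("final objective:", cc_optimize()[1])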

    Cooperative coevolutionary particle swarm using fuzzy logic for large-scale optimization

    Get PDF
    A cooperative coevolutionary framework can improve the performance of optimization algorithms on large-scale problems. In this paper, we propose a new cooperative coevolutionary algorithm that improves on our preliminary work, FuzzyPSO2. This new proposal, called CCFPSO, uses the random grouping technique, which changes the size of the subcomponents in each generation. Unlike FuzzyPSO2, CCFPSO's re-initialization of the variables, suggested by the fuzzy system, was performed on the particles with the worst fitness values. In addition, instead of updating the particles based on the global best particle, CCFPSO updates them considering the personal best particle and the neighborhood best particle. This proposal was tested on large-scale problems that resemble real-world problems (CEC2008, CEC2010), where the performance of CCFPSO was favorable in comparison with other state-of-the-art PSO variants, namely CCPSO2, SLPSO, and CSO. The experimental results indicate that using a cooperative coevolutionary PSO approach with a fuzzy logic system can improve results on high-dimensionality problems (100 to 1000 variables).
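
    The random grouping step is easy to state concretely: each generation, a subcomponent size is drawn and the variable indices are reshuffled into blocks of that size, so interacting variables have a chance of landing in the same subcomponent. A minimal sketch, with assumed dimensions and an assumed pool of sizes:

    import random

    def random_grouping(dim, group_sizes, rng):
        # Draw a subcomponent size, shuffle the indices, and cut them into blocks;
        # the partition (and block size) changes every generation.
        size = rng.choice(group_sizes)
        indices = list(range(dim))
        rng.shuffle(indices)
        return [indices[i:i + size] for i in range(0, dim, size)]

    rng = random.Random(42)
    for gen in range(3):
        print(f"generation {gen}:", random_grouping(12, [2, 3, 4, 6], rng))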

    A Competitive Divide-and-Conquer Algorithm for Unconstrained Large-Scale Black-Box Optimization

    Get PDF
    This article proposes a competitive divide-and-conquer algorithm for solving large-scale black-box optimization problems that have thousands of decision variables and no available algebraic models. We focus on problems that are partially additively separable, since this type of problem can be further decomposed into a number of smaller independent subproblems. The proposed algorithm addresses two important issues in solving large-scale black-box optimization: (1) identifying the independent subproblems without explicitly knowing the formula of the objective function, and (2) optimizing the identified black-box subproblems. First, a Global Differential Grouping (GDG) method is proposed to identify the independent subproblems. Then, a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is adopted to solve the subproblems, owing to its rotation-invariance property. GDG and CMA-ES work together under the cooperative co-evolution framework. The resulting algorithm, named CC-GDG-CMAES, is then evaluated on the CEC'2010 large-scale global optimization (LSGO) benchmark functions, which have a thousand decision variables and black-box objective functions. The experimental results show that, on most test functions evaluated in this study, GDG manages to obtain an ideal partition of the index set of the decision variables, and CC-GDG-CMAES outperforms the state-of-the-art results. Moreover, the competitive performance of the well-known CMA-ES is extended from low-dimensional to high-dimensional black-box problems.
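
    The interaction test behind differential-grouping methods can be shown in a few lines: variables i and j are treated as interacting when the effect of perturbing x_i changes after x_j is moved. This is a simplified, single-threshold sketch of that test, not GDG itself; the toy objective and constants are assumptions.

    def interact(f, x, i, j, d=1.0, eps=1e-9):
        # Flag i and j as interacting when the effect of perturbing x[i]
        # depends on the value of x[j] (differential-grouping style check).
        def shift(v, k, amount):
            w = v[:]
            w[k] += amount
            return w

        delta1 = f(shift(x, i, d)) - f(x)
        y = shift(x, j, d)
        delta2 = f(shift(y, i, d)) - f(y)
        return abs(delta1 - delta2) > eps

    # Toy objective: x0 and x1 are coupled, x2 is separable from both.
    g = lambda x: (x[0] * x[1]) ** 2 + x[2] ** 2
    print(interact(g, [0.0, 0.0, 0.0], 0, 1))   # True  -> same subproblem
    print(interact(g, [0.0, 0.0, 0.0], 0, 2))   # False -> independent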

    AutoFHE: Automated Adaption of CNNs for Efficient Evaluation over FHE

    Get PDF
    Secure inference of deep convolutional neural networks (CNNs) was recently demonstrated under RNS-CKKS. The state-of-the-art solution uses a high-order composite polynomial to approximate all ReLUs. However, it results in prohibitively high latency because bootstrapping is required to refresh zero-level ciphertexts after every Conv-BN layer. To accelerate inference of CNNs over FHE and to automatically design homomorphic evaluation architectures of CNNs, we propose AutoFHE: a bi-level multi-objective optimization framework that automatically adapts standard CNNs to polynomial CNNs. AutoFHE maximizes validation accuracy and minimizes the number of bootstrapping operations by assigning layerwise polynomial activations and searching for the placement of bootstrapping operations. As a result, AutoFHE generates diverse solutions spanning the trade-off front between accuracy and inference time. Experimental results for ResNets on encrypted CIFAR-10 under RNS-CKKS indicate that, in comparison to the state-of-the-art solution, AutoFHE reduces inference time (50 images on 50 threads) by up to 3,297 seconds (43%) while preserving accuracy (92.68%). AutoFHE also improves the accuracy of ResNet-32 by 0.48% while accelerating inference by 382 seconds (7%).
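
    The constraint driving this design is that RNS-CKKS evaluates only additions and multiplications, so every ReLU must be replaced by a polynomial surrogate whose degree trades accuracy against multiplicative depth (and hence bootstrapping). The sketch below merely fits least-squares polynomials of a few degrees to ReLU on an assumed input range to make that trade-off visible; AutoFHE's actual layerwise search and bootstrapping placement are not reproduced here.

    import numpy as np

    # Fit polynomial surrogates for ReLU on an assumed activation range;
    # higher degree -> lower error but more multiplicative depth under CKKS.
    xs = np.linspace(-4.0, 4.0, 1001)
    relu = np.maximum(xs, 0.0)

    for degree in (2, 4, 8):
        coeffs = np.polyfit(xs, relu, degree)            # least-squares fit
        err = np.max(np.abs(np.polyval(coeffs, xs) - relu))
        print(f"degree {degree}: max |poly - ReLU| = {err:.4f}")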