
    Algorithms (X, sigma, eta): quasi-random mutations for Evolution Strategies

    Randomization is an efficient tool for global optimization. We define here a method which keeps: the order 0 of evolutionary algorithms (no gradient); the stochastic aspect of evolutionary algorithms; and the efficiency of so-called "low-dispersion" points; and which ensures, under mild assumptions, global convergence with a linear convergence rate. We use (i) sampling on a ball instead of Gaussian sampling (in a way inspired by trust regions), (ii) an original rule for step-size adaptation, and (iii) quasi-Monte Carlo sampling (low-dispersion points) instead of Monte Carlo sampling. We prove in this framework linear convergence rates (i) for global optimization and not only local optimization, and (ii) under very mild assumptions on the regularity of the function (existence of derivatives is not required). Though the main scope of this paper is theoretical, numerical experiments are provided to back up the mathematical results. Algorithm XSE: quasi-random mutations for evolution strategies. A. Auger, M. Jebalia, O. Teytaud. Proceedings of EA'2005.
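
    As a rough illustration of the ingredients listed above (mutation on a ball, low-dispersion points, and a success-based step-size rule), the Python sketch below implements a tiny (1+lambda) evolution strategy. The Halton-to-ball mapping, the 1.5/0.5 step-size factors, and the reuse of one fixed offset pattern are simplifying assumptions made for readability; the paper's actual algorithm and adaptation rule differ in detail.

        import numpy as np

        def halton(n, dim):
            """First n points of the Halton low-discrepancy sequence in [0, 1]^dim (dim <= 10)."""
            primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
            out = np.empty((n, dim))
            for j in range(dim):
                base = primes[j]
                for i in range(n):
                    frac, r, k = 1.0, 0.0, i + 1
                    while k > 0:
                        frac /= base
                        r += frac * (k % base)
                        k //= base
                    out[i, j] = r
            return out

        def ball_points(n, dim):
            """Map Halton points into the unit ball (crude clipping of the cube;
            an assumption made for brevity, not the mapping used in the paper)."""
            x = 2.0 * halton(n, dim) - 1.0                 # cube [-1, 1]^dim
            norms = np.linalg.norm(x, axis=1, keepdims=True)
            return np.where(norms > 1.0, x / norms, x)     # pull outside points onto the sphere

        def es_ball_quasirandom(f, x0, sigma0=1.0, lam=8, iters=100):
            """(1+lambda)-ES sketch with quasi-random mutations on a ball of radius sigma."""
            x, sigma = np.asarray(x0, dtype=float), sigma0
            fx = f(x)
            offsets = ball_points(lam, x.size)             # one fixed low-dispersion pattern (simplification)
            for _ in range(iters):
                cand = x + sigma * offsets
                vals = np.apply_along_axis(f, 1, cand)
                best = vals.argmin()
                if vals[best] < fx:                        # success: move and enlarge the ball
                    x, fx, sigma = cand[best], vals[best], sigma * 1.5
                else:                                      # failure: shrink the ball
                    sigma *= 0.5
            return x, fx

        sphere = lambda z: float(np.sum(z ** 2))
        print(es_ball_quasirandom(sphere, x0=[3.0, -2.0]))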

    Self Adaptive Artificial Bee Colony for Global Numerical Optimization

    The artificial bee colony (ABC) algorithm has been used in many practical cases and has demonstrated a good convergence rate. It produces new solutions according to a stochastic variance process, in which the magnitude of the perturbation matters because it affects the quality of the new solution. In this paper, we propose a self-adaptive artificial bee colony, called self-adaptive ABC, for global numerical optimization. A new self-adaptive perturbation is introduced into the basic ABC algorithm in order to improve the convergence rate. Twenty-three benchmark functions are employed to verify the performance of self-adaptive ABC. Experimental results indicate that our approach is effective and efficient. Compared with other algorithms, self-adaptive ABC performs better than, or at least comparably to, the basic ABC algorithm and other state-of-the-art approaches from the literature in terms of the quality of the solutions obtained.
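
    The following Python sketch shows the kind of perturbation update the abstract refers to: the standard employed-bee step of ABC, with a hypothetical self-adaptive scaling factor sf that bounds the perturbation and is grown or shrunk depending on how many greedy updates succeeded. The specific adaptation schedule shown here is an assumption for illustration, not the rule proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def employed_bee_step(food, fitness, f, sf):
            """One employed-bee pass of ABC with a self-adaptive perturbation magnitude sf.

            food    : (SN, D) array of current food sources (candidate solutions)
            fitness : (SN,) objective values (lower is better)
            sf      : scaling factor bounding the perturbation phi in [-sf, sf]
            """
            sn, d = food.shape
            improved = 0
            for i in range(sn):
                j = rng.integers(d)                               # dimension to perturb
                k = rng.choice([p for p in range(sn) if p != i])  # random neighbour source
                phi = rng.uniform(-sf, sf)
                cand = food[i].copy()
                cand[j] = food[i, j] + phi * (food[i, j] - food[k, j])
                fc = f(cand)
                if fc < fitness[i]:                               # greedy selection
                    food[i], fitness[i] = cand, fc
                    improved += 1
            # hypothetical self-adaptation: enlarge sf if many updates succeeded,
            # otherwise shrink it (the published schedule may differ)
            sf = min(1.0, sf * 1.1) if improved > sn // 2 else max(0.05, sf * 0.9)
            return food, fitness, sf

        sphere = lambda z: float(np.sum(z ** 2))
        food = rng.uniform(-5, 5, size=(20, 4))
        fit = np.apply_along_axis(sphere, 1, food)
        sf = 1.0
        for _ in range(100):
            food, fit, sf = employed_bee_step(food, fit, sphere, sf)
        print(fit.min(), sf)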

    Inertial Block Proximal Methods for Non-Convex Non-Smooth Optimization

    We propose inertial versions of block coordinate descent methods for solving non-convex non-smooth composite optimization problems. Our methods have three main advantages over current state-of-the-art accelerated first-order methods: (1) they allow using two different extrapolation points, one to evaluate the gradients and one to add the inertial force (we show empirically that this is more efficient than using a single extrapolation point); (2) they allow randomly picking the block of variables to update; and (3) they do not require a restarting step. We prove subsequential convergence of the generated sequence under mild assumptions, prove global convergence under some additional assumptions, and provide convergence rates. We apply the proposed methods to non-negative matrix factorization (NMF) and show that they compete favorably with state-of-the-art NMF algorithms. Experiments on non-negative approximate canonical polyadic decomposition, also known as non-negative tensor factorization, are also provided.
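
    To make the first two ingredients concrete, here is a minimal Python sketch of a randomized inertial block proximal step on a toy composite problem (least squares plus an l1 term): each iteration picks a block at random and uses two different extrapolation points, one for the gradient evaluation and one as the anchor of the proximal step. The step size, the extrapolation weights beta and gamma, and the lasso example are illustrative assumptions, not the parameter choices analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def soft_threshold(v, t):
            """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def inertial_block_prox(grad_block, prox_block, x0, n_blocks,
                                step=0.1, beta=0.4, gamma=0.2, iters=500):
            """Randomized inertial block proximal sketch with two extrapolation points."""
            x = [blk.copy() for blk in x0]
            x_prev = [blk.copy() for blk in x0]
            for _ in range(iters):
                b = rng.integers(n_blocks)                 # random block selection
                momentum = x[b] - x_prev[b]
                y_grad = x[b] + beta * momentum            # point used for the gradient
                y_inertia = x[b] + gamma * momentum        # point receiving the inertial force
                x_eval = list(x)
                x_eval[b] = y_grad                         # only the updated block is extrapolated
                g = grad_block(x_eval, b)
                x_prev[b] = x[b].copy()
                x[b] = prox_block(y_inertia - step * g, step, b)
            return x

        # toy composite problem: min 0.5 * ||A x - y||^2 + lam * ||x||_1, x split in 2 blocks
        A = rng.standard_normal((30, 10))
        y = A @ rng.standard_normal(10)
        lam = 0.1
        idx = [slice(0, 5), slice(5, 10)]

        def grad_block(xblocks, b):
            x_full = np.concatenate(xblocks)
            return (A.T @ (A @ x_full - y))[idx[b]]

        def prox_block(v, t, b):
            return soft_threshold(v, lam * t)

        x0 = [np.zeros(5), np.zeros(5)]
        x = inertial_block_prox(grad_block, prox_block, x0, n_blocks=2,
                                step=0.5 / np.linalg.norm(A, 2) ** 2)
        print(np.concatenate(x))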

    Optimal Algorithms for Non-Smooth Distributed Optimization in Networks

    In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm, called multi-step primal-dual (MSPD), and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in O(1/\sqrt{t}), the structure of the communication network only impacts a second-order term in O(1/t), where t is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS), based on a local smoothing of the objective function, and show that DRS is within a d^{1/4} multiplicative factor of the optimal convergence rate, where d is the underlying dimension.
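
    The Python sketch below illustrates the smoothing idea behind DRS rather than the MSPD or DRS algorithms themselves: each node estimates a gradient of the Gaussian smoothing of its non-smooth local objective with a two-point Monte-Carlo estimator, takes a local step, and then gossip-averages with its neighbours through a doubly stochastic mixing matrix W. The mixing matrix, step size, smoothing radius gamma, and sample count are assumed values chosen for illustration.

        import numpy as np

        rng = np.random.default_rng(2)

        def smoothed_grad(f, x, gamma=0.1, samples=20):
            """Monte-Carlo gradient of the Gaussian smoothing f_gamma(x) = E[f(x + gamma Z)],
            via the identity grad f_gamma(x) = E[(f(x + gamma Z) - f(x)) Z] / gamma, Z ~ N(0, I)."""
            fx = f(x)
            z = rng.standard_normal((samples, x.size))
            vals = np.array([f(x + gamma * zi) for zi in z])
            return ((vals - fx)[:, None] * z).mean(axis=0) / gamma

        def decentralized_smoothing(local_fs, W, d, gamma=0.1, step=0.05, iters=300):
            """Each node descends its smoothed local objective, then gossip-averages
            with its neighbours through the doubly stochastic mixing matrix W."""
            n = len(local_fs)
            X = np.zeros((n, d))                              # one row of variables per node
            for _ in range(iters):
                grads = np.stack([smoothed_grad(local_fs[i], X[i], gamma) for i in range(n)])
                X = W @ (X - step * grads)                    # local step followed by gossip averaging
            return X

        # toy network of 3 nodes, each holding a non-smooth local objective |x - t_i|_1
        W = np.array([[0.5, 0.25, 0.25],
                      [0.25, 0.5, 0.25],
                      [0.25, 0.25, 0.5]])
        targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
        local_fs = [lambda x, t=t: float(np.sum(np.abs(x - t))) for t in targets]

        X = decentralized_smoothing(local_fs, W, d=2)
        print(X.mean(axis=0))   # nodes should roughly agree near the coordinatewise median of the targets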

    Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL) optimization framework

    The simplicity and flexibility of meta-heuristic optimization algorithms have attracted considerable attention in the field of optimization. Different optimization methods, however, have algorithm-specific strengths and limitations, and selecting the best-performing algorithm for a specific problem is a tedious task. We introduce a new hybrid optimization framework, entitled Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL), which combines the strengths of different evolutionary algorithms (EAs) in a parallel computing scheme. During population evolution, SC-SAHEL exploits the differing characteristics of the participating EAs, such as their capability to escape local attractions, their speed, and their convergence behavior, since each EA is suited to different response surfaces. The SC-SAHEL algorithm is benchmarked on 29 conceptual test functions and a real-world hydropower reservoir model case study. Results show that the hybrid SC-SAHEL algorithm is rigorous and effective in finding the global optimum for a majority of test cases, and that it is computationally efficient in comparison to running the individual EAs alone.
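
    A minimal Python skeleton of the shuffled-complex, multi-engine idea follows: the population is ranked, dealt into complexes, each complex is evolved for a few steps by one of the competing EAs, and the merged population is re-ranked and re-dealt on the next round. The two toy engines (a DE-style step and a Gaussian random walk) and the round-robin assignment of EAs to complexes are illustrative assumptions; SC-SAHEL's actual self-adaptive selection and removal of under-performing EAs is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(3)

        def shuffled_multi_ea(f, bounds, eas, pop_size=40, n_complexes=4, rounds=20, inner_steps=5):
            """Skeleton of a shuffled-complex hybrid: split the population into complexes,
            evolve each complex with one of the competing EAs, then shuffle and repeat.
            `eas` maps a name to a function evolve(complex_pop, complex_fit, f) -> (pop, fit)."""
            dim = len(bounds)
            lo, hi = np.array(bounds, dtype=float).T
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            fit = np.apply_along_axis(f, 1, pop)
            ea_names = list(eas)
            for _ in range(rounds):
                order = np.argsort(fit)                      # rank the whole population
                pop, fit = pop[order], fit[order]
                for c in range(n_complexes):                 # deal members round-robin into complexes
                    members = np.arange(c, pop_size, n_complexes)
                    ea = eas[ea_names[c % len(ea_names)]]    # round-robin EA assignment (simplification)
                    for _ in range(inner_steps):
                        pop[members], fit[members] = ea(pop[members], fit[members], f)
                # shuffling happens implicitly: the next round re-ranks and re-deals the merged population
            best = fit.argmin()
            return pop[best], fit[best]

        def de_step(pop, fit, f, F=0.7, cr=0.9):
            """One differential-evolution-style generation (rand/1/bin) used as a competing EA."""
            n, d = pop.shape
            for i in range(n):
                a, b, c = pop[rng.choice(n, 3, replace=False)]
                mask = rng.random(d) < cr
                trial = np.where(mask, a + F * (b - c), pop[i])
                ft = f(trial)
                if ft < fit[i]:
                    pop[i], fit[i] = trial, ft
            return pop, fit

        def random_walk_step(pop, fit, f, sigma=0.3):
            """A simple Gaussian-mutation EA standing in for a second search engine."""
            n, d = pop.shape
            cand = pop + sigma * rng.standard_normal((n, d))
            cf = np.apply_along_axis(f, 1, cand)
            better = cf < fit
            pop[better], fit[better] = cand[better], cf[better]
            return pop, fit

        sphere = lambda z: float(np.sum(z ** 2))
        best_x, best_f = shuffled_multi_ea(sphere, bounds=[(-5, 5)] * 4,
                                           eas={"DE": de_step, "RW": random_walk_step})
        print(best_x, best_f)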