Analysis of Different Types of Regret in Continuous Noisy Optimization
The performance measure of an algorithm is a crucial part of its analysis. The performance can be determined by studying the convergence rate of the algorithm in question: one studies some (hopefully convergent) sequence that measures how good the approximated optimum is compared to the real optimum. The concept of regret is widely used in the bandit literature for assessing the performance of an algorithm. The same concept is also used in the framework of optimization algorithms, sometimes under other names or without a specific name, and the numerical evaluation of the convergence rate of noisy optimization algorithms often involves approximations of regrets. We discuss here two types of approximations of Simple Regret used in practice for the evaluation of algorithms for noisy optimization. Using specific algorithms of different nature and the noisy sphere function, we show the following results. The approximation of Simple Regret, termed here Approximate Simple Regret and used in some optimization testbeds, fails to estimate the Simple Regret convergence rate. We also discuss a recent new approximation of Simple Regret, which we term Robust Simple Regret, and show its advantages and disadvantages.
Comment: Genetic and Evolutionary Computation Conference 2016, Jul 2016, Denver, United States. 201
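For reference, the textbook definition of simple regret in noisy optimization, the quantity whose practical approximations the abstract compares, is (in LaTeX notation):

    SR_n = \mathbb{E}\left[ f(\tilde{x}_n) \right] - f(x^*)

where \tilde{x}_n is the algorithm's recommendation after n evaluations and x^* is the true optimum. The Approximate and Robust variants the paper defines differ in how this expectation is estimated from noisy samples; their exact forms are given in the paper and not reproduced here.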
Experimental study on population-based incremental learning algorithms for dynamic optimization problems
Copyright @ Springer-Verlag 2005. Evolutionary algorithms have been widely used for stationary optimization problems. However, the environments of real-world problems are often dynamic, which seriously challenges traditional evolutionary algorithms. In this paper, the application of population-based incremental learning (PBIL) algorithms, a class of evolutionary algorithms, to dynamic problems is investigated. Inspired by the complementarity mechanism in nature, a Dual PBIL is proposed, which operates on two probability vectors that are dual to each other with respect to the central point in the genotype space. A diversity-maintaining technique that combines the central probability vector into PBIL is also proposed to improve PBIL's adaptability in dynamic environments. In this paper, a new dynamic problem generator that can create required dynamics from any binary-encoded stationary problem is also formalized. Using this generator, a series of dynamic problems were systematically constructed from several benchmark stationary problems, and an experimental study was carried out to compare the performance of several PBIL algorithms and two variants of the standard genetic algorithm. Based on the experimental results, we analysed the weaknesses and strengths of the studied PBIL algorithms and identified several potential improvements to PBIL for dynamic optimization problems.
This work was supported by UK EPSRC under Grant GR/S79718/01.
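To make the dualism concrete: in PBIL a binary population is sampled from a probability vector p in [0,1]^L, and the dual vector is p's reflection through the centre (0.5, ..., 0.5), i.e. 1 - p. A minimal sketch, assuming standard PBIL; the population size and learning rate below are illustrative values, not the paper's settings:

    import numpy as np

    def sample(p, pop_size, rng):
        """Sample a binary population from probability vector p."""
        return (rng.random((pop_size, p.size)) < p).astype(int)

    def pbil_step(p, fitness, pop_size=50, lr=0.05, rng=None):
        """One PBIL generation: sample, pick the best, nudge p toward it."""
        rng = rng or np.random.default_rng()
        pop = sample(p, pop_size, rng)
        best = pop[np.argmax([fitness(x) for x in pop])]
        return (1 - lr) * p + lr * best

    # Dual PBIL additionally maintains 1 - p; after an environment change
    # that is close to a genotype-space complement, the dual vector
    # immediately yields high-quality samples.
    p = np.full(20, 0.5)   # central probability vector, L = 20 bits
    p_dual = 1.0 - p       # dual with respect to the centre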
Dual population-based incremental learning for problem optimization in dynamic environments
Copyright @ 2003 Asia Pacific Symposium on Intelligent and Evolutionary Systems. In recent years there has been growing interest in research on evolutionary algorithms for dynamic optimization problems, since real-world problems are usually dynamic, which presents serious challenges to traditional evolutionary algorithms. In this paper, we investigate the application of Population-Based Incremental Learning (PBIL) algorithms, a class of evolutionary algorithms, to problem optimization under dynamic environments. Inspired by the complementarity mechanism in nature, we propose a Dual PBIL that operates on two probability vectors that are dual to each other with respect to the central point in the search space. Using a dynamic problem generating technique, we generate a series of dynamic knapsack problems from a randomly generated stationary knapsack problem and carry out an experimental study comparing the performance of the investigated PBILs and one traditional genetic algorithm. Experimental results show that the introduction of dualism into PBIL improves its adaptability under dynamic environments, especially when the environment is subject to significant changes in the sense of the genotype space.
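A common way to realize the dynamic problem generator described in both abstracts is to XOR each candidate solution with a binary mask before evaluation; flipping mask bits changes the environment while preserving the structure of the underlying landscape. A sketch under that assumption (the generator in the papers may differ in details):

    import numpy as np

    def make_dynamic(f_stationary, length, severity, period, rng):
        """Wrap a stationary binary fitness f into a dynamic one.

        Every `period` evaluations, `severity * length` randomly chosen
        bits of the XOR mask are flipped, shifting the optimum in
        genotype space without altering the landscape structure.
        Candidate solutions x are numpy int arrays of 0/1 bits.
        """
        mask = np.zeros(length, dtype=int)
        calls = 0

        def f_dynamic(x):
            nonlocal calls
            if calls and calls % period == 0:
                flip = rng.choice(length, size=int(severity * length),
                                  replace=False)
                mask[flip] ^= 1
            calls += 1
            return f_stationary(x ^ mask)

        return f_dynamic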
Cosmic Swarms: A search for Supermassive Black Holes in the LISA data stream with a Hybrid Evolutionary Algorithm
We describe a hybrid evolutionary algorithm that can simultaneously search for multiple supermassive black hole binary (SMBHB) inspirals in LISA data. The algorithm mixes evolutionary computation, Metropolis-Hastings methods and Nested Sampling. The inspiral of SMBHBs presents an interesting problem for gravitational wave data analysis since, due to the LISA response function, the sources have a bi-modal sky solution. We show here that it is possible not only to detect multiple SMBHBs in the data stream, but also to investigate simultaneously all the various modes of the global solution. In all cases, the algorithm returns parameter determinations within the statistical uncertainty (as estimated from the Fisher Matrix) of the true answer, for both the actual and antipodal sky solutions.
Comment: submitted to Classical & Quantum Gravity. 19 pages, 4 figures
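The "hybrid" in such schemes typically means that, between evolutionary variation steps, each population member is refined by a Metropolis-Hastings accept/reject move on the log-likelihood surface. A generic sketch of that single ingredient only; the paper's actual combination with Nested Sampling is more involved, and the function names here are placeholders:

    import numpy as np

    def mh_step(theta, log_like, proposal_scale, rng):
        """One Metropolis-Hastings refinement of a candidate source's
        parameter vector, using a symmetric Gaussian proposal."""
        theta_new = theta + proposal_scale * rng.standard_normal(theta.size)
        # Accept with probability min(1, L(new)/L(old)); the proposal is
        # symmetric, so the Hastings ratio reduces to the likelihood ratio.
        if np.log(rng.random()) < log_like(theta_new) - log_like(theta):
            return theta_new
        return theta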
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. The growing interest in it arises from the fact that a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e. they are not known to be solvable in polynomial time. In practice, this means that one cannot guarantee finding an exact solution in reasonable time and has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find, "quickly" (in reasonable run-times) and with "high" probability, provably "good" solutions (with low error relative to the true optimum). In the last 20 years, a new class of algorithms commonly called metaheuristics has emerged, which combine heuristics in higher-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyse their similarities and differences. The two significant opposing forces of intensification and diversification, which largely determine the behaviour of a metaheuristic, are pointed out, as illustrated in the sketch below. The report concludes by exploring the importance of hybridization and integration methods.
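The intensification/diversification trade-off is visible in even the simplest metaheuristic skeleton, e.g. iterated local search: the inner local search intensifies around the incumbent, while a perturbation diversifies away from it. A minimal, problem-agnostic sketch; the function names are placeholders, not taken from the report:

    def iterated_local_search(initial, cost, neighbors, perturb, iters=1000):
        """Generic iterated local search skeleton.

        - the inner while-loop (greedy descent) = intensification
        - perturb(best)                         = diversification
        """
        best = initial
        current = initial
        for _ in range(iters):
            # Intensification: greedy descent to a local optimum.
            improved = True
            while improved:
                improved = False
                for n in neighbors(current):
                    if cost(n) < cost(current):
                        current, improved = n, True
                        break
            if cost(current) < cost(best):
                best = current
            # Diversification: jump away to escape the local optimum.
            current = perturb(best)
        return best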
Algorithm Portfolios for Noisy Optimization
Noisy optimization is the optimization of objective functions corrupted by noise. A portfolio of solvers is a set of solvers equipped with an algorithm selection tool for distributing the computational power among them. Portfolios are widely and successfully used in combinatorial optimization. In this work, we study portfolios of noisy optimization solvers. We obtain mathematically proven performance guarantees (in the sense that the portfolio performs nearly as well as the best of its solvers) with an ad hoc portfolio algorithm dedicated to noisy optimization. A somewhat surprising result is that it is better to compare solvers with some lag, i.e., to propose the current recommendation of the best solver based on the solvers' performance earlier in the run. An additional finding is a principled method for distributing the computational power among solvers in the portfolio.
Comment: in Annals of Mathematics and Artificial Intelligence, Springer Verlag, 201
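The lag idea can be sketched as follows: at step t the portfolio outputs the current recommendation of whichever solver looked best at step t - lag, because in noisy optimization the most recent fitness comparisons are the least reliable. An illustrative sketch, not the paper's exact algorithm; the resampling count and the interfaces are assumptions:

    def portfolio_recommend(solvers, noisy_eval, t, lag, history):
        """Recommend via the solver that was best `lag` steps ago.

        solvers    : list of objects with .recommendation(step) -> point
        noisy_eval : stochastic objective (minimized), resampled here
                     to stabilize the comparison
        history    : dict mapping step -> index of apparently best solver
        """
        past = max(0, t - lag)
        if past not in history:
            # Compare solvers on their *past* recommendations.
            scores = [
                sum(noisy_eval(s.recommendation(past)) for _ in range(10)) / 10
                for s in solvers
            ]
            history[past] = min(range(len(solvers)), key=scores.__getitem__)
        # Output the lagged winner's *current* recommendation.
        return solvers[history[past]].recommendation(t)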
Parallel ACO with a Ring Neighborhood for Dynamic TSP
This paper introduces a new parallel computing technique based on ant colony optimization for a dynamic routing problem. In the dynamic traveling salesman problem the distances between cities, viewed as travel times, are no longer fixed. The new technique uses a parallel model for a problem variant that allows a slight movement of nodes within their neighborhoods. The algorithm is tested successfully on several large data sets.
Comment: 8 pages, 1 figure; accepted, J. Information Technology Research
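The problem variant can be made concrete with a tiny model: at each time step every city drifts by a bounded offset but stays inside a disc around its original position, so edge weights (travel times) change slowly. A sketch under that reading of "slight movement within neighborhoods"; the clamping rule and radius are assumptions:

    import numpy as np

    def drift_cities(original, current, radius, rng):
        """Propose a random move for each city, then clamp it back into
        the disc of radius `radius` around its original position."""
        moved = current + rng.uniform(-radius, radius, current.shape)
        offset = moved - original
        dist = np.linalg.norm(offset, axis=1, keepdims=True)
        scale = np.minimum(1.0, radius / np.maximum(dist, 1e-12))
        return original + offset * scale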
SamACO: variable sampling ant colony optimization algorithm for continuous optimization
An ant colony optimization (ACO) algorithm offers algorithmic techniques for optimization by simulating the foraging behavior of a group of ants, performing incremental solution construction and realizing a pheromone laying-and-following mechanism. Although ACO was first designed for solving discrete (combinatorial) optimization problems, the ACO procedure is also applicable to continuous optimization. This paper presents a new way of extending ACO to continuous optimization problems by focusing on continuous variable sampling as the key to transforming ACO from discrete to continuous optimization. The proposed SamACO algorithm consists of three major steps: the generation of candidate variable values for selection, the ants' solution construction, and the pheromone update process. The distinctive characteristics of SamACO are the cooperation of a novel sampling method for discretizing the continuous search space and an efficient incremental solution construction method based on the sampled values. The performance of SamACO is tested using continuous numerical functions with unimodal and multimodal features. Compared with some state-of-the-art algorithms, including traditional ant-based algorithms and representative computational intelligence algorithms for continuous optimization, the performance of SamACO is competitive and promising.
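The three steps named in the abstract can be sketched generically: sample a discrete set of candidate values per variable, let each ant pick one value per variable biased by pheromone, then evaporate and reinforce the values used in the best solution. This is an illustrative reconstruction from the abstract, not the published SamACO pseudocode; all names and parameter values are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_candidates(best_x, lo, hi, k=10, width=0.1):
        """Step 1: per variable, draw k candidate values around the best
        known value, clipped to the search bounds [lo, hi]."""
        local = best_x[:, None] + width * (hi - lo) \
                * rng.standard_normal((len(best_x), k))
        return np.clip(local, lo, hi)

    def construct(cands, tau):
        """Step 2: each ant picks one candidate per variable with
        probability proportional to the pheromone tau (shape: dims x k)."""
        probs = tau / tau.sum(axis=1, keepdims=True)
        idx = np.array([rng.choice(cands.shape[1], p=p) for p in probs])
        return cands[np.arange(len(idx)), idx], idx

    def update_pheromone(tau, idx, rho=0.1):
        """Step 3: evaporate, then reinforce the candidate values chosen
        in the iteration-best solution."""
        tau *= (1 - rho)
        tau[np.arange(len(idx)), idx] += rho
        return tau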