Adaptive hybrid optimization strategy for calibration and parameter estimation of physical models
A new adaptive hybrid optimization strategy, called squads, is proposed for
complex inverse analysis of computationally intensive physical models. The new
strategy is designed to be computationally efficient and robust in
identification of the global optimum (e.g. maximum or minimum value of an
objective function). It integrates a global Adaptive Particle Swarm
Optimization (APSO) strategy with a local Levenberg-Marquardt (LM) optimization
strategy using adaptive rules based on runtime performance. The global strategy
optimizes the location of a set of solutions (particles) in the parameter
space. The LM strategy is applied only to a subset of the particles at
different stages of the optimization based on the adaptive rules. After the LM
adjustment of the subset of particle positions, the updated particles are
returned to the APSO strategy. The advantages of coupling APSO and LM in the manner implemented in squads are demonstrated by comparing the performance of squads against Levenberg-Marquardt (LM), Particle Swarm Optimization
(PSO), Adaptive Particle Swarm Optimization (APSO; the TRIBES strategy), and an
existing hybrid optimization strategy (hPSO). All the strategies are tested on
2D, 5D and 10D Rosenbrock and Griewank polynomial test functions and a
synthetic hydrogeologic application to identify the source of a contaminant
plume in an aquifer. Tests are performed using a series of runs with random
initial guesses for the estimated (function/model) parameters. Squads is observed to have the best performance among all the strategies when both robustness and efficiency are taken into consideration, for all test functions and the hydrogeologic application.
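The coupling described above, a global swarm search with periodic local refinement of the best position, can be sketched in Python. This is a minimal illustration and not the squads implementation: the Levenberg-Marquardt step is replaced here by a simple finite-difference descent with backtracking, and all parameter values are arbitrary choices for a 2D Rosenbrock example.

```python
import random

def rosenbrock(x):
    """2D Rosenbrock test function; global minimum 0 at (1, 1)."""
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def local_refine(f, x, steps=100, h=1e-6):
    # Greedy finite-difference descent with backtracking: a simple
    # stand-in for the Levenberg-Marquardt step described above.
    x = list(x)
    for _ in range(steps):
        fx = f(x)
        grad = [(f(x[:i] + [x[i] + h] + x[i + 1:]) - fx) / h
                for i in range(len(x))]
        step = 1.0
        while step > 1e-12:
            cand = [xi - step * gi for xi, gi in zip(x, grad)]
            if f(cand) < fx:
                x = cand
                break
            step *= 0.5
        else:
            return x  # no improving step found at this resolution
    return x

def hybrid_pso(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0,
               w=0.7, c1=1.5, c2=1.5, refine_every=25, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
        # Periodically hand the swarm's best position to the local
        # optimizer and return the refined point to the swarm.
        if (t + 1) % refine_every == 0:
            cand = local_refine(f, gbest)
            cv = f(cand)
            if cv < gval:
                gbest, gval = cand, cv
    return gbest, gval

best, val = hybrid_pso(rosenbrock)
```

In the paper's scheme the local step is applied to a subset of particles under adaptive rules; for brevity the sketch refines only the global best at a fixed interval.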
Performance evaluation on optimisation of 200 dimensional numerical tests - results and issues
Abstract: Many tasks in science and technology require optimisation, and resolving them could bring great benefits to the community. Multidimensional problems with hundreds of optimisation parameters or more face unusual computational limitations. Algorithms that perform well in low-dimensional settings suffer severe difficulties when applied to high-dimensional spaces. This article presents an investigation of 200-dimensional scalable, heterogeneous, real-value numerical tests. For some of these tests, the optimal values depend on the number of dimensions and are virtually unknown for many dimensionalities. The dependence on initialisation for successful identification of optimal values is analysed by comparing experiments started from random initial locations with experiments started from a single location. The aims are to: (1) assess the dependence on initialisation in optimisation of 200-dimensional tests; (2) evaluate the complexity of the tests and the time required to resolve them; (3) analyse adaptation to tasks with unknown solutions; (4) identify specific peculiarities that could support performance in high dimensions; (5) identify computational limitations that numerical methods could face in high dimensions. The presented experimental results can be used for further comparison and evaluation of real-value methods.
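The initialisation comparison described in the abstract can be illustrated with a toy experiment: run the same simple optimiser from several random starting points and from a single fixed start, then compare the values reached. Everything below is a hypothetical sketch (a (1+1)-style hill climber on the sphere function at a reduced dimension), not the article's benchmark suite or methods.

```python
import random

def sphere(x):
    """Scalable separable test function; global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def hill_climb(f, start, iters=2000, step=0.5, seed=0):
    # (1+1)-style stochastic hill climber: perturb, keep if better.
    rng = random.Random(seed)
    x, fx = list(start), f(start)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return fx

dim = 20  # scaled down from the article's 200 dimensions for a quick run
rng = random.Random(1)

# Runs started from distinct random locations in [-10, 10]^dim ...
starts = [[rng.uniform(-10.0, 10.0) for _ in range(dim)] for _ in range(5)]
from_random = [hill_climb(sphere, s, seed=i) for i, s in enumerate(starts)]

# ... versus runs all started from the same fixed location.
fixed = [5.0] * dim
from_fixed = [hill_climb(sphere, fixed, seed=i) for i in range(5)]
```

Comparing the spread of `from_random` against `from_fixed` is one way to quantify how strongly results depend on initialisation.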
Towards a Better Understanding of the Local Attractor in Particle Swarm Optimization: Speed and Solution Quality
Particle Swarm Optimization (PSO) is a popular nature-inspired meta-heuristic
for solving continuous optimization problems. Although this technique is widely
used, the understanding of the mechanisms that make swarms so successful is
still limited. We present the first substantial experimental investigation of
the influence of the local attractor on the quality of exploration and
exploitation. We compare in detail classical PSO with the social-only variant
where local attractors are ignored. To measure the exploration capabilities, we
determine how frequently both variants return results in the neighborhood of
the global optimum. We measure the quality of exploitation by considering only
function values from runs that reached a search point sufficiently close to the
global optimum and then comparing in how many digits such values still deviate
from the global minimum value. It turns out that the local attractor
significantly improves the exploration, but sometimes reduces the quality of
the exploitation. As a compromise, we propose and evaluate a hybrid PSO which
switches off its local attractors at a certain point in time. The effects
mentioned can also be observed by measuring the potential of the swarm.
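A minimal sketch of the comparison described above: classical PSO keeps a personal-best (local attractor) term in the velocity update, the social-only variant drops it, and the proposed hybrid switches it off after a chosen iteration. The test function, parameter values, and switch-off point below are illustrative assumptions, not the paper's experimental setup.

```python
import random

def sphere(x):
    """Simple continuous test function; global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso(f, dim=5, n=20, iters=300, w=0.7, c1=1.5, c2=1.5,
        switch_off_at=None, seed=0):
    # Classical PSO. switch_off_at=t drops the local-attractor
    # (personal-best) term from iteration t onward, giving the hybrid
    # variant; switch_off_at=0 gives the social-only variant.
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        cognitive = c1 if (switch_off_at is None or t < switch_off_at) else 0.0
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + cognitive * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gval

classical = pso(sphere)                       # local attractor always on
social_only = pso(sphere, switch_off_at=0)    # local attractor never used
hybrid = pso(sphere, switch_off_at=150)       # switched off mid-run
```

Running each variant over many seeds and comparing final values mirrors, in miniature, the exploration-versus-exploitation comparison the paper carries out.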
Algorithm Portfolio for Individual-based Surrogate-Assisted Evolutionary Algorithms
Surrogate-assisted evolutionary algorithms (SAEAs) are powerful optimisation
tools for computationally expensive problems (CEPs). However, by the no-free-lunch theorems, a randomly selected algorithm may fail on an unknown problem, and re-running it or trying other algorithms to obtain a better solution consumes additional computational resources, a cost that is especially serious for CEPs. In this paper, we consider an algorithm portfolio for SAEAs to reduce
the risk of choosing an inappropriate algorithm for CEPs. We propose two
portfolio frameworks for very expensive problems in which the maximal number of
fitness evaluations is only five times the problem's dimension. One framework
named Par-IBSAEA runs all algorithm candidates in parallel and a more
sophisticated framework named UCB-IBSAEA employs the Upper Confidence Bound
(UCB) policy from reinforcement learning to help select the most appropriate
algorithm at each iteration. An effective reward definition is proposed for the
UCB policy. We consider three state-of-the-art individual-based SAEAs on
different problems and compare them to the portfolios built from their
instances on several benchmark problems given limited computation budgets. Our
experimental studies demonstrate that our proposed portfolio frameworks
significantly outperform any single algorithm on the set of benchmark problems.
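The UCB-based selection in UCB-IBSAEA can be sketched as a multi-armed bandit over algorithm candidates. The sketch below is a generic UCB1-style policy with a simple improvement-indicator reward; the candidate "algorithms", the reward definition, and the success probabilities are all stand-in assumptions, not the paper's SAEAs or its proposed reward.

```python
import math
import random

def ucb_select(counts, rewards, c=1.0):
    # Choose the arm with the highest upper confidence bound;
    # arms that have never been tried are selected first.
    total = sum(counts)
    best_i, best_score = 0, -float("inf")
    for i, (n, r) in enumerate(zip(counts, rewards)):
        if n == 0:
            return i
        score = r / n + c * math.sqrt(2.0 * math.log(total) / n)
        if score > best_score:
            best_i, best_score = i, score
    return best_i

# Hypothetical portfolio: each "algorithm" is modelled as a fixed
# probability of improving the incumbent solution in one iteration.
improve_prob = [0.2, 0.5, 0.8]
rng = random.Random(0)
k = len(improve_prob)
counts, rewards = [0] * k, [0.0] * k
for _ in range(200):  # 200 iterations of the evaluation budget
    i = ucb_select(counts, rewards)
    # Reward 1 if the chosen algorithm improved the incumbent, else 0
    # (a stand-in for the paper's own reward definition).
    r = 1.0 if rng.random() < improve_prob[i] else 0.0
    counts[i] += 1
    rewards[i] += r
```

Over the budget, the policy concentrates evaluations on the candidate that has produced the most improvements while still occasionally sampling the others, which is the risk-reduction behaviour the portfolio is designed for.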