A statistical learning based approach for parameter fine-tuning of metaheuristics
Metaheuristics are approximation methods used to solve combinatorial optimization problems. Their performance usually depends on a set of parameters that need to be adjusted. Selecting appropriate parameter values is a costly task, as it requires time as well as advanced analytical and problem-specific skills. This paper provides an overview of the principal approaches to tackling the Parameter Setting Problem, focusing on the statistical procedures employed so far by the scientific community. In addition, a novel methodology is proposed and tested using an existing algorithm for solving the Multi-Depot Vehicle Routing Problem.
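To make the Parameter Setting Problem concrete, here is a minimal sketch of one statistical tuning strategy: sample candidate parameter sets, evaluate each over repeated runs of the target algorithm, and rank them by mean performance. The objective below is a hypothetical stand-in for a metaheuristic run, not the paper's method.

```python
import random
import statistics

def tune(objective, param_space, n_candidates=20, runs_per_candidate=5, seed=1):
    """Randomly sample parameter sets and rank them by mean objective
    value over repeated runs (lower is better)."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_candidates):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_space.items()}
        scores = [objective(params, rng) for _ in range(runs_per_candidate)]
        mean = statistics.mean(scores)
        if mean < best_score:
            best_params, best_score = params, mean
    return best_params, best_score

# Toy stand-in for one noisy metaheuristic run whose quality depends
# on a single "mutation_rate" parameter (optimum near 0.3).
def noisy_run(params, rng):
    return (params["mutation_rate"] - 0.3) ** 2 + rng.gauss(0, 0.01)

best, score = tune(noisy_run, {"mutation_rate": (0.0, 1.0)})
print(best, score)
```

Averaging over several runs per candidate is what makes the comparison statistical rather than anecdotal: a single lucky run of a stochastic algorithm says little about a parameter set.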
Chaotic multi-objective optimization based design of fractional order PIλDμ controller in AVR system
In this paper, a fractional order (FO) PIλDμ controller is designed to take care of various contradictory objective functions for an Automatic Voltage Regulator (AVR) system. An improved evolutionary Non-dominated Sorting Genetic Algorithm II (NSGA-II), which is augmented with a chaotic map for greater effectiveness, is used for the multi-objective optimization problem. The Pareto fronts showing the trade-off between different design criteria are obtained for the PIλDμ and PID controllers. A comparative analysis is done with respect to the standard PID controller to demonstrate the merits and demerits of the fractional order PIλDμ controller.
Comment: 30 pages, 14 figures
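For reference, the fractional-order PIλDμ control law generalizes the integer-order PID law by allowing a non-integer integration order λ and differentiation order μ (λ = μ = 1 recovers the standard PID controller):

```latex
u(t) = K_p \, e(t) + K_i \, D_t^{-\lambda} e(t) + K_d \, D_t^{\mu} e(t)
```

where e(t) is the tracking error and D_t denotes the fractional differintegral operator; the two extra orders λ and μ are what the multi-objective search tunes in addition to the three gains.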
Comparative Studies on Decentralized Multiloop PID Controller Design Using Evolutionary Algorithms
Decentralized PID controllers have been designed in this paper for
simultaneous tracking of individual process variables in multivariable systems
under step reference input. The controller design framework takes into account
the minimization of a weighted sum of Integral of Time multiplied Squared Error
(ITSE) and Integral of Squared Controller Output (ISCO) so as to balance the
overall tracking errors for the process variables and required variation in the
corresponding manipulated variables. Decentralized PID gains are tuned using
three popular Evolutionary Algorithms (EAs) viz. Genetic Algorithm (GA),
Evolutionary Strategy (ES) and Cultural Algorithm (CA). Credible simulation
comparisons have been reported for four benchmark 2x2 multivariable processes.
Comment: 6 pages, 9 figures
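As a rough illustration of the weighted objective described above (a sketch, not the authors' exact implementation), the two integral criteria can be approximated from sampled error and controller-output signals with a rectangle rule:

```python
def itse(t, e, dt):
    # Integral of Time multiplied Squared Error (rectangle rule).
    return sum(ti * ei * ei for ti, ei in zip(t, e)) * dt

def isco(u, dt):
    # Integral of Squared Controller Output.
    return sum(ui * ui for ui in u) * dt

def weighted_cost(t, e, u, dt, w=0.5):
    # Weighted sum balancing tracking error against control effort;
    # an EA would minimize this over the decentralized PID gains.
    return w * itse(t, e, dt) + (1.0 - w) * isco(u, dt)

# Toy samples: unit error over three steps, constant control output of 2.
t, e, u, dt = [0.0, 1.0, 2.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0], 1.0
print(weighted_cost(t, e, u, dt))  # 0.5*3 + 0.5*12 = 7.5
```

The weight w trades off the two terms: w near 1 prioritizes fast, accurate tracking; w near 0 penalizes aggressive manipulated-variable moves.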
Self-Adaptive Surrogate-Assisted Covariance Matrix Adaptation Evolution Strategy
This paper presents a novel mechanism to adapt surrogate-assisted
population-based algorithms. This mechanism is applied to ACM-ES, a recently
proposed surrogate-assisted variant of CMA-ES. The resulting algorithm,
saACM-ES, adjusts online the lifelength of the current surrogate model (the
number of CMA-ES generations before learning a new surrogate) and the surrogate
hyper-parameters. Both heuristics significantly improve the quality of the
surrogate model, yielding a significant speed-up of saACM-ES compared to the
ACM-ES and CMA-ES baselines. The empirical validation of saACM-ES on the BBOB-2012 noiseless testbed demonstrates the efficiency and scalability of the proposed approach w.r.t. the problem dimension and the population size; saACM-ES reaches new best results on some of the benchmark problems.
Comment: Genetic and Evolutionary Computation Conference (GECCO 2012)
Parameter Sensitivity Analysis of Social Spider Algorithm
Social Spider Algorithm (SSA) is a recently proposed general-purpose
real-parameter metaheuristic designed to solve global numerical optimization
problems. This work systematically benchmarks SSA on a suite of 11 functions
with different control parameters. We conduct parameter sensitivity analysis of
SSA using advanced non-parametric statistical tests to generate statistically
significant conclusion on the best performing parameter settings. The
conclusion can be adopted in future work to reduce the effort in parameter
tuning. In addition, we perform a success rate test to reveal the impact of the
control parameters on the convergence speed of the algorithm.
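A sketch of the kind of non-parametric analysis involved: the Friedman test ranks each parameter setting on every benchmark function and aggregates the ranks into a single statistic (a pure-Python sketch; the paper's actual test suite and post-hoc procedures are not reproduced here).

```python
def ranks(row):
    # Average ranks (1 = best/lowest score), with ties sharing a rank.
    order = sorted(range(len(row)), key=lambda i: row[i])
    r = [0.0] * len(row)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for idx in order[i:j + 1]:
            r[idx] = avg
        i = j + 1
    return r

def friedman_statistic(results):
    # results[b][s] = score of parameter setting s on benchmark b
    # (lower is better). Returns the Friedman chi-square statistic
    # and the mean rank of each setting.
    n, k = len(results), len(results[0])
    mean_ranks = [0.0] * k
    for row in results:
        for s, rk in enumerate(ranks(row)):
            mean_ranks[s] += rk / n
    stat = 12 * n / (k * (k + 1)) * sum(r * r for r in mean_ranks) - 3 * n * (k + 1)
    return stat, mean_ranks

# Setting 1 beats settings 2 and 3 on all four benchmarks:
stat, mean_ranks = friedman_statistic([[0.1, 0.2, 0.3]] * 4)
print(stat, mean_ranks)  # maximal statistic: the ranking never varies
```

A large statistic rejects the hypothesis that all settings perform alike, after which pairwise post-hoc tests identify which settings actually differ.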
Quality Measures of Parameter Tuning for Aggregated Multi-Objective Temporal Planning
Parameter tuning is recognized today as a crucial ingredient when tackling an
optimization problem. Several meta-optimization methods have been proposed to
find the best parameter set for a given optimization algorithm and (set of)
problem instances. When the objective of the optimization is some scalar
quality of the solution given by the target algorithm, this quality is also
used as the basis for the quality of parameter sets. But in the case of
multi-objective optimization by aggregation, the set of solutions is given by
several single-objective runs with different weights on the objectives, and it
turns out that the hypervolume of the final population of each single-objective
run might be a better indicator of the global performance of the aggregation
method than the best fitness in its population. This paper discusses this issue
on a case study in multi-objective temporal planning using the evolutionary
planner DaE-YAHSP and the meta-optimizer ParamILS. The results clearly show how
ParamILS makes a difference between both approaches, and demonstrate that
indeed, in this context, using the hypervolume indicator as ParamILS target is
the best choice. Other issues pertaining to parameter tuning in the proposed
context are also discussed.
Comment: arXiv admin note: substantial text overlap with arXiv:1305.116
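To make the hypervolume indicator concrete, here is a minimal 2-D implementation for minimization problems (just the indicator itself, independent of the DaE-YAHSP/ParamILS setup): it measures the area dominated by a population relative to a reference point, so it rewards both convergence and spread rather than only the single best fitness.

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of 2-D objective vectors (minimization),
    bounded by a reference point `ref` that is worse in both objectives."""
    # Extract the non-dominated front, sorted by the first objective.
    front, best_y = [], float("inf")
    for x, y in sorted(points):
        if y < best_y:
            front.append((x, y))
            best_y = y
    # Sweep from the largest x, adding one rectangular slab per point.
    hv, prev_x = 0.0, ref[0]
    for x, y in reversed(front):
        hv += (prev_x - x) * (ref[1] - y)
        prev_x = x
    return hv

pts = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5)]  # (2.5, 2.5) is dominated
print(hypervolume_2d(pts, ref=(4.0, 4.0)))  # 6.0
```

Using this as the tuning target scores each single-objective run by how much it contributes to covering the Pareto front, which is the distinction the paper's ParamILS experiments turn on.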
Easy over Hard: A Case Study on Deep Learning
While deep learning is an exciting new technique, the benefits of this method
need to be assessed with respect to its computational cost. This is
particularly important for deep learning since these learners need hours (to
weeks) to train the model. Such long training time limits the ability of (a)~a
researcher to test the stability of their conclusion via repeated runs with
different random seeds; and (b)~other researchers to repeat, improve, or even
refute that original work.
For example, recently, deep learning was used to find which questions in the Stack Overflow programmer discussion forum can be linked together. That deep learning system took 14 hours to execute. We show here that by applying a very simple optimizer called differential evolution (DE) to fine-tune an SVM, we can achieve similar (and sometimes better) results. The DE approach terminated in 10 minutes; i.e., 84 times faster than the deep learning method.
We offer these results as a cautionary tale to the software analytics
community and suggest that not every new innovation should be applied without
critical analysis. If researchers deploy some new and expensive process, that
work should be baselined against some simpler and faster alternatives.
Comment: 12 pages, 6 figures, accepted at FSE 2017
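The "very simple optimizer" in question can be sketched in a few dozen lines. Below is a minimal DE (rand/1/bin) loop; the objective is a hypothetical smooth stand-in for SVM validation error over log-scaled C and gamma, since a real study would train and score an SVM at each evaluation.

```python
import random

def de_minimize(f, bounds, pop_size=15, generations=40, F=0.8, CR=0.9, seed=0):
    """Minimal differential evolution (rand/1/bin) over box constraints."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutate: combine three distinct other members.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if j == j_rand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            # Greedy selection: keep the trial if it is no worse.
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Hypothetical stand-in for SVM validation error as a function of
# log10(C) and log10(gamma), with an optimum at (1, -3).
def surrogate_error(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 3.0) ** 2

best, err = de_minimize(surrogate_error, [(-2, 4), (-6, 0)])
print(best, err)
```

The point of the paper survives the simplification: this entire loop is a few hundred cheap evaluations, which is why it finishes in minutes where a deep learner needs hours.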
On-line multiobjective automatic control system generation by evolutionary algorithms
Evolutionary algorithms are applied to the on-line generation of servo-motor control systems. In this paper, the evolving population of controllers is evaluated at run-time via hardware in the loop, rather than on a simulated model. Disturbances are also introduced at run-time in order to produce robust performance. Multiobjective optimisation of both PI and Fuzzy Logic controllers is considered. Finally, an on-line implementation of Genetic Programming is presented based around the Simulink standard blockset. The on-line designed controllers are shown to be robust to both system noise and external disturbances while still demonstrating excellent steady-state and dynamic characteristics.
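A minimal sketch of the hardware-in-the-loop fitness idea: each candidate controller is exercised against the plant and scored on its tracking error. Here a simulated first-order plant stands in for the real servo-motor; on actual hardware, `plant_step` would write the actuator command and read the sensor instead.

```python
def evaluate_on_plant(controller, plant_step, setpoint, n_steps, dt):
    # Run a candidate controller against the plant and return the
    # integral of squared tracking error (lower is fitter).
    y, cost = 0.0, 0.0
    for _ in range(n_steps):
        e = setpoint - y
        u = controller(e)
        y = plant_step(y, u, dt)  # on hardware: apply u, read the sensor
        cost += e * e * dt
    return cost

# Simulated first-order plant dy/dt = (u - y) / tau, Euler-integrated.
def plant_step(y, u, dt, tau=1.0):
    return y + dt * (u - y) / tau

# Two candidate proportional controllers competing in the population.
low_gain = lambda e: 1.0 * e
high_gain = lambda e: 5.0 * e

c_low = evaluate_on_plant(low_gain, plant_step, 1.0, 1000, 0.01)
c_high = evaluate_on_plant(high_gain, plant_step, 1.0, 1000, 0.01)
print(c_low, c_high)  # the higher gain tracks the unit step more tightly
```

Evaluating on the physical loop instead of a model is what lets the evolved controllers absorb real sensor noise and disturbances, at the cost of each fitness evaluation taking real time on the rig.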