38,882 research outputs found
A comparative study of metaheuristic algorithms for the fertilizer optimization problem
Hard combinatorial optimization (CO) problems pose challenges to traditional algorithmic solutions. The search space usually contains a large number of local optima, and the computational cost of reaching a global optimum may be too high for practical use. In this work, we conduct a comparative study of several state-of-the-art metaheuristic algorithms for solving hard CO problems. Our study is motivated by an industrial application called Fertilizer Blends Optimization. We focus on a number of local search metaheuristics and analyze their performance in terms of both runtime efficiency and solution quality. We show that local search granularity (move step size) and the downhill move probability are two major factors that affect algorithm performance, and we demonstrate how experimental tuning can be applied to obtain good performance from the algorithms. Our empirical results suggest that the well-known Simulated Annealing (SA) algorithm performs best on the fertilizer problem. The simple Iterated Improvement Algorithm (IIA) also performed surprisingly well by combining strict uphill moves with random neighborhood selection. A novel approach, the Delivery Network Model (DNM) algorithm, was also shown to be competitive, but it has the disadvantage of being very sensitive to local search granularity. The constructive local search method (GRASP), which combines heuristic space sampling and local search, outperformed IIA, which lacks a construction phase; however, the improvement in performance is limited and, generally speaking, local search performance is not sensitive to initial search positions in the fertilizer problem we studied.
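The two factors this abstract highlights, move step size and downhill move probability, both appear explicitly in a standard simulated annealing loop. A minimal sketch (with a hypothetical one-dimensional objective, not the fertilizer blend model) might look like:

```python
import math
import random

def simulated_annealing(objective, x0, step_size=0.1, t0=1.0,
                        cooling=0.995, iterations=5000, seed=0):
    """Minimize `objective` with SA. `step_size` is the local search
    granularity the abstract discusses; the Metropolis rule below
    determines the downhill (cost-increasing) move probability."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iterations):
        # random neighbour within the move step size
        cand = x + rng.uniform(-step_size, step_size)
        fc = objective(cand)
        # always accept improvements; accept worse moves with
        # probability exp(-delta / t) (the Metropolis criterion)
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

As the temperature `t` decays, the downhill move probability shrinks and the algorithm degenerates into strict iterated improvement, which matches the abstract's observation that IIA with random neighborhood selection is a strong baseline.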
Quantifying the Impact of Parameter Tuning on Nature-Inspired Algorithms
The problem of parameterization is often central to the effective deployment
of nature-inspired algorithms. However, finding the optimal set of parameter
values for a combination of problem instance and solution method is highly
challenging, and few concrete guidelines exist on how and when such tuning may
be performed. Previous work tends to either focus on a specific algorithm or
use benchmark problems, and both of these restrictions limit the applicability
of any findings. Here, we examine a number of different algorithms, and study
them in a "problem agnostic" fashion (i.e., one that is not tied to specific
instances) by considering their performance on fitness landscapes with varying
characteristics. Using this approach, we make a number of observations on which
algorithms may (or may not) benefit from tuning, and in which specific
circumstances.
Comment: 8 pages, 7 figures. Accepted at the European Conference on
Artificial Life (ECAL) 2013, Taormina, Italy.
Dynamic Control of Explore/Exploit Trade-Off In Bayesian Optimization
Bayesian optimization offers the possibility of optimizing black-box
functions not accessible through traditional techniques. The success of
Bayesian optimization methods such as Expected Improvement (EI) is
significantly affected by the degree of trade-off between exploration and
exploitation. Too much exploration can lead to inefficient optimization
protocols, whilst too much exploitation leaves the protocol open to strong
initial biases, and a high chance of getting stuck in a local minimum.
Typically, a constant margin is used to control this trade-off, which results
in yet another hyper-parameter to be optimized. We propose contextual
improvement as a simple, yet effective heuristic to counter this - achieving a
one-shot optimization strategy. Our proposed heuristic can be swiftly
calculated and improves both the speed and robustness of discovery of optimal
solutions. We demonstrate its effectiveness on both synthetic and real world
problems and explore the unaccounted for uncertainty in the pre-determination
of search hyperparameters controlling the explore-exploit trade-off.
Comment: Accepted for publication in the proceedings of the 2018 Computing
Conference.
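The "constant margin" the abstract refers to is usually an additive term (often written xi) inside the Expected Improvement acquisition function. A minimal sketch of EI for minimization, given a surrogate's posterior mean and standard deviation at candidate points (the function name and the xi default are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """EI for minimization. `mu`, `sigma`: surrogate posterior mean and
    std at candidate points; `f_best`: best observed value so far.
    `xi` is the constant exploration margin: larger values favour
    exploration, xi = 0 favours exploitation."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    imp = f_best - mu - xi  # predicted improvement over the incumbent
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.where(sigma > 0, imp / sigma, 0.0)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    # zero EI (pure exploitation value) where the surrogate is certain
    return np.where(sigma > 0, np.maximum(ei, 0.0), 0.0)
```

Because `xi` is fixed ahead of time, it becomes one more hyperparameter to tune, which is exactly the issue the proposed contextual improvement heuristic is meant to remove.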
Fuzzy Adaptive Tuning of a Particle Swarm Optimization Algorithm for Variable-Strength Combinatorial Test Suite Generation
Combinatorial interaction testing is an important software testing technique
that has seen lots of recent interest. It can reduce the number of test cases
needed by considering interactions between combinations of input parameters.
Empirical evidence shows that it effectively detects faults, in particular, for
highly configurable software systems. In real-world software testing, the
input variables may vary in how strongly they interact; variable-strength
combinatorial interaction testing (VS-CIT) can exploit this for higher
effectiveness. The generation of variable-strength test suites is a
non-deterministic polynomial-time (NP) hard computational problem
\cite{BestounKamalFuzzy2017}. Research has shown that stochastic
population-based algorithms such as particle swarm optimization (PSO) can be
efficient compared to alternatives for VS-CIT problems. Nevertheless, they
require detailed control for the exploitation and exploration trade-off to
avoid premature convergence (i.e. being trapped in local optima) as well as to
enhance the solution diversity. Here, we present a new variant of PSO based on
Mamdani fuzzy inference system
\cite{Camastra2015,TSAKIRIDIS2017257,KHOSRAVANIAN2016280}, to permit adaptive
selection of its global and local search operations. We detail the design of
this combined algorithm and evaluate it through experiments on multiple
synthetic and benchmark problems. We conclude that fuzzy adaptive selection of
global and local search operations is, at least, feasible as it performs only
second-best to a discrete variant of PSO, called DPSO. Concerning obtaining the
best mean test suite size, the fuzzy adaptation even outperforms DPSO
occasionally. We discuss the reasons behind this performance and outline
relevant areas of future work.
Comment: 21 pages.
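The global/local balance that the fuzzy controller adapts online enters PSO through the inertia weight and the cognitive/social coefficients of the canonical velocity update. A minimal sketch of one update step, with those coefficients kept fixed (the adaptive fuzzy selection from the paper is not reproduced here):

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO update over a (particles, dims) swarm.
    `w` (inertia), `c1` (cognitive) and `c2` (social) set the
    exploration/exploitation balance that a fuzzy inference system
    could adapt per iteration; this sketch keeps them constant."""
    rng = np.random.default_rng(0) if rng is None else rng
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```

A high inertia weight `w` preserves momentum (global search), while a low one lets the attraction toward personal and global bests dominate (local search); that is the dial an adaptive controller turns to delay premature convergence.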
A controlled migration genetic algorithm operator for hardware-in-the-loop experimentation
In this paper, we describe the development of an extended migration operator, which combats the negative effects of noise on the effective search capabilities of genetic algorithms. The research is motivated by the need to minimize the number of evaluations during hardware-in-the-loop experimentation, which can carry a significant cost penalty in terms of time or financial expense. The authors build on previous research, where convergence for search methods such as Simulated Annealing and Variable Neighbourhood Search was accelerated by the implementation of an adaptive decision support operator. This methodology was found to be effective in searching noisy data surfaces. Provided that noise is not too significant, genetic algorithms can prove even more effective at guiding experimentation. It will be shown that with the introduction of a controlled migration operator into the GA heuristic, data with a significant signal-to-noise ratio can be searched with significant beneficial effects on the efficiency of hardware-in-the-loop experimentation, without a priori parameter tuning. The method is tested on an engine-in-the-loop experimental example and shown to bring significant performance benefits.
KL-based Control of the Learning Schedule for Surrogate Black-Box Optimization
This paper investigates the control of an ML component within the Covariance
Matrix Adaptation Evolution Strategy (CMA-ES) devoted to black-box
optimization. The known CMA-ES weakness is its sample complexity, the number of
evaluations of the objective function needed to approximate the global optimum.
This weakness is commonly addressed through surrogate optimization, learning an
estimate of the objective function a.k.a. surrogate model, and replacing most
evaluations of the true objective function with the (inexpensive) evaluation of
the surrogate model. This paper presents a principled control of the learning
schedule (when to relearn the surrogate model), based on the Kullback-Leibler
divergence between the current search distribution and the training
distribution of the former surrogate model. The experimental validation of the
proposed approach shows significant performance gains on a comprehensive set
of ill-conditioned benchmark problems, compared to the best state of the art,
including the quasi-Newton high-precision BFGS method.
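Since the CMA-ES search distribution at each generation is a multivariate Gaussian, the KL divergence between two generations has a closed form. A sketch of that computation (how the paper thresholds this quantity to trigger relearning is not shown here):

```python
import numpy as np

def kl_gaussians(m0, S0, m1, S1):
    """KL( N(m0, S0) || N(m1, S1) ) for two multivariate Gaussians,
    e.g. the CMA-ES search distribution now (m0, S0) versus the one
    the surrogate was trained under (m1, S1). A large value signals
    the search has drifted from the surrogate's training data."""
    m0, m1 = np.asarray(m0, float), np.asarray(m1, float)
    S0, S1 = np.asarray(S0, float), np.asarray(S1, float)
    k = m0.size
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0)          # covariance mismatch
                  + diff @ S1_inv @ diff         # mean shift term
                  - k                            # dimension offset
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

The divergence is zero only when the two distributions coincide, so a small threshold keeps the surrogate in use while the search stays near its training region and forces a relearn once the distribution moves away.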
Easy over Hard: A Case Study on Deep Learning
While deep learning is an exciting new technique, the benefits of this method
need to be assessed with respect to its computational cost. This is
particularly important for deep learning since these learners need hours (to
weeks) to train the model. Such long training time limits the ability of (a)~a
researcher to test the stability of their conclusion via repeated runs with
different random seeds; and (b)~other researchers to repeat, improve, or even
refute that original work.
For example, recently, deep learning was used to find which questions in the
Stack Overflow programmer discussion forum can be linked together. That deep
learning system took 14 hours to execute. We show here that applying a very
simple optimizer called differential evolution (DE) to fine-tune an SVM can
achieve similar (and sometimes better) results. The DE approach terminated in
10 minutes, i.e., 84 times faster than the deep learning method.
We offer these results as a cautionary tale to the software analytics
community and suggest that not every new innovation should be applied without
critical analysis. If researchers deploy some new and expensive process, that
work should be baselined against some simpler and faster alternatives.
Comment: 12 pages, 6 figures. Accepted at FSE 2017.
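The "very simple optimizer" in this abstract is differential evolution, a population-based search over a box of parameter bounds. A minimal DE/rand/1/bin sketch is below; in the study the objective would be cross-validated SVM error over its hyperparameters, but here it is any function over `bounds` (all defaults are illustrative):

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, f=0.8,
                           cr=0.9, generations=100, seed=0):
    """Minimal DE/rand/1/bin. `f` is the differential weight,
    `cr` the crossover rate, `bounds` a list of (low, high) pairs."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct partners, excluding the target vector i
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 3, replace=False)
            mutant = np.clip(pop[a] + f * (pop[b] - pop[c]), lo, hi)
            # binomial crossover, forcing at least one mutant coordinate
            mask = rng.random(dim) < cr
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            ft = objective(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = fit.argmin()
    return pop[best], fit[best]
```

The whole loop is a few dozen lines with two parameters, which is the point of the cautionary tale: a baseline this cheap should be tried before committing to hours or weeks of deep learning training.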