Algorithm Portfolios for Noisy Optimization
Noisy optimization is the optimization of objective functions corrupted by
noise. A portfolio of solvers is a set of solvers equipped with an algorithm
selection tool for distributing the computational power among them. Portfolios
are widely and successfully used in combinatorial optimization. In this work,
we study portfolios of noisy optimization solvers. We obtain mathematically
proved performance (in the sense that the portfolio performs nearly as well as
the best of its solvers) by an ad hoc portfolio algorithm dedicated to noisy
optimization. A somewhat surprising result is that it is better to compare
solvers with some lag, i.e., to base the recommendation of the best solver on
performance observed earlier in the run. An additional finding is a principled
method for distributing the computational power among the solvers in the
portfolio.
Comment: in Annals of Mathematics and Artificial Intelligence, Springer Verlag, 201
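The lag-based comparison described above can be sketched in a few lines. The solver interface, the even budget split, and the function names below are illustrative assumptions, not the paper's actual algorithm:

```python
def lag_portfolio(solvers, budget, lag=3):
    """Illustrative sketch: run each solver in a portfolio for `budget`
    steps (here split evenly; the paper proposes a principled non-uniform
    split) and recommend based on values observed `lag` steps before the
    end of the run, since the most recent noisy values are less reliable.

    `solvers` maps a name to a hypothetical callable returning a noisy
    objective value for that solver's current recommendation at step t.
    Requires lag < budget.
    """
    histories = {name: [] for name in solvers}
    for step in range(budget):
        for name, solver in solvers.items():
            histories[name].append(solver(step))
    # Compare with a lag: index -1 - lag instead of the latest value.
    return min(histories, key=lambda n: histories[n][-1 - lag])
```

The key point is the `-1 - lag` index: the recommendation is judged on slightly stale values, which the paper finds to be more robust under noise.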
Learning dynamic algorithm portfolios
Algorithm selection can be performed using a model of runtime distribution, learned during a preliminary training phase. There is a trade-off between the performance of model-based algorithm selection, and the cost of learning the model. In this paper, we treat this trade-off in the context of bandit problems. We propose a fully dynamic and online algorithm selection technique, with no separate training phase: all candidate algorithms are run in parallel, while a model incrementally learns their runtime distributions. A redundant set of time allocators uses the partially trained model to propose machine time shares for the algorithms. A bandit problem solver mixes the model-based shares with a uniform share, gradually increasing the impact of the best time allocators as the model improves. We present experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark; and with a set of solvers for the Auction Winner Determination problem.
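The mixing step in the abstract above, blending model-based time shares with a uniform share, can be sketched as follows. The function name and the scalar `exploration` weight are assumptions for illustration; in the paper a bandit solver adjusts this balance as the runtime model improves:

```python
def mixed_shares(model_shares, exploration):
    """Blend model-proposed machine-time shares with a uniform share.

    `model_shares` is a probability vector over algorithms proposed by a
    time allocator; `exploration` in [0, 1] is the weight on the uniform
    share. Early in a run, exploration is high (trust the model little);
    it decreases as the incrementally learned runtime model improves.
    """
    k = len(model_shares)
    return [exploration / k + (1.0 - exploration) * s for s in model_shares]
```

Because both components are probability vectors, the mixture remains one, so it can be used directly to allocate machine time across the parallel runs.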
LLAMA: Leveraging Learning to Automatically Manage Algorithms
Algorithm portfolio and selection approaches have achieved remarkable
improvements over single solvers. However, the implementation of such systems
is often highly customised and specific to the problem domain. This makes it
difficult for researchers to explore different techniques for their specific
problems. We present LLAMA, a modular and extensible toolkit implemented as an
R package that facilitates the exploration of a range of different portfolio
techniques on any problem domain. It implements the algorithm selection
approaches most commonly used in the literature and leverages the extensive
library of machine learning algorithms and techniques in R. We describe the
current capabilities and limitations of the toolkit and illustrate its usage on
a set of example SAT problems.
Population-based Algorithm Portfolios with automated constituent algorithms selection
Population-based Algorithm Portfolios (PAP) is an appealing framework for integrating different Evolutionary Algorithms (EAs) to solve challenging numerical optimization problems. In particular, PAP has shown significant advantages over single EAs when a number of problems need to be solved simultaneously. Previous investigation of PAP reveals that choosing appropriate constituent algorithms is crucial to its success. However, no method has been developed for this purpose. In this paper, an extended version of PAP, namely PAP based on an Estimated Performance Matrix (EPM-PAP), is proposed. EPM-PAP is equipped with a novel constituent algorithm selection module, which is based on the EPM of each candidate EA. Empirical studies demonstrate that the EPM-based selection method can successfully identify appropriate constituent EAs, and thus EPM-PAP outperformed all single EAs considered in this work.
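The idea of picking constituent algorithms from an estimated performance matrix can be illustrated with a simple greedy heuristic. This sketch is not the EPM-PAP selection module itself; the matrix layout, the greedy criterion, and the function name are all assumptions:

```python
def select_constituents(epm, k):
    """Greedy illustration of portfolio construction from an estimated
    performance matrix `epm[alg][prob]` (lower is better): repeatedly add
    the algorithm that most improves the portfolio's per-problem best
    estimated performance, until k constituents are chosen.
    """
    n_prob = len(next(iter(epm.values())))
    chosen = []
    best = [float("inf")] * n_prob  # portfolio's best value per problem
    for _ in range(k):
        def gain(alg):
            # Total improvement over the current per-problem bests.
            return sum(best[p] - min(best[p], epm[alg][p])
                       for p in range(n_prob))
        alg = max((a for a in epm if a not in chosen), key=gain)
        chosen.append(alg)
        best = [min(best[p], epm[alg][p]) for p in range(n_prob)]
    return chosen
```

The heuristic rewards complementarity: an algorithm that is mediocre everywhere loses to two specialists that together cover the problem set, which matches the abstract's point that constituent choice, not individual strength, drives PAP's success.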
Neural Networks for Predicting Algorithm Runtime Distributions
Many state-of-the-art algorithms for solving hard combinatorial problems in
artificial intelligence (AI) include elements of stochasticity that lead to
high variations in runtime, even for a fixed problem instance. Knowledge about
the resulting runtime distributions (RTDs) of algorithms on given problem
instances can be exploited in various meta-algorithmic procedures, such as
algorithm selection, portfolios, and randomized restarts. Previous work has
shown that machine learning can be used to individually predict mean, median
and variance of RTDs. To establish a new state-of-the-art in predicting RTDs,
we demonstrate that the parameters of an RTD should be learned jointly and that
neural networks can do this well by directly optimizing the likelihood of an
RTD given runtime observations. In an empirical study involving five algorithms
for SAT solving and AI planning, we show that neural networks predict the true
RTDs of unseen instances better than previous methods, and can even do so when
only a few runtime observations are available per training instance.
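The loss the abstract describes, the likelihood of an RTD given runtime observations, can be made concrete for one common parametric choice. Assuming a lognormal RTD purely for illustration (the paper's distribution families may differ), a network predicting parameters (mu, sigma) per instance would minimize:

```python
import math

def rtd_nll(mu, sigma, runtimes):
    """Negative log-likelihood of observed runtimes under a lognormal
    RTD with parameters (mu, sigma). A network that outputs (mu, sigma)
    per problem instance can be trained by minimizing this quantity,
    learning both parameters jointly rather than predicting mean and
    variance separately. Illustrative sketch only.
    """
    nll = 0.0
    for t in runtimes:
        z = (math.log(t) - mu) / sigma
        # Lognormal density: exp(-z^2/2) / (sigma * t * sqrt(2*pi)).
        nll += 0.5 * z * z + math.log(sigma * t * math.sqrt(2.0 * math.pi))
    return nll
```

Since the loss is a sum over observations, it remains well defined with only a few runtimes per training instance, which is the regime the abstract highlights.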