
    Design Of Perturbative Hyper-Heuristics For Combinatorial Optimisation

    Combinatorial optimisation seeks to identify optimal solution(s) within a discrete search space. Approaches for solving combinatorial optimisation problems fall into two main sub-classes: exact and approximation algorithms. Exact algorithms guarantee global optimality, but they are infeasible for complex problems due to their high computational overhead. Approximation algorithms provide sub-optimal solution(s) at reasonable computational cost. To explore the solution search space of a combinatorial optimisation problem, an approximation algorithm perturbs existing solutions using one or more perturbative Low-Level Heuristics (LLHs). Relying on a single LLH leads to poor performance when that particular heuristic is ill-suited to the problem; using multiple LLHs is more desirable, as the weaknesses of one heuristic can be compensated by the strengths of another. When multiple LLHs are available, a hyper-heuristic can be integrated to determine the choice of heuristic for a particular problem or situation. A hyper-heuristic automates the selection of LLHs through a high-level heuristic that consists of two key components: a heuristic selection method and a move acceptance method. The capability of a high-level heuristic is highly problem-dependent, as each problem has its own unique landscape properties. The high-level heuristics in existing hyper-heuristics are designed by manually matching different combinations of high-level heuristic components.
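
    The two-component structure described above can be made concrete with a short sketch. The following Python loop is a minimal, hypothetical selection hyper-heuristic: the k-bit-flip LLH set, the OneMax fitness function, and the elitist acceptance rule are illustrative stand-ins, not taken from the thesis.

```python
import random

def onemax(x):
    # Illustrative fitness: number of one-bits (to be maximised).
    return sum(x)

def flip_k(x, k):
    # Perturbative LLH: flip k distinct randomly chosen bits.
    y = list(x)
    for i in random.sample(range(len(y)), k):
        y[i] = 1 - y[i]
    return y

def selection_hyper_heuristic(n=50, budget=5000):
    llhs = [lambda s, k=k: flip_k(s, k) for k in (1, 2, 3)]  # LLH set
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(budget):
        h = random.choice(llhs)        # heuristic selection method
        y = h(x)
        if onemax(y) >= onemax(x):     # move acceptance method (elitist)
            x = y
    return x

if __name__ == "__main__":
    print(onemax(selection_hyper_heuristic()))
```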

    Towards a Theory-Guided Benchmarking Suite for Discrete Black-Box Optimization Heuristics: Profiling (1+λ) EA Variants on OneMax and LeadingOnes

    Theoretical and empirical research on evolutionary computation methods complement each other by providing two fundamentally different approaches towards a better understanding of black-box optimization heuristics. In discrete optimization, both streams developed rather independently of each other, but we observe today an increasing interest in reconciling these two sub-branches. In continuous optimization, the COCO (COmparing Continuous Optimisers) benchmarking suite has established itself as an important platform that theoreticians and practitioners use to exchange research ideas and questions. No widely accepted equivalent exists in the research domain of discrete black-box optimization. Marking an important step towards filling this gap, we adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics. In this documentation we demonstrate how this test bed can be used to profile the performance of evolutionary algorithms. More concretely, we study the optimization behavior of several (1+λ) EA variants on the two benchmark problems OneMax and LeadingOnes. This comparison motivates a refined analysis for the optimization time of the (1+λ) EA on LeadingOnes.
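
    As a concrete reference for the algorithms being profiled, here is a minimal Python sketch of a (1+λ) EA with standard bit mutation on the OneMax and LeadingOnes benchmarks. The benchmark definitions follow the usual pseudo-Boolean conventions; the parameter values and stopping rule are illustrative, and this is not the paper's benchmarking pipeline.

```python
import random

def onemax(x):
    return sum(x)                              # number of one-bits

def leadingones(x):
    lo = 0                                     # length of the prefix of ones
    for bit in x:
        if bit == 0:
            break
        lo += 1
    return lo

def one_plus_lambda_ea(f, n=100, lam=4, max_evals=1_000_000):
    # (1+lambda) EA with standard bit mutation: each offspring flips every
    # bit of the parent independently with probability 1/n.
    x = [random.randint(0, 1) for _ in range(n)]
    evals = 1
    while f(x) < n and evals + lam <= max_evals:
        offspring = [
            [1 - b if random.random() < 1.0 / n else b for b in x]
            for _ in range(lam)
        ]
        evals += lam
        best = max(offspring, key=f)
        if f(best) >= f(x):                    # elitist replacement
            x = best
    return evals                               # evaluations used

if __name__ == "__main__":
    print("OneMax:", one_plus_lambda_ea(onemax))
    print("LeadingOnes:", one_plus_lambda_ea(leadingones))
```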

    Automated construction of evolutionary algorithm operators for the bi-objective water distribution network design problem using a genetic programming based hyper-heuristic approach

    The water distribution network (WDN) design problem is primarily concerned with finding the optimal pipe sizes that provide the best service for minimal cost; a problem of continuing importance both in the UK and internationally. Consequently, many methods for solving this problem have been proposed in the literature, often using tailored, hand-crafted approaches to more effectively optimise this difficult problem. In this paper we investigate a novel hyper-heuristic approach that uses genetic programming (GP) to evolve mutation operators for evolutionary algorithms (EAs) which are specialised for a bi-objective formulation of the WDN design problem (minimising WDN cost and head deficit). Once generated, the evolved operators can then be used ad infinitum in any EA on any WDN to improve performance. A novel multi-objective method is demonstrated that evolves a set of mutation operators for one training WDN. An experiment is conducted in which 83 operators are evolved, and the best 10 are examined in detail by applying them to three test networks of varying complexity. One operator, GP1, is shown to be especially effective and incorporates interesting domain-specific learning (pipe smoothing), while GP5 demonstrates the ability of the method to find known, well-used operators such as a Gaussian. © IWA Publishing 2014. Funding: Engineering and Physical Sciences Research Council (EPSRC); Mouchel Ltd.
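
    The evolved operators themselves are GP trees and are not reproduced in the abstract. Purely as a rough illustration of the "pipe smoothing" idea attributed to GP1, here is a hypothetical hand-written mutation operator over a vector of discrete pipe-diameter choices; the diameter table, the smoothing rule, and all names are assumptions, not the evolved operator.

```python
import random

# Hypothetical table of discrete pipe diameters (mm); solutions are vectors
# of indices into this table, one per pipe.
DIAMETERS = [100, 150, 200, 250, 300, 350, 400, 450, 500]

def smoothing_mutation(solution, rate=0.1):
    # Hypothetical 'pipe smoothing' style mutation (not the evolved GP1):
    # nudge a mutated pipe's diameter one size towards the mean diameter of
    # its neighbours in the vector, so sizes vary gradually along the network.
    child = list(solution)
    for i in range(len(child)):
        if random.random() >= rate:
            continue
        left = child[i - 1] if i > 0 else child[i]
        right = child[i + 1] if i < len(child) - 1 else child[i]
        target = (DIAMETERS[left] + DIAMETERS[right]) / 2.0
        if DIAMETERS[child[i]] < target and child[i] < len(DIAMETERS) - 1:
            child[i] += 1
        elif DIAMETERS[child[i]] > target and child[i] > 0:
            child[i] -= 1
    return child

if __name__ == "__main__":
    parent = [random.randrange(len(DIAMETERS)) for _ in range(12)]
    print(parent)
    print(smoothing_mutation(parent, rate=0.5))
```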

    On the Runtime Analysis of Selection Hyper-heuristics for Pseudo-Boolean Optimisation

    Rather than manually deciding on a suitable algorithm configuration for a given optimisation problem, hyper-heuristics are high-level search algorithms which evolve the heuristic to be applied. While there are numerous reported successful applications of hyper-heuristics to combinatorial optimisation problems, it is not yet fully understood how well they perform and on which problem classes they are effective. Selection hyper-heuristics (SHHs) employ smart methodologies to select which low-level heuristic from a pre-defined set to apply in the next decision step. This thesis extends and improves upon the existing foundational understanding of the behaviour and performance of SHHs, providing insights into how and when they can be successfully applied by analysing the time complexity of SHHs on a variety of unimodal and multimodal problem classes. Through a rigorous theoretical analysis, we show that while four commonly applied simple SHHs from the literature do not learn to select the most promising low-level heuristics, generalising them so that the chosen heuristic is applied over a longer period of time allows for vastly improved performance. Furthermore, we prove that extending the size of the set of low-level heuristics can improve the performance of the generalised SHHs, outperforming SHHs with smaller sets of low-level heuristics. We show that allowing the SHH to automatically adapt the length of the learning period may further improve performance and outperform non-adaptive variants. SHHs selecting between two move-acceptance operators are also analysed on two classes of multimodal benchmark functions. An analysis of the performance of simple SHHs on these functions provides insights into the effectiveness of the presented methodologies for escaping from local optima.
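
    For concreteness, a compact Python sketch of three of the simple selection mechanisms studied in this line of work (Simple Random, Permutation, and Random Gradient) is given below; Greedy, which applies every low-level heuristic and keeps the best offspring, is omitted for brevity. The LLH set, benchmark function, and budget are illustrative assumptions.

```python
import random

def leadingones(x):
    # Length of the longest prefix of ones.
    lo = 0
    for bit in x:
        if bit == 0:
            break
        lo += 1
    return lo

def flip_k(x, k):
    # Low-level heuristic: flip k distinct random bits.
    y = list(x)
    for i in random.sample(range(len(y)), k):
        y[i] = 1 - y[i]
    return y

def simple_random(llhs, state):
    # Simple Random: draw a heuristic uniformly at random every step.
    return random.choice(llhs)

def permutation(llhs, state):
    # Permutation: cycle through a fixed random ordering of the LLHs.
    if "perm" not in state:
        state["perm"], state["idx"] = random.sample(llhs, len(llhs)), 0
    h = state["perm"][state["idx"]]
    state["idx"] = (state["idx"] + 1) % len(llhs)
    return h

def random_gradient(llhs, state):
    # Random Gradient: keep the current heuristic while it succeeds;
    # after a failure, draw a new one uniformly at random.
    if not state.get("last_success") or state.get("current") is None:
        state["current"] = random.choice(llhs)
    return state["current"]

def run_shh(select, f, n=50, steps=2000):
    x = [random.randint(0, 1) for _ in range(n)]
    llhs = [lambda s, k=k: flip_k(s, k) for k in (1, 2)]
    state = {}
    for _ in range(steps):
        y = select(llhs, state)(x)
        state["last_success"] = f(y) > f(x)   # strict improvement counts
        if f(y) >= f(x):                      # elitist acceptance
            x = y
    return f(x)

if __name__ == "__main__":
    for mech in (simple_random, permutation, random_gradient):
        print(mech.__name__, run_shh(mech, leadingones))
```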

    The SOS Platform: Designing, Tuning and Statistically Benchmarking Optimisation Algorithms

    We present Stochastic Optimisation Software (SOS), a Java platform facilitating the algorithmic design process and the evaluation of metaheuristic optimisation algorithms. SOS reduces the burden of coding miscellaneous methods for dealing with several bothersome and time-demanding tasks such as parameter tuning, implementation of comparison algorithms and testbed problems, collecting and processing data to display results, measuring algorithmic overhead, etc. SOS provides numerous off-the-shelf methods including: (1) customised implementations of statistical tests, such as the Wilcoxon rank-sum test and the Holm–Bonferroni procedure, for comparing the performances of optimisation algorithms and automatically generating result tables in PDF and LaTeX formats; (2) the implementation of an original advanced statistical routine for accurately comparing pairs of stochastic optimisation algorithms; (3) the implementation of a novel testbed suite for continuous optimisation, derived from the IEEE CEC 2014 benchmark, allowing for controlled activation of the rotation on each testbed function. Moreover, we briefly comment on the current state of the literature in stochastic optimisation and highlight similarities shared by modern metaheuristics inspired by nature. We argue that the vast majority of these algorithms are simply a reformulation of the same methods and that metaheuristics for optimisation should simply be treated as stochastic processes, with less emphasis on the inspiring metaphor behind them.
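
    SOS itself is a Java platform, so the following Python sketch (assuming SciPy is available) only illustrates the kind of routine described in point (1): comparing a reference optimiser against several competitors with the Wilcoxon rank-sum test and a hand-rolled Holm–Bonferroni step-down correction. The run data and algorithm names are made up.

```python
from scipy.stats import ranksums   # assumes SciPy is installed

def holm_bonferroni(pvalues, alpha=0.05):
    # Holm-Bonferroni step-down procedure: sort p-values ascending and test
    # the i-th smallest against alpha / (m - i); stop at the first failure.
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

# Made-up final fitness values over repeated runs (minimisation).
reference = [0.12, 0.10, 0.15, 0.11, 0.13]
competitors = {
    "algo_A": [0.30, 0.28, 0.33, 0.29, 0.31],
    "algo_B": [0.12, 0.14, 0.11, 0.13, 0.12],
}

pvals = [ranksums(reference, runs).pvalue for runs in competitors.values()]
for name, p, rej in zip(competitors, pvals, holm_bonferroni(pvals)):
    print(f"{name}: p = {p:.4f}, significant after Holm correction: {rej}")
```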

    Simple hyper-heuristics control the neighbourhood size of randomised local search optimally for LeadingOnes

    Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this paper we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the 'simple' Random Gradient HH so success can be measured over a fixed period of time τ, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to k low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the k heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to n = 10^8) and shed some light on the best choices for the parameter τ in various situations.
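
    A minimal Python sketch of the Generalised Random Gradient mechanism as described above: a neighbourhood size k is drawn at random, and the corresponding Randomised Local Search heuristic is retained as long as it produces an improvement within a period of τ steps; otherwise a new k is drawn. The choice τ = n and the other driver details are illustrative assumptions, not the tuned values from the paper.

```python
import random

def leadingones(x):
    lo = 0
    for bit in x:
        if bit == 0:
            break
        lo += 1
    return lo

def rls_k(x, k):
    # Randomised Local Search flipping exactly k distinct bits.
    y = list(x)
    for i in random.sample(range(len(y)), k):
        y[i] = 1 - y[i]
    return y

def generalised_random_gradient(n=100, ks=(1, 2, 3, 4), tau=None):
    tau = tau if tau is not None else n   # illustrative period length
    x = [random.randint(0, 1) for _ in range(n)]
    k = random.choice(ks)                 # currently selected heuristic
    fails, evals = 0, 0
    while leadingones(x) < n:
        y = rls_k(x, k)
        evals += 1
        if leadingones(y) > leadingones(x):
            x = y
            fails = 0                     # success within the period: keep k
        else:
            fails += 1
            if fails >= tau:              # period elapsed with no success:
                k = random.choice(ks)     # draw a new neighbourhood size
                fails = 0
    return evals

if __name__ == "__main__":
    print(generalised_random_gradient())
```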

    When move acceptance selection hyper-heuristics outperform Metropolis and elitist evolutionary algorithms and when not

    Selection hyper-heuristics (HHs) are automated algorithm selection methodologies that choose between different heuristics during the optimisation process. Recently, selection HHs choosing between a collection of elitist randomised local search heuristics with different neighbourhood sizes have been shown to optimise standard unimodal benchmark functions from evolutionary computation in the optimal expected runtime achievable with the available low-level heuristics. In this paper, we extend our understanding of the performance of HHs to the domain of multimodal optimisation by considering a Move Acceptance HH (MAHH) from the literature that can switch between elitist and non-elitist heuristics during the run. In essence, MAHH is a non-elitist search heuristic that differs from other search heuristics in the source of non-elitism. We first identify the range of parameters that allow MAHH to hillclimb efficiently and prove that it can optimise the standard hillclimbing benchmark function OneMax in the best expected asymptotic time achievable by unbiased mutation-based randomised search heuristics. Afterwards, we use standard multimodal benchmark functions to highlight function characteristics where MAHH outperforms elitist evolutionary algorithms and the well-known Metropolis non-elitist algorithm by quickly escaping local optima, and ones where it does not. Since MAHH is essentially a non-elitist random local search heuristic, the paper is of independent interest to researchers in the fields of artificial intelligence and randomised search heuristics.
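
    In the spirit of the description above, here is a minimal Python sketch of a Move Acceptance HH: every step flips one random bit, and the hyper-heuristic chooses between the non-elitist "all moves" acceptance operator (with some small probability p) and the elitist "only improving" operator. The default p and the OneMax driver are illustrative assumptions.

```python
import random

def onemax(x):
    return sum(x)

def move_acceptance_hh(f=onemax, n=100, p=None, max_steps=1_000_000):
    # p is the probability of selecting the non-elitist 'all moves' acceptance
    # operator; it must be small for efficient hillclimbing, and 1/(10n) is an
    # illustrative choice, not a value taken from the paper.
    p = p if p is not None else 1.0 / (10 * n)
    x = [random.randint(0, 1) for _ in range(n)]
    for step in range(max_steps):
        if f(x) == n:
            return step                # steps until the optimum was reached
        y = list(x)
        i = random.randrange(n)
        y[i] = 1 - y[i]                # local move: flip one random bit
        if random.random() < p:
            x = y                      # AM: accept the move unconditionally
        elif f(y) > f(x):
            x = y                      # OI: accept only strict improvements
    return max_steps

if __name__ == "__main__":
    print(move_acceptance_hh())
```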