
    Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates

    We analyze the performance of the 2-rate (1+λ) Evolutionary Algorithm (EA) with self-adjusting mutation rate control, its 3-rate counterpart, and a (1+λ) EA variant using multiplicative update rules on the OneMax problem. We compare their efficiency for offspring population sizes up to λ = 3,200 and problem sizes up to n = 100,000. Our empirical results show that the ranking of the algorithms is very consistent across all tested dimensions but depends strongly on the population size. While for small values of λ the 2-rate EA performs best, the multiplicative updates become superior starting at some threshold value of λ between 50 and 100. Interestingly, for population sizes around 50, the (1+λ) EA with static mutation rates performs on par with the best of the self-adjusting algorithms. We also consider how the lower bound p_min for the mutation rate influences the efficiency of the algorithms. We observe that for the 2-rate EA and the EA with multiplicative update rules, the more generous bound p_min = 1/n² gives better results than p_min = 1/n when λ is small. For both algorithms the situation reverses for large λ.
    Comment: To appear at the Genetic and Evolutionary Computation Conference (GECCO'19). v2: minor language revision.
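The 2-rate mechanism studied above can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal Python rendering of the commonly cited scheme (half the offspring mutate at per-bit rate r/(2n), half at 2r/n, and the winning rate is kept with probability 1/2), with function names, the rate bounds, and the generation budget chosen here purely for illustration:

```python
import random

def onemax(x):
    return sum(x)

def two_rate_ea(n, lam, max_gens=10_000, p_min=None):
    """Sketch of a 2-rate (1+lambda) EA with self-adjusting mutation rate.
    p_min is the lower bound on the per-bit mutation probability
    (e.g. 1/n or 1/n**2, the two bounds compared in the abstract)."""
    if p_min is None:
        p_min = 1.0 / n
    parent = [random.randint(0, 1) for _ in range(n)]
    fit = onemax(parent)
    r = 2.0  # current mutation strength; per-bit rate is roughly r / n
    for gen in range(max_gens):
        if fit == n:
            return gen  # optimum reached
        best_child, best_fit, best_rate = None, -1, r
        for i in range(lam):
            # half the offspring use rate r/2, the other half 2r
            ri = r / 2 if i < lam // 2 else 2 * r
            p = max(p_min, min(0.5, ri / n))
            child = [1 - b if random.random() < p else b for b in parent]
            f = onemax(child)
            if f > best_fit:
                best_child, best_fit, best_rate = child, f, ri
        if best_fit >= fit:  # (1+lambda) elitist selection
            parent, fit = best_child, best_fit
        # adapt: with prob. 1/2 keep the winning rate, else pick one at random
        r = best_rate if random.random() < 0.5 else random.choice([r / 2, 2 * r])
        r = max(2.0, min(n / 4, r))  # keep r within [2, n/4]
    return max_gens
```

On a small instance (say n = 30, λ = 10) this sketch typically finds the optimum within a few hundred generations; the thesis of the abstract is about how its runtime compares with the 3-rate and multiplicative variants as λ grows.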

    From Parameter Tuning to Dynamic Heuristic Selection

    Balancing exploration and exploitation plays a crucial role in solving combinatorial optimization problems. This balance is reached by two general techniques: using an appropriate problem solver and setting its parameters properly. Both problems have been widely studied in the past, and research continues to this day. The latest studies in the field of automated machine learning propose merging both problems, solving them at design time and later strengthening the results at runtime. To the best of our knowledge, a generalized approach for solving the parameter setting problem in heuristic solvers has not yet been proposed, and consequently the concept of merging heuristic selection and parameter control has not been introduced. In this thesis, we propose an approach to generic parameter control in meta-heuristics by means of reinforcement learning (RL). Going a step further, we suggest a technique for merging the heuristic selection and parameter control problems and solving them at runtime using an RL-based hyper-heuristic. The evaluation of the proposed parameter control technique on the symmetric traveling salesman problem (TSP) demonstrated its applicability: it reaches the performance of the underlying meta-heuristic tuned online and used in isolation.
Our approach provides results on par with the best underlying heuristics with tuned parameters.
Contents:
1 Introduction: 1.1 Motivation; 1.2 Research Objective; 1.3 Solution Overview
2 Background and Related Work Analysis: 2.1 Optimization Problems and their Solvers; 2.2 Heuristic Solvers for Optimization Problems; 2.3 Setting Algorithm Parameters; 2.4 Combined Algorithm Selection and Hyper-Parameter Tuning Problem; 2.5 Conclusion on Background and Related Work Analysis
3 Online Selection Hyper-Heuristic with Generic Parameter Control: 3.1 Combined Parameter Control and Algorithm Selection Problem; 3.2 Search Space Structure; 3.3 Parameter Prediction Process; 3.4 Low-Level Heuristics; 3.5 Conclusion of Concept
4 Implementation Details: 4.2 Search Space; 4.3 Prediction Process; 4.4 Low-Level Heuristics; 4.5 Conclusion
5 Evaluation: 5.1 Optimization Problem; 5.2 Environment Setup; 5.3 Meta-heuristics Tuning; 5.4 Concept Evaluation; 5.5 Analysis of HH-PC Settings; 5.6 Conclusion
6 Conclusion
7 Future Work: 7.1 Prediction Process; 7.2 Search Space; 7.3 Evaluations and Benchmarks
Bibliography
A Evaluation Results: A.1 Results in Figures; A.2 Results in Numbers
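The abstract does not specify which RL method drives the parameter control, so as a hypothetical illustration only: an epsilon-greedy bandit is one simple way runtime parameter control of this kind can work, with the reward taken to be the fitness improvement produced by the chosen parameter value. The class and parameter names below are invented for this sketch and do not come from the thesis:

```python
import random

class EpsilonGreedyController:
    """Hypothetical sketch of RL-style parameter control: an epsilon-greedy
    bandit picks one of several candidate parameter values each iteration
    and is rewarded with the fitness improvement that value produced."""

    def __init__(self, values, epsilon=0.1):
        self.values = values
        self.epsilon = epsilon
        self.q = {v: 0.0 for v in values}  # running action-value estimates
        self.n = {v: 0 for v in values}    # selection counts

    def select(self):
        # explore with probability epsilon, otherwise exploit the best estimate
        if random.random() < self.epsilon:
            return random.choice(self.values)
        return max(self.values, key=lambda v: self.q[v])

    def update(self, value, reward):
        # incremental mean update of the chosen action's value estimate
        self.n[value] += 1
        self.q[value] += (reward - self.q[value]) / self.n[value]
```

In use, a meta-heuristic would call `select()` each iteration to obtain, say, a mutation rate, then call `update(rate, improvement)` with the observed fitness gain; a hyper-heuristic can apply the same loop one level up, with the action set being the low-level heuristics themselves.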

    Enhanced Harris's Hawk algorithm for continuous multi-objective optimization problems

    Multi-objective swarm intelligence-based (MOSI-based) metaheuristics were proposed to solve multi-objective optimization problems (MOPs) with conflicting objectives. The Harris's hawk multi-objective optimizer (HHMO) algorithm is a MOSI-based algorithm developed around the reference point approach: the reference point is specified by the decision maker to guide the search toward a particular region of the true Pareto front. However, the HHMO algorithm produces a poor approximation to the Pareto front because of a lack of information sharing in its population update strategy, the equal division of its convergence parameter, and its randomly generated initial population. A two-step enhanced non-dominated sorting HHMO (2S-ENDSHHMO) algorithm has been proposed to solve this problem. The algorithm includes (i) a population update strategy that improves the movement of hawks in the search space, (ii) a parameter adjusting strategy to control the transition between exploration and exploitation, and (iii) a population generating method for producing the initial candidate solutions. The population update strategy calculates new positions of hawks based on the flush-and-ambush technique of Harris's hawks and selects the best hawks using the non-dominated sorting approach. The adjustment strategy enables the parameter to change adaptively based on the state of the search space. The initial population is produced by generating quasi-random numbers using the R-sequence, followed by applying the partial opposition-based learning concept to improve the diversity of the worst half of the population of hawks. The performance of the 2S-ENDSHHMO algorithm was evaluated on 12 MOPs and three engineering MOPs, and the obtained results were compared with those of eight state-of-the-art multi-objective optimization algorithms.
The 2S-ENDSHHMO algorithm was able to generate non-dominated solutions with greater convergence and diversity in solving most MOPs and showed a great ability to escape local optima, indicating its capability to explore the search space. The 2S-ENDSHHMO algorithm can be used to improve the search process of other MOSI-based algorithms and can be applied to MOPs in applications such as structural design and signal processing.
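Non-dominated sorting, the selection backbone named in the abstract, can be sketched in a few lines. This is a naive O(n²)-per-front version for minimisation, written here for illustration rather than taken from the thesis:

```python
def dominates(a, b):
    """a dominates b (minimisation): a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split a list of objective vectors into successive Pareto fronts:
    front 0 holds the non-dominated points, front 1 those dominated
    only by front 0, and so on."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

For example, `non_dominated_sort([(1, 2), (2, 1), (3, 3)])` places the two mutually non-dominated points `(1, 2)` and `(2, 1)` in the first front and `(3, 3)` in the second. Algorithms in the NSGA family use exactly this ranking (plus a diversity measure) to decide which candidates survive.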

    Automated Machine Learning for Positive-Unlabelled Learning

    Positive-Unlabelled (PU) learning is a field of machine learning that involves learning classifiers from data consisting of positive-class instances and unlabelled instances, i.e. instances that may be either positive or negative but whose label is unknown. PU learning differs from standard binary classification due to the absence of negative instances. This difference is non-trivial and requires different classification frameworks and evaluation metrics. This thesis looks to address gaps in the PU learning literature and to make PU learning more accessible to non-experts by introducing Automated Machine Learning (Auto-ML) systems specific to PU learning. Three such systems have been developed: GA-Auto-PU, a Genetic Algorithm (GA)-based Auto-ML system; BO-Auto-PU, a Bayesian Optimisation (BO)-based Auto-ML system; and EBO-Auto-PU, an Evolutionary/Bayesian Optimisation (EBO) hybrid-based Auto-ML system. These three Auto-ML systems are the three primary contributions of this work. EBO, the optimiser component of EBO-Auto-PU, is itself a novel optimisation method developed in this work that has proved effective for the task of Auto-ML and represents another contribution. EBO was developed to act as a trade-off between GA, which achieved high predictive performance but at high computational expense, and BO, which, when utilised by the Auto-PU system, did not perform as well as the GA-based system but executed much faster. EBO achieved this aim, providing high predictive performance with a computational runtime much faster than the GA-based system and not substantially slower than the BO-based system. The proposed Auto-ML systems for PU learning were evaluated on three versions of 40 datasets, i.e. on 120 learning tasks in total. The 40 datasets consist of 20 real-world biomedical datasets and 20 synthetic datasets. The main evaluation measure was the F-measure, a popular measure in PU learning.
Based on the F-measure results, the three proposed systems generally outperformed two baseline PU learning methods, usually with statistically significant results. Among the three proposed systems, there was in general no statistically significant difference between their results, although a version of the EBO-Auto-PU system performed slightly better than the other systems overall in terms of F-measure. The two other main contributions of this work relate specifically to the field of PU learning. First, we present and utilise a robust evaluation approach: evaluating PU learning classifiers is non-trivial and little guidance has been provided in the literature on how to do so, so this work lays out a clear evaluation framework and uses it to evaluate the proposed systems. Second, when evaluating the proposed systems, we analyse the most frequently selected components of the optimised PU learning algorithms, that is, the components that constitute the PU learning algorithms produced by the optimisers (for example, the choice of classifiers used in the algorithm, the number of iterations, etc.). This analysis is used to provide guidance on the construction of PU learning algorithms for specific dataset characteristics.
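Evaluating PU classifiers without negative labels is indeed non-trivial: precision cannot be computed directly when the test set has no known negatives. The abstract does not say which estimator the thesis uses, but one widely used criterion, due to Lee and Liu, estimates a proxy for the F-measure as recall² / Pr(ŷ = 1), which needs only the labelled positives and the classifier's overall positive-prediction rate. A sketch of that criterion, not taken from the thesis:

```python
def pu_f_measure_proxy(preds_on_positives, preds_on_all):
    """Lee & Liu's PU evaluation criterion, recall**2 / Pr(y_hat = 1).
    It is proportional to precision * recall and so ranks classifiers
    similarly to the F-measure, without needing negative labels.

    preds_on_positives: 0/1 predictions on the known positive instances.
    preds_on_all: 0/1 predictions on the whole (mostly unlabelled) set.
    """
    recall = sum(preds_on_positives) / len(preds_on_positives)
    pr_positive = sum(preds_on_all) / len(preds_on_all)
    if pr_positive == 0:
        return 0.0  # classifier predicts nothing positive
    return recall ** 2 / pr_positive
```

For instance, a classifier that recovers every labelled positive while flagging half of all instances as positive scores 1² / 0.5 = 2.0; a classifier with the same recall but a smaller positive-prediction rate (hence better implied precision) scores higher.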