Bio-Inspired Computing For Complex And Dynamic Constrained Problems
Bio-inspired algorithms are general-purpose optimisation methods that can find high-quality solutions for complex problems with minimal knowledge of the search space. Bio-inspired algorithms (the design of which is inspired by nature) can easily adapt to changing environments. In this thesis, we contribute to the theoretical and empirical understanding of bio-inspired algorithms, such as evolutionary algorithms and ant colony optimisation. We address complex problems as well as problems with dynamically changing constraints. Firstly, we review the most recent achievements in the theoretical analysis of dynamic optimisation via bio-inspired algorithms. We then continue our investigations in two major areas: static and dynamic combinatorial problems. To tackle static problems, we study evolutionary algorithms that are enhanced by a knowledge-based mutation approach for solving single- and multi-objective minimum spanning tree (MST) problems. Our results show that a properly designed biased mutation can significantly improve the performance of evolutionary algorithms. Afterwards, we analyse the ability of single- and multi-objective algorithms to solve the packing while travelling (PWT) problem. This NP-hard problem is chosen to represent real-world multi-component problems. We outline the limitations of randomised local search in solving PWT and prove the advantage of using evolutionary algorithms. Our dynamic investigations begin with an empirical analysis of the ability of simple and advanced evolutionary algorithms to optimise the dynamic knapsack (KP) problem. We show that while optimising a population of solutions can speed up an algorithm's ability to find optimal solutions after a dynamic change, it has the exact opposite effect in environments with high-frequency changes. Finally, we investigate the dynamic version of a more general problem known as the subset selection problem. We prove the inability of the adaptive greedy approach to maintain quality solutions in dynamic environments and illustrate the advantage of using evolutionary algorithms both theoretically and practically.

Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202
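To make the idea of knowledge-based (biased) mutation concrete, the following Python sketch shows one plausible instance for the MST problem: an edge-exchange search in which the inserted edge is sampled with probability inversely proportional to its weight, so cheap edges are tried more often than expensive ones. The operator, the 1/weight bias, and all names and parameters are illustrative assumptions; the thesis may use a different biasing scheme.

    import random

    def random_spanning_tree(n, edges):
        # Arbitrary initial spanning tree of the complete graph via union-find.
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        tree, pool = set(), edges[:]
        random.shuffle(pool)
        for (u, v) in pool:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                tree.add((u, v))
        return tree

    def tree_path(tree, start, goal):
        # Edges on the unique tree path between start and goal, normalised as (min, max).
        adj = {}
        for (u, v) in tree:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        stack, pred = [start], {start: None}
        while stack:
            x = stack.pop()
            for y in adj.get(x, []):
                if y not in pred:
                    pred[y] = x
                    stack.append(y)
        path, x = [], goal
        while pred[x] is not None:
            path.append((min(x, pred[x]), max(x, pred[x])))
            x = pred[x]
        return path

    def biased_mst_search(n, weight, steps=20000):
        # Weight-biased edge exchange: insert a cheap edge, remove one edge of the created cycle.
        edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
        bias = [1.0 / weight[e] for e in edges]      # low weight => sampled more often
        tree = random_spanning_tree(n, edges)
        cost = lambda t: sum(weight[e] for e in t)
        for _ in range(steps):
            e = random.choices(edges, weights=bias)[0]
            if e in tree:
                continue
            out = random.choice(tree_path(tree, e[0], e[1]))
            child = (tree - {out}) | {e}
            if cost(child) <= cost(tree):            # accept if not worse
                tree = child
        return tree

    random.seed(1)
    n = 8
    w = {(u, v): random.randint(1, 20) for u in range(n) for v in range(u + 1, n)}
    print(sorted(biased_mst_search(n, w)))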
Evolutionary Multi-Objective Optimization for the Dynamic Knapsack Problem
Evolutionary algorithms are bio-inspired algorithms that can easily adapt to
changing environments. In this paper, we study single- and multi-objective
baseline evolutionary algorithms for the classical knapsack problem where the
capacity of the knapsack varies over time. We establish different benchmark
scenarios where the capacity changes every τ iterations according to a
uniform or normal distribution. Our experimental investigations analyze the
behavior of our algorithms in terms of the magnitude of changes determined by
parameters of the chosen distribution, the frequency determined by τ, and
the class of knapsack instance under consideration. Our results show that the
multi-objective approaches using a population that caters for dynamic changes
have a clear advantage in many benchmark scenarios when the frequency of
changes is not too high. Furthermore, we demonstrate that the distribution
handling techniques in advanced algorithms such as NSGA-II and SPEA2 do not
necessarily result in better performance and can even prevent these algorithms
from finding good-quality solutions in comparison with simple multi-objective
approaches.
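As a concrete illustration of the benchmark setting described above, the sketch below runs a baseline (1+1) EA on a knapsack whose capacity is perturbed every tau iterations by a uniformly distributed amount. The penalty-based fitness, the parameter names tau and r, and the concrete values are assumptions for illustration, not the paper's exact setup.

    import random

    def fitness(x, profits, weights, capacity):
        # Total profit, with infeasible solutions penalised by their excess weight.
        p = sum(pi for pi, xi in zip(profits, x) if xi)
        w = sum(wi for wi, xi in zip(weights, x) if xi)
        return p - max(profits) * len(x) * max(0, w - capacity)

    def one_plus_one_ea_dynamic(profits, weights, capacity, tau=100, r=25, steps=10000):
        n = len(profits)
        x = [0] * n
        fx = fitness(x, profits, weights, capacity)
        for t in range(1, steps + 1):
            if t % tau == 0:                                  # dynamic change every tau iterations
                capacity = max(0, capacity + random.randint(-r, r))
                fx = fitness(x, profits, weights, capacity)   # re-evaluate the current solution
            y = [xi ^ (random.random() < 1.0 / n) for xi in x]  # standard bit mutation
            fy = fitness(y, profits, weights, capacity)
            if fy >= fx:
                x, fx = y, fy
        return x, capacity

    random.seed(0)
    profits = [random.randint(1, 100) for _ in range(50)]
    weights = [random.randint(1, 100) for _ in range(50)]
    solution, final_capacity = one_plus_one_ea_dynamic(profits, weights, capacity=1000)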
Runtime analysis of randomized search heuristics for dynamic graph coloring
We contribute to the theoretical understanding of randomized search heuristics for dynamic problems. We consider the classical graph coloring problem and investigate the dynamic setting where edges are added to the current graph. We then analyze the expected time for randomized search heuristics to recompute high-quality solutions. This includes the (1+1) EA and RLS in a setting where the number of colors is bounded and the number of conflicts is minimized, as well as iterated local search algorithms that use an unbounded color palette and aim to use the smallest colors and, as a consequence, the smallest number of colors.
We identify classes of bipartite graphs where reoptimization is as hard as or even harder than optimization from scratch, i.e., starting with a random initialization. Even adding a single edge can lead to hard symmetry problems. However, graph classes that are hard for one algorithm turn out to be easy for others. In most cases our bounds show that reoptimization is faster than optimizing from scratch. Furthermore, we show how to speed up computations by using problem-specific operators concentrating on parts of the graph where changes have occurred.
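The bounded-palette setting can be illustrated with a small sketch. This is one illustrative reading (conflict-count fitness, single-vertex recolouring), not the paper's exact formulation: after an edge insertion, the previous colouring is reused as the starting point and RLS recolours single vertices as long as the number of conflicting edges does not increase.

    import random

    def conflicts(colour, edges):
        # Number of edges whose endpoints share a colour.
        return sum(1 for u, v in edges if colour[u] == colour[v])

    def rls_recolour(colour, edges, k, steps=10000):
        # Randomised local search: recolour one random vertex per step,
        # accept whenever the number of conflicts does not increase.
        colour = colour[:]
        best = conflicts(colour, edges)
        for _ in range(steps):
            if best == 0:
                break
            v = random.randrange(len(colour))
            old = colour[v]
            colour[v] = random.randrange(k)
            new = conflicts(colour, edges)
            if new <= best:
                best = new
            else:
                colour[v] = old
        return colour, best

    # Dynamic usage: two properly 2-coloured paths are joined by a new edge that
    # creates a conflict; RLS reoptimises starting from the old colouring.
    random.seed(3)
    edges = [(i, i + 1) for i in range(4)] + [(i, i + 1) for i in range(5, 9)]
    colour = [i % 2 for i in range(5)] + [i % 2 for i in range(5)]
    edges.append((4, 5))                     # the dynamic change
    colour, remaining = rls_recolour(colour, edges, k=2)
    print(remaining)                         # typically 0 after reoptimisation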
Analysis of Evolutionary Algorithms in Dynamic and Stochastic Environments
Many real-world optimization problems occur in environments that change
dynamically or involve stochastic components. Evolutionary algorithms and other
bio-inspired algorithms have been widely applied to dynamic and stochastic
problems. This survey gives an overview of major theoretical developments in
the area of runtime analysis for these problems. We review recent theoretical
studies of evolutionary algorithms and ant colony optimization for problems
where the objective functions or the constraints change over time. Furthermore,
we consider stochastic problems under various noise models and point out some
directions for future research.

Comment: This book chapter is to appear in the book "Theory of Randomized Search Heuristics in Discrete Search Spaces", which is edited by Benjamin Doerr and Frank Neumann and is scheduled to be published by Springer in 201
Pareto Optimization for Subset Selection with Dynamic Cost Constraints
We consider the subset selection problem for a function f with a constraint bound B that changes over time. Within the area of submodular optimization, various greedy approaches are commonly used. For dynamic environments we observe that the adaptive variants of these greedy approaches are not able to maintain their approximation quality. Investigating the recently introduced POMC Pareto optimization approach, we show that this algorithm efficiently computes a φ = (α_f/2)(1 - 1/e^{α_f})-approximation, where α_f is the submodularity ratio of f, for each possible constraint bound b ≤ B. Furthermore, we show that POMC is able to adapt its set of solutions quickly in the case that B increases. Our experimental
investigations for the influence maximization in social networks show the
advantage of POMC over generalized greedy algorithms. We also consider EAMC, a
new evolutionary algorithm with a polynomial expected-time guarantee to maintain the φ
approximation ratio, and NSGA-II as an advanced multi-objective
optimization algorithm, to demonstrate their challenges in optimizing the
maximum coverage problem. Our empirical analysis shows that, within the same
number of evaluations, POMC is able to outperform NSGA-II under linear
constraint, while EAMC performs significantly worse than all considered
algorithms in most cases.

Comment: A preliminary version of this article has been presented at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019).
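The following sketch gives one illustrative reading of a POMC/GSEMO-style Pareto optimisation approach for cost-constrained maximum coverage; the objective handling, the helper names (pomc, best_feasible), and all parameters are assumptions rather than the exact algorithm analysed in the paper. The point of the trade-off population is that when the constraint bound B changes, the best feasible solution for the new bound can be read off the archive instead of restarting the search.

    import random

    def coverage(x, sets):
        covered = set()
        for xi, s in zip(x, sets):
            if xi:
                covered |= s
        return len(covered)

    def cost(x, costs):
        return sum(c for xi, c in zip(x, costs) if xi)

    def dominates(a, b):
        # a, b are (coverage, cost) pairs; maximise coverage, minimise cost.
        return a[0] >= b[0] and a[1] <= b[1] and a != b

    def pomc(sets, costs, steps=20000):
        n = len(sets)
        empty = tuple([0] * n)
        pop = {empty: (coverage(empty, sets), cost(empty, costs))}
        for _ in range(steps):
            parent = random.choice(list(pop))
            child = tuple(xi ^ (random.random() < 1.0 / n) for xi in parent)
            obj = (coverage(child, sets), cost(child, costs))
            if any(dominates(o, obj) for o in pop.values()):
                continue                              # child is dominated, discard it
            pop = {x: o for x, o in pop.items() if not dominates(obj, o)}
            pop[child] = obj                          # keep only non-dominated trade-offs
        return pop

    def best_feasible(pop, bound):
        # Best archived solution whose cost respects the (possibly new) constraint bound.
        feasible = [(o[0], x) for x, o in pop.items() if o[1] <= bound]
        return max(feasible)[1] if feasible else None

    random.seed(7)
    sets = [set(random.sample(range(40), 8)) for _ in range(15)]
    costs = [random.randint(1, 10) for _ in range(15)]
    archive = pomc(sets, costs)
    solution = best_feasible(archive, bound=20)       # reusable when the bound changes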
Fast re-optimization via structural diversity
When a problem instance is perturbed by a small modification, one would hope to find a good solution for the new instance by building on a known good solution for the previous one. Via a rigorous mathematical analysis, we show that evolutionary algorithms, despite usually being robust problem solvers, can have unexpected difficulties to solve such re-optimization problems. When started with a random Hamming neighbor of the optimum, the (1+1) evolutionary algorithm takes Ω(n²) time to optimize the LeadingOnes benchmark function, which is the same asymptotic optimization time as when started in a randomly chosen solution. There is hence no significant advantage from re-optimizing a structurally good solution. We then propose a way to overcome such difficulties. As our mathematical analysis reveals, the reason for this undesired behavior is that during the optimization structurally good solutions can easily be replaced by structurally worse solutions of equal or better fitness. We propose a simple diversity mechanism that prevents this behavior, thereby reducing the re-optimization time for LeadingOnes to O(γδn), where γ is the population size used by the diversity mechanism and δ is the Hamming distance of the new optimum from the previous solution. We show similarly fast re-optimization times for the optimization of linear functions with changing constraints and for the minimum spanning tree problem.
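The re-optimization scenario for LeadingOnes can be reproduced in a few lines; the sketch below (illustrative parameters, not the paper's experimental setup) starts the (1+1) EA in a random Hamming neighbour of the all-ones optimum and counts the evaluations until the optimum is recovered.

    import random

    def leading_ones(x):
        value = 0
        for bit in x:
            if bit == 0:
                break
            value += 1
        return value

    def one_plus_one_ea(x, max_evals=200000):
        # Standard (1+1) EA: flip each bit with probability 1/n, accept if not worse.
        n = len(x)
        fx = leading_ones(x)
        evals = 0
        while fx < n and evals < max_evals:
            y = [b ^ (random.random() < 1.0 / n) for b in x]
            fy = leading_ones(y)
            evals += 1
            if fy >= fx:
                x, fx = y, fy
        return evals

    random.seed(42)
    n = 100
    start = [1] * n
    start[random.randrange(n)] = 0      # a random Hamming neighbour of the optimum
    print(one_plus_one_ea(start))       # often still on the order of n^2 evaluations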