
    Introductory Chapter: Swarm Intelligence and Particle Swarm Optimization


    Runtime Analyses of Multi-Objective Evolutionary Algorithms in the Presence of Noise

    In single-objective optimization, it is well known that evolutionary algorithms can, even without further adjustments, tolerate a certain amount of noise in the evaluation of the objective function. In contrast, this question is not at all understood for multi-objective optimization. In this work, we conduct the first mathematical runtime analysis of a simple multi-objective evolutionary algorithm (MOEA) on a classic benchmark in the presence of noise in the objective functions. We prove that when bit-wise prior noise with rate p ≤ α/n, α a suitable constant, is present, the simple evolutionary multi-objective optimizer (SEMO) without any adjustments to cope with noise finds the Pareto front of the OneMinMax benchmark in time O(n² log n), just as in the case without noise. Given that the problem here is to arrive at a population of n+1 individuals witnessing the Pareto front, this is a surprisingly strong robustness to noise (comparably simple evolutionary algorithms cannot optimize the single-objective OneMax problem in polynomial time when p = ω(log(n)/n)). Our proofs suggest that the strong robustness of the MOEA stems from its implicit diversity mechanism, designed to enable it to compute a population covering the whole Pareto front. Interestingly, this result only holds when the objective value of a solution is determined only once and the algorithm from that point on works with this, possibly noisy, objective value. We prove that when all solutions are reevaluated in each iteration, then any noise rate p = ω(log(n)/n²) leads to a super-polynomial runtime. This is very different from single-objective optimization, where it is generally preferred to reevaluate solutions whenever their fitness is important, and where examples are known such that not reevaluating solutions can lead to catastrophic performance losses.
    Comment: Appears at IJCAI 202
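The SEMO dynamics the abstract describes (one-bit mutation, keeping only mutually non-dominated solutions, and evaluating each solution only once) can be sketched in Python. This is a minimal illustration, not the paper's code: the function names are invented here, and the bit-wise prior noise model is implemented in one common way (each bit flips independently with probability p before evaluation).

```python
import random

def one_min_max(x):
    """OneMinMax benchmark: maximize (number of zeros, number of ones)."""
    ones = sum(x)
    return (len(x) - ones, ones)

def noisy_eval(x, p):
    """Bit-wise prior noise (assumed model): each bit flips independently
    with probability p before the objective values are computed."""
    noisy = [b ^ (random.random() < p) for b in x]
    return one_min_max(noisy)

def dominates(f, g):
    """f is at least as good as g in every objective and better in one."""
    return all(a >= b for a, b in zip(f, g)) and f != g

def semo(n, p=0.0, max_iters=200_000):
    """SEMO sketch: each solution is evaluated once and its stored (possibly
    noisy) objective value is reused from then on, as in the robust setting."""
    x = [random.randint(0, 1) for _ in range(n)]
    pop = {noisy_eval(x, p): x}          # map objective value -> solution
    for _ in range(max_iters):
        parent = random.choice(list(pop.values()))
        child = parent[:]
        child[random.randrange(n)] ^= 1  # flip one uniformly random bit
        fc = noisy_eval(child, p)
        if any(dominates(f, fc) for f in pop):
            continue                     # child is dominated: discard it
        # keep the child and drop every solution it dominates
        pop = {f: y for f, y in pop.items() if not dominates(fc, f)}
        pop[fc] = child
        if len(pop) == n + 1:            # full Pareto front covered
            break
    return pop
```

Switching to reevaluation in every iteration would amount to recomputing `noisy_eval` for all stored solutions each loop, which is exactly the variant the abstract shows to be fragile under noise.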

    Sensitivity of shortest distance search in the ant colony algorithm with varying normalized distance formulas

    The ant colony algorithm is adopted from the behavior of ants, which in nature are able to find the shortest route from the nest to food sources based on the trails left on paths already traveled. The ant colony algorithm helps solve several problems such as scheduling, the traveling salesman problem (TSP), and the vehicle routing problem (VRP), and several variants of it have been developed. In this work, its shortest-distance search is optimized by utilizing several normalized distance formulas, with the data used being distances between merchants in a merchant ecosystem. In the tests, normalized Hamming distance is superior in finding the shortest distance with a value of 0.2875, followed by normalized Manhattan distance and normalized Euclidean distance with the same value of 0.4675, while the original ant colony algorithm without a normalized distance formula obtains a value of 0.6635. Given this sensitivity in distance search using merchant ecosystem data, the ant colony algorithm works well with normalized Hamming distance.
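The abstract does not spell out which normalization it uses, so the following Python sketch shows one common form of each of the three normalized distance formulas it compares; the exact definitions in the paper may differ.

```python
import math

def normalized_hamming(a, b):
    # fraction of coordinates in which the two vectors differ
    return sum(x != y for x, y in zip(a, b)) / len(a)

def normalized_manhattan(a, b):
    # mean absolute difference per coordinate
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def normalized_euclidean(a, b):
    # Euclidean distance scaled by the square root of the dimension
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))) / math.sqrt(len(a))
```

In an ant colony algorithm, such a distance d(i, j) would typically enter the probabilistic transition rule through the heuristic desirability η(i, j) = 1/d(i, j), so swapping the distance formula changes which edges the ants prefer.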