Review of Metaheuristics and Generalized Evolutionary Walk Algorithm
Metaheuristic algorithms are often nature-inspired, and they are becoming very powerful tools for solving global optimization problems. More than a dozen major metaheuristic algorithms have been developed over the last three decades, and even more variants and hybrids of metaheuristics exist. This paper provides an overview of nature-inspired metaheuristic algorithms, from a brief history to their applications. We analyze the main components of these algorithms and how and why they work. We then propose a unified view of metaheuristics in the form of a generalized evolutionary walk algorithm (GEWA). Finally, we discuss some of the important open questions.
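The abstract does not spell out GEWA's update rule, but the general idea of an evolutionary walk, random-walk moves combined with elitist selection over a population, can be sketched as follows. This is a minimal illustration, not the paper's algorithm; the function and parameter names (`alpha`, `bounds`, etc.) are assumptions made for the example.

```python
import random

def evolutionary_walk(f, dim, n=20, steps=1000, alpha=0.1, bounds=(-5.0, 5.0)):
    """Minimal random-walk-plus-selection sketch (not the paper's exact GEWA).
    f: objective to minimize; dim: dimensionality; n: population size;
    alpha: step-size scale of the random walk."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    best = min(pop, key=f)
    for _ in range(steps):
        for i, x in enumerate(pop):
            # Random-walk move: Gaussian steps (exploration) plus a weak
            # attraction toward the current best solution (exploitation).
            trial = [xi + alpha * random.gauss(0, 1)
                     + alpha * random.random() * (bi - xi)
                     for xi, bi in zip(x, best)]
            trial = [min(max(t, lo), hi) for t in trial]  # keep in bounds
            if f(trial) < f(x):  # elitist selection: accept only improvements
                pop[i] = trial
        best = min(pop + [best], key=f)
    return best, f(best)

# Example: minimize the 5-D sphere function.
x_best, f_best = evolutionary_walk(lambda x: sum(v * v for v in x), dim=5)
```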
Opportunistic Self Organizing Migrating Algorithm for Real-Time Dynamic Traveling Salesman Problem
Self Organizing Migrating Algorithm (SOMA) is a metaheuristic algorithm based on the self-organizing behavior of individuals in a simulated social environment. SOMA performs iterative computations on a population of potential solutions in the given search space to obtain an optimal solution. In this paper, an Opportunistic Self Organizing Migrating Algorithm (OSOMA) is proposed that introduces a novel strategy for generating perturbations effectively. This strategy allows each individual to span a wider range of candidate solutions and is thus able to produce better solutions. A comprehensive analysis of OSOMA on multi-dimensional unconstrained benchmark test functions is performed. OSOMA is then applied to solve the real-time Dynamic Traveling Salesman Problem (DTSP). The real-time DTSP has been formulated and simulated using real-time data from Google Maps with a varying cost metric between any two cities. Although DTSP is a very common and intuitive model in the real world, its presence in the literature is still very limited. OSOMA performs exceptionally well on the problems mentioned above. To substantiate this claim, the performance of OSOMA is compared with SOMA, Differential Evolution and Particle Swarm Optimization.
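For context, a single SOMA migration step (the AllToOne variant) can be sketched as below; the perturbation (PRT) vector it draws is the mechanism OSOMA's novel strategy modifies. The parameter defaults (`prt`, `path_length`, `step`) are common SOMA settings, not necessarily those used in the paper.

```python
import random

def soma_migrate(x, leader, f, prt=0.3, path_length=3.0, step=0.11):
    """One SOMA-style migration of individual `x` toward `leader`
    (AllToOne variant), returning the best point sampled on the path."""
    best, best_val = list(x), f(x)
    t = step
    while t <= path_length:
        # PRT vector: a dimension is allowed to move only with probability
        # `prt`; this stochastic masking is the perturbation OSOMA refines.
        prt_vec = [1.0 if random.random() < prt else 0.0 for _ in x]
        cand = [xi + (li - xi) * t * pj
                for xi, li, pj in zip(x, leader, prt_vec)]
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
        t += step
    return best, best_val

# Usage: migrate each individual toward the current population leader,
# e.g. new_x, new_val = soma_migrate(x, leader, f=objective)
```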
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it continues to grow because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm for them is known. In practice, this means that an exact solution cannot be guaranteed within a reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find "quickly" (in reasonable run-times), with "high" probability, provably "good" solutions (with low error relative to the true optimum). In the last 20 years, a new class of algorithms commonly called metaheuristics has emerged, which combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two very significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
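As a concrete illustration of the intensification/diversification balance the report highlights, consider simulated annealing: a high temperature accepts many worsening moves (diversification), and cooling progressively concentrates the search around good solutions (intensification). The sketch below is a generic textbook version, not an algorithm from the report.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995, iters=10000):
    """Minimize f starting from x0; `neighbor` proposes a nearby solution."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x)
        fy = f(y)
        # Always accept improvements; accept worsening moves with a
        # probability that shrinks as the temperature drops.
        if fy < fx or random.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # cooling shifts diversification toward intensification
    return best, fbest

# Example: 1-D quadratic with Gaussian neighbourhood moves.
best, val = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0,
                                neighbor=lambda x: x + random.gauss(0, 0.5))
```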
Heterogeneous Ant Colony Optimisation Methods and their Application to the Travelling Salesman and PCB Drilling Problems
Ant Colony Optimization (ACO) is an optimization algorithm inspired by the foraging behaviour of real ants in locating and transporting food sources to their nest. It is designed as a population-based metaheuristic and has been successfully applied to various NP-hard problems such as the well-known Traveling Salesman Problem (TSP), the Vehicle Routing Problem (VRP) and many more. However, the majority of ACO studies have focused on homogeneous artificial ants, although animal behaviour researchers suggest that real ants exhibit heterogeneous behaviour that improves the overall efficiency of their colonies. Equally important, most, if not all, optimization algorithms require proper parameter tuning to achieve optimal performance. However, it is well known that parameters are problem-dependent, as different problems or even different instances have different optimal parameter settings. Parameter tuning through the testing of parameter combinations is a computationally expensive procedure that is infeasible on large-scale real-world problems. One method to mitigate this is to introduce heterogeneity by initializing the artificial agents with individual parameters rather than colony-level parameters. This allows the algorithm to either actively or passively discover good parameter settings during the search. The approach undertaken in this study is to randomly initialize the ants from uniform and Gaussian distributions, respectively, within a predefined range of values. This is biologically plausible for ants with similar roles but differing behavioural traits, with those traits drawn from a mathematical distribution. This study also introduces an adaptive approach to the heterogeneous ant colony population that evolves the alpha and beta controlling parameters of ACO to locate near-optimal solutions. The adaptive approach is able to modify the exploitation and exploration characteristics of the algorithm during the search to reflect its dynamic nature. An empirical analysis of the proposed algorithm on a range of Travelling Salesman Problem (TSP) instances shows that the approach performs better when compared against state-of-the-art algorithms from the literature.
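The per-ant parameter initialization described above can be sketched as follows: each ant receives its own alpha (pheromone weight) and beta (heuristic weight) rather than colony-level values. The ranges and standard deviation below are illustrative assumptions, not the values used in the thesis.

```python
import random

def init_heterogeneous_ants(n_ants, alpha_range=(0.5, 2.0),
                            beta_range=(1.0, 5.0), dist="uniform", sigma=0.5):
    """Draw per-ant (alpha, beta) from a uniform or Gaussian distribution.
    In ACO's transition rule, p_ij is proportional to tau_ij**alpha *
    eta_ij**beta, so each ant weighs pheromone vs. heuristic differently."""
    ants = []
    for _ in range(n_ants):
        if dist == "uniform":
            alpha = random.uniform(*alpha_range)
            beta = random.uniform(*beta_range)
        else:
            # Gaussian around the range midpoint, clamped to the range.
            mid_a, mid_b = sum(alpha_range) / 2, sum(beta_range) / 2
            alpha = min(max(random.gauss(mid_a, sigma), alpha_range[0]), alpha_range[1])
            beta = min(max(random.gauss(mid_b, sigma), beta_range[0]), beta_range[1])
        ants.append({"alpha": alpha, "beta": beta})
    return ants

colony = init_heterogeneous_ants(50, dist="gaussian")
```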
Metaheuristic design of feedforward neural networks: a review of two decades of research
Over the past two decades, feedforward neural network (FNN) optimization has been a key interest of researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs. Their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. This article also tries to connect the various research directions that emerged from FNN optimization practice, such as evolving neural networks (NN), cooperative coevolutionary NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it identifies interesting challenges for future research to cope with the present information-processing era.
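As a minimal example of the metaheuristic alternative to backpropagation that the review surveys, the sketch below evolves the flat weight vector of a tiny one-hidden-layer FNN with a (1+1)-style mutation-and-selection loop. The network size, mutation scale and task are illustrative assumptions only.

```python
import math
import random

def fnn_forward(w, x, n_in, n_hid):
    """Tiny one-hidden-layer tanh network; `w` is a flat weight list:
    input->hidden weights and biases, then hidden->output weights and bias."""
    k, hidden = 0, []
    for _ in range(n_hid):
        s = sum(w[k + i] * x[i] for i in range(n_in)) + w[k + n_in]
        k += n_in + 1
        hidden.append(math.tanh(s))
    out = sum(w[k + j] * h for j, h in enumerate(hidden))
    return out + w[k + n_hid]

def evolve_weights(data, n_in, n_hid, gens=5000, sigma=0.1):
    """(1+1)-style evolution of FNN weights: mutate, keep if error drops.
    A gradient-free stand-in for the metaheuristics the article reviews."""
    n_w = n_hid * (n_in + 1) + n_hid + 1
    w = [random.gauss(0, 0.5) for _ in range(n_w)]
    mse = lambda wv: sum((fnn_forward(wv, x, n_in, n_hid) - y) ** 2
                         for x, y in data) / len(data)
    err = mse(w)
    for _ in range(gens):
        cand = [wi + random.gauss(0, sigma) for wi in w]
        cand_err = mse(cand)
        if cand_err < err:  # selection: keep the mutant only if it improves
            w, err = cand, cand_err
    return w, err

# Example: fit XOR, a small nonlinear task.
xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
weights, final_err = evolve_weights(xor, n_in=2, n_hid=3)
```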