
    Review of Metaheuristics and Generalized Evolutionary Walk Algorithm

    Metaheuristic algorithms are often nature-inspired, and they are becoming very powerful in solving global optimization problems. More than a dozen major metaheuristic algorithms have been developed over the last three decades, and there exist even more variants and hybrids of metaheuristics. This paper intends to provide an overview of nature-inspired metaheuristic algorithms, from a brief history to their applications. We try to analyze the main components of these algorithms and how and why they work. Then, we intend to provide a unified view of metaheuristics by proposing a generalized evolutionary walk algorithm (GEWA). Finally, we discuss some of the important open questions.
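    The abstract does not spell out GEWA's update rules, so the following Python sketch only illustrates the unified view it alludes to: a population-based random walk combined with elitist selection. The function name, step sizes, and the bias toward the current best position are illustrative assumptions, not the authors' GEWA.

```python
import numpy as np

def evolutionary_walk(objective, bounds, n_agents=20, n_iter=200, alpha=0.1, seed=0):
    """Minimal sketch of a metaheuristic as a biased random walk:
    each agent takes a random step and better positions replace worse ones."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    pop = rng.uniform(lo, hi, size=(n_agents, len(lo)))     # random initial population
    fitness = np.array([objective(x) for x in pop])
    best = pop[fitness.argmin()].copy()
    for _ in range(n_iter):
        # Randomization: Gaussian step scaled by the width of the search range.
        steps = alpha * (hi - lo) * rng.standard_normal(pop.shape)
        # Bias the walk toward the current global best (exploitation).
        candidates = np.clip(pop + steps + 0.5 * (best - pop), lo, hi)
        cand_fit = np.array([objective(x) for x in candidates])
        # Selection: keep the better of the current and candidate positions.
        improved = cand_fit < fitness
        pop[improved], fitness[improved] = candidates[improved], cand_fit[improved]
        best = pop[fitness.argmin()].copy()
    return best, float(fitness.min())

# Example: minimize the 5-dimensional sphere function.
best_x, best_f = evolutionary_walk(lambda x: float(np.sum(x**2)),
                                   (np.full(5, -5.0), np.full(5, 5.0)))
```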

    A Comprehensive Review of Recent Variants and Modifications of Firefly Algorithm

    Swarm intelligence (SI) is an emerging field of biologically inspired artificial intelligence based on the behavioral models of social insects such as ants, bees, wasps and termites. Swarm intelligence is the discipline that deals with natural and artificial systems composed of many individuals that coordinate using decentralized control and self-organization. Most SI algorithms have been developed to address stationary optimization problems and hence can converge on the (near-)optimum solution efficiently. However, many real-world problems have a dynamic environment that changes over time. In the last two decades, there has been growing interest in addressing dynamic optimization problems using SI algorithms due to their adaptation capabilities. This paper presents a broad review of two SI algorithms: 1) the Firefly Algorithm (FA) and 2) the Flower Pollination Algorithm (FPA). FA is inspired by the bioluminescence of fireflies; FPA is inspired by the pollination behavior of flowering plants. This article aims to give a detailed analysis of the different variants of FA and FPA developed to date through parameter adaptation, modification and hybridization. It also addresses the applications of these algorithms in various fields. In addition, the literature shows that in most reported cases FA and FPA outperform other metaheuristic algorithms.
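    As a concrete illustration of the firefly metaphor described above, the sketch below implements one iteration of the standard FA update, with attractiveness decaying as beta0*exp(-gamma*r^2) plus a random step. Parameter values and the function name are illustrative and do not correspond to any particular variant surveyed in the paper.

```python
import numpy as np

def firefly_step(pop, fitness, alpha=0.2, beta0=1.0, gamma=1.0, rng=None):
    """One iteration of the classical Firefly Algorithm (minimization):
    dimmer fireflies move toward brighter ones, attractiveness decaying with distance."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        for j in range(n):
            if fitness[j] < fitness[i]:                    # firefly j is brighter (better)
                r2 = np.sum((pop[j] - new_pop[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)         # attractiveness beta0 * exp(-gamma * r^2)
                new_pop[i] += beta * (pop[j] - new_pop[i]) \
                              + alpha * (rng.random(dim) - 0.5)   # random exploration term
    return new_pop
```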

    A survey of swarm intelligence for dynamic optimization: algorithms and applications

    Swarm intelligence (SI) algorithms, including ant colony optimization, particle swarm optimization, bee-inspired algorithms, bacterial foraging optimization, firefly algorithms, fish swarm optimization and many more, have proven to be good methods for addressing difficult optimization problems under stationary environments. Most SI algorithms have been developed to address stationary optimization problems and hence can converge on the (near-)optimum solution efficiently. However, many real-world problems have a dynamic environment that changes over time. For such dynamic optimization problems (DOPs), it is difficult for a conventional SI algorithm to track the changing optimum once the algorithm has converged on a solution. In the last two decades, there has been growing interest in addressing DOPs using SI algorithms due to their adaptation capabilities. This paper presents a broad review of SI dynamic optimization (SIDO) focused on several classes of problems, such as discrete, continuous, constrained, multi-objective and classification problems, and real-world applications. In addition, this paper focuses on the enhancement strategies integrated into SI algorithms to address dynamic changes, as well as the performance measurements and benchmark generators used in SIDO. Finally, some considerations about future directions in the subject are given.
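    As an illustration of the kind of enhancement strategy such a survey discusses, the sketch below shows one common idea, assumed here rather than taken from the paper: detecting an environment change by re-evaluating stored solutions and restoring diversity with random immigrants.

```python
import numpy as np

def adapt_to_change(pop, fitness, objective, bounds, immigrant_frac=0.3, rng=None):
    """If re-evaluated fitness values differ from the cached ones, the environment is
    assumed to have changed; a fraction of the swarm is then re-randomized
    ('random immigrants') to restore diversity."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds
    current = np.array([objective(x) for x in pop])        # change detection by re-evaluation
    if not np.allclose(current, fitness):
        n_imm = max(1, int(immigrant_frac * len(pop)))
        idx = rng.choice(len(pop), size=n_imm, replace=False)
        pop[idx] = rng.uniform(lo, hi, size=(n_imm, pop.shape[1]))
        current[idx] = [objective(x) for x in pop[idx]]
    return pop, current
```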

    A Survey of Evolutionary Continuous Dynamic Optimization Over Two Decades: Part B

    Many real-world optimization problems are dynamic. The field of dynamic optimization deals with such problems, where the search space changes over time. In this two-part paper, we present a comprehensive survey of the research in evolutionary dynamic optimization for single-objective unconstrained continuous problems over the last two decades. In Part A of this survey, we propose a new taxonomy for the components of dynamic optimization algorithms, namely convergence detection, change detection, explicit archiving, diversity control, and population division and management. In comparison to the existing taxonomies, the proposed taxonomy covers some additional important components, such as convergence detection and computational resource allocation. Moreover, we significantly expand and improve the classifications of diversity control and multi-population methods, which are under-represented in the existing taxonomies. We then provide detailed technical descriptions and analysis of the different components according to the suggested taxonomy. Part B of this survey provides an in-depth analysis of the most commonly used benchmark problems, performance analysis methods, static optimization algorithms used as the optimization components in dynamic optimization algorithms, and dynamic real-world applications. Finally, several opportunities for future work are pointed out.
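    A hedged sketch of one taxonomy component, explicit archiving, is given below; the class name, capacity limit and replacement policy are illustrative assumptions rather than a method from the survey.

```python
import numpy as np

class BestArchive:
    """Sketch of an explicit-archiving component: before each environment change the
    best solution found so far is stored, and after a change the archived solutions
    are re-evaluated and injected back into the population."""
    def __init__(self, capacity=5):
        self.capacity = capacity
        self.entries = []

    def store(self, solution):
        self.entries.append(np.array(solution, copy=True))
        self.entries = self.entries[-self.capacity:]       # keep only the most recent entries

    def reinject(self, pop, fitness, objective):
        for sol in self.entries:
            f = objective(sol)                             # re-evaluate under the new environment
            worst = int(np.argmax(fitness))                # replace the worst individual if better
            if f < fitness[worst]:
                pop[worst], fitness[worst] = sol, f
        return pop, fitness
```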

    From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. First, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but are precomputed with a coarse approximation scheme. Second, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to the optimization of the data parameterization; the knot vector is then refined using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
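    The abstract's final step, solving the resulting convex problem by singular value decomposition once the parameterization and knot vector are fixed, can be illustrated as follows. The function name and the use of SciPy's BSpline are assumptions, and the firefly-based parameterization step is omitted.

```python
import numpy as np
from scipy.interpolate import BSpline

def fit_control_points(data, params, knots, degree=3):
    """With the data parameterization and knot vector fixed, fitting the B-spline
    control points is a linear least-squares problem; it is solved here through the
    pseudo-inverse obtained from a singular value decomposition."""
    n_ctrl = len(knots) - degree - 1
    # Collocation matrix: basis function j evaluated at parameter value t_i.
    basis = np.empty((len(params), n_ctrl))
    for j in range(n_ctrl):
        coeffs = np.zeros(n_ctrl)
        coeffs[j] = 1.0
        basis[:, j] = BSpline(knots, coeffs, degree)(params)
    # Solve basis @ P = data in the least-squares sense via the SVD pseudo-inverse.
    u, s, vt = np.linalg.svd(basis, full_matrices=False)
    return vt.T @ np.diag(1.0 / s) @ u.T @ data
```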

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: optimization of the weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers have adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs; their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future work to cope with the present information-processing era.
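    As an example of the metaheuristic (gradient-free) FNN training such a review covers, the sketch below evolves the weights of a one-hidden-layer network with a simple (1+lambda) evolution strategy. The network size, loss function and search strategy are illustrative choices, not a method proposed in the article.

```python
import numpy as np

def forward(weights, X, n_in, n_hidden, n_out):
    """Tiny one-hidden-layer feedforward network with tanh hidden units."""
    w1_end = n_in * n_hidden
    W1 = weights[:w1_end].reshape(n_in, n_hidden)
    b1 = weights[w1_end:w1_end + n_hidden]
    w2_end = w1_end + n_hidden + n_hidden * n_out
    W2 = weights[w1_end + n_hidden:w2_end].reshape(n_hidden, n_out)
    b2 = weights[w2_end:]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def evolve_weights(X, y, n_hidden=8, n_iter=500, pop=20, sigma=0.1, seed=0):
    """Gradient-free weight search with a simple (1+lambda) evolution strategy:
    perturb the current best weight vector and keep the best-performing offspring."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], y.shape[1]
    n_w = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
    best = 0.1 * rng.standard_normal(n_w)
    best_loss = np.mean((forward(best, X, n_in, n_hidden, n_out) - y) ** 2)
    for _ in range(n_iter):
        offspring = best + sigma * rng.standard_normal((pop, n_w))
        losses = [np.mean((forward(w, X, n_in, n_hidden, n_out) - y) ** 2) for w in offspring]
        i = int(np.argmin(losses))
        if losses[i] < best_loss:                          # elitist selection on the MSE loss
            best, best_loss = offspring[i], losses[i]
    return best, best_loss
```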