
    Fitness Landscape-Based Characterisation of Nature-Inspired Algorithms

    A significant challenge in nature-inspired algorithmics is the identification of specific characteristics of problems that make them harder (or easier) to solve using specific methods. The hope is that, by identifying these characteristics, we may more easily predict which algorithms are best suited to problems sharing certain features. Here, we approach this problem using fitness landscape analysis. Techniques already exist for measuring the "difficulty" of specific landscapes, but these are often designed solely with evolutionary algorithms in mind, and are generally specific to discrete optimisation. In this paper we develop an approach for comparing a wide range of continuous optimisation algorithms. Using a fitness landscape generation technique, we compare six different nature-inspired algorithms and identify which methods perform best on landscapes exhibiting specific features.
    Comment: 10 pages, 1 figure, submitted to the 11th International Conference on Adaptive and Natural Computing Algorithms
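    The abstract does not spell out which landscape metrics the paper uses, so the snippet below is only an illustrative sketch of one widely used landscape-characterisation measure for continuous problems: the autocorrelation of fitness values along a random walk, a simple proxy for ruggedness. Function names, test functions, and parameters are assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's method): random-walk autocorrelation
# as a crude ruggedness measure of a continuous fitness landscape.
import numpy as np

def random_walk_autocorrelation(fitness, dim, steps=2000, step_size=0.05, lag=1, seed=0):
    """Estimate ruggedness as the lag-k autocorrelation of fitness values
    sampled along a bounded random walk in [-1, 1]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, dim)
    values = []
    for _ in range(steps):
        x = np.clip(x + rng.normal(0.0, step_size, dim), -1.0, 1.0)
        values.append(fitness(x))
    v = np.asarray(values)
    v = v - v.mean()
    return float(np.dot(v[:-lag], v[lag:]) / np.dot(v, v))

# Smooth landscapes give autocorrelations near 1; rugged ones score lower.
sphere = lambda x: float(np.sum(x**2))                          # smooth
rugged = lambda x: float(np.sum(x**2 - np.cos(18.0 * np.pi * x)))  # Rastrigin-like
print("sphere:", random_walk_autocorrelation(sphere, dim=5))
print("rugged:", random_walk_autocorrelation(rugged, dim=5))
```

    Features of this kind are the sort of landscape characteristics one could then relate to the relative performance of different continuous optimisation algorithms.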

    Continuous extremal optimization for Lennard-Jones Clusters

    In this paper, we explore a general-purpose heuristic algorithm for finding high-quality solutions to continuous optimization problems. The method, called continuous extremal optimization (CEO), can be considered an extension of extremal optimization (EO) and consists of two components: one responsible for global search and the other for local search. With only one adjustable parameter, CEO's performance proves competitive with more elaborate stochastic optimization procedures. We demonstrate it on a well-known continuous optimization problem: the Lennard-Jones cluster optimization problem.
    Comment: 5 pages, 3 figures
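    As a rough illustration of the extremal-optimization idea the abstract builds on (with the power-law exponent tau as the single adjustable parameter), here is a minimal sketch of a tau-EO style move on a Lennard-Jones cluster. The local-search component of CEO is omitted, and all function names, move sizes, and the greedy best-so-far driver are assumptions rather than the authors' implementation.

```python
# Minimal sketch of a tau-EO move on a Lennard-Jones cluster (global-search
# component only; CEO's local-search component is omitted here).
import numpy as np

def lj_pair_energy(r):
    """Lennard-Jones pair potential with epsilon = sigma = 1."""
    return 4.0 * (r**-12 - r**-6)

def per_atom_energy(pos):
    """'Fitness' of each atom: the sum of its pair interaction energies."""
    n = len(pos)
    e = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            eij = lj_pair_energy(np.linalg.norm(pos[i] - pos[j]))
            e[i] += eij
            e[j] += eij
    return e

def eo_step(pos, rng, tau=1.5, move_scale=0.3):
    """One tau-EO move: rank atoms from worst (highest energy) to best,
    pick rank k with probability proportional to k**(-tau), and displace
    that atom randomly. tau is the single adjustable parameter."""
    n = len(pos)
    order = np.argsort(-per_atom_energy(pos))      # worst atom first
    probs = np.arange(1, n + 1, dtype=float) ** (-tau)
    probs /= probs.sum()
    k = rng.choice(n, p=probs)
    new_pos = pos.copy()
    new_pos[order[k]] += rng.normal(0.0, move_scale, 3)
    # Crude containment to keep the sketch compact; not part of EO itself.
    return np.clip(new_pos, -2.0, 2.0)

# Tiny driver: in EO the walker always moves; only the best-so-far is kept.
rng = np.random.default_rng(1)
pos = rng.uniform(-1.0, 1.0, (7, 3))               # random 7-atom cluster
best_e = per_atom_energy(pos).sum() / 2.0
for _ in range(5000):
    pos = eo_step(pos, rng)
    e = per_atom_energy(pos).sum() / 2.0
    if e < best_e:
        best_e = e
print("best total LJ energy found:", best_e)
```

    Note that, unlike simulated annealing, there is no acceptance criterion: the configuration always moves, and only the best configuration found so far is recorded. A full CEO would additionally relax promising configurations with a local search.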

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, and so on. Researchers adopt these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still widely explored by researchers aiming to obtain well-generalized FNNs for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, covering both conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, and quantum NNs. Additionally, it outlines research challenges for future work to cope with the present information-processing era.
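    To make the weight-optimization perspective concrete, the sketch below trains a tiny feedforward network on XOR with a simple (mu + lambda) evolution strategy instead of backpropagation. This is one minimal instance of the metaheuristic approaches the review surveys; the network size, strategy parameters, and code are illustrative assumptions, not drawn from the article.

```python
# Illustrative sketch: metaheuristic (evolution-strategy) optimization of the
# weights of a small 2-H-1 feedforward network, with no gradients involved.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])                 # XOR targets

HIDDEN = 4
N_WEIGHTS = 2 * HIDDEN + HIDDEN + HIDDEN + 1       # W1 + b1 + W2 + b2

def forward(w, x):
    """Feedforward pass with tanh hidden units and a sigmoid output."""
    W1 = w[:2 * HIDDEN].reshape(2, HIDDEN)
    b1 = w[2 * HIDDEN:3 * HIDDEN]
    W2 = w[3 * HIDDEN:4 * HIDDEN]
    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def mse(w):
    """Fitness of a weight vector: mean squared error on the XOR set."""
    return float(np.mean((forward(w, X) - y) ** 2))

# (mu + lambda) evolution strategy over flat weight vectors.
rng = np.random.default_rng(0)
mu, lam, sigma = 5, 20, 0.3
parents = rng.normal(0.0, 1.0, (mu, N_WEIGHTS))
for gen in range(300):
    children = np.repeat(parents, lam // mu, axis=0) + rng.normal(0.0, sigma, (lam, N_WEIGHTS))
    pool = np.vstack([parents, children])
    pool = pool[np.argsort([mse(w) for w in pool])]
    parents = pool[:mu]                            # keep the mu best weight vectors
print("MSE:", mse(parents[0]), "outputs:", forward(parents[0], X).round(2))
```

    Population-based methods of this kind trade gradient information for broader exploration of the weight space, which is the motivation behind many of the metaheuristic FNN designs the review discusses.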