
    A hybrid swarm-based algorithm for single-objective optimization problems involving high-cost analyses

    In many technical fields, single-objective optimization procedures in continuous domains involve expensive numerical simulations. In this context, an improvement of the Artificial Bee Colony (ABC) algorithm, called the Artificial super-Bee enhanced Colony (AsBeC), is presented. AsBeC is designed to provide fast convergence, high solution accuracy, and robust performance over a wide range of problems. It implements enhancements of the ABC structure and hybridizations with interpolation strategies. The latter are inspired by the quadratic trust-region approach for local investigation and by an efficient global optimizer for separable problems. The individual modifications and their combined effects are studied with appropriate metrics on a numerical benchmark, which is also used to compare AsBeC with some effective ABC variants and other derivative-free algorithms. In addition, the presented algorithm is validated on two recent benchmarks adopted for competitions at international conferences. Results show remarkable competitiveness and robustness for AsBeC. Comment: 19 pages, 4 figures, Springer Swarm Intelligence.
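
    To make the hybridization idea above concrete, the following is a minimal, hypothetical sketch (Python with NumPy) of an ABC-style loop augmented with a per-coordinate quadratic refinement of the best solution. It is not the authors' AsBeC: the onlooker phase, the trust-region machinery, and the separable global optimizer are omitted, and all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def abc_with_quadratic_refinement(f, bounds, n_bees=20, max_iter=100, limit=20, rng=None):
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, float).T          # bounds: one (low, high) pair per dimension
    dim = lo.size
    food = rng.uniform(lo, hi, size=(n_bees, dim))  # food sources = candidate solutions
    fit = np.apply_along_axis(f, 1, food)
    trials = np.zeros(n_bees, dtype=int)

    for _ in range(max_iter):
        # Employed-bee phase (onlooker phase omitted for brevity):
        # perturb each source toward a randomly chosen partner.
        for i in range(n_bees):
            k = int(rng.integers(n_bees - 1))
            k += k >= i                           # ensure the partner index differs from i
            j = int(rng.integers(dim))
            cand = food[i].copy()
            cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                food[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1

        # Scout phase: replace sources that stopped improving with random ones.
        worn = trials > limit
        if worn.any():
            food[worn] = rng.uniform(lo, hi, size=(int(worn.sum()), dim))
            fit[worn] = np.apply_along_axis(f, 1, food[worn])
            trials[worn] = 0

        # Quadratic refinement of the current best, one coordinate at a time:
        # fit a parabola through three samples and jump to its vertex if that helps.
        b = int(np.argmin(fit))
        for j in range(dim):
            x0 = food[b].copy()
            h = 0.05 * (hi[j] - lo[j])
            xs = np.array([x0[j] - h, x0[j], x0[j] + h])
            def probe(v):
                p = x0.copy()
                p[j] = v
                return f(np.clip(p, lo, hi))
            ys = [probe(v) for v in xs]
            a2, a1, _ = np.polyfit(xs, ys, 2)
            if a2 > 1e-12:                        # convex fit, so the vertex is a minimum
                x0[j] = np.clip(-a1 / (2 * a2), lo[j], hi[j])
                fx = f(x0)
                if fx < fit[b]:
                    food[b], fit[b] = x0, fx

    b = int(np.argmin(fit))
    return food[b], fit[b]

# Example: minimize the sphere function in five dimensions.
best_x, best_f = abc_with_quadratic_refinement(lambda x: float(np.sum(x ** 2)),
                                               bounds=[(-5.0, 5.0)] * 5, rng=0)
print(best_f)
```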

    Is swarm intelligence able to create mazes?

    In this paper, the idea of applying Computational Intelligence to the creation of board games, in particular mazes, is presented. The proposed idea is examined with two different algorithms. The results of the experiments are shown and discussed to highlight the advantages and disadvantages of each approach.
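
    As a loose illustration of the idea (not either of the paper's two algorithms), the sketch below lets several ant-like agents carve passages in a wall-filled grid, each biased toward unvisited cells, so that the group collectively produces a maze-like layout. The grid size, move weights, and agent count are arbitrary assumptions.

```python
import random

def swarm_maze(width=21, height=21, n_agents=4, steps=2000, seed=0):
    rng = random.Random(seed)
    grid = [[1] * width for _ in range(height)]          # 1 = wall, 0 = passage
    agents = [(1, 1) for _ in range(n_agents)]           # all agents start in one corner
    grid[1][1] = 0
    for _ in range(steps):
        for a, (r, c) in enumerate(agents):
            moves = []
            for dr, dc in ((-2, 0), (2, 0), (0, -2), (0, 2)):   # jump two cells to keep walls
                nr, nc = r + dr, c + dc
                if 0 < nr < height - 1 and 0 < nc < width - 1:
                    # Prefer unvisited (wall) cells; allow revisits so agents escape dead ends.
                    weight = 3 if grid[nr][nc] == 1 else 1
                    moves.append(((nr, nc), weight))
            (nr, nc), _ = rng.choices(moves, weights=[w for _, w in moves])[0]
            grid[(r + nr) // 2][(c + nc) // 2] = 0        # knock down the wall in between
            grid[nr][nc] = 0
            agents[a] = (nr, nc)
    return grid

maze = swarm_maze()
print("\n".join("".join("#" if cell else " " for cell in row) for row in maze))
```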

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, the learning environment, and so on. Researchers adopt these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the application of FNNs to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also connects the various research directions that emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution of NNs, complex-valued NNs, deep learning, extreme learning machines, and quantum NNs. Additionally, it poses interesting research challenges for future work to cope with the demands of the present information-processing era.
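
    The following hedged sketch illustrates the general practice this review surveys: training the weights of a small feedforward network with a population-based metaheuristic (here a plain evolutionary loop) instead of backpropagation. The network size, fitness function, and hyperparameters are illustrative assumptions, not taken from the article.

```python
import numpy as np

def unpack(theta, n_in, n_hidden, n_out):
    """Slice a flat parameter vector into layer weights and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]; i += n_hidden
    W2 = theta[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def forward(theta, X, n_in, n_hidden, n_out):
    W1, b1, W2, b2 = unpack(theta, n_in, n_hidden, n_out)
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))       # sigmoid output

def evolve_fnn(X, y, n_hidden=4, pop=40, gens=300, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], 1
    dim = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
    parents = rng.normal(0, 1, size=(pop, dim))
    def loss(theta):                              # mean squared error as the fitness
        return float(np.mean((forward(theta, X, n_in, n_hidden, n_out).ravel() - y) ** 2))
    for _ in range(gens):
        children = parents + rng.normal(0, sigma, size=parents.shape)
        both = np.vstack([parents, children])
        scores = np.array([loss(t) for t in both])
        parents = both[np.argsort(scores)[:pop]]  # keep the best `pop` individuals
    return parents[0]

# Example: learn XOR, a classic toy task for gradient-free weight training.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)
theta = evolve_fnn(X, y)
print(np.round(forward(theta, X, 2, 4, 1).ravel(), 2))
```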

    A self-adaptive artificial bee colony algorithm with local search for TSK-type neuro-fuzzy system training

    © 2019 IEEE. In this paper, we introduce a self-adaptive artificial bee colony (ABC) algorithm for learning the parameters of a Takagi-Sugeno-Kang-type (TSK-type) neuro-fuzzy system (NFS). The proposed NFS learns the fuzzy rules of the premise part using an adaptive clustering method applied to the input-output data at hand, which establishes the network structure. All free parameters in the NFS, including the premise parameters and the TSK-type consequent parameters, are optimized by the modified ABC (MABC) algorithm. The experiments comprise two parts: numerical optimization problems and dynamic system identification problems. In the first part, the proposed MABC is compared with the standard ABC on mathematical optimization problems. In the remaining experiments, the performance of the proposed method is compared against other metaheuristic methods, including differential evolution (DE), the genetic algorithm (GA), particle swarm optimization (PSO), and the standard ABC, to evaluate the effectiveness and feasibility of the system. The simulation results show that the proposed method provides better approximation results than those obtained by the competing methods.
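
    For readers unfamiliar with the model class, the sketch below shows how a first-order TSK fuzzy system computes its output from premise parameters (Gaussian centers and widths) and linear consequent parameters; a metaheuristic such as ABC would optimize the flat vector holding all of these. It is an illustrative, assumption-laden sketch, not the paper's NFS or its MABC algorithm.

```python
import numpy as np

def tsk_output(x, centers, widths, consequents):
    """x: (d,) input; centers/widths: (R, d); consequents: (R, d+1) as [a_1..a_d, bias]."""
    # Rule firing strengths: product of Gaussian memberships over the input dimensions.
    w = np.exp(-((x - centers) ** 2) / (2 * widths ** 2)).prod(axis=1)
    # Rule outputs: first-order (linear) TSK consequents.
    y_rule = consequents[:, :-1] @ x + consequents[:, -1]
    # Weighted average with normalized firing strengths gives the model output.
    return float(np.dot(w, y_rule) / (np.sum(w) + 1e-12))

# Example with two rules over a two-dimensional input.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([[0.5, 0.5], [0.5, 0.5]])
consequents = np.array([[1.0, -1.0, 0.0], [0.5, 0.5, 1.0]])
print(tsk_output(np.array([0.3, 0.7]), centers, widths, consequents))
```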