
    Hybrid nature-inspired computation methods for optimization

    The focus of this work is the exploration of hybrid Nature-Inspired Computation (NIC) methods with applications in optimization. In the dissertation, we first study various NIC algorithms, including the Clonal Selection Algorithm (CSA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Simulated Annealing (SA), Harmony Search (HS), Differential Evolution (DE), and Mind Evolution Computing (MEC), and propose several new fusions of these techniques, such as CSA-DE, HS-DE, and CSA-SA. Their working principles, structures, and algorithms are analyzed and discussed in detail. We next investigate the performance of our hybrid NIC methods in handling nonlinear, multi-modal, and dynamic optimization problems, e.g., nonlinear function optimization, optimal LC passive power filter design, and the optimization of neural networks and fuzzy classification systems. Hybridizing these NIC methods overcomes the shortcomings of the standalone algorithms while retaining their advantages. Computer simulations demonstrate that the proposed hybrid NIC approaches yield superior optimization performance over the individual NIC methods, as well as over conventional methodologies, in terms of search efficiency, convergence speed, and the quantity and quality of the optimal solutions achieved.
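    The abstract does not spell out how the fusions are constructed, so the following is only a minimal, hypothetical sketch of one common hybridization pattern loosely in the spirit of the DE/SA combinations above: DE/rand/1/bin proposals paired with an SA-style Metropolis acceptance rule. The test function, parameter values, and all names (`sphere`, `de_sa_minimize`, etc.) are illustrative assumptions, not taken from the dissertation.

```python
import math
import random

def sphere(x):
    """Illustrative test function: global minimum 0 at the origin (assumption)."""
    return sum(xi * xi for xi in x)

def de_sa_minimize(f, dim=10, pop_size=30, iters=500,
                   F=0.5, CR=0.9, temp=1.0, cooling=0.995,
                   lo=-5.0, hi=5.0):
    """Minimal hybrid sketch: DE proposals with SA-style acceptance of
    non-improving trials, cooled geometrically over iterations."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # DE/rand/1 mutation: combine three distinct other members
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(dim)]
            # Binomial crossover between target and mutant
            j_rand = random.randrange(dim)
            trial = [mutant[k] if (random.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            ft = f(trial)
            # SA-style acceptance: always take improvements,
            # sometimes accept a worse trial depending on temperature
            if ft < fit[i] or random.random() < math.exp((fit[i] - ft) / max(temp, 1e-12)):
                pop[i], fit[i] = trial, ft
        temp *= cooling  # geometric cooling schedule
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

if __name__ == "__main__":
    x, fx = de_sa_minimize(sphere)
    print(f"best f = {fx:.6f}")
```

    The SA-style acceptance serves the role the abstract attributes to hybridization in general: it lets the population escape local optima that a greedy DE selection would lock into, while DE's difference vectors still drive efficient search.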

    Bat Algorithm: Literature Review and Applications

    The bat algorithm (BA) is a bio-inspired algorithm developed by Yang in 2010, and it has been found to be very efficient. As a result, the literature has expanded significantly over the last three years. This paper provides a timely review of the bat algorithm and its new variants. A wide range of diverse applications and case studies are also reviewed and briefly summarized, and further research topics are discussed.
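    For readers unfamiliar with BA, the core update rules from Yang's 2010 formulation (frequency-tuned velocity updates toward the global best, a loudness-scaled local random walk, and loudness/pulse-rate adaptation) are compact enough to sketch. The code below is a minimal, simplified rendition; the test function and all parameter values are illustrative choices, not prescriptions from the review.

```python
import numpy as np

def bat_algorithm(f, dim=10, n_bats=20, iters=1000,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9,
                  lo=-5.0, hi=5.0, seed=0):
    """Minimal sketch of the core bat algorithm updates (after Yang, 2010)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_bats, dim))   # bat positions
    v = np.zeros((n_bats, dim))              # bat velocities
    A = np.ones(n_bats)                      # loudness per bat
    r0 = rng.uniform(0.0, 1.0, n_bats)       # initial pulse emission rates
    r = r0.copy()
    fit = np.apply_along_axis(f, 1, x)
    best = x[fit.argmin()].copy()
    best_f = fit.min()
    for t in range(1, iters + 1):
        for i in range(n_bats):
            # Frequency, velocity, and position updates toward the global best
            freq = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lo, hi)
            # Local random walk around the best, scaled by mean loudness
            if rng.random() > r[i]:
                cand = np.clip(best + 0.01 * rng.standard_normal(dim) * A.mean(), lo, hi)
            fc = f(cand)
            # Conditional acceptance: loudness decreases, pulse rate increases
            if fc < fit[i] and rng.random() < A[i]:
                x[i], fit[i] = cand, fc
                A[i] *= alpha
                r[i] = r0[i] * (1.0 - np.exp(-gamma * t))
            if fc < best_f:
                best, best_f = cand.copy(), fc
    return best, best_f

if __name__ == "__main__":
    sol, val = bat_algorithm(lambda z: float(np.sum(z * z)))
    print(f"best f = {val:.6f}")
```

    The loudness and pulse-rate schedules are what distinguish BA from plain PSO-style search: as a bat homes in on a good region, its loudness drops (it accepts fewer moves) while its pulse rate rises (it relies less on the local random walk), giving an automatic shift from exploration to exploitation.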

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, the learning environment, etc. Researchers have adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, covering both conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution of NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it poses interesting research challenges for future work to cope with the present information-processing era.
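    As a concrete illustration of the theme the review surveys, a population-based metaheuristic can optimize an FNN's weights directly by treating the training loss as a black-box fitness function, with no gradients required. The sketch below trains a tiny network on XOR with plain global-best PSO; the network size, dataset, and PSO constants are illustrative assumptions, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative task (assumption): learn XOR with a 2-4-1 feedforward net
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID = 2, 4
N_W = N_IN * N_HID + N_HID + N_HID + 1   # all weights and biases, flattened

def forward(w, X):
    """Unpack a flat weight vector into a one-hidden-layer FNN and evaluate it."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def mse(w):
    """Black-box fitness: training loss as a function of the weight vector."""
    return float(np.mean((forward(w, X) - y) ** 2))

# Plain global-best PSO over the flattened weight vector
n_particles, iters = 30, 300
inertia, c1, c2 = 0.72, 1.49, 1.49       # commonly used PSO constants
pos = rng.uniform(-1, 1, (n_particles, N_W))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, N_W))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("final MSE:", mse(gbest))
print("outputs:", np.round(forward(gbest, X), 3))
```

    Because the fitness is a black box, the same loop would work with a non-differentiable loss or a discrete architecture encoding, which is precisely why the review's metaheuristic perspective extends beyond weight training to architecture and learning-parameter search.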