
    Cooperation of Nature and Physiologically Inspired Mechanism in Visualisation

    A novel approach to integrating two swarm intelligence algorithms is considered: one simulating the flocking behaviour of birds (Particle Swarm Optimisation), the other (Stochastic Diffusion Search) mimicking the recruitment behaviour of one species of ant, Leptothorax acervorum. This hybrid algorithm is assisted by a biological mechanism inspired by the behaviour of blood flow and cells in blood vessels, in which the concept of high and low blood pressure is utilised. The performance of the nature-inspired algorithms and the biologically inspired mechanism in the hybrid algorithm is reflected through a cooperative attempt to make a drawing on the canvas. The scientific value of the marriage between the two swarm intelligence algorithms is being investigated thoroughly on many benchmarks, and the results reported so far suggest a promising prospect (al-Rifaie, Bishop & Blackwell, 2011). We also discuss whether or not the ‘art works’ generated by nature- and biologically inspired algorithms can be considered ‘computationally creative’.
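
    Below is a minimal Python sketch of how a PSO-style update can be coupled with SDS-style test and diffusion phases on a simple continuous test function. The function sphere, the swarm size, the noise added during diffusion, and the exact way the phases share personal bests are illustrative assumptions, not the authors' published hybrid.

        import numpy as np

        def sphere(x):                                   # illustrative test function (assumption)
            return np.sum(x ** 2)

        def hybrid_pso_sds(f=sphere, dim=2, n=30, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5, 5, (n, dim))             # particle positions (SDS hypotheses)
            v = np.zeros((n, dim))                       # particle velocities
            pbest = x.copy()
            pbest_f = np.array([f(p) for p in x])
            for _ in range(iters):
                # SDS test phase: an agent is "active" if it beats a randomly chosen rival
                rivals = rng.integers(0, n, n)
                active = pbest_f <= pbest_f[rivals]
                pool = np.flatnonzero(active)
                # SDS diffusion phase: inactive agents adopt a perturbed active hypothesis
                for i in np.flatnonzero(~active):
                    pbest[i] = pbest[rng.choice(pool)] + rng.normal(0, 0.1, dim)
                    pbest_f[i] = f(pbest[i])
                # PSO phase: move towards personal best and swarm best
                gbest = pbest[np.argmin(pbest_f)]
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
                x = x + v
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
            return pbest[np.argmin(pbest_f)], pbest_f.min()

        print(hybrid_pso_sds())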

    Particle swarm optimization with composite particles in dynamic environments

    This article is placed here with the permission of IEEE - Copyright @ 2010 IEEE. In recent years, there has been a growing interest in the study of particle swarm optimization (PSO) in dynamic environments. This paper presents a new PSO model, called PSO with composite particles (PSO-CP), to address dynamic optimization problems. PSO-CP partitions the swarm into a set of composite particles based on their similarity using a "worst first" principle. Inspired by the composite particle phenomenon in physics, the elementary members in each composite particle interact via a velocity-anisotropic reflection scheme to integrate valuable information for effectively and rapidly finding promising optima in the search space. Each composite particle maintains diversity through a scattering operator. In addition, an integral movement strategy is introduced to promote swarm diversity. Experiments on a typical dynamic benchmark problem provide a guideline for setting the involved parameters and show that PSO-CP is efficient in comparison with several state-of-the-art PSO algorithms for dynamic optimization problems. This work was supported in part by the Key Program of the National Natural Science Foundation (NNSF) of China under Grants 70931001 and 70771021, the Science Fund for Creative Research Groups of the NNSF of China under Grants 60821063 and 70721001, the Ph.D. Programs Foundation of the Ministry of Education of China under Grant 200801450008, and by the Engineering and Physical Sciences Research Council of the U.K. under Grant EP/E060722/1.
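
    A minimal Python sketch of the "worst first" partitioning step described above, assuming Euclidean distance as the similarity measure and three members per composite particle; the velocity-anisotropic reflection, scattering, and integral movement operators of PSO-CP are not reproduced here.

        import numpy as np

        def worst_first_partition(positions, fitness, members=3):
            """Group particles into composite particles: repeatedly take the worst
            unassigned particle as a pivot and attach its nearest unassigned neighbours."""
            unassigned = list(range(len(positions)))
            composites = []
            while unassigned:
                # pivot = worst (largest) fitness among unassigned particles (minimisation assumed)
                pivot = max(unassigned, key=lambda i: fitness[i])
                unassigned.remove(pivot)
                # attach the nearest remaining particles to the pivot
                unassigned.sort(key=lambda i: np.linalg.norm(positions[i] - positions[pivot]))
                composites.append([pivot] + unassigned[:members - 1])
                unassigned = unassigned[members - 1:]
            return composites

        rng = np.random.default_rng(0)
        pos = rng.uniform(-5, 5, (10, 2))
        fit = (pos ** 2).sum(axis=1)                     # sphere fitness, purely illustrative
        print(worst_first_partition(pos, fit))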

    Parallel surrogate-assisted global optimization with expensive functions – a survey

    Surrogate-assisted global optimization is gaining popularity. Similarly, modern advances in computing power increasingly rely on parallelization rather than on faster processors. This paper examines some of the methods used to take advantage of parallelization in surrogate-based global optimization. A key issue in this review is how different algorithms balance exploration and exploitation. Most of the papers surveyed describe adaptive samplers that employ Gaussian process or Kriging surrogates. These allow sophisticated approaches to balancing exploration and exploitation, and even make it possible to develop algorithms whose rate of convergence can be calculated as a function of the number of parallel processors. In addition to optimization based on adaptive sampling, surrogate-assisted parallel evolutionary algorithms are also surveyed. Beyond reviewing the present state of the art, the paper argues that methods that provide easy parallelization, such as multiple parallel runs, or that rely on a population of designs for diversity, deserve more attention. United States Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, Cooperative Agreement under the Predictive Academic Alliance Program, DE-NA0002378.
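
    A minimal Python sketch of one batch step of surrogate-assisted parallel optimization, assuming a scikit-learn Gaussian process surrogate, an expected-improvement criterion, and the "constant liar" heuristic for choosing q points to evaluate in parallel; the objective, box bounds, and batch size are illustrative assumptions rather than any particular surveyed algorithm.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor

        def expected_improvement(gp, X, f_best):
            mu, sigma = gp.predict(X, return_std=True)
            sigma = np.maximum(sigma, 1e-12)
            z = (f_best - mu) / sigma                    # minimisation convention
            return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        def propose_batch(X_obs, y_obs, q=4, n_candidates=2000, seed=0):
            """Pick q points for parallel evaluation using the constant-liar heuristic."""
            rng = np.random.default_rng(seed)
            X, y = X_obs.copy(), y_obs.copy()
            batch = []
            for _ in range(q):
                gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
                cand = rng.uniform(-5, 5, (n_candidates, X.shape[1]))   # assumed box bounds
                ei = expected_improvement(gp, cand, y.min())
                x_next = cand[np.argmax(ei)]
                batch.append(x_next)
                # "lie": pretend the pending point returned the current best value, then refit
                X, y = np.vstack([X, x_next]), np.append(y, y.min())
            return np.array(batch)

        f = lambda x: np.sum(x ** 2, axis=1)             # illustrative "expensive" function
        X0 = np.random.default_rng(1).uniform(-5, 5, (8, 2))
        print(propose_batch(X0, f(X0)))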

    A survey on metaheuristics for stochastic combinatorial optimization

    Metaheuristics are general algorithmic frameworks, often nature-inspired, designed to solve complex optimization problems, and they have been a growing research area for a few decades. In recent years, metaheuristics have emerged as successful alternatives to more classical approaches also for solving optimization problems whose mathematical formulation includes uncertain, stochastic, and dynamic information. In this paper, metaheuristics such as Ant Colony Optimization, Evolutionary Computation, Simulated Annealing, Tabu Search and others are introduced, and their application to the class of Stochastic Combinatorial Optimization Problems (SCOPs) is thoroughly reviewed. Issues common to all metaheuristics, open problems, and possible directions of research are proposed and discussed. In this survey, the reader familiar with metaheuristics will also find pointers to classical algorithmic approaches to optimization under uncertainty and useful information for starting to work in this problem domain, while the reader new to metaheuristics should find a good tutorial on those metaheuristics that are currently being applied to optimization under uncertainty, as well as motivation for interest in this field.
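
    A minimal Python sketch of one metaheuristic applied to a stochastic combinatorial problem: simulated annealing on a travelling-salesman instance with noisy edge costs, using sample averaging to estimate tour cost. The instance, the multiplicative noise model, and the cooling schedule are illustrative assumptions, not taken from the survey.

        import math, random

        def noisy_tour_cost(tour, dist, noise=0.1, samples=5):
            """Estimate a tour's cost by averaging several noisy evaluations;
            sample averaging is one common way metaheuristics cope with stochastic objectives."""
            def one_sample():
                return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] * random.gauss(1.0, noise)
                           for i in range(len(tour)))
            return sum(one_sample() for _ in range(samples)) / samples

        def simulated_annealing(dist, iters=5000, t0=10.0, alpha=0.999):
            n = len(dist)
            tour = list(range(n)); random.shuffle(tour)
            cost, t = noisy_tour_cost(tour, dist), t0
            for _ in range(iters):
                i, j = sorted(random.sample(range(n), 2))
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]     # 2-opt style move
                cand_cost = noisy_tour_cost(cand, dist)
                if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / t):
                    tour, cost = cand, cand_cost
                t *= alpha
            return tour, cost

        random.seed(0)
        pts = [(random.random(), random.random()) for _ in range(15)]
        dist = [[math.dist(a, b) for b in pts] for a in pts]
        print(simulated_annealing(dist))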

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged out of FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future work to cope with the present information-processing era.
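
    A minimal Python sketch of metaheuristic (gradient-free) FNN weight training, assuming a tiny one-hidden-layer network on the XOR problem and a (1+lambda) evolution strategy; the architecture and hyperparameters are illustrative assumptions, not a method recommended by the review.

        import numpy as np

        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR toy data (assumption)
        y = np.array([0., 1., 1., 0.])

        def forward(w, X, hidden=4):
            """Unpack a flat weight vector into a 2-4-1 FNN and run a forward pass."""
            W1 = w[:2 * hidden].reshape(2, hidden); b1 = w[2 * hidden:3 * hidden]
            W2 = w[3 * hidden:4 * hidden];          b2 = w[-1]
            h = np.tanh(X @ W1 + b1)
            return 1 / (1 + np.exp(-(h @ W2 + b2)))                    # sigmoid output

        def mse(w):
            return np.mean((forward(w, X) - y) ** 2)

        def evolve(n_weights=17, lam=20, sigma=0.3, gens=300, seed=0):
            """(1+lambda) evolution strategy: mutate the parent, keep the best child if it improves."""
            rng = np.random.default_rng(seed)
            parent = rng.normal(0, 1, n_weights)
            best = mse(parent)
            for _ in range(gens):
                children = parent + rng.normal(0, sigma, (lam, n_weights))
                errs = np.array([mse(c) for c in children])
                if errs.min() < best:
                    parent, best = children[np.argmin(errs)], errs.min()
            return parent, best

        w, err = evolve()
        print(err, np.round(forward(w, X), 2))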

    Bioinspired Computing: Swarm Intelligence


    Genetic Algorithms in Stochastic Optimization and Applications in Power Electronics

    Genetic Algorithms (GAs) are widely used in multiple fields, ranging from mathematics and physics to engineering, computational science, bioinformatics, manufacturing, economics, etc. Stochastic optimization problems are important in power electronics and control systems, where most designs require choosing optimum parameters to ensure maximum control effect or minimum noise impact; however, such problems are difficult to solve by exhaustive search, especially when the search domain covers a large area or is infinite. Instead, GAs can be applied to solve them. An efficient computing budget allocation technique for allocating samples in GAs is necessary because real-life problems with noise are often difficult to evaluate and require significant computational effort. A single-objective GA is proposed in which computing budget allocation techniques are integrated directly into the selection operator rather than being used during fitness evaluation. This allows fitness evaluations to be allocated towards the specific individuals for which the algorithm requires more information, and this selection-integrated method is shown to be more accurate for the same computing budget than existing evaluation-integrated methods on several test problems. A combination of studies is performed on a multi-objective GA, comparing the integration of different computing budget allocation methods into either the evaluation step or the environmental selection step. These comparisons are performed on stochastic problems derived from benchmark multi-objective optimization problems and consider varying levels of noise. The algorithms are compared with regard to both proximity to and coverage of the true Pareto-optimal front, and sufficient studies are performed to allow statistically significant conclusions to be drawn. Finally, the multi-objective GA with the selection-integrated sampling technique is applied to solve a multi-objective stochastic optimization problem in a grid-connected photovoltaic inverter system with noise injected from both the solar power input and the utility grid.
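
    A minimal Python sketch of the idea of integrating sample allocation into the selection operator, assuming a noisy scalar objective and a simple rule that keeps re-sampling the two tournament rivals until their sample means clearly separate or a per-comparison budget is spent; this illustrates the principle only and is not the thesis's exact allocation scheme.

        import random, statistics

        def noisy_f(x):                 # noisy objective (assumption): minimise x^2 plus Gaussian noise
            return x * x + random.gauss(0, 0.5)

        def select(a, b, budget=10):
            """Selection-integrated allocation: keep drawing extra samples for the two rivals of a
            tournament until their sample means clearly separate or the comparison budget is spent."""
            sa, sb = [noisy_f(a)], [noisy_f(b)]
            while len(sa) + len(sb) < budget:
                ma, mb = statistics.mean(sa), statistics.mean(sb)
                spread = statistics.pstdev(sa + sb) / (len(sa) + len(sb)) ** 0.5
                if abs(ma - mb) > 2 * spread:          # winner already clear, stop sampling
                    break
                if len(sa) <= len(sb):                 # give the next sample to the less-sampled rival
                    sa.append(noisy_f(a))
                else:
                    sb.append(noisy_f(b))
            return a if statistics.mean(sa) < statistics.mean(sb) else b

        def ga(pop_size=20, gens=50, sigma=0.3):
            pop = [random.uniform(-3, 3) for _ in range(pop_size)]
            for _ in range(gens):
                pop = [select(random.choice(pop), random.choice(pop)) + random.gauss(0, sigma)
                       for _ in range(pop_size)]
            return min(pop, key=lambda x: statistics.mean(noisy_f(x) for _ in range(30)))

        random.seed(0)
        print(ga())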