
    Fast multi-swarm optimization for dynamic optimization problems

    This article is posted here with permission of the IEEE (Copyright © 2008 IEEE). In the real world, many applications are non-stationary optimization problems. This requires that optimization algorithms not only find the global optimal solution but also track the trajectory of the changing global best solution in a dynamic environment. To achieve this, this paper proposes a multi-swarm algorithm based on fast particle swarm optimization for dynamic optimization problems. The algorithm employs a mechanism that tracks multiple peaks by preventing overcrowding at any one peak, and uses a fast particle swarm optimization algorithm as a local search method to find near-optimal solutions in promising local regions of the search space. The moving peaks benchmark function is used to test the performance of the proposed algorithm. The numerical experimental results show the efficiency of the proposed algorithm for dynamic optimization problems.
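
    As a reading aid (not from the paper), here is a minimal sketch of the general pattern the abstract describes: several PSO swarms search in parallel, and an exclusion rule prevents two swarms from crowding the same peak. The objective sphere(), the exclusion radius R_EXCL, and all parameter settings are illustrative assumptions.

        import numpy as np

        # Illustrative multi-swarm PSO with an anti-crowding (exclusion)
        # rule; all values below are assumptions, not the paper's.
        DIM, N_SWARMS, SWARM_SIZE, R_EXCL = 5, 4, 10, 0.5
        LO, HI = -5.0, 5.0
        rng = np.random.default_rng(0)

        def sphere(x):                      # stand-in objective
            return np.sum(x ** 2)

        def new_swarm():
            pos = rng.uniform(LO, HI, (SWARM_SIZE, DIM))
            return {"pos": pos, "vel": np.zeros_like(pos),
                    "pbest": pos.copy(),
                    "pbest_f": np.array([sphere(p) for p in pos])}

        swarms = [new_swarm() for _ in range(N_SWARMS)]

        for step in range(200):
            for s in swarms:
                g = s["pbest"][np.argmin(s["pbest_f"])]   # swarm attractor
                r1, r2 = rng.random((2, SWARM_SIZE, DIM))
                s["vel"] = (0.729 * s["vel"]
                            + 1.49445 * r1 * (s["pbest"] - s["pos"])
                            + 1.49445 * r2 * (g - s["pos"]))
                s["pos"] = np.clip(s["pos"] + s["vel"], LO, HI)
                f = np.array([sphere(p) for p in s["pos"]])
                better = f < s["pbest_f"]
                s["pbest"][better] = s["pos"][better]
                s["pbest_f"][better] = f[better]
            # exclusion: if two swarms crowd one peak, restart the worse
            for i in range(N_SWARMS):
                for j in range(i + 1, N_SWARMS):
                    gi = swarms[i]["pbest"][np.argmin(swarms[i]["pbest_f"])]
                    gj = swarms[j]["pbest"][np.argmin(swarms[j]["pbest_f"])]
                    if np.linalg.norm(gi - gj) < R_EXCL:
                        worse = (i if swarms[i]["pbest_f"].min()
                                 > swarms[j]["pbest_f"].min() else j)
                        swarms[worse] = new_swarm()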

    Multi-agent system for dynamic manufacturing system optimization

    This paper deals with the application of the multi-agent system concept to the optimization of dynamic, uncertain processes. Such problems are known to have computationally demanding objective functions, which can become infeasible when large problems are considered; fast approximations to the objective function are therefore required. This paper employs a bundle of intelligent-systems algorithms tied together in a multi-agent system. To demonstrate the system, a metal reheat furnace scheduling problem is adopted as a highly demanding optimization problem. The proposed multi-agent approach has been evaluated for different settings of the reheat furnace scheduling problem. The agents embed Particle Swarm Optimization and Genetic Algorithms in several classic and advanced variants: a GA with chromosome differentiation, an Age GA, a Sexual GA, and finally a Memetic GA, which combines the GA as a global optimizer with PSO as a local optimizer. Experimentation has been performed to validate the multi-agent system on the reheat furnace scheduling problem.
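
    The memetic pattern mentioned in the abstract (a GA as global optimizer with PSO-style local refinement of elites) can be sketched as follows. This is an illustrative toy, not the paper's system; cost(), the operators, and all settings are assumptions.

        import numpy as np

        # Memetic loop: GA explores globally, a short PSO-like local
        # refinement polishes each elite. All settings are assumptions.
        rng = np.random.default_rng(1)
        DIM, POP, GENS = 8, 30, 100

        def cost(x):                     # stand-in for the furnace objective
            return np.sum(x ** 2)

        def local_refine(x, iters=10):
            # crude PSO-style polish: one particle pulled toward its best
            v, best, best_f = np.zeros_like(x), x.copy(), cost(x)
            for _ in range(iters):
                v = 0.7 * v + 1.5 * rng.random(DIM) * (best - x)
                x = x + v + 0.05 * rng.standard_normal(DIM)
                f = cost(x)
                if f < best_f:
                    best, best_f = x.copy(), f
            return best

        pop = rng.uniform(-5, 5, (POP, DIM))
        for gen in range(GENS):
            fit = np.array([cost(p) for p in pop])
            elites = pop[np.argsort(fit)[:POP // 5]]
            # PSO-style local search on the elites (the "memetic" step)
            elites = np.array([local_refine(e) for e in elites])
            # uniform crossover + Gaussian mutation refills the population
            children = []
            while len(children) < POP - len(elites):
                a, b = elites[rng.integers(len(elites), size=2)]
                mask = rng.random(DIM) < 0.5
                children.append(np.where(mask, a, b)
                                + 0.1 * rng.standard_normal(DIM))
            pop = np.vstack([elites, children])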

    Researches on Hierarchical Bare Bones Particle Swarm Optimization for Single-Objective Optimization Problems

    In experiments and applications, optimization problems aim at finding the best solution among all possible solutions. According to the number of objective functions, optimization problems can be divided into single-objective and multi-objective problems. This thesis focuses on single-objective optimization problems; its purpose is to clarify a means of realizing high search accuracy without parameter adjustment.

    To achieve high-accuracy results on single-objective optimization problems, there are four major points to note: local search ability on unimodal problems, global search ability on multimodal problems, diverse search patterns for different problems, and control of the convergence speed. Population-based methods such as particle swarm optimization (PSO) are often used to solve single-objective optimization problems. However, PSO needs its parameters adjusted for good performance, and this adjustment becomes an overhead in engineering applications. The bare bones particle swarm optimization (BBPSO) algorithm is parameter-free, but it cannot change its search pattern to suit different problems, and its convergence is too fast to achieve high-accuracy results. To overcome the shortcomings of existing methods and obtain high-accuracy results on single-objective optimization problems, seven hierarchical strategies are combined with the BBPSO in this thesis. Four of the proposed algorithms are designed with swarm division and converge quickly to the global optimum; the other three are designed with swarm reconstruction, which slows convergence and handles shifted or rotated problems. Moreover, no parameter adjustment is needed to control the convergence speed.

    First, four algorithms with swarm division are proposed. In the pair-wise bare bones particle swarm optimization (PBBPSO) algorithm, the swarm splits into several search units; two particles are placed in each unit to enhance the local search ability of the swarm. To increase the global search ability, the dynamic allocation bare bones particle swarm optimization (DABBPSO) algorithm is proposed. Particles in DABBPSO are divided into two groups before evaluation according to their personal best positions: the core group (CG) and the edge group (EG). The CG focuses on exploiting the current local optimum, while the EG explores the search area and gives the whole swarm more chances to escape from local optima; the two groups work together to find the global optimum. To solve shifted or rotated problems, traditional methods usually need to increase the population size, but a larger population increases computing time. To overcome this shortcoming, a multilayer structure is used in the triple bare bones particle swarm optimization (TBBPSO) algorithm, which delivers high-accuracy results on shifted and rotated problems without increasing the population size. In real-world applications, optimization methods must handle different types of problems, yet the original BBPSO cannot change its search pattern accordingly. To solve this problem, a bare bones particle swarm optimization algorithm with dynamic local search (DLS-BBPSO) is proposed; its dynamic local search strategy provides different search patterns for different problems.

    In engineering applications, optimization results can be improved by controlling the convergence speed. Traditional methods normally need parameter adjustment to do this, and it is difficult to adjust the parameters for every single problem. To solve this problem, three reorganization strategies are combined with the BBPSO. In the bare bones particle swarm optimization algorithm with co-evaluation (BBPSO-C), a shadow swarm is used to increase the diversity of the original swarm. A dynamic grouping method disperses both the shadow swarm and the original swarm; an exchange process between the two swarms then makes the original swarm more concentrated and the shadow swarm more scattered. By moving particles between the two swarms, the BBPSO-C gains the ability to slow down convergence without parameter adjustment. With the improvement of computing technology, it is possible to obtain high-accuracy results from long calculations. In the dynamic reconstruction bare bones particle swarm optimization (DRBBPSO) algorithm, a dynamic elite selection strategy improves the diversity of the swarm, after which the swarm is reconstructed from the elite particles. Experimental results show that the DRBBPSO provides high-accuracy results after a long calculation. To adapt to different types of optimization problems, a fission-fusion hybrid bare bones particle swarm optimization (FHBBPSO) is proposed. The FHBBPSO combines a fission strategy and a fusion strategy to sample new particle positions. The fission strategy splits the search space: particles are assigned to different local groups that sample their corresponding regions. The fusion strategy narrows the search space: marginal groups are gradually merged into the central groups until only one group remains. The two strategies work together toward the theoretically best solution, and the FHBBPSO shows excellent results in experiments on multiple optimization functions.

    To conclude, the proposed hierarchical strategies give each of the BBPSO-based variants different search characteristics, enabling high search accuracy without parameter adjustment. (Doctoral dissertation, Doctor of Science, Hosei University.)
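
    For reference, the canonical bare-bones PSO update that these variants build on samples each particle's next position from a Gaussian centered at the midpoint of its personal best and the global best, with standard deviation equal to their distance; no velocity or tunable coefficients are involved. A minimal sketch (objective and sizes are illustrative):

        import numpy as np

        # Canonical bare-bones PSO: positions are resampled from a
        # Gaussian, with no inertia/acceleration parameters to tune.
        rng = np.random.default_rng(2)
        DIM, N, ITERS = 10, 30, 300

        def sphere(x):                     # stand-in objective
            return np.sum(x ** 2)

        pos = rng.uniform(-5, 5, (N, DIM))
        pbest = pos.copy()
        pbest_f = np.array([sphere(p) for p in pos])

        for _ in range(ITERS):
            gbest = pbest[np.argmin(pbest_f)]
            mu = (pbest + gbest) / 2.0         # mean: pbest-gbest midpoint
            sigma = np.abs(pbest - gbest)      # std: pbest-gbest distance
            pos = rng.normal(mu, sigma + 1e-12)
            f = np.array([sphere(p) for p in pos])
            better = f < pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]

        print("best value:", pbest_f.min())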

    A clustering particle swarm optimizer for dynamic optimization

    This article is posted here with permission of the IEEE (Copyright © 2009 IEEE). In the real world, many applications are nonstationary optimization problems. This requires that optimization algorithms not only find the global optimal solution but also track the trajectory of the changing global best solution in a dynamic environment. To achieve this, this paper proposes a clustering particle swarm optimizer (CPSO) for dynamic optimization problems. The algorithm employs a hierarchical clustering method to track multiple peaks based on a nearest-neighbor search strategy. A fast local search method is also proposed to find near-optimal solutions in promising local regions of the search space. Six test problems generated from a generalized dynamic benchmark generator (GDBG) are used to test the performance of the proposed algorithm. The numerical experimental results show the efficiency of the proposed algorithm for locating and tracking multiple optima in dynamic environments. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/E060722/1.
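
    The clustering step the abstract describes can be sketched as follows: hierarchically cluster particle positions with a nearest-neighbor (single-linkage) criterion so that each resulting sub-swarm can track one peak with its own local best. The distance cutoff and sizes below are assumptions, not the paper's values.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # Group a swarm into sub-swarms via single-linkage hierarchical
        # clustering; each sub-swarm would then track one peak.
        rng = np.random.default_rng(3)
        positions = rng.uniform(-5, 5, (40, 2))    # 40 particles in 2-D

        Z = linkage(positions, method="single")    # nearest-neighbor merging
        labels = fcluster(Z, t=1.5, criterion="distance")

        subswarms = {k: positions[labels == k] for k in np.unique(labels)}
        for k, members in subswarms.items():
            print(f"sub-swarm {k}: {len(members)} particles")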

    Adaptive particle swarm optimization

    An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables the automatic control of inertia weight, acceleration coefficients, and other algorithmic parameters at run time to improve the search efficiency and convergence speed. Then, an elitist learning strategy is performed when the evolutionary state is classified as the convergence state; the strategy acts on the globally best particle to jump out of likely local optima. The APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. Since APSO introduces only two new parameters to the PSO paradigm, it adds no significant design or implementation complexity.
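
    One common way to estimate the evolutionary state from the population distribution, in the spirit of the abstract, is to compute an evolutionary factor from mean inter-particle distances and map it to one of the four states. The threshold boundaries in this sketch are illustrative, not the paper's exact classification rule.

        import numpy as np

        # Sketch: an "evolutionary factor" f compares how spread out the
        # globally best particle is relative to the rest of the swarm,
        # then maps f to a state. Thresholds are assumptions.
        def evolutionary_state(positions, gbest_index):
            n = len(positions)
            # mean distance from each particle to all the others
            d = np.array([np.mean(np.linalg.norm(positions - p, axis=1))
                          for p in positions]) * n / (n - 1)
            f = (d[gbest_index] - d.min()) / (d.max() - d.min() + 1e-12)
            if f >= 0.8:
                return "jumping-out", f
            if f >= 0.6:
                return "exploration", f
            if f >= 0.2:
                return "exploitation", f
            return "convergence", f

        rng = np.random.default_rng(4)
        pos = rng.uniform(-5, 5, (20, 3))
        state, f = evolutionary_state(pos, gbest_index=0)
        print(state, round(f, 3))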

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, the learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future work to cope with the present information-processing era.
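
    The core idea surveyed above, replacing gradient descent with a metaheuristic search over the network's flat weight vector, can be sketched in a few lines. The network shape, toy data, and PSO settings here are all assumptions, shown only to make the pattern concrete.

        import numpy as np

        # Treat the flat weight vector of a tiny FNN as the search space
        # of a bare-minimum PSO instead of training it by gradients.
        rng = np.random.default_rng(5)
        X = rng.uniform(-1, 1, (64, 2))
        y = (X[:, 0] * X[:, 1] > 0).astype(float)   # toy XOR-like target

        IN, HID = 2, 6
        N_W = IN * HID + HID + HID + 1              # weights + biases

        def forward(w, X):
            W1 = w[:IN * HID].reshape(IN, HID)
            b1 = w[IN * HID:IN * HID + HID]
            W2 = w[IN * HID + HID:IN * HID + 2 * HID]
            b2 = w[-1]
            h = np.tanh(X @ W1 + b1)
            return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

        def loss(w):
            return np.mean((forward(w, X) - y) ** 2)

        # minimal PSO over the weight vector
        pos = rng.uniform(-1, 1, (30, N_W))
        vel = np.zeros_like(pos)
        pb, pb_f = pos.copy(), np.array([loss(p) for p in pos])
        for _ in range(200):
            g = pb[np.argmin(pb_f)]
            r1, r2 = rng.random((2, *pos.shape))
            vel = 0.729 * vel + 1.49445 * (r1 * (pb - pos) + r2 * (g - pos))
            pos = pos + vel
            f = np.array([loss(p) for p in pos])
            better = f < pb_f
            pb[better], pb_f[better] = pos[better], f[better]
        print("best MSE:", pb_f.min())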