
    Improvement of PSO algorithm by memory based gradient search - application in inventory management

    Advanced inventory management in complex supply chains requires effective and robust nonlinear optimization due to the stochastic nature of supply and demand variations. Applying estimated gradients can boost the convergence of the Particle Swarm Optimization (PSO) algorithm, but classical gradient calculation cannot be applied to stochastic and uncertain systems. In these situations, Monte-Carlo (MC) simulation can be applied to determine the gradient. We developed a memory-based algorithm where, instead of generating and evaluating new simulated samples, the stored and shared former function evaluations of the particles are sampled to estimate the gradients by locally weighted least squares regression. The performance of the resulting regional gradient-based PSO is verified on several benchmark problems and in a complex application example where the optimal reorder points of a supply chain are determined. Comment: book chapter, 20 pages, 7 figures, 2 tables.
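    A minimal sketch of the regional gradient idea, assuming a Gaussian distance weighting, a linear local model, and a toy objective; these choices are illustrative assumptions rather than the chapter's exact scheme:

```python
import numpy as np

def regional_gradient(x0, X, f, bandwidth=1.0):
    """Estimate the gradient at x0 from stored particle evaluations X (n x d)
    with objective values f (n,) by locally weighted least squares regression.
    Gaussian distance weights are an assumption; the chapter may weight differently."""
    d = X - x0                                    # displacements from x0
    w = np.exp(-np.sum(d**2, axis=1) / (2.0 * bandwidth**2))
    sw = np.sqrt(w)
    A = np.hstack([np.ones((len(f), 1)), d])      # fit f(x) ~ f0 + g.(x - x0)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * f, rcond=None)
    return coef[1:]                               # drop the intercept, keep the slope g

# Toy check: stored evaluations of the sphere function near x0 = (1, -1),
# whose true gradient there is (2, -2).
rng = np.random.default_rng(0)
x0 = np.array([1.0, -1.0])
X = x0 + 0.2 * rng.normal(size=(50, 2))
f = np.sum(X**2, axis=1)
print(regional_gradient(x0, X, f))                # close to [2, -2]
```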

    Feedback learning particle swarm optimization

    This is the author's version of a work that was accepted for publication in Applied Soft Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published and is available at the link below. Copyright @ Elsevier 2011. In this paper, a feedback learning particle swarm optimization algorithm with quadratic inertia weight (FLPSO-QIW) is developed to solve optimization problems. The proposed FLPSO-QIW consists of four steps. Firstly, the inertia weight is calculated by a designed quadratic function instead of the conventional linearly decreasing function. Secondly, the acceleration coefficients are determined not only by the generation number but also by the search environment described by each particle's history best fitness information. Thirdly, the feedback fitness information of each particle is used to automatically design the learning probabilities. Fourthly, an elite stochastic learning (ELS) method is used to refine the solution. FLPSO-QIW has been comprehensively evaluated on 18 unimodal, multimodal and composite benchmark functions with or without rotation. Compared with various state-of-the-art PSO algorithms, the performance of FLPSO-QIW is promising and competitive. The effects of parameter adaptation, parameter sensitivity and the proposed mechanisms are discussed in detail. This research was partially supported by the National Natural Science Foundation of PR China (Grant No 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No 200802550007), the Key Creative Project of Shanghai Education Community (Grant No 09ZZ66), the Key Foundation Project of Shanghai (Grant No 09JC1400700), the International Science and Technology Cooperation Project of China under Grant 2009DFA32050, and the Alexander von Humboldt Foundation of Germany.
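    The exact quadratic inertia schedule is not given in the abstract; the function below is one plausible form that decays quadratically from a high to a low weight over the run, with the endpoints and shape assumed purely for illustration:

```python
def quadratic_inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Quadratic (rather than linear) decay of the PSO inertia weight.
    The endpoints and the specific quadratic form are assumptions, not the
    published FLPSO-QIW schedule."""
    r = t / float(t_max)                      # progress through the run, 0..1
    return (w_start - w_end) * (1.0 - r) ** 2 + w_end

# Weight at the start, midpoint and end of a 1000-generation run.
print([round(quadratic_inertia_weight(t, 1000), 3) for t in (0, 500, 1000)])   # [0.9, 0.525, 0.4]
```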

    Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem

    In this investigation, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 m x 180 m block in an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Due to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms comprising mixed optimization techniques. For global optimization, we consider Simulated Annealing (SA), Particle Swarm (PS) and Genetic Algorithm (GA), which rely solely on objective function evaluations; i.e., they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic Implicit Filtering method (IF), which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques combining global optimization and Implicit Filtering address difficulties associated with the non-smooth response, and their performance is shown to significantly decrease the computational time compared with the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the Delayed Rejection Adaptive Metropolis (DRAM) and DiffeRential Evolution Adaptive Metropolis (DREAM) algorithms. Marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms. Comment: 36 pages, 14 figures.
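    A minimal sketch of the kind of Poisson negative log-likelihood such a formulation minimizes, assuming an unshielded inverse-square detector response plus a constant background; the response model, geometry and numbers are illustrative assumptions, not the paper's:

```python
import numpy as np

def neg_log_likelihood(theta, detectors, counts, background=1.0):
    """Poisson negative log-likelihood for a point source theta = (x, y, intensity).
    Expected counts use an assumed inverse-square response plus a constant
    background; the paper's response model also accounts for the urban geometry."""
    x, y, s = theta
    d2 = np.sum((detectors - np.array([x, y]))**2, axis=1)
    lam = s / np.maximum(d2, 1e-9) + background       # expected counts per detector
    # Up to the additive constant log(k!), -log L = sum(lam - k*log(lam)).
    return np.sum(lam - counts * np.log(lam))

# Synthetic example: three detectors and counts drawn for a source at (50, 40).
rng = np.random.default_rng(1)
detectors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
true = np.array([50.0, 40.0, 5.0e4])
lam_true = true[2] / np.sum((detectors - true[:2])**2, axis=1) + 1.0
counts = rng.poisson(lam_true)
print(neg_log_likelihood(true, detectors, counts))
```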

    Evolutionary global optimization posed as a randomly perturbed martingale problem and applied to parameter recovery of chaotic oscillators

    A new global stochastic search, guided mainly through derivative-free directional information computable from the sample statistical moments of the design variables within a Monte Carlo setup, is proposed. The search is aided by imparting to a directional update term, which parallels the conventional Gateaux derivative used in a local search for the extrema of smooth cost functionals, additional layers of random perturbation referred to as 'coalescence' and 'scrambling'. A selection scheme, constituting yet another avenue for random perturbation, completes the global search. The direction-driven nature of the search is manifest in the local extremization and coalescence components, which are posed as martingale problems that yield gain-like update terms upon discretization. As anticipated, and numerically demonstrated to a limited extent on the problem of parameter recovery from the chaotic response histories of a couple of nonlinear oscillators, the proposed method appears to provide a more rational, more accurate and faster alternative to most available evolutionary schemes, most prominently particle swarm optimization. Comment: 35 pages, 2 figures; being submitted to Physica.
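    For orientation only, the schematic below implements a generic derivative-free, direction-driven population search with a moment-based target, a small scrambling perturbation and greedy selection; it is not the paper's martingale formulation, and every quantity in it is an illustrative assumption:

```python
import numpy as np

def schematic_step(X, f_vals, cost, rng, step=0.5, scramble=0.05):
    """One iteration of a generic derivative-free, direction-driven population
    search: a direction toward a cost-weighted sample mean, a small 'scrambling'
    perturbation, and greedy selection. Purely illustrative; NOT the paper's
    martingale-based scheme."""
    w = np.exp(-(f_vals - f_vals.min()))             # low-cost members weigh more
    target = (w[:, None] * X).sum(axis=0) / w.sum()  # cost-weighted mean of the population
    trial = X + step * (target - X) + scramble * rng.standard_normal(X.shape)
    f_trial = np.array([cost(x) for x in trial])
    keep = f_trial < f_vals                          # selection: accept improvements only
    X[keep], f_vals[keep] = trial[keep], f_trial[keep]
    return X, f_vals

rng = np.random.default_rng(2)
cost = lambda x: float(np.sum(x**2))
X = rng.uniform(-5.0, 5.0, size=(20, 2))
f_vals = np.array([cost(x) for x in X])
for _ in range(300):
    X, f_vals = schematic_step(X, f_vals, cost, rng)
print(X[np.argmin(f_vals)], f_vals.min())            # best member approaches the origin
```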

    PID2018 Benchmark Challenge: Multi-Objective Stochastic Optimization Algorithm

    This paper presents a multi-objective stochastic optimization method for tuning the controller parameters of refrigeration systems based on vapour compression. The Stochastic Multi Parameter Divergence Optimization (SMDO) algorithm is modified to minimize the multi-objective function in the optimization process. System control performance is improved by tuning the PI controller parameters against a discrete-time model of the refrigeration system with the multi-objective function, and a conditional integral structure is added because it is preferred for reducing the steady-state error of the system. Simulations are compared with existing results via numerous graphical and numerical solutions.
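    The abstract does not spell out the conditional integral structure; the sketch below uses one common form, in which the integral state is updated only while the controller output is unsaturated, purely as an assumed illustration:

```python
def pi_step(error, integral, kp, ki, dt, u_min=-1.0, u_max=1.0):
    """Discrete-time PI step with conditional integration: freeze the integral
    state whenever the output saturates (an assumed anti-windup-style form,
    not necessarily the structure used in the paper)."""
    u = kp * error + ki * integral
    if u_min < u < u_max:                  # integrate only while unsaturated
        integral += error * dt
    return max(u_min, min(u, u_max)), integral

# Toy closed loop: first-order plant x' = -x + u tracking a unit step reference.
x, integral, dt = 0.0, 0.0, 0.01
for _ in range(1000):
    u, integral = pi_step(1.0 - x, integral, kp=2.0, ki=1.0, dt=dt)
    x += dt * (-x + u)
print(round(x, 3))                         # settles near the 1.0 setpoint
```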

    State Transition Algorithm

    Based on the concepts of state and state transition, a new heuristic random search algorithm named the state transition algorithm is proposed. For continuous function optimization problems, four special transformation operators called rotation, translation, expansion and axesion are designed. Measures for adjusting the transformations are studied mainly to keep the balance between exploration and exploitation. Convergence of the algorithm is also analysed based on random search theory. Meanwhile, to strengthen the search ability in high-dimensional spaces, a communication strategy is introduced into the basic algorithm, and intermittent exchange is presented to prevent premature convergence. Finally, experiments are carried out on the algorithms, with 10 common unconstrained continuous benchmark functions used to test performance. The results show that state transition algorithms are promising due to their good global search capability and convergence properties when compared with some popular algorithms. Comment: 18 pages, 28 figures.
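    For reference, the four operators are usually written as the random-matrix updates sketched below; the forms and parameter roles follow the commonly published state transition algorithm description and should be checked against the paper itself:

```python
import numpy as np

rng = np.random.default_rng(3)

def rotation(x, alpha=1.0):
    """Rotation: a bounded move inside a hypersphere of radius alpha around x."""
    n = len(x)
    R = rng.uniform(-1.0, 1.0, size=(n, n))
    return x + alpha / (n * np.linalg.norm(x)) * R @ x

def translation(x, x_prev, beta=1.0):
    """Translation: a line search along the direction of the previous move."""
    d = x - x_prev
    return x + beta * rng.random() * d / np.linalg.norm(d)

def expansion(x, gamma=1.0):
    """Expansion: perturb every coordinate, possibly reaching the whole space."""
    return x + gamma * rng.standard_normal(len(x)) * x

def axesion(x, delta=1.0):
    """Axesion: perturb a single randomly chosen coordinate axis."""
    e = np.zeros(len(x))
    e[rng.integers(len(x))] = rng.standard_normal()
    return x + delta * e * x
```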

    Enhanced Estimation of Autoregressive Wind Power Prediction Model Using Constriction Factor Particle Swarm Optimization

    Accurate forecasting is important for cost-effective and efficient monitoring and control of renewable energy based power generation. Wind power is one of the most difficult forms of energy to predict accurately, due to the widely varying and unpredictable nature of wind. Although Autoregressive (AR) techniques have been widely used to create wind power models, they have shown limited accuracy in forecasting, as well as difficulty in determining the correct parameters for an optimized AR model. In this paper, Constriction Factor Particle Swarm Optimization (CF-PSO) is employed to optimally determine the parameters of an Autoregressive (AR) model for accurate prediction of the wind power output behaviour. The appropriate lag order of the proposed model is selected based on the Akaike information criterion. The performance of the proposed PSO-based AR model is compared with four well-established approaches widely used for error minimization of AR models: the Forward-backward, Geometric lattice, Least-squares and Yule-Walker approaches. To validate the proposed approach, real-life wind power data from the Capital Wind Farm were obtained from the Australian Energy Market Operator. Experimental evaluation based on a number of different datasets demonstrates that the performance of the AR model is significantly improved compared with the benchmark methods. Comment: The 9th IEEE Conference on Industrial Electronics and Applications (ICIEA) 201
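    For context, the constriction-factor velocity update that CF-PSO refers to is the standard Clerc-Kennedy form sketched below; the idea of encoding the AR coefficients as particle positions is an assumption about the setup, not a detail taken from the paper:

```python
import numpy as np

def constriction_update(v, x, pbest, gbest, c1=2.05, c2=2.05, rng=None):
    """Clerc-Kennedy constriction-factor PSO update. With c1 = c2 = 2.05,
    phi = 4.1 and the constriction factor chi is about 0.7298. In this
    application the position x could encode the AR model coefficients."""
    rng = rng or np.random.default_rng()
    phi = c1 + c2                                         # must be greater than 4
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))
    r1, r2 = rng.random(len(x)), rng.random(len(x))
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    return v, x + v
```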

    Controller design for synchronization of an array of delayed neural networks using a controllable probabilistic PSO

    This is the post-print version of the Article - Copyright @ 2011 Elsevier. In this paper, a controllable probabilistic particle swarm optimization (CPPSO) algorithm is introduced based on Bernoulli stochastic variables and a competitive penalized method. The CPPSO algorithm is proposed to solve optimization problems and is then applied to design the memoryless feedback controller used in the synchronization of an array of delayed neural networks (DNNs). The learning strategies occur in a random way governed by Bernoulli stochastic variables. The expectations of the Bernoulli stochastic variables are automatically updated by the search environment. The proposed method not only keeps the diversity of the swarm, but also maintains the rapid convergence of the CPPSO algorithm through the competitive penalized mechanism. In addition, the convergence rate is improved because the inertia weight of each particle is automatically computed according to the feedback of its fitness value. The efficiency of the proposed CPPSO algorithm is demonstrated by comparing it with some well-known PSO algorithms on benchmark test functions with and without rotations. Finally, the proposed CPPSO algorithm is used to design the controller for the synchronization of an array of continuous-time delayed neural networks. This research was partially supported by the National Natural Science Foundation of PR China (Grant No 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No 200802550007), the Key Creative Project of Shanghai Education Community (Grant No 09ZZ66), the Key Foundation Project of Shanghai (Grant No 09JC1400700), the Engineering and Physical Sciences Research Council (EPSRC) of the U.K. under Grant No. GR/S27658/01, an International Joint Project sponsored by the Royal Society of the U.K., and the Alexander von Humboldt Foundation of Germany.
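    How the Bernoulli-governed learning might look is sketched below purely as an illustration; the actual CPPSO rule, its competitive penalized mechanism, and the way the Bernoulli expectations are updated from the search environment are not reproduced from the paper:

```python
import numpy as np

def bernoulli_switched_velocity(v, x, pbest, gbest, p_learn, w=0.6, c=1.5, rng=None):
    """Illustrative Bernoulli-switched learning step: each dimension learns from
    the personal best with probability p_learn and from the global best otherwise.
    p_learn stands in for the adaptively updated Bernoulli expectation; the exact
    CPPSO update is not reproduced here."""
    rng = rng or np.random.default_rng()
    b = rng.random(len(x)) < p_learn          # Bernoulli(p_learn) draw per dimension
    exemplar = np.where(b, pbest, gbest)
    return w * v + c * rng.random(len(x)) * (exemplar - x)
```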

    Particle Swarm Optimization: A survey of historical and recent developments with hybridization perspectives

    Particle Swarm Optimization (PSO) is a metaheuristic global optimization paradigm that has gained prominence in the last two decades due to its ease of application to unsupervised, complex multidimensional problems which cannot be solved using traditional deterministic algorithms. The canonical particle swarm optimizer is based on the flocking behavior and social co-operation of birds and fish schools and draws heavily on the evolutionary behavior of these organisms. This paper provides a thorough survey of the PSO algorithm, with special emphasis on the development, deployment and improvements of its most basic as well as some state-of-the-art implementations. Concepts and directions for choosing the inertia weight, constriction factor, and cognitive and social weights, together with perspectives on convergence, parallelization, elitism, niching and discrete optimization as well as neighborhood topologies, are outlined. Hybridization attempts with other evolutionary and swarm paradigms in selected applications are covered, and an up-to-date review is put forward for the interested reader. Comment: 34 pages, 7 tables.
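    For readers new to the paradigm, a compact version of the canonical inertia-weight, global-best PSO that the survey builds on is sketched below; the parameter values are common defaults, not recommendations drawn from the survey:

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Canonical inertia-weight PSO with a global-best topology."""
    rng = np.random.default_rng(5)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_particles, dim))
    V = np.zeros_like(X)
    P = X.copy()                                     # personal best positions
    pf = np.array([cost(x) for x in X])              # personal best fitnesses
    g = P[np.argmin(pf)]                             # global best position
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lo, hi)
        f = np.array([cost(x) for x in X])
        better = f < pf
        P[better], pf[better] = X[better], f[better]
        g = P[np.argmin(pf)]
    return g, pf.min()

print(pso(lambda x: float(np.sum(x**2)), dim=5))     # converges near the origin
```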

    Impact of noise on a dynamical system: prediction and uncertainties from a swarm-optimized neural network

    In this study, an artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the Mackey-Glass chaotic time series for short-term prediction of x(t+6). The prediction performance was evaluated and compared with other studies available in the literature. We also presented properties of the dynamical system via the study of the chaotic behaviour obtained from the predicted time series. Next, the hybrid ANN+PSO algorithm was complemented with a Gaussian stochastic procedure (called the stochastic hybrid ANN+PSO) in order to obtain a new estimator of the predictions, which also allowed us to compute uncertainties of the predictions for noisy Mackey-Glass chaotic time series. Thus, we studied the impact of noise for several cases with a white noise level σ_N from 0.01 to 0.1. Comment: 11 pages, 8 figures.
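    A minimal sketch of generating the Mackey-Glass benchmark, the x(t+6) prediction target and one of the studied white noise levels; the integration scheme and parameter values are the customary benchmark choices and are assumed rather than taken from the paper:

```python
import numpy as np

def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, p=10, x0=1.2):
    """Mackey-Glass series from a unit-step Euler discretization of
    dx/dt = beta*x(t-tau)/(1 + x(t-tau)^p) - gamma*x(t).
    tau=17 gives the customary chaotic regime; the paper's setup may differ."""
    x = np.full(n + tau, x0)
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + beta * x[t - tau] / (1.0 + x[t - tau]**p) - gamma * x[t]
    return x[tau:]

series = mackey_glass(1000)
rng = np.random.default_rng(6)
noisy = series + rng.normal(0.0, 0.05, series.shape)   # one noise level, sigma_N = 0.05
inputs, targets = noisy[:-6], noisy[6:]                 # predict x(t+6) from x(t)
```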