
    GREEN-PSO: Conserving Function Evaluations in Particle Swarm Optimization

    Keywords: particle swarm optimization; swarm intelligence.
    In the Particle Swarm Optimization (PSO) algorithm, the expense of evaluating the objective function can make it difficult, or impossible, to use this approach effectively; reducing the number of necessary function evaluations would make it possible to apply the PSO algorithm more widely. Many function approximation techniques have been developed that address this issue, but an alternative to function approximation is function conservation. We describe GREEN-PSO (GR-PSO), an algorithm that, given a fixed number of function evaluations, conserves those function evaluations by probabilistically choosing a subset of particles smaller than the entire swarm on each iteration and allowing only those particles to perform function evaluations. The "surplus" of function evaluations thus created allows a greater number of particles and/or iterations. In spite of the loss of information resulting from this more parsimonious use of function evaluations, GR-PSO performs as well as, or better than, the standard PSO algorithm on a set of six benchmark functions, both in terms of the rate of error reduction and the quality of the final solution.
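
    The core idea above lends itself to a compact illustration. Below is a minimal, hedged sketch of a global-best PSO in which only a probabilistically chosen subset of particles spends function evaluations each iteration; the parameter names (e.g. eval_prob) and the gbest topology are assumptions for illustration, not the authors' exact formulation.

    # Minimal sketch of the evaluation-conserving idea described above:
    # on each iteration only a probabilistically chosen subset of particles
    # pays for objective-function evaluations. Parameter names (eval_prob)
    # and the gbest topology are illustrative assumptions.
    import numpy as np

    def gr_pso_sketch(f, dim, bounds, swarm_size=30, eval_budget=3000,
                      eval_prob=0.5, w=0.7, c1=1.5, c2=1.5, rng=None):
        rng = np.random.default_rng(rng)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (swarm_size, dim))
        v = np.zeros((swarm_size, dim))
        pbest = x.copy()
        pbest_f = np.array([f(p) for p in x])      # initial evaluations
        evals = swarm_size
        gbest = pbest[np.argmin(pbest_f)].copy()

        while evals < eval_budget:
            # velocity/position update for the whole swarm (cheap)
            r1, r2 = rng.random((2, swarm_size, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)

            # only a random subset spends function evaluations
            chosen = rng.random(swarm_size) < eval_prob
            for i in np.where(chosen)[0]:
                if evals >= eval_budget:
                    break
                fx = f(x[i]); evals += 1
                if fx < pbest_f[i]:
                    pbest_f[i], pbest[i] = fx, x[i].copy()
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest, pbest_f.min()

    # Example: sphere function in 10 dimensions
    if __name__ == "__main__":
        best_x, best_f = gr_pso_sketch(lambda z: float(np.sum(z * z)),
                                       dim=10, bounds=(-5.0, 5.0))
        print(best_f)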

    Steady state particle swarm

    This paper investigates the performance and scalability of a new update strategy for the particle swarm optimization (PSO) algorithm. The strategy is inspired by the Bak–Sneppen model of co-evolution between interacting species, which is essentially a network of fitness values (representing species) that change over time according to a simple rule: the least fit species and its neighbors are iteratively replaced with random values. Following these guidelines, a steady state and dynamic update strategy for PSO algorithms is proposed: only the least fit particle and its neighbors are updated and evaluated in each time-step; the remaining particles maintain the same position and fitness unless they meet the update criterion. The steady state PSO was tested on a set of unimodal, multimodal, noisy and rotated benchmark functions, significantly improving the quality of results and convergence speed of the standard PSOs and of more sophisticated PSOs with dynamic parameters and neighborhood. A sensitivity analysis of the parameters confirms the performance enhancement under different parameter settings, and scalability tests show that the algorithm behavior is consistent throughout a substantial range of solution vector dimensions. This work was supported by Fundação para a Ciência e Tecnologia (FCT) Research Fellowship SFRH/BPD/66876/2009, FCT project UID/EEA/50009/2013, EPHEMECH (TIN2014-56494-C4-3-P, Spanish Ministry of Economy and Competitiveness), PROY-PP2015-06 (Plan Propio 2015 UGR), and project CEI2015-MP-V17 of the Microprojects program 2015 from CEI BioTIC Granada.
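
    A rough sketch of the update rule described above may help: in each time-step only the least fit particle and its two ring-topology neighbours are moved and re-evaluated, while the rest of the swarm keeps its position and fitness. The coefficients, ring topology, and minimisation convention are illustrative assumptions.

    # Sketch of the steady-state, Bak-Sneppen-inspired update rule: only the
    # least fit particle and its two ring neighbours are updated and evaluated
    # per time-step; all other particles keep position and fitness.
    import numpy as np

    def steady_state_pso_sketch(f, dim, bounds, swarm_size=20, steps=2000,
                                w=0.7, c1=1.5, c2=1.5, rng=None):
        rng = np.random.default_rng(rng)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (swarm_size, dim))
        v = np.zeros((swarm_size, dim))
        fit = np.array([f(p) for p in x])
        pbest, pbest_f = x.copy(), fit.copy()
        gbest = pbest[np.argmin(pbest_f)].copy()

        for _ in range(steps):
            worst = int(np.argmax(fit))   # least fit particle (largest value under minimisation)
            group = [(worst - 1) % swarm_size, worst, (worst + 1) % swarm_size]
            for i in group:
                r1, r2 = rng.random(dim), rng.random(dim)
                v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
                x[i] = np.clip(x[i] + v[i], lo, hi)
                fit[i] = f(x[i])          # only these particles are re-evaluated
                if fit[i] < pbest_f[i]:
                    pbest_f[i], pbest[i] = fit[i], x[i].copy()
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest, pbest_f.min()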

    Niching particle swarm optimization based on Euclidean distance and hierarchical clustering for multimodal optimization

    Multimodal optimization remains one of the most challenging tasks in the evolutionary computation field, since multiple global and local optima need to be located both effectively and efficiently. In this paper, a niching Particle Swarm Optimization (PSO) based on Euclidean Distance and Hierarchical Clustering (EDHC) for multimodal optimization is proposed. This technique first uses the Euclidean distance based PSO algorithm to perform a preliminary search. In this phase, the particles are rapidly clustered around peaks. Secondly, hierarchical clustering is applied to identify and concentrate the particles distributed around each peak, so that each peak region can be finely searched as a whole. Finally, a small-world network topology is adopted in each niche to improve the exploitation ability of the algorithm. At the end of this paper, the proposed EDHC-PSO algorithm is applied to the Traveling Salesman Problem (TSP) after being discretized. The experiments demonstrate that the proposed method outperforms existing niching techniques on benchmark problems and is effective for the TSP.
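
    As a hedged illustration of the clustering phase described above, the fragment below groups particle positions into niches with agglomerative (hierarchical) clustering on Euclidean distance; the linkage method and distance threshold are assumptions, not the paper's exact settings.

    # Group particles into niches by hierarchical clustering on their positions,
    # so that each cluster (niche) can then be refined around its own peak.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def identify_niches(positions, distance_threshold=0.5):
        """Cluster particle positions with average-linkage hierarchical clustering."""
        Z = linkage(positions, method="average", metric="euclidean")
        labels = fcluster(Z, t=distance_threshold, criterion="distance")
        return {k: np.where(labels == k)[0] for k in np.unique(labels)}

    # Example: particles already concentrated near two peaks
    pts = np.vstack([np.random.normal([0, 0], 0.05, (10, 2)),
                     np.random.normal([3, 3], 0.05, (10, 2))])
    print({k: len(idx) for k, idx in identify_niches(pts).items()})  # two niches expected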

    A Survey of Evolutionary Continuous Dynamic Optimization Over Two Decades: Part B

    Many real-world optimization problems are dynamic. The field of dynamic optimization deals with such problems where the search space changes over time. In this two-part paper, we present a comprehensive survey of the research in evolutionary dynamic optimization for single-objective unconstrained continuous problems over the last two decades. In Part A of this survey, we propose a new taxonomy for the components of dynamic optimization algorithms, namely, convergence detection, change detection, explicit archiving, diversity control, and population division and management. In comparison to the existing taxonomies, the proposed taxonomy covers some additional important components, such as convergence detection and computational resource allocation. Moreover, we significantly expand and improve the classifications of diversity control and multi-population methods, which are under-represented in the existing taxonomies. We then provide detailed technical descriptions and analysis of the different components according to the suggested taxonomy. Part B of this survey provides an in-depth analysis of the most commonly used benchmark problems, performance analysis methods, static optimization algorithms used as the optimization components in dynamic optimization algorithms, and dynamic real-world applications. Finally, several opportunities for future work are pointed out.

    Seeking multiple solutions: an updated survey on niching methods and their applications

    Multi-Modal Optimization (MMO), aiming to locate multiple optimal (or near-optimal) solutions in a single simulation run, has practical relevance to problem solving across many fields. Population-based meta-heuristics have been shown to be particularly effective in solving MMO problems if equipped with specifically designed diversity-preserving mechanisms, commonly known as niching methods. This paper provides an updated survey on niching methods. The paper first revisits the fundamental concepts of niching and its most representative schemes, then reviews the most recent development of niching methods, including novel and hybrid methods, performance measures, and benchmarks for their assessment. Furthermore, the paper surveys previous attempts at leveraging the capabilities of niching to facilitate various optimization tasks (e.g., multi-objective and dynamic optimization) and machine learning tasks (e.g., clustering, feature selection, and learning ensembles). A list of successful applications of niching methods to real-world problems is presented to demonstrate the capabilities of niching methods in providing solutions that are difficult for other optimization methods to offer. The significant practical value of niching methods is clearly exemplified through these applications. Finally, the paper poses challenges and research questions on niching that are yet to be appropriately addressed. Providing answers to these questions is crucial before we can bring more fruitful benefits of niching to real-world problem solving.

    A study of gradient based particle swarm optimisers

    Gradient-based optimisers are a natural way to solve optimisation problems, and have long been used for their efficacy in exploiting the search space. Particle swarm optimisers (PSOs), when using reasonable algorithm parameters, are considered to have good exploration characteristics. This thesis proposes a specific way of constructing hybrid gradient PSOs: heterogeneous, hybrid gradient PSOs are constructed by allowing the gradient algorithm to optimise local best particles, while the PSO algorithm governs the behaviour of the rest of the swarm. This approach allows the distinct algorithms to concentrate on performing the separate tasks of exploration and exploitation. Two new PSOs are introduced: the Gradient Descent PSO (GDPSO), which combines the Gradient Descent and PSO algorithms, and the LeapFrog PSO, which combines the LeapFrog and PSO algorithms. The GDPSO represents arguably the simplest hybrid gradient PSO possible, while the LeapFrog PSO incorporates the more sophisticated LFOP1(b) algorithm, with its heuristic design and dynamic time step adjustment mechanism. The strong tendency of these hybrids to prematurely converge is examined, and it is shown that by modifying algorithm parameters and delaying the introduction of gradient information, it is possible to retain the strong exploration capabilities of the original PSO algorithm while also benefiting from the exploitation of the gradient algorithms. Dissertation (MSc), University of Pretoria, 2010. Computer Science.
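
    The hybrid design described above can be sketched roughly as follows: the swarm performs standard PSO updates (exploration) while the best position found so far is refined by a few gradient-descent steps whose introduction is delayed (exploitation). For brevity this sketch refines only the global best rather than local best particles, and the finite-difference gradient, step size, and delay parameter are assumptions.

    # Condensed sketch of a gradient-hybrid PSO: standard PSO updates for the
    # swarm, plus delayed gradient-descent refinement of the best-known position.
    import numpy as np

    def numerical_grad(f, x, eps=1e-6):
        # central finite-difference gradient (illustrative stand-in)
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x); e[i] = eps
            g[i] = (f(x + e) - f(x - e)) / (2 * eps)
        return g

    def gd_pso_sketch(f, dim, bounds, swarm_size=20, iters=200, grad_delay=50,
                      grad_steps=3, lr=0.05, w=0.7, c1=1.5, c2=1.5, rng=None):
        rng = np.random.default_rng(rng)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (swarm_size, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.array([f(p) for p in x])
        g_idx = int(np.argmin(pbest_f))

        for t in range(iters):
            r1, r2 = rng.random((2, swarm_size, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (pbest[g_idx] - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            g_idx = int(np.argmin(pbest_f))

            # delayed gradient refinement of the best position (exploitation)
            if t >= grad_delay:
                y = pbest[g_idx].copy()
                for _ in range(grad_steps):
                    y = np.clip(y - lr * numerical_grad(f, y), lo, hi)
                fy = f(y)
                if fy < pbest_f[g_idx]:
                    pbest[g_idx], pbest_f[g_idx] = y, fy
        return pbest[g_idx], pbest_f[g_idx]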

    Learning automata and sigma imperialist competitive algorithm for optimization of single and multi-objective functions

    Evolutionary Algorithms (EAs) consist of several heuristics which are able to solve optimisation tasks by imitating some aspects of natural evolution. Two widely used EAs, namely Harmony Search (HS) and the Imperialist Competitive Algorithm (ICA), are considered for improving single-objective EAs and Multi-Objective EAs (MOEAs), respectively. HS is popular because of its speed, and ICA has the ability to escape local optima, which is an important criterion for a MOEA. However, both algorithms suffer from some shortcomings. The HS algorithm can become trapped in local optima if its parameters are not tuned properly, which causes a low convergence rate and high computational time. In ICA, there is a big obstacle that impedes ICA from becoming a MOEA: ICA cannot be matched with the crowding distance method, which produces a qualitative value for MOEAs, while ICA needs a quantitative value to determine the power of each solution. This research proposes a learnable EA, named learning automata harmony search (LAHS). The EA employs a learning automata (LA) based approach to ensure that the HS parameters are learnable. This research also proposes a new MOEA based on ICA and the Sigma method, named the Sigma Imperialist Competitive Algorithm (SICA). The Sigma method provides a mechanism to measure each solution's power based on a quantitative value. The proposed LAHS and SICA algorithms are tested on well-known single-objective and multi-objective benchmarks, respectively. Both LAHS and SICA show improvements in convergence rate and computational time in comparison to well-known single-objective EAs and MOEAs.
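
    For readers unfamiliar with the Sigma method referenced above, the following sketch shows one common bi-objective formulation (after Mostaghim and Teich): each non-dominated archive member receives a scalar sigma value, and a particle selects the archive member with the closest sigma as its leader. Treating this as the quantitative "power" measure used by SICA is an assumption made here for illustration.

    # Bi-objective sigma values and sigma-based leader selection (illustrative).
    import numpy as np

    def sigma_value(f1, f2, eps=1e-12):
        """Sigma for a bi-objective point, in [-1, 1]."""
        return (f1**2 - f2**2) / (f1**2 + f2**2 + eps)

    def select_leader(particle_obj, archive_obj):
        """Return the index of the archive member with the closest sigma value."""
        s_p = sigma_value(*particle_obj)
        s_a = np.array([sigma_value(f1, f2) for f1, f2 in archive_obj])
        return int(np.argmin(np.abs(s_a - s_p)))

    # Example usage with a hypothetical archive of non-dominated points
    archive = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
    print(select_leader((0.4, 0.6), archive))   # picks the middle archive member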

    Novel sampling techniques for reservoir history matching optimisation and uncertainty quantification in flow prediction

    Modern reservoir management has an increasing focus on accurately predicting the likely range of field recoveries. A variety of assisted history matching techniques has been developed across the research community concerned with this topic. These techniques are based on obtaining multiple models that closely reproduce the historical flow behaviour of a reservoir. The resulting set of history-matched models is then used to quantify uncertainty in predicting the future performance of the reservoir and to provide economic evaluations for different field development strategies. The key step in this workflow is to employ algorithms that sample the parameter space in an efficient but appropriate manner. The choice of algorithm has an impact on how fast a model is obtained and how well the model fits the production data. The sampling techniques that have been developed to date include, among others, gradient based methods, evolutionary algorithms, and the ensemble Kalman filter (EnKF). This thesis has investigated and further developed the following sampling and inference techniques: Particle Swarm Optimisation (PSO), Hamiltonian Monte Carlo, and Population Markov Chain Monte Carlo. The inspected techniques have the capability of navigating the parameter space and producing history-matched models that can be used to quantify the uncertainty in the forecasts in a faster and more reliable way. The analysis of these techniques, compared with the Neighbourhood Algorithm (NA), has shown how the different techniques affect the predicted recovery from petroleum systems and the benefits of the developed methods over the NA. The history matching problem is multi-objective in nature, with the production data possibly consisting of multiple types, coming from different wells, and collected at different times. Multiple objectives can be constructed from these data and explicitly optimised in a multi-objective scheme. The thesis has extended PSO to handle multi-objective history matching problems in which a number of possibly conflicting objectives must be satisfied simultaneously. The benefits and efficiency of the innovative multi-objective particle swarm scheme (MOPSO) are demonstrated for synthetic reservoirs. It is demonstrated that the MOPSO procedure can provide a substantial improvement in finding a diverse set of good fitting models with fewer of the very costly forward simulation runs than the standard single-objective case, depending on how the objectives are constructed. The thesis has also shown how to tackle a large number of unknown parameters through the coupling of high performance global optimisation algorithms, such as PSO, with model reduction techniques such as kernel principal component analysis (PCA) for parameterising spatially correlated random fields. The results of the PSO-PCA coupling applied to a recent SPE benchmark history matching problem demonstrate that the approach is indeed applicable to practical problems. A comparison of PSO with the EnKF data assimilation method has been carried out and concluded that both methods obtained comparable results on the example case. This point reinforces the need for using a range of assisted history matching algorithms for more confidence in predictions.
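
    The PSO-PCA coupling mentioned above can be illustrated with a toy sketch: kernel PCA is fitted on an ensemble of prior field realisations, the optimiser then searches the low-dimensional coefficient space, and candidate coefficients are mapped back to a full field before the expensive flow simulation. The ensemble, misfit function, and grid size below are placeholders, not the thesis's actual setup.

    # Toy sketch of parameterising spatially correlated fields with kernel PCA
    # and evaluating candidates proposed by a global optimiser in the reduced space.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    prior_fields = rng.normal(size=(200, 50 * 50))   # 200 prior realisations on a 50x50 grid

    kpca = KernelPCA(n_components=20, kernel="rbf", gamma=1e-3,
                     fit_inverse_transform=True)
    coeffs_prior = kpca.fit_transform(prior_fields)

    def misfit(coeffs):
        """Placeholder history-matching misfit: reconstruct the field and compare
        simulated production data with observations (here just a dummy norm)."""
        field = kpca.inverse_transform(coeffs.reshape(1, -1))[0]
        return float(np.linalg.norm(field))          # stand-in for a flow simulation

    # Any global optimiser (e.g. a PSO over the 20 coefficients) can now search
    # the reduced space; a single random candidate is evaluated here as a demo.
    candidate = coeffs_prior[rng.integers(len(coeffs_prior))]
    print(misfit(candidate))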