1,524 research outputs found

    Efficiency Analysis of Swarm Intelligence and Randomization Techniques

    Full text link
    Swarm intelligence has become a powerful technique for solving design and scheduling tasks. Metaheuristic algorithms are an integral part of this paradigm, and particle swarm optimization (PSO) is often viewed as an important landmark. The outstanding performance and efficiency of swarm-based algorithms have inspired many new developments, though the mathematical understanding of metaheuristics remains partly a mystery. In contrast to classic deterministic algorithms, metaheuristics such as PSO always use some form of randomness, and such randomization now employs various techniques. This paper intends to review and analyze some of the convergence and efficiency properties associated with metaheuristics such as the firefly algorithm, random walks, and Lévy flights. We will discuss how these techniques are used and their implications for further research. Comment: 10 pages. arXiv admin note: substantial text overlap with arXiv:1212.0220, arXiv:1208.0527, arXiv:1003.146
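
    As a concrete illustration of the randomization techniques discussed above, the sketch below contrasts a plain Gaussian random-walk step with a heavy-tailed Lévy-flight step generated via Mantegna's algorithm. This is a generic Python illustration under assumed settings (step scale 0.01, exponent beta = 1.5), not code from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def gaussian_step(dim, scale=0.01):
    """Plain random-walk step: isotropic Gaussian increments."""
    return scale * np.random.randn(dim)

def levy_step(dim, beta=1.5, scale=0.01):
    """Levy-flight step via Mantegna's algorithm (heavy-tailed increments)."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.randn(dim) * sigma_u
    v = np.random.randn(dim)
    return scale * u / np.abs(v) ** (1 / beta)

# Occasional very long jumps distinguish Levy flights from Gaussian walks.
lengths = [np.linalg.norm(levy_step(2)) for _ in range(1000)]
print("max / median Levy step length:", max(lengths) / np.median(lengths))
```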

    Multi self-adapting particle swarm optimization algorithm (MSAPSO).

    Get PDF
    The performance and stability of the particle swarm optimization (PSO) algorithm depend on parameters that are typically tuned manually or adapted based on knowledge from empirical parameter studies. Such parameter selection is ineffectual when faced with a broad range of problem types, which often hinders the adoption of PSO for real-world problems. This dissertation develops a dynamic self-optimization approach for the respective parameters (inertia weight, social and cognitive coefficients). The effects of self-adaptation on the optimal balance between superior performance (convergence) and robustness (divergence) of the algorithm are investigated on both simple and complex benchmark functions. This work creates a swarm variant that is parameter-less, meaning it is virtually independent of the underlying problem type. Since PSO variants are always at risk of becoming stuck in local optima, the MSAPSO algorithm additionally embeds a highly flexible escape-lmin strategy that works independently of the problem dimension. With these two major algorithmic elements (the parameter-less approach and the dimension-less escape-lmin strategy), MSAPSO outperforms other PSO variants as well as other swarm-inspired approaches such as the Memetic Firefly algorithm. The average performance increase in two dimensions is at least fifteen percent relative to the compared swarm variants. In higher dimensions (≥ 250) the performance gain accumulates to about fifty percent on average. At the same time, the error-proneness of MSAPSO is on average similar to, or even significantly better than, that of the compared variants when converging to the respective global optima.
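
    For readers unfamiliar with the parameters the dissertation adapts, the minimal sketch below shows a canonical inertia-weight PSO in which the inertia weight w, the cognitive coefficient c1, and the social coefficient c2 are fixed by hand. MSAPSO's actual self-adaptation and escape-lmin rules are not reproduced here; this is only the baseline update those rules act on, with illustrative parameter values.

```python
import numpy as np

def sphere(x):
    """Simple benchmark: sum of squares, minimum at the origin."""
    return np.sum(x ** 2, axis=1)

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical inertia-weight PSO; w, c1 (cognitive) and c2 (social) are the
    parameters that MSAPSO adapts online -- here they are simply fixed."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # positions
    v = np.zeros((n, dim))                    # velocities
    pbest, pbest_f = x.copy(), f(x)           # personal bests
    g = pbest[np.argmin(pbest_f)]             # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = f(x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

print(pso(sphere))   # converges near the origin on the sphere function
```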

    Particle Swarms Reformulated towards a Unified and Flexible Framework

    Get PDF

    State-of-the-art in aerodynamic shape optimisation methods

    Get PDF
    Aerodynamic optimisation has become an indispensable component of any aerodynamic design over the past 60 years, with applications to aircraft, cars, trains, bridges, wind turbines, internal pipe flows, and cavities, among others, and is thus relevant in many facets of technology. With advancements in computational power, automated design optimisation procedures have become more capable; however, there is ambiguity and bias throughout the literature with regard to the relative performance of optimisation architectures and the algorithms employed. This paper provides a well-balanced critical review of the dominant optimisation approaches that have been integrated with aerodynamic theory for the purpose of shape optimisation. A total of 229 papers, published in more than 120 journals and conference proceedings, have been classified into 6 different optimisation algorithm approaches. The material cited includes some of the most well-established authors and publications in the field of aerodynamic optimisation. This paper aims to eliminate bias toward certain algorithms by analysing the limitations, drawbacks, and benefits of the most utilised optimisation approaches. This review provides comprehensive but straightforward insight for non-specialists and a reference detailing the current state of the field for specialist practitioners.

    Nature-inspired optimization algorithms: challenges and open problems

    Get PDF
    Many problems in science and engineering can be formulated as optimization problems, subject to complex nonlinear constraints. The solutions of highly nonlinear problems usually require sophisticated optimization algorithms, and traditional algorithms may struggle to deal with such problems. A current trend is to use nature-inspired algorithms due to their flexibility and effectiveness. However, there are some key issues concerning nature-inspired computation and swarm intelligence. This paper provides an in-depth review of some recent nature-inspired algorithms with an emphasis on their search mechanisms and mathematical foundations. Some challenging issues are identified and five open problems are highlighted, concerning the analysis of algorithmic convergence and stability, parameter tuning, the mathematical framework, the role of benchmarking, and scalability. These problems are discussed together with directions for future research.

    CSM-465: The Sampling Distribution of Particle Swarm Optimisers and their Stability

    Get PDF
    Several theoretical analyses of the dynamics of particle swarms have been offered in the literature over the last decade. Virtually all rely on substantial simplifications, often including the assumption that the particles are deterministic. This has prevented the exact characterisation of the sampling distribution of the PSO. In this paper we introduce a novel method that allows us to exactly determine all the characteristics of a PSO's sampling distribution and explain how it changes over any number of generations, in the presence of stochasticity. The only assumption we make is stagnation, i.e., we study the sampling distribution produced by particles in the search for a better personal best. We apply the analysis to the PSO with inertia weight, but the analysis is also valid for the PSO with constriction and other forms of PSO.
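
    The stagnation assumption can be made concrete with a small Monte Carlo experiment: fix a particle's personal best p and global best g, iterate the inertia-weight update over many independent runs, and inspect the empirical distribution of the sampled positions. The paper determines these characteristics exactly; the sketch below, with arbitrarily chosen coefficients, only estimates them numerically.

```python
import numpy as np

def stagnation_samples(w=0.7, c1=1.5, c2=1.5, p=1.0, g=0.0,
                       n_gen=50, n_runs=20000, seed=1):
    """Monte Carlo estimate of a single particle's sampling distribution under
    stagnation (fixed personal best p and global best g) for inertia-weight PSO.
    One scalar dimension, many independent runs."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_runs)
    v = np.zeros(n_runs)
    for _ in range(n_gen):
        r1, r2 = rng.random(n_runs), rng.random(n_runs)
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        x = x + v
    return x

x = stagnation_samples()
# Known fixed point of the expected dynamics: (c1*p + c2*g) / (c1 + c2) = 0.5 here.
print("empirical mean ~", x.mean())
print("empirical std  ~", x.std())
```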

    Particle swarm optimization: stability analysis using N-informers under arbitrary coefficient distributions

    Get PDF
    This paper derives, under minimal modelling assumptions, a simple-to-use theorem for obtaining both order-1 and order-2 stability criteria for a common class of particle swarm optimization (PSO) variants. Specifically, PSO variants that can be rewritten as a finite sum of stochastically weighted difference vectors between a particle’s position and swarm informers are covered by the theorem. Additionally, the derived theorem allows a PSO practitioner to obtain stability criteria that contain no artificial restriction on the relationship between control coefficients. The majority of previous stability results for PSO variants provided stability criteria under the restriction that certain control coefficients are equal; no such restrictions are needed when using the derived theorem. As a demonstration of its ease of use, the theorem is applied to derive stability criteria, without the imposed restriction on the relationship between the control coefficients, for four popular PSO variants.
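
    As a rough sketch of the setting, in illustrative notation rather than the paper's own: the theorem covers variants whose velocity update is a finite sum of stochastically weighted difference vectors between the particle's position and its informers.

```latex
% Illustrative notation; the symbols below are not taken from the paper.
\begin{align}
  v_{t+1} &= w\, v_t + \sum_{k=1}^{N} r^{(k)}_t \odot \bigl(i^{(k)}_t - x_t\bigr), \\
  x_{t+1} &= x_t + v_{t+1},
\end{align}
% where i^{(k)}_t is the k-th informer (e.g. a personal or neighbourhood best)
% and each r^{(k)}_t is a random vector drawn from some coefficient distribution.
% Order-1 stability: the sequence E[x_t] converges to a fixed point.
% Order-2 stability: the second moment E[x_t^2] (hence the variance) also converges.
```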

    Particle Swarm Optimization and Uncertainty Assessment in Inverse Problems

    Get PDF
    Most inverse problems in industry (and particularly in geophysical exploration) are highly underdetermined because the number of model parameters is too high to achieve accurate data predictions and because the sampling of the data space is scarce and incomplete; it is also always affected by different kinds of noise. Additionally, the physics of the forward problem is a simplification of reality. All these facts mean that the inverse problem solution is not unique; that is, there are different inverse solutions (called equivalent), compatible with the prior information, that fit the observed data within similar error bounds. In the case of nonlinear inverse problems, these equivalent models are located in disconnected flat curvilinear valleys of the cost-function topography. The uncertainty analysis consists of obtaining a representation of this complex topography via different sampling methodologies. In this paper, we focus on the use of a particle swarm optimization (PSO) algorithm to sample the region of equivalence in nonlinear inverse problems. Although this methodology is general purpose, we show its application to the uncertainty assessment of the solution of a geophysical problem concerning gravity inversion in sedimentary basins, showing that it is possible to perform this task efficiently in a sampling-while-optimizing mode. In particular, we explain how to use and analyze the geophysical models sampled by exploratory PSO family members to infer different descriptors of nonlinear uncertainty.
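
    The sampling-while-optimizing idea can be sketched generically: run a PSO on the data-misfit function and retain every evaluated model whose misfit falls below a chosen tolerance as a sample of the region of equivalence. In the sketch below the misfit is a toy two-parameter stand-in rather than a gravity forward model, and the tolerance and PSO coefficients are arbitrary choices.

```python
import numpy as np

def misfit(m):
    """Toy nonlinear least-squares misfit standing in for an inverse-problem
    forward model (illustrative only); m has shape (n_models, 2)."""
    d_obs = np.array([1.0, 2.0])
    d_pred = np.stack([m[:, 0] * m[:, 1], m[:, 0] + m[:, 1] ** 2], axis=1)
    return np.sum((d_pred - d_obs) ** 2, axis=1)

def pso_sample_equivalence(f, dim=2, n=30, iters=150, tol=0.05,
                           w=0.72, c1=1.49, c2=1.49, seed=0):
    """Plain PSO that also keeps every evaluated model with misfit below `tol`
    as a sample of the equivalence region (sampling while optimizing)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3, 3, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), f(x)
    g = pbest[np.argmin(pbest_f)]
    kept = []
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = f(x)
        kept.append(x[fx < tol])                  # equivalent models
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return np.vstack(kept)

eq = pso_sample_equivalence(misfit)
print(eq.shape[0], "equivalent models collected; spread per parameter:", eq.std(axis=0))
```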