36 research outputs found

    Particle swarm optimization based on information diffusion and clonal selection

    This paper introduces a novel PSO algorithm called InformPSO. Premature convergence is a well-known deficiency of conventional PSO. First, the causes of premature convergence in conventional PSO are analyzed. Second, the principles of information diffusion and clonal selection are incorporated into the proposed algorithm to achieve better diversity and to break away from local optima. Finally, compared with several other PSO variants, InformPSO yields better performance on unimodal and multimodal benchmark functions.
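
    As background, a minimal sketch of the canonical PSO update that InformPSO builds on is shown below; the diversity-triggered re-scatter at the end is a hypothetical stand-in for escaping local optima, not the paper's actual information-diffusion or clonal-selection operators.

    ```python
    import numpy as np

    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, diversity_eps=1e-3):
        """One canonical PSO update for a swarm x (n, d) with velocities v."""
        n, d = x.shape
        r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        # Hypothetical diversity check: if the swarm has collapsed onto one
        # point, re-scatter half of it to restore diversity.
        if np.mean(np.linalg.norm(x - x.mean(axis=0), axis=1)) < diversity_eps:
            idx = np.random.choice(n, n // 2, replace=False)
            x[idx] += np.random.normal(scale=1.0, size=(len(idx), d))
        return x, v
    ```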

    A novel hybrid backtracking search optimization algorithm for continuous function optimization

    Stochastic optimization algorithms provide a robust and efficient approach to solving complex real-world problems. The Backtracking Search Optimization Algorithm (BSA) is a new stochastic evolutionary algorithm, and the aim of this paper is to introduce a hybrid approach combining BSA and quadratic approximation (QA), called HBSA, for solving unconstrained non-linear, non-differentiable optimization problems. To validate the proposed method, its results are compared with five state-of-the-art particle swarm optimization (PSO) variants in terms of the numerical quality of the solutions. A sensitivity analysis of the BSA control parameter (F) is also performed.
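
    The quadratic approximation step that HBSA hybridizes with BSA admits a compact sketch: fit a parabola through three candidate points and jump to its vertex, dimension by dimension. The helper below is an illustrative rendering of that standard QA formula, not code from the paper.

    ```python
    import numpy as np

    def qa_point(r1, r2, r3, f1, f2, f3, eps=1e-12):
        """Vertex of the quadratic through (r1, f1), (r2, f2), (r3, f3).

        r1, r2, r3 are candidate positions (arrays, handled per dimension);
        f1, f2, f3 are their scalar fitness values.
        """
        num = (r2**2 - r3**2) * f1 + (r3**2 - r1**2) * f2 + (r1**2 - r2**2) * f3
        den = (r2 - r3) * f1 + (r3 - r1) * f2 + (r1 - r2) * f3
        return 0.5 * num / (den + eps)  # eps guards against a degenerate fit
    ```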

    The bees’ algorithm for design optimization of a gripper mechanism

    In this paper, a gripper mechanism is optimized using the bees’ algorithm (BA) and compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The procedure of the BA is presented, and its superiority is illustrated with results in figures and tables. A sensitivity analysis using a correlation test is carried out, and the effectiveness coefficients of the design variables for the objectives are provided. Consequently, the most influential design variables and the actual search behaviour of the BA are clearly evaluated and discussed. The BA provides the most dispersed and least crowded Pareto front population in the shortest time. The best solutions are therefore selected based on curve fitting: the solutions closest to the fitted curve are chosen as the best in their region.
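
    The final curve-fitting selection step lends itself to a short sketch: fit a curve to the Pareto front and keep the solutions nearest to it. The polynomial degree and distance measure below are assumptions, not taken from the paper.

    ```python
    import numpy as np

    def closest_to_fitted_curve(f1, f2, degree=2, k=3):
        """Indices of the k Pareto points nearest a polynomial fit f2 ~ p(f1)."""
        coeffs = np.polyfit(f1, f2, degree)              # fit a curve through the front
        residuals = np.abs(f2 - np.polyval(coeffs, f1))  # vertical distance to the fit
        return np.argsort(residuals)[:k]
    ```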

    Orthogonal learning particle swarm optimization

    Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its own historical best experience and its neighborhood’s best experience through linear summation. Such a learning strategy is easy to use but inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. This paper proposes an orthogonal learning (OL) strategy for PSO that discovers more of the useful information lying in the above two experiences via orthogonal experimental design. The resulting algorithm is named orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar, and it can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and algorithms are tested on a set of 16 benchmark functions and compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms: OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
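
    A much-simplified sketch of the orthogonal-learning idea follows: an exemplar is built by choosing, dimension by dimension, between the particle's best (P) and its neighborhood's best (N) according to a two-level orthogonal array. The tiny L4(2^3) array is purely illustrative, and the factor analysis that OLPSO performs on the trial results is omitted.

    ```python
    import numpy as np

    L4 = np.array([[0, 0, 0],   # each row is one trial; 0 -> take P, 1 -> take N
                   [0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])

    def orthogonal_exemplar(pbest, nbest, fitness):
        """Best of the L4 trial combinations, used as the learning exemplar."""
        trials = np.where(L4 == 0, pbest, nbest)      # combine P/N per dimension
        scores = np.array([fitness(t) for t in trials])
        return trials[np.argmin(scores)]              # minimization assumed
    ```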

    Niching grey wolf optimizer for multimodal optimization problems

    Metaheuristic algorithms are widely used for optimization in both research and industry for their simplicity, flexibility, and robustness. However, multi-modal optimization is a difficult task, even for metaheuristic algorithms. Two important issues that must be handled when solving multi-modal problems are (a) locating multiple local/global optima and (b) maintaining these optima until the end of the search. A robust local search ability is also a prerequisite for reaching the exact global optima. The Grey Wolf Optimizer (GWO) is a recently developed nature-inspired metaheuristic algorithm that requires little parameter tuning. However, GWO suffers from premature convergence and fails to maintain the balance between exploration and exploitation when solving multi-modal problems. This study proposes a niching GWO (NGWO) that incorporates the personal-best feature of PSO and a local search technique to address these issues. The proposed algorithm has been tested on 23 benchmark functions and three engineering cases. NGWO outperformed state-of-the-art metaheuristics such as PSO, GSA, GWO, Jaya, two improved variants of GWO, and a niching CSA on most of the test functions. Statistical analysis and Friedman tests have been conducted to compare the performance of these algorithms thoroughly.
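
    For context, the sketch below shows the standard GWO position update that NGWO extends; the personal-best pull at the end is a hypothetical rendering of the PSO-style feature mentioned above, not the paper's exact operator.

    ```python
    import numpy as np

    def gwo_step(x, alpha, beta, delta, pbest, a, c_p=0.5):
        """One GWO update of position x (d,) toward the three leader wolves.

        a is the coefficient that decreases from 2 to 0 over the iterations.
        """
        d = x.shape[0]
        def lead(leader):
            A = a * (2 * np.random.rand(d) - 1)   # exploration/exploitation factor
            C = 2 * np.random.rand(d)
            return leader - A * np.abs(C * leader - x)
        x_new = (lead(alpha) + lead(beta) + lead(delta)) / 3.0
        # Hypothetical personal-best term borrowed from PSO.
        return x_new + c_p * np.random.rand(d) * (pbest - x_new)
    ```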

    Particle Swarm Optimized Autonomous Learning Fuzzy System

    The antecedent and consequent parts of a first-order evolving intelligent system (EIS) determine the validity of the learning results and the overall system performance. Nonetheless, state-of-the-art techniques mostly stress novelty from the system-identification point of view and pay less attention to the optimality of the learned parameters. Using the recently introduced autonomous learning multiple-model (ALMMo) system as the implementation basis, this paper introduces a particle-swarm-based approach for EIS optimization. The proposed approach is able to simultaneously optimize the antecedent and consequent parameters of ALMMo and effectively enhance system performance by iteratively searching for optimal solutions in the problem spaces. In addition, the proposed optimization approach does not adversely influence the “one pass” learning ability of ALMMo: once the optimization process is complete, ALMMo can continue to learn from new data, incorporating unseen data patterns recursively without a full retraining. Experimental studies with a number of real-world benchmark problems validate the proposed concept and general principles. It is also verified that the proposed optimization approach can be applied to other types of EIS with similar operating mechanisms.
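
    The optimization loop described above can be sketched generically: PSO searching over a flattened parameter vector of an already-learned model against a validation objective. `validation_error` is a hypothetical placeholder; ALMMo's actual antecedent/consequent encoding is not reproduced here.

    ```python
    import numpy as np

    def pso_tune(theta0, validation_error, n=20, iters=50, w=0.7, c1=1.5, c2=1.5):
        """PSO refinement of a learned parameter vector theta0 (d,)."""
        d = theta0.size
        x = theta0 + 0.1 * np.random.randn(n, d)   # swarm around the learned params
        v = np.zeros((n, d))
        pbest = x.copy()
        pcost = np.array([validation_error(p) for p in x])
        g = pbest[np.argmin(pcost)]
        for _ in range(iters):
            r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            cost = np.array([validation_error(p) for p in x])
            better = cost < pcost                  # update personal bests
            pbest[better], pcost[better] = x[better], cost[better]
            g = pbest[np.argmin(pcost)]            # update global best
        return g
    ```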

    Particle swarm optimization with state-based adaptive velocity limit strategy

    Velocity limit (VL) has been widely adopted in many variants of particle swarm optimization (PSO) to prevent particles from searching outside the solution space. Several adaptive VL strategies have been introduced with which the performance of PSO can be improved. However, existing adaptive VL strategies adjust the VL based simply on the iteration count, leading to unsatisfactory results because of the mismatch between the VL and the current search state of the particles. To deal with this problem, a novel PSO variant with a state-based adaptive velocity limit strategy (PSO-SAVL) is proposed. In PSO-SAVL, the VL is adaptively adjusted based on evolutionary state estimation (ESE): a high VL is set for the global searching state and a low VL for the local searching state. In addition, limit-handling strategies have been modified and adopted to improve the ability to avoid local optima. The good performance of PSO-SAVL has been experimentally validated on a wide range of benchmark functions with 50 dimensions, and its satisfactory scalability to high-dimensional and large-scale problems is also verified. The merits of the strategies in PSO-SAVL are confirmed in experiments, a sensitivity analysis of the relevant hyper-parameters of the state-based adaptive VL strategy is conducted, and insights into how to select these hyper-parameters are discussed.
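
    The core idea admits a brief sketch: clamp velocities to a limit that adapts to the estimated search state, high while exploring globally and low while refining locally. The linear mapping from a state indicator in [0, 1] to the limit is an assumption; the paper's ESE procedure is more elaborate.

    ```python
    import numpy as np

    def adaptive_velocity_limit(state, v_min_frac=0.05, v_max_frac=0.5, span=10.0):
        """Map state in [0, 1] (1 ~ global search, 0 ~ local refinement) to a VL.

        span is the width of the search range; the fractions are assumptions.
        """
        return (v_min_frac + state * (v_max_frac - v_min_frac)) * span

    def clamp_velocity(v, vl):
        """Clamp each velocity component to [-vl, vl]."""
        return np.clip(v, -vl, vl)
    ```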