20 research outputs found

    Flower pollination algorithm with pollinator attraction

    The Flower Pollination Algorithm (FPA) is a highly efficient optimization algorithm inspired by the evolution of flowering plants. In the present study, a modified version of FPA is proposed that accounts for an additional feature of flower pollination in nature, the so-called pollinator attraction. Pollinator attraction is the natural tendency of flower species to evolve so as to attract pollinators through their colour, shape and scent, as well as nutritious rewards. To reflect this evolutionary mechanism, the proposed FPA variant with Pollinator Attraction (FPAPA) gives fitter flowers of the population higher probabilities of achieving pollen transfer via biotic pollination than other flowers. FPAPA is tested against a set of 28 benchmark mathematical functions, defined in IEEE-CEC’13 for real-parameter single-objective optimization problems, as well as structural optimization problems. Numerical experiments show that the modified FPA represents a statistically significant improvement over the original FPA and that it can outperform other state-of-the-art optimization algorithms, offering better and more robust optimal solutions. Additional research is suggested to combine FPAPA with other modified and hybridized versions of FPA to further increase its performance on challenging optimization problems.
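    The abstract states only that fitter flowers receive higher biotic-pollination probabilities, not the exact rule. A minimal sketch of one such fitness-proportional (roulette-wheel) donor selection, assuming minimization; FPAPA's actual probability model may differ:

```python
import numpy as np

def attraction_probabilities(fitness):
    """Map fitness values (minimization) to selection probabilities so that
    fitter flowers are more likely to act as pollen donors.
    Illustrative roulette-wheel scheme, not the paper's exact FPAPA rule."""
    # Shift so the best (lowest) fitness gets the largest weight.
    weights = fitness.max() - fitness + 1e-12
    return weights / weights.sum()

rng = np.random.default_rng(0)
fitness = np.array([3.0, 1.0, 4.0, 2.0])   # flower qualities (lower = fitter)
p = attraction_probabilities(fitness)
donor = rng.choice(len(fitness), p=p)      # biotic pollination donor index
```

    Here the fittest flower (fitness 1.0) receives the largest share of the selection probability, mirroring the idea that attractive flowers draw more pollinator visits.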

    Flower pollination algorithm parameters tuning

    The flower pollination algorithm (FPA) is a highly efficient metaheuristic optimization algorithm that is inspired by the pollination process of flowering species. FPA is characterised by simplicity in its formulation and high computational performance. Previous studies on FPA assume fixed parameter values based on empirical observations or experimental comparisons of limited scale and scope. In this study, a comprehensive effort is made to identify appropriate values of the FPA parameters that maximize its computational performance. To serve this goal, a simple non-iterative, single-stage sampling tuning method is employed, oriented towards practical applications of FPA. The tuning method is applied to the set of 28 functions specified in IEEE-CEC'13 for real-parameter single-objective optimization problems. It is found that the optimal FPA parameters depend significantly on the objective functions, the problem dimensions and the affordable computational cost. Furthermore, it is found that the FPA parameters that minimize mean prediction errors do not always offer the most robust predictions. At the end of this study, recommendations are made for setting the optimal FPA parameters as a function of problem dimensions and affordable computational cost. [Abstract copyright: © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021.]
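    The idea of a non-iterative, single-stage sampling tuner can be sketched as: evaluate a fixed sample of parameter settings once each, then pick the best-scoring setting, with no iterative refinement. The toy `run_fpa` below is a simplified stand-in (the switch probability decides global vs. local moves); a real tuning study would call an actual FPA implementation, and the settings grid here is only illustrative:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def run_fpa(switch_p, pop_size, dim=5, iters=50):
    """Toy stand-in for an FPA run: returns the best objective value found.
    Only meant to illustrate the tuning loop, not FPA itself."""
    rng = np.random.default_rng(0)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    best = min(sphere(x) for x in pop)
    for _ in range(iters):
        for i in range(pop_size):
            if rng.random() < switch_p:                 # "global" step
                step = rng.standard_normal(dim) * 0.1
                cand = pop[i] + step * (pop[rng.integers(pop_size)] - pop[i])
            else:                                       # "local" step
                a, b = rng.integers(pop_size, size=2)
                cand = pop[i] + rng.random() * (pop[a] - pop[b])
            if sphere(cand) < sphere(pop[i]):
                pop[i] = cand
        best = min(best, min(sphere(x) for x in pop))
    return best

# Single-stage sampling: score a fixed grid of settings once, keep the best.
settings = [(p, n) for p in (0.2, 0.5, 0.8) for n in (10, 20)]
scores = {s: run_fpa(*s) for s in settings}
best_setting = min(scores, key=scores.get)
```

    The appeal of this design, as the abstract notes, is practicality: one pass over a small sample of settings, rather than a nested meta-optimization.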

    Elite Opposition-Based Water Wave Optimization Algorithm for Global Optimization

    Water wave optimization (WWO) is a novel metaheuristic method based on shallow water wave theory; it has a simple structure, is easy to implement, and performs well even with a small population. To further improve convergence speed and calculation precision, this paper proposes an elite opposition-based water wave optimization (EOBWWO) algorithm and applies it to function optimization and structural engineering design problems. The improvement rests on three major optimization strategies: an elite opposition-based (EOB) learning strategy enhances population diversity, a local neighborhood search strategy is introduced to strengthen local search in the breaking operation, and an improved propagation operator gives the algorithm a better balance between exploration and exploitation. EOBWWO is verified on 20 benchmark functions and two structural engineering design problems, and its performance is compared against state-of-the-art algorithms. Experimental results show that the proposed algorithm has faster convergence speed, higher calculation precision (the exact solution is even obtained on some benchmark functions), and a higher degree of stability than the comparative algorithms.
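    Elite opposition-based learning, named in the abstract, is commonly implemented by reflecting each elite solution through the dynamic bounds of the elite sub-population: x' = k(lo + hi) - x with k in (0, 1]. A minimal sketch of that general rule (EOBWWO's exact variant may differ):

```python
import numpy as np

def elite_opposition(elites, k):
    """Generate opposite candidates for an elite sub-population using the
    elites' own per-dimension bounds: x' = k * (lo + hi) - x.
    Sketch of the generic EOB rule, not necessarily EOBWWO's exact one."""
    lo = elites.min(axis=0)        # per-dimension lower bound of the elites
    hi = elites.max(axis=0)        # per-dimension upper bound of the elites
    return k * (lo + hi) - elites

elites = np.array([[1.0, 2.0],
                   [3.0, 6.0]])
opposites = elite_opposition(elites, k=1.0)   # classic opposition when k = 1
```

    With k = 1 this reduces to classic opposition-based learning; the opposite of each elite lands on the far side of the elite region, which is what injects the extra diversity the abstract credits to the EOB strategy.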

    An Efficient Marine Predators Algorithm for Solving Multi-Objective Optimization Problems: Analysis and Validations

    Recently, a strong new optimization algorithm called the marine predators algorithm (MPA) was proposed for tackling single-objective optimization problems and achieved markedly good outcomes in comparison with competing algorithms. Those outcomes, together with our recently proposed strategies for helping meta-heuristic algorithms achieve better results on multi-objective optimization problems, motivated a comprehensive study of the performance of MPA, alone and with those strategies, on such problems. Specifically, this paper proposes four variants of MPA for solving multi-objective optimization problems. The first version, called the multi-objective marine predators algorithm (MMPA), is based on the behavior of marine predators in finding their prey. In the second version, a recently proposed strategy called dominance strategy-based exploration-exploitation (DSEE) is incorporated with MMPA to relate the exploration and exploitation phases of MPA to the dominance of the solutions; this version is called M-MMPA. DSEE counts the number of dominated solutions for each solution: solutions with high dominance undergo an exploitation phase, while those with small dominance undergo an exploration phase. The third version integrates M-MMPA with a novel Gaussian-based mutation strategy, which uses Gaussian distribution-based exploration and exploitation to search for the optimal solution. The fourth version uses the Nelder-Mead simplex method with M-MMPA (M-MMPA-NMM) at the start of the optimization process to construct a front of non-dominated solutions that helps M-MMPA find more good solutions. The effectiveness of the four versions is validated on a large set of theoretical and practical problems. For all the cases, the proposed algorithm and its variants are shown to be superior to a number of well-known multi-objective optimization algorithms.
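    The dominance count at the heart of DSEE can be sketched directly from the abstract's description: for each solution, count how many others it Pareto-dominates, then route high-count solutions to exploitation and low-count ones to exploration. The routing threshold below is an illustrative assumption:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if a is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def dominance_counts(objs):
    """For each solution, count how many others it dominates. In DSEE,
    high-count solutions undergo exploitation, the rest exploration."""
    return [sum(dominates(a, b) for b in objs if b is not a) for a in objs]

objs = [(1.0, 2.0), (2.0, 3.0), (3.0, 1.0), (4.0, 4.0)]
counts = dominance_counts(objs)                       # -> [2, 1, 1, 0]
phases = ["exploit" if c >= 2 else "explore" for c in counts]
```

    The first solution dominates two others and would be refined locally; the last dominates none and would be sent exploring, matching the exploration-exploitation split the abstract describes.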

    Evolving CNN-LSTM Models for Time Series Prediction Using Enhanced Grey Wolf Optimizer

    In this research, we propose an enhanced Grey Wolf Optimizer (GWO) for designing evolving Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) networks for time series analysis. To overcome the probability of stagnation at local optima and the slow convergence rate of the classical GWO algorithm, the newly proposed variant incorporates four distinctive search mechanisms: a nonlinear exploration scheme for dynamic search territory adjustment, a chaotic leadership dispatching strategy among the dominant wolves, a rectified spiral local exploitation action, and probability distribution-based leader enhancement. The evolving CNN-LSTM models are subsequently devised using the proposed GWO variant, where the network topology and learning hyperparameters are optimized for time series prediction and classification tasks. Evaluated on a number of benchmark problems, the proposed GWO-optimized CNN-LSTM models produce statistically significant improvements over several classical search methods and advanced GWO and Particle Swarm Optimization variants. Compared with the baseline methods, the CNN-LSTM networks devised by the proposed GWO variant offer better representational capacities, capturing the vital feature interactions and encapsulating the sophisticated dependencies in complex temporal contexts for time-series tasks.
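    The "nonlinear exploration scheme" contrasts with classical GWO, whose control parameter a decays linearly from 2 to 0 over the run. The abstract does not give the exact schedule, so the cosine decay below is purely an illustrative assumption of the general idea: keep a (and hence the search territory) larger early, then contract:

```python
import math

def linear_a(t, T):
    """Classical GWO control parameter: decays linearly from 2 to 0."""
    return 2.0 * (1.0 - t / T)

def nonlinear_a(t, T):
    """One possible nonlinear decay (cosine schedule): also runs 2 -> 0,
    but stays higher early in the run, widening early exploration.
    Illustrative assumption, not the paper's actual scheme."""
    return 1.0 + math.cos(math.pi * t / T)

T = 100
schedule = [(t, linear_a(t, T), nonlinear_a(t, T)) for t in (0, 25, 50, 75, 100)]
```

    At a quarter of the run the cosine schedule still gives a larger a than the linear one, i.e. more exploration, while both meet at the midpoint and finish at 0.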

    Evolutionary Computation 2020

    Intelligent optimization is based on the mechanisms of computational intelligence: refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, whale optimization algorithm, differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severe nonlinear problem in one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvement and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.

    Optimizing boiler combustion parameters based on evolution teaching-learning-based optimization algorithm for reducing NO<sub>x</sub> emission concentration

    How to reduce a boiler's NOx emission concentration is an urgent problem for thermal power plants. Therefore, in this paper we combine an evolution teaching-learning-based optimization algorithm with an extreme learning machine to optimize a boiler's combustion parameters for reducing NOx emission concentration. The evolution teaching-learning-based optimization algorithm (ETLBO) is a variant of the conventional teaching-learning-based optimization algorithm that uses a chaotic mapping function to initialize individuals' positions and incorporates the idea of genetic evolution into the learner phase. To verify the effectiveness of ETLBO, 20 benchmark test functions from the IEEE Congress on Evolutionary Computation are applied to test its convergence speed and convergence accuracy. Experimental results reveal that ETLBO shows the best convergence accuracy on most functions compared to other state-of-the-art optimization algorithms. In addition, ETLBO is used to reduce boilers' NOx emissions by optimizing combustion parameters such as the coal supply amount and the air valve. Results show that ETLBO is well suited to solving the boiler combustion optimization problem.
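    The abstract mentions only "a chaotic mapping function" for initialization; the logistic map x <- 4x(1 - x) is a common choice and is assumed here purely for illustration. A chaotic sequence in (0, 1) is generated and scaled into the search bounds:

```python
def logistic_chaotic_init(pop_size, dim, lo, hi, x0=0.7):
    """Initialize a population with the logistic map x <- 4x(1 - x), scaling
    the chaotic sequence into [lo, hi]. The logistic map is an assumed
    stand-in for ETLBO's unspecified chaotic mapping function."""
    x = x0                                   # seed away from fixed points
    pop = []
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = 4.0 * x * (1.0 - x)          # chaotic iteration in (0, 1)
            individual.append(lo + (hi - lo) * x)
        pop.append(individual)
    return pop

pop = logistic_chaotic_init(pop_size=5, dim=3, lo=-10.0, hi=10.0)
```

    Compared with uniform random initialization, the chaotic sequence is deterministic yet non-repeating, which is the usual motivation for chaotic initialization in TLBO variants.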

    Evolving machine learning and deep learning models using evolutionary algorithms

    Despite great success in data mining, machine learning and deep learning models are still subject to significant obstacles when tackling real-life challenges, such as feature selection, initialization sensitivity, and hyperparameter optimization. The prevalence of these obstacles has severely constrained conventional machine learning and deep learning methods from fulfilling their potential. In this research, three evolving machine learning models and one evolving deep learning model are proposed to eliminate the above bottlenecks, i.e. improving model initialization, enhancing feature representation, and optimizing model configuration, respectively, through hybridization of advanced evolutionary algorithms with conventional ML and DL methods. Specifically, two Firefly Algorithm based evolutionary clustering models are proposed to optimize cluster centroids in K-means and overcome initialization sensitivity as well as local stagnation. Secondly, a Particle Swarm Optimization based evolving feature selection model is developed for automatic identification of the most effective feature subset and reduction of feature dimensionality in classification problems. Lastly, a Grey Wolf Optimizer based evolving Convolutional Neural Network-Long Short-Term Memory method is devised for automatic generation of optimal topological and learning configurations for Convolutional Neural Network-Long Short-Term Memory networks to undertake multivariate time series prediction problems. Moreover, a variety of tailored search strategies are proposed to eliminate the intrinsic limitations embedded in the search mechanisms of the three employed evolutionary algorithms, i.e. the dictation of the global best signal in Particle Swarm Optimization, the constraint of diagonal movement in the Firefly Algorithm, and the acute contraction of the search territory in the Grey Wolf Optimizer, respectively. The remedy strategies include the diversification of guiding signals, adaptive nonlinear search parameters, hybrid position updating mechanisms, and the enhancement of population leaders. As such, the enhanced Particle Swarm Optimization, Firefly Algorithm, and Grey Wolf Optimizer variants are more likely to attain global optimality on the complex search landscapes embedded in data mining problems, owing to elevated search diversity and improved trade-offs between exploration and exploitation.

    Machine learning assisted optimization with applications to diesel engine optimization with the particle swarm optimization algorithm

    A novel approach to incorporating machine learning into optimization routines is presented. An approach that combines the benefits of ML, optimization, and meta-model searching is developed and tested on a multi-modal test problem, a modified Rastrigin's function. An enhanced Particle Swarm Optimization method was derived from the initial testing. Optimization of a diesel engine was carried out using the modified algorithm, demonstrating an improvement of 83% compared with the unmodified PSO algorithm. Additionally, an approach to enhancing the training of ML models by leveraging virtual sensing as an alternative to standard multi-layer neural networks is presented. Substantial gains were made in the prediction of particulate matter, reducing the MMSE by 50% and improving the correlation R^2 from 0.84 to 0.98. Improvements were made in models of PM, NOx, HC, CO, and fuel consumption using the method, while training times and convergence reliability were simultaneously improved over the traditional approach.
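    The abstract's baseline for comparison is unmodified PSO on a Rastrigin-type multi-modal function; neither the modification nor the "modified" Rastrigin is specified, so only the standard algorithm on the standard function is sketched here:

```python
import numpy as np

def rastrigin(x):
    """Standard Rastrigin function (global minimum 0 at the origin)."""
    return 10 * x.size + float(np.sum(x * x - 10 * np.cos(2 * np.pi * x)))

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Baseline (unmodified) PSO with inertia weight w and cognitive/social
    coefficients c1, c2. The ML-assisted enhancements from the abstract are
    not specified there and are not reproduced here."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.12, 5.12, (n, dim))          # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

best_x, best_f = pso(rastrigin)
```

    On the 2-D Rastrigin function this baseline typically settles in or near one of the deep basins; the thesis's enhancement targets exactly the premature convergence such multi-modal landscapes provoke.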