
    How does the number of objective function evaluations impact our understanding of metaheuristics behavior?

    Comparing various metaheuristics on an equal number of objective function evaluations has become standard practice, and many contemporary publications use the specific evaluation budget prescribed by the benchmark set definitions. Furthermore, many publications deal with the recurrent theme of late stagnation, which may give the impression that continuing the optimization process would be a waste of computational capabilities. But is it? Recently, many challenges, issues, and questions have been raised regarding fair comparisons and recommendations towards good practices for benchmarking metaheuristic algorithms. The aim of this work is not to compare the performance of several well-known algorithms but to investigate the issues that can appear when benchmarking and comparing metaheuristics, no matter what the problem is. This article studies the impact of a higher evaluation number on a selection of metaheuristic algorithms. We examine the effect of a raised evaluation budget on the overall performance, mean convergence, and population diversity of selected swarm algorithms and IEEE CEC competition winners. Even though the final impact varies with the selection of algorithms, it may significantly affect the final verdict of a metaheuristics comparison. This work singles out an important benchmarking issue and analyzes it extensively, resulting in conclusions and possible recommendations for users working on real engineering optimization problems and for researchers studying metaheuristic algorithms. Especially nowadays, when metaheuristic algorithms are used for increasingly complex optimization problems and meet machine learning in AutoML frameworks, we conclude that the objective function evaluation budget should be considered another vital optimization input variable.
    Funding: Internal Grant Agency of Tomas Bata University [IGA/CebiaTech/2021/001]; AI Laboratory, Faculty of Applied Informatics, Tomas Bata University in Zlín.
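
    A minimal sketch of the point, under assumptions not taken from the paper (a toy sphere objective, pure random search versus a simple (1+1) hill climber): the gap between two optimizers changes markedly with the evaluation budget, so the budget acts as an input variable of the comparison.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def random_search(f, dim, budget, rng):
    # Pure uniform sampling: quick early progress, stagnates late.
    best = np.inf
    for _ in range(budget):
        best = min(best, f(rng.uniform(-5, 5, dim)))
    return best

def hill_climber(f, dim, budget, rng, step=0.5):
    # (1+1)-style local perturbation: slower start, keeps improving.
    x = rng.uniform(-5, 5, dim)
    fx = f(x)
    for _ in range(budget - 1):
        y = x + rng.normal(0, step, dim)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return fx

# Means over 10 seeds at three budgets: the verdict depends on the budget.
for budget in (100, 1000, 10000):
    rs = np.mean([random_search(sphere, 10, budget, np.random.default_rng(s)) for s in range(10)])
    hc = np.mean([hill_climber(sphere, 10, budget, np.random.default_rng(s)) for s in range(10)])
    print(f"budget={budget:>6}: random search={rs:.3g}, hill climber={hc:.3g}")
```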

    Evolutionary framework with reinforcement learning-based mutation adaptation

    Although several multi-operator and multi-method approaches for solving optimization problems have been proposed, their performance is not consistent across a wide range of optimization problems. Moreover, ensuring the appropriate selection of algorithms and operators can be inefficient, since their designs are undertaken mainly through trial and error. This research proposes an improved optimization framework that combines the benefits of multiple algorithms, namely a multi-operator differential evolution algorithm and a covariance matrix adaptation evolution strategy. In the former, reinforcement learning is used to automatically choose the best differential evolution operator. To judge the performance of the proposed framework, three benchmark sets of bound-constrained optimization problems (73 problems in total) with 10, 30, and 50 dimensions are solved. Further, the proposed algorithm has been tested on 100-dimensional problems taken from the CEC2014 and CEC2017 benchmark sets, and a real-world application data set has also been solved. Several experiments are designed to analyze the effects of different components of the proposed framework, with the best variant compared against a number of state-of-the-art algorithms. The experimental results show that the proposed algorithm outperforms all the others considered.
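
    The learning-based operator choice can be illustrated with a minimal epsilon-greedy bandit over two stock DE mutation operators. Everything below (sphere objective, reward of 1 on improvement, the two-operator pool) is an assumption for illustration, not the paper's actual reinforcement learning scheme.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

rng = np.random.default_rng(0)
dim, NP, F, CR, eps = 10, 30, 0.5, 0.9, 0.1
pop = rng.uniform(-5, 5, (NP, dim))
fit = np.array([sphere(x) for x in pop])

def rand_1(i):
    a, b, c = pop[rng.choice(NP, 3, replace=False)]
    return a + F * (b - c)

def curr_to_best_1(i):
    a, b = pop[rng.choice(NP, 2, replace=False)]
    return pop[i] + F * (pop[np.argmin(fit)] - pop[i]) + F * (a - b)

ops = [rand_1, curr_to_best_1]
value = np.zeros(len(ops))   # running average reward per operator
count = np.zeros(len(ops))

for gen in range(200):
    for i in range(NP):
        # Epsilon-greedy: mostly pick the operator with the best credit.
        k = rng.integers(len(ops)) if rng.random() < eps else int(np.argmax(value))
        v = ops[k](i)
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True          # guarantee one mutated gene
        u = np.where(mask, v, pop[i])
        fu = sphere(u)
        reward = 1.0 if fu < fit[i] else 0.0    # credit improving operators
        count[k] += 1
        value[k] += (reward - value[k]) / count[k]
        if fu < fit[i]:
            pop[i], fit[i] = u, fu

print("best fitness:", fit.min(), "operator values:", value)
```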

    An improved multi-strategy beluga whale optimization for global optimization problems

    This paper presents an improved beluga whale optimization (IBWO) algorithm, mainly used to solve global optimization and engineering problems. The improvement is proposed to address the imbalance between exploration and exploitation and the insufficient convergence accuracy and speed of beluga whale optimization (BWO). In IBWO, a new group action strategy (GAS) replaces the exploration phase of BWO. It was inspired by the group hunting behavior of beluga whales in nature: the GAS keeps individual beluga whales together, allowing them to hide collectively from the threat posed by their natural enemy, the tiger shark, and enables the exchange of location information between individual beluga whales to enhance the balance between local and global search. On this basis, a dynamic pinhole imaging strategy (DPIS) and a quadratic interpolation strategy (QIS) are added to improve the global optimization ability and search rate of IBWO and to maintain diversity. In a comparison experiment, the performance of IBWO was tested using CEC2017 and CEC2020 benchmark functions of different dimensions. Performance was analyzed through experimental data, convergence curves, and box plots, and the results were tested using the Wilcoxon rank-sum test. The results show that IBWO has good optimization performance and robustness. Finally, the applicability of IBWO to practical engineering problems is verified on five engineering problems.
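
    A quadratic interpolation step of the general kind a QIS builds on fits a parabola through three solutions, per dimension, and jumps to its vertex. The function below is an illustrative sketch with a hypothetical degeneracy safeguard, not IBWO's exact formulation.

```python
import numpy as np

def quadratic_interpolation(x1, x2, x3, f1, f2, f3, eps=1e-12):
    # Vertex of the parabola through (x1,f1), (x2,f2), (x3,f3), per dimension.
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    # Fall back to the middle point where the parabola degenerates.
    safe = np.abs(den) > eps
    return np.where(safe, 0.5 * num / np.where(safe, den, 1.0), x2)

def sphere(x):
    return float(np.sum(x * x))

rng = np.random.default_rng(2)
xs = rng.uniform(-5, 5, (3, 4))                  # three candidate solutions
fs = [sphere(x) for x in xs]
cand = quadratic_interpolation(xs[0], xs[1], xs[2], *fs)
print("candidate:", cand, "fitness:", sphere(cand))
```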

    An Improved Differential Evolution Algorithm for Numerical Optimization Problems

    The differential evolution algorithm has gained popularity for solving complex optimization problems because of its simplicity and efficiency. However, it has several drawbacks, such as a slow convergence rate, high sensitivity to the values of its control parameters, and a tendency to become trapped in local optima. To overcome these drawbacks, this paper integrates three novel strategies into the original differential evolution. First, a population improvement strategy based on a multi-level sampling mechanism is used to accelerate convergence and increase population diversity. Second, a new self-adaptive mutation strategy balances the exploration and exploitation abilities of the algorithm by dynamically determining an appropriate value of the mutation parameters; this improves the search ability and helps the algorithm escape from local optima when it gets stuck. Third, a new selection strategy guides the search away from local optima. Twelve benchmark functions with different characteristics are used to validate the performance of the proposed algorithm. The experimental results show that the proposed algorithm performs significantly better than the original DE in terms of the ability to locate the global optimum, convergence speed, and scalability. In addition, the proposed algorithm finds the global optimal solutions on 8 of the 12 benchmark functions, whereas 7 other well-established metaheuristic algorithms, namely NBOLDE, ODE, DE, SaDE, JADE, PSO, and GA, achieve this on only 6, 2, 1, 1, 1, 1, and 1 functions, respectively. DOI: 10.28991/HIJ-2023-04-02-014
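
    For context, a baseline DE/rand/1/bin with a simple linearly decaying mutation factor illustrates the kind of dynamic parameter control the paper refines. The decay schedule and Rastrigin objective here are assumptions for illustration, not the paper's three strategies.

```python
import numpy as np

def rastrigin(x):
    return float(10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x)))

rng = np.random.default_rng(3)
dim, NP, CR, gens = 10, 40, 0.9, 500
pop = rng.uniform(-5.12, 5.12, (NP, dim))
fit = np.array([rastrigin(x) for x in pop])

for g in range(gens):
    F = 0.9 - 0.5 * g / gens        # decay F from 0.9 to 0.4 (illustrative)
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        v = a + F * (b - c)                     # DE/rand/1 mutation
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True          # binomial crossover
        u = np.where(mask, v, pop[i])
        fu = rastrigin(u)
        if fu <= fit[i]:                        # greedy one-to-one selection
            pop[i], fit[i] = u, fu

print("best:", fit.min())
```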

    A novel hybrid gravitational search particle swarm optimization algorithm

    The Particle Swarm Optimization (PSO) algorithm is a member of the swarm computation family and is widely used for solving nonlinear optimization problems. However, it tends to suffer from premature stagnation, becomes trapped in local minima, and loses exploration capability as iterations progress. By contrast, the Gravitational Search Algorithm (GSA) is proficient at searching for the global optimum, but its drawback is a slow search speed in the final phase. To overcome these problems, this paper presents a novel Hybrid Gravitational Search Particle Swarm Optimization Algorithm (HGSPSO). The key concept behind the proposed method is to merge the local search ability of GSA with the social-thinking capability (gbest) of PSO. To examine how effectively the method addresses the abovementioned issues of slow convergence and trapping in local minima, five standard and several modern CEC benchmark functions are used. Additionally, a DNA sequence problem is solved to confirm the proficiency of the proposed method, with parameters such as Hairpin, Continuity, H-measure, and Similarity employed as objective functions. A hierarchical approach was used to solve this multi-objective problem: a single objective function is first obtained through a weighted-sum method, and the results are then empirically validated. The proposed algorithm demonstrated outstanding performance in terms of solution stability and convergence.
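
    The merged velocity update at the heart of GSA-PSO hybrids can be sketched as follows: a GSA-style gravitational acceleration term provides local search while a PSO social term pulls toward gbest. Constants and the mass/force details follow the commonly published PSOGSA formulation and are illustrative rather than HGSPSO's exact equations.

```python
import numpy as np

def sphere(X):
    return np.sum(X * X, axis=1)

rng = np.random.default_rng(4)
n, dim, iters = 20, 5, 300
X = rng.uniform(-5, 5, (n, dim))
V = np.zeros((n, dim))
c1, c2, w, G0, eps = 0.5, 1.5, 0.8, 1.0, 1e-10

for t in range(iters):
    f = sphere(X)
    gbest = X[np.argmin(f)].copy()
    # GSA masses: best agent gets mass ~1, worst gets ~0.
    m = (f.max() - f) / (f.max() - f.min() + eps)
    M = m / (m.sum() + eps)
    G = G0 * np.exp(-20 * t / iters)            # decaying gravity constant
    acc = np.zeros_like(X)
    for i in range(n):
        diff = X - X[i]
        dist = np.linalg.norm(diff, axis=1) + eps
        # Random-weighted attraction toward every other agent.
        acc[i] = np.sum(rng.random((n, 1)) * G * M[:, None] * diff / dist[:, None], axis=0)
    # Hybrid update: inertia + GSA acceleration + PSO social (gbest) term.
    V = w * V + c1 * rng.random((n, dim)) * acc + c2 * rng.random((n, dim)) * (gbest - X)
    X = X + V

print("best fitness:", sphere(X).min())
```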

    Evolving CNN-LSTM Models for Time Series Prediction Using Enhanced Grey Wolf Optimizer

    In this research, we propose an enhanced Grey Wolf Optimizer (GWO) for designing evolving Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) networks for time series analysis. To overcome the classical GWO algorithm's tendency to stagnate at local optima and its slow convergence rate, the newly proposed variant incorporates four distinctive search mechanisms: a nonlinear exploration scheme for dynamic search-territory adjustment, a chaotic leadership dispatching strategy among the dominant wolves, a rectified spiral local exploitation action, and probability-distribution-based leader enhancement. The evolving CNN-LSTM models are subsequently devised using the proposed GWO variant, with the network topology and learning hyperparameters optimized for time series prediction and classification tasks. Evaluated on a number of benchmark problems, the proposed GWO-optimized CNN-LSTM models produce statistically significant improvements over several classical search methods and advanced GWO and Particle Swarm Optimization variants. Compared with the baseline methods, the CNN-LSTM networks devised by the proposed GWO variant offer better representational capacity, capturing not only the vital feature interactions but also the sophisticated dependencies in complex temporal contexts of time series tasks.
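
    For reference, the core GWO position update is sketched below with a hypothetical nonlinear decay of the control parameter a standing in for the paper's dynamic search-territory adjustment; the chaotic leadership, spiral exploitation, and leader-enhancement mechanisms of the proposed variant are not reproduced here.

```python
import numpy as np

def sphere(X):
    return np.sum(X * X, axis=1)

rng = np.random.default_rng(5)
n, dim, T = 20, 10, 300
X = rng.uniform(-5, 5, (n, dim))

for t in range(T):
    f = sphere(X)
    alpha, beta, delta = X[np.argsort(f)[:3]]   # three leading wolves
    a = 2 * (1 - (t / T) ** 2)                  # nonlinear decay (illustrative)
    new_X = np.zeros_like(X)
    for leader in (alpha, beta, delta):
        A = a * (2 * rng.random((n, dim)) - 1)
        C = 2 * rng.random((n, dim))
        D = np.abs(C * leader - X)
        new_X += (leader - A * D) / 3           # average of the three pulls
    X = new_X

print("best fitness:", sphere(X).min())
```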

    Tornado: An Autonomous Chaotic Algorithm for Large Scale Global Optimization

    In this paper, we propose an autonomous chaotic optimization algorithm, called Tornado, for large-scale global optimization problems. The algorithm introduces advanced symmetrization, levelling, and fine-search strategies for efficient and effective exploration of the search space and exploitation of the best found solutions. To our knowledge, this is the first accurate and fast autonomous chaotic algorithm for solving large-scale optimization problems. A panel of benchmark problems with different properties was used to assess the performance of the proposed chaotic algorithm. The obtained results show the scalability of the algorithm, in contrast to the chaotic optimization algorithms encountered in the literature. Moreover, in comparison with some state-of-the-art metaheuristics (e.g., evolutionary algorithms, swarm intelligence), the computational results reveal that the proposed Tornado algorithm is an effective and efficient optimization algorithm.
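
    A minimal chaotic-search sketch in this spirit: a logistic map drives candidate generation, each candidate is tried together with its mirrored (symmetrized) counterpart around the incumbent best, and the sampling radius shrinks level by level. Tornado's actual symmetrization, levelling, and fine-search operators are more elaborate; everything below is an assumption for illustration.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

rng = np.random.default_rng(6)
dim, lo, hi = 20, -5.0, 5.0
z = rng.uniform(0.1, 0.9, dim)                  # logistic map state per dim
best = rng.uniform(lo, hi, dim)
f_best = sphere(best)

for level in range(30):
    radius = (hi - lo) * 0.5 ** level           # shrinking search territory
    for _ in range(50):
        z = 4.0 * z * (1.0 - z)                 # chaotic logistic iteration
        step = radius * (2.0 * z - 1.0)
        for cand in (best + step, best - step): # symmetric pair around best
            c = np.clip(cand, lo, hi)
            fc = sphere(c)
            if fc < f_best:
                best, f_best = c, fc

print("best fitness:", f_best)
```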

    Applied (Meta)-Heuristic in Intelligent Systems

    Get PDF
    Engineering and business problems are becoming increasingly difficult to solve due to the new economics triggered by big data, artificial intelligence, and the Internet of Things. Exact algorithms and heuristics are insufficient for solving such large and unstructured problems; instead, metaheuristic algorithms have emerged as the prevailing methods. A generic metaheuristic framework guides the course of search trajectories beyond local optimality, thus overcoming the limitations of traditional computation methods. The applications of modern metaheuristics range from unmanned aerial and ground surface vehicles, unmanned factories, resource-constrained production, and humanoids to green logistics, renewable energy, circular economy, agricultural technology, environmental protection, finance technology, and the entertainment industry. This Special Issue presents high-quality papers proposing modern metaheuristics in intelligent systems.