Leo: Lagrange Elementary Optimization
Global optimization problems are frequently solved with practical, efficient
evolutionary algorithms. However, as the underlying problem grows more
complex, their efficacy and scalability suffer. This research therefore
introduces Lagrange Elementary Optimization (Leo), a self-adaptive
evolutionary method inspired by the remarkable accuracy of vaccination,
modeled on the albumin quotient of human blood. Leo develops intelligent
agents from their fitness-function values after gene crossover, and these
genes direct the search agents during both exploration and exploitation. This
paper presents the main objective of the Leo algorithm along with the
inspiration and motivation behind the concept. To demonstrate its precision,
the proposed algorithm is validated on a variety of test functions, including
19 classic benchmark functions and the CEC-C06 2019 test functions. On the 19
classic benchmarks, Leo is evaluated against DA, PSO, and GA separately, and
then against two more recent algorithms, FDO and LPB. In addition, Leo is
tested on ten CEC-C06 2019 functions against the DA, WOA, SSA, FDO, LPB, and
FOX algorithms. The cumulative outcomes demonstrate Leo's capacity to improve
the starting population and move toward the global optimum. Different standard
measurements verify the stability of Leo in both the exploration and
exploitation phases, and statistical analysis supports the findings of the
proposed research. Finally, novel real-world applications are introduced to
demonstrate the practicality of Leo.
Comment: 28 pages
A New Lagrangian Problem Crossover: A Systematic Review and Meta-Analysis of Crossover Standards
The performance of most evolutionary metaheuristic algorithms relies on
various operators. One of them is the crossover operator, which comes in two
types: application-dependent and application-independent crossover operators.
These standards help select the best-fitted point in the evolutionary
algorithm process. Highly efficient crossover operators allow engineers to
minimize errors in engineering optimization applications while saving time
and cost. This paper has two crucial objectives. The first is to provide an
overview of the classification of crossover standards that researchers have
used for solving engineering operations and problem representation. The
second is to propose a novel crossover standard based on the Lagrangian Dual
Function (LDF), formulated as the Lagrangian Problem Crossover (LPX), a new
systematic operator. The results of the proposed crossover standard over 100
generations of parent chromosomes are compared to the blend crossover (BX)
and simulated binary crossover (SBX) standards, the most common real-coded
crossover standards. The accuracy and performance of the proposed standard
are evaluated on three unimodal test functions. Moreover, the results are
statistically analyzed and demonstrate a strong ability to generate and
enhance novel optimization algorithms compared to BX and SBX.
Comment: 27 pages
The Fifteen Puzzle—A New Approach through Hybridizing Three Heuristics Methods
The Fifteen Puzzle is one of the most classical problems and has captivated mathematics enthusiasts since the late nineteenth century. This is mainly because of the huge state space of approximately 10^13 states that must be explored, and several algorithms have been applied to solve Fifteen Puzzle instances. In this paper, to manage this large state space, the bidirectional A* (BA*) search algorithm is used with three heuristics: Manhattan distance (MD), linear conflict (LC), and walking distance (WD). The three heuristics are hybridized in a way that dramatically reduces the number of states generated by the algorithm. Moreover, all these heuristics require only 25 KB of storage, yet they help the algorithm effectively reduce the number of generated states and expand fewer nodes. Our implementation of the BA* search significantly reduces the space complexity and guarantees either optimal or near-optimal solutions.
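The paper's exact hybridization is not reproduced here, but two of the three heuristics have standard definitions that can be sketched. The following is a minimal Python illustration, assuming the common encoding of a board as a 16-tuple with 0 for the blank and goal ordering 1..15; `heuristic` is a hypothetical name for the MD + LC combination, which stays admissible because the conflict penalty counts moves Manhattan distance cannot see:

```python
def manhattan(state, size=4):
    """Sum of tile distances from their goal cells (blank ignored)."""
    dist = 0
    for idx, tile in enumerate(state):
        if tile:
            goal = tile - 1
            dist += abs(idx // size - goal // size) + abs(idx % size - goal % size)
    return dist

def _line_penalty(tiles):
    # Tiles already in their goal row/column that sit in reversed order
    # conflict; resolving one forces a tile off the line and back, i.e.
    # at least 2 extra moves. Removing the most-conflicted tile first
    # keeps the estimate admissible (no pairwise over-counting).
    conf = {i: set() for i in range(len(tiles))}
    for i in range(len(tiles)):
        for j in range(i + 1, len(tiles)):
            if tiles[i] > tiles[j]:
                conf[i].add(j)
                conf[j].add(i)
    penalty = 0
    while any(conf.values()):
        k = max(conf, key=lambda t: len(conf[t]))
        for other in conf[k]:
            conf[other].discard(k)
        conf[k] = set()
        penalty += 2
    return penalty

def linear_conflicts(state, size=4):
    penalty = 0
    for r in range(size):
        row = [state[r * size + c] for c in range(size)]
        penalty += _line_penalty([t for t in row if t and (t - 1) // size == r])
    for c in range(size):
        col = [state[r * size + c] for r in range(size)]
        penalty += _line_penalty([t for t in col if t and (t - 1) % size == c])
    return penalty

def heuristic(state):
    return manhattan(state) + linear_conflicts(state)
```

For example, swapping tiles 1 and 2 in the goal board gives Manhattan distance 2 plus a linear-conflict penalty of 2, for a heuristic value of 4, whereas the true solution length for that position is higher still; the combined estimate never exceeds it.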
Enhancing Algorithm Selection through Comprehensive Performance Evaluation: Statistical Analysis of Stochastic Algorithms
Analyzing stochastic algorithms for comprehensive performance comparison across diverse contexts is essential. By evaluating algorithm effectiveness across a wide spectrum of test functions, including both classical benchmarks and the CEC-C06 2019 conference functions, distinct patterns of performance emerge in specific situations, underscoring the importance of choosing algorithms contextually. Additionally, a critical issue arises when researchers apply a statistical model arbitrarily to determine significance values, without further study to select a model suited to evaluating the performance outcomes. To address this concern, this study employs rigorous statistical testing to underscore substantial performance variations between pairs of algorithms, thereby emphasizing the pivotal role of statistical significance in comparative analysis. It also yields valuable insights into the suitability of algorithms for various optimization challenges, helping practitioners make informed decisions by pinpointing algorithm pairs with favorable statistical distributions, thus facilitating practical algorithm selection. The study encompasses multiple nonparametric statistical hypothesis models, including the Wilcoxon rank-sum test, single-factor analysis, and two-factor ANOVA tests. This thorough evaluation deepens our grasp of algorithm performance across various evaluation criteria. Notably, the research addresses discrepancies in previous statistical test findings in algorithm comparisons, enhancing the reliability of results in later research. The results show that significance outcomes do differ, as seen in examples such as Leo versus FDO and DA versus WOA, highlighting the need to tailor test models to specific scenarios, since p-value outcomes vary among tests applied to the same algorithm pair.