Application of Pigeon Inspired Optimization for Multidimensional Knapsack Problem
The multidimensional knapsack problem (MKP) is a generalization of the classical knapsack problem: a resource-allocation problem that seeks the subset of objects with the highest total profit while satisfying the knapsack capacity constraints. The MKP has many practical applications in different areas and is classified as an NP-hard problem. Exact methods such as branch and bound and dynamic programming can solve the problem, but their computation time increases exponentially with problem size, whereas approximation methods have been developed to produce near-optimal solutions within reasonable computational times. In this paper, pigeon inspired optimization (PIO) is proposed for solving the MKP. PIO is a population-based swarm-intelligence metaheuristic modelled on the homing behaviour of pigeons, which can find their way home even from far away. The PIO implementation for the MKP is applied to two cases with different characteristics, comprising 10 cases in total. The results of the two best combinations of parameter values on the 10 cases, compared with particle swarm optimization, the intelligent water drop algorithm and the genetic algorithm, are satisfactory.
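As context for the abstract above, the MKP's solution evaluation and a simple profit-density repair step (a common ingredient of population-based MKP solvers, though not the specific PIO operators of this paper) can be sketched as follows; all function names are illustrative:

```python
def evaluate(x, profits, weights, capacities):
    """Total profit of binary selection x, or None if any capacity is violated."""
    for j, cap in enumerate(capacities):
        if sum(w[j] * xi for w, xi in zip(weights, x)) > cap:
            return None
    return sum(p * xi for p, xi in zip(profits, x))

def repair(x, profits, weights, capacities):
    """Drop lowest profit-density items until every knapsack constraint holds."""
    order = sorted(range(len(x)), key=lambda i: profits[i] / sum(weights[i]))
    x = list(x)
    for i in order:
        if evaluate(x, profits, weights, capacities) is not None:
            break
        x[i] = 0
    return x
```

A metaheuristic such as PIO would typically generate candidate binary vectors and pass infeasible ones through a repair of this kind before scoring them.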
Flower pollination algorithm with pollinator attraction
The Flower Pollination Algorithm (FPA) is a highly efficient optimization algorithm that is inspired by the evolution process of flowering plants. In the present study, a modified version of FPA is proposed that accounts for an additional feature of flower pollination in nature, the so-called pollinator attraction. Pollinator attraction represents the natural tendency of flower species to evolve in order to attract pollinators by using their colour, shape and scent, as well as nutritious rewards. To reflect this evolution mechanism, the proposed FPA variant with Pollinator Attraction (FPAPA) gives fitter flowers of the population higher probabilities of achieving pollen transfer via biotic pollination than other flowers. FPAPA is tested against a set of 28 benchmark mathematical functions, defined in IEEE-CEC’13 for real-parameter single-objective optimization problems, as well as structural optimization problems. Numerical experiments show that the modified FPA represents a statistically significant improvement upon the original FPA and that it can outperform other state-of-the-art optimization algorithms, offering better and more robust optimal solutions. Additional research is suggested to combine FPAPA with other modified and hybridized versions of FPA to further increase its performance on challenging optimization problems.
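The pollinator-attraction idea as described in the abstract, fitter flowers receiving a higher probability of being chosen for biotic pollen transfer, can be sketched as a roulette-wheel donor selection; this is an illustrative reading, not the exact FPAPA formulation:

```python
import random

def attraction_probabilities(fitnesses):
    """Map fitnesses (lower is better here) to selection weights so that
    fitter flowers are likelier pollen donors -- 'pollinator attraction'."""
    worst = max(fitnesses)
    weights = [worst - f + 1e-12 for f in fitnesses]
    total = sum(weights)
    return [w / total for w in weights]

def pick_donor(fitnesses, rng=random):
    """Roulette-wheel pick of a donor flower index, biased toward fitter ones."""
    probs = attraction_probabilities(fitnesses)
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

In a plain FPA the donor for biotic pollination is effectively uniform; replacing that uniform choice with a fitness-biased one is the essence of the modification described.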
Flower pollination algorithm parameters tuning
The flower pollination algorithm (FPA) is a highly efficient metaheuristic optimization algorithm that is inspired by the pollination process of flowering species. FPA is characterised by simplicity in its formulation and high computational performance. Previous studies on FPA assume fixed parameter values based on empirical observations or on experimental comparisons of limited scale and scope. In this study, a comprehensive effort is made to identify values of the FPA parameters that maximize its computational performance. To serve this goal, a simple non-iterative, single-stage sampling tuning method is employed, oriented towards practical applications of FPA. The tuning method is applied to the set of 28 functions specified in IEEE-CEC'13 for real-parameter single-objective optimization problems. It is found that the optimal FPA parameters depend significantly on the objective function, the problem dimensions and the affordable computational cost. Furthermore, it is found that the FPA parameters that minimize mean prediction errors do not always offer the most robust predictions. At the end of this study, recommendations are made for setting the optimal FPA parameters as a function of problem dimensions and affordable computational cost. [Abstract copyright: © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021.]
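A single-stage, non-iterative sampling tuner of the kind the abstract describes can be sketched generically as below; the parameter name `switch_p` and the toy scoring function are assumptions for illustration, not the study's actual setup:

```python
import random

def tune_once(run_algorithm, param_space, n_samples=20, seed=0):
    """Single-stage sampling tuner: draw parameter settings uniformly once,
    score each with one run, and keep the best -- no iterative refinement."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_samples):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_space.items()}
        score = run_algorithm(params)  # lower is better
        if best is None or score < best[0]:
            best = (score, params)
    return best

# Toy stand-in for "run FPA on a benchmark": penalise distance from p = 0.8.
score, params = tune_once(lambda p: abs(p["switch_p"] - 0.8),
                          {"switch_p": (0.0, 1.0)}, n_samples=200)
```

In the real setting, `run_algorithm` would execute FPA on a CEC'13 function under a fixed evaluation budget and return the achieved error.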
Nature-inspired Methods for Stochastic, Robust and Dynamic Optimization
Nature-inspired algorithms enjoy great popularity in the current scientific community, being the focus of many research contributions in the literature year after year. The rationale behind the momentum acquired by this broad family of methods lies in their outstanding performance demonstrated in hundreds of research fields and problem instances. This book gravitates around the development of nature-inspired methods and their application to stochastic, dynamic and robust optimization. Topics covered by this book include the design and development of evolutionary algorithms, bio-inspired metaheuristics and memetic methods, with empirical, innovative findings when used in different subfields of mathematical optimization, such as stochastic, dynamic, multimodal and robust optimization, as well as noisy optimization and dynamic and constraint satisfaction problems.
Nature-Inspired Algorithms in Optimization: Introduction, Hybridization and Insights
Many problems in science and engineering are optimization problems, which may require sophisticated optimization techniques to solve. Nature-inspired algorithms are a class of metaheuristic algorithms for optimization, and some algorithms or variants are often developed by hybridization. Benchmarking is also important in evaluating the performance of optimization algorithms. This chapter provides an overview of optimization, nature-inspired algorithms and the role of hybridization. We also highlight some issues with the hybridization of algorithms.
Comment: 15 pages, 4 figures
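One common form of hybridization is the memetic combination of a population-based recombination step with a local search; a minimal sketch, not tied to any specific algorithm in the chapter:

```python
import random

def hill_climb(x, f, step=0.1, iters=100, rng=random):
    """Local refinement: accept a random perturbation only if it improves f."""
    fx = f(x)
    for _ in range(iters):
        y = [xi + rng.uniform(-step, step) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

def memetic_step(population, f, rng=random):
    """One hybrid generation: recombine two random parents (global search),
    polish the child with hill climbing (local search), drop the worst."""
    a, b = rng.sample(population, 2)
    child = [(ai + bi) / 2 for ai, bi in zip(a, b)]
    child, _ = hill_climb(child, f, rng=rng)
    population.append(child)
    population.sort(key=f)
    return population[:len(population) - 1]
```

The design choice here is the classic division of labour: the recombination step explores between existing solutions, while the local search exploits the neighbourhood of each new candidate.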
Design and Optimization of PID Controller using Various Algorithms for Micro-Robotics System
Microparticles have the potential to be used for many medical purposes inside the human body, such as drug delivery and other operations. This paper attempts to provide a thorough comparison between five meta-heuristic search algorithms: the Sparrow Search Algorithm (SSA), Flower Pollination Algorithm (FPA), Slime Mould Algorithm (SMA), Marine Predator Algorithm (MPA), and Multi-Verse Optimizer (MVO). These approaches were used to calculate the optimal PID controller parameters under different cost functions, including the Integral Absolute Error (IAE), Integral of Time Multiplied by Square Error (ITSE), Integral Square Time multiplied square Error (ISTES), Integral Square Error (ISE), Integral of Square Time multiplied by square Error (ISTSE), and Integral of Time multiplied by Absolute Error (ITAE). Each control method was implemented in a MATLAB Simulink numerical model, and LabVIEW software was used to run the experimental tests. It is observed that the MPA technique yields the highest settling error in both simulation and experiment among the compared approaches, while the SSA approach reduces the settling error by 50% compared to former experiments. The results indicate that SSA is the best method among all approaches and that ISTES is the best cost function for optimizing the PID controller parameters.
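The error-integral cost functions listed above have straightforward discrete approximations over a sampled error signal; a minimal sketch for four of them (the function name and sampling scheme are assumptions for illustration):

```python
def control_costs(errors, dt):
    """Discrete approximations of common PID tuning cost functionals over a
    sampled error signal e(t) with sample time dt:
      IAE  = integral |e| dt,    ISE  = integral e^2 dt,
      ITAE = integral t*|e| dt,  ITSE = integral t*e^2 dt."""
    iae = sum(abs(e) for e in errors) * dt
    ise = sum(e * e for e in errors) * dt
    itae = sum(k * dt * abs(e) for k, e in enumerate(errors)) * dt
    itse = sum(k * dt * e * e for k, e in enumerate(errors)) * dt
    return {"IAE": iae, "ISE": ise, "ITAE": itae, "ITSE": itse}
```

A metaheuristic tuner then minimises one of these values over the PID gains, with each candidate gain set scored by simulating the closed loop and feeding the resulting error samples into a function like this.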
HEDCOS: High Efficiency Dynamic Combinatorial Optimization System using Ant Colony Optimization algorithm
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Dynamic combinatorial optimization is gaining popularity among industrial practitioners due to the ever-increasing scale of their optimization problems and their efforts to remain competitive by solving them. Larger optimization problems are not only more computationally intensive to optimize but also carry more uncertainty in their inputs. If some aspects of the problem are subject to dynamic change, it becomes a Dynamic Optimization Problem (DOP).
In this thesis, a High Efficiency Dynamic Combinatorial Optimization System is built to solve challenging DOPs with high-quality solutions. The system is created using an Ant Colony Optimization (ACO) baseline algorithm extended with three novel developments.
First, an extension method for the ACO algorithm, called Dynamic Impact, is introduced. Dynamic Impact is designed to improve convergence and solution quality when solving challenging optimization problems with a non-linear relationship between resource consumption and fitness. The proposed method is tested against the real-world Microchip Manufacturing Plant Production Floor Optimization (MMPPFO) problem and the theoretical benchmark Multidimensional Knapsack Problem (MKP).
Second, a non-stochastic dataset generation method is introduced to address the replicability problem in dynamic optimization research. This method uses a static benchmark dataset as a starting point and as a source of entropy to generate a sequence of dynamic states. Using this method, 1405 Dynamic Multidimensional Knapsack Problem (DMKP) benchmark datasets were generated and published, with well-known static MKP benchmark instances as the initial states.
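The idea of using a static instance as the source of entropy for a reproducible sequence of dynamic states could be sketched, for example, with a hash chain seeded by the instance itself; this is an illustrative analogue, not the thesis's actual generator:

```python
import hashlib

def dynamic_states(capacities, n_states, swing=0.2):
    """Derive a reproducible sequence of perturbed capacity vectors from a
    static instance: the instance seeds a SHA-256 hash chain, so every
    researcher regenerates the identical sequence (no stochastic drift)."""
    digest = hashlib.sha256(repr(capacities).encode()).digest()
    states = []
    current = list(capacities)
    for _ in range(n_states):
        new = []
        for i, c in enumerate(current):
            byte = digest[i % len(digest)]
            # scale factor in [1 - swing/2, 1 + swing/2], fully determined
            factor = 1.0 + swing * (byte / 255.0 - 0.5)
            new.append(c * factor)
        digest = hashlib.sha256(digest).digest()
        current = new
        states.append(list(current))
    return states
```

Because the chain depends only on the initial instance, two independent runs produce bit-identical dynamic benchmarks, which is exactly the replicability property the abstract describes.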
Third, a nature-inspired discrete dynamic optimization strategy for ACO is introduced, modelling real-world ants’ symbiotic relationship with aphids. The ACO with Aphids strategy is designed to solve discrete-domain DOPs with event-triggered discrete dynamism. The strategy improves inter-state convergence by allowing better solution recovery after dynamic environment changes: the aphids carry information over from previous dynamic optimization states to maximize initial performance and minimize the impact on convergence speed. The strategy is tested on the DMKP against otherwise identical ACO implementations using Full-Restart and Pheromone-Sharing strategies, with all other variables isolated.
Overall, the Dynamic Impact and ACO with Aphids developments are compounding. Using Dynamic Impact on single-objective optimization of MMPPFO, the fitness value was improved by 33.2% over the ACO algorithm without Dynamic Impact. MKP benchmark instances of low complexity were solved to a 100% success rate even when a high degree of solution sparseness was observed, and on large-complexity instances the average gap improved by 4.26 times. ACO with Aphids also demonstrated superior performance over the Pheromone-Sharing strategy in every test, with the average gap reduced by 29.2%, for a total compounded dynamic optimization performance improvement of 6.02 times. ACO with Aphids likewise outperformed the Full-Restart strategy for the large dataset groups, with the overall average gap reduced by 52.5%, for a total compounded dynamic optimization performance improvement of 8.99 times.
Dynamic Impact for Ant Colony Optimization algorithm
This paper proposes an extension method for the Ant Colony Optimization (ACO) algorithm called Dynamic Impact. Dynamic Impact is designed to solve challenging optimization problems that have a nonlinear relationship between resource consumption and fitness relative to other parts of the optimized solution. The proposed method is tested against the complex real-world Microchip Manufacturing Plant Production Floor Optimization (MMPPFO) problem, as well as the theoretical benchmark Multi-Dimensional Knapsack Problem (MKP). MMPPFO is a non-trivial optimization problem, because the fitness value of a solution depends on the collection of wafer-lots as a whole, without prioritization of any individual wafer-lot. Using Dynamic Impact on single-objective optimization, the fitness value is improved by 33.2%. Furthermore, MKP benchmark instances of small complexity were solved to a 100% success rate even where a high degree of solution sparseness is observed, and on large instances the average gap improved by 4.26 times. The algorithm implementation demonstrated superior performance across small and large datasets and on sparse optimization problems.
Intel Corporation
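For context, a generic ACO solution-construction pass for the MKP (standard pheromone-and-profit-biased item selection, not the paper's Dynamic Impact extension) might look like this; names and structure are illustrative:

```python
import random

def construct_solution(pheromone, profits, weights, capacities, rng=random):
    """One generic ACO construction pass for the MKP: repeatedly add a
    still-feasible item with probability proportional to pheromone * profit,
    stopping when no further item fits in every knapsack dimension."""
    x = [0] * len(profits)
    used = [0.0] * len(capacities)
    while True:
        feasible = [i for i in range(len(x)) if not x[i] and
                    all(used[j] + weights[i][j] <= capacities[j]
                        for j in range(len(capacities)))]
        if not feasible:
            return x
        w = [pheromone[i] * profits[i] for i in feasible]
        r, acc = rng.uniform(0, sum(w)), 0.0
        for i, wi in zip(feasible, w):  # roulette-wheel selection
            acc += wi
            if r <= acc:
                break
        x[i] = 1
        used = [u + wj for u, wj in zip(used, weights[i])]
```

The Dynamic Impact extension described in the abstract would additionally account for the nonlinear coupling between an item's resource consumption and overall fitness, which this plain construction ignores.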