Comparative Study of Meta-heuristics Optimization Algorithm using Benchmark Function
Meta-heuristic optimization is becoming a popular tool for solving numerous real-world problems because it can overcome many shortcomings of traditional optimization. Despite this good performance, some algorithms have limitations and deteriorate to some degree on certain problem types. It is therefore necessary to compare the performance of these algorithms on such problem types. This paper compares 7 meta-heuristic optimization algorithms on 11 benchmark functions that exhibit particular difficulties and can be regarded as simulations relevant to real-world problems. The tested benchmark functions cover different problem characteristics such as modality, separability, discontinuity and surface effects, including functions with a steep-drop global optimum as well as bowl- and plateau-type functions. Some of the functions combine several of these characteristics, which may increase the difficulty of the search for the global optimum. The performance comparison includes computation time and convergence to the global optimum.
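For illustration, two commonly used benchmark functions with the properties mentioned above (a unimodal, separable bowl-type function and a highly multimodal one) can be written as follows; the 11 functions actually used in the paper are not named here, so these are only representative examples.

```python
import numpy as np

def sphere(x):
    # Unimodal, separable, bowl-type function; global minimum 0 at the origin.
    return np.sum(x ** 2)

def rastrigin(x):
    # Highly multimodal, separable function; global minimum 0 at the origin.
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

# Evaluate a random candidate solution in 5 dimensions.
x = np.random.uniform(-5.12, 5.12, size=5)
print(sphere(x), rastrigin(x))
```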
Evolutionary Dynamic Multi-Objective Optimisation: A survey
Cooperative Particle Swarm Optimization for Combinatorial Problems
A particularly successful line of research for numerical optimization is the well-known computational paradigm particle swarm optimization (PSO). In the PSO framework, candidate solutions are represented as particles that have a position and a velocity in a multidimensional search space. The direct representation of a candidate solution as a point that flies through hyperspace (i.e., Rn) seems to strongly predispose the PSO toward continuous optimization. However, while some attempts have been made towards developing PSO algorithms for combinatorial problems, these techniques usually encode candidate solutions as permutations instead of points in search space and rely on additional local search algorithms.
In this dissertation, I present extensions to PSO that, by incorporating a cooperative strategy, allow the PSO to solve combinatorial problems. The central hypothesis is that by allowing a set of particles, rather than one single particle, to represent a candidate solution, combinatorial problems can be solved by collectively constructing solutions. The cooperative strategy partitions the problem into components, where each component is optimized by an individual particle. Particles move in continuous space and communicate through a feedback mechanism that guides them in assessing their individual contribution to the overall solution (a generic sketch of this scheme follows the abstract).
Three new PSO-based algorithms are proposed. Shared-space CCPSO and multi-space CCPSO provide two new cooperative strategies for splitting the combinatorial problem, and both models are tested on proven NP-hard problems. Multimodal CCPSO extends these combinatorial PSO algorithms to efficiently sample the search space in problems with multiple global optima. Shared-space CCPSO was evaluated on an abductive problem-solving task: the construction of parsimonious sets of independent hypotheses in diagnostic problems with direct causal links between disorders and manifestations. Multi-space CCPSO was used to solve a protein structure prediction subproblem, side-chain packing. Both models are evaluated against provably optimal solutions, and the results show that both proposed PSO algorithms are able to find optimal or near-optimal solutions. The exploratory ability of multimodal CCPSO is assessed by evaluating both the quality and diversity of the solutions obtained in a protein sequence design problem, a highly multimodal problem. These results provide evidence that the extended PSO algorithms are capable of dealing with combinatorial problems without hybridizing the PSO with other local search techniques or sacrificing the concept of particles moving through a continuous search space.
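As an illustration of the cooperative strategy described above, the following minimal sketch assigns one sub-swarm of particles per solution component and scores each particle by its contribution to a shared context vector; it is a generic cooperative-coevolution PSO, not the dissertation's shared-space or multi-space CCPSO, and all parameter names are assumptions.

```python
import numpy as np

def cooperative_pso(objective, n_components, n_particles=20, iters=200,
                    w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
    """Cooperative-coevolution PSO sketch: one sub-swarm per solution component.

    `objective` scores a full vector of components; each sub-swarm optimises its
    own component while the remaining components are filled in from the current
    best-so-far context vector (the feedback mechanism in the abstract).
    """
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_components, n_particles))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.full((n_components, n_particles), np.inf)
    context = pos[:, 0].copy()            # current collaborative solution
    context_val = objective(context)

    for _ in range(iters):
        for c in range(n_components):
            for p in range(n_particles):
                trial = context.copy()
                trial[c] = pos[c, p]
                val = objective(trial)    # feedback: contribution to the whole
                if val < pbest_val[c, p]:
                    pbest_val[c, p], pbest[c, p] = val, pos[c, p]
                if val < context_val:
                    context_val, context[c] = val, pos[c, p]
            r1, r2 = np.random.rand(n_particles), np.random.rand(n_particles)
            vel[c] = (w * vel[c]
                      + c1 * r1 * (pbest[c] - pos[c])
                      + c2 * r2 * (context[c] - pos[c]))
            pos[c] = np.clip(pos[c] + vel[c], lo, hi)
    return context, context_val

# Usage example on a toy continuous objective:
# best, val = cooperative_pso(lambda v: np.sum(v ** 2), n_components=8)
```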
Hybrid Algorithm for Solving the Quadratic Assignment Problem
The Quadratic Assignment Problem (QAP) is a combinatorial optimization problem belonging to the class of NP-hard problems. It arises in various fields such as hospital layout, scheduling of parallel production lines and analysis of chemical reactions of organic compounds. In this paper we propose an application of the Golden Ball algorithm combined with Simulated Annealing (GBSA) to solve the QAP. The algorithm is based on concepts from football. The simulated annealing search can become trapped in a local optimum when movements are rejected; our proposed strategy guides the simulated annealing search to escape from local optima and to explore the search space efficiently. To validate the proposed approach, numerous simulations were conducted on 64 instances of QAPLIB to compare GBSA with existing algorithms in the QAP literature. The numerical results obtained show that GBSA produces optimal solutions in reasonable time and with better computational times. This work demonstrates that our proposed adaptation is effective in solving the quadratic assignment problem.
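For context, the QAP objective and the standard simulated-annealing swap move that GBSA builds on can be sketched as follows; the Golden Ball team mechanics and the paper's specific escape strategy are not reproduced here.

```python
import math, random

def qap_cost(perm, dist, flow):
    # Quadratic assignment cost: sum of flow(i, j) * dist(perm[i], perm[j]).
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def sa_swap_step(perm, dist, flow, temperature):
    # One simulated-annealing move: swap two facilities, accept by the
    # Metropolis rule (always accept improvements, sometimes accept worse).
    i, j = random.sample(range(len(perm)), 2)
    cand = perm[:]
    cand[i], cand[j] = cand[j], cand[i]
    delta = qap_cost(cand, dist, flow) - qap_cost(perm, dist, flow)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return cand
    return perm
```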
HEDCOS: High Efficiency Dynamic Combinatorial Optimization System using Ant Colony Optimization algorithm
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Dynamic combinatorial optimization is gaining popularity among industrial practitioners due to the ever-increasing scale of their optimization problems and the effort required to solve them in order to remain competitive. Larger optimization problems are not only more computationally intensive to optimize but also carry more uncertainty in their inputs. If some aspects of the problem are subject to dynamic change, it becomes a Dynamic Optimization Problem (DOP).
In this thesis, a High Efficiency Dynamic Combinatorial Optimization System is built to solve challenging DOPs with high-quality solutions. The system is created using an Ant Colony Optimization (ACO) baseline algorithm with three novel developments.
First, an extension method for the ACO algorithm called Dynamic Impact is introduced. Dynamic Impact is designed to improve convergence and solution quality when solving challenging optimization problems with a non-linear relationship between resource consumption and fitness. The proposed method is tested on the real-world Microchip Manufacturing Plant Production Floor Optimization (MMPPFO) problem and on the theoretical benchmark Multidimensional Knapsack Problem (MKP).
Second, a non-stochastic dataset generation method is introduced to address the replicability problem in dynamic optimization research (see the sketch after this abstract). This method uses a static benchmark dataset as a starting point and as a source of entropy to generate a sequence of dynamic states. Using this method, 1405 Dynamic Multidimensional Knapsack Problem (DMKP) benchmark datasets were generated and published, using well-known static MKP benchmark instances as the initial state.
Third, a nature-inspired discrete dynamic optimization strategy for ACO is introduced by modelling real-world ants' symbiotic relationship with aphids. The ACO with Aphids strategy is designed to solve discrete-domain DOPs with event-triggered discrete dynamism. The strategy improves inter-state convergence by allowing better solution recovery after dynamic environment changes. Aphids mediate information from previous dynamic optimization states to maximize initial solution performance and minimize the impact on convergence speed. The strategy is tested on DMKP against identical ACO implementations using Full-Restart and Pheromone-Sharing strategies, with all other variables isolated.
Overall, the Dynamic Impact and ACO with Aphids developments are compounding. Using Dynamic Impact on single-objective optimization of MMPPFO, the fitness value was improved by 33.2% over the ACO algorithm without Dynamic Impact. MKP benchmark instances of low complexity were solved with a 100% success rate even when a high degree of solution sparseness was observed, and on large-complexity instances the average gap improved by a factor of 4.26. ACO with Aphids also demonstrated superior performance over the Pheromone-Sharing strategy in every test, with the average gap reduced by 29.2%, for a total compounded dynamic optimization performance improvement of 6.02 times. In addition, ACO with Aphids outperformed the Full-Restart strategy on the large dataset groups, with the overall average gap reduced by 52.5%, for a total compounded dynamic optimization performance improvement of 8.99 times.
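The non-stochastic dataset generation idea in the second development can be illustrated with a deterministic sketch that derives each dynamic state from a hash of the static MKP instance, so that anyone can regenerate identical states; the thesis's actual perturbation scheme is not specified above, so the capacity-scaling rule and parameter names below are assumptions.

```python
import hashlib

def dynamic_states(profits, weights, capacities, n_states=10, scale=0.05):
    """Deterministically derive a sequence of dynamic MKP states from the
    static instance itself, which acts as the only source of entropy."""
    states = []
    seed_material = repr((profits, weights, capacities)).encode()
    caps = list(capacities)
    for s in range(n_states):
        digest = hashlib.sha256(seed_material + str(s).encode()).digest()
        # Use hash bytes as a reproducible entropy source for +/- `scale`
        # relative changes to the knapsack capacities.
        caps = [max(1, int(c * (1 + scale * 2 * ((b / 255.0) - 0.5))))
                for c, b in zip(caps, digest)]
        states.append({"profits": profits, "weights": weights,
                       "capacities": caps})
    return states
```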
A Framework for Automatic Behavior Generation in Multi-Function Swarms
Multi-function swarms are swarms that solve multiple tasks at once. For example, a quadcopter swarm could be tasked with exploring an area of interest while simultaneously functioning as ad-hoc relays. With this type of multi-function comes the challenge of handling potentially conflicting requirements simultaneously. Using the Quality-Diversity algorithm MAP-Elites in combination with a suitable controller structure, a framework for automatic behavior generation in multi-function swarms is proposed. The framework is tested on a scenario with three simultaneous tasks: exploration, communication network creation and geolocation of RF emitters. A repertoire is evolved, consisting of a wide range of controllers, or behavior primitives, with different characteristics and trade-offs in the different tasks. This repertoire would enable the swarm to transition between behavior trade-offs online, according to the situational requirements. Furthermore, the effect of noise on the behavior characteristics in MAP-Elites is investigated. A moderate number of re-evaluations is found to increase the robustness while keeping the computational requirements relatively low. A few selected controllers are examined, and the dynamics of transitioning between these controllers are explored. Finally, the study develops a methodology for analyzing the makeup of the resulting controllers. This is done through a parameter variation study where the importance of individual inputs to the swarm controllers is assessed and analyzed.
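The MAP-Elites archive loop at the core of such a framework can be sketched generically as follows; the paper's controller structure, behavior descriptors and noise-handling re-evaluations are abstracted behind the user-supplied `evaluate`, `random_genome` and `mutate` functions, which are assumptions here.

```python
import random

def map_elites(evaluate, random_genome, mutate, bins=10, iterations=5000):
    """Minimal MAP-Elites sketch. `evaluate` returns (fitness, behavior),
    where behavior is a tuple of descriptors in [0, 1) used to index the map."""
    archive = {}  # behavior cell -> (fitness, genome)

    def cell(behavior):
        return tuple(min(int(b * bins), bins - 1) for b in behavior)

    for i in range(iterations):
        if archive and i > 100:
            # Select an elite at random and mutate it.
            genome = mutate(random.choice(list(archive.values()))[1])
        else:
            # Bootstrap the archive with random genomes.
            genome = random_genome()
        fitness, behavior = evaluate(genome)
        key = cell(behavior)
        # Keep only the best genome found so far in each behavior cell.
        if key not in archive or fitness > archive[key][0]:
            archive[key] = (fitness, genome)
    return archive
```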
An Evaluation of Performance Enhancements to Particle Swarm Optimisation on Real-World Data
Swarm Computation is a relatively new optimisation paradigm. The basic premise is to model the collective behaviour of self-organised natural phenomena such as swarms, flocks and shoals in order to solve optimisation problems. Particle Swarm Optimisation (PSO) is a type of swarm computation inspired by bird flocks and bee swarms, modelling their collective social influence as they search for optimal solutions.
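The canonical global-best PSO update described above, in which each particle is attracted both to its own best position and to the swarm's best, can be sketched as follows; the inertia and acceleration coefficients shown are common defaults, not values from the thesis.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=500,
        w=0.72, c1=1.49, c2=1.49, bounds=(-5.0, 5.0)):
    """Canonical global-best PSO with inertia weight."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))    # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()                                      # personal bests
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()                # global best

    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Usage example on a toy objective:
# best, val = pso(lambda v: np.sum(v ** 2), dim=10)
```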
In many real-world applications of PSO, the algorithm is used as a data pre-processor for a neural network or similar post-processing system, and is often extensively modified to suit the application. This thesis introduces techniques that allow unmodified PSO to be applied successfully to a range of problems, specifically three extensions to the basic PSO algorithm: solving optimisation problems by training a hyperspatial matrix, using a hierarchy of swarms to coordinate optimisation on several data sets simultaneously, and dynamic neighbourhood selection in swarms.
Rather than working directly with candidate solutions to an optimisation problem, the PSO algorithm is adapted to train a matrix of weights, to produce a solution to the problem from the inputs. The search space is abstracted from the problem data.
A single PSO swarm optimises a single data set and has difficulties where the data set comprises disjoint parts (such as time series data for different days). To address this problem, we introduce a hierarchy of swarms, where each child swarm optimises one section of the data set and its gbest particle is a member of the swarm above it in the hierarchy. The parent swarm(s) coordinate their children and encourage more exploration of the solution space. We show that hierarchical swarms of this type perform better than single-swarm PSO optimisers on the disjoint data sets used.
PSO relies on interaction between particles within a neighbourhood to find good solutions. In many PSO variants, the possible interactions are arbitrary and fixed at initialisation. Our third contribution is dynamic neighbourhood selection: particles can modify their neighbourhood based on the success of the candidate neighbour particle. As PSO is intended to reflect the social interaction of agents, this change significantly increases the ability of the swarm to find optimal solutions. Applied to real-world medical and cosmological data, this modification shows improvements over standard PSO approaches with fixed neighbourhoods.
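The dynamic neighbourhood idea can be illustrated with a simple replacement rule; the thesis's actual success criterion is not detailed above, so the `patience` threshold and the bookkeeping structures below are assumptions.

```python
import random

def update_neighbourhoods(neighbours, stale_counts, patience=5):
    # neighbours:   dict particle -> list of neighbour indices
    # stale_counts: dict (particle, neighbour) -> iterations since that
    #               neighbour last improved the particle's best (assumed metric)
    n = len(neighbours)
    for p, nbrs in neighbours.items():
        for q in list(nbrs):
            if stale_counts.get((p, q), 0) >= patience:
                # Drop an unhelpful neighbour and replace it with a random
                # particle not already in the neighbourhood.
                nbrs.remove(q)
                candidates = [r for r in range(n) if r != p and r not in nbrs]
                if candidates:
                    nbrs.append(random.choice(candidates))
    return neighbours
```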