
    PMT : opposition based learning technique for enhancing metaheuristic algorithms performance

    Metaheuristic algorithms have shown promising performance in solving sophisticated real-world optimization problems. Nevertheless, many metaheuristic algorithms still suffer from a low convergence rate because of a poor balance between exploration (i.e., roaming new potential search areas) and exploitation (i.e., exploiting existing neighbors). In some complex problems, the convergence rate can also remain poor owing to entrapment in local optima. Opposition-based learning (OBL) has shown promising results in addressing these issues. Nonetheless, OBL-based solutions often consider only one particular direction of opposition. Considering a single direction can be problematic, as the best solution may lie in any of a multitude of directions. To address these OBL limitations, this research proposes a new general OBL technique inspired by the natural phenomenon of parallel mirror systems, called the Parallel Mirrors Technique (PMT). Like existing OBL-based approaches, the PMT generates new potential solutions based on the currently selected candidate. Unlike existing OBL-based techniques, the PMT generates more than one candidate, in multiple solution-space directions. To evaluate the PMT's performance and adaptability, it was applied to four contemporary metaheuristic algorithms (Differential Evolution, Particle Swarm Optimization, Simulated Annealing, and the Whale Optimization Algorithm) to solve 15 well-known benchmark functions as well as two real-world problems based on the welded beam design and the pressure vessel design. Experimentally, the PMT accelerates the convergence rate relative to the original algorithms under the same number of fitness evaluations, on both benchmark functions and real-world optimization problems.
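    The classic OBL idea the abstract builds on can be sketched in a few lines: the opposite of a point x in a box [lo, hi] is lo + hi - x. The multi-direction generator below is only an illustrative sketch of the PMT idea as described in the abstract (the function `pmt_candidates`, the number of candidates `k`, and the interpolation rule are assumptions, not the paper's actual update rule).

```python
import random

def opposite(x, lo, hi):
    """Classic opposition-based learning: reflect each coordinate of x
    inside the search box [lo, hi]."""
    return [l + h - xi for xi, l, h in zip(x, lo, hi)]

def pmt_candidates(x, lo, hi, k=3):
    """Hypothetical multi-direction sketch: return k candidates, the exact
    opposite point plus k-1 random points between x and its opposite.
    (The real PMT rule is not specified in the abstract.)"""
    opp = opposite(x, lo, hi)
    cands = [opp]
    for _ in range(k - 1):
        cands.append([xi + random.random() * (oi - xi)
                      for xi, oi in zip(x, opp)])
    return cands
```

A metaheuristic would evaluate these extra candidates alongside the original one and keep the fittest, which is how OBL variants typically accelerate convergence.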

    Micro-differential evolution: diversity enhancement and comparative study.

    Evolutionary algorithms (EAs), such as the differential evolution (DE) algorithm, suffer from high computational cost, due in large part to large population sizes and the expense of fitness evaluation. Micro-EAs employ a very small population, which can converge to a reasonable solution more quickly, but they are vulnerable to premature convergence and carry a high risk of stagnation. One approach to overcoming stagnation is to increase the diversity of the population. In this thesis, a micro-differential evolution algorithm with a vectorized random mutation factor (MDEVM) is proposed, which exploits the benefits of a small population while preventing stagnation through diversification. This thesis makes the following contributions related to micro-DE (MDE) algorithms: Monte-Carlo-based simulations for the proposed vectorized random mutation factor (VRMF) method; mutation schemes for the DE algorithm with population sizes of less than four; and comprehensive comparative simulations and analyses of MDE algorithms across mutation schemes, population sizes, problem types (i.e., uni-modal, multi-modal, and composite), problem dimensionalities, and mutation factor ranges, together with a population diversity analysis under stagnation and trapping in local optima. The comparative studies are conducted on the 28 benchmark functions of the IEEE Congress on Evolutionary Computation 2013 (CEC-2013), and comprehensive analyses are provided. Experimental results demonstrate the high performance and convergence speed of the proposed MDEVM algorithm over various types of functions.