480 research outputs found

    Small-signal stability analysis of hybrid power system with quasi-oppositional sine cosine algorithm optimized fractional order PID controller

    This article deals with the frequency instability problem of a hybrid energy power system (HEPS) coordinated with a reheat thermal power plant. A stochastic optimization method called the sine-cosine algorithm (SCA) is initially applied for optimum tuning of fractional-order proportional-integral-derivative (FOPI-D) controller gains to balance the power generation and load profile. To accelerate convergence and help solutions escape local optima, quasi-oppositional based learning (Q-OBL) is integrated with SCA, resulting in QOSCA. In this work, the PID controller's derivative term is placed in the feedback path to avoid the set-point kick problem. A comparative assessment of energy-storage devices is presented to analyze their performance in the HEPS. Qualitative and quantitative evaluation of the results shows that the proposed QOSCA: FOPI-D controller performs best compared to the SCA-, grey wolf optimizer (GWO)-, and hyper-spherical search (HSS)-optimized FOPI-D controllers. The results also show that the proposed QOSCA: FOPI-D controller has satisfactory disturbance rejection ability and is robust against parametric uncertainties and random load perturbations. The efficacy of the designed controller is confirmed by considering generation rate constraint, governor dead-band, and boiler dynamics effects.
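
    For readers who want to see the quasi-oppositional step in isolation, the Python sketch below generates a quasi-opposite candidate in the usual Q-OBL way (a point drawn uniformly between the interval centre and the mirrored point). The FOPI-D gain names and bounds are illustrative placeholders, not the parameter set or fitness model used in this paper.

        import random

        def quasi_opposite(x, lower, upper):
            """Generic quasi-oppositional learning (Q-OBL) point.

            For x in [lower, upper], the opposite point is lower + upper - x;
            the quasi-opposite point is drawn uniformly between the interval
            centre and that opposite point.
            """
            centre = (lower + upper) / 2.0
            opposite = lower + upper - x
            return random.uniform(min(centre, opposite), max(centre, opposite))

        # Illustrative use on a FOPI-D gain vector (bounds are placeholders).
        bounds = {"Kp": (0.0, 2.0), "Ki": (0.0, 2.0), "Kd": (0.0, 2.0),
                  "lam": (0.0, 1.0), "mu": (0.0, 1.0)}
        candidate = {k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        quasi = {k: quasi_opposite(v, *bounds[k]) for k, v in candidate.items()}
        print(candidate, quasi)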

    Chaos embedded opposition based learning for gravitational search algorithm

    Due to its robust search mechanism, the gravitational search algorithm (GSA) has gained considerable popularity across different research communities. However, stagnation reduces its ability to search toward the global optimum on rigid and complex multi-modal problems. This paper proposes a GSA variant that incorporates chaos-embedded opposition-based learning into the basic GSA for stagnation-free search. Additionally, a sine-cosine based chaotic gravitational constant is introduced to balance the trade-off between exploration and exploitation capabilities more effectively. The proposed variant is tested over 23 classical benchmark problems, 15 test problems of the CEC 2015 test suite, and 15 test problems of the CEC 2014 test suite. Graphical as well as empirical analyses reveal the superiority of the proposed algorithm over conventional meta-heuristics and the most recent GSA variants.
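
    The abstract does not spell out the chaotic map, the way chaos enters the opposition step, or the exact gravitational-constant schedule, so the Python sketch below is only one plausible reading: a logistic map perturbs the classic opposite point and a sine-based term modulates the gravitational constant. Treat every formula here as an assumption rather than the paper's definition.

        import math
        import random

        def logistic_map(c, mu=4.0):
            """One step of the logistic chaotic map on (0, 1)."""
            return mu * c * (1.0 - c)

        def chaotic_opposite(x, lower, upper, c):
            """Chaos-embedded opposition (assumed form): scale the mirrored
            component by the chaotic value c and clip back into bounds."""
            candidate = lower + upper - c * x
            return min(max(candidate, lower), upper)

        def chaotic_g(g0, t, t_max, c):
            """Sine-based chaotic gravitational constant (assumed form)."""
            return g0 * abs(math.sin(0.5 * math.pi * (1.0 - t / t_max))) * c

        # Illustrative iterations: update the chaotic value, then use it both
        # for the opposition jump and for the gravitational constant.
        c, x, lower, upper = 0.7, 2.3, -5.0, 5.0   # initial chaotic seed, toy point
        for t in range(5):
            c = logistic_map(c)
            print(chaotic_opposite(x, lower, upper, c), chaotic_g(100.0, t, 5, c))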

    Enhancing three variants of harmony search algorithm for continuous optimization problems

    Meta-heuristic algorithms are well-known optimization methods for solving real-world optimization problems. Harmony search (HS) is a recognized meta-heuristic algorithm with an efficient exploration process, but it has a slow convergence rate, which weakens its exploitation when searching for the global optimum. Different variants of HS have been introduced in the literature to enhance the algorithm and fix its problems, but in most cases the convergence rate remains slow. Meanwhile, opposition-based learning (OBL) is an effective technique used to improve the performance of different optimization algorithms, including HS. In this work, we adopt a new, improved version of OBL to enhance three variants of Harmony Search by increasing their convergence speed and improving overall performance. The new OBL version, named improved opposition-based learning (IOBL), differs from the original OBL by adopting randomness to increase solution diversity. To evaluate the hybrid algorithms, we run them on benchmark functions and compare the obtained results with those of the original versions. The results show that the new hybrid algorithms are more efficient than the original versions of HS. A convergence rate graph is also used to show the overall performance of the new algorithms.
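
    The abstract characterizes IOBL only as standard OBL plus randomness, so the Python snippet below is a hedged guess at such a randomized opposite point (a uniform factor scaling the mirrored component, clipped to bounds); the authors' exact IOBL formula may differ. In the hybrid HS variants, a candidate produced this way would simply compete with the original harmony and the fitter of the two is kept.

        import random

        def standard_opposite(x, lower, upper):
            """Classic opposition-based learning point."""
            return lower + upper - x

        def improved_opposite(x, lower, upper):
            """Randomized opposition (assumed IOBL-style): a random factor
            perturbs the mirrored component to diversify candidates."""
            candidate = lower + upper - random.random() * x
            return min(max(candidate, lower), upper)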

    Opposition-based learning for self-adaptive control parameters in differential evolution for optimal mechanism design

    In recent decades, new optimization algorithms have attracted much attention from researchers working on both gradient- and evolution-based optimal methods. Many strategies are employed to enhance the effectiveness of these methods. One of the newest is opposition-based learning (OBL), which has proven powerful in enhancing various optimization methods. This research presents a new version of the Differential Evolution (DE) algorithm in which the OBL technique is applied to investigate the opposite point of each candidate's self-adaptive control parameters. The proposed method is used to solve benchmark test problems and applied to real-world optimizations, and it is compared with conventional optimal methods. Simulation results show its effectiveness and improvement over some reference methodologies in terms of convergence speed and the stability of optimal results. © 2019 The Japan Society of Mechanical Engineers
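
    As a rough illustration of opposing self-adaptive control parameters, the Python sketch below follows a jDE-style scheme in which each individual carries its own F and CR and, with some probability, the opposite parameter pair is tried instead. The bounds, probabilities, and the decision of when to take the opposite are assumptions, not the settings reported in this paper.

        import random

        F_MIN, F_MAX = 0.1, 1.0    # assumed bounds for the scale factor F
        CR_MIN, CR_MAX = 0.0, 1.0  # assumed bounds for the crossover rate CR

        def self_adapt(f, cr, tau1=0.1, tau2=0.1):
            """jDE-style self-adaptation of F and CR (a common baseline)."""
            if random.random() < tau1:
                f = F_MIN + random.random() * (F_MAX - F_MIN)
            if random.random() < tau2:
                cr = CR_MIN + random.random() * (CR_MAX - CR_MIN)
            return f, cr

        def opposite_params(f, cr):
            """OBL applied to the control parameters themselves."""
            return F_MIN + F_MAX - f, CR_MIN + CR_MAX - cr

        def next_params(f, cr, p_oppose=0.3):
            f, cr = self_adapt(f, cr)
            if random.random() < p_oppose:   # assumed trigger for opposition
                f, cr = opposite_params(f, cr)
            return f, cr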

    A single-machine scheduling problem with multiple unavailability constraints: A mathematical model and an enhanced variable neighborhood search approach

    This research focuses on a scheduling problem with multiple unavailability periods and distinct due dates. The objective is to minimize the sum of maximum earliness and tardiness of jobs. In order to solve the problem exactly, a mathematical model is proposed. However, due to computational difficulties for large instances of the considered problem, a modified variable neighborhood search (VNS) is developed. In basic VNS, the search process toward a global or near-global optimum solution is totally random, which is known to be one of the weaknesses of this algorithm. To tackle this weakness, the VNS algorithm is combined with a knowledge module. In the proposed VNS, the knowledge module extracts knowledge from good solutions, saves it in memory, and feeds it back to the algorithm during the search process. Computational results show that the proposed algorithm is efficient and effective.
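
    To make the role of the knowledge module concrete, here is a compressed, hypothetical Python skeleton of a VNS over a permutation-encoded schedule: a frequency memory of (job, position) pairs harvested from improving solutions biases the shaking step. The neighbourhoods, the memory rule, and the cost callback (which would compute the sum of maximum earliness and tardiness under the unavailability periods) are illustrative, not the paper's exact design.

        import random

        def vns_with_memory(jobs, cost, k_max=3, iters=200):
            """Variable neighborhood search with a simple knowledge module:
            (job, position) frequencies from improving solutions steer shaking."""
            best = list(jobs)
            random.shuffle(best)
            freq = {}                                  # (job, position) -> count

            def learn(solution):                       # knowledge extraction
                for pos, job in enumerate(solution):
                    freq[(job, pos)] = freq.get((job, pos), 0) + 1

            def shake(solution, k):                    # knowledge-guided shaking
                s = list(solution)
                moves = 0
                while moves < k:
                    i, j = random.sample(range(len(s)), 2)
                    # sometimes protect placements seen often in good solutions
                    if (freq.get((s[i], i), 0) + freq.get((s[j], j), 0) > 0
                            and random.random() < 0.5):
                        continue
                    s[i], s[j] = s[j], s[i]
                    moves += 1
                return s

            def local_search(s):                       # first-improvement swaps
                improved = True
                while improved:
                    improved = False
                    for i in range(len(s) - 1):
                        s2 = list(s)
                        s2[i], s2[i + 1] = s2[i + 1], s2[i]
                        if cost(s2) < cost(s):
                            s, improved = s2, True
                return s

            learn(best)
            for _ in range(iters):
                k = 1
                while k <= k_max:
                    candidate = local_search(shake(best, k))
                    if cost(candidate) < cost(best):
                        best, k = candidate, 1
                        learn(best)                    # feed knowledge back
                    else:
                        k += 1
            return best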

    Hybrid harmony search algorithm for continuous optimization problems

    The Harmony Search (HS) algorithm has been extensively adopted in the literature to address optimization problems in many different fields, such as industrial design, civil engineering, and electrical and mechanical engineering. In order to ensure its search performance, HS requires extensive tuning of its four control parameters, namely harmony memory size (HMS), harmony memory consideration rate (HMCR), pitch adjustment rate (PAR), and bandwidth (BW). However, the tuning process is often cumbersome and problem dependent, and no single setting fits all problems. Additionally, despite many useful works, HS and its variants still suffer from weak exploitation, which can lead to poor convergence. To address these issues, this thesis proposes augmenting HS with adaptive tuning using the Grey Wolf Optimizer (GWO). Meanwhile, to enhance its exploitation, this thesis also proposes adopting a new variant of the opposition-based learning (OBL) technique. Taken together, the proposed hybrid algorithm, called IHS-GWO, aims to address continuous optimization problems. The IHS-GWO is evaluated using two standard benchmarking sets and two real-world optimization problems. The first benchmarking set consists of 24 classical unimodal and multimodal benchmark functions, whilst the second contains 30 state-of-the-art benchmark functions from the Congress on Evolutionary Computation (CEC). The two real-world optimization problems involve the three-bar truss and spring design. Statistical analysis of IHS-GWO's results against recent HS variants and other metaheuristics, using the Wilcoxon rank-sum and Friedman tests, demonstrates superior performance.
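
    For context, the Python sketch below shows the standard HS improvisation step governed by the four control parameters the thesis targets; in IHS-GWO these parameters would be adapted (by GWO) rather than fixed, so the constant values here are placeholders, not the thesis's settings.

        import random

        def improvise(harmony_memory, lower, upper, hmcr=0.9, par=0.3, bw=0.05):
            """One standard Harmony Search improvisation.

            hmcr: harmony memory consideration rate
            par:  pitch adjustment rate
            bw:   bandwidth for pitch adjustment
            (Fixed here for illustration; IHS-GWO would tune them adaptively.)
            """
            new = []
            for d in range(len(lower)):
                if random.random() < hmcr:
                    value = random.choice(harmony_memory)[d]      # memory consideration
                    if random.random() < par:
                        value += random.uniform(-1.0, 1.0) * bw   # pitch adjustment
                else:
                    value = random.uniform(lower[d], upper[d])    # random selection
                new.append(min(max(value, lower[d]), upper[d]))
            return new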

    Evolutionary Computation 2020

    Intelligent optimization is based on the mechanism of computational intelligence to refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools to ensure global optimization quality, fast optimization efficiency, and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, whale optimization algorithm, differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems including the green shop scheduling problem, the severe nonlinear problem in one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvement and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.

    Soft computing applied to optimization, computer vision and medicine

    Artificial intelligence has permeated almost every area of life in modern society, and its significance continues to grow. As a result, in recent years, Soft Computing has emerged as a powerful set of methodologies that propose innovative and robust solutions to a variety of complex problems. Soft Computing methods, because of their broad range of application, have the potential to significantly improve human living conditions. The motivation for the present research emerged from this background and possibility. This research aims to accomplish two main objectives: On the one hand, it endeavors to bridge the gap between Soft Computing techniques and their application to intricate problems. On the other hand, it explores the hypothetical benefits of Soft Computing methodologies as novel effective tools for such problems. This thesis synthesizes the results of extensive research on Soft Computing methods and their applications to optimization, Computer Vision, and medicine. This work is composed of several individual projects, which employ classical and new optimization algorithms. The manuscript presented here intends to provide an overview of the different aspects of Soft Computing methods in order to enable the reader to reach a global understanding of the field. Therefore, this document is assembled as a monograph that summarizes the outcomes of these projects across 12 chapters. The chapters are structured so that they can be read independently. The key focus of this work is the application and design of Soft Computing approaches for solving problems in the following: Block Matching, Pattern Detection, Thresholding, Corner Detection, Template Matching, Circle Detection, Color Segmentation, Leukocyte Detection, and Breast Thermogram Analysis. One of the outcomes presented in this thesis involves the development of two evolutionary approaches for global optimization. These were tested over complex benchmark datasets and showed promising results, thus opening the debate for future applications. Moreover, the applications for Computer Vision and medicine presented in this work have highlighted the utility of different Soft Computing methodologies in the solution of problems in such subjects. A milestone in this area is the translation of the Computer Vision and medical issues into optimization problems. Additionally, this work also strives to provide tools for combating public health issues by expanding the concepts to automated detection and diagnosis aid for pathologies such as Leukemia and breast cancer. The application of Soft Computing techniques in this field has attracted great interest worldwide due to the exponential growth of these diseases. Lastly, the use of Fuzzy Logic, Artificial Neural Networks, and Expert Systems in many everyday domestic appliances, such as washing machines, cookers, and refrigerators is now a reality. Many other industrial and commercial applications of Soft Computing have also been integrated into everyday use, and this is expected to increase within the next decade. Therefore, the research conducted here contributes an important piece for expanding these developments. The applications presented in this work are intended to serve as technological tools that can then be used in the development of new devices

    PMT: opposition based learning technique for enhancing metaheuristic algorithms performance

    Metaheuristic algorithms have shown promising performance in solving sophisticated real-world optimization problems. Nevertheless, many metaheuristic algorithms still suffer from a low convergence rate because of a poor balance between exploration (i.e., roaming new potential search areas) and exploitation (i.e., exploiting existing neighbors). In some complex problems, the convergence rate can also be poor owing to entrapment in local optima. Opposition-based learning (OBL) has shown promising results in addressing these issues. Nonetheless, OBL-based solutions often consider only one particular direction of opposition, which can be problematic since the best solution may lie in any of a multitude of directions. Addressing these OBL limitations, this research proposes a new general OBL technique, inspired by the natural phenomenon of parallel mirror systems, called the Parallel Mirrors Technique (PMT). Like existing OBL-based approaches, the PMT generates new potential solutions based on the currently selected candidate. Unlike existing OBL-based techniques, the PMT generates more than one candidate in multiple solution-space directions. To evaluate the PMT's performance and adaptability, it was applied to four contemporary metaheuristic algorithms, Differential Evolution, Particle Swarm Optimization, Simulated Annealing, and the Whale Optimization Algorithm, to solve 15 well-known benchmark functions as well as two real-world problems based on the welded beam design and pressure vessel design. Experimentally, the PMT shows promising results, accelerating convergence relative to the original metaheuristic algorithms with the same number of fitness evaluations on both the benchmark functions and the real-world optimization problems.
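
    Because the abstract describes the PMT only at a high level, the Python generator below is a hypothetical interpretation: it reflects a candidate about the interval centre with several factors, the way parallel mirrors produce multiple images on different sides, so that candidates appear in more than one direction. A factor of 1 reproduces the classic opposite point; the other factors, and the whole construction, are assumptions rather than the paper's PMT equations.

        import random

        def parallel_mirror_candidates(x, lower, upper, factors=(1.0, -0.5, 0.5)):
            """Hypothetical parallel-mirrors style generator: several
            opposition-like candidates around the interval centre.
            Factor 1.0 gives the classic opposite point; others are assumed."""
            images = []
            for f in factors:
                image = []
                for xi, lo, hi in zip(x, lower, upper):
                    centre = (lo + hi) / 2.0
                    reflected = centre + f * (centre - xi)   # mirror about the centre
                    image.append(min(max(reflected, lo), hi))
                images.append(image)
            return images

        # Illustrative use with a 2-D candidate (bounds and point are placeholders).
        lower, upper = [-5.0, -5.0], [5.0, 5.0]
        x = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        print(parallel_mirror_candidates(x, lower, upper))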

    An adaptive opposition-based learning selection: the case for Jaya algorithm

    Over the years, the opposition-based learning (OBL) technique has been proven to effectively enhance the convergence of meta-heuristic algorithms. The fact that OBL is able to give alternative candidate solutions in one or more opposite directions ensures good exploration and exploitation of the search space. In the last decade, many OBL techniques have been established in the literature, including Standard-OBL, General-OBL, Quasi-Reflection-OBL, Centre-OBL, and Optimal-OBL. Although proven useful, much of the existing adoption of OBL into meta-heuristic algorithms has been based on a single technique. If the search space contains many peaks with potentially many local optima, relying on a single OBL technique may not be sufficiently effective; in fact, if the peaks are close together, a single OBL technique may not be able to prevent entrapment in local optima. Addressing this issue, assembling a sequence of OBL techniques into a meta-heuristic algorithm can enhance overall search performance. Based on a simple penalty-and-reward mechanism, the best-performing OBL is rewarded by continuing its execution in the next cycle, whilst a poorly performing one ceases its current turn. This paper presents a new adaptive approach that integrates more than one OBL technique into the Jaya Algorithm, termed OBL-JA. Unlike other adoptions of OBL, which use one type of OBL, OBL-JA uses several OBLs whose selection is based on their individual performance. Experimental results using combinatorial testing problems as a case study demonstrate that OBL-JA is very competitive against existing works in terms of test suite size. The results also show that OBL-JA performs better than the standard Jaya Algorithm in most of the tested cases, owing to its ability to adapt its behaviour based on the current performance feedback of the search process.
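
    To illustrate the penalty-and-reward selection described above, here is a minimal Python sketch that keeps a score per OBL operator, runs the best-scoring one each cycle, and rewards or penalizes it according to whether it improved the solution. The two operators shown (standard and quasi-opposite) and the scoring constants are assumptions, and the continuous toy objective is only for demonstration; OBL-JA itself targets combinatorial test-suite generation.

        import random

        def standard_obl(x, lo, hi):
            return lo + hi - x

        def quasi_obl(x, lo, hi):
            centre, opposite = (lo + hi) / 2.0, lo + hi - x
            return random.uniform(min(centre, opposite), max(centre, opposite))

        class AdaptiveOBLSelector:
            """Penalty-and-reward selection among OBL operators
            (illustrative scoring, not OBL-JA's exact mechanism)."""

            def __init__(self, operators):
                self.operators = list(operators)
                self.scores = {op.__name__: 0.0 for op in operators}

            def pick(self):
                best = max(self.scores.values())
                tied = [op for op in self.operators
                        if self.scores[op.__name__] == best]
                return random.choice(tied)     # best-scoring operator runs next

            def feedback(self, op, improved):
                self.scores[op.__name__] += 1.0 if improved else -1.0

        # Demonstration on a 1-D toy objective (placeholder only).
        f = lambda v: (v - 1.5) ** 2
        lo, hi, x = -5.0, 5.0, random.uniform(-5.0, 5.0)
        selector = AdaptiveOBLSelector([standard_obl, quasi_obl])
        for _ in range(20):
            op = selector.pick()
            y = op(x, lo, hi)
            improved = f(y) < f(x)
            selector.feedback(op, improved)
            if improved:
                x = y
        print(x, f(x))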