
    Contingency-Constrained Optimal Power Flow Using Simplex-Based Chaotic-PSO Algorithm

    This paper proposes solving the contingency-constrained optimal power flow (CC-OPF) problem with a simplex-based chaotic particle swarm optimization (SCPSO) algorithm. The objective of CC-OPF, with the valve-point loading effects of generators taken into account, is to minimize the total generation cost, reduce transmission loss, and improve the bus-voltage profile under normal or post-contingency states. The proposed SCPSO method, which combines a chaotic map with the downhill simplex search, can avoid the premature convergence of PSO and escape local minima. The effectiveness of the proposed method is demonstrated on two power systems with contingency constraints and compared with other stochastic techniques in terms of solution quality and convergence rate. The experimental results show that the SCPSO-based CC-OPF method relies on suitable mutation schemes and is therefore robust and effective in solving contingency-constrained OPF problems.
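    For readers unfamiliar with the valve-point loading effects mentioned above, the sketch below shows the fuel cost model commonly used for them: a quadratic cost plus a rectified sinusoidal ripple term. The coefficients and generator limits are made-up placeholders, not values from the paper, and the paper's SCPSO machinery is not reproduced here.

        import math

        def valve_point_cost(p, a, b, c, e, f, p_min):
            """Fuel cost of one generator with the classic valve-point term.

            a, b, c are quadratic cost coefficients; e, f model the ripple
            introduced by the steam admission valves; p_min is the lower
            generation limit. All values below are placeholders."""
            return a * p**2 + b * p + c + abs(e * math.sin(f * (p_min - p)))

        def total_cost(powers, coeffs):
            """Sum the valve-point cost over all generators."""
            return sum(valve_point_cost(p, *cf) for p, cf in zip(powers, coeffs))

        # Example with two hypothetical generators: (a, b, c, e, f, p_min) per unit.
        coeffs = [(0.0016, 2.0, 150.0, 50.0, 0.063, 10.0),
                  (0.0021, 1.8, 120.0, 40.0, 0.098, 20.0)]
        print(total_cost([300.0, 250.0], coeffs))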

    A hybrid GA–PS–SQP method to solve power system valve-point economic dispatch problems

    This study presents a new approach based on a hybrid algorithm consisting of Genetic Algorithm (GA), Pattern Search (PS) and Sequential Quadratic Programming (SQP) techniques to solve the well-known power system economic dispatch (ED) problem. GA is the main optimizer of the algorithm, whereas PS and SQP are used to fine-tune the results of GA and increase confidence in the solution. For illustrative purposes, the algorithm has been applied to various test systems to assess its effectiveness. Furthermore, the convergence characteristics and robustness of the proposed method have been explored through comparison with results reported in the literature. The outcome is very encouraging and suggests that the hybrid GA–PS–SQP algorithm is very efficient in solving power system economic dispatch problems.
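    As a rough illustration of the hybrid idea (a population-based global search whose best result seeds a gradient-based local refiner), here is a minimal, hypothetical GA-then-SQP sketch using SciPy's SLSQP solver; the pattern search stage is omitted for brevity, and the toy objective stands in for the actual economic dispatch cost model.

        import numpy as np
        from scipy.optimize import minimize

        def objective(x):
            # Toy non-convex objective standing in for a dispatch cost function.
            return float(np.sum(x**2 + 10.0 * np.abs(np.sin(3.0 * x))))

        def tiny_ga(bounds, pop_size=30, generations=50, seed=0):
            """Very small real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
            for _ in range(generations):
                fit = np.array([objective(ind) for ind in pop])
                children = []
                for _ in range(pop_size):
                    i, j = rng.integers(pop_size, size=2)
                    a = pop[i] if fit[i] < fit[j] else pop[j]        # tournament pick 1
                    k, l = rng.integers(pop_size, size=2)
                    b = pop[k] if fit[k] < fit[l] else pop[l]        # tournament pick 2
                    w = rng.uniform(size=len(bounds))
                    child = w * a + (1.0 - w) * b                    # blend crossover
                    child += rng.normal(scale=0.05 * (hi - lo))      # Gaussian mutation
                    children.append(np.clip(child, lo, hi))
                pop = np.array(children)
            fit = np.array([objective(ind) for ind in pop])
            return pop[np.argmin(fit)]

        bounds = np.array([[-5.0, 5.0]] * 3)
        x_ga = tiny_ga(bounds)                                # coarse global search
        res = minimize(objective, x_ga, method="SLSQP",       # SQP fine-tuning
                       bounds=[tuple(b) for b in bounds])
        print(x_ga, res.x, res.fun)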

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (on multicore, multiprocessor, GPU, and cloud computing platforms). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
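    Since the survey covers many variants, a minimal sketch of the baseline global-best PSO update (inertia weight plus cognitive and social terms) may serve as a reference point; the parameter values are common defaults, not recommendations taken from the survey.

        import numpy as np

        def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Baseline global-best PSO: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
            gbest = pbest[np.argmin(pbest_val)]
            for _ in range(iters):
                r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([f(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)]
            return gbest, pbest_val.min()

        # Example: minimize the sphere function in 5 dimensions.
        best_x, best_val = pso(lambda z: float(np.sum(z**2)), np.array([[-10.0, 10.0]] * 5))
        print(best_x, best_val)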

    Review of Metaheuristics and Generalized Evolutionary Walk Algorithm

    Metaheuristic algorithms are often nature-inspired, and they are becoming very powerful tools for solving global optimization problems. More than a dozen major metaheuristic algorithms have been developed over the last three decades, and there exist even more variants and hybrids of metaheuristics. This paper intends to provide an overview of nature-inspired metaheuristic algorithms, from a brief history to their applications. We try to analyze the main components of these algorithms and how and why they work. We then intend to provide a unified view of metaheuristics by proposing a generalized evolutionary walk algorithm (GEWA). Finally, we discuss some of the important open questions.
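    The abstract only names GEWA, so the following is a loose, hypothetical sketch of the general idea the name suggests: a randomized walk around the current best solution combined with elitist acceptance. The step-size schedule and acceptance rule here are illustrative assumptions, not the authors' formulation.

        import numpy as np

        def evolutionary_walk(f, bounds, iters=2000, step0=0.1, seed=0):
            """Random walk around the best-so-far point with a shrinking step,
            keeping a trial point only if it improves on the best (elitism).
            Purely an illustrative interpretation of an 'evolutionary walk'."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            best = rng.uniform(lo, hi)
            best_val = f(best)
            for k in range(iters):
                step = step0 * (1.0 - k / iters) * (hi - lo)     # assumed decay schedule
                trial = np.clip(best + step * rng.standard_normal(len(bounds)), lo, hi)
                val = f(trial)
                if val < best_val:                               # elitist acceptance
                    best, best_val = trial, val
            return best, best_val

        print(evolutionary_walk(lambda z: float(np.sum(z**2)), np.array([[-5.0, 5.0]] * 4)))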

    Automated, Parallel Optimization Algorithms for Stochastic Functions

    Optimization algorithms for stochastic functions are desirable specifically for real-world and simulation applications where results are obtained from sampling and contain experimental error or random noise. We have developed a series of stochastic optimization algorithms based on the well-known classical downhill simplex algorithm. Our parallel implementation of these optimization algorithms, using a framework called MW, is based on a master-worker architecture where each worker runs a massively parallel program. This parallel implementation allows the sampling to proceed independently on many processors, as demonstrated by scaling up to more than 100 vertices and 300 cores. This framework is highly suitable for clusters with an ever-increasing number of cores per node. The new algorithms have been successfully applied to the reparameterization of a model for liquid water, achieving thermodynamic and structural results for liquid water that are better than those of a standard model used in molecular simulations, with the advantage of a fully automated parameterization process.
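    As a rough sketch of the setting described above (simplex-style optimization of a noisy objective whose value is estimated by averaging independent samples computed in parallel), the following uses Python's multiprocessing pool together with SciPy's Nelder-Mead; it is only an analogy to, not a reproduction of, the MW-based master-worker implementation from the paper.

        import numpy as np
        from multiprocessing import Pool
        from scipy.optimize import minimize

        def noisy_sample(args):
            """One independent, noisy evaluation of a toy objective (stand-in for a simulation run)."""
            x, seed = args
            rng = np.random.default_rng(seed)
            return float(np.sum(np.asarray(x) ** 2) + rng.normal(scale=0.1))

        def averaged_objective(x, pool, n_samples=32):
            """Estimate the objective by averaging independent samples evaluated by the workers."""
            seeds = np.random.default_rng().integers(0, 2**31 - 1, size=n_samples)
            jobs = [(tuple(float(v) for v in x), int(s)) for s in seeds]
            return float(np.mean(pool.map(noisy_sample, jobs)))

        if __name__ == "__main__":
            with Pool(processes=4) as pool:
                result = minimize(averaged_objective, x0=np.ones(3), args=(pool,),
                                  method="Nelder-Mead")
            print(result.x, result.fun)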

    The design and applications of the african buffalo algorithm for general optimization problems

    Optimization, basically, is the economics of science. It is concerned with the need to maximize profit and minimize cost in terms of the time and resources needed to execute a given project in any field of human endeavor. Over the past several decades there have been many scientific investigations into effective and efficient algorithms for meeting the optimization needs of mankind, leading to deterministic algorithms that provide exact solutions to optimization problems. In the past five decades, however, the attention of scientists has shifted from deterministic algorithms to stochastic ones, since the latter have proven to be more robust and efficient, even though they do not guarantee exact solutions. Some of the successfully designed stochastic algorithms include Simulated Annealing, Genetic Algorithms, Ant Colony Optimization, Particle Swarm Optimization, Bee Colony Optimization, Artificial Bee Colony Optimization, Firefly Optimization, etc. A critical look at these ‘efficient’ stochastic algorithms reveals the need for improvements in effectiveness, the number of parameters used, premature convergence, the ability to search diverse landscapes, and complexity of implementation. The African Buffalo Optimization (ABO), which is inspired by the herd management, communication and successful grazing cultures of African buffalos, is designed to address the observed shortcomings of existing stochastic optimization algorithms. Through several experimental procedures, the ABO was used to successfully solve benchmark optimization problems on mono-modal and multimodal, constrained and unconstrained, separable and non-separable search landscapes with competitive outcomes. Moreover, the ABO algorithm was applied to solve over 100 of the 118 benchmark symmetric travelling salesman problems, and all the asymmetric ones, available in TSPLIB95. Based on the successful experimentation with the novel algorithm, it is safe to conclude that the ABO is a worthy contribution to the scientific literature.
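    The abstract does not spell out the ABO update rules, so the sketch below follows the form commonly reported for ABO in the literature: an exploitation move pulled toward the global and personal bests, followed by an exploration move derived from it. The exact equations, the learning parameters lp1 and lp2, and the divisor lam should all be treated as assumptions.

        import numpy as np

        def abo(f, bounds, herd=30, iters=300, lp1=0.6, lp2=0.4, lam=1.0, seed=0):
            """African Buffalo Optimization style loop (update form assumed, see note above):
               m <- m + lp1*(bg - w) + lp2*(bp - w)    # exploitation ('maaa') move
               w <- (w + m) / lam                      # exploration ('waaa') move
            """
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            w = rng.uniform(lo, hi, size=(herd, len(bounds)))   # buffalo locations
            m = np.zeros_like(w)                                # exploitation moves
            bp, bp_val = w.copy(), np.array([f(b) for b in w])  # personal bests
            bg = bp[np.argmin(bp_val)]                          # herd (global) best
            for _ in range(iters):
                m = m + lp1 * (bg - w) + lp2 * (bp - w)
                w = np.clip((w + m) / lam, lo, hi)
                vals = np.array([f(b) for b in w])
                better = vals < bp_val
                bp[better], bp_val[better] = w[better], vals[better]
                bg = bp[np.argmin(bp_val)]
            return bg, bp_val.min()

        print(abo(lambda z: float(np.sum(z**2)), np.array([[-10.0, 10.0]] * 5)))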

    State Transition Algorithm

    In terms of the concepts of state and state transition, a new heuristic random search algorithm named the state transition algorithm is proposed. For continuous function optimization problems, four special transformation operators called rotation, translation, expansion and axesion are designed. Adjustment of the transformations is studied mainly to keep the balance between exploration and exploitation. Convergence of the algorithm is also analyzed based on random search theory. Meanwhile, to strengthen the search ability in high-dimensional spaces, a communication strategy is introduced into the basic algorithm, and intermittent exchange is presented to prevent premature convergence. Finally, experiments are carried out on the algorithms. With 10 common unconstrained continuous benchmark functions used to test the performance, the results show that state transition algorithms are promising due to their good global search capability and convergence properties when compared with some popular algorithms.
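    To make the four operators more concrete, the sketch below implements rotation, translation, expansion and axesion in the form usually reported for the state transition algorithm; the random sampling details and parameter values are assumptions rather than a faithful reproduction of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def rotation(x, alpha=1.0):
            """Rotation: search inside a hypersphere of radius alpha around x."""
            n = len(x)
            R = rng.uniform(-1.0, 1.0, size=(n, n))
            return x + alpha / (n * np.linalg.norm(x)) * R @ x

        def translation(x, x_old, beta=1.0):
            """Translation: line search along the direction of the previous move."""
            d = x - x_old
            return x + beta * rng.uniform() * d / np.linalg.norm(d)

        def expansion(x, gamma=1.0):
            """Expansion: scale each coordinate by a Gaussian factor (global search)."""
            return x + gamma * rng.standard_normal(len(x)) * x

        def axesion(x, delta=1.0):
            """Axesion: perturb a single randomly chosen axis (single-dimensional search)."""
            mask = np.zeros(len(x))
            mask[rng.integers(len(x))] = rng.standard_normal()
            return x + delta * mask * x

        x = np.array([1.0, -2.0, 0.5])
        print(rotation(x), translation(x, x - 0.1), expansion(x), axesion(x))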

    Adaptive Penalty and Barrier function based on Fuzzy Logic

    Optimization methods have been used in many areas of knowledge, such as Engineering, Statistics and Chemistry, among others, to solve optimization problems. In many cases it is not possible to use derivative-based methods, due to the characteristics of the problem to be solved and/or its constraints, for example when the involved functions are non-smooth and/or their derivatives are not known. To solve this type of problem, a Java-based API has been implemented which includes only derivative-free optimization methods and can be used to solve both constrained and unconstrained problems. For solving constrained problems, the classic Penalty and Barrier functions were included in the API. In this paper a new approach to Penalty and Barrier functions, based on Fuzzy Logic, is proposed. Two penalty functions that impose a progressive penalization on solutions that violate the constraints are discussed. The implemented functions impose a low penalization when the violation of the constraints is low and a heavy penalty when the violation is high. Numerical results obtained using twenty-eight test problems, comparing the proposed Fuzzy Logic based functions to six of the classic Penalty and Barrier functions, are presented. Considering the achieved results, it can be concluded that, besides being very robust, the proposed penalty functions also perform very well.
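    The abstract does not give the fuzzy membership functions, so the sketch below only illustrates the progressive idea it describes (a mild penalty for small constraint violations and a heavy one for large violations) with a simple piecewise rule in Python; the threshold and weights are invented, and this is neither the Java API nor the paper's fuzzy formulation.

        import numpy as np

        def violation(x, g_constraints):
            """Total constraint violation for constraints of the form g(x) <= 0."""
            return sum(max(0.0, g(x)) for g in g_constraints)

        def progressive_penalty(x, f, g_constraints, threshold=0.1, mild=10.0, heavy=1000.0):
            """Penalized objective: mild (linear) penalty for small violations,
            heavy (quadratic) penalty once the violation exceeds the threshold.
            Threshold and weights are illustrative, not from the paper."""
            v = violation(x, g_constraints)
            if v <= threshold:
                return f(x) + mild * v
            return f(x) + mild * threshold + heavy * (v - threshold) ** 2

        # Example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1, i.e. g(x) = 1 - x0 - x1 <= 0.
        f = lambda x: float(x[0] ** 2 + x[1] ** 2)
        g = [lambda x: 1.0 - x[0] - x[1]]
        print(progressive_penalty(np.array([0.2, 0.2]), f, g))   # infeasible point: penalized
        print(progressive_penalty(np.array([0.6, 0.6]), f, g))   # feasible point: no penalty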