730 research outputs found

    Load Forecasting Based Distribution System Network Reconfiguration-A Distributed Data-Driven Approach

    Full text link
    In this paper, a network reconfiguration approach based on short-term load forecasting is proposed and solved in a parallel manner. Specifically, a support vector regression (SVR) based short-term load forecasting method is designed to provide accurate load predictions that benefit the network reconfiguration. Because the three-phase balanced optimal power flow problem is nonconvex, a second-order cone program (SOCP) based relaxation is applied to the optimal power flow problem. Then, the alternating direction method of multipliers (ADMM) is used to compute the optimal power flow in a distributed manner. Considering the limited number of switches and the increasing available computation capability, the proposed network reconfiguration is solved in a parallel way. The numerical results demonstrate the feasibility and effectiveness of the proposed approach. Comment: 5 pages, preprint for Asilomar Conference on Signals, Systems, and Computers 201
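
    The abstract describes an SVR-based forecaster feeding the reconfiguration step. Below is a minimal, illustrative sketch of such a forecaster using scikit-learn; the lagged-load features, hyperparameters, and synthetic data are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch: SVR-based short-term load forecasting (scikit-learn).
# Feature construction (lagged loads) and hyperparameters are assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def make_lagged_features(load, n_lags=24):
    """Build (X, y) pairs: each hourly target is predicted from the previous n_lags hours."""
    X = np.array([load[i:i + n_lags] for i in range(len(load) - n_lags)])
    y = load[n_lags:]
    return X, y

# Synthetic hourly load history (stand-in for real feeder measurements).
rng = np.random.default_rng(0)
hours = np.arange(24 * 60)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

X, y = make_lagged_features(load)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:-24], y[:-24])        # train on all but the last day
forecast = model.predict(X[-24:])  # predict the next 24 hours
print(forecast[:4])
```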

    Genetic learning particle swarm optimization

    Get PDF
    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that construction of superior exemplars in PSO is more effective. Hence, this paper first develops a new framework to organically hybridize PSO with another optimization technique for "learning." This leads to a generalized "learning PSO" paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as in a standard PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed in the paper, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, historical search information of particles provides guidance to the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, the global search ability and search efficiency of PSO are both enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of the GL-PSO.
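
    The two-layer idea (genetic operators breed exemplars; particles then learn from them) can be sketched as follows. This is a heavily simplified illustration under assumed operator choices and parameters, not a faithful reproduction of the GL-PSO paper's algorithm.

```python
# Minimal sketch of the GL-PSO idea: exemplars are bred from personal bests via
# crossover/mutation/selection, and particles then learn from those exemplars.
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=-1)

def gl_pso(f, dim=10, n=30, iters=200, w=0.7, c=1.5, pm=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_val = x.copy(), f(x)
    gbest = pbest[np.argmin(pbest_val)]
    exemplars = pbest.copy()
    for _ in range(iters):
        # Crossover: mix each personal best with the global best.
        r = rng.random((n, dim))
        offspring = r * pbest + (1 - r) * gbest
        # Mutation: randomly perturb a few dimensions.
        mask = rng.random((n, dim)) < pm
        offspring[mask] = rng.uniform(-5, 5, mask.sum())
        # Selection: keep the better of the old exemplar and the offspring.
        better = f(offspring) < f(exemplars)
        exemplars[better] = offspring[better]
        # Standard PSO velocity/position update, learning from the exemplars.
        v = w * v + c * rng.random((n, dim)) * (exemplars - x)
        x = x + v
        vals = f(x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

best, best_val = gl_pso(sphere)
print(best_val)
```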

    Global Optimization: Software and Applications

    Get PDF
    Mathematical models are a gateway into both theoretical and experimental understanding. However, sometimes these models need certain parameters to be established in order to obtain the optimal behaviour or value. This is done by using an optimization method that obtains the parameters for optimal behaviour, as described by an objective function whose result is to be minimized (or maximized). Global optimization is a branch of optimization that takes a model and determines the global minimum for a given domain. Global optimization can become extremely challenging when the domain yields multiple local minima. Moreover, the complexity of the mathematical model and the consequent length of calculations tend to increase the amount of time required for the solver to find the solution. To address these challenges, two software packages were developed to aid a solver in optimizing a black-box objective function. The first software package is called Computefarm, a distributed local-resource computing package that parallelizes the iteration step of a solver by distributing objective function evaluations to idle computers. The second software package is an Optimization Database that is used to monitor the global optimization process by storing information on each objective function evaluation along with any extra information on the objective function. The Optimization Database is also used to prevent data from being lost during a failure in the optimization process.

    In this thesis, both Computefarm and the Optimization Database are used in the context of two particular applications. The first application is quantum error correction gate design. Quantum computers cannot rely on software to correct errors because of the quantum mechanical properties that allow non-deterministic behaviour in the quantum bit; the quantum bits can change states between (0, 1) at any point in time. There are various ways to stabilize the quantum bits; however, errors in the system of quantum bits and in the system used to measure their states can occur. Therefore, error correction gates are designed to correct for these different types of errors and ensure a high fidelity in the overall circuit. A simulation of a quantum error correction gate is used to determine the properties of components needed to correct for errors in the circuit of the qubit system. The gate designs for the three-qubit and four-qubit systems are obtained by solving a feasibility problem for the intrinsic fidelity (error-correction percentage) to be above the prescribed 99.99% threshold. The Optimization Database is used with MATLAB's Global Search algorithm to obtain the results for the three-qubit and four-qubit systems. The approach used in this thesis yields a faster high-fidelity (≥ 99.99%) three-qubit gate time than obtained previously, and obtains a solution for a fast high-fidelity four-qubit gate time.

    The second application is Rational Design of Materials, in which global optimization is used to find stable crystal structures of chemical compositions. To predict crystal structures, the enthalpy that determines the stability of the structure is minimized. The Optimization Database is used to store information on the obtained structures, which is later used for identification of the crystal structures, and Computefarm is used to speed up the global optimization process. Ten crystal structures for carbon and five crystal structures for silicon dioxide are obtained by using Global Convergence Particle Swarm Optimization. The stable structures, graphite (carbon) and cristobalite (silicon dioxide), are among the structures obtained. Achieving these results allows for further research on the stable and meta-stable crystal structures to understand various properties such as hardness and thermal conductivity.
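
    The core pattern Computefarm implements, per the abstract, is farming out a solver's per-iteration batch of objective evaluations to idle machines. A minimal single-machine stand-in for that pattern is sketched below; the objective is a toy placeholder, not the quantum-gate or crystal-structure model, and the worker pool is local rather than distributed.

```python
# Sketch of the Computefarm pattern: evaluate a batch of candidate points in
# parallel. multiprocessing on one machine stands in for a pool of idle computers.
import numpy as np
from multiprocessing import Pool

def objective(x):
    # An expensive black-box simulation would go here.
    return float(np.sum((np.asarray(x) - 1.0) ** 2))

def evaluate_batch(points, workers=4):
    with Pool(workers) as pool:
        return pool.map(objective, points)

if __name__ == "__main__":
    candidates = np.random.default_rng(0).uniform(-5, 5, (16, 3))
    values = evaluate_batch(candidates.tolist())
    print(min(values))
```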

    A Discrete-Continuous Algorithm for Globally Optimal Free Flight Trajectory Optimization

    Get PDF

    On Challenging Techniques for Constrained Global Optimization

    Get PDF
    This chapter addresses the challenging and demanding issue of solving a continuous nonlinear constrained global optimization problem. We propose four stochastic methods that rely on a population of points to diversify the search for a global solution: genetic algorithm, differential evolution, artificial fish swarm algorithm and electromagnetism-like mechanism. The performance of different variants of these algorithms is analyzed using a benchmark set of problems. Three different strategies to handle the equality and inequality constraints of the problem are addressed. An augmented Lagrangian-based technique, a tournament selection based on feasibility and dominance rules, and a strategy based on ranking objective and constraint violation are presented and tested. Numerical experiments are reported showing the effectiveness of our suggestions. Two well-known engineering design problems are successfully solved by the proposed methods. © Springer-Verlag Berlin Heidelberg 2013. Fundação para a Ciência e a Tecnologia (Foundation for Science and Technology), Portugal, is acknowledged for financial support under fellowship grant C2007-UMINHO-ALGORITMI-04. The other authors acknowledge FEDER COMPETE, Programa Operacional Fatores de Competitividade (Operational Programme Thematic Factors of Competitiveness) and FCT for financial support under project grant FCOMP-01-0124-FEDER-022674.
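
    One of the constraint-handling strategies named above is tournament selection based on feasibility and dominance rules. A minimal sketch of that rule is shown below; the data representation (dicts with an objective value and a list of inequality-constraint values) is an assumption for illustration.

```python
# Sketch of the feasibility-and-dominance tournament rule: feasible beats
# infeasible, feasible pairs compare by objective, infeasible pairs compare
# by total constraint violation.
def violation(g_values):
    """Total violation for inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def tournament(a, b):
    """Each candidate is a dict with keys 'f' (objective) and 'g' (constraint values)."""
    va, vb = violation(a["g"]), violation(b["g"])
    if va == 0.0 and vb == 0.0:        # both feasible: lower objective wins
        return a if a["f"] <= b["f"] else b
    if va == 0.0:                      # only a is feasible
        return a
    if vb == 0.0:                      # only b is feasible
        return b
    return a if va <= vb else b        # both infeasible: smaller violation wins

# The feasible candidate wins despite its larger objective value.
print(tournament({"f": 1.0, "g": [0.2]}, {"f": 5.0, "g": [-0.1]})["f"])  # 5.0
```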

    Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation

    Full text link
    This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which accounts for two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchased from the day-ahead market using stochastic optimization. Historical data on day-ahead hourly electric power consumption are used to provide forecast results with a forecasting error, which is represented by a chance constraint and reformulated into a deterministic form with a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results demonstrate the validity and effectiveness of the proposed method. Comment: 5 pages, preprint for Asilomar Conference on Signals, Systems, and Computers 201
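
    The reformulation step mentioned in the abstract turns a probabilistic requirement into a deterministic bound via the fitted error distribution. A minimal sketch under assumed mixture parameters is given below; it shows the generic quantile-based conversion, not the paper's exact formulation.

```python
# Sketch of the chance-constraint reformulation:
#   P(load <= purchase) >= 1 - eps   becomes   purchase >= F^{-1}(1 - eps),
# where F is a Gaussian-mixture model of the forecast distribution.
# Mixture weights/means/sigmas are assumed values for illustration.
from math import erf, sqrt

weights = [0.6, 0.4]        # GMM component weights
means   = [100.0, 110.0]    # MWh
sigmas  = [3.0, 6.0]

def gmm_cdf(x):
    return sum(w * 0.5 * (1 + erf((x - m) / (s * sqrt(2))))
               for w, m, s in zip(weights, means, sigmas))

def gmm_quantile(p, lo=0.0, hi=300.0, tol=1e-6):
    """Invert the mixture CDF by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gmm_cdf(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

eps = 0.05
min_purchase = gmm_quantile(1 - eps)   # deterministic equivalent of the chance constraint
print(round(min_purchase, 2))
```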

    An Algorithmic Framework for Multiobjective Optimization

    Get PDF
    Multiobjective (MO) optimization is an emerging field that is increasingly encountered across many disciplines. Various metaheuristic techniques such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems that have more than two objectives. In addition, extensive computational overhead emerges when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure to generate efficient and effective algorithms. The proposed framework generates new high-performance algorithms with minimal computational overhead for MO optimization.
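
    Of the scalarization techniques mentioned, the weighted sum approach is the simplest to illustrate: a two-objective problem is collapsed into one objective and solved repeatedly for different weights. The sketch below uses assumed toy objectives and a generic SciPy solver, purely as an illustration of the technique.

```python
# Sketch of weighted-sum scalarization for a two-objective problem: sweeping
# the weight w traces out (part of) the Pareto front.
import numpy as np
from scipy.optimize import minimize

def f1(x):
    return (x[0] - 1) ** 2 + x[1] ** 2

def f2(x):
    return x[0] ** 2 + (x[1] - 1) ** 2

front = []
for w in np.linspace(0, 1, 11):
    scalarized = lambda x, w=w: w * f1(x) + (1 - w) * f2(x)
    res = minimize(scalarized, x0=[0.5, 0.5])   # single-objective solve per weight
    front.append((f1(res.x), f2(res.x)))

for point in front[:3]:
    print(tuple(round(v, 3) for v in point))
```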