5 research outputs found

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. Recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems. Comment: 18 pages.
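
    The hierarchy idea mentioned in the abstract can be illustrated with a first-level (Shor) semidefinite relaxation of a nonconvex quadratic problem: minimize x^T C x subject to x_i^2 = 1, lifted via X = x x^T with the rank-one constraint dropped. The sketch below is a minimal illustration assuming the cvxpy modelling package and a random cost matrix C; neither is taken from the paper.

```python
# Minimal sketch of a first-level (Shor) SDP relaxation of the nonconvex
# problem  min x^T C x  s.t.  x_i^2 = 1.  cvxpy and the random C are
# illustrative assumptions, not artifacts of the paper itself.
import numpy as np
import cvxpy as cp

n = 5
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                      # symmetric cost matrix

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,                 # X is positive semidefinite (lifted x x^T)
               cp.diag(X) == 1]        # encodes x_i^2 = 1 after lifting
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()

print("SDP lower bound on the nonconvex optimum:", prob.value)
```

    Higher levels of such hierarchies add moment-type constraints that tighten this bound, at the cost of larger cones.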

    Convex Relaxation of Optimal Power Flow, Part II: Exactness

    This tutorial summarizes recent advances in the convex relaxation of the optimal power flow (OPF) problem, focusing on structural properties rather than algorithms. Part I presents two power flow models, formulates OPF and their relaxations in each model, and proves equivalence relations among them. Part II presents sufficient conditions under which the convex relaxations are exact. Comment: Citation: IEEE Transactions on Control of Network Systems, June 2014. This is an extended version with Appendix VI that proves the main results in this tutorial.
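
    One common way to see what "exactness" means here is the second-order cone relaxation step in the branch flow (DistFlow) model. The notation below follows the usual Farivar–Low convention and is an illustrative reconstruction, not a quotation from the paper: the nonconvex equality defining the squared branch current is relaxed to an inequality.

```latex
% Sketch of the SOCP relaxation step in the branch flow model (notation assumed).
\begin{align*}
  \ell_{ij} &= \frac{P_{ij}^2 + Q_{ij}^2}{v_i}
    && \text{nonconvex equality in the exact branch flow model} \\
  \ell_{ij} &\ge \frac{P_{ij}^2 + Q_{ij}^2}{v_i}
    && \text{rotated second-order cone constraint; convex relaxation}
\end{align*}
```

    Here \ell_{ij} is the squared branch current magnitude, P_{ij} and Q_{ij} are the branch power flows, and v_i is the squared voltage magnitude at the sending bus. The relaxation is called exact when the inequality is tight at every optimum, which is what the sufficient conditions presented in Part II are meant to guarantee.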

    Recent Advances in Randomized Methods for Big Data Optimization

    In this thesis, we discuss and develop randomized algorithms for big data problems. In particular, we study finite-sum optimization with newly emerged variance-reduction optimization methods (Chapter 2), explore the efficiency of second-order information applied to both convex and non-convex finite-sum objectives (Chapter 3), and employ fast first-order methods in power system problems (Chapter 4).

    In Chapter 2, we propose two variance-reduced gradient algorithms, mS2GD and SARAH. mS2GD incorporates a mini-batching scheme to improve the theoretical complexity and practical performance of SVRG/S2GD, aiming to minimize a strongly convex function represented as the average of a large number of smooth convex functions plus a simple non-smooth convex regularizer. SARAH, short for StochAstic Recursive grAdient algoritHm, uses a stochastic recursive gradient and targets the average of a large number of smooth functions in both the convex and non-convex cases. Both methods fall into the category of variance-reduction optimization and achieve a total complexity of O((n+κ)log(1/ε)) to reach an ε-accurate solution for strongly convex objectives, while SARAH also maintains sub-linear convergence for non-convex problems. In addition, SARAH admits a practical variant, SARAH+, motivated by the linear convergence of the expected stochastic gradients in its inner loops.

    In Chapter 3, we show that randomized batches can be combined with second-order information to improve convergence in both theory and practice, within an L-BFGS framework as a novel approach to finite-sum optimization problems. We provide theoretical analyses for both convex and non-convex objectives. We also propose LBFGS-F, a variant in which the Fisher information matrix is used instead of Hessian information, and show that it is applicable in a distributed environment for the popular least-squares and cross-entropy losses.

    In Chapter 4, we develop fast randomized algorithms for polynomial optimization problems arising from alternating-current optimal power flow (ACOPF) in power systems. Traditional research on power system problems has focused on solvers based on second-order methods, and no randomized algorithms had been developed. First, we propose a coordinate-descent algorithm as an online solver for time-varying optimization problems in power systems; we bound from above the difference between the approximate optimal cost generated by our algorithm and the optimal cost of a relaxation using the most recent data, by a function of the properties of the instance and the rate at which the instance changes over time. Second, we focus on a steady-state problem in power systems and study means of switching from solving a convex relaxation to a Newton method applied to a non-convex (augmented) Lagrangian of the problem.
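
    As a concrete illustration of the recursive gradient estimate described for Chapter 2, the following sketch runs the SARAH recursion on a toy least-squares finite sum. The data, step size and loop lengths are illustrative assumptions, and numpy is used in place of any code from the thesis.

```python
# Minimal sketch of the SARAH recursion on the toy finite sum
# f(w) = (1/n) * sum_i (a_i^T w - b_i)^2.  Data, step size and loop
# lengths are illustrative assumptions, not values from the thesis.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

def grad_i(w, i):
    """Gradient of the i-th component f_i(w) = (a_i^T w - b_i)^2."""
    return 2.0 * (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    """Gradient of the finite-sum objective (1/n) * sum_i f_i(w)."""
    return 2.0 * A.T @ (A @ w - b) / n

def sarah(w0, eta=0.01, outer=20, inner=2 * n):
    w = w0.copy()
    for _ in range(outer):
        w_prev = w.copy()
        v = full_grad(w)                  # one full gradient per outer loop
        w = w - eta * v
        for _ in range(inner):
            i = rng.integers(n)
            # Recursive estimate: v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev, w = w, w - eta * v
    return w

w_hat = sarah(np.zeros(d))
print("final average loss:", np.mean((A @ w_hat - b) ** 2))
```

    Unlike SVRG, the inner-loop estimate is built recursively from the previous estimate rather than from a fixed snapshot gradient, which is what gives SARAH its distinct convergence behaviour.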

    Computational Complexity of Electrical Power System Problems

    The study of the computational complexity of real-world applications, although theoretical, can provide many pragmatic outcomes: for example, demonstrating that certain types of algorithms cannot exist for a problem, creating challenging benchmark examples, and offering new insights into the underlying structure and properties of the problem. In this thesis, we study the computational complexity of several important problems arising in electrical power systems.

    Knowledge of the current state of the power system is important for power network operators. It helps, for example, to predict whether the network is trending towards an undesirable operating state, or whether a power line is working at its operational limits. The state of a power system is determined by the demand, the generation, and the bus voltage magnitudes and phase angles. The demand of loads can be reliably estimated via forecasts, historic records and/or measurements, and the operators of generators report the generation values. Given generation and demand values, the voltage magnitudes and phase angles can be computed; this is the Power Flow problem. The cost of generating power often varies from generator to generator. In the Optimal Power Flow problem, the aim is to find the cheapest generation dispatch such that the forecast demand is satisfied. Disasters, such as storms or floods, and operator errors have the potential to destroy parts of the network, which can make it impossible to satisfy all the demand. In the Maximum Power Flow problem, the aim is to find a generation dispatch that satisfies as much demand as possible.

    In this thesis, we prove that the Maximum Power Flow, Optimal Power Flow and Power Flow problems are NP-hard for radial networks in the Alternating Current power flow model and for planar networks in the Linear AC Approximation (DC) power flow model with line switching. Furthermore, we show that there does not exist a polynomial approximation algorithm for the Optimal Power Flow problem in any of these settings. We also study the complexity of the Lossless-Sin AC Approximation power flow model, showing that the Maximum Power Flow and Optimal Power Flow problems are strongly NP-hard for planar networks.
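
    To make the Power Flow computation described above concrete, here is a small worked example in the Linear AC Approximation (DC) model, where recovering the bus angles from net injections reduces to solving a linear system. The three-bus network, reactances and injections are illustrative assumptions, not data from the thesis.

```python
# Minimal sketch of Power Flow in the Linear AC Approximation (DC) model on a
# 3-bus example: given net injections, recover bus angles and line flows.
# The network data are illustrative, not from the thesis.
import numpy as np

# Lines as (from_bus, to_bus, reactance); buses are 0 (slack), 1, 2.
lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]
n = 3

# Bus susceptance matrix B (a weighted graph Laplacian with weights 1/x).
B = np.zeros((n, n))
for f, t, x in lines:
    s = 1.0 / x
    B[f, f] += s; B[t, t] += s
    B[f, t] -= s; B[t, f] -= s

P = np.array([0.9, -0.5, -0.4])   # net injections (generation minus demand), sum to zero

# Fix the slack bus angle to zero and solve the reduced system B' theta' = P'.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

flows = {(f, t): (theta[f] - theta[t]) / x for f, t, x in lines}
print("bus angles (rad):", theta)
print("line flows (p.u.):", flows)
```

    The hardness results in the thesis concern richer settings (the full AC model on radial networks, or the DC model combined with line switching), where no such simple linear solve suffices.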