
    A differential analysis of the power flow equations

    The AC power flow equations are fundamental to all aspects of power systems planning and operations. They are routinely solved using Newton-Raphson-like methods. However, there is little theoretical understanding of when these algorithms are guaranteed to find a solution of the power flow equations or how long they may take to converge. Further, it is known that in general these equations have multiple solutions and can exhibit chaotic behavior. In this paper, we show that the power flow equations can be solved efficiently provided that the solution lies in a certain set. We introduce a family of convex domains, characterized by Linear Matrix Inequalities, in the space of voltages such that there is at most one power flow solution in each of these domains. Further, if a solution exists in one of these domains, it can be found efficiently, and if one does not exist, a certificate of non-existence can also be obtained efficiently. The approach is based on the theory of monotone operators and related algorithms for solving variational inequalities involving monotone operators. We validate our approach on IEEE test networks and show that practical power flow solutions lie within an appropriately chosen convex domain. Comment: arXiv admin note: text overlap with arXiv:1506.0847
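    As a point of reference for the kind of solver the abstract discusses, the sketch below runs a plain Newton-Raphson iteration on a toy 2-bus power flow problem. It is not the paper's monotone-operator method, and the network data (line impedance, load values) are illustrative assumptions.

```python
# Minimal sketch: Newton-Raphson power flow on a made-up 2-bus network.
# Bus 1 is the slack bus (V = 1.0 pu, angle 0); bus 2 is a PQ bus.
import numpy as np

y = 1.0 / (0.01 + 0.1j)                   # assumed series admittance of the single line
Ybus = np.array([[y, -y], [-y, y]])        # bus admittance matrix
P2_spec, Q2_spec = -0.5, -0.2              # assumed specified injections at bus 2 (a load)

def mismatch(x):
    """Power mismatch at bus 2 for unknowns x = [theta2, V2]."""
    theta2, V2 = x
    V = np.array([1.0 + 0.0j, V2 * np.exp(1j * theta2)])
    S2 = V[1] * np.conj(Ybus @ V)[1]       # complex power injected at bus 2
    return np.array([S2.real - P2_spec, S2.imag - Q2_spec])

# Newton-Raphson iteration with a finite-difference Jacobian.
x = np.array([0.0, 1.0])                   # flat start
eps = 1e-6
for it in range(20):
    f = mismatch(x)
    if np.linalg.norm(f) < 1e-8:
        break
    J = np.zeros((2, 2))
    for k in range(2):
        dx = np.zeros(2)
        dx[k] = eps
        J[:, k] = (mismatch(x + dx) - f) / eps
    x = x - np.linalg.solve(J, f)

print(f"theta2 = {x[0]:.4f} rad, V2 = {x[1]:.4f} pu, iterations = {it}")
```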

    Distributed Random Convex Programming via Constraints Consensus

    This paper discusses distributed approaches for the solution of random convex programs (RCPs). RCPs are convex optimization problems with a (usually large) number N of randomly extracted constraints; they arise in several application areas, especially in the context of decision making under uncertainty, see [2],[3]. Here we consider a setup in which instances of the random constraints (the scenario) are not held by a single centralized processing unit, but are distributed among different nodes of a network. Each node "sees" only a small subset of the constraints and may communicate with its neighbors. The objective is to make all nodes converge to the same solution as the centralized RCP. To this end, we develop two distributed algorithms that are variants of the constraints consensus algorithm [4],[5]: the active constraints consensus (ACC) algorithm and the vertex constraints consensus (VCC) algorithm. We show that the ACC algorithm computes the overall optimal solution in finite time and with almost surely bounded communication at each iteration. The VCC algorithm is instead tailored to the special case in which the constraint functions are convex also with respect to the uncertain parameters, and it computes the solution in a number of iterations bounded by the diameter of the communication graph. We further devise a variant of the VCC algorithm, namely quantized vertex constraints consensus (qVCC), to cope with the case in which the communication bandwidth among processors is bounded. We discuss several applications of the proposed distributed techniques, including estimation, classification, and random model predictive control, and we present a numerical analysis of the performance of the proposed methods. As a complementary numerical result, we show that parallel computation of the scenario solution using the ACC algorithm significantly outperforms its centralized equivalent.
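    To make the object concrete, the following sketch builds and solves a small centralized scenario program (a random linear program) and reports its active constraints, i.e. the small support set that constraints consensus has nodes exchange. The dimensions, the random data, and the use of scipy.optimize.linprog are illustrative assumptions, not the authors' distributed implementation.

```python
# Minimal sketch: a toy random convex program (scenario LP) solved centrally,
# showing that the solution is supported by only a few active constraints.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, N = 2, 200                        # decision dimension, number of random constraints

# Randomly extracted constraints a_i^T x <= b_i, one per scenario (made-up data).
A = rng.normal(size=(N, n))
b = rng.uniform(1.0, 2.0, size=N)
c = np.array([1.0, 1.0])             # objective: maximize c^T x over the feasible set

# linprog minimizes, so pass -c to maximize c^T x; variables are unbounded.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
x_star = res.x

# Only a handful of constraints are active at the optimum; in constraints
# consensus, nodes exchange (candidates for) exactly this small set.
active = np.where(np.abs(A @ x_star - b) < 1e-6)[0]
print("optimal x:", x_star, "active constraints:", active)
```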

    On Quasi-Newton Forward–Backward Splitting: Proximal Calculus and Convergence

    We introduce a framework for quasi-Newton forward–backward splitting algorithms (proximal quasi-Newton methods) with a metric induced by diagonal ± rank-r symmetric positive definite matrices. This special type of metric allows for a highly efficient evaluation of the proximal mapping. The key to this efficiency is a general proximal calculus in the new metric. By using duality, formulas are derived that relate the proximal mapping in a rank-r modified metric to the original metric. We also describe efficient implementations of the proximity calculation for a large class of functions; the implementations exploit the piecewise linear nature of the dual problem. We then apply these results to the acceleration of composite convex minimization problems, which leads to elegant quasi-Newton methods for which we prove convergence. The algorithm is tested on several numerical examples and compared against a comprehensive list of alternatives from the literature. Our quasi-Newton splitting algorithm with the prescribed metric compares favorably against the state of the art. The algorithm has extensive applications, including signal processing, sparse recovery, machine learning, and classification, to name a few. Comment: arXiv admin note: text overlap with arXiv:1206.115
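    For context, the sketch below runs plain forward–backward splitting (proximal gradient with a scalar step, the Euclidean-metric baseline) on a small lasso problem; the paper's contribution is to replace the scalar step with a diagonal ± rank-r metric and a matching proximal calculus, which this sketch does not implement. Problem data and sizes are made up for illustration.

```python
# Minimal sketch: forward-backward splitting (ISTA) for the lasso problem
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
# with an ordinary Euclidean proximal step (scalar metric 1/L).
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 100
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[:5] = rng.normal(size=5)        # sparse ground truth (made up)
b = A @ x_true + 0.01 * rng.normal(size=m)
lam = 0.1

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
t = 1.0 / L                            # step size

def soft_threshold(v, tau):
    """Prox of tau*||.||_1 in the Euclidean metric."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)                       # forward (gradient) step
    x = soft_threshold(x - t * grad, t * lam)      # backward (proximal) step

print("nonzeros recovered:", np.count_nonzero(np.abs(x) > 1e-4))
```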