
    An Alternating Trust Region Algorithm for Distributed Linearly Constrained Nonlinear Programs, Application to the AC Optimal Power Flow

    A novel trust region method for solving linearly constrained nonlinear programs is presented. The proposed technique is amenable to a distributed implementation, as its salient ingredient is an alternating projected gradient sweep in place of the Cauchy point computation. It is proven that the algorithm yields a sequence that globally converges to a critical point. As a result of some changes to the standard trust region method, namely a proximal regularisation of the trust region subproblem, it is shown that the local convergence rate is linear with an arbitrarily small ratio. Thus, convergence is locally almost superlinear under standard regularity assumptions. The proposed method is successfully applied to compute local solutions to alternating current optimal power flow problems in transmission and distribution networks. Moreover, the new mechanism for computing a Cauchy point compares favourably against the standard projected search in terms of its activity detection properties.
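    As a rough illustration of the alternating projected-gradient idea mentioned above (a minimal sketch, not the paper's algorithm), the code below computes a Cauchy-like point for a quadratic trust-region model by sweeping over variable blocks; the box constraints, infinity-norm trust region, two-block partition, and sufficient-decrease constants are all illustrative assumptions not taken from the abstract.

```python
# Minimal sketch: block-wise ("alternating") projected-gradient sweep producing a
# Cauchy-like point for a trust-region model  m(d) = g^T d + 0.5 d^T B d,
# subject to simple bounds l <= x + d <= u and ||d||_inf <= Delta.
import numpy as np

def project(d, x, l, u, Delta):
    """Project a step onto the box constraints intersected with the trust region."""
    return np.clip(d, np.maximum(l - x, -Delta), np.minimum(u - x, Delta))

def alternating_cauchy_sweep(x, g, B, l, u, Delta, blocks, t0=1.0, beta=0.5, c=1e-4):
    """One sweep of block-wise projected-gradient steps on the quadratic model."""
    d = np.zeros_like(x)
    model = lambda s: g @ s + 0.5 * s @ B @ s
    for blk in blocks:                        # alternate over variable blocks
        t = t0
        while True:                           # backtrack until sufficient model decrease
            d_trial = d.copy()
            d_trial[blk] -= t * (g[blk] + (B @ d)[blk])   # block gradient of the model
            d_trial = project(d_trial, x, l, u, Delta)
            if model(d_trial) <= model(d) - c / t * np.sum((d_trial - d) ** 2) or t < 1e-12:
                d = d_trial
                break
            t *= beta
    return d                                  # Cauchy-like point for the trust-region test

# Illustrative usage on a random convex quadratic model
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
B = A @ A.T + np.eye(n)
g = rng.standard_normal(n)
x = np.zeros(n)
l, u = -np.ones(n), np.ones(n)
blocks = [np.arange(0, 3), np.arange(3, 6)]   # two variable blocks ("subsystems")
d = alternating_cauchy_sweep(x, g, B, l, u, Delta=0.5, blocks=blocks)
print("model decrease:", g @ d + 0.5 * d @ B @ d)
```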

    Distributed Optimization with Application to Power Systems and Control

    In many engineering domains, systems are composed of partially independent subsystems—power systems are composed of distribution and transmission systems, teams of robots are composed of individual robots, and chemical process systems are composed of vessels, heat exchangers and reactors. Often, these subsystems should reach a common goal such as satisfying a power demand with minimum cost, flying in a formation, or reaching an optimal set-point. At the same time, limited information exchange is desirable—for confidentiality reasons but also due to communication constraints. Moreover, a fast and reliable decision process is key as applications might be safety-critical. Mathematical optimization techniques are among the most successful tools for controlling systems optimally with feasibility guarantees. Yet, they are often centralized—all data has to be collected in one central and computationally powerful entity. Methods from distributed optimization control the subsystems in a distributed or decentralized fashion, reducing or avoiding central coordination. These methods have a long and successful history. Classical distributed optimization algorithms, however, are typically designed for convex problems. Hence, they are only partially applicable in the above domains, since many of them lead to optimization problems with non-convex constraints. This thesis develops one of the first frameworks for distributed and decentralized optimization with non-convex constraints. Based on the Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) algorithm, a bi-level distributed ALADIN framework is presented, solving the coordination step of ALADIN in a decentralized fashion. This framework can handle various decentralized inner algorithms, two of which we develop here: a decentralized variant of the Alternating Direction Method of Multipliers (ADMM) and a novel decentralized Conjugate Gradient algorithm. The decentralized Conjugate Gradient method is, to the best of our knowledge, the first decentralized algorithm guaranteed to converge to the exact solution in a finite number of iterations. Sufficient conditions for fast local convergence of bi-level ALADIN are derived. Bi-level ALADIN strongly reduces the communication and coordination effort of ALADIN and preserves its fast convergence guarantees. We illustrate these properties on challenging problems from power systems and control, and compare performance to the widely used ADMM. The developed methods are implemented in the open-source MATLAB toolbox ALADIN-α, one of the first toolboxes for decentralized non-convex optimization. ALADIN-α comes with a rich set of application examples from different domains showing its broad applicability. As an additional contribution, this thesis provides new insights into why state-of-the-art distributed algorithms might encounter issues for constrained problems.
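    ADMM, which the abstract names both as a decentralized inner algorithm and as the comparison baseline, can be illustrated with a minimal consensus example. The sketch below is the generic textbook form, not the bi-level ALADIN framework of the thesis; the two quadratic local objectives, the shared scalar z, and the penalty rho are illustrative assumptions.

```python
# Minimal consensus-ADMM sketch for two agents that must agree on a shared scalar z:
#   minimize  f1(x1) + f2(x2)   subject to  x1 = z,  x2 = z.
import numpy as np

def local_step(i, z, lam, rho):
    """Agent i minimizes f_i(x) + lam*(x - z) + rho/2*(x - z)^2 (closed form here)."""
    # f1(x) = (x - 1)^2      -> argmin = ( 2 + rho*z - lam) / (2 + rho)
    # f2(x) = 2*(x + 2)^2    -> argmin = (-8 + rho*z - lam) / (4 + rho)
    a, b = (2.0, 2.0) if i == 0 else (4.0, -8.0)
    return (b + rho * z - lam) / (a + rho)

rho, z = 1.0, 0.0
lam = np.zeros(2)                       # one multiplier per consensus constraint
for k in range(200):
    x = np.array([local_step(i, z, lam[i], rho) for i in range(2)])  # parallel local solves
    z = np.mean(x + lam / rho)          # coordination step: averaging of local copies
    lam += rho * (x - z)                # dual update
print("consensus value:", z)            # approaches the centralized optimum z = -1
```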

    Structure Exploitation in Mixed-Integer Optimization with Applications to Energy Systems

    The goal of this thesis is to develop new numerical methods for mixed-integer optimization problems in order to achieve improved speed and scalability. This is done by exploiting common problem structures such as separability or turnpike properties. Methods that exploit these structures have already been developed in the fields of distributed optimization and optimal control, but they are not directly applicable to mixed-integer problems. New methods are required in order to use distributed computing resources for solving mixed-integer problems. To this end, several extensions of existing methods as well as novel techniques for mixed-integer optimization are presented. Benchmark problems from power and energy systems are used to demonstrate that the presented methods lead to faster run times and make it possible to solve large problems that otherwise could not be solved centrally. This thesis contains the following contributions:
    - An extension of the Augmented Lagrangian Alternating Direction Inexact Newton algorithm for distributed optimization to mixed-integer problems.
    - A new, partially distributed optimization algorithm for mixed-integer optimization based on outer approximation methods.
    - A new optimization algorithm for distributed mixed-integer optimization based on branch-and-bound.
    - A first investigation of turnpike properties in optimal control problems with mixed-integer decision variables, and a dedicated algorithm for solving such problems.
    - A new branch-and-bound heuristic that uses a priori problem information more efficiently than current warm-start techniques.
    Finally, it is shown that the results of the presented algorithms for distributed mixed-integer optimization depend strongly on the chosen partitioning. To this end, a study of partitioning methods for distributed optimization is also presented.
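    To illustrate the kind of mixed-integer structure these distributed methods target, the following sketch shows a generic, centralized branch-and-bound loop on an LP relaxation. It is not one of the thesis' algorithms; the small example problem, the use of SciPy's LP solver, and the tolerances are illustrative assumptions.

```python
# Minimal branch-and-bound sketch for a small pure-integer linear program.
import math
from scipy.optimize import linprog

# maximize 5x + 4y  s.t.  6x + 4y <= 24,  x + 2y <= 6,  x, y >= 0 integer
c = [-5.0, -4.0]                        # linprog minimizes, so negate the objective
A_ub = [[6.0, 4.0], [1.0, 2.0]]
b_ub = [24.0, 6.0]

best = {"obj": math.inf, "x": None}     # incumbent (minimization of -objective)

def branch_and_bound(bounds):
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if res.status != 0 or res.fun >= best["obj"]:       # infeasible node or pruned by bound
        return
    frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
    if not frac:                                        # integer-feasible: update incumbent
        best["obj"], best["x"] = res.fun, [round(v) for v in res.x]
        return
    i, v = frac[0], res.x[frac[0]]                      # branch on first fractional variable
    lo, hi = bounds[i]
    branch_and_bound(bounds[:i] + [(lo, math.floor(v))] + bounds[i + 1:])   # x_i <= floor(v)
    branch_and_bound(bounds[:i] + [(math.ceil(v), hi)] + bounds[i + 1:])    # x_i >= ceil(v)

branch_and_bound([(0, None), (0, None)])
print("optimal integer solution:", best["x"], "objective:", -best["obj"])
```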

    Distributed Optimization with Application to Power Systems and Control

    Mathematical optimization techniques are among the most successful tools for controlling technical systems optimally with feasibility guarantees. Yet, they are often centralized—all data has to be collected in one central and computationally powerful entity. Methods from distributed optimization overcome this limitation. Classical approaches, however, are often not applicable due to non-convexities. This work develops one of the first frameworks for distributed non-convex optimization.

    Data-driven coordination of subproblems in enterprise-wide optimization under organizational considerations

    While decomposition techniques in mathematical programming are usually designed for numerical efficiency, coordination problems within enterprise-wide optimization are often limited by organizational rather than numerical considerations. We propose a “data-driven” coordination framework that recovers the same optimum as the equivalent centralized formulation while allowing coordinating agents to retain autonomy, privacy, and flexibility over their own objectives, constraints, and variables. This approach updates the coordinated, or shared, variables via derivative-free optimization (DFO), using only the “data” obtained by evaluating the agents' optimal subproblems at trial values of the coordinated variables. We compare the performance of our framework using different DFO solvers (CUATRO, Py-BOBYQA, DIRECT-L, GPyOpt) against conventional distributed optimization (ADMM) on three case studies: collaborative learning, facility location, and multiobjective blending. We show that in low-dimensional and nonconvex subproblems, the exploration-exploitation trade-offs of DFO solvers can be leveraged to converge faster and to a better solution than conventional distributed optimization.
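    The coordination idea can be sketched as follows (an illustrative stand-in, not the authors' framework): the coordinator queries each agent with a trial value of the shared variable, receives only the agent's optimal subproblem value as "data", and updates the shared variable with a derivative-free solver. SciPy's Nelder-Mead is used here as a generic stand-in for the DFO solvers named above, and the agents' private subproblems are made-up examples.

```python
# Minimal sketch of derivative-free coordination over a shared variable z.
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def agent_subproblem(i, z):
    """Agent i solves its private problem for a fixed shared variable z and
    reports only the optimal value (its objective and data stay private)."""
    targets = [1.0, -2.0, 0.5]
    obj = lambda x: (x - targets[i]) ** 2 + 0.1 * (x - z) ** 2   # private cost + coupling
    return minimize_scalar(obj).fun

def coordination_objective(z):
    """'Data' seen by the coordinator: the sum of reported optimal values."""
    zv = np.asarray(z).ravel()[0]          # Nelder-Mead passes a length-1 array
    return sum(agent_subproblem(i, zv) for i in range(3))

# Derivative-free update of the shared (coordinated) variable
res = minimize(coordination_objective, x0=np.array([0.0]), method="Nelder-Mead")
print("coordinated shared variable:", res.x[0])
```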

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program both in overview and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Spatial and temporal hierarchical decomposition methods for the optimal power flow problem

    The subject of this thesis is the development of spatial and temporal decomposition methods for the optimal power flow problem, such as in transmission-distribution network topologies. In this context, we propose novel decomposition interfaces and effective methodology for both the spatial and temporal dimensions, applicable to linear and non-linear representations of the OPF problem. These two decomposition strategies are combined with a Benders-based algorithm and have advantages in model building time, memory management and solving time. For example, in the 2880-period linear problems, the decomposition finds optimal solutions up to 50 times faster and allows even larger instances to be solved; and in multi-period non-linear problems with 48 periods, close-to-optimal feasible solutions are found 7 times faster. With these decompositions, detailed networks can be optimized in coordination, effectively exploiting the value of the time-linked elements in both transmission and distribution levels while speeding up the solution process, preserving privacy, and adding flexibility when dealing with different models at each level. In the non-linear methodology, significant challenges, such as active set determination, instability and non-convex overestimations, may hinder its effectiveness; they are addressed, making the proposed methodology more robust and stable. A test network was constructed by combining standard publicly available networks, resulting in nearly 1000 buses and lines with up to 8760 connected periods; several interfaces were presented depending on the problem type and its topology, using a modified Benders algorithm. Insight was given into why a Benders-based decomposition was used for this type of problem instead of a common alternative: ADMM. The methodology is useful mainly in two sets of applications: when highly detailed long-term linear operational problems need to be solved, such as in planning frameworks where the operational problems solved assume no prior knowledge; and in full AC-OPF problems where prior information from historic solutions can be used to speed up convergence.
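    The Benders-based structure described above can be illustrated with a minimal, generic loop (a textbook sketch, not the thesis' interfaces): a master problem chooses a linking "investment" variable, per-period subproblems evaluate operation costs for that choice, and their dual multipliers generate value-function cuts for the master. The toy instance (demands, costs) and the tolerance below are illustrative assumptions.

```python
# Minimal Benders sketch for  min_{y>=0}  y + sum_t Q_t(y),
# with per-period subproblems  Q_t(y) = min{ 2*x_t : x_t >= d_t - y, x_t >= 0 }.
from scipy.optimize import linprog

d = [3.0, 5.0, 4.0]                 # per-period "demands" linking master and subproblems
T = len(d)
cuts = []                           # each cut: (period t, dual lambda) from a trial y

def solve_master():
    """Master over y and one value-function variable theta_t per period."""
    c = [1.0] + [1.0] * T                            # objective: y + sum_t theta_t
    A_ub, b_ub = [], []
    for t, lam in cuts:                              # cut: theta_t >= lam * (d_t - y)
        row = [0.0] * (1 + T)
        row[0], row[1 + t] = -lam, -1.0
        A_ub.append(row)
        b_ub.append(-lam * d[t])
    res = linprog(c, A_ub=A_ub or None, b_ub=b_ub or None,
                  bounds=[(0, None)] * (1 + T), method="highs")
    return res.x[0], res.fun                         # trial y and lower bound

def solve_subproblem(t, y):
    """Period subproblem solved in closed form; returns optimal value and dual."""
    value = 2.0 * max(d[t] - y, 0.0)
    dual = 2.0 if d[t] > y else 0.0                  # multiplier of  x_t >= d_t - y
    return value, dual

for it in range(20):
    y, lower = solve_master()
    sub = [solve_subproblem(t, y) for t in range(T)]
    upper = y + sum(v for v, _ in sub)               # feasible objective at the trial y
    if upper - lower <= 1e-6:                        # bounds meet: stop
        break
    cuts.extend((t, lam) for t, (_, lam) in enumerate(sub))
print("Benders solution: y =", y, "objective =", upper)
```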