Linear Optimal Power Flow Using Cycle Flows
Linear optimal power flow (LOPF) algorithms use a linearization of the
alternating current (AC) load flow equations to optimize generator dispatch in
a network subject to the loading constraints of the network branches. Common
algorithms use the voltage angles at the buses as optimization variables, but
alternatives can be computationally advantageous. In this article we provide a
review of existing methods and describe a new formulation that expresses the
loading constraints directly in terms of the flows themselves, using a
decomposition of the network graph into a spanning tree and closed cycles. We
provide a comprehensive study of the computational performance of the various
formulations, in settings that include computationally challenging applications
such as multi-period LOPF with storage dispatch and generation capacity
expansion. We show that the new formulation of the LOPF solves up to 7 times
faster than the angle formulation using a commercial linear programming solver,
while another existing cycle-based formulation solves up to 20 times faster,
with an average speed-up factor of 3 for the standard networks considered
here. If generation capacities are also optimized, the average speed-up rises
to a factor of 12, reaching a factor of 213 in a particular instance. The
speed-up is largest for networks with many buses and decentralized generators
throughout the network, which is highly relevant given the rise of distributed
renewable generation and the computational challenge of operation and planning
in such networks.
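To illustrate the cycle decomposition concretely: Kirchhoff's current law ties
the line flows to the nodal injections through the incidence matrix, while
Kirchhoff's voltage law adds one constraint per independent cycle, so the flows
themselves can serve as LP variables. Below is a minimal, hypothetical Python
sketch on a three-bus toy network (our illustration, not the paper's
implementation; all names and data are made up), using networkx for the cycle
basis and scipy for the LP:

```python
# Minimal sketch of a cycle-flow LOPF (illustrative toy, not the paper's code).
import networkx as nx
import numpy as np
from scipy.optimize import linprog

# Toy network: buses 0-2, one line per pair (a single cycle).
lines = [(0, 1), (1, 2), (0, 2)]        # (from_bus, to_bus)
x = np.array([0.1, 0.1, 0.1])            # line reactances
f_max = np.array([60.0, 60.0, 60.0])     # thermal limits
d = np.array([0.0, 50.0, 50.0])          # demand per bus
g_max = np.array([120.0, 0.0, 0.0])      # generator at bus 0 only
cost = np.array([10.0, 0.0, 0.0])        # marginal cost per bus

n_b, n_l = len(d), len(lines)
K = np.zeros((n_b, n_l))                 # bus-line incidence matrix
for l, (i, j) in enumerate(lines):
    K[i, l], K[j, l] = 1.0, -1.0

# One KVL constraint per independent cycle: the reactance-weighted flows
# around each cycle must sum to zero.  networkx returns cycles as node
# lists; orient each traversed edge against the line's reference direction.
G = nx.Graph()
G.add_edges_from(lines)
edge_index = {frozenset(e): l for l, e in enumerate(lines)}
C = []
for cyc in nx.cycle_basis(G):
    row = np.zeros(n_l)
    for a, b in zip(cyc, cyc[1:] + cyc[:1]):
        l = edge_index[frozenset((a, b))]
        row[l] = x[l] if lines[l] == (a, b) else -x[l]
    C.append(row)
C = np.array(C)

# Variables z = [g (n_b), f (n_l)]; minimize generation cost subject to
# KCL (K f = g - d) and KVL (C f = 0), with dispatch and thermal bounds.
A_eq = np.block([[-np.eye(n_b), K], [np.zeros((len(C), n_b)), C]])
b_eq = np.concatenate([-d, np.zeros(len(C))])
bounds = [(0, gm) for gm in g_max] + [(-fm, fm) for fm in f_max]
res = linprog(np.concatenate([cost, np.zeros(n_l)]),
              A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:n_b], res.x[n_b:])          # dispatch and line flows
```

Loosely speaking, the cycle formulation trades the bus-angle variables of the
common formulation for a typically much smaller set of cycle constraints,
which is consistent with the speed-ups reported above.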
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present, from
a general perspective, optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted ℓ₁-penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
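As a concrete instance of the proximal methods discussed: for the
ℓ₁-penalized least-squares (lasso) problem, the proximal operator of the ℓ₁
norm is coordinate-wise soft-thresholding, which yields the classic ISTA
iteration. A minimal sketch (our illustration; the data and names are made
up):

```python
# Minimal proximal-gradient (ISTA) sketch for the lasso; illustrative only.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (coordinate-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    """Minimize 0.5 * ||X w - y||^2 + lam * ||w||_1 by proximal gradient."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of grad
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)             # gradient of the smooth part
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
y = X @ w_true + 0.05 * rng.standard_normal(100)
print(ista(X, y, lam=1.0).round(2))          # recovers a sparse estimate
```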
Optimistic Robust Optimization With Applications To Machine Learning
Robust Optimization has traditionally taken a pessimistic, or worst-case,
view of uncertainty, motivated by a desire to find sets of optimal policies
that maintain feasibility under a variety of operating conditions. In this
paper, we explore an optimistic, or best-case, view of uncertainty and show
that it can be a fruitful approach. We show that these techniques can be used
to address a wide variety of problems. First, we apply our methods in the
context of robust linear programming, providing a method for reducing
conservatism in intuitive ways that encode economically realistic modeling
assumptions. Second, we look at problems in machine learning and find that this
approach is strongly connected to the existing literature. Specifically, we
provide a new interpretation for popular sparsity inducing non-convex
regularization schemes. Additionally, we show that successful approaches for
dealing with outliers and noise can be interpreted as optimistic robust
optimization problems. Although many of the problems resulting from our
approach are non-convex, we find that the difference-of-convex algorithm (DCA)
and DCA-like optimization approaches can be intuitive and efficient.
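A minimal sketch of the reduced conservatism in robust linear programming
(our toy example, not the paper's formulation): under box uncertainty
a ∈ [a_lo, a_hi] in a constraint row a·x ≤ b with x ≥ 0, the pessimistic
counterpart must hold for the worst-case coefficients a_hi, while the
optimistic one needs only the best-case coefficients a_lo:

```python
# Illustrative toy: pessimistic vs optimistic robust LP under box uncertainty.
# With x >= 0, "a.x <= b for all a in [a_lo, a_hi]" reduces to a_hi.x <= b,
# while "a.x <= b for some a" reduces to a_lo.x <= b.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])                       # maximize x1 + 2*x2
a_lo, a_hi = np.array([0.8, 0.9]), np.array([1.2, 1.1])
b = 10.0

for label, a in [("pessimistic", a_hi), ("optimistic", a_lo)]:
    res = linprog(c, A_ub=a[None, :], b_ub=[b], bounds=[(0, None)] * 2)
    print(label, res.x, -res.fun)
# The optimistic counterpart admits a larger feasible set and hence a better
# objective, trading guaranteed feasibility for reduced conservatism.
```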
Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as a
solution of an optimization problem. In this light, the sheer non-linear,
non-convex, and large-scale nature of many of these inversions gives rise to
some very challenging optimization problems. The inverse problem community has
long been developing various techniques for solving such optimization tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which might have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and the machine learning
communities, we hope that this survey can serve as a bridge in bringing
together these two communities and encourage cross-fertilization of ideas.
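As a toy instance of the common ground between the two communities (our
illustration, not taken from the survey): a Tikhonov-regularized linear
inverse problem min_m ½‖Gm − d‖² + ½α‖m‖², solved with plain gradient
descent, the first-order workhorse shared by inverse problems and machine
learning:

```python
# Toy regularized linear inversion by gradient descent; illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 50
G = np.tril(np.ones((n, n))) / n          # smoothing (ill-conditioned) forward operator
m_true = np.sin(np.linspace(0, 3 * np.pi, n))
d = G @ m_true + 0.01 * rng.standard_normal(n)

alpha = 1e-3                              # Tikhonov regularization weight
step = 1.0 / (np.linalg.norm(G, 2) ** 2 + alpha)  # 1 / Lipschitz constant
m = np.zeros(n)
for _ in range(5000):
    grad = G.T @ (G @ m - d) + alpha * m  # gradient of the Tikhonov objective
    m -= step * grad
print(np.linalg.norm(m - m_true) / np.linalg.norm(m_true))  # relative error
```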