Inexact Convex Relaxations for AC Optimal Power Flow: Towards AC Feasibility
Convex relaxations of AC optimal power flow (AC-OPF) problems have attracted
significant interest as in several instances they provably yield the global
optimum to the original non-convex problem. If, however, the relaxation is
inexact, the obtained solution is not AC-feasible. The quality of the obtained
solution is essential for several practical applications of AC-OPF, but
detailed analyses are lacking in the existing literature. This paper aims to
close this gap. We provide an in-depth investigation of the solution characteristics
when convex relaxations are inexact, we assess the most promising AC
feasibility recovery methods for large-scale systems, and we propose two new
metrics that lead to a better understanding of the quality of the identified
solutions. We perform a comprehensive assessment on 96 different test cases,
ranging from 14 to 3120 buses, and we show the following: (i) Despite an
optimality gap of less than 1%, several test cases still exhibit substantial
distances to both AC feasibility and local optimality and the newly proposed
metrics characterize these deviations. (ii) Penalization methods fail to
recover an AC-feasible solution in 15 out of 45 cases, and using the proposed
metrics, we show that most failed test instances exhibit substantial distances
to both AC-feasibility and local optimality. For failed test instances with
small distances, we show how our proposed metrics inform a fine-tuning of
penalty weights to obtain AC-feasible solutions. (iii) The computational
benefits of warm-starting non-convex solvers have significant variation, but a
computational speedup exists in over 75% of the cases.
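The notion of a distance to AC feasibility can be illustrated with a minimal sketch: substitute a candidate voltage profile into the AC power-flow equations and measure the resulting complex-power mismatch. The 2-bus system, admittance values, and tolerances below are illustrative inventions, not the paper's test cases or its proposed metrics.

```python
import numpy as np

def ac_feasibility_distance(V, Ybus, S_target):
    """Norm of the complex power mismatch S_i - V_i * conj(sum_j Y_ij V_j).

    A relaxation solution is AC-feasible (at these buses) only if the
    mismatch vanishes; its norm is one natural 'distance to AC feasibility'.
    """
    S_calc = V * np.conj(Ybus @ V)      # injections implied by the voltages
    return np.linalg.norm(S_calc - S_target)

# Illustrative 2-bus system (made-up numbers, not from the paper).
Ybus = np.array([[ 10 - 20j, -10 + 20j],
                 [-10 + 20j,  10 - 20j]])
V_exact = np.array([1.0 + 0.0j, 0.98 - 0.02j])
S_exact = V_exact * np.conj(Ybus @ V_exact)   # consistent injections

print(ac_feasibility_distance(V_exact, Ybus, S_exact))         # ~0: AC-feasible
print(ac_feasibility_distance(1.01 * V_exact, Ybus, S_exact))  # > 0: infeasible
```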
MM Algorithms for Geometric and Signomial Programming
This paper derives new algorithms for signomial programming, a generalization
of geometric programming. The algorithms are based on a generic principle for
optimization called the MM algorithm. In this setting, one can apply the
geometric-arithmetic mean inequality and a supporting hyperplane inequality to
create a surrogate function with parameters separated. Thus, unconstrained
signomial programming reduces to a sequence of one-dimensional minimization
problems. Simple examples demonstrate that the MM algorithm derived can
converge to a boundary point or to one point of a continuum of minimum points.
Conditions under which the minimum point is unique or occurs in the interior of
parameter space are proved for geometric programming. Convergence to an
interior point occurs at a linear rate. Finally, the MM framework easily
accommodates equality and inequality constraints of signomial type. For the
most important special case, constrained quadratic programming, the MM
algorithm involves very simple updates.

Comment: 16 pages, 1 figure
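The reduction to one-dimensional minimizations can be made concrete on a toy posynomial of our own (not an example from the paper). For f(x, y) = xy + 1/x + 1/y with x, y > 0, the weighted AM-GM inequality xy <= (x_k y_k / 2) [(x/x_k)^2 + (y/y_k)^2] majorizes the coupled term with equality at the current iterate, so the surrogate separates and each coordinate update has a closed form.

```python
def mm_step(x, y):
    # Surrogate from weighted AM-GM:
    #   x*y <= (x_k*y_k/2) * ((x/x_k)**2 + (y/y_k)**2), equality at (x_k, y_k),
    # so g(x, y | x_k, y_k) = that bound + 1/x + 1/y separates in x and y.
    # Minimizing each 1-D surrogate gives the closed-form updates below.
    return (x / y) ** (1.0 / 3.0), (y / x) ** (1.0 / 3.0)

def f(x, y):
    return x * y + 1.0 / x + 1.0 / y   # toy posynomial; global min is f(1, 1) = 3

x, y = 2.0, 0.5
for _ in range(60):
    f_before = f(x, y)
    x, y = mm_step(x, y)
    assert f(x, y) <= f_before + 1e-12   # MM descent property holds each step

print(x, y, f(x, y))   # approaches (1, 1, 3), at a linear rate
```

The linear convergence claimed for interior minima is visible here: the ratio x/y contracts by a constant factor of 2/3 in log scale at every iteration.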
A second derivative SQP method: local convergence
In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm.

Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter.

Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
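The exactness property underlying the ℓ1-merit function can be seen on a toy problem of our own choosing (not from the paper): once the penalty parameter exceeds a finite threshold, the unconstrained minimizer of the merit function satisfies the constraint exactly, which is why updating the penalty parameter carefully matters in practice.

```python
import numpy as np

def merit(x, rho):
    """Exact l1-merit function for: minimize x**2 subject to x - 1 = 0."""
    return x**2 + rho * np.abs(x - 1.0)

xs = np.linspace(-2.0, 2.0, 400001)   # fine grid; locates the minimizer to ~1e-5

# Below the exactness threshold (rho < 2 for this problem) the penalized
# minimizer violates the constraint; above it, the constraint holds exactly.
for rho in (0.5, 1.0):
    x_star = xs[np.argmin(merit(xs, rho))]
    print(rho, x_star)        # minimizer is rho/2: constraint violated

for rho in (2.5, 10.0):
    x_star = xs[np.argmin(merit(xs, rho))]
    print(rho, x_star)        # minimizer is 1.0: constraint satisfied
```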
Efficient First Order Methods for Linear Composite Regularizers
A wide class of regularization problems in machine learning and statistics
employ a regularization term which is obtained by composing a simple convex
function \omega with a linear transformation. This setting includes Group Lasso
methods, the Fused Lasso and other total variation methods, multi-task learning
methods and many more. In this paper, we present a general approach for
computing the proximity operator of this class of regularizers, under the
assumption that the proximity operator of the function \omega is known in
advance. Our approach builds on a recent line of research on optimal first
order optimization methods and uses fixed point iterations for numerically
computing the proximity operator. It is more general than current approaches
and, as we show with numerical simulations, computationally more efficient than
available first order methods which do not achieve the optimal rate. In
particular, our method outperforms state-of-the-art O(1/T) methods for
overlapping Group Lasso and matches optimal O(1/T^2) methods for the Fused
Lasso and tree-structured Group Lasso.

Comment: 19 pages, 8 figures
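As a sketch of the general idea (not the paper's specific fixed-point scheme), consider the proximity operator of the 1-D total-variation/fused-lasso regularizer, i.e. the l1 norm composed with the difference operator D: the composite prox can be computed by projected gradient on a dual problem whose projection step is a simple clip. The signal and regularization weight below are made up.

```python
import numpy as np

def prox_tv1d(y, lam, iters=2000):
    """prox of lam * ||D x||_1 at y, via projected gradient on the dual.

    D is the forward-difference operator; the dual problem is
        min_u 0.5 * ||y - D^T u||^2   s.t.   ||u||_inf <= lam,
    and the primal solution is recovered as x = y - D^T u.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)         # (n-1) x n difference matrix
    u = np.zeros(n - 1)
    step = 0.25                             # 1/||D||^2 <= 1/4 for this D
    for _ in range(iters):
        grad = D @ (D.T @ u - y)            # gradient of the dual objective
        u = np.clip(u - step * grad, -lam, lam)   # trivial l-inf projection
    return y - D.T @ u

y = np.array([0.0, 0.1, -0.1, 5.0, 4.9, 5.1, 0.0, 0.1])
x = prox_tv1d(y, lam=0.5)
print(np.round(x, 3))   # roughly piecewise constant: within-block jumps shrink
```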
AC OPF in Radial Distribution Networks - Parts I,II
The optimal power-flow problem (OPF) has played a key role in the planning
and operation of power systems. Due to the non-linear nature of the AC
power-flow equations, the OPF problem is known to be non-convex, therefore hard
to solve. Most proposed methods for solving the OPF rely on approximations that
render the problem convex, but that may yield inexact solutions. Recently,
Farivar and Low proposed a method that is claimed to be exact for radial
distribution systems, despite no apparent approximations. In our work, we show
that it is, in fact, not exact. On one hand, there is a misinterpretation of
the physical network model related to the ampacity constraint of the lines'
current flows. On the other hand, the proof of the exactness of the proposed
relaxation requires unrealistic assumptions related to the unboundedness of
specific control variables. We also show that the extension of this approach to
account for exact line models might provide physically infeasible solutions.
Recently, several contributions have proposed OPF algorithms that rely on the
use of the alternating-direction method of multipliers (ADMM). However, as we
show in this work, there are cases for which the ADMM-based solution of the
non-relaxed OPF problem fails to converge. To overcome the aforementioned
limitations, we propose an algorithm for the solution of a non-approximated,
non-convex OPF problem in radial distribution systems that is based on the
method of multipliers, and on a primal decomposition of the OPF. This work is
divided into two parts. In Part I, we specifically discuss the limitations of the branch flow model (BFM)
and ADMM to solve the OPF problem. In Part II, we provide a centralized version
and a distributed asynchronous version of the proposed OPF algorithm, and we
evaluate its performance using both small-scale electrical networks and
a modified IEEE 13-node test feeder.
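The method of multipliers that the proposed algorithm builds on can be sketched, in its generic augmented-Lagrangian form, on a toy equality-constrained problem (ours, not the paper's OPF): minimize the augmented Lagrangian in the primal variables, then update the multiplier with the scaled constraint residual.

```python
def method_of_multipliers(rho=10.0, iters=50):
    """Augmented-Lagrangian iterations for: min x^2 + y^2  s.t.  x + y = 1.

    L_rho(x, y, lam) = x^2 + y^2 + lam*(x + y - 1) + (rho/2)*(x + y - 1)^2.
    By symmetry and convexity the inner minimizer satisfies x = y, which
    gives the closed-form primal update below.
    """
    lam = 0.0
    for _ in range(iters):
        x = (rho - lam) / (2.0 + 2.0 * rho)    # argmin of L_rho with y = x
        c = 2.0 * x - 1.0                      # constraint residual
        lam += rho * c                         # multiplier (dual) update
    return x, x, lam

x, y, lam = method_of_multipliers()
print(x, y, lam)   # -> 0.5 0.5 -1.0, the KKT solution
```

Unlike a pure penalty method, the residual is driven to zero with a finite, fixed rho because the multiplier absorbs the constraint force.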
Shape Parameter Estimation
Performance of machine learning approaches depends strongly on the choice of
misfit penalty, and correct choice of penalty parameters, such as the threshold
of the Huber function. These parameters are typically chosen using expert
knowledge, cross-validation, or black-box optimization, which are time
consuming for large-scale applications. We present a principled, data-driven
approach to simultaneously learn the model parameters and the misfit penalty
parameters. We discuss theoretical properties of these joint inference
problems, and develop algorithms for their solution. We show synthetic examples
of automatic parameter tuning for piecewise linear-quadratic (PLQ) penalties,
and use the approach to develop a self-tuning robust PCA formulation for
background separation.

Comment: 20 pages, 10 figures
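The Huber threshold mentioned above can be illustrated with a small robust-estimation sketch (the data and threshold value are made up): the threshold kappa controls where the penalty switches from quadratic to linear, and hence how strongly outliers influence the fit, which is why tuning it well matters.

```python
import numpy as np

def huber(r, kappa):
    """Huber misfit: quadratic for |r| <= kappa, linear beyond the threshold."""
    a = np.abs(r)
    return np.where(a <= kappa, 0.5 * r**2, kappa * (a - 0.5 * kappa))

def huber_grad(r, kappa):
    return np.clip(r, -kappa, kappa)    # derivative of the Huber function

def robust_mean(y, kappa, step=0.1, iters=500):
    """Gradient descent on mean_i huber(mu - y_i, kappa)."""
    mu = 0.0
    for _ in range(iters):
        mu -= step * np.mean(huber_grad(mu - y, kappa))
    return mu

y = np.array([0.9, 1.0, 1.1, 1.0, 0.95, 10.0])   # one gross outlier
print(np.mean(y))             # ~2.49: the least-squares fit is pulled away
print(robust_mean(y, 0.5))    # ~1.09: Huber with kappa = 0.5 resists it
```

A poor threshold degrades this behavior (very large kappa recovers the mean, very small kappa discards inlier information), which is the tuning problem the paper's joint inference automates.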