Methods for many-objective optimization: an analysis
Decomposition-based methods are often cited as the
solution to problems arising in many-objective optimization. Decomposition-based methods employ a scalarizing function to reduce a many-objective problem to a set of single-objective problems, whose solutions yield a good approximation of the set of optimal solutions. This set is commonly referred to as
the Pareto front. In this work we explore the implications of using decomposition-based methods over Pareto-based methods from a probabilistic point of view. Namely, we investigate whether there is an advantage to using a decomposition-based method, for example one based on the Chebyshev scalarizing function, over Pareto-based methods.
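The scalarization step the abstract describes can be sketched in miniature. Below is a minimal, hypothetical example of the weighted Chebyshev scalarizing function; the candidate objective vectors, weights, and ideal point are invented for illustration, and each weight vector defines one single-objective subproblem:

```python
def chebyshev(f, weights, z_star):
    """Weighted Chebyshev scalarization of objective vector f
    with respect to the ideal point z_star."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

# Hypothetical bi-objective values of three candidate solutions.
candidates = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
z_star = (0.0, 0.0)   # ideal point (best value of each objective)
weights = (0.5, 0.5)  # one weight vector = one single-objective subproblem
best = min(candidates, key=lambda f: chebyshev(f, weights, z_star))
print(best)  # the candidate minimizing this scalarized subproblem
```

Sweeping over many weight vectors produces a family of such subproblems, and collecting their minimizers yields the approximation of the Pareto front that decomposition-based methods rely on.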
Distributed Reconstruction of Nonlinear Networks: An ADMM Approach
In this paper, we present a distributed algorithm for the reconstruction of
large-scale nonlinear networks. In particular, we focus on the identification
from time-series data of the nonlinear functional forms and associated
parameters of large-scale nonlinear networks. Recently, a nonlinear network
reconstruction problem was formulated as a nonconvex optimisation problem based
on the combination of a marginal likelihood maximisation procedure with
sparsity inducing priors. Using a convex-concave procedure (CCCP), an iterative
reweighted lasso algorithm was derived to solve the initial nonconvex
optimisation problem. By exploiting the structure of the objective function of
this reweighted lasso algorithm, a distributed algorithm can be designed. To
this end, we apply the alternating direction method of multipliers (ADMM) to
decompose the original problem into several subproblems. To illustrate the
effectiveness of the proposed methods, we use our approach to identify a
network of interconnected Kuramoto oscillators with different network sizes
(500 to 100,000 nodes). Comment: To appear in the Preprints of the 19th IFAC World Congress 201
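The ADMM decomposition the abstract refers to can be pictured on a toy problem. The sketch below is not the paper's algorithm: it applies ADMM to a scalar lasso problem, minimizing 0.5*(x - b)**2 + lam*|x| by splitting x = z, with the quadratic term handled in the x-update and the sparsity-inducing l1 term handled by its proximal operator in the z-update:

```python
def soft_threshold(v, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def admm_scalar_lasso(b, lam, rho=1.0, iters=60):
    """ADMM for min_x 0.5*(x - b)**2 + lam*|x|, split as x = z."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # quadratic-term update
        z = soft_threshold(x + u, lam / rho)   # l1-term update (prox)
        u += x - z                             # scaled dual update
    return z

# Known closed form: the minimizer is soft_threshold(b, lam) = 2.0 here.
print(admm_scalar_lasso(3.0, 1.0))
```

The same splitting pattern is what makes a distributed implementation possible: each subproblem created by the decomposition can be solved independently, with the dual update coordinating the pieces.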
An Alternating Trust Region Algorithm for Distributed Linearly Constrained Nonlinear Programs, Application to the AC Optimal Power Flow
A novel trust region method for solving linearly constrained nonlinear
programs is presented. The proposed technique is amenable to a distributed
implementation, as its salient ingredient is an alternating projected gradient
sweep in place of the Cauchy point computation. It is proven that the algorithm
yields a sequence that globally converges to a critical point. As a result of
some changes to the standard trust region method, namely a proximal
regularisation of the trust region subproblem, it is shown that the local
convergence rate is linear with an arbitrarily small ratio. Thus, convergence
is locally almost superlinear, under standard regularity assumptions. The
proposed method is successfully applied to compute local solutions to
alternating current optimal power flow problems in transmission and
distribution networks. Moreover, the new mechanism for computing a Cauchy point
compares favourably against the standard projected search with regard to its
activity detection properties.
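The projected gradient sweep at the heart of the method can be illustrated on a toy box-constrained quadratic. The example problem below is invented, not taken from the paper; it shows how a projected gradient iteration identifies the active bounds (the "activity detection" the abstract mentions):

```python
def clip(v, lo, hi):
    return min(max(v, lo), hi)

def projected_gradient(x0, grad, bounds, step=0.25, iters=50):
    """Projected gradient descent over per-coordinate box constraints."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [clip(xi - step * gi, lo, hi)
             for xi, gi, (lo, hi) in zip(x, g, bounds)]
    return x

# Minimize (x0 - 2)^2 + (x1 + 1)^2 over the unit box [0, 1]^2.
grad = lambda x: (2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0))
x = projected_gradient((0.0, 0.0), grad, [(0.0, 1.0), (0.0, 1.0)])
print(x)  # both bounds are active at the constrained minimizer
```

Here a single sweep already lands on the bounds that are active at the solution; identifying that active set quickly is precisely why the choice between a projected search and an alternating projected gradient sweep matters.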
A mixed integer quadratic programming formulation for the economic dispatch of generators with prohibited operating zones
In this paper, an optimisation-based approach is proposed using a mixed integer quadratic programming model for the economic dispatch of electrical power generators with prohibited zones of operation. The main advantage of the proposed approach is its capability to solve case studies from the literature to global optimality, quickly and without any tailoring of the solution procedure.
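The structure of such a formulation can be sketched in miniature: each generator's integer decision selects one allowed operating zone, and the remaining continuous dispatch is a small quadratic program. The sketch below solves a hypothetical two-generator instance (costs, zones, and demand are invented for illustration) by enumerating the zone choices and minimizing the resulting one-dimensional quadratic in closed form; a real MIQP solver handles the same structure via branch and bound:

```python
def dispatch_two_gens(zones1, zones2, cost1, cost2, demand):
    """Enumerate zone pairs; within each pair, minimize the quadratic
    total cost over p1 (with p2 = demand - p1) in closed form."""
    a1, b1 = cost1  # cost_i(p) = a_i * p**2 + b_i * p
    a2, b2 = cost2
    best = None
    for lo1, hi1 in zones1:
        for lo2, hi2 in zones2:
            # Feasible interval for p1 once p2 = demand - p1 must lie in zone 2.
            lo = max(lo1, demand - hi2)
            hi = min(hi1, demand - lo2)
            if lo > hi:
                continue  # this zone pair cannot meet demand
            # Unconstrained minimizer of the 1-D quadratic in p1, then clip.
            p1 = (2.0 * a2 * demand + b2 - b1) / (2.0 * (a1 + a2))
            p1 = min(max(p1, lo), hi)
            p2 = demand - p1
            cost = a1 * p1**2 + b1 * p1 + a2 * p2**2 + b2 * p2
            if best is None or cost < best[0]:
                best = (cost, p1, p2)
    return best

best = dispatch_two_gens(
    zones1=[(10.0, 40.0), (60.0, 100.0)],  # 40-60 MW prohibited
    zones2=[(20.0, 50.0), (70.0, 90.0)],   # 50-70 MW prohibited
    cost1=(0.01, 2.0), cost2=(0.02, 1.5),
    demand=120.0)
print(best)  # (total cost, p1, p2) at the global optimum
```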
An Inequality Constrained SL/QP Method for Minimizing the Spectral Abscissa
We consider a problem in eigenvalue optimization: finding a parameter value
that locally minimizes the spectral abscissa, the largest real part of the
spectrum of a matrix system. This is an important problem for the
stabilization of control systems, since many systems require their spectra to
lie in the left half-plane in order to be stable. The optimization problem,
however, is difficult to
solve because the underlying objective function is nonconvex, nonsmooth, and
non-Lipschitz. In addition, local minima tend to correspond to points of
non-differentiability and locally non-Lipschitz behavior. We present a
sequential linear and quadratic programming algorithm that solves a series of
linear or quadratic subproblems formed by linearizing the surfaces
corresponding to the largest eigenvalues. We present numerical results
comparing the algorithms with the state of the art.
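The objective this abstract describes is easy to exhibit numerically. The sketch below is not the paper's SL/QP method: it minimizes the spectral abscissa of an invented 2x2 damped-oscillator matrix by grid search over the damping parameter, and shows that the minimizer sits exactly at a point of non-smoothness where two eigenvalues coalesce:

```python
import cmath

def spectral_abscissa(k):
    """Largest real part of the eigenvalues of A(k) = [[0, 1], [-1, -k]],
    i.e. of the roots of the characteristic polynomial l^2 + k*l + 1."""
    disc = cmath.sqrt(k * k - 4.0)
    roots = ((-k + disc) / 2.0, (-k - disc) / 2.0)
    return max(r.real for r in roots)

# Grid search over the damping parameter k in [0, 4].
ks = [i * 0.01 for i in range(401)]
best_k = min(ks, key=spectral_abscissa)
print(best_k, spectral_abscissa(best_k))
```

The minimum lands at k = 2, where the eigenvalues coalesce into a double eigenvalue at -1: the objective has slope -1/2 to the left of k = 2 but infinite one-sided slope to the right, so the minimizer is a point of non-differentiability and locally non-Lipschitz behavior, exactly as the abstract describes.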