Implementing and modifying Broyden class updates for large scale optimization
Funder: Justus-Liebig-Universität Gießen (3114)
Abstract: We consider Broyden class updates for large scale optimization problems in n dimensions, restricting attention to the case when the initial second derivative approximation is the identity matrix. Under this assumption we present an implementation of the Broyden class based on a coordinate transformation on each iteration. It requires only 2nk + O(k²) + O(n) multiplications on the kth iteration and stores nK + O(K²) + O(n) numbers, where K is the total number of iterations. We investigate a modification of this algorithm by a scaling approach and show a substantial improvement in performance over the BFGS method. We also study several adaptations of the new implementation to the limited memory situation, presenting algorithms that work with a fixed amount of storage independent of the number of iterations. We show that one such algorithm retains the property of quadratic termination. The practical performance of the new methods is compared with the performance of Nocedal's (Math Comput 35:773–782, 1980) method, which is considered the benchmark in limited memory algorithms. The tests show that the new algorithms can be significantly more efficient than Nocedal's method. Finally, we show how a scaling technique can significantly improve both Nocedal's method and the new generalized conjugate gradient algorithm.
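Since Nocedal's method serves as the benchmark throughout, a minimal sketch of its standard two-loop recursion may help fix ideas; this is not the paper's coordinate-transformation scheme, and the function name and signature are illustrative only.

    import numpy as np

    def lbfgs_direction(grad, s_list, y_list):
        # Compute -H_k @ grad by the standard two-loop recursion, with the
        # initial inverse Hessian approximation H_0 = I as assumed above.
        q = grad.copy()
        rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
        alphas = []
        for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
            a = rho * (s @ q)
            alphas.append(a)
            q = q - a * y
        r = q  # H_0 = I, so the "middle" multiplication is a no-op
        for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
            b = rho * (y @ r)
            r = r + (a - b) * s
        return -r

With k stored pairs of length-n vectors this recursion costs roughly 4nk multiplications per call, which is the figure the 2nk + O(k²) + O(n) coordinate-transformation implementation roughly halves.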
Regularization of Limited Memory Quasi-Newton Methods for Large-Scale Nonconvex Minimization
This paper deals with regularized Newton methods, a flexible class of
unconstrained optimization algorithms that is competitive with line search and
trust region methods and potentially combines attractive elements of both. The
particular focus is on combining regularization with limited memory
quasi-Newton methods by exploiting the special structure of limited memory
algorithms. Global convergence of regularization methods is shown under mild
assumptions and the details of regularized limited memory quasi-Newton updates
are discussed, including their compact representations.
Numerical results using all large-scale test problems from the CUTEst
collection indicate that our regularized version of L-BFGS is competitive with
state-of-the-art line search and trust-region L-BFGS algorithms and previous
attempts at combining L-BFGS with regularization, while potentially
outperforming some of them, especially when nonmonotonicity is involved.
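As a hedged illustration of the regularization idea (not the paper's algorithm, which works with compact limited-memory representations rather than dense matrices), one regularized quasi-Newton step can be sketched as follows; all parameter names are illustrative.

    import numpy as np

    def regularized_qn_step(f, grad, x, B, sigma=1.0, eta=1e-4):
        # Solve (B + sigma*I) d = -grad(x) and adjust sigma like a
        # trust-region radius until an Armijo-style decrease test passes.
        # B is a dense quasi-Newton Hessian approximation here; the paper
        # avoids forming it by exploiting compact limited-memory structure.
        g = grad(x)
        n = len(x)
        while True:
            d = np.linalg.solve(B + sigma * np.eye(n), -g)
            if f(x + d) <= f(x) + eta * (g @ d):       # sufficient decrease
                return x + d, max(sigma * 0.5, 1e-8)   # accept; relax sigma
            sigma *= 4.0                               # reject; regularize harder

For large sigma the step approaches a short steepest-descent step, so the loop terminates for smooth f; the interesting part, which the paper addresses, is performing the linear solve cheaply when B is given only through limited-memory updates.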
Optimizing network robustness via Krylov subspaces
We consider the problem of attaining either the maximal increase or reduction
of the robustness of a complex network by means of a bounded modification of a
subset of the edge weights. We propose two novel strategies combining Krylov
subspace approximations with a greedy scheme and an interior point method
employing either the Hessian or its approximation computed via the
limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS). The paper
discusses the computational and modeling aspects of our methodology and
illustrates the various optimization problems on networks that can be addressed
within the proposed framework. Finally, in the numerical experiments we compare the performance of our algorithms with state-of-the-art techniques on synthetic and real-world networks.
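A common robustness measure behind such formulations is the natural connectivity, which depends on tr(exp(A)) for the network matrix A. The paper approximates such matrix-function quantities with Krylov subspace methods; as a hedged stand-in, the sketch below estimates tr(exp(A)) with a Hutchinson probe estimator and SciPy's expm_multiply, which likewise touches A only through matrix-vector products. Function and parameter names are illustrative.

    import numpy as np
    from scipy.sparse.linalg import expm_multiply

    def estimate_tr_expm(A, n_probes=20, rng=None):
        # Hutchinson estimator: E[z^T exp(A) z] = tr(exp(A)) for Rademacher z.
        rng = np.random.default_rng(rng)
        n = A.shape[0]
        total = 0.0
        for _ in range(n_probes):
            z = rng.choice([-1.0, 1.0], size=n)   # random +/-1 probe vector
            total += z @ expm_multiply(A, z)      # z^T exp(A) z, matvecs only
        return total / n_probes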
BFGS-like updates of constraint preconditioners for sequences of KKT linear systems in quadratic programming
We focus on efficient preconditioning techniques for sequences of KKT linear systems
arising from the interior point solution of large convex quadratic programming problems.
Constraint Preconditioners (CPs), though very effective in accelerating Krylov methods
in the solution of KKT systems, have a very high computational cost in some instances,
because their factorization
may be the most time-consuming task at each interior point iteration.
We overcome this problem by computing the CP from scratch only at selected interior point
iterations and by updating the last computed CP at the remaining iterations, via suitable
low-rank modifications based on a BFGS-like formula.
This work extends the limited-memory preconditioners for symmetric positive definite
matrices proposed by Gratton, Sartenaer and Tshimanga in [SIAM J. Optim., 2011, 21(3):912–935],
by exploiting specific features of KKT systems and CPs.
We prove that the updated preconditioners
still belong to the class of exact CPs, thus allowing the use of the conjugate gradient
method. Furthermore, they have the property of increasing the number of unit
eigenvalues of the preconditioned matrix as compared to generally used CPs.
Numerical experiments are reported, which show the effectiveness of our updating
technique when the cost for the factorization of the CP is high.
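To make the low-rank updating idea concrete, here is a hedged sketch of how a BFGS-like update of a base preconditioner can be applied to a vector, in the spirit of the limited-memory preconditioners cited above. It ignores the KKT-specific structure that this work actually exploits, and all names are illustrative: S holds update directions s_i, and Y holds the products of the new matrix with those directions.

    import numpy as np

    def apply_updated_preconditioner(apply_P0, S, Y, r):
        # Inverse-BFGS two-loop pass with H_0 replaced by the last computed
        # preconditioner P0 (applied through its existing factorization).
        # The result acts like the inverse of the new matrix on span(S)
        # while reusing P0 everywhere else.
        q = np.array(r, dtype=float)
        alphas, rhos = [], []
        for s, y in zip(reversed(S), reversed(Y)):
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            q -= a * y
            alphas.append(a)
            rhos.append(rho)
        z = apply_P0(q)
        for (s, y), a, rho in zip(zip(S, Y), reversed(alphas), reversed(rhos)):
            b = rho * (y @ z)
            z += (a - b) * s
        return z

Here apply_P0 would wrap the solves with the factorization of the last computed CP, so each application of the updated preconditioner adds only a handful of inner products and vector updates.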
Parallel Deterministic and Stochastic Global Minimization of Functions with Very Many Minima
The optimization of three problems with high dimensionality and many local minima is investigated
under five different optimization algorithms: DIRECT, simulated annealing, Spall's SPSA algorithm, the KNITRO
package, and QNSTOP, a new algorithm developed at Indiana University.
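DIRECT, SPSA, KNITRO, and QNSTOP are not available in SciPy, so the comparison cannot be reproduced directly here; as a rough, hedged illustration of this kind of experiment, the snippet below pits two of SciPy's stochastic global optimizers against a standard benchmark with very many local minima (the paper's actual test problems differ).

    import numpy as np
    from scipy.optimize import dual_annealing, differential_evolution

    def rastrigin(x):
        # Classic multimodal benchmark: a dense grid of local minima,
        # with global minimum f = 0 at the origin.
        x = np.asarray(x)
        return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

    bounds = [(-5.12, 5.12)] * 10   # 10-dimensional search box

    res_sa = dual_annealing(rastrigin, bounds, seed=0)         # annealing-type
    res_de = differential_evolution(rastrigin, bounds, seed=0)
    print("dual_annealing        :", res_sa.fun)
    print("differential_evolution:", res_de.fun)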
An Efficient Interior-Point Decomposition Algorithm for Parallel Solution of Large-Scale Nonlinear Problems with Significant Variable Coupling
In this dissertation we develop multiple algorithms for efficient parallel solution of structured nonlinear programming problems by decomposition of the linear augmented system solved at each iteration of a nonlinear interior-point approach. In particular, we address large-scale, block-structured problems with a significant number of complicating, or coupling variables. This structure arises in many important problem classes including multi-scenario optimization, parameter estimation, two-stage stochastic programming, optimal control and power network problems. The structure of these problems induces a block-angular structure in the augmented system, and parallel solution is possible using a Schur-complement decomposition. Three major variants are implemented: a serial, full-space interior-point method, serial and parallel versions of an explicit Schur-complement decomposition, and serial and parallel versions of an implicit PCG-based Schur-complement decomposition. All of these algorithms have been implemented in C++ in an extensible software framework for nonlinear optimization.
The explicit Schur-complement decomposition is typically effective for problems with a few hundred coupling variables. We demonstrate the performance of our implementation on an important problem in optimal power grid operation, the contingency-constrained AC optimal power flow problem. In this dissertation, we present a rectangular IV formulation for the contingency-constrained ACOPF problem and demonstrate that the explicit Schur-complement decomposition can dramatically reduce solution times for a problem with a large number of contingency scenarios. Moreover, we compare the explicit Schur-complement decomposition implementation with the Progressive Hedging approach provided by Pyomo, showing that the internal decomposition approach is computationally favorable to the external approach. However, the explicit Schur-complement decomposition approach is not appropriate for problems with a large number of coupling variables because of the high computational cost associated with forming and solving the dense Schur-complement.
We show that this bottleneck can be overcome by solving the Schur-complement equations implicitly using a quasi-Newton preconditioned conjugate gradient method.
This new algorithm avoids explicit formation and factorization of the Schur-complement.
The computational efficiency of the serial and parallel versions of this algorithm is compared with the serial full-space approach, and the serial and parallel explicit
Schur-complement approach, on a set of quadratic parameter estimation problems and nonlinear optimization problems. These results show that the PCG implicit Schur-complement approach dramatically reduces the computational expense for problems with many coupling variables.
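A hedged sketch of the implicit approach: for a block-angular system [[A, B], [B^T, C]] with block-diagonal A, the Schur complement S = C - B^T A^{-1} B is never formed; the conjugate gradient solver only needs its action on a vector, which decomposes into independent per-block solves (the part that parallelizes). The quasi-Newton preconditioner is omitted, all names are illustrative, and cg assumes S is symmetric positive definite, as in the parameter estimation setting.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, cg, splu

    def solve_block_angular(A_blocks, B_blocks, C, r_blocks, r_c):
        # Factor each diagonal block once; in a parallel setting each
        # block lives on (and is factorized by) a different process.
        lus = [splu(sp.csc_matrix(Ai)) for Ai in A_blocks]

        def schur_matvec(v):
            # Action of S = C - sum_i B_i^T A_i^{-1} B_i without forming S.
            out = C @ v
            for lu, Bi in zip(lus, B_blocks):
                out = out - Bi.T @ lu.solve(Bi @ v)
            return out

        # Reduced right-hand side r_c - B^T A^{-1} r for the coupling variables.
        rhs = r_c - sum(Bi.T @ lu.solve(ri)
                        for lu, Bi, ri in zip(lus, B_blocks, r_blocks))
        m = C.shape[0]
        x_c, info = cg(LinearOperator((m, m), matvec=schur_matvec, dtype=float), rhs)

        # Back-substitute for the block variables, again block by block.
        xs = [lu.solve(ri - Bi @ x_c) for lu, Bi, ri in zip(lus, B_blocks, r_blocks)]
        return xs, x_c, info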