Convergence analysis of an Inexact Infeasible Interior Point method for Semidefinite Programming
In this paper we present an extension to SDP of the well-known infeasible Interior Point method for linear programming of Kojima, Megiddo, and Mizuno (A primal-dual infeasible-interior-point algorithm for linear programming, Math. Progr., 1993). The extension developed here allows the use of inexact search directions; i.e., the linear systems defining the search directions can be solved with an accuracy that increases as the solution is approached. A convergence analysis is carried out and the global convergence of the method is proved.
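The key mechanism here, search directions computed only to an accuracy tied to progress toward the solution, is easy to illustrate. Below is a minimal Python sketch of that idea on the simpler linear programming case of Kojima, Megiddo, and Mizuno rather than on SDP: the normal-equations system for the direction is solved by conjugate gradient with a tolerance proportional to the duality measure mu, so the solves get tighter as the iterates converge. All function names, constants, and the demo data are illustrative assumptions, not the paper's algorithm:

    import numpy as np

    def cg(M, b, tol, maxit=200):
        """Plain conjugate gradient, stopped at ||residual|| <= tol * ||b||;
        a looser tol means a cheaper, less accurate direction."""
        x, r = np.zeros_like(b), b.copy()
        p, rs = r.copy(), r @ r
        bnorm = np.linalg.norm(b)
        for _ in range(maxit):
            if np.sqrt(rs) <= tol * bnorm:
                break
            Mp = M @ p
            a = rs / (p @ Mp)
            x, r = x + a * p, r - a * Mp
            rs_new = r @ r
            p, rs = r + (rs_new / rs) * p, rs_new
        return x

    def inexact_ipm_lp(A, b, c, iters=30, sigma=0.3):
        """Primal-dual infeasible IPM for min c@x s.t. Ax = b, x >= 0.
        The normal-equations solve for the search direction is inexact:
        its CG tolerance is tied to the duality measure mu, so directions
        are computed more and more accurately as a solution is approached."""
        m, n = A.shape
        x, lam, s = np.ones(n), np.zeros(m), np.ones(n)

        def frac_to_boundary(v, dv):
            neg = dv < 0
            return 1.0 if not neg.any() else min(1.0, 0.9 * np.min(-v[neg] / dv[neg]))

        for _ in range(iters):
            mu = (x @ s) / n
            rp = b - A @ x               # primal infeasibility
            rd = c - A.T @ lam - s       # dual infeasibility
            rc = sigma * mu - x * s      # perturbed complementarity
            d = x / s
            M = (A * d) @ A.T            # normal-equations matrix A D A^T
            rhs = rp + A @ (d * rd - rc / s)
            dlam = cg(M, rhs, tol=min(0.5, mu))   # inexact search direction
            ds = rd - A.T @ dlam
            dx = (rc - x * ds) / s
            alpha = min(frac_to_boundary(x, dx), frac_to_boundary(s, ds))
            x, lam, s = x + alpha * dx, lam + alpha * dlam, s + alpha * ds
        return x, lam, s

    # Tiny made-up demo: min x1 + x2  s.t.  x1 + 2*x2 = 3,  x >= 0
    A = np.array([[1.0, 2.0]])
    x, lam, s = inexact_ipm_lp(A, np.array([3.0]), np.array([1.0, 1.0]))

Setting the CG tolerance to zero would recover exact directions; the point of the inexact analysis is that the looser, mu-dependent rule does not destroy global convergence.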
Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement have negligible cost) under the assumption that the derivative of highest degree is $\beta$-H\"{o}lder continuous. It features a very flexible adaptive mechanism for determining the inexactness which is allowed, at each iteration, when computing objective function values and derivatives. The complexity analysis covers arbitrary optimality order and arbitrary degree of available approximate derivatives. It extends results of Cartis, Gould and Toint (2018) on the evaluation complexity to the inexact case: if a $q$th-order minimizer is sought using approximations to the first $p$ derivatives, it is proved that a suitable approximate minimizer within $\epsilon$ is computed by the proposed algorithm in at most $O(\epsilon^{-(p+\beta)/(p-q+\beta)})$ iterations and at most $O(|\log\epsilon|\,\epsilon^{-(p+\beta)/(p-q+\beta)})$ approximate evaluations. An algorithmic variant, although more rigid in practice, can be proved to find such an approximate minimizer in $O(|\log\epsilon| + \epsilon^{-(p+\beta)/(p-q+\beta)})$ evaluations. While the proposed framework remains so far conceptual for high degrees and orders, it is shown to yield simple and computationally realistic inexact methods when specialized to the unconstrained and bound-constrained first- and second-order cases. The deterministic complexity results are finally extended to the stochastic context, yielding adaptive sample-size rules for subsampling methods typical of machine learning.
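As a concrete (and much simplified) illustration of the adaptive inexactness mechanism, the following Python sketch implements the first-order specialization (p = 1): the accuracy requested from a noisy gradient oracle is tightened until it is a small fraction of the approximate gradient norm, so precision is paid for only near criticality. The oracle, the 0.25 fraction, and the update factors are illustrative assumptions rather than the constants analyzed in the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    def inexact_grad(grad, x, acc):
        """Stand-in oracle: the true gradient corrupted by noise of norm
        at most `acc` (in practice: subsampling, truncated solves, ...)."""
        e = rng.standard_normal(x.shape)
        return grad(x) + 0.5 * acc * e / np.linalg.norm(e)

    def ar1_inexact(f, grad, x, sigma=1.0, eta=0.1, tol=1e-5, maxit=500):
        """First-order (p = 1) adaptive regularization with dynamic accuracy:
        the gradient accuracy requested from the oracle is tightened until it
        is a small fraction of the approximate gradient norm, so evaluations
        become precise only as a critical point is approached."""
        acc = 1.0
        for _ in range(maxit):
            g = inexact_grad(grad, x, acc)
            while acc > max(0.25 * np.linalg.norm(g), 0.1 * tol):
                acc *= 0.5                       # tighten and re-evaluate
                g = inexact_grad(grad, x, acc)
            if np.linalg.norm(g) <= tol:
                break
            s = -g / sigma                       # minimizer of g.s + (sigma/2)*||s||^2
            pred = np.linalg.norm(g) ** 2 / (2 * sigma)
            if f(x) - f(x + s) >= eta * pred:    # sufficient decrease: accept
                x, sigma = x + s, max(1e-8, 0.5 * sigma)
            else:                                # reject: increase regularization
                sigma *= 2.0
        return x

    # Made-up smooth test problem: a strongly convex quadratic
    x_star = ar1_inexact(lambda v: 0.5 * v @ v, lambda v: v, np.array([3.0, -4.0]))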
Adaptive Regularization for Nonconvex Optimization Using Inexact Function Values and Randomly Perturbed Derivatives
A regularization algorithm allowing random noise in derivatives and inexact function values is proposed for computing approximate local critical points of any order for smooth unconstrained optimization problems. For an objective function with Lipschitz continuous $p$-th derivative and given an arbitrary optimality order $q \le p$, it is shown that this algorithm will, in expectation, compute such a point in at most $O\big((\min_{j\in\{1,\dots,q\}}\epsilon_j)^{-(p+1)/(p-q+1)}\big)$ inexact evaluations of $f$ and its derivatives whenever $q \in \{1,2\}$, where $\epsilon_j$ is the tolerance for $j$th-order accuracy. This bound becomes at most $O\big((\min_{j\in\{1,\dots,q\}}\epsilon_j)^{-q(p+1)/p}\big)$ inexact evaluations if $q > 2$ and all derivatives are Lipschitz continuous. Moreover these bounds are sharp in the order of the accuracy tolerances. An extension to convexly constrained problems is also outlined.
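The step from the previous paper to this one is that derivative accuracy now holds only with some probability, so the guarantees become expected-complexity bounds. A hypothetical oracle of that type, which could be swapped into the ar1_inexact sketch above in place of inexact_grad, might look as follows; the failure probability and error inflation are made-up values:

    import numpy as np

    rng = np.random.default_rng(2)

    def randomly_perturbed_grad(grad, x, acc, p_fail=0.1):
        """Oracle in the spirit of the paper's setting: the returned gradient
        is within `acc` of the true one only with probability 1 - p_fail;
        with probability p_fail the error is (boundedly) larger. An outer
        regularization loop using it retains only expected-complexity
        guarantees, since the accuracy request can silently fail."""
        g = grad(x)
        e = rng.standard_normal(g.shape)
        e /= np.linalg.norm(e)
        radius = acc if rng.random() > p_fail else 10.0 * acc
        return g + 0.5 * radius * e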
Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections
This work focuses on the iterative solution of sequences of KKT linear systems arising in interior point methods applied to large convex quadratic programming problems. This task is the computational core of the interior point procedure, and an efficient preconditioning strategy is crucial for the performance of the overall method. Constraint preconditioners are very effective in this context; nevertheless, their computation may be very expensive for large-scale problems, and resorting to approximations of them may be convenient. Here we propose a procedure for building inexact constraint preconditioners by updating a "seed" constraint preconditioner computed for a KKT matrix at a previous interior point iteration. These updates are obtained through low-rank corrections of the Schur complement of the (1,1) block of the seed preconditioner. The updated preconditioners are analyzed both theoretically and computationally. The results obtained show that our updating procedure, coupled with an adaptive strategy for determining whether to reinitialize or update the preconditioner, can enhance the performance of interior point methods on large problems.
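A minimal Python sketch of the low-rank updating idea, under the simplifying assumption that the preconditioner's (1,1) block is diagonal, so the Schur complement is S = A diag(1/g) A^T and a change in g restricted to a few indices gives an exactly low-rank correction: the seed Cholesky factorization is reused through the Sherman-Morrison-Woodbury formula instead of being recomputed. Names, sizes, and the rank choice are illustrative:

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def updated_schur_solver(A, g_seed, g_new, k):
        """Seed Schur complement S_seed = A diag(1/g_seed) A^T, updated by the
        rank-k correction capturing the k largest changes in 1/g:
        S_new ~= S_seed + U diag(d) U^T with U = A[:, idx]. Returns a function
        applying S_new^{-1} via Sherman-Morrison-Woodbury, reusing the seed
        Cholesky factorization instead of refactorizing."""
        delta = 1.0 / g_new - 1.0 / g_seed
        idx = np.argsort(-np.abs(delta))[:k]         # largest changes only
        seed = cho_factor((A / g_seed) @ A.T)
        U, d = A[:, idx], delta[idx]

        def solve(r):
            y = cho_solve(seed, r)
            Z = cho_solve(seed, U)                   # S_seed^{-1} U
            core = np.diag(1.0 / d) + U.T @ Z        # small k x k system
            return y - Z @ np.linalg.solve(core, U.T @ y)

        return solve

    # Made-up sizes: 4 constraints, 10 variables, rank-3 correction
    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 10))
    g_seed = rng.uniform(1.0, 2.0, 10)               # seed (1,1)-block diagonal
    g_new = g_seed * rng.uniform(0.5, 3.0, 10)       # diagonal at a later iteration
    apply_inv = updated_schur_solver(A, g_seed, g_new, k=3)
    print(apply_inv(rng.standard_normal(4)))

Applying the full constraint preconditioner additionally involves solves with the (1,1) block itself; the sketch isolates only the Schur-complement update that the paper's low-rank corrections target.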
On affine scaling inexact dogleg methods for bound-constrained nonlinear systems
Within the framework of affine scaling trust-region methods for bound-constrained problems, we discuss the use of an inexact dogleg method as a tool for simultaneously handling the trust region and the bound constraints while seeking an approximate minimizer of the model.
Focusing on bound-constrained systems of nonlinear equations, an inexact affine scaling method for large-scale problems, employing the inexact dogleg procedure, is described. Global convergence results are established without any Lipschitz assumption on the Jacobian matrix, and locally fast convergence is shown under standard assumptions. The convergence analysis is performed without specifying the scaling matrix used to handle the bounds, and a rather general class of scaling matrices is allowed in actual algorithms. Numerical results showing the performance of the method are also given.
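For concreteness, here is a Python sketch of a single (exact-Jacobian, unscaled) dogleg step subject to bounds: the step is the classical blend of the Cauchy and Newton points of the Gauss-Newton model, clipped to the trust region, followed by a fraction-to-the-boundary damping that keeps the iterate strictly feasible. The paper's actual method additionally uses an affine scaling matrix and allows inexact Newton points from an iterative solver; those ingredients are omitted here, and all constants are illustrative:

    import numpy as np

    def dogleg_step(F, J, x, lo, hi, delta, theta=0.995):
        """One dogleg step for F(x) = 0 subject to lo <= x <= hi: blend the
        Cauchy and Newton points of the Gauss-Newton model, clip the result
        to the trust region of radius delta, then damp it so the new iterate
        stays strictly inside the bounds."""
        Fx, Jx = F(x), J(x)
        g = Jx.T @ Fx                                        # gradient of 0.5*||F||^2
        p_c = -((g @ g) / np.linalg.norm(Jx @ g) ** 2) * g   # Cauchy point
        p_n = np.linalg.solve(Jx, -Fx)                       # Newton point
        if np.linalg.norm(p_n) <= delta:
            p = p_n
        elif np.linalg.norm(p_c) >= delta:
            p = (delta / np.linalg.norm(p_c)) * p_c
        else:                                                # point on the dogleg arc
            d = p_n - p_c
            a, b, c = d @ d, 2.0 * (p_c @ d), p_c @ p_c - delta ** 2
            tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
            p = p_c + tau * d
        alpha = 1.0                                          # fraction to the boundary
        for pi, xi, l, u in zip(p, x, lo, hi):
            if pi < 0.0: alpha = min(alpha, theta * (l - xi) / pi)
            if pi > 0.0: alpha = min(alpha, theta * (u - xi) / pi)
        return x + alpha * p

    # Made-up 2x2 bound-constrained system with a root at (1, 1)
    F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])
    J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
    x, lo, hi = np.array([0.5, 0.8]), np.zeros(2), np.full(2, 2.0)
    for _ in range(20):
        x = dogleg_step(F, J, x, lo, hi, delta=1.0)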
On the Convergence Properties of a Stochastic Trust-Region Method with Inexact Restoration
We study the convergence properties of SIRTR, a stochastic inexact restoration trust-region method suited for the minimization of a finite sum of continuously differentiable functions. This method combines the trust-region methodology with random function and gradient estimates formed by subsampling. Unlike other existing schemes, it forces the decrease of a merit function by combining the function approximation with an infeasibility term, the latter of which measures the distance of the current sample size from its maximum value. In a previous work, the expected iteration complexity to satisfy an approximate first-order optimality condition was given. Here, we elaborate on the convergence analysis of SIRTR and prove its convergence in probability under suitable accuracy requirements on random function and gradient estimates. Furthermore, we report the numerical results obtained on some nonconvex classification test problems, discussing the impact of the probabilistic requirements on the selection of the sample sizes.
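The interplay between the sample size, the infeasibility term, and the acceptance test can be sketched schematically in Python. The loop below treats the sample size N as a feasibility variable with infeasibility h(N) = (N_max - N)/N_max, increases it in a restoration-like phase, and accepts a trust-region step only if a merit value combining the subsampled function estimate with h decreases sufficiently. All constants, the restoration rule, and the merit weights are made-up stand-ins for SIRTR's actual parameters:

    import numpy as np

    rng = np.random.default_rng(4)

    def sirtr_sketch(fi, gi, x, N_max, iters=100, eta=1e-4, theta=0.9):
        """Schematic stochastic trust-region loop with inexact restoration:
        the sample size N acts as a feasibility variable with infeasibility
        h(N) = (N_max - N)/N_max, and acceptance is tested on a merit value
        mixing the subsampled objective estimate with h."""
        def estimate(n, y):
            idx = rng.choice(N_max, size=n, replace=False)
            return (np.mean([fi(i, y) for i in idx]),
                    np.mean([gi(i, y) for i in idx], axis=0))
        h = lambda n: (N_max - n) / N_max
        N, delta = max(2, N_max // 10), 1.0
        f_est, _ = estimate(N, x)
        for _ in range(iters):
            N_new = min(N_max, int(np.ceil(1.2 * N)))    # restoration: reduce h
            f_new, g_new = estimate(N_new, x)
            gn = np.linalg.norm(g_new)
            if gn == 0.0:
                break
            s = -(min(delta, gn) / gn) * g_new           # Cauchy-type TR step
            f_trial, _ = estimate(N_new, x + s)
            pred = min(delta, gn) * gn                   # model-decrease proxy
            merit_now = theta * f_est + (1 - theta) * h(N)
            merit_trial = theta * f_trial + (1 - theta) * h(N_new)
            if merit_now - merit_trial >= eta * theta * pred:   # accept
                x, f_est, N, delta = x + s, f_trial, N_new, 2.0 * delta
            else:                                        # reject: shrink radius
                delta *= 0.5
        return x

    # Made-up finite-sum demo: fi = half squared residual of sample i
    a = rng.standard_normal((100, 3))
    b = rng.standard_normal(100)
    x = sirtr_sketch(lambda i, y: 0.5 * (a[i] @ y - b[i]) ** 2,
                     lambda i, y: (a[i] @ y - b[i]) * a[i],
                     np.zeros(3), N_max=100)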
Partially Updated Switching-Method for systems of nonlinear equations
A hybrid method for solving systems of n nonlinear equations is given. The method does not use derivative information and is especially attractive when good starting points are not available and the given system is expensive to evaluate. It is shown that, after a few steps, each iteration requires (2k + 1) function evaluations, where k, 1 ≤ k ≤ n, is chosen so as to obtain an efficient algorithm. Global convergence results are given and superlinear convergence is established. Some numerical results show the numerical performance of the proposed method.
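One plausible reading of the (2k + 1) evaluation count can be sketched in Python: keep a finite-difference Jacobian approximation, refresh only k of its columns per iteration (central differences: 2k evaluations), and evaluate the system once at the new iterate. The switching logic of the actual method is not reproduced; this shows only the partial-updating cost pattern, with made-up test data:

    import numpy as np

    def partial_update_solve(F, x, k=1, h=1e-6, tol=1e-10, maxit=200):
        """Newton-like iteration for F(x) = 0 that uses no analytic derivatives
        and refreshes only k columns of a finite-difference Jacobian
        approximation per step (round-robin, central differences), so each
        iteration costs 2k + 1 F-evaluations after the initial approximation."""
        n = len(x)
        B = np.empty((n, n))
        for j in range(n):                        # initial full FD Jacobian
            e = np.zeros(n); e[j] = h
            B[:, j] = (F(x + e) - F(x - e)) / (2.0 * h)
        Fx, start = F(x), 0
        for _ in range(maxit):
            if np.linalg.norm(Fx) <= tol:
                break
            x = x + np.linalg.solve(B, -Fx)       # step with partially stale B
            Fx = F(x)                             # 1 evaluation
            for i in range(k):                    # refresh k columns: 2k evals
                j = (start + i) % n
                e = np.zeros(n); e[j] = h
                B[:, j] = (F(x + e) - F(x - e)) / (2.0 * h)
            start = (start + k) % n
        return x

    # Made-up mildly nonlinear 3x3 system with a dominant linear part
    F = lambda v: 3.0 * v + 0.5 * np.sin(np.roll(v, -1)) - 1.0
    print(partial_update_solve(F, np.zeros(3), k=1))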
- …