Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
The evaluation complexity of general nonlinear, possibly nonconvex, constrained optimization is analyzed. It is shown that, under suitable smoothness conditions, an epsilon-approximate first-order critical point of the problem can be computed in order O(epsilon^(1-2(p+1)/p)) evaluations of the problem's functions and their first p derivatives. This is achieved by using a two-phase algorithm inspired by Cartis, Gould, and Toint [SIAM J. Optim., 21 (2011), pp. 1721-1739; SIAM J. Optim., 23 (2013), pp. 1553-1574]. It is also shown that strong guarantees (in terms of handling degeneracies) on the possible limit points of the sequence of iterates generated by this algorithm can be obtained at the cost of increased complexity. At variance with previous results, the epsilon-approximate first-order criticality is defined by satisfying a version of the KKT conditions with an accuracy that does not depend on the size of the Lagrange multipliers.
Funding: FAPESP - Fundação de Amparo à Pesquisa do Estado de São Paulo (grants 2010/10133-0, 2013/03447-6, 2013/05475-7, 2013/07375-0, 2013/23494-9); CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (grants 304032/2010-7, 309517/2014-1, 303750/2014-6, 490326/2013-).
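The distinction the abstract draws can be made concrete: an unscaled epsilon-approximate KKT point requires the KKT residuals themselves to be at most epsilon, rather than epsilon inflated by the multiplier size. A minimal Python sketch, where the toy problem, function names, and tolerance are illustrative assumptions and not the paper's algorithm:

```python
# Hedged sketch of the unscaled approximate-KKT test for
# min f(x) s.t. c(x) <= 0 with one variable and one constraint.
# The toy problem and names are illustrative, not the paper's.

def kkt_residuals(grad_f, grad_c, c_val, lam):
    """Stationarity, feasibility and complementarity residuals."""
    stationarity = abs(grad_f + lam * grad_c)  # |grad_x L(x, lam)|
    feasibility = max(c_val, 0.0)              # constraint violation
    complementarity = abs(lam * c_val)         # lam * c(x)
    return stationarity, feasibility, complementarity

def is_unscaled_kkt(grad_f, grad_c, c_val, lam, eps):
    # "Unscaled": the tolerance eps is NOT inflated by a factor such as
    # (1 + |lam|), so accuracy does not degrade for large multipliers.
    return all(r <= eps for r in kkt_residuals(grad_f, grad_c, c_val, lam))

# min (x - 2)**2 s.t. x - 1 <= 0: solution x = 1 with multiplier lam = 2.
x, lam = 1.0, 2.0
res = is_unscaled_kkt(grad_f=2.0 * (x - 2.0), grad_c=1.0,
                      c_val=x - 1.0, lam=lam, eps=1e-8)
```

A scaled test would instead compare the stationarity residual against eps * (1 + |lam|), which weakens the guarantee precisely when the multipliers blow up.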
Adaptive Regularization Minimization Algorithms with Non-Smooth Norms and Euclidean Curvature
A regularization algorithm (AR1pGN) for unconstrained nonlinear minimization is considered, which uses a model consisting of a Taylor expansion of arbitrary degree and a regularization term involving a possibly non-smooth norm. It is shown that the non-smoothness of the norm does not affect the O(epsilon^(-(p+1)/p)) upper bound on evaluation complexity for finding first-order epsilon-approximate minimizers using p derivatives, and that this result does not hinge on the equivalence of norms in R^n. It is also shown that, if p = 2, the O(epsilon^(-3)) bound on evaluations for finding second-order epsilon-approximate minimizers still holds for a variant of AR1pGN named AR2GN, despite the possibly non-smooth nature of the regularization term. Moreover, adapting the existing theory to handle the non-smoothness results in an interesting modification of the subproblem termination rules, leading to an even more compact complexity analysis. In particular, it is shown when Newton's step is acceptable for an adaptive regularization method. The approximate minimization of quadratic polynomials regularized with non-smooth norms is then discussed, and a new approximate second-order necessary optimality condition is derived for this case. A specialized algorithm is then proposed to enforce the first- and second-order conditions that are strong enough to ensure the existence of a suitable step in AR1pGN (when p = 2) and in AR2GN, and its iteration complexity is analyzed.
Comment: A correction will be available soon.
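As a rough illustration of the adaptive regularization mechanism in its simplest setting, the following Python sketch runs a 1-D second-order Taylor model with a cubic regularization term in the Euclidean, p = 2 case. This is not the paper's AR1pGN/AR2GN (which allow arbitrary degree and non-smooth norms); all constants and the test function are illustrative assumptions:

```python
import math

# Hedged 1-D sketch of adaptive cubic regularization, Euclidean p = 2
# case (NOT the paper's AR1pGN/AR2GN); constants are illustrative.

def cubic_reg_step(g, h, sigma):
    """Global minimizer of s -> g*s + 0.5*h*s**2 + (sigma/3)*abs(s)**3."""
    if g == 0.0:
        return 0.0
    # Positive root t of sigma*t**2 + h*t - abs(g) = 0; the step opposes g.
    t = (-h + math.sqrt(h * h + 4.0 * sigma * abs(g))) / (2.0 * sigma)
    return -math.copysign(t, g)

def ar2_minimize(f, df, d2f, x, sigma=1.0, eps=1e-8, max_iter=200):
    for _ in range(max_iter):
        g = df(x)
        if abs(g) <= eps:          # first-order eps-approximate minimizer
            break
        h = d2f(x)
        s = cubic_reg_step(g, h, sigma)
        predicted = -(g * s + 0.5 * h * s * s + (sigma / 3.0) * abs(s) ** 3)
        rho = (f(x) - f(x + s)) / predicted   # actual vs predicted decrease
        if rho >= 0.1:
            x += s                             # successful step
            sigma = max(0.5 * sigma, 1e-8)     # be less conservative
        else:
            sigma *= 2.0                       # regularize more strongly
    return x

x_star = ar2_minimize(lambda x: (x - 3.0) ** 4,
                      lambda x: 4.0 * (x - 3.0) ** 3,
                      lambda x: 12.0 * (x - 3.0) ** 2,
                      x=0.0)
```

As sigma shrinks on successful iterations, the subproblem minimizer approaches the Newton step, which is the regime the abstract's remark on Newton-step acceptability concerns.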
Stochastic Trust Region Methods with Trust Region Radius Depending on Probabilistic Models
We present a stochastic trust-region model-based framework in which its
radius is related to the probabilistic models. Especially, we propose a
specific algorithm, termed STRME, in which the trust-region radius depends
linearly on the latest model gradient. The complexity of STRME method in
non-convex, convex and strongly convex settings has all been analyzed, which
matches the existing algorithms based on probabilistic properties. In addition,
several numerical experiments are carried out to reveal the benefits of the
proposed methods compared to the existing stochastic trust-region methods and
other relevant stochastic gradient methods
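The key idea, a radius tied linearly to the stochastic model gradient, can be sketched in a few lines. This is a hedged 1-D illustration (not the authors' code): the objective, noise model, and constants mu and eta below are assumptions for demonstration only:

```python
import random

# Hedged sketch of an STRME-style rule: the trust-region radius
# delta_k depends linearly on the latest stochastic model gradient,
# delta_k = mu_k * |g_k|, instead of being updated independently.

def strme_sketch(f, df, x, mu=0.5, eta=0.1, noise=0.01, iters=200, seed=0):
    rng = random.Random(seed)
    for _ in range(iters):
        g = df(x) + rng.gauss(0.0, noise)    # stochastic gradient estimate
        delta = mu * abs(g)                  # radius tied to |g_k|
        s = -mu * g                          # Cauchy-type step to the boundary
        predicted = abs(g) * delta           # model decrease ~ |g| * delta
        rho = (f(x) - f(x + s)) / (predicted + 1e-16)
        if rho >= eta:                       # sufficient actual decrease
            x += s
            mu = min(2.0 * mu, 1.0)          # success: loosen the coupling
        else:
            mu = max(0.5 * mu, 1e-6)         # failure: tighten it
    return x

x_star = strme_sketch(lambda z: (z - 1.0) ** 2,
                      lambda z: 2.0 * (z - 1.0), x=5.0)
```

Coupling the radius to |g_k| lets the method take long steps far from stationarity and automatically contract as the (noisy) gradient shrinks.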
On complexity and convergence of high-order coordinate descent algorithms
Coordinate descent methods with high-order regularized models for
box-constrained minimization are introduced. High-order stationarity asymptotic
convergence and first-order stationarity worst-case evaluation complexity
bounds are established. The computer work that is necessary for obtaining first-order epsilon-stationarity with respect to the variables of each coordinate-descent block is O(epsilon^(-(p+1)/p)), whereas the computer work for getting first-order epsilon-stationarity with respect to all the variables simultaneously is O(epsilon^(-2(p+1)/p)). Numerical examples
involving multidimensional scaling problems are presented. The numerical
performance of the methods is enhanced by means of coordinate-descent
strategies for choosing initial points.
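The two notions of stationarity, per block versus in all variables at once, can be made concrete with a small Python sketch of block coordinate descent on a convex quadratic. The paper's methods use high-order regularized block models under box constraints; the exact 1-D block minimizations and the toy objective below are simplifications assumed for self-containedness:

```python
# Hedged sketch of two-block coordinate descent on a convex quadratic
# (the paper's algorithms use high-order regularized models and box
# constraints; exact 1-D block minimization is a simplification here).

def f(x, y):
    return (x - 1.0) ** 2 + (y - 2.0) ** 2 + 0.5 * x * y

def grad(x, y):
    """Full gradient, used for the all-variables stationarity test."""
    return (2.0 * (x - 1.0) + 0.5 * y, 2.0 * (y - 2.0) + 0.5 * x)

def coordinate_descent(x, y, eps=1e-8, max_cycles=100):
    for _ in range(max_cycles):
        x = 1.0 - 0.25 * y   # argmin over the x-block: grad_x = 0 afterwards
        y = 2.0 - 0.25 * x   # argmin over the y-block: grad_y = 0 afterwards
        gx, gy = grad(x, y)
        # Each block step enforces stationarity in its own block; the
        # stronger requirement is eps-stationarity in all variables at once.
        if max(abs(gx), abs(gy)) <= eps:
            break
    return x, y

x_opt, y_opt = coordinate_descent(0.0, 0.0)
```

Updating one block perturbs the gradient of the others through the cross term, which is why simultaneous epsilon-stationarity is the more expensive target in the complexity bounds above.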