An inexact q-order regularized proximal Newton method for nonconvex composite optimization
This paper concerns the composite problem of minimizing the sum of a twice continuously differentiable function and a nonsmooth convex function. For this class of nonconvex and nonsmooth problems, by leveraging a practical inexactness criterion and a novel selection strategy for iterates, we propose an inexact q-order regularized proximal Newton method, which reduces to an inexact cubic regularization (CR) method for q = 3. We justify that its iterate sequence converges to a stationary point for a KL objective function, and that if the objective function has the KL property of a suitable exponent, the convergence has a local superlinear rate whose order depends on q and that exponent. In particular, under a locally Hölderian error bound on a second-order stationary point set, the iterate sequence converges to a second-order stationary point with a local superlinear rate, which becomes quadratic for suitable values of q and the Hölder order. This is the first practical inexact CR method with a quadratic convergence rate for nonconvex composite optimization. We validate the efficiency of the proposed method, with ZeroFPR as the solver of its subproblems, by applying it to convex and nonconvex composite problems with a highly nonlinear smooth term.
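As a toy illustration of the cubic regularization step this abstract refers to (for q = 3 the subproblem adds a cubic penalty to the local quadratic model), here is a minimal scalar sketch. The test function, the weight sigma, and the iteration count are illustrative assumptions, not the paper's method:

```python
import numpy as np

def cubic_reg_step(g, H, sigma):
    # Minimize the model m(d) = g*d + 0.5*H*d^2 + (sigma/3)*|d|^3 over d.
    # Stationarity: g + H*d + sigma*d*|d| = 0, solved on each sign branch:
    #   d >= 0:  sigma*d^2 + H*d + g = 0;   d <= 0:  -sigma*d^2 + H*d + g = 0
    cands = [0.0]
    for s in (1.0, -1.0):
        a, b, c = s * sigma, H, g
        disc = b * b - 4 * a * c
        if disc >= 0:
            for r in ((-b + np.sqrt(disc)) / (2 * a),
                      (-b - np.sqrt(disc)) / (2 * a)):
                if s * r >= 0:          # root must lie in its sign branch
                    cands.append(r)
    m = lambda d: g * d + 0.5 * H * d * d + sigma * abs(d) ** 3 / 3
    return min(cands, key=m)

# Cubic-regularized Newton on the nonconvex f(x) = x^4 - 2x^2
df  = lambda x: 4 * x ** 3 - 4 * x
d2f = lambda x: 12 * x ** 2 - 4
x = 2.0
for _ in range(30):
    x += cubic_reg_step(df(x), d2f(x), sigma=1.0)
# x approaches the stationary point x = 1 (a global minimizer)
```

The cubic term keeps the model bounded below even where the Hessian is indefinite, which is what lets such methods handle nonconvex smooth parts.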
Regularisation, optimisation, subregularity
Regularisation theory in Banach spaces, and non-norm-squared regularisation even in finite dimensions, generally relies upon Bregman divergences to replace norm convergence. This is comparable to the extension of first-order optimisation methods to Banach spaces. Bregman divergences can, however, be somewhat suboptimal in terms of descriptiveness. Using the concept of (strong) metric subregularity, previously used to prove the fast local convergence of optimisation methods, we show norm convergence in Banach spaces and for non-norm-squared regularisation. For problems such as total variation regularised image reconstruction, the metric subregularity reduces to a geometric condition on the ground truth: flat areas in the ground truth have to compensate for the fidelity term not having second-order growth within the kernel of the forward operator. Our approach to proving such regularisation results is based on optimisation formulations of inverse problems. As a side result of the regularisation theory that we develop, we provide regularisation complexity results for optimisation methods: how many steps N(δ) of the algorithm do we have to take for the approximate solutions to converge as the corruption level δ → 0?
Perturbation of error bounds
Our aim in the current article is to extend the developments in Kruger et al. (SIAM J Optim 20(6):3280–3296, 2010. doi: 10.1137/100782206) and, more precisely, to characterize, in the Banach space setting, the stability of the local and global error bound property of inequalities determined by lower semicontinuous functions under data perturbations. We propose new concepts of (arbitrary, convex and linear) perturbations of the given function defining the system under consideration, which turn out to be a useful tool in our analysis. The characterizations of error bounds for families of perturbations can be interpreted as estimates of the "radius of error bounds". The definitions and characterizations are illustrated by examples. The research is supported by the Australian Research Council: project DP160100854; EDF and the Jacques Hadamard Mathematical Foundation: Gaspard Monge Program for Optimization and Operations Research. The research of the second and third authors is also supported by MINECO of Spain and FEDER of EU: Grant MTM2014-59179-C2-1-P.
An inexact regularized proximal Newton method for nonconvex and nonsmooth optimization
This paper focuses on the minimization of the sum of a twice continuously differentiable function and a nonsmooth convex function. We propose an inexact regularized proximal Newton method in which the Hessian approximation is regularized by a power ρ of the KKT residual. For ρ = 0, we demonstrate the global convergence of the iterate sequence for a KL objective function and its linear convergence rate for a KL objective function of a suitable exponent. For ρ > 0, we establish the global convergence of the iterate sequence and a superlinear convergence rate, whose order depends on ρ, under the assumption that cluster points satisfy a local Hölderian error bound on the strong stationary point set; and when cluster points satisfy a local error bound on the common stationary point set, we also obtain the global convergence of the iterate sequence, together with a superlinear convergence rate whose order additionally depends on the error bound exponent. A dual semismooth Newton augmented Lagrangian method is developed for seeking an inexact minimizer of the subproblem. Numerical comparisons with two state-of-the-art methods on regularized Student's t-regression, group penalized Student's t-regression, and nonconvex image restoration confirm the efficiency of the proposed method.
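The residual-dependent Hessian regularization can be sketched in one dimension for f(x) = 0.5(x-2)^2 plus an ℓ1 term. The damping H = f'' + mu*r^rho and the constants mu, rho are illustrative assumptions standing in for the paper's construction; here the prox-Newton subproblem is solved exactly by soft-thresholding:

```python
import numpy as np

def soft(z, t):
    # Proximal map of t*|.| (soft-thresholding)
    return np.sign(z) * max(abs(z) - t, 0.0)

def reg_prox_newton(x, lam=1.0, mu=1.0, rho=0.5, iters=50):
    # Sketch: regularized proximal Newton for f(x) + lam*|x| with
    # f(x) = 0.5*(x - 2)^2, whose minimizer with lam = 1 is x = 1.
    for _ in range(iters):
        g, h = x - 2.0, 1.0                   # gradient and Hessian of f
        r = abs(x - soft(x - g, lam))         # KKT residual
        H = h + mu * r ** rho                 # residual-damped Hessian
        x = soft(x - g / H, lam / H)          # exact prox-Newton step
    return x

x = reg_prox_newton(5.0)
```

In this toy problem the error e = x - 1 contracts like e^(1+rho) near the solution, which mirrors the ρ-dependent superlinear order described in the abstract.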
A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima
We introduce Bella, a locally superlinearly convergent Bregman forward-backward splitting method for minimizing the sum of two nonconvex functions, one of which satisfies a relative smoothness condition while the other may be nonsmooth. A key tool of our methodology is the Bregman forward-backward envelope (BFBE), an exact and continuous penalty function with favorable first- and second-order properties that enjoys a nonlinear error bound when the objective function satisfies a Łojasiewicz-type property. The proposed algorithm performs a linesearch over the BFBE along candidate update directions; it converges subsequentially to stationary points, converges globally under a KL condition, and, owing to the given nonlinear error bound, can attain superlinear convergence rates even when the limit point is a nonisolated minimum, provided the directions are suitably selected.
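A minimal instance of plain Bregman forward-backward splitting (not Bella itself, and without the BFBE linesearch): with the entropy kernel and the simplex indicator as the nonsmooth term, the backward step is the closed-form exponentiated-gradient update. The linear cost c and step size t are illustrative assumptions:

```python
import numpy as np

def bregman_fb(c, t=0.5, iters=200):
    # Bregman forward-backward with kernel h(u) = sum u*log(u) and
    # g = indicator of the probability simplex: the backward (mirror)
    # step is a multiplicative update followed by renormalization.
    u = np.full(len(c), 1.0 / len(c))
    for _ in range(iters):
        u = u * np.exp(-t * c)   # forward step in the mirror/dual space
        u /= u.sum()             # Bregman projection onto the simplex
    return u

c = np.array([3.0, 1.0, 2.0])
u = bregman_fb(c)   # mass concentrates on the smallest cost, index 1
```

The entropy kernel is what makes this a Bregman rather than a Euclidean forward-backward scheme: the update stays in the simplex without any explicit constraint handling.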
A trust region-type normal map-based semismooth Newton method for nonsmooth nonconvex composite optimization
We propose a novel trust region method for solving a class of nonsmooth and
nonconvex composite-type optimization problems. The approach embeds inexact
semismooth Newton steps for finding zeros of a normal map-based stationarity
measure for the problem in a trust region framework. Based on a new merit
function and acceptance mechanism, global convergence and transition to fast
local q-superlinear convergence are established under standard conditions. In
addition, we verify that the proposed trust region globalization is compatible
with the Kurdyka-Łojasiewicz (KL) inequality, yielding finer convergence
results. We further derive new normal map-based representations of the
associated second-order optimality conditions that have direct connections to
the local assumptions required for fast convergence. Finally, we study the
behavior of our algorithm when the Hessian matrix of the smooth part of the
objective function is approximated by BFGS updates. We successfully link the KL
theory, properties of the BFGS approximations, and a Dennis-Moré-type
condition to show superlinear convergence of the quasi-Newton version of our
method. Numerical experiments on sparse logistic regression and image
compression illustrate the efficiency of the proposed algorithm.
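A minimal scalar caricature of the trust-region mechanism described above (model step clipped to the radius, actual-vs-predicted acceptance test, radius update); it omits the normal map, semismoothness, and everything else specific to the paper, and all constants are illustrative:

```python
import math

def trust_region_newton(f, df, d2f, x, delta=1.0, iters=50):
    # Scalar trust-region Newton sketch: Newton step when curvature is
    # positive (else a step to the boundary), clipped to the radius,
    # accepted via the standard rho-test on actual vs. model reduction.
    for _ in range(iters):
        g, H = df(x), d2f(x)
        d = -g / H if H > 0 else (-delta if g > 0 else delta)
        d = max(-delta, min(delta, d))        # stay inside the region
        pred = -(g * d + 0.5 * H * d * d)     # predicted (model) decrease
        ared = f(x) - f(x + d)                # actual decrease
        rho = ared / pred if pred > 0 else -1.0
        if rho > 0.1:                         # accept the step
            x += d
            if rho > 0.75:
                delta *= 2.0                  # good agreement: grow radius
        else:
            delta *= 0.25                     # poor agreement: shrink radius
    return x

# Minimize f(x) = e^x - 2x; the unique minimizer is x = ln 2
x = trust_region_newton(lambda x: math.exp(x) - 2 * x,
                        lambda x: math.exp(x) - 2,
                        lambda x: math.exp(x), x=5.0)
```

The rho-test is the globalization device: far from the solution it guards against bad model steps, while near the solution full Newton steps are accepted and fast local convergence takes over.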
Fischer convergence and H-differentiability of set-valued mappings
In this thesis we first present a new concept of generalized differentiation for set-valued mappings, involving positively homogeneous mappings: H-differentiability. We study the stability of this notion using Fischer convergence, originally defined for sets but which we adapt to set-valued mappings. We then establish the continuous dependence of the fixed-point sets of set-valued contractions on the data, and finally we study the convergence of a forward-backward splitting method for approximating the zeros of the sum of two non-monotone set-valued mappings, notably using properties of pseudo H-differentiability.
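The forward-backward splitting scheme studied in the thesis can be illustrated on a toy single-valued example: A(x) = x - 3 is the forward operator and B is the normal cone to [0, 1], whose resolvent is the projection. The operators and step size are illustrative assumptions (and monotone, unlike the thesis's non-monotone setting):

```python
def forward_backward(x, t=0.5, iters=25):
    # Find a zero of A + B with A(x) = x - 3 (explicit forward step) and
    # B = normal cone of [0, 1] (implicit backward step = projection).
    # The unique zero is x = 1, where -A(1) = 2 lies in the normal cone.
    for _ in range(iters):
        # x_{k+1} = J_{tB}(x_k - t*A(x_k)), J_{tB} = projection onto [0, 1]
        x = min(1.0, max(0.0, x - t * (x - 3.0)))
    return x

x = forward_backward(0.0)   # converges to 1.0
```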