11 research outputs found

    An inexact $q$-order regularized proximal Newton method for nonconvex composite optimization

    Full text link
    This paper concerns the composite problem of minimizing the sum of a twice continuously differentiable function $f$ and a nonsmooth convex function. For this class of nonconvex and nonsmooth problems, by leveraging a practical inexactness criterion and a novel selection strategy for iterates, we propose an inexact $q\in[2,3]$-order regularized proximal Newton method, which becomes an inexact cubic regularization (CR) method for $q=3$. We justify that its iterate sequence converges to a stationary point for the KL objective function, and if the objective function has the KL property of exponent $\theta\in(0,\frac{q-1}{q})$, the convergence has a local $Q$-superlinear rate of order $\frac{q-1}{\theta q}$. In particular, under a locally Hölderian error bound of order $\gamma\in(\frac{1}{q-1},1]$ on a second-order stationary point set, the iterate sequence converges to a second-order stationary point with a local $Q$-superlinear rate of order $\gamma(q-1)$, which is specified as a $Q$-quadratic rate for $q=3$ and $\gamma=1$. This is the first practical inexact CR method with a $Q$-quadratic convergence rate for nonconvex composite optimization. We validate the efficiency of the proposed method, with ZeroFPR as the solver of subproblems, by applying it to convex and nonconvex composite problems with a highly nonlinear $f$.
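
    For orientation, a generic $q$-order regularized proximal Newton step for minimizing $f+g$ (a sketch only; the paper's exact model, inexactness criterion and iterate-selection rule are not reproduced here, and $\mu_k>0$ denotes a generic regularization parameter) computes
    \[
    x^{k+1} \approx \arg\min_{x}\; \langle\nabla f(x^k),x-x^k\rangle+\tfrac{1}{2}\langle\nabla^2 f(x^k)(x-x^k),x-x^k\rangle+\tfrac{\mu_k}{q}\|x-x^k\|^{q}+g(x),
    \]
    which for $q=3$ becomes a cubic-regularized proximal Newton subproblem of the kind the paper hands to ZeroFPR.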

    Regularisation, optimisation, subregularity

    Get PDF
    Regularisation theory in Banach spaces, and non-norm-squared regularisation even in finite dimensions, generally relies upon Bregman divergences to replace norm convergence. This is comparable to the extension of first-order optimisation methods to Banach spaces. Bregman divergences can, however, be somewhat suboptimal in terms of descriptiveness. Using the concept of (strong) metric subregularity, previously used to prove the fast local convergence of optimisation methods, we show norm convergence in Banach spaces and for non-norm-squared regularisation. For problems such as total variation regularised image reconstruction, the metric subregularity reduces to a geometric condition on the ground truth: flat areas in the ground truth have to compensate for the fidelity term not having second-order growth within the kernel of the forward operator. Our approach to proving such regularisation results is based on optimisation formulations of inverse problems. As a side result of the regularisation theory that we develop, we provide regularisation complexity results for optimisation methods: how many steps $N_\delta$ of the algorithm do we have to take for the approximate solutions to converge as the corruption level $\delta\to 0$?
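
    For reference, the subregularity notion invoked here is the standard one: a set-valued map $H\colon X\rightrightarrows Y$ is (strongly) metrically subregular at $\bar x$ for $\bar y\in H(\bar x)$ if there are $\kappa>0$ and a neighbourhood $U$ of $\bar x$ such that
    \[
    \operatorname{dist}\bigl(x,H^{-1}(\bar y)\bigr) \le \kappa\, \operatorname{dist}\bigl(\bar y,H(x)\bigr) \quad\text{for all } x\in U
    \]
    (with $H^{-1}(\bar y)$ replaced by the single point $\bar x$ in the strong case); roughly, imposing such a condition on a suitable (sub)differential map of the regularised problem is what upgrades Bregman-divergence convergence to norm convergence.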

    Perturbation of error bounds

    Get PDF
    Our aim in the current article is to extend the developments in Kruger et al. (SIAM J Optim 20(6):3280–3296, 2010. doi: 10.1137/100782206) and, more precisely, to characterize, in the Banach space setting, the stability of the local and global error bound property of inequalities determined by lower semicontinuous functions under data perturbations. We propose new concepts of (arbitrary, convex and linear) perturbations of the given function defining the system under consideration, which turn out to be a useful tool in our analysis. The characterizations of error bounds for families of perturbations can be interpreted as estimates of the ‘radius of error bounds’. The definitions and characterizations are illustrated by examples. The research is supported by the Australian Research Council: project DP160100854; EDF and the Jacques Hadamard Mathematical Foundation: Gaspard Monge Program for Optimization and Operations Research. The research of the second and third authors is also supported by MINECO of Spain and FEDER of EU: Grant MTM2014-59179-C2-1-P.
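
    In the setting of the paper, the property being perturbed is the classical one: for a lower semicontinuous $f\colon X\to\mathbb{R}\cup\{+\infty\}$ with $S:=\{x\in X : f(x)\le 0\}$, the inequality $f(x)\le 0$ admits a local error bound at $\bar x\in S$ with modulus $c>0$ if
    \[
    \operatorname{dist}(x,S) \le c\,[f(x)]_+ \qquad\text{for all } x \text{ near } \bar x, \quad [t]_+:=\max(t,0),
    \]
    and a global error bound if the estimate holds on all of $X$; the ‘radius of error bounds’ then, roughly, measures how large a perturbation of $f$ can be before every such estimate fails.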

    An inexact regularized proximal Newton method for nonconvex and nonsmooth optimization

    Full text link
    This paper focuses on the minimization of the sum of a twice continuously differentiable function $f$ and a nonsmooth convex function. We propose an inexact regularized proximal Newton method via an approximation of the Hessian $\nabla^2 f(x)$ involving the $\varrho$th power of the KKT residual. For $\varrho=0$, we demonstrate the global convergence of the iterate sequence for the KL objective function and its $R$-linear convergence rate for the KL objective function of exponent $1/2$. For $\varrho\in(0,1)$, we establish the global convergence of the iterate sequence and its superlinear convergence rate of order $q(1+\varrho)$ under the assumption that cluster points satisfy a Hölderian local error bound of order $q\in(\max(\varrho,\frac{1}{1+\varrho}),1]$ on the strong stationary point set; and when cluster points satisfy a local error bound of order $q>1+\varrho$ on the common stationary point set, we also obtain the global convergence of the iterate sequence and its superlinear convergence rate of order $\frac{(q-\varrho)^2}{q}$ if $q>\frac{2\varrho+1+\sqrt{4\varrho+1}}{2}$. A dual semismooth Newton augmented Lagrangian method is developed for seeking an inexact minimizer of the subproblem. Numerical comparisons with two state-of-the-art methods on $\ell_1$-regularized Student's $t$-regression, group penalized Student's $t$-regression, and nonconvex image restoration confirm the efficiency of the proposed method.
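
    Schematically (a sketch under generic notation; the paper's precise constants, residual and inexactness test are not reproduced, and $c_k>0$ is a generic coefficient), writing the KKT residual as, e.g., $r(x)=\|x-\mathrm{prox}_{g}(x-\nabla f(x))\|$, each outer step solves, inexactly,
    \[
    x^{k+1} \approx \arg\min_{x}\; \langle\nabla f(x^k),x-x^k\rangle+\tfrac{1}{2}\bigl\langle\bigl(\nabla^2 f(x^k)+c_k\,r(x^k)^{\varrho}\,I\bigr)(x-x^k),x-x^k\bigr\rangle+g(x),
    \]
    with the inexact minimizer supplied by the dual semismooth Newton augmented Lagrangian method.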

    A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima

    Full text link
    We introduce Bella, a locally superlinearly convergent Bregman forward-backward splitting method for minimizing the sum of two nonconvex functions, one of which satisfies a relative smoothness condition while the other may be nonsmooth. A key tool of our methodology is the Bregman forward-backward envelope (BFBE), an exact and continuous penalty function with favorable first- and second-order properties that enjoys a nonlinear error bound when the objective function satisfies a Łojasiewicz-type property. The proposed algorithm performs a linesearch over the BFBE along candidate update directions; it converges subsequentially to stationary points, globally under a KL condition, and, owing to the nonlinear error bound, can attain superlinear convergence rates even when the limit point is a nonisolated minimum, provided the directions are suitably selected.
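
    The underlying update is the Bregman forward-backward step: with a Legendre kernel $h$ relative to which the smooth part $f$ is smooth and a stepsize $\gamma>0$ (generic symbols, not necessarily the paper's notation),
    \[
    T_\gamma(x) \in \arg\min_{z}\ \bigl\{\, g(z)+\langle\nabla f(x),z-x\rangle+\tfrac{1}{\gamma}D_h(z,x) \,\bigr\}, \qquad D_h(z,x)=h(z)-h(x)-\langle\nabla h(x),z-x\rangle,
    \]
    and the BFBE is, roughly, the optimal value of this model viewed as a function of $x$; Bella runs its linesearch on this envelope along the candidate directions rather than simply accepting $T_\gamma(x)$.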

    A trust region-type normal map-based semismooth Newton method for nonsmooth nonconvex composite optimization

    Full text link
    We propose a novel trust region method for solving a class of nonsmooth and nonconvex composite-type optimization problems. The approach embeds inexact semismooth Newton steps for finding zeros of a normal map-based stationarity measure for the problem in a trust region framework. Based on a new merit function and acceptance mechanism, global convergence and transition to fast local q-superlinear convergence are established under standard conditions. In addition, we verify that the proposed trust region globalization is compatible with the Kurdyka-Łojasiewicz (KL) inequality, yielding finer convergence results. We further derive new normal map-based representations of the associated second-order optimality conditions that have direct connections to the local assumptions required for fast convergence. Finally, we study the behavior of our algorithm when the Hessian matrix of the smooth part of the objective function is approximated by BFGS updates. We successfully link the KL theory, properties of the BFGS approximations, and a Dennis-Moré-type condition to show superlinear convergence of the quasi-Newton version of our method. Numerical experiments on sparse logistic regression and image compression illustrate the efficiency of the proposed algorithm.
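
    In one common form for this composite setting (a sketch; $\lambda>0$ is a generic proximal parameter), the normal map reads
    \[
    F^{\mathrm{nor}}_{\lambda}(z) = \nabla f\bigl(\mathrm{prox}_{\lambda g}(z)\bigr)+\tfrac{1}{\lambda}\bigl(z-\mathrm{prox}_{\lambda g}(z)\bigr),
    \]
    and its zeros $z^{*}$ correspond to stationary points $x^{*}=\mathrm{prox}_{\lambda g}(z^{*})$ of the composite objective; the trust region framework globalizes inexact semismooth Newton steps on this map.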

    Fisher convergence and H-differentiability of set-valued mappings

    Get PDF
    In this thesis we first present a new concept of generalized differentiation for set-valued mappings, built from positively homogeneous mappings: H-differentiability. We study the stability of this notion using Fisher convergence, originally defined for sets and adapted here to set-valued mappings. We then establish the continuous dependence of the fixed-point sets of set-valued contractions on the data. Finally, we analyze the convergence of a forward-backward splitting method for approximating the zeros of the sum of two non-monotone set-valued operators, relying notably on properties of pseudo H-differentiability.
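
    The splitting studied in the final part follows the usual forward-backward pattern for finding a zero of the sum $A+B$ of two set-valued operators (generic notation; $\lambda>0$ is a stepsize): given $x^{k}$,
    \[
    x^{k+1} \in (I+\lambda B)^{-1}\bigl(x^{k}-\lambda\,a^{k}\bigr), \qquad a^{k}\in A(x^{k}),
    \]
    the difference being that neither operator is assumed monotone here, so the analysis leans on pseudo H-differentiability rather than monotonicity.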