
    An optimal subgradient algorithm for large-scale convex optimization in simple domains

    This paper shows that the optimal subgradient algorithm (OSGA) proposed in \cite{NeuO} can be used to solve structured large-scale convex constrained optimization problems. Only first-order information is required, and the optimal complexity bounds for both smooth and nonsmooth problems are attained. More specifically, we consider two classes of problems: (i) a convex objective over a simple closed convex domain, where the orthogonal projection onto this feasible domain is efficiently available; (ii) a convex objective with a simple convex functional constraint. If OSGA is equipped with an appropriate prox-function, its subproblem can be solved either in closed form or by a simple iterative scheme, which is especially important for large-scale problems. We report numerical results for several applications to show the efficiency of the proposed scheme. A software package implementing OSGA for the above domains is available.
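    Problem class (i) above can be illustrated with a plain projected subgradient loop: only a subgradient oracle and a cheap orthogonal projection onto the feasible set are used. The sketch below is a minimal illustration of that setting, not the OSGA scheme itself; the function names, the least-absolute-deviations example, and the nonnegative-orthant projection are all assumed for illustration.

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, n_iter=1000):
    """Minimal sketch of problem class (i): minimize a convex f over a simple
    set whose orthogonal projection `project` is cheap. Not OSGA itself."""
    x = project(np.asarray(x0, dtype=float))
    best_x, best_f = x.copy(), f(x)
    for k in range(1, n_iter + 1):
        g = subgrad(x)                       # any subgradient of f at x
        x = project(x - g / np.sqrt(k))      # diminishing step, then project back
        fx = f(x)
        if fx < best_f:                      # subgradient methods are not monotone
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Illustrative use: least absolute deviations over the nonnegative orthant
# (a "simple" domain with a closed-form projection).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.abs(rng.standard_normal(5))
f = lambda x: np.sum(np.abs(A @ x - b))
subgrad = lambda x: A.T @ np.sign(A @ x - b)
project = lambda x: np.maximum(x, 0.0)
x_hat, f_hat = projected_subgradient(f, subgrad, project, np.zeros(5))
```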

    A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima

    We introduce Bella, a locally superlinearly convergent Bregman forward backward splitting method for minimizing the sum of two nonconvex functions, one of which satisfying a relative smoothness condition and the other one possibly nonsmooth. A key tool of our methodology is the Bregman forward-backward envelope (BFBE), an exact and continuous penalty function with favorable first- and second-order properties, and enjoying a nonlinear error bound when the objective function satisfies a Lojasiewicz-type property. The proposed algorithm is of linesearch type over the BFBE along candidate update directions, and converges subsequentially to stationary points, globally under a KL condition, and owing to the given nonlinear error bound can attain superlinear convergence rates even when the limit point is a nonisolated minimum, provided the directions are suitably selected
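    The building block underlying such methods is the Bregman forward-backward step x+ = argmin_u { g(u) + <grad f(x), u> + (1/gamma) D_h(u, x) }, where D_h is the Bregman distance generated by a kernel h relative to which f is smooth. The sketch below instantiates that step with the Boltzmann-Shannon entropy kernel and g taken as the indicator of the probability simplex, in which case the step has the familiar exponentiated-gradient closed form; it shows the update only and omits the BFBE linesearch and direction selection that give Bella its superlinear rates. All names and the quadratic example are assumptions for illustration.

```python
import numpy as np

def bregman_fb_step_simplex(grad_f, x, gamma):
    """One Bregman forward-backward step
        x+ = argmin_u  g(u) + <grad f(x), u> + (1/gamma) * D_h(u, x)
    with h the Boltzmann-Shannon entropy and g the indicator of the simplex.
    The argmin then reduces to the exponentiated-gradient update below.
    Illustrative only: Bella wraps such steps in a linesearch over the BFBE."""
    y = x * np.exp(-gamma * grad_f(x))   # forward (gradient) step in mirror coordinates
    return y / y.sum()                   # backward step: Bregman projection onto the simplex

# Tiny usage example: minimize f(x) = 0.5 * ||A x - b||^2 over the simplex.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 8)), rng.standard_normal(30)
grad_f = lambda x: A.T @ (A @ x - b)
x = np.full(8, 1.0 / 8)
for _ in range(200):
    x = bregman_fb_step_simplex(grad_f, x, gamma=1e-2)
```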

    Local convergence of the Levenberg-Marquardt method under Hölder metric subregularity

    We describe and analyse Levenberg-Marquardt methods for solving systems of nonlinear equations. More specifically, we propose an adaptive formula for the Levenberg-Marquardt parameter and analyse the local convergence of the method under Hölder metric subregularity of the function defining the equation and Hölder continuity of its gradient mapping. We further analyse the local convergence of the method under the additional assumption that the Łojasiewicz gradient inequality holds. Finally, we report encouraging numerical results confirming the theoretical findings for the problem of computing moiety-conserved steady states in biochemical reaction networks. This problem can be cast as finding a solution of a system of nonlinear equations whose associated mapping satisfies the Łojasiewicz gradient inequality.
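    For reference, one Levenberg-Marquardt iteration for F(x) = 0 solves the regularized normal equations (J_k^T J_k + mu_k I) d_k = -J_k^T F(x_k) and sets x_{k+1} = x_k + d_k. The sketch below uses the common residual-based choice mu_k = ||F(x_k)||^eta as a stand-in for the adaptive parameter formula analysed in the paper, whose exact form is not reproduced here; the example system is likewise only illustrative.

```python
import numpy as np

def levenberg_marquardt(F, J, x0, eta=1.0, tol=1e-10, max_iter=100):
    """Basic Levenberg-Marquardt iteration for F(x) = 0.
    mu_k = ||F(x_k)||**eta is an illustrative residual-based parameter choice,
    standing in for (not reproducing) the paper's adaptive formula."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        if np.linalg.norm(Fx) < tol:
            break
        mu = np.linalg.norm(Fx) ** eta
        # Solve the regularized normal equations (J^T J + mu I) d = -J^T F.
        d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ Fx)
        x = x + d
    return x

# Usage: intersect the unit circle with the line x0 = x1.
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = levenberg_marquardt(F, J, np.array([2.0, 0.5]))
```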

    Bregman Finito/MISO for nonconvex regularized finite sum minimization without Lipschitz gradient continuity

    We introduce two algorithms for nonconvex regularized finite sum minimization, in which the usual Lipschitz differentiability assumptions are relaxed to the notion of relative smoothness. The first is a Bregman extension of Finito/MISO, studied for fully nonconvex problems when the sampling is random, or under convexity of the nonsmooth term when the sampling is essentially cyclic. The second algorithm is a low-memory variant, in the spirit of SVRG and SARAH, that also allows for fully nonconvex formulations. Our analysis is made remarkably simple by employing a Bregman Moreau envelope as a Lyapunov function. In the randomized case, linear convergence is established when the cost function is strongly convex, with no convexity requirements on the individual functions in the sum. For the essentially cyclic and low-memory variants, global and linear convergence results are established when the cost function satisfies the Kurdyka-Łojasiewicz property.
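    The Finito/MISO structure referred to above keeps a table of per-function points and gradients, forms an aggregated update from the table, and refreshes a single randomly sampled entry per iteration. The sketch below shows a simplified Euclidean, randomized variant of that pattern with a proximal (soft-thresholding) step for an l1 regularizer; the stepsize rules and the Bregman generalization studied in the paper are not reproduced, and the names and example data are assumptions.

```python
import numpy as np

def finito_l1(grad_fs, x0, gamma, lam, n_epochs=50, seed=0):
    """Simplified Euclidean Finito/MISO-type loop for
        min (1/n) * sum_i f_i(x) + lam * ||x||_1.
    Keeps a table of per-function points and gradients, aggregates them,
    takes a prox (soft-thresholding) step, then refreshes one randomly
    sampled table entry per iteration. Illustrative only."""
    rng = np.random.default_rng(seed)
    n = len(grad_fs)
    x0 = np.asarray(x0, dtype=float)
    phi = np.tile(x0, (n, 1))                           # table of points
    grads = np.array([g(x0) for g in grad_fs])          # table of gradients
    x = x0.copy()
    for _ in range(n_epochs * n):
        z = phi.mean(axis=0) - gamma * grads.mean(axis=0)
        x = np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)  # prox of lam*||.||_1
        j = rng.integers(n)                              # refresh one memory slot
        phi[j], grads[j] = x, grad_fs[j](x)
    return x

# Usage sketch: n quadratic pieces f_i(x) = 0.5 * ||A_i x - b_i||^2 plus an l1 term.
rng = np.random.default_rng(1)
As = [rng.standard_normal((10, 5)) for _ in range(4)]
bs = [rng.standard_normal(10) for _ in range(4)]
grad_fs = [lambda x, A=A, b=b: A.T @ (A @ x - b) for A, b in zip(As, bs)]
x_hat = finito_l1(grad_fs, np.zeros(5), gamma=0.02, lam=0.1)
```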