
    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm uses a standard line search strategy, whereas the second one retains the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates. Furthermore, both are computationally attractive, since each Newton iteration requires only the approximate solution of a linear system of usually small dimension.
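    As a quick illustration of the FBE (a minimal sketch, not the paper's implementation), take f smooth and g(x) = lam * ||x||_1, chosen here only because its proximal mapping is the explicit soft-thresholding operator:

    import numpy as np

    def soft_threshold(u, t):
        # Proximal mapping of t * ||.||_1 (soft-thresholding).
        return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

    def fbe(x, f, grad_f, lam, gamma):
        # FBE_gamma(x) = f(x) - (gamma/2) * ||grad f(x)||^2
        #                + Moreau envelope of g at x - gamma * grad f(x),
        # where g = lam * ||.||_1.
        g = grad_f(x)
        u = x - gamma * g                    # forward (gradient) step
        z = soft_threshold(u, gamma * lam)   # backward (proximal) step
        moreau = lam * np.abs(z).sum() + np.linalg.norm(z - u) ** 2 / (2.0 * gamma)
        return f(x) - 0.5 * gamma * (g @ g) + moreau

    For gamma below the reciprocal of the Lipschitz constant of grad f, minimizers of the FBE coincide with minimizers of f + g, which is what allows a smooth Newton-type method to be applied to the nonsmooth problem.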

    Structure-Aware Methods for Expensive Derivative-Free Nonsmooth Composite Optimization

    We present new methods for solving a broad class of bound-constrained nonsmooth composite minimization problems. These methods are specially designed for objectives that are a known mapping of the outputs of a computationally expensive function. We provide accompanying implementations of these methods: in particular, a novel manifold sampling algorithm (\mspshortref) whose subproblems are, in a sense, primal versions of the dual problems solved by previous manifold sampling methods, and a method (\goombahref) that employs more difficult optimization subproblems. For these two methods, we provide rigorous convergence analysis and guarantees. We demonstrate extensive testing of these methods. Open-source implementations of the methods developed in this manuscript can be found at \url{github.com/POptUS/IBCDFO/}.
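    To fix ideas (a generic sketch of the problem class, not the authors' algorithms), the setting is min h(F(x)) over a box, where F is expensive and h is a known, cheap, possibly nonsmooth mapping. The stand-in F and the choice h = ||.||_1 below are assumptions for illustration; the caching reflects the premium on never re-querying F at a point already evaluated:

    import numpy as np

    def F(x):
        # Stand-in for a computationally expensive black-box function.
        return np.array([x[0] ** 2 - x[1], np.sin(x[0]) + x[1]])

    def h(Fx):
        # Known, cheap, nonsmooth outer mapping (here: the l1 norm).
        return np.abs(Fx).sum()

    class CachedComposite:
        # Composite objective h(F(x)) that never re-queries F at a seen point.
        def __init__(self, F, h):
            self.F, self.h, self.cache = F, h, {}
        def __call__(self, x):
            key = tuple(np.round(x, 12))
            if key not in self.cache:
                self.cache[key] = self.F(np.asarray(x))  # one expensive call
            return self.h(self.cache[key])

    obj = CachedComposite(F, h)
    print(obj(np.array([1.0, 2.0])))   # expensive evaluation
    print(obj(np.array([1.0, 2.0])))   # served from the cache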

    Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the NonSmooth Case

    Trust-region methods are a broad class of methods for continuous optimization that have found application in a variety of problems and contexts. In particular, they have been studied and applied to problems for which derivatives are unavailable. The analysis of trust-region derivative-free methods has focused on global convergence: they have been proven to generate a sequence of iterates converging to stationarity independently of the starting point. Most of this analysis is carried out in the smooth case, and little is known about the complexity or global rate of these methods. In this paper, we start by analyzing the worst-case complexity of general trust-region derivative-free methods for smooth functions. For the nonsmooth case, we propose a smoothing approach, for which we prove global convergence and bound the worst-case complexity effort. For the special case of nonsmooth functions that result from the composition of smooth and nonsmooth/convex components, we show how to improve the existing results in the literature and make them applicable to the general methodology.
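    A minimal sketch of the smoothing idea (not the paper's exact scheme): replace each |t| by the standard smooth surrogate sqrt(t^2 + mu^2) - mu, whose error is at most mu, solve the smoothed problem derivative-free, and drive mu to zero. SciPy's Powell method is used below purely as a convenient derivative-free inner solver in place of a trust-region method; the objective is a toy example:

    import numpy as np
    from scipy.optimize import minimize

    def smoothed_abs(t, mu):
        # Smooth approximation of |t|; the error is at most mu.
        return np.sqrt(t * t + mu * mu) - mu

    def f(x, mu):
        # Nonsmooth objective |x1 - 1| + |x2 + 2|, with each |.| smoothed.
        return smoothed_abs(x[0] - 1.0, mu) + smoothed_abs(x[1] + 2.0, mu)

    x = np.zeros(2)
    for mu in [1.0, 0.1, 0.01, 0.001]:   # drive the smoothing parameter to 0
        res = minimize(f, x, args=(mu,), method="Powell")  # derivative-free
        x = res.x
    print(x)  # approaches the true minimizer (1, -2)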

    A dynamic gradient approach to Pareto optimization with nonsmooth convex objective functions

    In a general Hilbert framework, we consider continuous gradient-like dynamical systems for constrained multiobjective optimization involving nonsmooth convex objective functions. Our approach is in line with a previous work that considered the case of convex differentiable objective functions. Based on the Yosida regularization of the subdifferential operators involved in the system, we obtain the existence of strong global trajectories. We prove a descent property for each objective function and the convergence of trajectories to weak Pareto minima. This approach provides a dynamical endogenous weighting of the objective functions. Applications are given to cooperative games, inverse problems, and numerical multiobjective optimization.
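    As a rough discrete-time sketch of the two ingredients (a toy under stated assumptions, not the paper's continuous dynamic): the Yosida approximation of the subdifferential of |.| is the clip map (x - prox_{lam|.|}(x)) / lam, and for two objectives a common descent direction is the minimal-norm element of the convex hull of the two (regularized) gradients. The objectives f1(x) = ||x - a||_1 and f2(x) = ||x - b||_1 and the explicit Euler step are illustrative choices:

    import numpy as np

    def yosida_grad_abs(x, lam):
        # Yosida approximation of the subdifferential of |.| (componentwise):
        # (x - prox_{lam|.|}(x)) / lam = clip(x / lam, -1, 1).
        return np.clip(x / lam, -1.0, 1.0)

    def common_descent_direction(g1, g2):
        # Minimal-norm element of the convex hull of {g1, g2}.
        diff = g1 - g2
        denom = diff @ diff
        if denom == 0.0:
            return g1
        theta = np.clip(g2 @ (g2 - g1) / denom, 0.0, 1.0)
        return theta * g1 + (1.0 - theta) * g2

    a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
    x, lam, dt = np.array([5.0, -3.0]), 0.1, 0.1
    for _ in range(2000):                      # explicit Euler steps
        g1 = yosida_grad_abs(x - a, lam)
        g2 = yosida_grad_abs(x - b, lam)
        x = x - dt * common_descent_direction(g1, g2)
    print(x)  # settles in the (weak) Pareto set, roughly the box between a and b

    The direction vanishes exactly when 0 lies in the convex hull of the regularized gradients, i.e. at (approximately) Pareto-stationary points, which mirrors the descent property proven for the continuous trajectories.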