106 research outputs found

    A Descent Algorithm for Large-Scale Linearly Constrained Convex Nonsmooth Minimization

    A descent algorithm is given for solving a large convex program obtained by augmenting the objective of a linear program with a (possibly nondifferentiable) convex function depending on relatively few variables. Such problems often arise in practice as deterministic equivalents of stochastic programming problems. The algorithm's search-direction-finding subproblems can be solved efficiently by existing software for large-scale smooth optimization. The algorithm is both readily implementable and globally convergent.
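
    For orientation, a sketch of the problem class described, in assumed notation (the matrix T and the split into "many" linear variables and "few" nonsmooth ones are illustrative, not taken from the paper):

        \min_{x \in \mathbb{R}^n} \; c^\top x + h(Tx)
        \quad \text{subject to} \quad Ax = b, \quad x \ge 0,

    where h: \mathbb{R}^m \to \mathbb{R} is convex and possibly nondifferentiable, and m is much smaller than n. In the stochastic programming setting mentioned above, h(Tx) would typically be an expected recourse cost.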

    A Bundle Method for Minimizing a Sum of Convex Functions with Smooth Weights

    We give a bundle method for minimizing a (possibly nondifferentiable and nonconvex) function h(x) = sum_{i=1}^m p_i(x) f_i(x) over a closed convex set in R^n, where the p_i are nonnegative and smooth and the f_i are finite-valued convex. Such functions arise in certain stochastic programming problems and scenario analysis. The method finds search directions via quadratic programming, using a polyhedral model of h that involves current linearizations of the p_i and polyhedral models of the f_i based on their accumulated subgradients. We show that the method is globally convergent to stationary points of h. The method exploits the structure of h and hence seems more promising than general-purpose bundle methods for nonconvex minimization.
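
    A minimal sketch of this objective structure (the particular p_i and f_i, and the product-rule choice of generalized subgradient g = sum_i [f_i(x) grad p_i(x) + p_i(x) s_i] with s_i a subgradient of f_i at x, are illustrative assumptions, not the paper's model construction):

        import numpy as np

        # Toy instance of h(x) = sum_i p_i(x) * f_i(x):
        #   p_i(x) = exp(-||x - c_i||^2)   (smooth, nonnegative weights)
        #   f_i(x) = |a_i . x - b_i|       (finite-valued convex pieces)
        rng = np.random.default_rng(0)
        m, n = 3, 4
        C = rng.normal(size=(m, n))   # weight centers c_i (illustrative)
        A = rng.normal(size=(m, n))   # data a_i (illustrative)
        b = rng.normal(size=m)

        def h_and_subgrad(x):
            """Return h(x) and one generalized subgradient via the product rule."""
            val, g = 0.0, np.zeros_like(x)
            for i in range(m):
                p = np.exp(-np.sum((x - C[i]) ** 2))   # p_i(x)
                grad_p = -2.0 * (x - C[i]) * p         # gradient of p_i at x
                r = A[i] @ x - b[i]
                f = abs(r)                             # f_i(x)
                s = np.sign(r) * A[i]                  # s_i in the subdifferential of f_i at x
                val += p * f
                g += f * grad_p + p * s
            return val, g

        print(h_and_subgrad(np.zeros(n)))

    A bundle method would collect such value/subgradient pairs over the iterations to build the polyhedral models referred to above.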

    Randomized Search Directions in Descent Methods for Minimizing Certain Quasi-Differentiable Functions

    Several descent methods have recently been proposed for minimizing smooth compositions of max-type functions. The methods generate many search directions at each iteration. It is shown here that a random choice of only two search directions at each iteration suffices to retain convergence to inf-stationary points with probability 1. Use of this technique may significantly decrease the effort involved in quadratic programming and line searches, thus allowing efficient implementations of the methods. This paper is a contribution to research on non-smooth optimization currently underway in the System and Decision Sciences Program.
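
    A schematic toy illustration of the randomized idea (the max-of-quadratics objective, the use of near-active gradients as the candidate pool, the min-norm combination of the two sampled gradients, and the Armijo step are all simplifying assumptions for illustration, not the methods analyzed in the paper):

        import numpy as np

        rng = np.random.default_rng(1)

        # f(x) = max_j phi_j(x) with smooth pieces phi_j(x) = 0.5 * ||x - z_j||^2.
        Z = np.array([[2.0, 0.0], [-1.0, 1.0], [0.0, -2.0]])

        def pieces(x):
            return 0.5 * np.sum((x - Z) ** 2, axis=1), x - Z   # values, gradients

        def f(x):
            return pieces(x)[0].max()

        x, eps = np.array([3.0, 3.0]), 0.5
        for k in range(100):
            vals, grads = pieces(x)
            active = np.flatnonzero(vals >= vals.max() - eps)
            # Randomly keep at most two of the candidate (near-active) gradients.
            pick = rng.choice(active, size=min(2, active.size), replace=False)
            g1, g2 = grads[pick[0]], grads[pick[-1]]
            # Search direction: minus the min-norm point of conv{g1, g2}.
            diff = g1 - g2
            t = 0.0 if diff @ diff == 0 else np.clip((diff @ g1) / (diff @ diff), 0.0, 1.0)
            d = -((1.0 - t) * g1 + t * g2)
            # Armijo backtracking line search; skip the step if no decrease is found.
            step, ok = 1.0, False
            for _ in range(30):
                if f(x + step * d) <= f(x) - 1e-4 * step * (d @ d):
                    ok = True
                    break
                step *= 0.5
            if ok:
                x = x + step * d
        print(x, f(x))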

    A Globally Convergent Quadratic Approximation for Inequality Constrained Minimax Problems

    In this paper we present an implementable algorithm for solving optimization problems of the type: minimize f_0(x) subject to f(x) < 0, where x ∈ R^N and f_0 and f are real-valued functions that are the pointwise maxima of two families of continuously differentiable functions.
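
    In assumed notation (the component functions and their counts are illustrative), "pointwise maxima of two families of continuously differentiable functions" means

        f_0(x) = \max_{1 \le j \le p} \phi_j(x), \qquad f(x) = \max_{1 \le l \le q} \psi_l(x),

    so both the objective and the constraint function are typically nonsmooth at points where more than one component attains the maximum, even though every \phi_j and \psi_l is smooth.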

    NOA1: A Fortran Package of Nondifferentiable Optimization Algorithms Methodological and User's Guide

    This paper is one of a series of 11 Working Papers presenting the software for interactive decision support and software tools for developing decision support systems. These products constitute the outcome of the contracted study agreement between the System and Decision Sciences Program at IIASA and several Polish scientific institutions. The theoretical part of these results is presented in the IIASA Working Paper WP-88-071, "Theory, Software and Testing Examples in Decision Support Systems", which contains the theoretical and methodological backgrounds of the software systems developed within the project. This paper constitutes a methodological guide and user's manual for NOA1, a package of Fortran subroutines designed to locate the minimum of a locally Lipschitz continuous function subject to locally Lipschitzian inequality and equality constraints, general linear constraints, and simple upper and lower bounds. The user must provide a Fortran subroutine for evaluating the (possibly nondifferentiable and nonconvex) functions being minimized and their subgradients. The package implements several descent methods and is intended for solving small-scale nondifferentiable minimization problems on a professional microcomputer.

    Convergence of the steepest descent method for minimizing quasiconvex functions

    To minimize a continuously differentiable quasiconvex function f: ℝ^n → ℝ, Armijo's steepest descent method generates a sequence x_{k+1} = x_k − t_k ∇f(x_k), where t_k > 0. We establish strong convergence properties of this classic method: either the iterates converge to a point x̄ such that ∇f(x̄) = 0; or arg min f = ∅, ||x_k|| → ∞, and f(x_k) ↓ inf f. We also discuss extensions to other line searches.
    http://deepblue.lib.umich.edu/bitstream/2027.42/45245/1/10957_2005_Article_BF02192649.pd
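
    For reference, a minimal sketch of the Armijo-rule steepest descent iteration discussed (the test function, constants, and stopping rule are illustrative choices, not the paper's):

        import numpy as np

        def armijo_steepest_descent(f, grad, x0, beta=0.5, sigma=1e-4, tol=1e-8, max_iter=500):
            """Iterate x_{k+1} = x_k - t_k * grad f(x_k), with t_k chosen by Armijo backtracking."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                g = grad(x)
                if np.linalg.norm(g) <= tol:
                    break
                t = 1.0
                # Halve t until the Armijo sufficient-decrease condition holds.
                while f(x - t * g) > f(x) - sigma * t * (g @ g) and t > 1e-16:
                    t *= beta
                x = x - t * g
            return x

        # Illustrative smooth quasiconvex function: f(x) = log(1 + ||x||^2) is a
        # nondecreasing transform of the convex function ||x||^2, hence quasiconvex.
        f = lambda x: np.log1p(x @ x)
        grad = lambda x: 2.0 * x / (1.0 + x @ x)
        print(armijo_steepest_descent(f, grad, [3.0, -4.0]))   # tends to the minimizer 0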

    Parsimonious Kernel Fisher Discrimination

    Recent results in optimization transfer are applied to give a new algorithm for kernel Fisher Discriminant Analysis that uses a non-smooth penalty on the coefficients to provide a parsimonious solution. The algorithm is simple, easily programmed, and is shown to perform as well as or better than a number of leading machine learning algorithms on a substantial benchmark. It is then applied to a set of extreme small-sample-size problems in virtual screening, where it is found to be less accurate than a currently leading approach but is still comparable in a number of cases.

    Non-smooth optimization methods for computation of the conditional value-at-risk and portfolio optimization

    We examine the numerical performance of various methods for calculating the Conditional Value-at-Risk (CVaR) and for portfolio optimization with respect to this risk measure. We concentrate on the method proposed by Rockafellar and Uryasev (Rockafellar, R.T. and Uryasev, S., 2000, Optimization of conditional value-at-risk. Journal of Risk, 2, 21-41), which converts the problem to one of convex optimization. We compare linear programming techniques with the discrete gradient method of non-smooth optimization and establish the superiority of the latter. We show that non-smooth optimization can be used efficiently for large portfolio optimization, and we also examine parallel execution of this method on computer clusters.
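
    For concreteness, a minimal sketch of the Rockafellar-Uryasev representation the paper builds on, CVaR_alpha(L) = min_t { t + E[(L - t)_+] / (1 - alpha) }, evaluated for equally weighted scenario losses (the sample data and the choice of alpha are illustrative):

        import numpy as np

        def cvar_ru(losses, alpha=0.95):
            """CVaR via min_t t + mean((losses - t)_+) / (1 - alpha).
            The objective is convex and piecewise linear in t, so its minimum
            is attained at one of the scenario values; check them all."""
            L = np.sort(np.asarray(losses, dtype=float))
            F = L + np.maximum(L[None, :] - L[:, None], 0.0).mean(axis=1) / (1.0 - alpha)
            return F.min()

        rng = np.random.default_rng(0)
        losses = rng.normal(size=2000)        # illustrative scenario losses
        print(cvar_ru(losses, alpha=0.95))    # about 2.06 in the large-sample standard-normal case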

    Convergence Analysis of Some Methods for Minimizing a Nonsmooth Convex Function

    In this paper, we analyze a class of methods for minimizing a proper lower semicontinuous extended-valued convex function f. Instead of the original objective function f, we employ a convex approximation f_{k+1} at the k-th iteration. Some global convergence rate estimates are obtained. We illustrate our approach by proposing (i) a new family of proximal point algorithms which possesses a global convergence rate estimate, expressed in terms of the proximal parameters, even if the iteration points are calculated approximately, and (ii) a variant proximal bundle method. Applications to stochastic programs are discussed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/45249/1/10957_2004_Article_417694.pd
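
    For reference, a minimal sketch of the classical (exact) proximal point iteration, not the paper's new family (the l1 test function, its closed-form prox, and the step sizes are illustrative assumptions):

        import numpy as np

        def prox_l1(v, lam):
            """Prox of lam*||.||_1: argmin_x lam*||x||_1 + 0.5*||x - v||^2 (soft thresholding)."""
            return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

        def proximal_point(x0, lams, prox):
            """Proximal point iteration x_{k+1} = argmin_x f(x) + ||x - x_k||^2 / (2*lam_k)."""
            x = np.asarray(x0, dtype=float)
            for lam in lams:
                x = prox(x, lam)
            return x

        # Minimize the nonsmooth convex f(x) = ||x||_1; the iterates reach the minimizer 0.
        print(proximal_point([3.0, -1.5, 0.2], lams=[1.0] * 10, prox=prox_l1))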
