131 research outputs found

    Use of the "minimum norm" search direction in a nonmonotone version of the Gauss-Newton method

    An optimal subgradient algorithm for large-scale convex optimization in simple domains

    This paper shows that the optimal subgradient algorithm OSGA, proposed in \cite{NeuO}, can be used for solving structured large-scale convex constrained optimization problems. Only first-order information is required, and the optimal complexity bounds for both smooth and nonsmooth problems are attained. More specifically, we consider two classes of problems: (i) a convex objective with a simple closed convex domain, where the orthogonal projection onto this feasible domain is efficiently available; (ii) a convex objective with a simple convex functional constraint. If we equip OSGA with an appropriate prox-function, the OSGA subproblem can be solved either in closed form or by a simple iterative scheme, which is especially important for large-scale problems. We report numerical results for some applications to show the efficiency of the proposed scheme. A software package implementing OSGA for the above domains is available.
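
    As a concrete illustration of case (i), the sketch below shows the kind of closed-form projection the abstract assumes: Euclidean projection onto a box, used inside a plain projected subgradient step. This is a simplified stand-in rather than OSGA itself, whose subproblem also involves a prox-function; the function names and the choice of a box domain are illustrative assumptions.

    ```python
    import numpy as np

    def project_box(x, lo, hi):
        """Euclidean projection onto the box {z : lo <= z <= hi};
        available in closed form by componentwise clipping."""
        return np.clip(x, lo, hi)

    def projected_subgradient_step(x, g, step, lo, hi):
        """One projected subgradient step for a box-constrained convex
        problem, where g is any subgradient of the objective at x."""
        return project_box(x - step * g, lo, hi)
    ```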

    Planar Conjugate Gradient Algorithm for Large-Scale Unconstrained Optimization, Part 1: Theory

    Abstract. In this paper, we describe an application of the planar conjugate gradient method introduced in Part 1 (Ref. 1) and aimed at solving indefinite nonsingular sets of linear equations. We prove that it can be used fruitfully within optimization frameworks; in particular, we present a globally convergent truncated Newton scheme, which uses the above planar method for solving the Newton equation. Finally, our approach is tested over several problems from the CUTE collection (Ref. 2). Key Words. Large-scale unconstrained optimization, truncated Newton methods
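
    For context, the sketch below shows the truncated Newton idea the abstract builds on: the Newton equation is solved only approximately by inner conjugate gradient iterations, stopping early on a residual test or when nonpositive curvature is detected. It uses the standard CG safeguard rather than the planar conjugate gradient method of the paper, and all names are illustrative assumptions.

    ```python
    import numpy as np

    def truncated_newton_direction(grad, hess_vec, tol=1e-6, max_iter=50):
        """Approximately solve the Newton equation H d = -grad by linear CG.
        hess_vec(v) must return the Hessian-vector product H @ v."""
        d = np.zeros_like(grad)
        r = -grad.copy()                   # residual of H d = -grad at d = 0
        p = r.copy()
        rs_old = r @ r
        for _ in range(max_iter):
            Hp = hess_vec(p)
            curv = p @ Hp
            if curv <= 1e-12 * (p @ p):    # nonpositive curvature: truncate
                return d if d.any() else -grad  # fall back to steepest descent
            alpha = rs_old / curv
            d = d + alpha * p
            r = r - alpha * Hp
            rs_new = r @ r
            if np.sqrt(rs_new) <= tol:     # Newton equation solved well enough
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return d
    ```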

    Some Unconstrained Optimization Methods

    Although it is a very old subject, unconstrained optimization remains an active area of research for many scientists. Today, its results are applied across many branches of science and in practice generally. Here, we present line search techniques; further, in this chapter we consider some unconstrained optimization methods, presenting both the methods themselves and some contemporary results in this area.
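
    As one concrete example of the line search techniques mentioned above, here is a minimal backtracking line search enforcing the Armijo sufficient-decrease condition; the parameter values and names are illustrative defaults, not taken from the chapter.

    ```python
    import numpy as np

    def armijo_backtracking(f, x, d, g, alpha0=1.0, beta=0.5, c=1e-4):
        """Shrink the step until f(x + alpha*d) <= f(x) + c*alpha*(g.d),
        where d is a descent direction and g is the gradient at x."""
        fx = f(x)
        slope = c * float(g @ d)  # negative whenever d is a descent direction
        alpha = alpha0
        while f(x + alpha * d) > fx + alpha * slope:
            alpha *= beta
        return alpha

    # Usage: one steepest descent step on f(x) = ||x||^2 from x0.
    f = lambda x: float(x @ x)
    x0 = np.array([1.0, -2.0])
    g0 = 2 * x0
    step = armijo_backtracking(f, x0, -g0, g0)
    ```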

    A quasi-Newton proximal splitting method

    A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise-linear nature of the dual problem. The second part of the paper applies this result to the acceleration of convex minimization problems, leading to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications, including signal processing, sparse recovery, machine learning, and classification.
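
    The simplest instance of the scaled-norm prox calculus referenced above is a diagonal metric: the proximity operator of the l1 norm in a diagonally weighted norm is still closed-form soft-thresholding, which yields a one-line variable-metric forward-backward step. The sketch below covers that special case only; the paper's general scaled norms require the piecewise-linear dual machinery, and all names here are illustrative assumptions.

    ```python
    import numpy as np

    def prox_l1_diag(x, lam, d):
        """Prox of lam*||.||_1 in the scaled norm (1/2)(z-x)^T diag(d) (z-x):
        componentwise soft-thresholding with per-coordinate thresholds lam/d_i."""
        t = lam / d
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def diag_prox_gradient_step(x, grad, lam, d):
        """One variable-metric forward-backward step with metric diag(d):
        a gradient step scaled by 1/d, followed by the matching scaled prox."""
        return prox_l1_diag(x - grad / d, lam, d)
    ```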