54 research outputs found

    An efficient sieving based secant method for sparse optimization problems with least-squares constraints

    In this paper, we propose an efficient sieving based secant method to address the computational challenges of solving sparse optimization problems with least-squares constraints. A level-set method was introduced in [X. Li, D.F. Sun, and K.-C. Toh, SIAM J. Optim., 28 (2018), pp. 1842--1866] that solves these problems by using the bisection method to find a root of a univariate nonsmooth equation $\varphi(\lambda) = \varrho$ for some $\varrho > 0$, where $\varphi(\cdot)$ is the value function obtained from a solution of the corresponding regularized least-squares optimization problem. When the objective function in the constrained problem is a polyhedral gauge function, we prove that (i) for any positive integer $k$, $\varphi(\cdot)$ is piecewise $C^k$ in an open interval containing the solution $\lambda^*$ of the equation $\varphi(\lambda) = \varrho$; (ii) the Clarke Jacobian of $\varphi(\cdot)$ is always positive. These results allow us to establish the essential ingredients of the fast convergence rates of the secant method. Moreover, an adaptive sieving technique is incorporated into the secant method to effectively reduce the dimension of the level-set subproblems for computing the value of $\varphi(\cdot)$. The high efficiency of the proposed algorithm is demonstrated by extensive numerical results.
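For readers who want the shape of the outer loop, here is a minimal sketch of a secant iteration on the level-set equation $\varphi(\lambda) = \varrho$. The callable `phi` is a stand-in: in the paper each evaluation requires solving a regularized least-squares subproblem (optionally shrunk by adaptive sieving), and the names `phi`, `varrho`, and the starting points below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: secant iteration for the univariate level-set equation
# phi(lam) = varrho. In the paper, each phi evaluation means solving a
# regularized least-squares subproblem; here phi is a stand-in callable.

def secant_root(phi, varrho, lam0, lam1, tol=1e-10, max_iter=50):
    """Find lam with phi(lam) = varrho via the secant method."""
    f0, f1 = phi(lam0) - varrho, phi(lam1) - varrho
    for _ in range(max_iter):
        if abs(f1) <= tol:
            return lam1
        # Secant step: root of the line through the last two iterates.
        lam2 = lam1 - f1 * (lam1 - lam0) / (f1 - f0)
        lam0, f0 = lam1, f1
        lam1, f1 = lam2, phi(lam2) - varrho
    return lam1

# Toy usage with a monotone stand-in value function (root at lam = 3).
phi = lambda lam: 1.0 / (1.0 + lam)
print(secant_root(phi, varrho=0.25, lam0=0.5, lam1=5.0))
```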

    Least Change Secant Update Methods for Nonlinear Complementarity Problem

    In this work, we introduce a family of Least Change Secant Update Methods for solving Nonlinear Complementarity Problems based on their reformulation as a nonsmooth system using the one-parametric class of nonlinear complementarity functions introduced by Kanzow and Kleinmichel. We prove local and superlinear convergence for the algorithms. Some numerical experiments show a good performance of these algorithms.
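A minimal sketch of the reformulation-plus-quasi-Newton idea. It assumes the commonly cited form of the Kanzow--Kleinmichel family, $\varphi_\theta(a,b) = \sqrt{(a-b)^2 + \theta ab} - a - b$ with $\theta \in (0,4)$ (recovering the Fischer--Burmeister function at $\theta = 2$), and uses a plain Broyden update as a representative least change secant update; the paper's specific update family is not reproduced here.

```python
import numpy as np

# Hedged sketch: an NCP recast as a nonsmooth system Phi(x) = 0 and solved
# by a Broyden-type (least change secant) iteration. Assumed family:
#   phi_theta(a, b) = sqrt((a - b)**2 + theta*a*b) - a - b,  theta in (0, 4).

def kk_phi(a, b, theta=2.0):
    return np.sqrt((a - b) ** 2 + theta * a * b) - a - b

def fd_jacobian(G, x, h=1e-6):
    """Forward-difference Jacobian, used only to initialize B."""
    g0 = G(x)
    J = np.empty((x.size, x.size))
    for j in range(x.size):
        e = np.zeros(x.size)
        e[j] = h
        J[:, j] = (G(x + e) - g0) / h
    return J

def solve_ncp(F, x, theta=2.0, tol=1e-10, max_iter=100):
    """Solve x >= 0, F(x) >= 0, <x, F(x)> = 0 via Phi(x) = 0."""
    Phi = lambda z: kk_phi(z, F(z), theta)
    B = fd_jacobian(Phi, x)            # initial Jacobian approximation
    g = Phi(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        s = np.linalg.solve(B, -g)     # quasi-Newton step
        x = x + s
        g_new = Phi(x)
        # Broyden's least change secant update: B+ satisfies B+ s = y
        # while minimizing ||B+ - B||_F.
        B += np.outer(g_new - g - B @ s, s) / (s @ s)
        g = g_new
    return x

# Toy NCP with F(x) = x - 1: the solution is x = 1 componentwise.
print(solve_ncp(lambda z: z - 1.0, np.array([0.5, 2.0])))
```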

    An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming

    Powerful interior-point method (IPM) based commercial solvers, such as Gurobi and Mosek, have been hugely successful in solving large-scale linear programming (LP) problems. The high efficiency of these solvers depends critically on the sparsity of the problem data and on advanced matrix factorization techniques. For a large-scale LP problem with a data matrix $A$ that is dense (possibly structured), or whose corresponding normal matrix $AA^T$ has a dense Cholesky factor (even with re-ordering), these solvers may require excessive computational cost and/or extremely heavy memory usage in each interior-point iteration. Unfortunately, the natural remedy, i.e., the use of iterative-method based IPM solvers, although it avoids the explicit computation of the coefficient matrix and its factorization, is not practically viable due to the inherent extreme ill-conditioning of the large-scale normal equation arising in each interior-point iteration. To provide a better alternative for solving large-scale LPs with dense data or with an expensive-to-factorize normal equation, we propose a semismooth Newton based inexact proximal augmented Lagrangian (Snipal) method. Different from classical IPMs, in each iteration of Snipal, iterative methods can efficiently be used to solve simpler yet better conditioned semismooth Newton linear systems. Moreover, Snipal not only enjoys fast asymptotic superlinear convergence but is also proven to enjoy a finite termination property. Numerical comparisons with Gurobi have demonstrated the encouraging potential of Snipal for handling large-scale LP problems where the constraint matrix $A$ has a dense representation or $AA^T$ has a dense factorization even with an appropriate re-ordering.
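The point about iterative solvers can be made concrete: a matrix-free conjugate gradient routine needs only matvecs with $A$ and $A^T$, never $AA^T$ or its factorization. The sketch below shows this for a regularized system of the general shape a semismooth Newton step inside an augmented Lagrangian method produces; the operator, `sigma`, `eps`, and the 0/1 diagonal are illustrative assumptions, not Snipal's actual subproblem data.

```python
import numpy as np

# Hedged sketch: matrix-free conjugate gradient for an SPD system
#     (sigma * A D A^T + eps * I) y = r,
# with D a 0/1 diagonal (as would come from a generalized Jacobian of a
# projection). Only matvecs with A and A.T are used; AA^T is never formed.

def cg_solve(matvec, r, tol=1e-8, max_iter=1000):
    y = np.zeros_like(r)
    res = r - matvec(y)
    p = res.copy()
    rs = res @ res
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        y += alpha * p
        res -= alpha * Ap
        rs_new = res @ res
        if np.sqrt(rs_new) <= tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return y

# Illustrative usage on a dense data matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 1000))
d = (rng.standard_normal(1000) > 0).astype(float)   # 0/1 diagonal of D
matvec = lambda y: A @ (d * (A.T @ y)) + 1e-2 * y   # sigma = 1, eps = 1e-2
y = cg_solve(matvec, rng.standard_normal(200))
```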

    On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence

    We introduce a framework for quasi-Newton forward--backward splitting algorithms (proximal quasi-Newton methods) with a metric induced by diagonal $\pm$ rank-$r$ symmetric positive definite matrices. This special type of metric allows for a highly efficient evaluation of the proximal mapping. The key to this efficiency is a general proximal calculus in the new metric. By using duality, formulas are derived that relate the proximal mapping in a rank-$r$ modified metric to the original metric. We also describe efficient implementations of the proximity calculation for a large class of functions; the implementations exploit the piecewise linear nature of the dual problem. Then, we apply these results to the acceleration of composite convex minimization problems, which leads to elegant quasi-Newton methods for which we prove convergence. The algorithm is tested on several numerical examples and compared to a comprehensive list of alternatives in the literature. Our quasi-Newton splitting algorithm with the prescribed metric compares favorably against the state of the art. The algorithm has extensive applications, including signal processing, sparse recovery, machine learning, and classification, to name a few.
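The diagonal part of this proximal calculus is easy to show concretely. For $g = \lambda\|\cdot\|_1$ and metric $D = \mathrm{diag}(d)$ with $d > 0$, the proximal mapping separates into per-coordinate soft-thresholding at $\lambda/d_i$; the rank-$r$ correction $D \pm QQ^T$ treated in the paper additionally requires a small dual root-finding problem, which this sketch omits.

```python
import numpy as np

# Hedged sketch: the diagonal-metric building block. For g = lam*||.||_1,
#     prox_g^D(x) = argmin_z  lam*||z||_1 + 0.5*(z - x)^T D (z - x)
# separates per coordinate into soft-thresholding at lam / d_i.

def prox_l1_diag(x, d, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam / d, 0.0)

# One variable-metric forward-backward step,
#     x+ = prox_g^D(x - D^{-1} grad_f(x)).
def fb_step(x, grad_f, d, lam):
    return prox_l1_diag(x - grad_f(x) / d, d, lam)
```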

    A q-VARIANT OF STEFFENSEN'S METHOD OF FOURTH-ORDER CONVERGENCE

    Starting from the q-Taylor formula, we suggest a new q-variant of Steffensen's method of fourth-order convergence for solving nonlinear equations.
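For context, here is a sketch of the classical (second-order) Steffensen iteration that the q-variant generalizes; the fourth-order q-scheme built from the q-Taylor formula is not reproduced here.

```python
# Hedged sketch: classical Steffensen iteration. It is derivative-free:
# f'(x) is replaced by the divided difference (f(x + f(x)) - f(x)) / f(x).

def steffensen(f, x, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:
            return x
        denom = f(x + fx) - fx
        if denom == 0.0:
            raise ZeroDivisionError("flat divided difference")
        x = x - fx * fx / denom
    return x

print(steffensen(lambda t: t * t - 2.0, 1.5))  # ~1.41421356, i.e. sqrt(2)
```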

    A Unified Convergence Analysis for Some Two-Point Type Methods for Nonsmooth Operators

    The aim of this paper is the approximation of nonlinear equations using iterative methods. We present a unified convergence analysis for some two-point type methods. This way we compare specializations of our method using not necessarily the same convergence criteria. We consider both semilocal and local analysis. In the first one, the hypotheses are imposed on the initial guess, and in the second, on the solution. The results can be applied to smooth and nonsmooth operators. Research of the first and third authors was supported in part by Programa de Apoyo a la investigación de la fundación Séneca-Agencia de Ciencia y Tecnología de la Región de Murcia 20928/PI/18 and by MTM2015-64382-P. Research of the fourth and fifth authors was supported by Ministerio de Economía y Competitividad under grant MTM2014-52016-C2-1P. This research received no external funding.
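As a concrete instance of a two-point method in the scalar case, here is the secant-type iteration $x_{n+1} = x_n - f(x_n)/f[x_{n-1}, x_n]$ built on a first-order divided difference. Being derivative-free, it applies to nonsmooth $f$ as well, which is the point of the paper's setting (stated there for operators, shown here only for scalars as an illustration).

```python
# Hedged sketch: a generic two-point iteration of secant type,
#     x_{n+1} = x_n - f(x_n) / f[x_{n-1}, x_n],
# where f[a, b] = (f(b) - f(a)) / (b - a) is a divided difference.

def two_point(f, x_prev, x, tol=1e-12, max_iter=200):
    f_prev, fx = f(x_prev), f(x)
    for _ in range(max_iter):
        if abs(fx) <= tol:
            return x
        dd = (fx - f_prev) / (x - x_prev)   # divided difference f[x_prev, x]
        x_prev, f_prev = x, fx
        x = x - fx / dd
        fx = f(x)
    return x

# Nonsmooth example: f(t) = |t| + t/2 - 1, root at t = 2/3.
print(two_point(lambda t: abs(t) + 0.5 * t - 1.0, 0.0, 1.0))
```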

    A class of Steffensen type methods with optimal order of convergence

    In this paper, a family of Steffensen type methods of fourth-order convergence for solving nonlinear smooth equations is suggested. In the proposed methods, a linear combination of divided differences is used to get a better approximation to the derivative of the given function. Each derivative-free member of the family requires only three evaluations of the given function per iteration; therefore, this class of methods has efficiency index equal to 1.587. Kung and Traub conjectured that the order of convergence of any multipoint method without memory cannot exceed the bound $2^{d-1}$, where $d$ is the number of functional evaluations per step. The new class of methods attains this optimal bound for the case $d = 3$. Numerical examples are given to show the performance of the presented methods on smooth and nonsmooth equations, and to compare them with other methods. © 2011 Elsevier Inc. All rights reserved. This research was supported by Ministerio de Ciencia y Tecnología MTM2010-18539. Cordero Barbero, A.; Torregrosa Sánchez, J.R. (2011). A class of Steffensen type methods with optimal order of convergence. Applied Mathematics and Computation, 217(19), 7653-7659. https://doi.org/10.1016/j.amc.2011.02.067
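As a quick check of the optimality claim, the abstract's numbers are consistent for $d = 3$ function evaluations per step:

```latex
% Kung--Traub bound and efficiency index for d = 3 evaluations per step:
% the optimal order is p = 2^{d-1} = 4, and the efficiency index is
% E = p^{1/d} = 4^{1/3}, which is approximately 1.587.
\[
  p \;=\; 2^{\,d-1}\Big|_{d=3} \;=\; 4,
  \qquad
  E \;=\; p^{1/d} \;=\; 4^{1/3} \;\approx\; 1.587 .
\]
```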