
    Duality-based Higher-order Non-smooth Optimization on Manifolds

    We propose a method for solving non-smooth optimization problems on manifolds. In order to obtain superlinear convergence, we apply a Riemannian semi-smooth Newton method to a non-smooth non-linear primal-dual optimality system based on a recent extension of Fenchel duality theory to Riemannian manifolds. We also propose an inexact version of the Riemannian semi-smooth Newton method and prove conditions for local linear and superlinear convergence. Numerical experiments on ℓ2-TV-like problems confirm superlinear convergence on manifolds with positive and negative curvature.
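
    As a minimal illustration of the semi-smooth Newton principle this abstract builds on, the sketch below runs the iteration in the flat Euclidean setting on an invented piecewise-smooth system; the function, the generalized-Jacobian element and the test problem are all illustrative, and none of the paper's Riemannian, duality or inexactness machinery is reproduced.

        import numpy as np

        def semismooth_newton(F, gen_jac, x0, tol=1e-10, max_iter=50):
            """Semi-smooth Newton: x+ = x - G(x)^{-1} F(x), where G(x) is
            any element of the generalized (Clarke) Jacobian of F at x."""
            x = np.asarray(x0, dtype=float)
            for k in range(max_iter):
                Fx = F(x)
                if np.linalg.norm(Fx) < tol:
                    break
                x = x - np.linalg.solve(gen_jac(x), Fx)
            return x, k

        # Toy non-smooth system: F(x) = |x| + 2x - 1 componentwise,
        # with unique root x = 1/3.
        F = lambda x: np.abs(x) + 2.0 * x - 1.0
        # A valid Clarke Jacobian element: diag(sign(x) + 2), taking sign(0) := 1.
        gen_jac = lambda x: np.diag(np.where(x >= 0, 1.0, -1.0) + 2.0)

        x_star, iters = semismooth_newton(F, gen_jac, np.array([5.0, -3.0]))
        print(x_star, iters)  # ~[1/3, 1/3] after a handful of steps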

    Huber approximation for the non-linear ℓ1 problem

    The smooth Huber approximation to the non-linear ℓ1 problem was proposed by Tishler and Zang (1982), and further developed in Yang (1995). In the present paper, we use the ideas of Gould (1989) to give a new algorithm with rate of convergence results for the smooth Huber approximation. Results of computational tests are reported.
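
    For reference, the sketch below spells out the Huber function itself: the standard smooth approximation to |t| that is minimized in place of the ℓ1 norm of the residuals. It is a generic illustration; the threshold name gamma and the toy residual function are ours, not the paper's notation or algorithm.

        import numpy as np

        def huber(t, gamma):
            """Smooth Huber approximation to |t|: quadratic for |t| <= gamma,
            linear outside; it underestimates |t| by at most gamma/2."""
            t = np.asarray(t, dtype=float)
            return np.where(np.abs(t) <= gamma,
                            t**2 / (2.0 * gamma),
                            np.abs(t) - gamma / 2.0)

        def smoothed_l1_objective(f, x, gamma):
            """Huber-smoothed version of the non-linear l1 objective
            sum_i |f_i(x)|."""
            return np.sum(huber(f(x), gamma))

        # Toy residual map f(x) = (x0^2 - 1, x0*x1 - 2), evaluated at (2, 0.5).
        f = lambda x: np.array([x[0]**2 - 1.0, x[0] * x[1] - 2.0])
        print(smoothed_l1_objective(f, np.array([2.0, 0.5]), gamma=0.1))  # 3.9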

    Local convergence of the Levenberg-Marquardt method under Hölder metric subregularity

    We describe and analyse Levenberg-Marquardt methods for solving systems of nonlinear equations. More specifically, we propose an adaptive formula for the Levenberg-Marquardt parameter and analyse the local convergence of the method under Hölder metric subregularity of the function defining the equation and Hölder continuity of its gradient mapping. Further, we analyse the local convergence of the method under the additional assumption that the Łojasiewicz gradient inequality holds. We finally report encouraging numerical results confirming the theoretical findings for the problem of computing moiety conserved steady states in biochemical reaction networks. This problem can be cast as finding a solution of a system of nonlinear equations, where the associated mapping satisfies the Łojasiewicz gradient inequality assumption.
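
    A bare-bones sketch of a Levenberg-Marquardt iteration for F(x) = 0 follows. The adaptive choice μ_k = ‖F(x_k)‖^δ is a common rule from this literature used here as a stand-in; it is not the paper's own adaptive formula, and the toy system is invented for illustration.

        import numpy as np

        def levenberg_marquardt(F, J, x0, delta=1.0, tol=1e-10, max_iter=100):
            """LM iteration for F(x) = 0 with the adaptive parameter
            mu_k = ||F(x_k)||**delta (a stand-in choice)."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                Fx, Jx = F(x), J(x)
                norm_F = np.linalg.norm(Fx)
                if norm_F < tol:
                    break
                mu = norm_F**delta
                # Regularized normal equations: (J^T J + mu I) d = -J^T F.
                d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ Fx)
                x = x + d
            return x

        # Over-determined toy system with zero-residual solution (1, 1).
        F = lambda x: np.array([x[0] - 1.0, x[1] - 1.0,
                                (x[0] - 1.0) * (x[1] - 1.0)])
        J = lambda x: np.array([[1.0, 0.0], [0.0, 1.0],
                                [x[1] - 1.0, x[0] - 1.0]])
        print(levenberg_marquardt(F, J, np.array([3.0, -2.0])))  # ~[1., 1.]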

    Parallel inexact Newton-Krylov and quasi-Newton solvers for nonlinear elasticity

    In this work, we address the implementation and performance of inexact Newton-Krylov and quasi-Newton algorithms, more specifically the BFGS method, for the solution of the nonlinear elasticity equations, and compare them to a standard Newton-Krylov method. This is done through a systematic analysis of the performance of the solvers with respect to the problem size, the magnitude of the data and the number of processors, in both almost incompressible and incompressible mechanics. We consider three test cases: Cook's membrane (static, almost incompressible), a twist test (static, incompressible) and a cardiac model (complex material, time-dependent, almost incompressible). Our results suggest that quasi-Newton methods should be preferred for compressible mechanics, whereas inexact Newton-Krylov methods should be preferred for incompressible problems. We show that these claims are also backed up by the convergence analysis of the methods. In any case, all methods present adequate performance, and provide a significant speed-up over the standard Newton-Krylov method, with a CPU time reduction exceeding 50% in the best cases.
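
    The sketch below shows the inexact Newton-Krylov pattern the comparison revolves around: each Newton step solves the Jacobian system only approximately with a Krylov method, to a relative tolerance (the forcing term) eta. It uses SciPy's GMRES (the rtol keyword assumes SciPy >= 1.12) on an invented matrix-free toy problem; it is not the authors' elasticity solver, and a quasi-Newton baseline of the kind they benchmark is available off the shelf as scipy.optimize.minimize(method="BFGS").

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def inexact_newton_krylov(F, J_matvec, x0, eta=1e-2, tol=1e-8,
                                  max_iter=50):
            """Inexact Newton: solve J(x) d = -F(x) only to relative
            accuracy eta per step, using matrix-free GMRES."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                Fx = F(x)
                if np.linalg.norm(Fx) < tol:
                    break
                A = LinearOperator((x.size, x.size),
                                   matvec=lambda v: J_matvec(x, v))
                d, _ = gmres(A, -Fx, rtol=eta)
                x = x + d
            return x

        # Toy problem: F(x) = A x + 0.1 x^3 - b with an SPD matrix A,
        # and the exact directional derivative as a matrix-free matvec.
        rng = np.random.default_rng(0)
        M = rng.standard_normal((20, 20))
        A_mat = M @ M.T + 20.0 * np.eye(20)
        b = rng.standard_normal(20)
        F = lambda x: A_mat @ x + 0.1 * x**3 - b
        J_matvec = lambda x, v: A_mat @ v + 0.3 * x**2 * v
        x = inexact_newton_krylov(F, J_matvec, np.zeros(20))
        print(np.linalg.norm(F(x)))  # ~1e-9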

    A curvilinear search using tridiagonal secant updates for unconstrained optimization

    The idea of doing a curvilinear search along the Levenberg-Marquardt path s(μ) = -(H + μI)⁻¹g has always been appealing, but the cost of solving a linear system for each trial value of the parameter μ has discouraged its implementation. In this paper, an algorithm for searching along a path which includes s(μ) is studied. The algorithm uses a special inexpensive QTcQᵀ-to-QT₊Qᵀ Hessian update which trivializes the linear algebra required to compute s(μ). This update is based on earlier work of Dennis-Marwil and Martinez on least-change secant updates of matrix factors. The new algorithm is shown to be locally and q-superlinearly convergent to stationary points, and to be globally q-superlinearly convergent for quasi-convex functions. Computational tests are given that show the new algorithm to be robust and efficient.
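
    The following sketch makes the path idea concrete: factor H once, and every trial point s(μ) = -(H + μI)⁻¹g on the path then costs only a diagonal solve. The paper gets this cheapness by maintaining a tridiagonal QTQᵀ factorization through secant updates; the toy below substitutes a one-off eigendecomposition, and the Armijo-style acceptance test and all names are illustrative.

        import numpy as np

        def curvilinear_search(f, g, H, x, mu0=1e-3, grow=4.0, c=1e-4,
                               max_tries=30):
            """Search along s(mu) = -(H + mu I)^{-1} g, increasing mu until an
            Armijo-type sufficient-decrease test holds. With H = Q diag(lam) Q^T
            factored once, each trial costs O(n^2). Assumes H is positive
            definite (otherwise mu must grow until lam + mu > 0)."""
            lam, Q = np.linalg.eigh(H)
            gq = Q.T @ g
            fx = f(x)
            mu = mu0
            for _ in range(max_tries):
                s = -Q @ (gq / (lam + mu))        # s(mu) = -(H + mu I)^{-1} g
                if f(x + s) <= fx + c * (g @ s):  # sufficient decrease
                    return x + s
                mu *= grow                        # shorter, more cautious step
            return x

        # Quadratic test: f(x) = 0.5 x^T H x - b^T x, minimizer solves H x = b.
        H = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([1.0, 1.0])
        f = lambda x: 0.5 * x @ H @ x - b @ x
        x0 = np.array([5.0, -5.0])
        print(curvilinear_search(f, H @ x0 - b, H, x0))  # ~[0.2, 0.4]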

    Computational Experiments with Systems of Nonlinear Equations


    Scaling rank-one updating formula and its application in unconstrained optimization

    This thesis deals with algorithms used to solve unconstrained optimization problems. We analyse the properties of a scaling symmetric rank-one (SSR1) update, prove the convergence of the matrices generated by SSR1 to the true Hessian matrix, and show that algorithm SSR1 possesses the quadratic termination property with inexact line search. A new algorithm (OCSSR1) is presented, in which the scaling parameter in SSR1 is chosen automatically by satisfying Davidon's criterion for an optimally conditioned Hessian estimate. Numerical tests show that the new method compares favourably with BFGS. Using the OCSSR1 update, we propose a hybrid QN algorithm which does not need to store any matrix. Numerical results show that it is a very promising method for solving large-scale optimization problems. In addition, some popular techniques in unconstrained optimization are also discussed, for example the trust region step, the descent direction with supermemory, and the detection of large residuals in nonlinear least squares problems. The thesis consists of two parts. The first part gives a brief survey of unconstrained optimization. It contains four chapters, and introduces basic results on unconstrained optimization, some popular methods and their properties based on quadratic approximations to the objective function, some methods which are suitable for solving large-scale optimization problems, and some methods for solving nonlinear least squares problems. The second part presents the new research results, and contains five chapters. In Chapter 5, the scaling rank-one updating formula is analysed and studied. Chapters 6, 7 and 8 discuss the applications to the trust region method, large-scale optimization problems and nonlinear least squares. A final chapter summarizes the problems used in numerical testing.
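
    A sketch of the scaled symmetric rank-one update the thesis is built around: scale the current approximation B by γ, then apply the SR1 correction so that the secant equation B₊s = y holds. The thesis's contribution, choosing γ automatically by Davidon's optimal-conditioning criterion, is not reproduced here; γ is a plain input and the skip test is a standard SR1 safeguard.

        import numpy as np

        def ssr1_update(B, s, y, gamma=1.0, skip_tol=1e-8):
            """Scaled SR1 update: B+ = gamma*B + r r^T / (r^T s),
            with r = y - gamma*B s, so that B+ s = y exactly.
            Skips the update when the denominator is dangerously small."""
            r = y - gamma * (B @ s)
            denom = r @ s
            if abs(denom) <= skip_tol * np.linalg.norm(r) * np.linalg.norm(s):
                return gamma * B
            return gamma * B + np.outer(r, r) / denom

        # Secant check on random data: after the update, B+ s equals y.
        rng = np.random.default_rng(1)
        B = np.eye(3)
        s, y = rng.standard_normal(3), rng.standard_normal(3)
        print(np.allclose(ssr1_update(B, s, y, gamma=0.9) @ s, y))  # True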