79 research outputs found

    A short survey on Kantorovich-like theorems for Newton's method

    No full text
    We survey influential quantitative results on the convergence of the Newton iteration towards simple roots of continuously differentiable maps defined over Banach spaces. We present a general statement of Kantorovich's theorem, with a concise proof from scratch, aimed at a wide audience. From it, we quickly recover known results, and gather historical notes together with pointers to recent articles.
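The iteration these theorems quantify can be sketched in a few lines; the map, derivative, and starting point below are illustrative assumptions, not taken from the survey:

```python
# Newton iteration for a simple root of a scalar map; f, df, and x0 are
# illustrative choices, not from the survey.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)   # the Newton step x <- x - f(x)/f'(x)
    return x

# Example: the simple root sqrt(2) of f(x) = x^2 - 2, starting at x0 = 1.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Kantorovich-type theorems give checkable conditions on f and x0 under which this sequence is well defined and converges quadratically to a root.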

    Trajectory computational techniques emphasizing existence, uniqueness, and construction of solutions to boundary problems for ordinary differential equations Final report

    Get PDF

    Global Convergence of Damped Newton's Method for Nonsmooth Equations, via the Path Search

    Get PDF
    A natural damping of Newton's method for nonsmooth equations is presented. This damping, via the path search instead of the traditional line search, enlarges the domain of convergence of Newton's method and therefore is said to be globally convergent. Convergence behavior is like that of line-search-damped Newton's method for smooth equations, including Q-quadratic convergence rates under appropriate conditions. Applications of the path search include damping Robinson-Newton's method for nonsmooth normal equations corresponding to nonlinear complementarity problems and variational inequalities, hence damping both Wilson's method (sequential quadratic programming) for nonlinear programming and Josephy-Newton's method for generalized equations. Computational examples from nonlinear programming are given.
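A minimal sketch of the smooth, line-search-damped variant that the path search generalizes (the test system, Jacobian, and acceptance constant are illustrative assumptions; the paper's path search for nonsmooth equations is not reproduced here):

```python
import numpy as np

# Damped Newton for a smooth system F(x) = 0: take the Newton direction,
# then backtrack until the residual norm decreases sufficiently.
def damped_newton(F, J, x0, tol=1e-10, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        nFx = np.linalg.norm(Fx)
        if nFx < tol:
            break
        d = np.linalg.solve(J(x), -Fx)    # Newton direction
        t = 1.0
        while np.linalg.norm(F(x + t * d)) > (1.0 - 0.5 * t) * nFx and t > 1e-12:
            t *= 0.5                      # damp: shrink the step
        x = x + t * d
    return x

# Example system: x^2 + y^2 = 4 and x*y = 1 (illustrative, not from the paper).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
sol = damped_newton(F, J, np.array([2.0, 0.0]))
```

Near a solution the full step t = 1 is accepted and the iteration reverts to plain Newton, which is how the quadratic local rate survives the damping.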

    A generalised multiple shooting method

    Get PDF

    Deflation for semismooth equations

    Full text link
    Variational inequalities can in general support distinct solutions. In this paper we study an algorithm for computing distinct solutions of a variational inequality, without varying the initial guess supplied to the solver. The central idea is the combination of a semismooth Newton method with a deflation operator that eliminates known solutions from consideration. Given one root of a semismooth residual, deflation constructs a new problem for which a semismooth Newton method will not converge to the known root, even from the same initial guess. This enables the discovery of other roots. We prove the effectiveness of the deflation technique under the same assumptions that guarantee locally superlinear convergence of a semismooth Newton method. We demonstrate its utility on various finite- and infinite-dimensional examples drawn from constrained optimization, game theory, economics and solid mechanics.
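The deflation idea can be sketched for a smooth scalar residual (the paper treats semismooth maps and variational inequalities, and its deflation operator typically includes a shift term; this unshifted scalar variant and the polynomial below are illustrative assumptions):

```python
# After a root r of f is found, Newton is re-run on the deflated residual
# g(x) = f(x) / |x - r|, which has no root at r, so the same initial
# guess can reach a different root.
def newton(f, x0, tol=1e-10, max_iter=100, h=1e-7):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = (f(x + h) - f(x - h)) / (2.0 * h)   # finite-difference slope
        x -= fx / dfx
    return x

f = lambda x: (x - 1.0) * (x + 2.0)       # roots at 1 and -2 (illustrative)
r1 = newton(f, 0.5)                       # finds the root x = 1
deflated = lambda x: f(x) / abs(x - r1)   # deflate the known root
r2 = newton(deflated, 0.5)                # same initial guess, finds x = -2
```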

    Difference equations and iterative processes

    Get PDF

    Generalized Descent Methods for Asymmetric Systems of Equations and Variational Inequalities

    Get PDF
    We consider generalizations of the steepest descent algorithm for solving asymmetric systems of equations. We first show that if the system is linear and is defined by a matrix M, then the method converges if M² is positive definite. We also establish easy-to-verify conditions on the matrix M that ensure that M² is positive definite, and develop a scaling procedure that extends the class of matrices that satisfy the convergence conditions. In addition, we establish a local convergence result for nonlinear systems defined by uniformly monotone maps, and discuss a class of general descent methods. Finally, we show that a variant of the Frank-Wolfe method will solve a certain class of variational inequality problems. All of the methods that we consider reduce to standard nonlinear programming algorithms for equivalent optimization problems when the Jacobian of the underlying problem map is symmetric. We interpret the convergence conditions for the generalized steepest descent algorithms as restricting the degree of asymmetry of the problem map.
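A minimal fixed-step sketch of descent on an asymmetric linear system, the simplest member of the family analysed (the matrix, right-hand side, and step size are illustrative assumptions; for this M, the product M·M is positive definite, matching the paper's convergence condition):

```python
import numpy as np

# Fixed-step descent x <- x - alpha * (M x - b) for the asymmetric
# linear system M x = b.
def descent(M, b, alpha=0.1, tol=1e-10, max_iter=10000):
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = M @ x - b                     # residual of the current iterate
        if np.linalg.norm(r) < tol:
            break
        x = x - alpha * r
    return x

M = np.array([[2.0, 1.0], [-1.0, 2.0]])   # asymmetric: M != M.T
b = np.array([3.0, 1.0])
x = descent(M, b)                          # exact solution is (1, 1)
```

When M is symmetric this is exactly steepest descent on the quadratic (1/2)xᵀMx - bᵀx, which is the sense in which the generalized methods reduce to standard optimization algorithms.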

    Estudio sobre convergencia y dinámica de los métodos de Newton, Stirling y alto orden

    Get PDF
    Since its origin, mathematics has served society by trying to answer the problems that arose. This is still true today: the development of mathematics is tied to the demands of other sciences that need to solve concrete, real situations. Most problems in science and engineering cannot be solved using linear equations, so nonlinear equations must be used to model them (Amat, 2008; see also Argyros and Magreñán, 2017, 2018), among others. The difficulty with nonlinear equations is that only in a few cases is it possible to find the solution exactly; in most cases, therefore, they must be solved by iterative methods. Starting from an initial point, an iterative method generates a sequence that may or may not converge to the solution.
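The closing point, that an iteration may or may not converge depending on the map, can be seen with plain fixed-point iteration (the maps below are illustrative assumptions, not taken from the thesis):

```python
import math

# Fixed-point iteration x <- g(x) from an initial point x0; whether the
# sequence converges depends on the map g.
def fixed_point(g, x0, n=60):
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# |cos'(x)| < 1 near the fixed point: the iterates converge to 0.739...
converges = fixed_point(math.cos, 1.0)

# |g'(x)| = 2 > 1 everywhere: the iterates diverge.
diverges = fixed_point(lambda x: 2.0 * x, 1.0)
```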

    Nonlinear Asymptotic Integration Algorithms for One-dimensional Autonomous Dissipative First-order Odes

    Get PDF
    Nonlinear asymptotic integrators are applied to one-dimensional, nonlinear, autonomous, dissipative, ordinary differential equations. These integrators, including a one-step explicit method, a one-step implicit method, and one- and two-step midpoint algorithms, are designed to follow the asymptotic behavior of a system approaching a steady state. The methods require that the differential equation be written in a particular asymptotic form. This is always possible for a one-dimensional equation with a globally asymptotic steady state. In this case, conditions are obtained to guarantee that the implicit algorithms are well defined. Further conditions are determined for the implicit methods to be contractive. These methods are all first order accurate, while under certain conditions the midpoint algorithms may also become second order accurate. The stability of each method is investigated and an estimate of the local error is provided.
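A sketch of a one-step explicit integrator of this asymptotic type, assuming the equation has been rewritten in the form y' = -lam(y)·(y - ys) with steady state ys (the report's exact algorithms and notation may differ):

```python
import math

# Freeze lam over the step and integrate exactly: the update is stable
# for any step size h and decays monotonically toward ys, mimicking the
# true asymptotic approach to the steady state.
def asymptotic_step(y, h, lam, ys):
    return ys + (y - ys) * math.exp(-lam(y) * h)

# Example: y' = -(1 + y^2) * y, with globally asymptotic steady state 0
# (illustrative, not from the report).
lam = lambda y: 1.0 + y * y
y, h = 2.0, 0.5
for _ in range(40):
    y = asymptotic_step(y, h, lam, ys=0.0)
```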