
Nonlinear optimization in Hilbert space using Sobolev gradients with applications

Abstract

The problem of finding roots or solutions of a nonlinear partial differential equation may be formulated as the problem of minimizing a sum of squared residuals. One then defines an evolution equation so that, in the asymptotic limit, a minimizer, and often a solution of the PDE, is obtained. The corresponding discretized nonlinear least squares problem is frequently encountered in numerical optimization, and a wide variety of methods exists for solving such problems. We review Newton's method from nonlinear optimization in both a discrete and a continuous setting and present results of a similar nature for the Levenberg-Marquardt method. We apply these results to the Ginzburg-Landau model of superconductivity.
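
As a minimal sketch of the discretized setting described above, the fragment below applies a Levenberg-Marquardt iteration to the sum-of-squared-residuals formulation of a simple model problem (the ODE u' = u^2 with u(0) = 1 standing in for a PDE). The grid size, damping schedule, and model equation are illustrative assumptions, not taken from the paper.

    import numpy as np

    # Assumed model problem: solve u'(t) = u(t)^2, u(0) = 1 on [0, 1/2]
    # by minimizing J(u) = (1/2) * ||F(u)||^2, where F(u) collects the
    # discretized residual and the boundary condition.
    n = 101                     # grid points (illustrative choice)
    h = 0.5 / (n - 1)
    t = np.linspace(0.0, 0.5, n)

    def residual(u):
        """Discrete residual: forward differences for u' - u^2, plus u(0) - 1."""
        r = np.empty(n)
        r[0] = u[0] - 1.0                         # boundary condition u(0) = 1
        r[1:] = (u[1:] - u[:-1]) / h - u[1:]**2   # interior residual u' - u^2
        return r

    def jacobian(u):
        """Jacobian of the residual with respect to the nodal values of u."""
        J = np.zeros((n, n))
        J[0, 0] = 1.0
        for i in range(1, n):
            J[i, i - 1] = -1.0 / h
            J[i, i] = 1.0 / h - 2.0 * u[i]
        return J

    u = np.ones(n)              # initial guess
    lam = 1e-2                  # Levenberg-Marquardt damping parameter
    for _ in range(50):
        r = residual(u)
        J = jacobian(u)
        # Damped normal equations: (J^T J + lam * I) step = -J^T r.
        step = np.linalg.solve(J.T @ J + lam * np.eye(n), -J.T @ r)
        u_new = u + step
        if np.linalg.norm(residual(u_new)) < np.linalg.norm(r):
            u, lam = u_new, lam / 2.0   # accept step, reduce damping
        else:
            lam *= 10.0                 # reject step, increase damping

    # Exact solution of u' = u^2, u(0) = 1 is u(t) = 1 / (1 - t).
    print("max error:", np.max(np.abs(u - 1.0 / (1.0 - t))))

As the damping parameter lam tends to zero the update reduces to the Gauss-Newton step, which is the approximation to Newton's method underlying the comparison the abstract alludes to.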
