9 research outputs found

    Using the Nonlinear L-curve and its Dual

    The L-curve has long been used for linear inverse problems. We generalize the L-curve to the nonlinear finite dimensional setting and introduce a closely related, most useful dual curve. The analytic and geometric properties of these curves are derived, together with a discussion of their use in algorithms.
    Key words: Nonlinear least squares, optimization, L-curve, regularization, Gauss-Newton method
    1 Introduction. Inverse problems appear in many different engineering applications. An inverse problem consists of a direct problem and some unknown function(s) or parameter(s). In many cases the solution does not depend continuously on the unknown quantities, and the problem is ill-posed. A typical ill-posed problem is the task of determining these unknowns from measured, inexact data. Given such an ill-posed problem, it is a good idea to reformulate the original problem into a well-posed one whose solution is not too large and has a small residual.
    1.1 An example from..
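    As context for the generalization, here is a minimal sketch of the classical linear L-curve: each value of the regularization parameter $\mu$ gives a Tikhonov solution, and the (residual norm, solution norm) pairs trace a curve whose corner suggests a good parameter. The test problem and parameter grid are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 20
# Ill-conditioned test matrix: geometrically decaying singular values.
A = rng.standard_normal((m, n)) @ np.diag(np.logspace(0, -6, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-3 * rng.standard_normal(m)

residual_norms, solution_norms = [], []
for mu in np.logspace(-8, 1, 30):
    # Tikhonov solution of min_x ||Ax - b||_2^2 + mu^2 ||x||_2^2
    x_mu = np.linalg.solve(A.T @ A + mu**2 * np.eye(n), A.T @ b)
    residual_norms.append(np.linalg.norm(A @ x_mu - b))
    solution_norms.append(np.linalg.norm(x_mu))
# Plotted on a log-log scale, these pairs trace the L-curve; its corner
# marks the trade-off between residual size and solution size.
```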

    Algorithms for using the nonlinear L-curve

    When using Tikhonov regularization for finite dimensional ill-posed problems there is a problem dependent choice of the regularization parameter. We present general tools for determining a proper regularization parameter, based on the nonlinear L-curve and the associated (dual) a-curve. Given approximations of the solution of the Tikhonov problem, we define upper and lower piecewise linear approximations of the L- and a-curves, called shadow curves. These shadow curves are thoroughly investigated. Finally, we present ways to update the shadow curves and their use in identifying good regularized solutions.
    AMS(MOS) subject classification: 62J05, 65U05
    Key words: Nonlinear least squares, optimization, L-curve, regularization, Gauss-Newton method
    Contents: 1 Introduction; 2 Local results; 3 Shadow curves; 3.1 Basic ideas; 3.2 The polygon shadow curves; 3.3 The connection betwe..
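    A rough sketch of the bracketing idea behind such upper and lower piecewise linear approximations, under our own simplifying assumptions: a convex curve lies below the polygon of chords through sampled points and above the maximum of tangents at those points. The stand-in curve g is hypothetical; the paper's construction works from approximate Tikhonov solutions instead.

```python
import numpy as np

g = lambda t: 1.0 / t          # convex, decreasing stand-in for the L-curve
dg = lambda t: -1.0 / t**2     # its derivative, for tangent lines
ts = np.array([0.5, 1.0, 2.0, 4.0])   # sampled abscissas

def chord_upper(t):
    """Piecewise-linear interpolant through the samples: above a convex g."""
    return np.interp(t, ts, g(ts))

def tangent_lower(t):
    """Maximum over the tangents at the samples: below a convex g."""
    return max(g(s) + dg(s) * (t - s) for s in ts)

t = 1.5
print(tangent_lower(t), g(t), chord_upper(t))  # lower <= exact <= upper
```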

    Regularization Methods for Nonlinear Least Squares Problems. Part I: Exactly Rank-deficient Problems

    In two papers, we develop theory and methods for regularization of nonlinear least squares problems minimizing the Euclidean norm of $f(x)$, $f(x) \in \mathbb{R}^m$, $x \in \mathbb{R}^n$. In this first paper, we consider the case where the Jacobian is exactly rank-deficient. By the constant rank theorem, the function can be written $f(x) = h(z(x))$, where $z \in \mathbb{R}^r$ and $r < \min(m, n)$, and is exactly rank-deficient almost everywhere. This composed function simplifies derivations and reveals that such a nonlinear least squares problem can be formulated as a nonlinear minimum norm problem. We propose two regularization methods to solve the nonlinear minimum norm problem: a Gauss-Newton minimum norm method and a Tikhonov regularization method. It is proved that both methods converge to a minimum norm solution. Local and asymptotic convergence rates are thoroughly investigated, and it is shown that the convergence depends on the curvatures in both the function space and the parameter space. Numerical result..
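    A minimal sketch of a Gauss-Newton iteration taking minimum norm steps via the Jacobian pseudoinverse, in the spirit of the Gauss-Newton minimum norm method named above. The rank-deficient toy function is our own; starting at the origin keeps the iterates in the (here constant) row space of the Jacobian, so they converge to the minimum norm solution.

```python
import numpy as np

def f(x):
    # f depends on x only through z = x[0] + x[1], so the Jacobian
    # has rank 1 everywhere: an exactly rank-deficient problem.
    z = x[0] + x[1]
    return np.array([np.exp(z) - np.e, z - 1.0])

def jac(x):
    ez = np.exp(x[0] + x[1])
    return np.array([[ez, ez], [1.0, 1.0]])

x = np.zeros(2)
for _ in range(20):
    # Minimum norm Gauss-Newton step via the pseudoinverse.
    x = x - np.linalg.pinv(jac(x)) @ f(x)
print(x)  # approx [0.5, 0.5], the minimum norm point on the set z = 1
```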

    Regularization Methods for Nonlinear Least Squares Problems. Part II: Almost Rank-Deficiency

    A nonlinear least squares problem is almost rank-deficient at a local minimum if there is a large gap in the singular values of the Jacobian and at least one singular value is small. We analyze the almost rank-deficient problem, giving the relevant KKT conditions, and propose two methods based on truncation and Tikhonov regularization. Our approach is based on techniques from linear algebra and nonlinear optimization. This enables us to develop a local and asymptotic convergence theory, based on second order information, for Gauss-Newton like methods applied to the nonlinear truncated and Tikhonov regularized problems with known regularization parameter. Finally, we test the methods on artificial problems where we are able to choose the singular values and the nonlinearity of the problem, making it possible to show the different features of the problem and the methods. The method based on Tikhonov regularization is more generally applicable to ill-posed problems having no gap in the singul..
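    A minimal sketch of the truncation alternative for an almost rank-deficient Jacobian: pick the numerical rank at the singular value gap and discard the small singular values when forming the step. The matrix and the cut-off are illustrative assumptions.

```python
import numpy as np

J = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.5, 0.0],
              [0.0, 0.0, 1e-8]])   # large gap: {1, 0.5} vs {1e-8}
r = 2                              # numerical rank chosen at the gap

U, s, Vt = np.linalg.svd(J, full_matrices=False)
# Truncated pseudoinverse: only the r singular values above the gap.
J_trunc_pinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:r].T

residual = np.array([1.0, 1.0, 1.0])
step = J_trunc_pinv @ residual     # ignores the nearly null direction
print(step)                        # -> [1., 2., 0.]
```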

    First Order Error Analysis of a Linear System of Equations by use of Error Propagation Matrices connected to the Pseudo Inverse Solution

    The singular value decomposition (SVD) of a matrix $A = U\Sigma V^T$ is a useful tool for analyzing the effect of errors in $A$ on the pseudoinverse solution of $Ax = b$. Let $E_A$ be the error propagation matrix such that the first order error propagation result $\delta x = E_A\, \delta A(:)$ is satisfied. Then the SVD of $E_A$ is directly available from the SVD of $A$. It is shown how to calculate $E_A$ in different cases: well-, over- and underdetermined as well as rank-deficient. Illustrative small examples are analyzed, as is the connection between the singular values of $E_A$ and condition numbers. A few steps are also taken towards the analysis of a regularized solution of $Ax = b$.
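    The paper obtains $E_A$ analytically from the SVD of $A$; as a hedged numerical illustration, one can also build $E_A$ column by column with finite differences and inspect its singular values. The problem size and data below are our own.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = np.linalg.pinv(A) @ b            # pseudoinverse solution of Ax = b

eps = 1e-7
E = np.empty((n, m * n))
for k in range(m * n):
    dA = np.zeros(m * n)
    dA[k] = eps
    dA = dA.reshape((m, n), order="F")   # column-stacked, like dA(:)
    x_pert = np.linalg.pinv(A + dA) @ b
    E[:, k] = (x_pert - x) / eps         # k-th column of E_A, numerically

# The singular values of E_A measure the worst-case first order
# sensitivity of x to perturbations in A.
print(np.linalg.svd(E, compute_uv=False))
```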

    Regularization Tools for Training Large-Scale Neural Networks

    We present regularization tools for training small- and medium-scale as well as large-scale artificial feedforward neural networks. The determination of the weights leads to very ill-conditioned nonlinear least squares problems, and regularization is often suggested to gain control over network complexity, obtain a small variance error, and get well-behaved optimization problems. The algorithms proposed solve explicitly a sequence of Tikhonov regularized nonlinear least squares problems. For small- and medium-size problems the Gauss-Newton method is applied to the regularized problem, which is much better conditioned than the original problem and exhibits far better convergence properties than a Levenberg-Marquardt method. Numerical results presented also confirm that the proposed implementations are more reliable and efficient than the Levenberg-Marquardt method. For large-scale problems, methods using new special purpose automatic differentiation combined with conjugate gradient methods are proposed. The alg..
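    A toy illustration of the approach's core loop: a tiny feedforward net trained by applying Gauss-Newton to a sequence of Tikhonov regularized problems with decreasing parameter. The network size, data, parameter schedule, and the finite-difference Jacobian are all our own simplifications, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.linspace(-1.0, 1.0, 20)
y = np.sin(np.pi * X)

def net(w, x):
    # Tiny 1-3-1 feedforward net; w packs W1 (3), b1 (3), W2 (3), b2 (1).
    W1, b1, W2, b2 = w[:3], w[3:6], w[6:9], w[9]
    return np.tanh(np.outer(x, W1) + b1) @ W2 + b2

def residual(w):
    return net(w, X) - y

def jacobian(w, eps=1e-6):
    # Forward-difference Jacobian of the residual (a shortcut for brevity).
    r0 = residual(w)
    return np.column_stack([(residual(w + eps * e) - r0) / eps
                            for e in np.eye(w.size)])

w = 0.1 * rng.standard_normal(10)
for mu in [1.0, 0.3, 0.1, 0.03]:      # decreasing regularization sequence
    for _ in range(20):
        r, J = residual(w), jacobian(w)
        # Gauss-Newton step on min_w ||r(w)||^2 + mu^2 ||w||^2
        w -= np.linalg.solve(J.T @ J + mu**2 * np.eye(w.size),
                             J.T @ r + mu**2 * w)
print(np.linalg.norm(residual(w)))    # training residual after the sweep
```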

    New Perturbation Results For Regularized Tikhonov Inverses And Pseudo-Inverses.

    Consider the Tikhonov regularized linear least squares problem $\min_x \|Jx - b\|_2^2 + \mu^2 \|L(x - c)\|_2^2$, where $J \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $L \in \mathbb{R}^{p \times n}$. The interesting part of the solution to this problem (attained by putting $c = 0$) is $J_L^\# b$, where $J_L^\# = (J^T J + \mu^2 L^T L)^{-1} J^T$. As $\mu \to 0$ the solution of the regularized problem tends to the solution, $J_L^+ b$, of $\min_x \|L(x - c)\|_2$ subject to the constraint that $\|Jx - b\|_2$ is minimized. The main result of this paper is perturbation identities for $J_L^+$. However, in order to attain this result, perturbation identities for $J_L^\#$ are derived first, and then the fact that $J_L^\# b \to J_L^+ b$ is used. The perturbation identities for $J_L^+$ and $J_L^\#$ are useful for ill-posed, ill-conditioned and rank-deficient problems.
    Key words. Tikhonov regularization, GSVD, perturbation theory, rank deficiency, pseudoinverses, filter factors, numerical rank
    AMS subject classifications. 65K..
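    A numerical illustration of the limit used above: applying $J_L^\# = (J^T J + \mu^2 L^T L)^{-1} J^T$ to $b$ for shrinking $\mu$ settles toward $J_L^+ b$. The matrices are illustrative; $J$ is made rank-deficient so that the limit is nontrivial.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 4
J = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))  # rank 3 < n
L = np.diag([1.0, 2.0, 3.0, 4.0])    # nonsingular weighting matrix
b = rng.standard_normal(m)

for mu in [1e-1, 1e-2, 1e-4]:
    # x_mu = J#_L b (with c = 0); the printed vectors settle as mu shrinks.
    x_mu = np.linalg.solve(J.T @ J + mu**2 * (L.T @ L), J.T @ b)
    print(mu, x_mu)
# The limit is J+_L b: among all least squares solutions of Jx = b,
# the one minimizing ||L x||_2.
```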

    Algorithms For Constrained And Weighted Nonlinear Least Squares

    A hybrid algorithm consisting of a Gauss-Newton method and a second order method for solving constrained and weighted nonlinear least squares problems is developed, analyzed and tested. One of the advantages of the algorithm is that arbitrarily large weights can be handled and that the weights in the merit function do not become unnecessarily large when the iterates diverge from a saddle point. The local convergence properties of the Gauss-Newton method are thoroughly analyzed, and simple ways of estimating and calculating its local convergence rate are given. Under the assumption that the constrained and weighted linear least squares subproblems arising in the Gauss-Newton method are not too ill-conditioned, global convergence towards a first order KKT point is proved.
    Key words. nonlinear least squares, optimization, parameter estimation, weights
    AMS subject classifications. 65K, 49D
    1. Introduction. Assume that $f : \mathbb{R}^n \to \mathbb{R}^m$ is a twice continuously diffe..
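    A small sketch of the weighted Gauss-Newton idea: a constraint enters as a very heavily weighted residual, and each step solves a weighted linear least squares subproblem. The toy problem and the fixed weight are our own assumptions; the paper's hybrid algorithm additionally uses second order information and handles the weights more carefully.

```python
import numpy as np

def f(x):
    # Two fit residuals plus one "constraint" residual x0 + x1 - 1 = 0.
    return np.array([x[0] - 2.0, x[1] - 2.0, x[0] + x[1] - 1.0])

def jac(x):
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0]])

W = np.diag([1.0, 1.0, 1e6])   # huge weight enforces the constraint
x = np.zeros(2)
for _ in range(5):
    J, r = jac(x), f(x)
    # Gauss-Newton step for min_x sum_i w_i f_i(x)^2 (weighted normal eqs).
    x = x - np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
print(x)  # approx [0.5, 0.5]: the least squares fit subject to x0 + x1 = 1
```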

    Computational methods of linear algebra
