22 research outputs found

    A short survey on Kantorovich-like theorems for Newton's method

    We survey influential quantitative results on the convergence of the Newton iterator towards simple roots of continuously differentiable maps defined over Banach spaces. We present a general statement of Kantorovich's theorem, with a concise proof from scratch, addressed to a wide audience. From it, we quickly recover known results, and we gather historical notes together with pointers to recent articles.
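    For orientation, here is a minimal sketch (not taken from the survey) of the Newton iteration whose convergence such Kantorovich-like theorems quantify, applied to a toy two-dimensional system; the test function, Jacobian, starting point, and tolerance are all illustrative assumptions.

    ```python
    import numpy as np

    def newton(f, jac, x0, tol=1e-10, max_iter=50):
        """Newton iteration x_{k+1} = x_k - J(x_k)^{-1} f(x_k) for a simple root."""
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            step = np.linalg.solve(jac(x), f(x))  # solve J(x_k) s = f(x_k)
            x = x - step
            if np.linalg.norm(step) < tol:        # stop when the step is tiny
                return x, k + 1
        return x, max_iter

    # Toy system: intersect the circle x^2 + y^2 = 4 with the parabola y = x^2.
    f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]**2])
    jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [-2.0 * v[0], 1.0]])
    root, iterations = newton(f, jac, x0=[1.0, 1.0])
    ```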

    Difference equations and iterative processes

    Difference equations and iterative processes.

    Lie series for celestial mechanics, accelerators, satellite stabilization and optimization

    Lie series applications to celestial mechanics, accelerators, satellite orbits, and optimization.

    Iterative Linear Algebra for Parameter Estimation

    The principal goal of this thesis is the development and analysis of efficient numerical methods for large-scale nonlinear parameter estimation problems. These problems are highly relevant in all sciences that predict the future from large data sets of the past by fitting and then extrapolating a mathematical model; this thesis is concerned with the fitting part. The challenges lie in the treatment of the nonlinearities and the sheer size of the data and the unknowns. The state of the art for the numerical solution of parameter estimation problems is the Gauss-Newton method, which solves a sequence of linearized subproblems. One contribution of this thesis is a thorough analysis of the problem class on the basis of covariant and contravariant k-theory. Based on this analysis, a new stopping criterion for the iterative solution of the inner linearized subproblems is devised. The analysis reveals that the inner subproblems can be solved to only low accuracy without dramatically impeding the speed of convergence of the outer iteration. In addition, I prove that this new stopping criterion is a quantitative measure of how accurately the subproblems need to be solved in order to produce inexact Gauss-Newton sequences that converge to a statistically stable estimate, provided that at least one exists. This local approach thus yields an inexact Gauss-Newton method that requires far fewer inner iterations for computing the inexact Gauss-Newton step than the classical exact Gauss-Newton method, which computes the step with a factorization algorithm and effectively performs all of the inner iterations, a cost that becomes prohibitive when the number of parameters to be estimated is large. Furthermore, we generalize the ideas of this local inexact Gauss-Newton approach and introduce a damped inexact Gauss-Newton method based on Potschka's Backward Step Control theory for global Newton-type methods. We evaluate the efficiency of the new approach on two examples: a parameter identification problem for a nonlinear elliptic partial differential equation, and a real-world parameter estimation on a large-scale bundle adjustment problem. Both examples are ill-conditioned, so a suitable regularization is applied in each case. Our experimental results show that the new inexact Gauss-Newton approach requires less than 3% of the inner iterations for computing the inexact Gauss-Newton step in order to converge to a statistically stable estimate.
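    As a rough illustration of the structure described above (not the thesis's actual algorithm, stopping criterion, or data), the sketch below shows an inexact Gauss-Newton iteration in which each linearized least-squares subproblem is solved only approximately by a truncated iterative solver; the loose LSQR tolerances stand in for a quantitative inner stopping criterion, and the model, data, and function names are hypothetical.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    def inexact_gauss_newton(residual, jacobian, x0, inner_tol=1e-2,
                             outer_tol=1e-8, max_outer=50):
        """Inexact Gauss-Newton: each linearized subproblem
        min_dx ||J(x_k) dx + r(x_k)|| is solved only roughly with LSQR."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_outer):
            r = residual(x)
            J = jacobian(x)
            # Truncated inner solve: atol/btol act as a loose stopping
            # criterion for the inner iteration.
            dx = lsqr(J, -r, atol=inner_tol, btol=inner_tol)[0]
            x = x + dx
            if np.linalg.norm(dx) < outer_tol:
                break
        return x

    # Toy example (illustrative): fit y ≈ a * exp(b * t) to noisy data.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 30)
    y = 2.0 * np.exp(1.5 * t) + 0.01 * rng.standard_normal(t.size)
    residual = lambda p: p[0] * np.exp(p[1] * t) - y
    jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                          p[0] * t * np.exp(p[1] * t)])
    p_est = inexact_gauss_newton(residual, jacobian, x0=[1.0, 1.0])
    ```

    The point the abstract makes is that the inner tolerance can be kept loose, so only a small fraction of the inner iterations is needed per outer step without destroying convergence of the outer iteration.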

    Estimation and variational methods for gradient algorithm generation.

    Thesis (M.S.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1977. Microfiche copy available in Archives and Engineering. Bibliography: leaves 110-113.

    On studies in the field of space flight and guidance theory, progress report no. 4, 20 Dec. 1962 - 18 Jul. 1963

    Trajectories, orbital calculations, and adaptive guidance.