Newton's Method for Solving Inclusions Using Set-Valued Approximations
Results on stability of both local and global metric regularity under set-valued perturbations are presented. As an application, we study (super)linear convergence of a Newton-type iterative process for solving generalized equations. We investigate several iterative schemes, such as the inexact Newton method, the nonsmooth Newton method for semismooth functions, and the inexact proximal point algorithm. Moreover, we also cover a forward-backward splitting algorithm for finding a zero of the sum of two multivalued (not necessarily monotone) operators. Finally, a globalization of Newton's method is discussed.
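The forward-backward splitting scheme mentioned in the abstract alternates an explicit (forward) step on the single-valued operator with an implicit (backward) proximal step on the multivalued one. A minimal sketch for the special case where the two operators are the gradient of a least-squares term and the subdifferential of an l1 penalty; the matrix `M`, the weight `lam`, and all other names are illustrative choices, not from the paper:

```python
import numpy as np

# Forward-backward splitting for 0 in A(x) + B(x): here A = grad f with
# f(x) = 0.5 * ||M x - b||^2 (forward, explicit step) and B is the
# subdifferential of lam * ||x||_1 (backward, proximal step).
# M, b, lam and all other names are illustrative, not from the paper.

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the backward step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(M, b, lam, step, iters=500):
    x = np.zeros(M.shape[1])
    for _ in range(iters):
        grad = M.T @ (M @ x - b)                         # forward step
        x = soft_threshold(x - step * grad, step * lam)  # backward step
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = M @ x_true
step = 1.0 / np.linalg.norm(M, 2) ** 2   # step below 1 / Lipschitz constant
x = forward_backward(M, b, lam=1e-3, step=step)
```

The backward step is cheap here because the proximal operator of the l1 term is available in closed form (soft thresholding); in the general set-valued setting of the paper it is a resolvent of the multivalued operator.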
Inexact Newton-Type Optimization with Iterated Sensitivities
This paper presents and analyzes an Inexact Newton-type optimization method based on Iterated Sensitivities (INIS). A particular class of Nonlinear Programming (NLP) problems is considered, where a subset of the variables is defined by nonlinear equality constraints. The proposed algorithm allows any problem-specific approximation of the Jacobian of these constraints. Unlike other inexact Newton methods, the INIS-type optimization algorithm is shown to preserve the local convergence properties and the asymptotic contraction rate of the Newton-type scheme for the feasibility problem yielded by the same Jacobian approximation. The INIS approach results in a computational cost that can be made close to that of the standard inexact Newton implementation. In addition, an adjoint-free variant (AF-INIS) is presented which, under certain conditions, is considerably easier to implement than the adjoint-based scheme. The applicability of these results is demonstrated for dynamic optimization problems in particular, and the numerical performance of a specific open-source implementation is illustrated.
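The feasibility-problem iteration whose contraction rate INIS preserves is a Newton-type scheme with a fixed Jacobian approximation. A toy sketch of that underlying scheme; the system `G`, the matrix `J_approx`, and the starting point are hypothetical, and this is not the INIS algorithm itself:

```python
import numpy as np

# Newton-type iteration z_{k+1} = z_k - J_approx^{-1} G(z_k) for a
# feasibility problem G(z) = 0, using a fixed Jacobian approximation
# J_approx in place of the exact Jacobian. G, J_approx, and the starting
# point are hypothetical illustrations.

def G(z):
    return np.array([z[0]**2 + z[1] - 1.0,
                     z[0] - z[1]**2])

J_approx = np.array([[1.0, 1.0],    # crude constant approximation
                     [1.0, -1.0]])  # of the exact Jacobian dG/dz

z = np.array([0.5, 0.5])
for _ in range(50):
    z = z - np.linalg.solve(J_approx, G(z))
```

As long as the spectral radius of I - J_approx^{-1} G'(z*) stays below one, this iteration contracts linearly, and that contraction rate is what the INIS scheme is shown to inherit.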
Inexact Newton regularizations with uniformly convex stability terms: a unified convergence analysis
We present a unified convergence analysis of inexact Newton regularizations for nonlinear ill-posed problems in Banach spaces. These schemes consist of an outer (Newton) iteration and an inner iteration that provides the update of the current outer iterate. To this end, the nonlinear problem is linearized about the current iterate and the resulting linear system is approximately (inexactly) solved by an inner regularization method. In our analysis we rely only on generic assumptions on the inner methods, and we show that a variety of regularization techniques satisfy these assumptions; for instance, gradient-type and iterated-Tikhonov methods are covered. Not only is the technique of proof novel, but so are the results obtained, because uniformly convex penalty terms are shown, for the first time, to stabilize the inner scheme.
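The outer/inner structure described above can be sketched with a gradient-type (Landweber) inner method: linearize about the current iterate, run inner steps on the linearized system until the linear residual falls below a fixed fraction of its starting value, then update. The map `F` below is a well-posed toy stand-in for the paper's ill-posed Banach-space setting; `F`, the tolerance `mu`, and the step size are illustrative assumptions:

```python
import numpy as np

# Inexact Newton regularization sketch: outer Newton loop plus an inner
# gradient-type (Landweber) loop on the linearized system
#   F'(x_k) h = y - F(x_k),
# stopped once the linear residual drops below a fraction mu of its start.
# F is a well-posed toy stand-in; mu and the step size are assumptions.

def F(x):
    return np.array([x[0]**3 + x[1], x[0] + x[1]**3])

def F_prime(x):
    return np.array([[3 * x[0]**2, 1.0],
                     [1.0, 3 * x[1]**2]])

x_exact = np.array([0.8, -0.3])
y = F(x_exact)                  # exact data for a known solution
x = np.array([1.0, 0.0])        # starting guess
mu = 0.5                        # inner tolerance (relative residual)

for _ in range(20):             # outer Newton iterations
    A = F_prime(x)
    r = y - F(x)
    h = np.zeros(2)
    omega = 1.0 / np.linalg.norm(A, 2) ** 2   # Landweber step size
    while np.linalg.norm(r - A @ h) > mu * np.linalg.norm(r):
        h = h + omega * A.T @ (r - A @ h)     # inner gradient-type step
    x = x + h
```

In the regularization setting the inner tolerance and the outer stopping index must additionally be coupled to the noise level; that discrepancy-principle machinery is omitted from this sketch.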
Iterative Linear Algebra for Parameter Estimation
The principal goal of this thesis is the development and analysis of efficient numerical methods for large-scale nonlinear parameter estimation problems. These problems are of high relevance in all sciences that predict the future from big data sets of the past by fitting and then extrapolating a mathematical model. This thesis is concerned with the fitting part. The challenges lie in the treatment of the nonlinearities and the sheer size of the data and the unknowns. The state of the art for the numerical solution of parameter estimation problems is the Gauss-Newton method, which solves a sequence of linearized subproblems.
One of the contributions of this thesis is a thorough analysis of the problem class on the basis of covariant and contravariant k-theory. Based on this analysis, it is possible to devise a new stopping criterion for the iterative solution of the inner linearized subproblems. The analysis reveals that the inner subproblems can be solved to only low accuracy without dramatically impeding the speed of convergence of the outer iteration. In addition, I prove that this new stopping criterion is a quantitative measure of how accurately the subproblems need to be solved in order to produce inexact Gauss-Newton sequences that converge to a statistically stable estimate, provided that at least one exists. Thus, this new local approach yields an inexact Gauss-Newton method that requires far fewer inner iterations for computing the inexact Gauss-Newton step than the classical exact Gauss-Newton method, which computes the step with a factorization algorithm and thus effectively performs all inner iterations, a cost that is computationally prohibitive when the number of parameters to be estimated is large. Furthermore, we generalize the ideas of this local inexact Gauss-Newton approach and introduce a damped inexact Gauss-Newton method based on the backward step control theory of Potschka for globalized Newton-type methods.
We evaluate the efficiency of our new approach on two examples. The first is a parameter identification problem for a nonlinear elliptic partial differential equation, and the second is a real-world parameter estimation on a large-scale bundle adjustment problem. Both examples are ill-conditioned, so a suitable regularization is employed in each case. Our experimental results show that this new inexact Gauss-Newton approach requires less than 3% of the inner iterations for computing the inexact Gauss-Newton step in order to converge to a statistically stable estimate.
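The core mechanism of the thesis, solving each linearized Gauss-Newton subproblem only to low accuracy with an inner iterative method, can be sketched as follows. The exponential model, the plain conjugate-gradient inner solver, and the fixed relative tolerance of 0.1 are illustrative stand-ins; the thesis's actual statistically motivated stopping criterion is not reproduced here:

```python
import numpy as np

# Inexact Gauss-Newton sketch: each outer step solves the linearized
# least-squares subproblem min_h ||J h + r||^2 only approximately, by
# running CG on the normal equations J^T J h = -J^T r up to a loose
# relative tolerance. Model, data, and tolerance are illustrative.

def residual(p, t, y_obs):
    # model: y = p[1] * exp(-p[0] * t)
    return p[1] * np.exp(-p[0] * t) - y_obs

def jacobian(p, t):
    e = np.exp(-p[0] * t)
    return np.column_stack([-t * e * p[1], e])

def cg(A, b, rel_tol, max_iter=100):
    """Plain CG on A x = b, stopped at a loose relative residual."""
    x = np.zeros_like(b)
    b_norm = np.linalg.norm(b)
    if b_norm == 0.0:
        return x
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= rel_tol * b_norm:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

t = np.linspace(0.0, 2.0, 30)
p_true = np.array([1.3, 2.0])
y_obs = p_true[1] * np.exp(-p_true[0] * t)

p = np.array([1.0, 1.0])
for _ in range(30):
    r = residual(p, t, y_obs)
    J = jacobian(p, t)
    h = cg(J.T @ J, -J.T @ r, rel_tol=0.1)   # low-accuracy inner solve
    p = p + h
```

Because the subproblem is zero-residual at the solution, even these low-accuracy inner solves leave the fast local convergence of the outer Gauss-Newton iteration intact, which is the effect the thesis quantifies.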
Newton-MR: Inexact Newton Method With Minimum Residual Sub-problem Solver
We consider a variant of the inexact Newton method, called Newton-MR, in which the least-squares sub-problems are solved approximately using the minimum residual (MINRES) method. By construction, Newton-MR can be readily applied to unconstrained optimization of a class of non-convex problems known as invex, which subsumes convexity as a sub-class. For invex optimization, instead of the classical Lipschitz continuity assumptions on the gradient and Hessian, Newton-MR's global convergence can be guaranteed under a weaker notion of joint regularity of the Hessian and gradient. We also obtain Newton-MR's problem-independent local convergence to the set of minima. We show that fast local/global convergence can be guaranteed under a novel inexactness condition which, to our knowledge, is much weaker than those in prior related work. Numerical results demonstrate the performance of Newton-MR compared with several other Newton-type alternatives on a few machine learning problems.
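A minimal sketch of the Newton-MR idea using SciPy's MINRES as the sub-problem solver, which, unlike CG, tolerates an indefinite Hessian. The smooth log-cosh regression objective and the simple Armijo backtracking below are illustrative assumptions and do not reproduce the paper's invex setting or its inexactness condition:

```python
import numpy as np
from scipy.sparse.linalg import minres

# Newton-MR sketch: the symmetric Hessian system H p = -g is solved
# inexactly with MINRES, then a step is taken along p after backtracking.
# The objective and the line search are illustrative stand-ins.

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 3))
x_true = np.array([0.1, -0.2, 0.3])
b = A @ x_true

def f(x):
    return np.sum(np.log(np.cosh(A @ x - b)))

def grad(x):
    return A.T @ np.tanh(A @ x - b)

def hess(x):
    w = 1.0 / np.cosh(A @ x - b) ** 2
    return (A * w[:, None]).T @ A

x = np.zeros(3)
for _ in range(30):
    g = grad(x)
    p, _ = minres(hess(x), -g)       # inexact Newton direction via MINRES
    t = 1.0
    for _ in range(30):              # simple backtracking (Armijo) search
        if f(x + t * p) <= f(x) + 1e-4 * t * (g @ p):
            break
        t *= 0.5
    x = x + t * p
```

MINRES only requires the Hessian to be symmetric, not positive definite, which is what lets this template extend beyond convexity; the paper's actual line search and inexactness criterion differ from the plain Armijo rule used here.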