5 research outputs found
LSMR: An iterative algorithm for sparse least-squares problems
An iterative method, LSMR, is presented for solving linear systems Ax = b and
least-squares problems min ||Ax - b||_2, with A being sparse or a fast
linear operator. LSMR is based on the Golub-Kahan bidiagonalization process. It
is analytically equivalent to the MINRES method applied to the normal equation
A^T A x = A^T b, so that the quantities ||A^T r_k|| are monotonically
decreasing (where r_k = b - A x_k is the residual for the current iterate
x_k). In practice we observe that ||r_k|| also decreases monotonically.
Compared to LSQR, for which only ||r_k|| is monotonic, it is safer to
terminate LSMR early. Improvements to the new iterative method in the presence
of extra available memory are also explored. (21 pages)
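The algorithm described above is available as scipy.sparse.linalg.lsmr. A minimal sketch (illustrative, not the authors' code; the test matrix and tolerances are made up) solves a sparse least-squares problem and checks the optimality condition A^T r = 0 that LSMR drives down monotonically:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)
# Random sparse 100x30 least-squares problem (assumed data, for illustration)
A = sparse_random(100, 30, density=0.3, random_state=0, format="csr")
b = rng.standard_normal(100)

# Solve min ||Ax - b||_2 with tight tolerances
x = lsmr(A, b, atol=1e-10, btol=1e-10)[0]
r = b - A @ x

# At a least-squares solution the residual is orthogonal to range(A),
# i.e. ||A^T r|| ~ 0 -- exactly the quantity LSMR reduces monotonically.
print(np.linalg.norm(A.T @ r))
```

The printed value should be near zero, confirming that the iterate satisfies the normal equation A^T A x = A^T b to the requested accuracy.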
MINRES-QLP: a Krylov subspace method for indefinite or singular symmetric systems
CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric
systems of linear equations. When these methods are applied to an incompatible
system (that is, a singular symmetric least-squares problem), CG could break
down and SYMMLQ's solution could explode, while MINRES would give a
least-squares solution but not necessarily the minimum-length (pseudoinverse)
solution. This understanding motivates us to design a MINRES-like algorithm to
compute minimum-length solutions to singular symmetric systems.
MINRES uses QR factors of the tridiagonal matrix from the Lanczos process
(where R is upper-tridiagonal). MINRES-QLP uses a QLP decomposition (where
rotations on the right reduce R to lower-tridiagonal form). On ill-conditioned
systems (singular or not), MINRES-QLP can give more accurate solutions than
MINRES. We derive preconditioned MINRES-QLP, new stopping rules, and better
estimates of the solution and residual norms, the matrix norm, and the
condition number. (26 pages, 6 figures)
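The distinction the abstract draws between a least-squares solution and the minimum-length (pseudoinverse) solution can be seen on a tiny example (a NumPy illustration, not the MINRES-QLP code): for a singular symmetric system with b outside range(A), every least-squares solution differs by a null-space vector, and the pseudoinverse picks the one of smallest norm.

```python
import numpy as np

# Singular symmetric matrix (rank 2) and an incompatible right-hand side
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
b = np.array([2.0, 3.0, 1.0])   # last component is unreachable: b not in range(A)

x_plus = np.linalg.pinv(A) @ b  # minimum-length least-squares solution: [1, 3, 0]
null_vec = np.array([0.0, 0.0, 1.0])  # spans null(A)

# Any x_plus + t*null_vec has the same residual but a larger norm
x_other = x_plus + 5.0 * null_vec
same_residual = np.allclose(A @ x_plus - b, A @ x_other - b)
print(x_plus, same_residual,
      np.linalg.norm(x_plus) < np.linalg.norm(x_other))
```

MINRES would return some least-squares solution from this family; MINRES-QLP is designed to return x_plus specifically.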
The state-of-the-art of preconditioners for sparse linear least-squares problems
In recent years, a variety of preconditioners have been proposed for use in solving large sparse linear least-squares problems. These include simple diagonal preconditioning, preconditioners based on incomplete factorizations, and stationary inner iterations used with Krylov subspace methods. In this study, we briefly review preconditioners for which software has been made available and then present a numerical evaluation of them using performance profiles and a large set of problems arising from practical applications. Comparisons are made with state-of-the-art sparse direct methods.
Iterative Linear Algebra for Parameter Estimation
The principal goal of this thesis is the development and analysis of efficient numerical
methods for large-scale nonlinear parameter estimation problems. These problems are of
high relevance in all sciences that predict the future using big data sets of the past by
fitting and then extrapolating a mathematical model. This thesis is concerned with the
fitting part. The challenges lie in the treatment of the nonlinearities and the sheer size of
the data and the unknowns. The state-of-the-art for the numerical solution of parameter
estimation problems is the Gauss-Newton method, which solves a sequence of linearized
subproblems.
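The Gauss-Newton iteration described above can be sketched in a few lines (a hedged illustration, not the thesis code; the exponential model, data, and starting guess are made up): at each step the nonlinear residual F is linearized and the least-squares subproblem min ||J dp + F|| is solved for the update.

```python
import numpy as np

def F(p, t, y):
    # Residuals of an exponential model y ~ p0 * exp(p1 * t)
    return p[0] * np.exp(p[1] * t) - y

def J(p, t):
    # Jacobian of the residuals with respect to p
    return np.column_stack([np.exp(p[1] * t),
                            p[0] * t * np.exp(p[1] * t)])

t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.0 * t)      # noise-free synthetic data, true p = [2, -1]
p = np.array([1.0, 0.0])        # starting guess

for _ in range(20):
    # Linearized subproblem: min_dp ||J dp + F||_2
    dp = np.linalg.lstsq(J(p, t), -F(p, t, y), rcond=None)[0]
    p = p + dp

print(p)  # converges to the true parameters [2, -1]
```

Here each subproblem is solved exactly with a dense factorization; the thesis's point is that for large-scale problems this inner solve must itself be iterative, which raises the question of when to stop it.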
One of the contributions of this thesis is a thorough analysis of the problem class on
the basis of covariant and contravariant k-theory. Based on this analysis, it is possible
to devise a new stopping criterion for the iterative solution of the inner linearized subproblems.
The analysis reveals that the inner subproblems can be solved to only low
accuracy without dramatically impeding the convergence speed of the outer iteration.
In addition, I prove that this new stopping criterion gives a quantitative measure of how
accurately the subproblems need to be solved in order to produce inexact Gauss-Newton
sequences that converge to a statistically stable estimate, provided that at least
one exists. This local approach thus yields an inexact Gauss-Newton method
that requires far fewer inner iterations per step than the classical exact
Gauss-Newton method, which relies on a factorization algorithm and effectively
performs all of the inner iterations; the latter is computationally prohibitive
when the number of parameters to be estimated is large. Furthermore, we
generalize these local ideas and introduce a damped inexact Gauss-Newton
method based on Potschka's backward step control theory for global Newton-type methods.
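The inexact idea can be sketched as follows (a hedged illustration with an invented quadratic model and ad-hoc tolerances, not the thesis's actual stopping criterion): each linearized subproblem is solved by an iterative method (here LSQR) that is terminated early with loose tolerances, so each outer step is cheap.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def F(p, t, y):
    # Residuals of a quadratic model y ~ p0 * t^2 + p1 * t
    return p[0] * t**2 + p[1] * t - y

def J(p, t):
    return np.column_stack([t**2, t])

t = np.linspace(0.0, 1.0, 50)
y = 3.0 * t**2 - 2.0 * t        # noise-free synthetic data, true p = [3, -2]
p = np.zeros(2)

for _ in range(5):
    # Inexact inner solve: loose tolerances stop LSQR well before full accuracy
    dp = lsqr(J(p, t), -F(p, t, y), atol=1e-2, btol=1e-2)[0]
    p = p + dp

print(p)  # approaches the true parameters [3, -2]
```

Even with inner tolerances of only 1e-2, the outer iteration still contracts toward the estimate; choosing how loose these tolerances may be is exactly what the thesis's stopping criterion quantifies.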
We evaluate the efficiency of our new approach using two examples. The first
is the parameter identification of a nonlinear elliptic partial differential equation, and
the second is a real-world parameter estimation on a large-scale bundle adjustment
problem. Both examples are ill-conditioned, so a suitable regularization is
applied in each case. Our experimental results show that the new inexact Gauss-
Newton approach requires less than 3% of the inner iterations for computing the
inexact Gauss-Newton step in order to converge to a statistically stable estimate.