
    LSMR: An iterative algorithm for sparse least-squares problems

    An iterative method LSMR is presented for solving linear systems $Ax = b$ and least-squares problems $\min \|Ax - b\|_2$, with $A$ being sparse or a fast linear operator. LSMR is based on the Golub-Kahan bidiagonalization process. It is analytically equivalent to the MINRES method applied to the normal equation $A^T A x = A^T b$, so that the quantities $\|A^T r_k\|$ are monotonically decreasing (where $r_k = b - Ax_k$ is the residual for the current iterate $x_k$). In practice we observe that $\|r_k\|$ also decreases monotonically, so that compared to LSQR, for which only $\|r_k\|$ is monotonic, it is safer to terminate LSMR early. Improvements for the new iterative method in the presence of extra available memory are also explored.
    Comment: 21 pages
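    A minimal sketch of the method described above, using SciPy's implementation (`scipy.sparse.linalg.lsmr`); the problem data here are made up for illustration. Note that the returned `normar` estimates $\|A^T r_k\|$, the quantity the abstract says LSMR drives down monotonically.

    ```python
    import numpy as np
    from scipy.sparse import random as sprandom
    from scipy.sparse.linalg import lsmr

    # Build a random sparse least-squares problem min ||Ax - b||_2
    # with a consistent right-hand side (illustrative data only).
    rng = np.random.default_rng(0)
    A = sprandom(100, 20, density=0.1, random_state=rng, format="csr")
    x_true = rng.standard_normal(20)
    b = A @ x_true

    # lsmr returns (x, istop, itn, normr, normar, norma, conda, normx);
    # normar is the estimate of ||A^T r_k|| that decreases monotonically.
    x, istop, itn, normr, normar, *_ = lsmr(A, b, atol=1e-10, btol=1e-10)
    # For this consistent system both normr and normar end up near zero.
    ```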

    Accuracy controlled data assimilation for parabolic problems

    This paper is concerned with the recovery of (approximate) solutions to parabolic problems from incomplete and possibly inconsistent observational data, given on a time-space cylinder that is a strict subset of the computational domain under consideration. Unlike previous approaches to this and related problems, our starting point is a regularized least-squares formulation in a continuous, infinite-dimensional setting that is based on stable variational time-space formulations of the parabolic PDE. This allows us to derive a priori as well as a posteriori error bounds for the recovered states with respect to a certain reference solution. In these bounds the regularization parameter is disentangled from the underlying discretization. An important ingredient for the derivation of a posteriori bounds is the construction of suitable Fortin operators, which allow us to control oscillation errors stemming from the discretization of dual norms. Moreover, the variational framework allows us to contrive preconditioners for the discrete problems whose application can be performed in linear time, and for which the condition numbers of the preconditioned systems are uniformly proportional to that of the regularized continuous problem. In particular, we provide suitable stopping criteria for the iterative solvers based on the a posteriori error bounds. The presented numerical experiments quantify the theoretical findings and demonstrate the performance of the numerical scheme in relation to the underlying discretization and regularization.
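    The regularized least-squares idea at the core of this approach can be illustrated in a finite-dimensional caricature: recover a state from incomplete observations by minimizing a data-misfit term plus a penalty weighted by a regularization parameter. The observation operator, smoothing operator, and parameter choice below are all illustrative assumptions, not the paper's infinite-dimensional time-space formulation.

    ```python
    import numpy as np

    # Caricature of regularized least-squares recovery: observe only part of
    # a state u and solve  min_u ||M u - b||^2 + alpha * ||L u||^2
    # via the normal equations, with alpha the regularization parameter.
    rng = np.random.default_rng(2)
    n = 40
    u_true = np.sin(np.linspace(0.0, np.pi, n))

    M = np.eye(n)[::3]                    # incomplete data: every third entry
    b = M @ u_true + 1e-3 * rng.standard_normal(M.shape[0])

    L = np.diff(np.eye(n), axis=0)        # first-difference smoothing operator
    alpha = 1e-2
    u_rec = np.linalg.solve(M.T @ M + alpha * L.T @ L, M.T @ b)
    ```

    Note that the normal-equation matrix is invertible even though neither term alone is: the penalty fills in the unobserved directions, which is exactly the role regularization plays when the data are incomplete.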

    A Gauss--Newton iteration for Total Least Squares problems

    The Total Least Squares solution of an overdetermined, approximate linear equation $Ax \approx b$ minimizes a nonlinear function which characterizes the backward error. We show that a globally convergent variant of the Gauss--Newton iteration can be tailored to compute that solution. At each iteration, the proposed method requires the solution of an ordinary least squares problem where the matrix $A$ is perturbed by a rank-one term.
    Comment: 14 pages, no figures
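    For context, the solution the iteration above targets also has a classical direct characterization via the SVD of the stacked matrix $[A \mid b]$ (Golub and Van Loan). The sketch below computes that reference solution; it does not reproduce the paper's Gauss--Newton iteration, and the test data are made up.

    ```python
    import numpy as np

    def tls_svd(A, b):
        """Classical SVD route to the Total Least Squares solution of Ax ~ b.

        Take the right singular vector of [A | b] for the smallest singular
        value; the TLS solution exists when its last entry is nonzero.
        """
        m, n = A.shape
        _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
        v = Vt[-1]            # null-ish direction of the stacked matrix
        return -v[:n] / v[n]  # scale so the last component equals -1

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    b = A @ x_true            # consistent data: TLS recovers x_true exactly
    x = tls_svd(A, b)
    ```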

    MINRES-QLP: a Krylov subspace method for indefinite or singular symmetric systems

    CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric systems of linear equations. When these methods are applied to an incompatible system (that is, a singular symmetric least-squares problem), CG could break down and SYMMLQ's solution could explode, while MINRES would give a least-squares solution but not necessarily the minimum-length (pseudoinverse) solution. This understanding motivates us to design a MINRES-like algorithm to compute minimum-length solutions to singular symmetric systems. MINRES uses QR factors of the tridiagonal matrix from the Lanczos process (where R is upper-tridiagonal). MINRES-QLP uses a QLP decomposition (where rotations on the right reduce R to lower-tridiagonal form). On ill-conditioned systems (singular or not), MINRES-QLP can give more accurate solutions than MINRES. We derive preconditioned MINRES-QLP, new stopping rules, and better estimates of the solution and residual norms, the matrix norm, and the condition number.
    Comment: 26 pages, 6 figures
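    A small numerical illustration of the minimum-length (pseudoinverse) solution that MINRES-QLP targets, computed directly with NumPy's `pinv` rather than with MINRES-QLP itself (no SciPy implementation of MINRES-QLP is assumed here); the matrix is a made-up example.

    ```python
    import numpy as np

    # A singular symmetric system with an incompatible right-hand side:
    # the least-squares solutions form an affine family, and the
    # pseudoinverse picks the one of minimum Euclidean norm, which is
    # the solution MINRES-QLP is designed to compute iteratively.
    A = np.array([[2.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0]])   # symmetric, rank 2
    b = np.array([2.0, 3.0, 1.0])     # no exact solution: b not in range(A)

    x_min = np.linalg.pinv(A) @ b     # minimum-length least-squares solution
    # Any x_min + t * e_3 attains the same residual; the pseudoinverse
    # solution has zero third component, hence minimum norm.
    ```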