    LSMR: An iterative algorithm for sparse least-squares problems

    An iterative method, LSMR, is presented for solving linear systems Ax = b and least-squares problems min ||Ax - b||_2, with A being sparse or a fast linear operator. LSMR is based on the Golub-Kahan bidiagonalization process. It is analytically equivalent to the MINRES method applied to the normal equation A^T A x = A^T b, so that the quantities ||A^T r_k|| decrease monotonically (where r_k = b - A x_k is the residual for the current iterate x_k). In practice we observe that ||r_k|| also decreases monotonically. Compared to LSQR, for which only ||r_k|| is monotonic, it is therefore safer to terminate LSMR early. Improvements to the new iterative method in the presence of extra available memory are also explored.
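    A minimal sketch of the stopping behavior described above, using SciPy's `scipy.sparse.linalg.lsmr` (an implementation of this algorithm); the problem data here is an arbitrary random sparse example, not from the paper. The quantity LSMR drives toward zero is ||A^T r||, which is also the least-squares optimality condition, so we can check it directly at the returned iterate.

    ```python
    import numpy as np
    from scipy.sparse import random as sprandom
    from scipy.sparse.linalg import lsmr

    rng = np.random.default_rng(0)

    # An overdetermined sparse system: A is 100x30 with ~20% density.
    A = sprandom(100, 30, density=0.2, random_state=0, format="csr")
    b = rng.standard_normal(100)

    # lsmr returns (x, istop, itn, normr, normar, norma, conda, normx);
    # normar is ||A^T r_k|| at termination, the quantity LSMR minimizes
    # monotonically. Tight tolerances force it close to zero.
    x, istop, itn, normr, normar, *_ = lsmr(A, b, atol=1e-10, btol=1e-10)

    # Optimality check for least squares: A^T (b - A x) should vanish.
    r = b - A @ x
    print(np.linalg.norm(A.T @ r))
    ```

    Because ||A^T r_k|| decreases monotonically, a stopping test based on `normar` (as used internally by `lsmr`) never sees the erratic spikes that the same quantity can exhibit under LSQR, which is the safety argument the abstract makes for early termination.
    
    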

    Approximation of the scattering amplitude

    The simultaneous solution of Ax = b and A^T y = g is required in a number of situations. Darmofal and Lu have proposed a method based on the Quasi-Minimal Residual algorithm (QMR). We introduce a technique for the same purpose based on the LSQR method and show how its performance can be improved by using the Generalized LSQR method. We further show how preconditioners can be introduced to accelerate convergence and discuss different preconditioners that can be used. The scattering amplitude g^T x, a quantity widely used in signal processing, for example, has a close connection to the above problem, since x is the solution of the forward problem and g is the right-hand side of the adjoint system. We show how this quantity can be efficiently approximated using Gauss quadrature, and introduce a Block-Lanczos process that approximates the scattering amplitude and can also be used with preconditioners.
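    A small sketch of the forward/adjoint pair underlying the scattering amplitude; this is not the Gauss-quadrature or Block-Lanczos approach of the paper, only a check of the basic identity using SciPy's `scipy.sparse.linalg.lsqr` on an arbitrary random example. Since g^T x = (A^T y)^T x = y^T (A x) = y^T b, the amplitude can be read off from either the forward or the adjoint solution.

    ```python
    import numpy as np
    from scipy.sparse import identity, random as sprandom
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(1)
    n = 40

    # A square sparse matrix, shifted by 2I to keep it well conditioned.
    A = sprandom(n, n, density=0.1, random_state=1, format="csr") \
        + 2.0 * identity(n, format="csr")
    b = rng.standard_normal(n)   # forward right-hand side
    g = rng.standard_normal(n)   # adjoint right-hand side

    x = lsqr(A, b, atol=1e-12, btol=1e-12)[0]    # forward:  A x   = b
    y = lsqr(A.T, g, atol=1e-12, btol=1e-12)[0]  # adjoint:  A^T y = g

    # Scattering amplitude via either system: g^T x should equal y^T b.
    print(abs(g @ x - y @ b))
    ```

    Methods like Generalized LSQR exploit exactly this structure, sharing one Golub-Kahan-type recurrence between the two systems instead of running two independent solves as done here.
    
    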