The state-of-the-art of preconditioners for sparse linear least-squares problems
In recent years, a variety of preconditioners have been proposed for use in solving large sparse linear least-squares problems. These include simple diagonal preconditioning, preconditioners based on incomplete factorizations, and stationary inner iterations used with Krylov subspace methods. In this study, we briefly review preconditioners for which software has been made available and then present a numerical evaluation of them using performance profiles and a large set of problems arising from practical applications. Comparisons are made with state-of-the-art sparse direct methods.
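As a concrete illustration of the simplest option mentioned in the abstract, diagonal preconditioning of a least-squares problem can be realised as column scaling so that the normal matrix acquires a unit diagonal. The following numpy sketch uses a synthetic matrix (not from the paper's test set):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic least-squares matrix with badly scaled columns
A = rng.standard_normal((30, 5)) * np.array([1.0, 10.0, 100.0, 0.1, 1.0])

# Diagonal preconditioning as column scaling: each column of A @ inv(D)
# has unit 2-norm, so the normal matrix (A D^{-1})^T (A D^{-1}) has a unit diagonal.
d = np.linalg.norm(A, axis=0)
A_prec = A / d
```

A Krylov solver such as LSQR applied to the scaled matrix then works with a normal matrix whose diagonal entries are all one, which often improves its conditioning at negligible cost.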
A note on performance profiles for benchmarking software
In recent years, performance profiles have become a popular and widely used tool for benchmarking and evaluating the performance of several solvers when run on a large test set. Here we use data from a real application, as well as a simple artificial example, to illustrate that caution should be exercised when trying to interpret performance profiles to assess the relative performance of the solvers.
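For readers unfamiliar with the tool under discussion: a performance profile is, in the Dolan and Moré sense, the cumulative distribution of each solver's runtime ratio to the per-problem best. A minimal sketch with hypothetical timing data:

```python
import numpy as np

# Hypothetical runtimes: times[p, s] = runtime of solver s on problem p
times = np.array([[1.0, 2.0],
                  [3.0, 1.5],
                  [2.0, 2.0]])

# Ratio of each solver's time to the best time on that problem
ratios = times / times.min(axis=1, keepdims=True)

def profile(tau, s):
    """Fraction of problems that solver s solves within a factor tau of the best."""
    return np.mean(ratios[:, s] <= tau)
```

Here `profile(1.0, s)` is the fraction of problems on which solver s is (jointly) fastest. One known caveat, in line with the caution this note raises, is that such curves compare every solver only against the per-problem best, so they do not by themselves support pairwise conclusions between two non-best solvers.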
Sparse stretching for solving sparse-dense linear least-squares problems
Large-scale linear least-squares problems arise in a wide range of practical applications. In some cases, the system matrix contains a small number of dense rows. These make the problem significantly harder to solve because their presence limits the direct applicability of sparse matrix techniques. In particular, the normal matrix is (close to) dense, so that forming it is impractical. One way to help overcome the dense row problem is to employ matrix stretching. Stretching is a sparse matrix technique that improves sparsity by making the least-squares problem larger. We show that standard stretching can still result in the normal matrix for the stretched problem having an unacceptably large amount of fill. This motivates us to propose a new sparse stretching strategy that performs the stretching so as to limit the fill in the normal matrix and its Cholesky factor. Numerical examples from real problems are used to illustrate the potential gains.
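The fill-in problem that motivates stretching is easy to demonstrate: a single dense row r contributes a completely dense rank-one term r^T r to the normal matrix. A small scipy sketch on a toy matrix (not one of the paper's test problems):

```python
import numpy as np
import scipy.sparse as sp

n = 6
A_sparse = sp.eye(n, format="csr")            # perfectly sparse block
dense_row = sp.csr_matrix(np.ones((1, n)))    # one dense row appended
A = sp.vstack([A_sparse, dense_row]).tocsr()

# The single dense row alone makes the normal matrix completely full
AtA = (A.T @ A).toarray()
print(np.count_nonzero(AtA), "nonzeros out of", n * n)
```

Even though the sparse block here is diagonal, every entry of the normal matrix is nonzero, which is why forming it directly becomes impractical at scale.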
An inexact dual logarithmic barrier method for solving sparse semidefinite programs
A dual logarithmic barrier method for solving large, sparse semidefinite programs is proposed in this paper. The method avoids any explicit use of the primal variable X and therefore is well-suited to problems with a sparse dual matrix S. It relies on inexact Newton steps in dual space which are computed by the conjugate gradient method applied to the Schur complement of the reduced KKT system. The method may take advantage of low-rank representations of matrices Ai to perform implicit matrix-vector products with the Schur complement matrix and to compute only specific parts of this matrix. This allows the construction of the partial Cholesky factorization of the Schur complement matrix which serves as a good preconditioner for it and permits the method to be run in a matrix-free scheme. Convergence properties of the method are studied and a polynomial complexity result is extended to the case when inexact Newton steps are employed. A MATLAB-based implementation is developed and preliminary computational results of applying the method to maximum cut and matrix completion problems are reported.
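The matrix-free aspect, i.e. running conjugate gradients using only implicit matrix-vector products, can be sketched independently of the SDP setting. The operator below is a toy diagonal stand-in, not the Schur complement matrix of the paper:

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxiter=500):
    """Conjugate gradient method that touches the matrix only through matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy SPD operator applied implicitly, standing in for the Schur complement
d = np.linspace(1.0, 10.0, 100)
x = cg(lambda v: d * v, np.ones(100))
```

Because the solver only ever calls `matvec`, the matrix itself never needs to be formed or stored, which is exactly what makes low-rank implicit products with the Schur complement attractive.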
Obtaining Pseudo-inverse Solutions With MINRES
The celebrated minimum residual method (MINRES), proposed in the seminal paper of Paige and Saunders, has seen great success and widespread use in solving linear least-squares problems involving Hermitian matrices, with further extensions to complex symmetric settings. Unless the system is consistent, whereby the right-hand side vector lies in the range of the matrix, MINRES is not guaranteed to obtain the pseudo-inverse solution. Variants of MINRES, such as MINRES-QLP, which can achieve such minimum-norm solutions, are known to be both computationally expensive and challenging to implement. We propose a novel and remarkably simple lifting strategy that seamlessly integrates with the final MINRES iteration, enabling us to obtain the minimum-norm solution with negligible additional computational cost. We study our lifting strategy in a diverse range of settings encompassing Hermitian and complex symmetric systems, as well as those with semi-definite preconditioners. We also provide numerical experiments to support our analysis and showcase the effects of our lifting strategy.
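As background to the lifting idea (the toy projection below illustrates what the minimum-norm property means; it is not the paper's algorithm): any least-squares solution of a singular symmetric system differs from the pseudo-inverse solution only by a null-space component, so removing that component recovers the minimum-norm solution.

```python
import numpy as np

# Singular symmetric system whose right-hand side is inconsistent
A = np.diag([2.0, 1.0, 0.0])
b = np.array([2.0, 1.0, 1.0])        # has a component outside range(A)

x_ls = np.array([1.0, 1.0, 5.0])     # a least-squares solution, but not minimum norm
N = np.array([[0.0], [0.0], [1.0]])  # orthonormal basis of null(A)

# Removing the null-space component yields the pseudo-inverse solution
x_min = x_ls - N @ (N.T @ x_ls)
```

The appeal of a lifting strategy in this spirit is that the correction is a projection applied once, at the end, rather than a change to the iteration itself as in MINRES-QLP.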
A computational study of using black-box QR solvers for large-scale sparse-dense linear least squares problems
Large-scale overdetermined linear least squares problems arise in many practical applications. One popular solution method is based on the backward stable QR factorization of the system matrix A. This article focuses on sparse-dense least squares problems in which A is sparse except for a small number of rows that are considered dense. For large-scale problems, the direct application of a QR solver either fails because of insufficient memory or is unacceptably slow. We study several solution approaches based on using a sparse QR solver without modification, focussing on the case that the sparse part of A is rank deficient. We discuss partial matrix stretching and regularization, and propose extending the augmented system formulation with iterative refinement for sparse problems to sparse-dense problems, optionally incorporating multi-precision arithmetic. In summary, our computational study shows that, before applying a black-box QR factorization, a check should be made for rows that are classified as dense and, if such rows are identified, then A should be split into sparse and dense blocks. A number of ways to use a black-box QR factorization to exploit this splitting are possible, with no single method found to be the best in all cases.
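One standard way such a sparse/dense splitting can be exploited is a Woodbury (capacitance-matrix) update of the normal equations, so that only the sparse block ever needs factorizing. The sketch below uses dense arithmetic and random data purely for illustration; it is not a reproduction of the article's solvers, and in practice C would be handled by a sparse QR or Cholesky factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 50, 2, 8                      # sparse rows, dense rows, unknowns
A_s = rng.standard_normal((m, n))       # stands in for the sparse block
A_d = rng.standard_normal((k, n))       # the few dense rows
b_s, b_d = rng.standard_normal(m), rng.standard_normal(k)

# Normal equations (A_s^T A_s + A_d^T A_d) x = A_s^T b_s + A_d^T b_d,
# solved with factorizations of the sparse part only, via the Woodbury identity.
C = A_s.T @ A_s                         # in practice: sparse QR/Cholesky of A_s
rhs = A_s.T @ b_s + A_d.T @ b_d
y = np.linalg.solve(C, rhs)
W = np.linalg.solve(C, A_d.T)
S = np.eye(k) + A_d @ W                 # small k-by-k capacitance matrix
x = y - W @ np.linalg.solve(S, A_d @ y)
```

The only dense work is with the k-by-k matrix S, so the cost of handling the dense rows stays negligible when k is small; note, however, that this route inherits the usual squaring of the condition number from the normal equations, which is one reason QR-based alternatives are studied.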