
    Robust Dropping Criteria for F-norm Minimization Based Sparse Approximate Inverse Preconditioning

    Dropping tolerance criteria play a central role in sparse approximate inverse preconditioning. Such criteria have received little attention, however, and have been treated heuristically: if the magnitude of an entry is below some empirically chosen small positive quantity, it is set to zero. The meaning of "small" is vague and has not been made rigorous, and it has been unclear how dropping tolerances affect the quality and effectiveness of a preconditioner $M$. In this paper, we focus on the adaptive Power Sparse Approximate Inverse (PSAI) algorithm and establish a mathematical theory of robust selection criteria for dropping tolerances. Using this theory, we derive an adaptive criterion that drops entries of small magnitude dynamically during the setup of $M$. The proposed criterion makes $M$ as sparse as possible while keeping it of comparable quality to the potentially denser matrix obtained without dropping. As a byproduct, the theory also applies to static F-norm minimization based preconditioning procedures, and a similar criterion is given for sparsifying a matrix after it has been computed by a static sparse approximate inverse procedure. In contrast to the adaptive case, dropping in the static procedure does not reduce the setup time of the matrix, but it makes applying the sparser $M$ within Krylov iterations cheaper. Numerical experiments confirm the theory and illustrate the robustness and effectiveness of the dropping criteria.
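    The static byproduct of this theory amounts to post-sparsification: scan a computed approximate inverse and drop entries that are small relative to their column. The sketch below illustrates that step in Python with SciPy. It is a minimal sketch only: the column-relative tolerance `eps` is an illustrative stand-in, not the theoretically derived dropping tolerance from the paper.

```python
# Post-sparsification of a sparse approximate inverse M, column by
# column. Entries whose magnitude falls below a tolerance proportional
# to the 2-norm of their column are zeroed. The factor `eps` is a
# placeholder for the paper's theoretically derived tolerance.
import numpy as np
import scipy.sparse as sp

def drop_small_entries(M, eps=1e-2):
    """Zero out relatively small entries of the sparse matrix M and
    prune them from the sparsity pattern."""
    M = M.tocsc(copy=True)
    for j in range(M.shape[1]):
        col = M.data[M.indptr[j]:M.indptr[j + 1]]   # view into column j
        if col.size:
            tol = eps * np.linalg.norm(col)         # column-relative tolerance
            col[np.abs(col) < tol] = 0.0            # drop "small" entries
    M.eliminate_zeros()                             # remove explicit zeros
    return M
```

    A sparser $M$ obtained this way saves no setup time, but each application of $M$ inside a Krylov iteration touches fewer nonzeros, which is exactly the trade-off described above for the static procedure.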

    Low-rank approximate inverse for preconditioning tensor-structured linear systems

    In this paper, we propose an algorithm for constructing low-rank approximations of the inverse of an operator given in low-rank tensor format. The construction relies on an updated greedy algorithm that minimizes a suitable distance to the inverse operator. It produces a sequence of approximations defined as projections of the inverse operator onto an increasing sequence of linear subspaces of operators. These subspaces are obtained by tensorizing bases of operators constructed from successive rank-one corrections. In order to handle high-order tensors, approximate projections are computed in low-rank Hierarchical Tucker subsets of the successive subspaces of operators. Desired properties such as symmetry or sparsity can be imposed on the approximate inverse during the correction step, where an optimal rank-one correction is sought as a tensor product of operators with the desired properties. Numerical examples illustrate the ability of the algorithm to provide efficient preconditioners for linear systems in tensor format, improving both the convergence of iterative solvers and the quality of the resulting low-rank approximations of the solution.
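    As a rough illustration of the greedy construction, the sketch below builds a dense $X \approx A^{-1}$ from successive rank-one corrections, each fitted by alternating least squares. Two assumptions are mine: the "suitable distance to the inverse operator" is taken to be $\|AX - I\|_F$, and everything is dense. The actual algorithm works in low-rank tensor formats (Hierarchical Tucker), never forms dense operators, and can constrain each correction to be symmetric or sparse; none of that is reproduced here.

```python
import numpy as np

def greedy_rank_one_inverse(A, rank=5, als_iters=20, seed=0):
    """Greedily build X ~ inv(A) as a sum of rank-one terms u v^T,
    each minimizing ||A(X + u v^T) - I||_F by alternating least
    squares. Dense toy model of the tensor-format construction."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = np.zeros((n, n))
    I = np.eye(n)
    for _ in range(rank):
        R = I - A @ X                                 # current residual
        u = rng.standard_normal(n)
        v = rng.standard_normal(n)
        for _ in range(als_iters):
            # fix v, solve for u:  A u ||v||^2 = R v  (least squares)
            u = np.linalg.lstsq(A, R @ v, rcond=None)[0] / (v @ v)
            # fix u, solve for v:  v = R^T (A u) / ||A u||^2
            w = A @ u
            v = R.T @ w / (w @ w)
        X += np.outer(u, v)                           # rank-one correction
    return X
```

    Each pass adds one term, so the residual $\|AX - I\|_F$ is non-increasing in the rank; in the tensor setting the same greedy loop runs on low-rank factors instead of full matrices.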

    Geometrical inverse preconditioning for symmetric positive definite matrices

    We focus on inverse preconditioners based on minimizing $F(X) = 1 - \cos(XA, I)$, where $XA$ is the preconditioned matrix, $A$ is symmetric positive definite, and $\cos(XA, I) = \langle XA, I\rangle_F / (\|XA\|_F \|I\|_F)$ is the Frobenius cosine between $XA$ and the identity. We present and analyze gradient-type methods to minimize $F(X)$ on a suitable compact set. For that we use the geometrical properties of the non-polyhedral cone of symmetric positive definite matrices, as well as the special properties of $F(X)$ on the feasible set. Preliminary and encouraging numerical results are also presented for both dense and sparse approximations.
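    A minimal sketch of one such gradient-type method, under simplifying assumptions: plain descent with a fixed step size, no projection onto the compact feasible set, and no explicit symmetrization, all of which the paper handles and this sketch omits. The gradient follows from writing $F(X) = 1 - \operatorname{tr}(XA) / (\sqrt{n}\,\|XA\|_F)$.

```python
import numpy as np

def cosine_inverse_precond(A, steps=500, lr=0.1):
    """Gradient descent on F(X) = 1 - cos(XA, I) for SPD A, with
    cos(B, I) = trace(B) / (||B||_F * sqrt(n)). Unconstrained toy
    version: the projection onto a compact subset of the SPD cone
    used in the paper is omitted."""
    n = A.shape[0]
    X = np.eye(n) / np.trace(A)         # scaled-identity starting guess
    for _ in range(steps):
        T = X @ A
        c = np.trace(T)
        s = np.linalg.norm(T, 'fro')
        # dF/dT for F(T) = 1 - c / (s * sqrt(n)):
        G_T = -(np.eye(n) / s - c * T / s**3) / np.sqrt(n)
        X -= lr * (G_T @ A.T)           # chain rule through T = X A
    return X
```

    As a sanity check, $F$ evaluated along the iterates should decrease toward 0 for a modest step size. Since $F$ is invariant under positive scaling of $X$, only the direction of $X$ matters, and a final scalar rescaling of $X$ (minimizing $\|\alpha XA - I\|_F$ over $\alpha$) is a cheap post-processing step.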