
    Enhanced multi-level block ILU preconditioning strategies for general sparse linear systems

    This paper introduces several strategies for handling pivot blocks in multi-level block incomplete LU factorization (BILUM) preconditioning techniques. These strategies aim to increase the robustness of BILUM and to control its fill-in when large blocks are used to form block independent sets for solving large sparse linear systems. The techniques proposed include double-dropping strategies, approximate singular-value decomposition, variable-size blocks, and the use of an arrowhead block submatrix. We point out the advantages and disadvantages of these strategies and discuss their efficient implementation. Numerical experiments show the usefulness of the new techniques on hard-to-solve problems arising from computational fluid dynamics. In addition, we discuss the relation between multi-level ILU preconditioning methods and algebraic multi-level methods.
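    The abstract mentions treating ill-conditioned pivot blocks via approximate singular-value decomposition. As a rough, hedged illustration (our own sketch, not the paper's algorithm; all names are made up), the snippet below inverts a pivot block with a truncated SVD and forms the approximate Schur complement that a two-level BILUM-style reduction would hand to the next level:

```python
# Minimal sketch of one multi-level reduction step: the pivot block D is
# inverted through a truncated SVD so that a near-singular pivot does not
# destroy the factorization, then the approximate Schur complement S is
# formed for the next level. A, D, E, F, S are illustrative names.
import numpy as np

def truncated_svd_inverse(D, tol=1e-8):
    """Pseudo-inverse of a pivot block, dropping small singular values."""
    U, s, Vt = np.linalg.svd(D)
    s_inv = np.zeros_like(s)
    mask = s > tol * s[0]          # keep only well-separated singular values
    s_inv[mask] = 1.0 / s[mask]
    return Vt.T @ np.diag(s_inv) @ U.T

def two_level_reduction(A, k, tol=1e-8):
    """Eliminate the leading k x k pivot block of A, returning its
    approximate inverse and the Schur complement for the next level."""
    D, F = A[:k, :k], A[:k, k:]
    E, C = A[k:, :k], A[k:, k:]
    D_inv = truncated_svd_inverse(D, tol)
    S = C - E @ D_inv @ F          # approximate Schur complement
    return D_inv, S
```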

    LINEAR SYSTEMS OF DIFFERENTIAL EQUATIONS IN ARROWHEAD FORM

    This paper deals with different approaches to solving linear systems of first-order differential equations whose system matrix is in symmetric arrowhead form. Some requisite algebraic properties of the symmetric arrowhead matrix are established. We investigate the form of the invariant factors of the arrowhead matrix, and we also consider the entries of the adjugate of its characteristic matrix. Reduction techniques for linear systems of differential equations with the system matrix in arrowhead form are presented.
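    To make the setting concrete, here is a small sketch (ours, not taken from the paper) that builds a symmetric arrowhead matrix and solves x'(t) = Ax(t) through the symmetric eigendecomposition; the values of d, b, and alpha are illustrative:

```python
# A symmetric arrowhead matrix is diagonal except for its last row and
# column: diag(d) bordered by the vector b, with corner entry alpha.
import numpy as np

def arrowhead(d, b, alpha):
    """Assemble the symmetric arrowhead matrix from its defining data."""
    n = len(d) + 1
    A = np.zeros((n, n))
    A[np.arange(n - 1), np.arange(n - 1)] = d
    A[-1, :-1] = b
    A[:-1, -1] = b
    A[-1, -1] = alpha
    return A

def solve_linear_ode(A, x0, t):
    """x(t) = V exp(L t) V^T x0 via the symmetric eigendecomposition of A."""
    lam, V = np.linalg.eigh(A)
    return V @ (np.exp(lam * t) * (V.T @ x0))

A = arrowhead(d=np.array([1.0, 2.0, 3.0]), b=np.array([0.5, 0.5, 0.5]), alpha=4.0)
x_t = solve_linear_ode(A, x0=np.ones(4), t=0.1)
```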

    A Massively Parallel Algorithm for the Approximate Calculation of Inverse p-th Roots of Large Sparse Matrices

    We present the submatrix method, a highly parallelizable method for the approximate calculation of inverse p-th roots of large sparse symmetric matrices, which are required in various scientific applications. We follow the idea of approximate computing, allowing imprecision in the final result in order to exploit the sparsity of the input matrix and to enable massively parallel execution. For an n x n matrix, the proposed algorithm allows the calculations to be distributed over n nodes with little communication overhead. The approximate result matrix exhibits the same sparsity pattern as the input matrix, allowing efficient reuse of allocated data structures. We evaluate the algorithm with respect to the error it introduces into calculated results, as well as its performance and scalability. We demonstrate that the error is limited for well-conditioned matrices and that the results remain valuable for error-resilient applications such as preconditioning, even for ill-conditioned matrices. We discuss the execution time and scaling of the algorithm at a theoretical level and present a distributed implementation using MPI and OpenMP. We demonstrate the scalability of this implementation by running it on a high-performance compute cluster with 1024 CPU cores, showing a speedup of 665x compared to single-threaded execution.
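    A serial Python sketch of the column-wise idea described in the abstract follows. The authors' implementation is a distributed MPI/OpenMP code; this version is only illustrative and assumes a symmetric positive definite input M so that the dense inverse p-th root of each principal submatrix is well defined:

```python
# For each column i, extract the principal submatrix on the nonzero indices
# of that column, compute its inverse p-th root densely, and write back only
# the column belonging to i. The result keeps the input's sparsity pattern,
# and the loop over columns is embarrassingly parallel.
import numpy as np

def inv_pth_root_dense(S, p):
    """Dense inverse p-th root of a symmetric positive definite block."""
    lam, V = np.linalg.eigh(S)
    return V @ np.diag(lam ** (-1.0 / p)) @ V.T

def submatrix_method(M, p):
    n = M.shape[0]
    R = np.zeros_like(M)
    for i in range(n):                      # one column per node in the paper
        idx = np.nonzero(M[:, i])[0]        # sparsity pattern of column i
        S = M[np.ix_(idx, idx)]             # principal submatrix
        col = inv_pth_root_dense(S, p)
        local_i = np.where(idx == i)[0][0]
        R[idx, i] = col[:, local_i]         # write back column i's pattern only
    return R
```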

    Explicit preconditioned domain decomposition schemes for solving nonlinear boundary value problems

    A new class of inner-outer iterative procedures, used in conjunction with Picard-Newton methods based on explicit preconditioning iterative methods, is presented for solving nonlinear systems. Explicit preconditioned iterative schemes, based on the explicit computation of a class of domain decomposition generalized approximate inverse matrix techniques, are presented for the efficient solution of nonlinear boundary value problems on multiprocessor systems. Applications of the new composite scheme to characteristic nonlinear boundary value problems are discussed and numerical results are given.
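    The inner-outer structure can be illustrated with a hedged sketch (ours, not the paper's scheme): an outer Newton iteration whose inner linear solve applies a few Richardson steps preconditioned by an explicitly formed approximate inverse. Here np.linalg.pinv merely stands in for the paper's domain-decomposition generalized approximate inverse:

```python
# Outer loop: Newton step for F(u) = 0. Inner loop: Richardson iteration for
# J(u) delta = -F(u), preconditioned by an explicit approximate inverse M.
import numpy as np

def newton_with_explicit_preconditioner(F, J, u, outer_iters=20,
                                        inner_iters=5, tol=1e-10):
    for _ in range(outer_iters):
        r = -F(u)
        if np.linalg.norm(r) < tol:
            break
        Ju = J(u)
        M = np.linalg.pinv(Ju)        # stand-in for an approximate inverse
        delta = np.zeros_like(u)
        for _ in range(inner_iters):  # preconditioned Richardson inner steps
            delta = delta + M @ (r - Ju @ delta)
        u = u + delta
    return u

# Toy example: F(u) = u^3 - 1 componentwise, with diagonal Jacobian 3 u^2.
F = lambda u: u**3 - 1.0
J = lambda u: np.diag(3.0 * u**2)
u_star = newton_with_explicit_preconditioner(F, J, u=np.full(3, 2.0))
```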

    A General Framework for Fair Regression

    Fairness, through its many forms and definitions, has become an important issue facing the machine learning community. In this work, we consider how to incorporate group fairness constraints into kernel regression methods, applicable to Gaussian processes, support vector machines, neural network regression, and decision tree regression. We then focus on the effect of incorporating these constraints in decision tree regression, with direct applications to random forests and boosted trees, among other widely used inference techniques. We show that the order of complexity of memory and computation is preserved for such models, and we tightly bound the expected perturbation to the model in terms of the number of leaves of the trees. Importantly, the approach works on trained models and hence can easily be applied to models in current use; group labels are required only on training data.
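    As a hedged sketch of the flavor of leaf perturbation described above (not the paper's algorithm), the snippet below post-processes a trained sklearn regression tree so that the mean predictions of two groups coincide, using the minimum-norm correction to the leaf values. It assumes binary group labels g in {0, 1} on the training data only, and it relies on the implementation detail that sklearn's tree_.value array is writable:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def equalize_group_means(tree, X_train, g):
    """Perturb leaf values so E[f(X)|g=0] == E[f(X)|g=1] on training data."""
    g = np.asarray(g)
    leaf = tree.apply(X_train)                  # leaf id of each sample
    leaves = np.unique(leaf)
    n0, n1 = np.sum(g == 0), np.sum(g == 1)
    values = tree.tree_.value[leaves, 0, 0].copy()
    # Per-leaf occupancy imbalance between the two groups.
    delta = np.array([np.sum(leaf[g == 0] == l) / n0 -
                      np.sum(leaf[g == 1] == l) / n1 for l in leaves])
    gap = values @ delta                        # group mean-prediction gap
    if delta @ delta > 0:
        values -= gap * delta / (delta @ delta) # min-norm leaf perturbation
        tree.tree_.value[leaves, 0, 0] = values # write perturbed leaf values
    return tree
```

    Because predict() reads the stored leaf values, the perturbed tree can then be deployed without any group labels at prediction time, mirroring the abstract's point that group labels are needed only for training data.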

    Thick-restarted joint Lanczos bidiagonalization for the GSVD

    The computation of the partial generalized singular value decomposition (GSVD) of large-scale matrix pairs can be approached by means of iterative methods based on expanding subspaces, particularly Krylov subspaces. We consider the joint Lanczos bidiagonalization method and analyze the feasibility of adapting the thick-restart technique that has been used successfully for other linear algebra problems. Numerical experiments illustrate the effectiveness of the proposed method. We also compare the new method with an alternative solution via equivalent eigenvalue problems, considering accuracy as well as computational performance. The analysis is carried out using a parallel implementation in the SLEPc library.
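    The "equivalent eigenvalue problem" alternative mentioned in the abstract can be shown in a small dense sketch (ours, not the SLEPc code): for a pair (A, B) with B of full column rank, the generalized singular values sigma satisfy the symmetric generalized eigenproblem A^T A x = sigma^2 B^T B x:

```python
# Dense illustration only; the paper targets large-scale pairs iteratively.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
B = rng.standard_normal((8, 5))   # B^T B is almost surely positive definite

w, X = eigh(A.T @ A, B.T @ B)     # generalized eigenpairs, ascending order
gsv = np.sqrt(np.maximum(w, 0))   # generalized singular values sigma_i
```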