
    Convergence on Gauss-Seidel iterative methods for linear systems with general H-matrices

    It is well known that Gauss-Seidel iterative methods, a classical family of iterative methods in numerical linear algebra, are convergent for linear systems with strictly or irreducibly diagonally dominant matrices, invertible $H$-matrices (generalized strictly diagonally dominant matrices), and Hermitian positive definite matrices. However, the same is not necessarily true for linear systems with nonstrictly diagonally dominant matrices and general $H$-matrices. This paper first proposes some necessary and sufficient conditions for the convergence of Gauss-Seidel iterative methods, establishing several new theoretical results on linear systems with nonstrictly diagonally dominant matrices and general $H$-matrices. Then, convergence results on preconditioned Gauss-Seidel (PGS) iterative methods for general $H$-matrices are presented. Finally, some numerical examples are given to demonstrate the results obtained in this paper.
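    As an illustration of the setting above, the following sketch runs the classical Gauss-Seidel sweep on a strictly diagonally dominant test matrix, one of the cases for which the abstract notes convergence is guaranteed. This is a plain NumPy toy, not the paper's method; all names are illustrative.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b by Gauss-Seidel sweeps, updating components in place."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x_old[i+1:].
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# A strictly diagonally dominant matrix: Gauss-Seidel converges.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([3.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```

    For nonstrictly diagonally dominant matrices and general $H$-matrices, the paper's point is precisely that such a sweep may fail to converge without further conditions.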

    A Direct Elliptic Solver Based on Hierarchically Low-rank Schur Complements

    A parallel fast direct solver for rank-compressible block tridiagonal linear systems is presented. Algorithmic synergies between Cyclic Reduction and Hierarchical matrix arithmetic operations result in a solver with $O(N \log^2 N)$ arithmetic complexity and $O(N \log N)$ memory footprint. We provide a baseline for performance and applicability by comparing with well-known implementations of the $\mathcal{H}$-LU factorization and algebraic multigrid, using a parallel implementation that leverages the concurrency features of the method. Numerical experiments reveal that this method is comparable with other fast direct solvers based on Hierarchical Matrices, such as $\mathcal{H}$-LU, and that it can tackle problems where algebraic multigrid fails to converge.
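    The block-elimination step that both Cyclic Reduction and hierarchical factorizations build on can be sketched in a few lines of dense NumPy. This toy eliminates the first block of a $2 \times 2$ block system, leaving a Schur complement system for the second block; the paper's contribution (low-rank compression of these Schur complements) is not reproduced here, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A 2x2 block system [[A11, A12], [A21, A22]] x = [b1, b2].
A11 = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.1
A12 = rng.standard_normal((n, n)) * 0.1
A21 = rng.standard_normal((n, n)) * 0.1
A22 = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.1
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

# Schur complement of A11: S = A22 - A21 A11^{-1} A12.
S = A22 - A21 @ np.linalg.solve(A11, A12)
# Solve the reduced system for x2, then back-substitute for x1.
x2 = np.linalg.solve(S, b2 - A21 @ np.linalg.solve(A11, b1))
x1 = np.linalg.solve(A11, b1 - A12 @ x2)
```

    One cyclic-reduction step applies this same elimination to all odd-indexed blocks of a block tridiagonal matrix at once, halving the problem size per level.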

    Average characteristic polynomials for multiple orthogonal polynomial ensembles

    Multiple orthogonal polynomials (MOP) are a non-definite version of matrix orthogonal polynomials. They are described by a Riemann-Hilbert matrix $Y$ consisting of four blocks $Y_{1,1}$, $Y_{1,2}$, $Y_{2,1}$ and $Y_{2,2}$. In this paper, we show that $\det Y_{1,1}$ ($\det Y_{2,2}$) equals the average characteristic polynomial (average inverse characteristic polynomial, respectively) over the probabilistic ensemble that is associated to the MOP. In this way we generalize classical results for orthogonal polynomials, and also some recent results for MOP of type I and type II. We then extend our results to arbitrary products and ratios of characteristic polynomials. In the latter case an important role is played by a matrix-valued version of the Christoffel-Darboux kernel. Our proofs use determinantal identities involving Schur complements, and adaptations of the classical results by Heine, Christoffel and Uvarov.
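    The determinantal identities mentioned above rest on the classical Schur determinant formula, $\det M = \det(A)\,\det(D - C A^{-1} B)$ for a block matrix with invertible upper-left block $A$. A quick numerical check of the identity (illustrative only, unrelated to the paper's specific MOP ensembles):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((2 * n, 2 * n))
A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]

# Schur complement of A in M: M/A = D - C A^{-1} B.
schur = D - C @ np.linalg.solve(A, B)
lhs = np.linalg.det(M)
rhs = np.linalg.det(A) * np.linalg.det(schur)
```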

    Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections

    This work focuses on the iterative solution of sequences of KKT linear systems arising in interior point methods applied to large convex quadratic programming problems. This task is the computational core of the interior point procedure, and an efficient preconditioning strategy is crucial for the efficiency of the overall method. Constraint preconditioners are very effective in this context; nevertheless, their computation may be very expensive for large-scale problems, and resorting to approximations of them may be convenient. Here we propose a procedure for building inexact constraint preconditioners by updating a "seed" constraint preconditioner computed for a KKT matrix at a previous interior point iteration. These updates are obtained through low-rank corrections of the Schur complement of the (1,1) block of the seed preconditioner. The updated preconditioners are analyzed both theoretically and computationally. The results obtained show that our updating procedure, coupled with an adaptive strategy for determining whether to reinitialize or update the preconditioner, can enhance the performance of interior point methods on large problems.
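    Low-rank updates of this kind are typically powered by the Sherman-Morrison-Woodbury identity, which refreshes an available inverse (or factorization) at the cost of a small $k \times k$ solve instead of a full refactorization. A minimal sketch under stated assumptions: a generic symmetric matrix stands in for the seed Schur complement, and the rank-$k$ correction is synthetic, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2
G = rng.standard_normal((n, n)) * 0.1
S = np.eye(n) * 5 + (G + G.T) / 2      # stand-in for the "seed" Schur complement
U = rng.standard_normal((n, k)) * 0.3  # synthetic rank-k correction factor

# The updated matrix differs from the seed by a low-rank term.
S_new = S + U @ U.T

# Woodbury identity:
# (S + U U^T)^{-1} = S^{-1} - S^{-1} U (I_k + U^T S^{-1} U)^{-1} U^T S^{-1}
S_inv = np.linalg.inv(S)
small = np.eye(k) + U.T @ S_inv @ U    # only a k x k system to solve
S_new_inv = S_inv - S_inv @ U @ np.linalg.solve(small, U.T @ S_inv)
```

    In a practical preconditioner one would keep a factorization of the seed rather than an explicit inverse, but the cost structure is the same: the update touches only the low-rank factors.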

    Physical properties of the Schur complement of local covariance matrices

    General properties of global covariance matrices representing bipartite Gaussian states can be decomposed into properties of local covariance matrices and their Schur complements. We demonstrate that, given a bipartite Gaussian state $\rho_{12}$ described by a $4 \times 4$ covariance matrix $\mathbf{V}$, the Schur complement of a local covariance submatrix $\mathbf{V}_1$ can be interpreted as a new covariance matrix representing a Gaussian operator of party 1 conditioned to local parity measurements on party 2. The connection with a partial parity measurement over a bipartite quantum state and the determination of the reduced Wigner function is given, and an operational process of parity measurement is developed. Generalization of this procedure to an $n$-partite Gaussian state is given, and it is demonstrated that the $(n-1)$-party state conditioned to a partial parity projection is given by a covariance matrix whose $2 \times 2$ block elements are Schur complements of special local matrices.
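    For context, the most familiar instance of this decomposition is the textbook fact that conditioning a Gaussian on one party yields a covariance given by a Schur complement of the other party's block. The sketch below illustrates that generic mechanism with a random covariance matrix; it is not the paper's parity-measurement construction, and the partitioning is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((4, 4))
V = G @ G.T + np.eye(4)           # a generic 4x4 covariance matrix (SPD)
V1, V2 = V[:2, :2], V[2:, 2:]     # local covariance blocks of parties 1 and 2
C = V[:2, 2:]                     # cross-correlation block

# Schur complement of V2 in V: the conditional covariance of party 1
# after a Gaussian measurement on party 2 (textbook conditioning).
S = V1 - C @ np.linalg.solve(V2, C.T)
```

    Because $\mathbf{V}$ is positive definite, so is the Schur complement, which is what lets it be reinterpreted as a covariance matrix in its own right.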

    Sampling Random Spanning Trees Faster than Matrix Multiplication

    We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in $\tilde{O}(n^{4/3} m^{1/2} + n^2)$ time (the $\tilde{O}(\cdot)$ notation hides $\operatorname{polylog}(n)$ factors). The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, $O(n^\omega)$. For the special case of unweighted graphs, this improves upon the best previously known running time of $\tilde{O}(\min\{n^\omega, m\sqrt{n}, m^{4/3}\})$ for $m \gg n^{5/3}$ (Colbourn et al. '96, Kelner-Madry '09, Madry et al. '15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew the determinant-based and random-walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute $\epsilon$-approximate effective resistances for a set $S$ of vertex pairs via approximate Schur complements in $\tilde{O}(m + (n + |S|)\epsilon^{-2})$ time, without using the Johnson-Lindenstrauss lemma, which requires $\tilde{O}(\min\{(m + |S|)\epsilon^{-2}, m + n\epsilon^{-4} + |S|\epsilon^{-2}\})$ time. We combine this approximation procedure with an error-correction procedure for handling edges where our estimate isn't sufficiently accurate.
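    The elimination fact underlying the algorithm, namely that taking a Schur complement of the graph Laplacian preserves effective resistances among the remaining vertices, can be checked directly. The toy below uses exact dense linear algebra on a small weighted graph, not the paper's near-linear-time approximate routine.

```python
import numpy as np

# Weighted undirected graph on 4 vertices, given by its adjacency matrix.
W = np.array([[0, 1, 2, 0],
              [1, 0, 1, 1],
              [2, 1, 0, 3],
              [0, 1, 3, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W  # graph Laplacian

def effective_resistance(L, u, v):
    """R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), via the pseudoinverse."""
    e = np.zeros(L.shape[0])
    e[u], e[v] = 1.0, -1.0
    return e @ np.linalg.pinv(L) @ e

# Eliminate vertex 3: the Schur complement is again a graph Laplacian,
# now on vertices {0, 1, 2}, with the same pairwise effective resistances.
keep, elim = [0, 1, 2], [3]
L_schur = L[np.ix_(keep, keep)] - L[np.ix_(keep, elim)] @ np.linalg.solve(
    L[np.ix_(elim, elim)], L[np.ix_(elim, keep)])
```

    The algorithm exploits exactly this invariance: it eliminates vertices one by one (approximately, to keep the matrices sparse) while the resistances it needs remain intact.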