Convergence on Gauss-Seidel iterative methods for linear systems with general H-matrices
It is well known that Gauss-Seidel iterative methods, a classical family of
methods in numerical linear algebra, converge for linear systems whose
coefficient matrices are strictly or irreducibly diagonally dominant,
invertible H-matrices (generalized strictly diagonally dominant matrices), or
Hermitian positive definite. The same is not necessarily true, however, for
linear systems with nonstrictly diagonally dominant matrices and general
H-matrices. This paper first proposes some necessary and sufficient conditions
for the convergence of Gauss-Seidel iterative methods, establishing several new
theoretical results for linear systems with nonstrictly diagonally dominant
matrices and general H-matrices. Convergence results for preconditioned
Gauss-Seidel (PGS) iterative methods on general H-matrices are then presented.
Finally, numerical examples are given to demonstrate the results obtained in
this paper.
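As a quick illustration of the classical convergence fact the abstract starts from, here is a minimal Python/NumPy sketch of Gauss-Seidel applied to a strictly diagonally dominant system, where convergence is guaranteed; the matrix, tolerance, and iteration cap are arbitrary choices for the example.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b by Gauss-Seidel iteration (forward sweeps)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x[i+1:].
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Strictly diagonally dominant matrix: Gauss-Seidel is guaranteed to converge.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```

For nonstrictly diagonally dominant matrices, the abstract's subject, this guarantee disappears, which is exactly the gap the paper's conditions address.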
A Direct Elliptic Solver Based on Hierarchically Low-rank Schur Complements
A parallel fast direct solver for rank-compressible block tridiagonal linear
systems is presented. Algorithmic synergies between Cyclic Reduction and
Hierarchical matrix arithmetic operations result in a solver with low
arithmetic complexity and memory footprint. We provide a baseline for
performance and applicability by comparing with well-known implementations of
the H-LU factorization and algebraic multigrid, with a parallel implementation
that leverages the concurrency features of the method. Numerical experiments
reveal that this method is comparable with other fast direct solvers based on
Hierarchical Matrices, such as H-LU, and that it can tackle problems where
algebraic multigrid fails to converge.
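The elimination step underlying Cyclic Reduction can be sketched with a dense Schur complement. The following is only a minimal NumPy illustration of one block elimination on an arbitrary 2x2-block system; the paper's contribution is performing such steps in compressed hierarchical form rather than densely.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Arbitrary 2x2-block system [[A, B], [C, D]] [x1; x2] = [f1; f2];
# scaled identities keep the diagonal blocks well conditioned.
A = rng.standard_normal((n, n)) + 5 * np.eye(n)
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
D = rng.standard_normal((n, n)) + 5 * np.eye(n)
f1 = rng.standard_normal(n)
f2 = rng.standard_normal(n)

# Eliminate x1: the reduced system involves the Schur complement S = D - C A^{-1} B.
Ainv_B = np.linalg.solve(A, B)
Ainv_f1 = np.linalg.solve(A, f1)
S = D - C @ Ainv_B                          # Schur complement of A
x2 = np.linalg.solve(S, f2 - C @ Ainv_f1)   # reduced solve
x1 = Ainv_f1 - Ainv_B @ x2                  # back-substitution

# Reference: solve the full assembled system directly.
M = np.block([[A, B], [C, D]])
x_full = np.linalg.solve(M, np.concatenate([f1, f2]))
```

Cyclic Reduction applies this elimination recursively to the odd-indexed block rows of a block tridiagonal matrix, halving the problem size at each level.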
Average characteristic polynomials for multiple orthogonal polynomial ensembles
Multiple orthogonal polynomials (MOP) are a non-definite version of matrix
orthogonal polynomials. They are described by a Riemann-Hilbert matrix Y
consisting of four blocks Y_{1,1}, Y_{1,2}, Y_{2,1} and Y_{2,2}. In this paper,
we show that det Y_{1,1} (det Y_{2,2}) equals the average characteristic
polynomial (average inverse characteristic polynomial, respectively) over the
probabilistic ensemble that is associated to the MOP. In this way we generalize
classical results for orthogonal polynomials, and also some recent results for
MOP of type I and type II. We then extend our results to arbitrary products and
ratios of characteristic polynomials. In the latter case an important role is
played by a matrix-valued version of the Christoffel-Darboux kernel. Our proofs
use determinantal identities involving Schur complements, and adaptations of
the classical results by Heine, Christoffel and Uvarov.
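The basic determinantal Schur-complement identity that such proofs build on, det M = det(A) det(D - C A^{-1} B) for M = [[A, B], [C, D]] with A invertible, can be checked numerically. A minimal NumPy sketch with arbitrary random blocks:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Arbitrary blocks; the shift keeps A safely invertible.
A = rng.standard_normal((n, n)) + 4 * np.eye(n)
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
D = rng.standard_normal((n, n))
M = np.block([[A, B], [C, D]])

schur = D - C @ np.linalg.solve(A, B)      # Schur complement M/A
lhs = np.linalg.det(M)                     # det of the full matrix
rhs = np.linalg.det(A) * np.linalg.det(schur)
```

The identity factors a large determinant into two smaller ones, which is why Schur complements appear so often in determinantal computations like those in this paper.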
Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections
This work focuses on the iterative solution of sequences of KKT linear
systems arising in interior point methods applied to large convex quadratic
programming problems. This task is the computational core of the interior point
procedure and an efficient preconditioning strategy is crucial for the
efficiency of the overall method. Constraint preconditioners are very effective
in this context; nevertheless, their computation may be very expensive for
large-scale problems, and resorting to approximations of them may be
convenient. Here we propose a procedure for building inexact constraint
preconditioners by updating a "seed" constraint preconditioner computed for a
KKT matrix at a previous interior point iteration. These updates are obtained
through low-rank corrections of the Schur complement of the (1,1) block of the
seed preconditioner. The updated preconditioners are analyzed both
theoretically and computationally. The results obtained show that our updating
procedure, coupled with an adaptive strategy for determining whether to
reinitialize or update the preconditioner, can enhance the performance of
interior point methods on large problems.
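While not the paper's specific update formula, the generic Sherman-Morrison-Woodbury identity illustrates why low-rank corrections of an already-available matrix are cheap: solving with S + U Uᵀ only requires solves with the "seed" S plus a small k x k system. A minimal NumPy sketch with an arbitrary SPD seed matrix and rank-k correction:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2
S = rng.standard_normal((n, n))
S = S @ S.T + n * np.eye(n)           # arbitrary SPD "seed" matrix
U = rng.standard_normal((n, k))       # rank-k correction factors
S_new = S + U @ U.T                   # low-rank-updated matrix

# Woodbury: apply (S + U U^T)^{-1} to b reusing only solves with the seed S.
b = rng.standard_normal(n)
Sinv_b = np.linalg.solve(S, b)
Sinv_U = np.linalg.solve(S, U)
small = np.eye(k) + U.T @ Sinv_U      # k x k capacitance matrix
x = Sinv_b - Sinv_U @ np.linalg.solve(small, U.T @ Sinv_b)
```

In a preconditioning context, the seed factorization is computed once and each subsequent KKT system reuses it through corrections of this kind, which is the economy the abstract describes.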
Physical properties of the Schur complement of local covariance matrices
General properties of global covariance matrices representing bipartite
Gaussian states can be decomposed into properties of local covariance matrices
and their Schur complements. We demonstrate that, given a bipartite Gaussian
state described by a covariance matrix V, the Schur complement of a local
covariance submatrix of V can be interpreted as a new covariance matrix
representing a Gaussian operator of party 1 conditioned on local parity
measurements on party 2. The connection with a partial parity measurement on a
bipartite quantum state and the determination of the reduced Wigner function is
given, and an operational process of parity measurement is developed.
Generalization of this procedure to a multipartite Gaussian state is given, and
it is demonstrated that the system state conditioned to a partial parity
projection is described by a covariance matrix whose block elements are Schur
complements of special local matrices.
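The classical linear-algebra fact behind reading a Schur complement as a conditional covariance can be checked numerically: for a positive definite covariance V = [[V1, C], [Cᵀ, V2]], the Schur complement V1 - C V2⁻¹ Cᵀ equals the inverse of the (1,1) block of V⁻¹, i.e. the covariance of subsystem 1 conditioned on subsystem 2. A minimal NumPy sketch (the matrix is an arbitrary SPD example, not a physical state):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2
X = rng.standard_normal((2 * n, 2 * n))
V = X @ X.T + 2 * np.eye(2 * n)       # arbitrary SPD "covariance" matrix
V1, C = V[:n, :n], V[:n, n:]
V2 = V[n:, n:]

# Conditional covariance of block 1 given block 2 = Schur complement of V2 in V.
cond_cov = V1 - C @ np.linalg.solve(V2, C.T)

# Equivalent characterization via the block-inverse formula.
check = np.linalg.inv(np.linalg.inv(V)[:n, :n])
```

The paper's contribution is the physical reading of this algebraic step for Gaussian states under parity measurements, not the identity itself.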
Sampling Random Spanning Trees Faster than Matrix Multiplication
We present an algorithm that, with high probability, generates a random
spanning tree from an edge-weighted undirected graph in
Õ(n^{4/3} m^{1/2} + n^2) time (the Õ notation hides polylog(n) factors). The
tree is sampled from a distribution where the probability of each tree is
proportional to the product of its edge weights. This improves upon the
previous best algorithm due to Colbourn et al. that runs in matrix
multiplication time, O(n^ω). For the special case of unweighted graphs, this
also improves upon the best previously known running times (Colbourn et al.
'96, Kelner-Madry '09, Madry et al. '15).
The effective resistance metric is essential to our algorithm, as in the work
of Madry et al., but we eschew determinant-based and random walk-based
techniques used by previous algorithms. Instead, our algorithm is based on
Gaussian elimination, and the fact that effective resistance is preserved in
the graph resulting from eliminating a subset of vertices (called a Schur
complement). As part of our algorithm, we show how to compute
ε-approximate effective resistances for a set of vertex pairs via
approximate Schur complements, without using the Johnson-Lindenstrauss
lemma, which would be more expensive here. We combine this approximation
procedure with an error correction procedure for handling edges where our
estimate isn't sufficiently accurate.
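The invariance the algorithm builds on, that eliminating vertices from a graph Laplacian (taking a Schur complement) preserves effective resistances among the remaining vertices, can be illustrated on a small example; the unit-weight path graph below is an arbitrary choice.

```python
import numpy as np

# Laplacian of the unit-weight path graph 0-1-2-3.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W

def effective_resistance(L, u, v):
    """R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), via the pseudoinverse."""
    e = np.zeros(L.shape[0])
    e[u], e[v] = 1.0, -1.0
    return e @ np.linalg.pinv(L) @ e

# Eliminate the interior vertices {1, 2}: Schur complement onto {0, 3}.
keep, elim = [0, 3], [1, 2]
L_kk = L[np.ix_(keep, keep)]
L_ke = L[np.ix_(keep, elim)]
L_ee = L[np.ix_(elim, elim)]
L_schur = L_kk - L_ke @ np.linalg.solve(L_ee, L_ke.T)

r_full = effective_resistance(L, 0, 3)         # resistance in the full graph
r_schur = effective_resistance(L_schur, 0, 1)  # same pair in the 2-vertex Schur graph
```

Here three unit edges in series give effective resistance 3, and the Schur complement onto {0, 3} is again a graph Laplacian (a single edge of weight 1/3) with the same effective resistance. The paper's algorithm exploits this by computing cheap approximate Schur complements instead of exact ones.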