Convergence on Gauss-Seidel iterative methods for linear systems with general H-matrices
It is well known that, as a classical type of iterative method in numerical linear algebra, Gauss-Seidel iterative methods are convergent for linear systems with strictly or irreducibly diagonally dominant matrices, invertible H-matrices (generalized strictly diagonally dominant matrices), and Hermitian positive definite matrices. However, the same is not necessarily true for linear systems with nonstrictly diagonally dominant matrices and general H-matrices. This paper first proposes some necessary and sufficient conditions for the convergence of Gauss-Seidel iterative methods, establishing several new theoretical results for linear systems with nonstrictly diagonally dominant matrices and general H-matrices. Then, convergence results for preconditioned Gauss-Seidel (PGS) iterative methods applied to general H-matrices are presented. Finally, some numerical examples are given to demonstrate the results obtained in this paper.
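
A minimal Python/NumPy sketch of the classical setting referred to above: plain Gauss-Seidel applied to a strictly diagonally dominant system, together with the spectral radius of the iteration matrix. The matrix, right-hand side, and tolerance are arbitrary illustrative choices, not taken from the paper.

import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    # Plain Gauss-Seidel sweep:
    # x_i <- (b_i - sum_{j<i} a_ij x_j_new - sum_{j>i} a_ij x_j_old) / a_ii
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Strictly diagonally dominant example system (illustrative data).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

# Gauss-Seidel iteration matrix T = -(D + L)^{-1} U; its spectral radius is
# below 1 here, which is exactly the classical convergence guarantee.
T = -np.linalg.solve(np.tril(A), np.triu(A, 1))
print("spectral radius:", max(abs(np.linalg.eigvals(T))))

x = gauss_seidel(A, b)
print("solution:", x, "residual:", np.linalg.norm(A @ x - b))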
Disc Separation of the Schur Complement of Diagonally Dominant Matrices and Determinantal Bounds
We consider the Gersgorin disc separation from the origin for (doubly) diagonally dominant matrices and their Schur complements, showing that the separation of the Schur complement of a (doubly) diagonally dominant matrix is greater than that of the original grand matrix. As an application, we discuss the localization of eigenvalues and present some upper and lower bounds for the determinant of diagonally dominant matrices.
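
A small Python/NumPy check of the disc-separation statement above on one strictly diagonally dominant matrix; the matrix and the index set defining the Schur complement are arbitrary illustrative choices, not taken from the paper.

import numpy as np

def disc_separation(M):
    # Minimum Gersgorin disc separation from the origin:
    # min_i (|m_ii| - sum_{j != i} |m_ij|).
    d = np.abs(np.diag(M))
    r = np.abs(M).sum(axis=1) - d
    return (d - r).min()

def schur_complement(A, alpha):
    # Schur complement of the principal submatrix A[alpha, alpha] in A.
    beta = [i for i in range(A.shape[0]) if i not in alpha]
    A11 = A[np.ix_(alpha, alpha)]
    A12 = A[np.ix_(alpha, beta)]
    A21 = A[np.ix_(beta, alpha)]
    A22 = A[np.ix_(beta, beta)]
    return A22 - A21 @ np.linalg.solve(A11, A12)

# Strictly diagonally dominant test matrix (illustrative data).
A = np.array([[5.0, 1.0, -1.0, 0.5],
              [1.0, 6.0, 0.5, -1.0],
              [-0.5, 1.0, 4.0, 1.0],
              [0.5, -1.0, 1.0, 5.0]])

S = schur_complement(A, alpha=[0, 1])
print("separation of A:         ", disc_separation(A))
print("separation of A/A[alpha]:", disc_separation(S))  # at least as large on this example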
Geometric aspects of the symmetric inverse M-matrix problem
We investigate the symmetric inverse M-matrix problem from a geometric
perspective. The central question in this geometric context is, which
conditions on the k-dimensional facets of an n-simplex S guarantee that S has
no obtuse dihedral angles. First we study the properties of an n-simplex S
whose k-facets are all nonobtuse, and generalize some classical results by
Fiedler. We prove that if all (n-1)-facets of an n-simplex S are nonobtuse,
each makes at most one obtuse dihedral angle with another facet. This helps to
identify a special type of tetrahedron, which we will call sub-orthocentric,
with the property that if all tetrahedral facets of S are sub-orthocentric,
then S is nonobtuse. Rephrased in the language of linear algebra, this
constitutes a purely geometric proof of the fact that each symmetric
ultrametric matrix is the inverse of a weakly diagonally dominant M-matrix.
Review papers support our belief that the linear algebraic perspective on the
inverse M-matrix problem dominates the literature. The geometric perspective
however connects sign properties of entries of inverses of a symmetric positive
definite matrix to the dihedral angle properties of an underlying simplex, and
enables an explicit visualization of how these angles and signs can be
manipulated. This will serve to formulate purely geometric conditions on the
k-facets of an n-simplex S that may render S nonobtuse also for k>3. For this,
we generalize the class of sub-orthocentric tetrahedra that gives rise to the
class of ultrametric matrices, to sub-orthocentric simplices that define
symmetric positive definite matrices A with special types of k x k principal
submatrices for k>3. Each sub-orthocentric simplex is nonobtuse, and we conjecture that any simplex whose facets are all sub-orthocentric is itself sub-orthocentric.
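
The linear-algebraic fact quoted above can be checked numerically on a small example. The following Python/NumPy snippet inverts a hand-picked symmetric strictly ultrametric 3x3 matrix and verifies the M-matrix sign pattern and the (weak) diagonal dominance of its inverse; the matrix is an illustrative choice, not taken from the paper.

import numpy as np

# Hand-picked symmetric strictly ultrametric matrix: nonnegative entries,
# a_ij >= min(a_ik, a_kj) for all i, j, k, and each diagonal entry strictly
# exceeds the off-diagonal entries in its row (illustrative example).
A = np.array([[3.0, 2.0, 1.0],
              [2.0, 3.0, 1.0],
              [1.0, 1.0, 2.0]])

B = np.linalg.inv(A)
offdiag = B - np.diag(np.diag(B))
print(B)
# Nonpositive off-diagonal entries give the M-matrix sign pattern; given that
# sign pattern, nonnegative row sums are equivalent to (weak) diagonal
# dominance of B.
print("off-diagonal entries <= 0:", bool(np.all(offdiag <= 1e-12)))
print("row sums >= 0:", bool(np.all(B.sum(axis=1) >= -1e-12)))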
On the intersection of the classes of doubly diagonally dominant matrices and S-strictly diagonally dominant matrices
We denote by H0 the subclass of H-matrices consisting of all matrices that lie simultaneously in the class of doubly diagonally dominant (DDD) matrices, i.e., $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ such that $|a_{ii}|\,|a_{jj}| \ge \big(\sum_{k\neq i}|a_{ik}|\big)\big(\sum_{k\neq j}|a_{jk}|\big)$ for all $i\neq j$, and in the class of S-strictly diagonally dominant (S-SDD) matrices. Notice that strictly doubly diagonally dominant matrices (also called Ostrowski matrices) are a subclass of H0. Strictly diagonally dominant matrices (SDD) are also a subclass of H0. In this paper we analyze some properties of the class H0 = DDD ∩ S-SDD.
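
A minimal Python/NumPy checker for the two row-sum conditions named above, strict diagonal dominance and double diagonal dominance, applied to a matrix that is DDD but not SDD; the test matrix is an illustrative choice, not taken from the paper (the S-SDD condition, which depends on a chosen index set S, is omitted here).

import numpy as np

def deleted_row_sums(A):
    # r_i(A) = sum_{k != i} |a_ik|
    return np.abs(A).sum(axis=1) - np.abs(np.diag(A))

def is_sdd(A):
    # Strict diagonal dominance: |a_ii| > r_i(A) for every row i.
    return bool(np.all(np.abs(np.diag(A)) > deleted_row_sums(A)))

def is_ddd(A):
    # Double diagonal dominance: |a_ii| |a_jj| >= r_i(A) r_j(A) for all i != j.
    d = np.abs(np.diag(A))
    r = deleted_row_sums(A)
    n = A.shape[0]
    return all(d[i] * d[j] >= r[i] * r[j]
               for i in range(n) for j in range(n) if i != j)

# DDD but not SDD (the first row violates strict dominance): illustrative data.
A = np.array([[2.0, 1.0, 1.5],
              [0.5, 3.0, 0.5],
              [0.5, 0.5, 3.0]])
print("SDD:", is_sdd(A), " DDD:", is_ddd(A))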
Matrix Scaling and Balancing via Box Constrained Newton's Method and Interior Point Methods
In this paper, we study matrix scaling and balancing, which are fundamental problems in scientific computing, with a long line of work on them that dates back to the 1960s. We provide algorithms for both these problems that, ignoring logarithmic factors involving the dimension of the input matrix and the size of its entries, run in time near-linear in the number of nonzero entries of the input, up to factors polylogarithmic in $1/\varepsilon$ and $\kappa$. Here, $\varepsilon$ is the amount of error we are willing to tolerate, and $\kappa$ represents the ratio between the largest and the smallest entries of the optimal scalings. This implies that our algorithms run in nearly-linear time whenever $\kappa$ is quasi-polynomial, which includes, in particular, the case of strictly positive matrices. We complement our results by providing a separate algorithm that uses an interior-point method.
In order to establish these results, we develop a new second-order
optimization framework that enables us to treat both problems in a unified and
principled manner. This framework identifies a certain generalization of linear
system solving that we can use to efficiently minimize a broad class of
functions, which we call second-order robust. We then show that in the context
of the specific functions capturing matrix scaling and balancing, we can
leverage and generalize the work on Laplacian system solving to make the
algorithms obtained via this framework very efficient.
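
For orientation only, a minimal Python/NumPy sketch of the classical Sinkhorn-Knopp alternating scaling iteration for the doubly stochastic scaling problem; this is the textbook baseline, not the box-constrained Newton method or the interior-point algorithm of the paper, and the test matrix and tolerance are arbitrary illustrative choices.

import numpy as np

def sinkhorn_scaling(A, tol=1e-10, max_iter=10000):
    # Alternately rescale rows and columns so that diag(x) @ A @ diag(y)
    # has unit row and column sums (assumes a strictly positive matrix A).
    x = np.ones(A.shape[0])
    y = np.ones(A.shape[1])
    for _ in range(max_iter):
        y = 1.0 / (A.T @ x)   # make column sums of diag(x) A diag(y) equal to 1
        x = 1.0 / (A @ y)     # make row sums equal to 1
        B = np.diag(x) @ A @ np.diag(y)
        err = max(np.abs(B.sum(axis=0) - 1).max(),
                  np.abs(B.sum(axis=1) - 1).max())
        if err < tol:
            break
    return x, y

# Strictly positive test matrix, the easy case mentioned in the abstract (illustrative data).
A = np.random.default_rng(0).uniform(0.5, 2.0, size=(4, 4))
x, y = sinkhorn_scaling(A)
B = np.diag(x) @ A @ np.diag(y)
print("row sums:   ", B.sum(axis=1))
print("column sums:", B.sum(axis=0))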