A new perturbation bound for the LDU factorization of diagonally dominant matrices
This work introduces a new perturbation bound for the L factor of the LDU factorization
of (row) diagonally dominant matrices computed via the column diagonal dominance pivoting
strategy. This strategy yields L and U factors that are always well-conditioned, so the LDU
factorization is guaranteed to be a rank-revealing decomposition. The new bound together with
those for the D and U factors in [F. M. Dopico and P. Koev, Numer. Math., 119 (2011), pp. 337–371]
establish that if diagonally dominant matrices are parameterized via their diagonally dominant
parts and off-diagonal entries, then tiny relative componentwise perturbations of these parameters
produce tiny relative normwise variations of L and U and tiny relative entrywise variations of D when
column diagonal dominance pivoting is used. These results will allow us to prove in a follow-up work
that such perturbations also lead to strong perturbation bounds for many other problems involving
diagonally dominant matrices. Research supported in part by Ministerio de Economía y Competitividad
of Spain under grant MTM2012-32542.
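The factorization in question can be sketched in a few lines. Below is a minimal NumPy sketch (my illustration, not code from the paper) that computes A = LDU by Gaussian elimination, returning unit-triangular L and U and diagonal D; for clarity it omits the column diagonal dominance pivoting that the paper analyzes, which would additionally reorder columns at each step.

```python
import numpy as np

def ldu(A):
    """Plain LDU factorization by Gaussian elimination, no pivoting.

    For a strictly row diagonally dominant matrix the factorization
    exists without pivoting; the paper's column diagonal dominance
    pivoting strategy (omitted here) additionally reorders columns.
    """
    S = np.array(A, dtype=float)
    n = S.shape[0]
    L, U = np.eye(n), np.eye(n)
    d = np.zeros(n)
    for k in range(n):
        d[k] = S[k, k]
        L[k + 1:, k] = S[k + 1:, k] / d[k]           # multipliers below the pivot
        U[k, k + 1:] = S[k, k + 1:] / d[k]           # scaled row right of the pivot
        S[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], S[k, k + 1:])  # Schur complement
    return L, np.diag(d), U

# strictly row diagonally dominant example
A = np.array([[ 5.0, 1.0, -2.0],
              [ 1.0, 4.0,  1.0],
              [-1.0, 2.0,  6.0]])
L, D, U = ldu(A)
print(np.max(np.abs(L @ D @ U - A)))   # reconstruction error at roundoff level
```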
Relative perturbation theory for diagonally dominant matrices
In this paper, strong relative perturbation bounds are developed for a number of linear algebra problems involving diagonally dominant matrices. The key point is to parameterize diagonally dominant matrices using their off-diagonal entries and diagonally dominant parts and to consider small relative componentwise perturbations of these parameters. This allows us to obtain new relative perturbation bounds for the inverse, the solution to linear systems, the symmetric indefinite eigenvalue problem, the singular value problem, and the nonsymmetric eigenvalue problem. These bounds are much stronger than traditional perturbation results, since they are independent of either the standard condition number or the magnitude of eigenvalues/singular values. Together with previously derived perturbation bounds for the LDU factorization and the symmetric positive definite eigenvalue problem, this paper presents a complete and detailed account of relative structured perturbation theory for diagonally dominant matrices. This research was partially supported by the Ministerio de Economía y Competitividad of Spain under grant MTM2012-32542.
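The parameterization is concrete enough to check numerically. The following NumPy sketch (my illustration under the paper's setup, not its code) rebuilds a nearly singular diagonally dominant matrix from its off-diagonal entries and diagonally dominant parts v_i = a_ii - sum_{j != i} |a_ij|, perturbs every parameter by a relative 1e-4, and observes that all singular values move by roughly that amount even though the condition number exceeds 1e7.

```python
import numpy as np

rng = np.random.default_rng(0)

def assemble(off, v):
    # diagonal entries: a_ii = v_i + sum_{j != i} |a_ij|
    A = off.copy()
    np.fill_diagonal(A, v + np.sum(np.abs(off), axis=1))
    return A

n = 3
off = -np.ones((n, n)); np.fill_diagonal(off, 0.0)
v = np.array([1.0, 2.0, 3.0]) * 1e-8        # tiny dominant parts -> nearly singular
A = assemble(off, v)
print("cond(A) = %.1e" % np.linalg.cond(A))

eps = 1e-4                                   # componentwise relative perturbation size
off_p = off * (1.0 + eps * rng.uniform(-1.0, 1.0, (n, n)))
np.fill_diagonal(off_p, 0.0)
Ap = assemble(off_p, v * (1.0 + eps * rng.uniform(-1.0, 1.0, n)))

s = np.linalg.svd(A, compute_uv=False)
sp = np.linalg.svd(Ap, compute_uv=False)
rel = np.max(np.abs(sp - s) / s)
print("max relative change in singular values: %.1e" % rel)
```

A normwise perturbation of the same size could destroy the smallest singular value entirely; the componentwise parameter perturbation leaves every singular value with its leading digits intact.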
RELATIVE PERTURBATION THEORY FOR DIAGONALLY DOMINANT MATRICES
Diagonally dominant matrices arise in many applications. In this work, we exploit the structure of diagonally dominant matrices to provide sharp entrywise relative perturbation bounds. We first generalize the results of Dopico and Koev to provide relative perturbation bounds for the LDU factorization with a well-conditioned L factor. We then establish relative perturbation bounds for the inverse that are entrywise and independent of the condition number. This allows us to also present relative perturbation bounds for the linear system Ax = b that are independent of the condition number. Lastly, we continue the work of Ye to provide relative perturbation bounds for the eigenvalues of symmetric indefinite matrices and nonsymmetric matrices.
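The entrywise inverse bound can also be checked directly. In the sketch below (my illustration, using the same off-diagonal/dominant-part parameterization; the variable names are mine), the matrix is an M-matrix, so its inverse is entrywise positive and every entry's relative change can be measured; the change stays near the 1e-4 parameter perturbation despite a condition number above 1e7.

```python
import numpy as np

rng = np.random.default_rng(1)

def assemble(off, v):
    # a_ii = v_i + sum_{j != i} |a_ij|: rebuild a row diagonally dominant matrix
    A = off.copy()
    np.fill_diagonal(A, v + np.sum(np.abs(off), axis=1))
    return A

n = 4
off = -np.ones((n, n)); np.fill_diagonal(off, 0.0)   # negative off-diagonals -> M-matrix
v = np.array([1.0, 2.0, 3.0, 4.0]) * 1e-8            # tiny dominant parts -> nearly singular
A = assemble(off, v)

eps = 1e-4                                           # componentwise relative perturbation
off_p = off * (1.0 + eps * rng.uniform(-1.0, 1.0, (n, n)))
np.fill_diagonal(off_p, 0.0)
Ap = assemble(off_p, v * (1.0 + eps * rng.uniform(-1.0, 1.0, n)))

inv, inv_p = np.linalg.inv(A), np.linalg.inv(Ap)
rel = np.max(np.abs(inv_p - inv) / np.abs(inv))      # entrywise relative change
print(np.linalg.cond(A), rel)
```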
On maximum volume submatrices and cross approximation for symmetric semidefinite and diagonally dominant matrices
The problem of finding a submatrix of maximum volume of a matrix A
is of interest in a variety of applications. For example, it yields a
quasi-best low-rank approximation constructed from the rows and columns of A.
We show that such a submatrix can always be chosen to be a principal submatrix
if A is symmetric semidefinite or diagonally dominant. Then we analyze the
low-rank approximation error returned by a greedy method for volume
maximization, cross approximation with complete pivoting. Our bound for general
matrices extends an existing result for symmetric semidefinite matrices and
yields new error estimates for diagonally dominant matrices. In particular, for
doubly diagonally dominant matrices the error is shown to remain within a
modest factor of the best approximation error. We also illustrate how the
application of our results to cross approximation for functions leads to new
and better convergence results.
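The greedy method analyzed above can be sketched in a few lines (a minimal NumPy implementation of cross approximation with complete pivoting, illustrative only since it scans the full residual): each step picks the residual entry of largest magnitude as pivot and subtracts the corresponding rank-one row-column cross.

```python
import numpy as np

def cross_approx(A, k):
    """Rank-k cross approximation with complete pivoting (greedy volume maximization)."""
    R = np.array(A, dtype=float)     # residual
    approx = np.zeros_like(R)
    rows, cols = [], []
    for _ in range(k):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if R[i, j] == 0.0:           # residual is zero: exact rank reached
            break
        rows.append(i); cols.append(j)
        cross = np.outer(R[:, j], R[i, :]) / R[i, j]   # rank-one cross through the pivot
        approx += cross
        R -= cross
    return approx, rows, cols

# Hilbert matrix: symmetric positive definite with rapidly decaying singular values
n = 8
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
Hk, rows, cols = cross_approx(H, 5)
print(np.linalg.norm(H - Hk, 2))     # close to the best rank-5 approximation error
```

For a symmetric positive semidefinite input the largest residual entry always lies on the diagonal, so the selected rows and columns coincide and the pivot submatrix is automatically principal, matching the claim above.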
Accurate and Efficient Expression Evaluation and Linear Algebra
We survey and unify recent results on the existence of accurate algorithms
for evaluating multivariate polynomials, and more generally for accurate
numerical linear algebra with structured matrices. By "accurate" we mean that
the computed answer has relative error less than 1, i.e., has some correct
leading digits. We also address efficiency, by which we mean algorithms that
run in polynomial time in the size of the input. Our results will depend
strongly on the model of arithmetic: Most of our results will use the so-called
Traditional Model (TM). We give a set of necessary and sufficient conditions to
decide whether a high accuracy algorithm exists in the TM, and describe
progress toward a decision procedure that will take any problem and provide
either a high accuracy algorithm or a proof that none exists. When no accurate
algorithm exists in the TM, it is natural to extend the set of available
accurate operations by a library of additional operations, such as dot
products, or indeed any enumerable set, which could then be used to build
further accurate algorithms. We show how our accurate algorithms and decision
procedure for finding them extend to this case. Finally, we address other
models of arithmetic, and the relationship between (im)possibility in the TM
and (in)efficient algorithms operating on numbers represented as bit strings. Comment: 49 pages, 6 figures, 1 table.
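The "relative error less than 1" criterion is easy to see in a toy example (mine, not from the survey): evaluating (x - 1)^6 near x = 1 from its expanded coefficients loses all correct digits to cancellation, while the factored form keeps full relative accuracy.

```python
# coefficients of (x - 1)^6 = x^6 - 6x^5 + 15x^4 - 20x^3 + 15x^2 - 6x + 1
coeffs = [1.0, -6.0, 15.0, -20.0, 15.0, -6.0, 1.0]

def horner(c, x):
    """Evaluate a polynomial with coefficients c (highest degree first) at x."""
    r = 0.0
    for ck in c:
        r = r * x + ck
    return r

x = 1.0001
accurate = (x - 1.0) ** 6        # factored form: full relative accuracy, about 1e-24
naive = horner(coeffs, x)        # expanded form: swamped by cancellation noise ~ 1e-16
rel_err = abs(naive - accurate) / accurate
print(accurate, naive, rel_err)  # rel_err >> 1: the naive value has no correct digits
```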
Computing the singular value decomposition with high relative accuracy
We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning, and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a common perturbation theory and common algorithm. We provide these in this paper, and show that high relative accuracy is possible in many new cases as well. The briefest way to describe our results is that we can compute the SVD of G to high relative accuracy provided we can accurately factor G = XDY^T, where D is diagonal and X and Y are any well-conditioned matrices; furthermore, the LDU factorization frequently does the job. We provide many examples of matrix classes permitting such an LDU decomposition.
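The factored-form point is easy to demonstrate (a NumPy sketch with orthogonal factors standing in for the well-conditioned X and Y; my illustration, not code from the paper): the exact singular values of X D Y^T are simply the diagonal of D, yet a conventional backward stable SVD applied to the explicitly formed product loses the singular value that lies below roundoff.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
X, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal, hence perfectly conditioned
Y, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.array([1.0, 1e-10, 1e-20])                  # widely varying singular values
G = X @ np.diag(d) @ Y.T                           # exact SVD of X D Y^T has singular values d

s = np.linalg.svd(G, compute_uv=False)             # backward stable, absolute accuracy only
rel = np.abs(s - d) / d
print(rel)   # first two values accurate; the 1e-20 one has no correct digits
```

Working from the factors instead of the formed product (here, reading d off directly) recovers every singular value to full relative accuracy, which is exactly what the rank-revealing decomposition buys.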
Fast and accurate con-eigenvalue algorithm for optimal rational approximations
The need to compute small con-eigenvalues and the associated con-eigenvectors
of positive-definite Cauchy matrices naturally arises when constructing
rational approximations with a (near) optimally small error.
Specifically, given a rational function with n poles in the unit disk, a
rational approximation with m < n poles in the unit disk may be obtained
from the mth con-eigenvector of an n × n Cauchy matrix, where the
associated con-eigenvalue gives the approximation error in the L∞
norm. Unfortunately, standard algorithms do not accurately compute
small con-eigenvalues (and the associated con-eigenvectors) and, in particular,
yield few or no correct digits for con-eigenvalues smaller than the machine
roundoff. We develop a fast and accurate algorithm for computing
con-eigenvalues and con-eigenvectors of positive-definite Cauchy matrices,
yielding even the tiniest con-eigenvalues with high relative accuracy. The
algorithm computes the mth con-eigenvalue in O(m^2 n) operations
and, since the con-eigenvalues of positive-definite Cauchy matrices decay
exponentially fast, we obtain (near) optimal rational approximations in
O(n (log 1/δ)^2) operations, where δ is the
approximation error in the L∞ norm. We derive error bounds
demonstrating high relative accuracy of the computed con-eigenvalues and the
high accuracy of the unit con-eigenvectors. We also provide examples of using
the algorithm to compute (near) optimal rational approximations of functions
with singularities and sharp transitions, where approximation errors close to
machine precision are obtained. Finally, we present numerical tests on random
(complex-valued) Cauchy matrices to show that the algorithm computes all the
con-eigenvalues and con-eigenvectors with nearly full precision.
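The exponential decay that the complexity estimate relies on is easy to observe. For a real symmetric positive-definite Cauchy matrix the con-eigenvalue equation A x̄ = λ x has real solutions, so the con-eigenvalues reduce to the ordinary eigenvalues, and a short NumPy check (my illustration, not the paper's algorithm) suffices:

```python
import numpy as np

n = 8
x = np.arange(n) + 0.5                # positive nodes; these happen to give the Hilbert matrix
C = 1.0 / (x[:, None] + x[None, :])   # C[i,j] = 1/(x_i + x_j), symmetric positive definite
lam = np.linalg.eigvalsh(C)[::-1]     # eigenvalues = con-eigenvalues here, descending order
print(lam / lam[0])                   # roughly geometric decay over many orders of magnitude
```

Note that the trailing ratios printed here are already near double-precision roundoff, which is precisely why the paper needs a specialized high-relative-accuracy algorithm rather than a standard eigenvalue solver.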