Analysis of a Classical Matrix Preconditioning Algorithm
We study a classical iterative algorithm for balancing matrices in the
$L_\infty$ norm via a scaling transformation. This algorithm, which goes back
to Osborne and Parlett \& Reinsch in the 1960s, is implemented as a standard
preconditioner in many numerical linear algebra packages. Surprisingly, despite
its widespread use over several decades, no bounds were known on its rate of
convergence. In this paper we prove that, for any irreducible $n\times n$ (real
or complex) input matrix~$A$, a natural variant of the algorithm converges in
$O(n^3\log(n\rho/\epsilon))$ elementary balancing operations, where $\rho$
measures the initial imbalance of~$A$ and $\epsilon$ is the target imbalance
of the output matrix. (The imbalance of~$A$ is $\max_i |\log(a_i^{\mathrm{out}}/a_i^{\mathrm{in}})|$, where
$a_i^{\mathrm{out}}, a_i^{\mathrm{in}}$ are the maximum entries in magnitude in the
$i$th row and column respectively.) This bound is tight up to the $\log n$
factor. A balancing operation scales the $i$th row and column so that their
maximum entries are equal, and requires $O(m/n)$ arithmetic operations on
average, where $m$ is the number of non-zero elements in~$A$. Thus the running
time of the iterative algorithm is $\tilde{O}(n^2 m)$. This is the first time
bound of any kind on any variant of the Osborne-Parlett-Reinsch algorithm. We
also prove a conjecture of Chen that characterizes those matrices for which the
limit of the balancing process is independent of the order in which balancing
operations are performed.

Comment: The previous version (1) (see also STOC'15) handled UB ("unique
balance") input matrices. In this version (2) we extend the work to handle
all input matrices.
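The elementary balancing operation described above is easy to render in code. The sketch below is an illustrative variant, not the paper's exact algorithm: for each index $i$ it applies a diagonal similarity $A \mapsto DAD^{-1}$ that equalizes the maximum-magnitude off-diagonal entries of row $i$ and column $i$, sweeping until the imbalance drops below a tolerance. The function name and stopping parameters are ours.

```python
import numpy as np

def balance_linf(A, eps=1e-6, max_sweeps=100):
    """Illustrative Osborne-style L_inf balancing by diagonal similarities.

    Repeatedly rescales row/column pairs (A -> D A D^{-1}) so that the
    largest-magnitude off-diagonal entries of row i and column i become
    equal. Sketch only; not the exact variant analyzed in the paper.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(max_sweeps):
        worst = 0.0                          # largest imbalance seen this sweep
        for i in range(n):
            mask = np.ones(n, dtype=bool)
            mask[i] = False                  # exclude the diagonal entry
            r = np.max(np.abs(A[i, mask]))   # max off-diagonal entry in row i
            c = np.max(np.abs(A[mask, i]))   # max off-diagonal entry in column i
            if r == 0 or c == 0:
                continue                     # index untouched by balancing
            worst = max(worst, abs(np.log(r / c)))
            d = np.sqrt(c / r)               # after scaling, both maxima = sqrt(r*c)
            A[i, :] *= d                     # scale row i ...
            A[:, i] /= d                     # ... and column i (diagonal unchanged)
        if worst <= eps:
            break
    return A
```

For an irreducible matrix such as a weighted cycle, the balanced limit makes every row maximum equal to the corresponding column maximum.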
Much Faster Algorithms for Matrix Scaling
We develop several efficient algorithms for the classical \emph{Matrix
Scaling} problem, which is used in many diverse areas, from preconditioning
linear systems to approximation of the permanent. On an input $n\times n$
matrix $A$, this problem asks to find diagonal (scaling) matrices $X$ and $Y$
(if they exist), so that $XAY$ $\epsilon$-approximates a doubly
stochastic, or more generally a matrix with prescribed row and column sums.
We address the general scaling problem as well as some important special
cases. In particular, if $A$ has $m$ nonzero entries, and if there exist $X$
and $Y$ with polynomially large entries such that $XAY$ is doubly stochastic,
then we can solve the problem in total complexity $\tilde{O}(m + n^{4/3})$.
This greatly improves on the best known previous results, which were either
$\tilde{O}(n^4)$ or $O(mn^{1/2}/\epsilon)$.
Our algorithms are based on tailor-made first- and second-order techniques,
combined with other recent advances in continuous optimization, which may be of
independent interest for solving similar problems.
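To make the problem statement concrete, the sketch below shows the classical Sinkhorn iteration, the simple alternating baseline that the paper's first- and second-order methods improve upon (it is emphatically not the paper's algorithm). It alternately rescales rows and columns of a positive matrix until $\mathrm{diag}(x)\,A\,\mathrm{diag}(y)$ is approximately doubly stochastic. Function name and tolerances are ours.

```python
import numpy as np

def sinkhorn_scale(A, eps=1e-6, max_iters=10000):
    """Classical Sinkhorn iteration for matrix scaling (baseline sketch).

    Finds positive vectors x, y so that diag(x) A diag(y) is approximately
    doubly stochastic. This is the textbook alternating method, not the
    faster first/second-order algorithms developed in the paper.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    x = np.ones(n)
    y = np.ones(n)
    for _ in range(max_iters):
        y = 1.0 / (A.T @ x)          # rescale so each column sum equals 1
        x = 1.0 / (A @ y)            # rescale so each row sum equals 1
        S = (x[:, None] * A) * y[None, :]
        err = max(np.abs(S.sum(axis=0) - 1).max(),
                  np.abs(S.sum(axis=1) - 1).max())
        if err <= eps:
            break
    return x, y
```

On a strictly positive matrix the iteration converges, though its dependence on $\epsilon$ is what the paper's $\tilde{O}(m + n^{4/3})$ algorithms avoid.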