
    Analysis of a Classical Matrix Preconditioning Algorithm

    We study a classical iterative algorithm for balancing matrices in the $L_\infty$ norm via a scaling transformation. This algorithm, which goes back to Osborne and Parlett \& Reinsch in the 1960s, is implemented as a standard preconditioner in many numerical linear algebra packages. Surprisingly, despite its widespread use over several decades, no bounds were known on its rate of convergence. In this paper we prove that, for any irreducible $n\times n$ (real or complex) input matrix $A$, a natural variant of the algorithm converges in $O(n^3\log(n\rho/\varepsilon))$ elementary balancing operations, where $\rho$ measures the initial imbalance of $A$ and $\varepsilon$ is the target imbalance of the output matrix. (The imbalance of $A$ is $\max_i |\log(a_i^{\text{out}}/a_i^{\text{in}})|$, where $a_i^{\text{out}}, a_i^{\text{in}}$ are the maximum entries in magnitude in the $i$th row and column respectively.) This bound is tight up to the $\log n$ factor. A balancing operation scales the $i$th row and column so that their maximum entries are equal, and requires $O(m/n)$ arithmetic operations on average, where $m$ is the number of non-zero elements in $A$. Thus the running time of the iterative algorithm is $\tilde{O}(n^2 m)$. This is the first time bound of any kind on any variant of the Osborne-Parlett-Reinsch algorithm. We also prove a conjecture of Chen that characterizes those matrices for which the limit of the balancing process is independent of the order in which balancing operations are performed. Comment: The previous version (1) (see also STOC'15) handled UB ("unique balance") input matrices. In this version (2) we extend the work to handle all input matrices.
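
    The sketch below is purely an illustration of the elementary balancing operation described in the abstract, not the paper's analyzed variant: it sweeps the indices in round-robin order, excludes diagonal entries from the row/column maxima (as is standard, since they are unchanged by the diagonal similarity), and stops once the imbalance $\max_i |\log(a_i^{\text{out}}/a_i^{\text{in}})|$ falls below a target $\varepsilon$. The function name and the dense-NumPy representation are our own choices.

    ```python
    import numpy as np

    def osborne_balance_linf(A, eps=1e-8, max_sweeps=10000):
        """Illustrative L_infinity balancing: scale row/column i by a diagonal
        similarity so that their maximum-magnitude off-diagonal entries match."""
        A = np.array(A, dtype=float)
        n = A.shape[0]
        for _ in range(max_sweeps):
            worst = 0.0
            for i in range(n):
                off = np.ones(n, dtype=bool)
                off[i] = False
                a_out = np.abs(A[i, off]).max()   # max off-diagonal entry in row i
                a_in = np.abs(A[off, i]).max()    # max off-diagonal entry in column i
                if a_out == 0.0 or a_in == 0.0:   # cannot happen for irreducible A
                    continue
                worst = max(worst, abs(np.log(a_out / a_in)))
                f = np.sqrt(a_in / a_out)         # both maxima become sqrt(a_in * a_out)
                A[i, :] *= f                      # scale row i
                A[:, i] /= f                      # scale column i: a similarity, so
                                                  # eigenvalues are preserved
            if worst <= eps:                      # imbalance within the target epsilon
                break
        return A
    ```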

    Much Faster Algorithms for Matrix Scaling

    We develop several efficient algorithms for the classical \emph{Matrix Scaling} problem, which is used in many diverse areas, from preconditioning linear systems to approximation of the permanent. On an input $n\times n$ matrix $A$, this problem asks to find diagonal (scaling) matrices $X$ and $Y$ (if they exist), so that $XAY$ $\varepsilon$-approximates a doubly stochastic matrix, or more generally a matrix with prescribed row and column sums. We address the general scaling problem as well as some important special cases. In particular, if $A$ has $m$ nonzero entries, and if there exist $X$ and $Y$ with polynomially large entries such that $XAY$ is doubly stochastic, then we can solve the problem in total complexity $\tilde{O}(m + n^{4/3})$. This greatly improves on the best known previous results, which were either $\tilde{O}(n^4)$ or $O(m n^{1/2}/\varepsilon)$. Our algorithms are based on tailor-made first and second order techniques, combined with other recent advances in continuous optimization, which may be of independent interest for solving similar problems.
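
    The paper's algorithms use tailor-made first- and second-order optimization methods; as a minimal sketch of what the scaling problem asks for, the classical Sinkhorn-Knopp alternating normalization below (not the paper's method; the function name and the nonnegativity assumption on $A$ are ours) finds diagonal $X$ and $Y$ such that $XAY$ is $\varepsilon$-approximately doubly stochastic when such a scaling exists.

    ```python
    import numpy as np

    def sinkhorn_scale(A, eps=1e-6, max_iters=100000):
        """Alternately renormalize rows and columns of a nonnegative matrix A so
        that diag(x) @ A @ diag(y) approaches a doubly stochastic matrix."""
        A = np.array(A, dtype=float)
        n = A.shape[0]
        x = np.ones(n)
        y = np.ones(n)
        for _ in range(max_iters):
            x = 1.0 / (A @ y)                    # make row sums of diag(x) A diag(y) equal 1
            y = 1.0 / (A.T @ x)                  # make column sums equal 1
            S = (x[:, None] * A) * y[None, :]    # current scaled matrix
            err = max(np.abs(S.sum(axis=1) - 1).max(),
                      np.abs(S.sum(axis=0) - 1).max())
            if err <= eps:                       # eps-approximately doubly stochastic
                break
        return np.diag(x), np.diag(y)
    ```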