
    Finding effective support-tree preconditioners

    In 1995, Gremban, Miller, and Zagha introduced support-tree preconditioners and a parallel algorithm called support-tree conjugate gradient (STCG) for solving linear systems of the form Ax = b, where A is an n × n Laplacian matrix. A Laplacian is a symmetric matrix in which the off-diagonal entries are non-positive and the row and column sums are zero. A Laplacian A with 2m non-zeros can be interpreted as an undirected positively-weighted graph G with n vertices and m edges, where there is an edge between two nodes i and j with weight c(i, j) = −A_{i,j} = −A_{j,i} whenever A_{i,j} = A_{j,i} < 0. Gremban et al. showed experimentally that STCG performs well on several classes of graphs commonly used in scientific computations. In his thesis, Gremban also proved upper bounds on the number of iterations required for STCG to converge on certain classes of graphs. In this paper, we present an algorithm for finding a preconditioner for an arbitrary graph G = (V, E) with n nodes, m edges, and a weight function c > 0 on the edges, where, w.l.o.g., min_{e∈E} c(e) = 1. Equipped with this preconditioner, STCG requires O(log^4 n · √(∆/α)) iterations, where α = min_{U⊂V, |U|≀|V|/2} c(U, V∖U)/|U| is the minimum edge expansion of the graph and ∆ = max_{v∈V} c(v) is the maximum weight incident on any vertex. Each iteration requires O(m) work and can be implemented in O(log n) parallel steps, using only O(m) space. Our results generalize to symmetric, diagonally dominant (SDD) matrices.
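    To make the quantities in this abstract concrete, here is a small Python sketch (on a toy graph of our own choosing, not from the paper) that assembles a Laplacian from a weighted edge list, verifies the symmetry and zero-row-sum properties, and computes ∆ and α by brute force; the exhaustive cut enumeration is exponential and only meant to illustrate the definitions.

        import numpy as np
        from itertools import combinations

        # Hypothetical weighted undirected graph with c(e) >= 1.
        edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 1.0, (3, 0): 1.0, (0, 2): 3.0}
        n = 4

        # Assemble the Laplacian: A[i, j] = -c(i, j) off the diagonal, rows sum to zero.
        A = np.zeros((n, n))
        for (i, j), w in edges.items():
            A[i, j] = A[j, i] = -w
            A[i, i] += w
            A[j, j] += w
        assert np.allclose(A, A.T) and np.allclose(A.sum(axis=1), 0)

        # Delta: maximum weight incident on any vertex (the largest diagonal entry).
        Delta = A.diagonal().max()

        # alpha: minimum edge expansion, min over cuts U with |U| <= n/2 of c(U, V\U)/|U|.
        alpha = min(
            sum(w for (i, j), w in edges.items() if (i in U) != (j in U)) / len(U)
            for k in range(1, n // 2 + 1)
            for U in map(set, combinations(range(n), k))
        )
        print(f"Delta = {Delta}, alpha = {alpha}")  # enters the O(log^4 n * sqrt(Delta/alpha)) bound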

    Matrix Scaling and Balancing via Box Constrained Newton's Method and Interior Point Methods

    In this paper, we study matrix scaling and balancing, which are fundamental problems in scientific computing with a long line of work on them dating back to the 1960s. We provide algorithms for both of these problems that, ignoring logarithmic factors involving the dimension of the input matrix and the size of its entries, run in time Õ(m log Îș log^2(1/Ï”)), where Ï” is the amount of error we are willing to tolerate. Here, Îș represents the ratio between the largest and the smallest entries of the optimal scalings. This implies that our algorithms run in nearly-linear time whenever Îș is quasi-polynomial, which includes, in particular, the case of strictly positive matrices. We complement our results with a separate algorithm that uses an interior-point method and runs in time Õ(m^{3/2} log(1/Ï”)). To establish these results, we develop a new second-order optimization framework that enables us to treat both problems in a unified and principled manner. This framework identifies a certain generalization of linear system solving that we can use to efficiently minimize a broad class of functions, which we call second-order robust. We then show that, for the specific functions capturing matrix scaling and balancing, we can leverage and generalize the work on Laplacian system solving to make the algorithms obtained via this framework very efficient.
    Comment: To appear in FOCS 2017
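    For a feel of the matrix scaling problem itself, the Python sketch below runs the classical Sinkhorn alternating iteration, which finds diagonal scalings making a strictly positive matrix doubly stochastic. This is only the textbook first-order baseline, not the box-constrained Newton method of the paper, and the dimensions and tolerance are arbitrary choices.

        import numpy as np

        def sinkhorn_scale(A, tol=1e-10, max_iter=10_000):
            # Alternately rescale rows and columns so that
            # diag(x) @ A @ diag(y) approaches a doubly stochastic matrix.
            x, y = np.ones(A.shape[0]), np.ones(A.shape[1])
            for _ in range(max_iter):
                y = 1.0 / (A.T @ x)   # make column sums equal to 1
                x = 1.0 / (A @ y)     # make row sums equal to 1 (exactly)
                S = A * np.outer(x, y)
                if abs(S.sum(axis=0) - 1).max() < tol:
                    break
            return x, y

        rng = np.random.default_rng(1)
        A = rng.uniform(0.5, 2.0, size=(5, 5))   # strictly positive input
        x, y = sinkhorn_scale(A)
        S = A * np.outer(x, y)
        print(S.sum(axis=0), S.sum(axis=1))      # both close to all-ones

    Sinkhorn's iteration is known to converge slowly when Îș (the spread of the optimal scalings) is large, which is precisely the regime where a second-order approach like the paper's pays off.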

    Combinatorial problems in solving linear systems

    Numerical linear algebra and combinatorial optimization are vast subjects, as is their interaction. In virtually all cases there should be a notion of sparsity for a combinatorial problem to arise. Sparse matrices therefore form the basis of the interaction of these two seemingly disparate subjects. As the core of many of today's numerical linear algebra computations consists of the solution of sparse linear systems by direct or iterative methods, we survey some combinatorial problems, ideas, and algorithms relating to these computations. On the direct methods side, we discuss issues such as matrix ordering; bipartite matching and matrix scaling for better pivoting; and task assignment and scheduling for parallel multifrontal solvers. On the iterative methods side, we discuss preconditioning techniques, including incomplete factorization preconditioners, support graph preconditioners, and algebraic multigrid. In a separate part, we discuss the block triangular form of sparse matrices.
    Comment: 42 pages, available as LIP research report RR-2009-15
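    As one concrete instance of the matrix-ordering theme the survey covers, the short Python sketch below applies SciPy's reverse Cuthill–McKee reordering to a small symmetric sparsity pattern (invented here for illustration) and reports the bandwidth before and after.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.csgraph import reverse_cuthill_mckee

        # A small symmetric sparsity pattern with scattered nonzeros (hypothetical).
        n = 8
        rows = [0, 2, 1, 5, 3, 7, 4, 6, 0, 4]
        cols = [2, 0, 5, 1, 7, 3, 6, 4, 4, 0]
        A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
        A = (A + sp.eye(n)).tocsr()

        perm = reverse_cuthill_mckee(A, symmetric_mode=True)
        B = A[perm][:, perm]   # symmetric permutation P A P^T

        def bandwidth(M):
            C = M.tocoo()
            return np.abs(C.row - C.col).max()

        print("bandwidth before:", bandwidth(A), "after:", bandwidth(B))

    Orderings like this limit fill-in during factorization; the survey's other direct-method topics (matching-based pivoting, multifrontal scheduling) follow the same pattern of recasting a numerical concern as a graph problem.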

    Relative Perturbation Theory for Diagonally Dominant Matrices

    Diagonally dominant matrices arise in many applications. In this work, we exploit the structure of diagonally dominant matrices to provide sharp entrywise relative perturbation bounds. We first generalize the results of Dopico and Koev to provide relative perturbation bounds for the LDU factorization with a well-conditioned L factor. We then establish relative perturbation bounds for the inverse that are entrywise and independent of the condition number. This allows us to also present relative perturbation bounds for the linear system Ax = b that are independent of the condition number. Lastly, we continue the work of Ye to provide relative perturbation bounds for the eigenvalues of symmetric indefinite matrices and non-symmetric matrices.
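    The condition-number independence claimed here can be observed numerically. The Python sketch below (our own toy experiment, not the paper's algorithm) parametrizes a diagonally dominant matrix by its off-diagonal entries and its diagonally dominant parts v_i = a_ii − Σ_{j≠i} |a_ij|, perturbs that data entrywise by a relative ÎŽ = 1e-7, and compares the resulting change in the solution of Ax = b with the naive Îș·ή bound.

        import numpy as np

        def assemble(off, v):
            # Tridiagonal diagonally dominant matrix from its parametrization:
            # off-diagonal entries `off` and dominant parts v_i = a_ii - sum_j |a_ij|.
            A = np.diag(off, 1) + np.diag(off, -1)
            row_abs = np.abs(np.pad(off, (1, 0))) + np.abs(np.pad(off, (0, 1)))
            return A + np.diag(v + row_abs)

        n = 50
        off = -np.ones(n - 1)        # path-graph off-diagonals
        v = np.full(n, 1e-6)         # tiny dominance parts -> ill-conditioned matrix

        rng = np.random.default_rng(0)
        b = rng.standard_normal(n)
        A = assemble(off, v)
        x = np.linalg.solve(A, b)

        delta = 1e-7                 # entrywise relative perturbation of the data
        At = assemble(off * (1 + delta * rng.uniform(-1, 1, n - 1)),
                      v * (1 + delta * rng.uniform(-1, 1, n)))
        xt = np.linalg.solve(At, b)

        kappa = np.linalg.cond(A)
        print("kappa:", kappa, " naive bound kappa*delta:", kappa * delta)
        print("observed relative change:", np.linalg.norm(xt - x) / np.linalg.norm(x))

    With Îș around 4·10^6 here, the naive bound Îș·ή is of order 0.4, while the observed change stays close to ÎŽ itself (up to a modest factor), which is the behavior the entrywise bounds predict.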