
    New lower bounds on eigenvalue of the Hadamard product of an M-matrix and its inverse

    Some new lower bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse are given. These bounds improve the results of [H.B. Li, T.Z. Huang, S.Q. Shen, H. Li, Lower bounds for the minimum eigenvalue of Hadamard product of an M-matrix and its inverse, Linear Algebra Appl. 420 (2007) 235–247].
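
    As a quick numerical illustration (ours, not from the paper): the NumPy sketch below builds a small M-matrix A, forms the Hadamard product A ∘ A⁻¹, and compares its minimum eigenvalue τ(A ∘ A⁻¹) against the classical Fiedler–Markham bound 1/n that this line of work improves on. The particular matrix A is an arbitrary strictly diagonally dominant example.

```python
import numpy as np

# Arbitrary nonsingular M-matrix: positive diagonal, nonpositive
# off-diagonal entries, strictly diagonally dominant.
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  4.0]])
n = A.shape[0]

H = A * np.linalg.inv(A)                 # Hadamard product A ∘ A^{-1}
tau = np.min(np.linalg.eigvals(H).real)  # minimum eigenvalue of the M-matrix H

print(f"tau(A ∘ A^-1)       = {tau:.4f}")
print(f"Fiedler-Markham 1/n = {1/n:.4f}")  # bounds of this type get improved
```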

    Generalized Descent Methods for Asymmetric Systems of Equations and Variational Inequalities

    We consider generalizations of the steepest descent algorithm for solving asymmetric systems of equations. We first show that if the system is linear and is defined by a matrix M, then the method converges if $M^2$ is positive definite. We also establish easy-to-verify conditions on the matrix M that ensure that $M^2$ is positive definite, and develop a scaling procedure that extends the class of matrices that satisfy the convergence conditions. In addition, we establish a local convergence result for nonlinear systems defined by uniformly monotone maps, and discuss a class of general descent methods. Finally, we show that a variant of the Frank-Wolfe method will solve a certain class of variational inequality problems. All of the methods that we consider reduce to standard nonlinear programming algorithms for equivalent optimization problems when the Jacobian of the underlying problem map is symmetric. We interpret the convergence conditions for the generalized steepest descent algorithms as restricting the degree of asymmetry of the problem map.
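
    To make the linear case concrete, here is a small NumPy sketch (our illustration, not the paper's exact scheme): the formal steepest-descent step x ← x − α(Mx − b) with the classical step length α = rᵀr / rᵀMr is applied to an asymmetric M. The particular matrix, right-hand side, and stopping rule are assumptions for the demo, and we check numerically that the symmetric part of $M^2$ is positive definite, one reading of the convergence condition above.

```python
import numpy as np

# Illustrative asymmetric system M x = b (our choice, not from the paper).
M = np.array([[2.0, -1.0],
              [1.0,  2.0]])
b = np.array([1.0, 3.0])

sym = lambda S: (S + S.T) / 2
# One reading of "M^2 is positive definite" for asymmetric M:
assert np.all(np.linalg.eigvalsh(sym(M @ M)) > 0)

x = np.zeros(2)
for k in range(200):
    r = M @ x - b                   # residual of the asymmetric system
    if np.linalg.norm(r) < 1e-10:
        break
    alpha = (r @ r) / (r @ M @ r)   # classical steepest-descent step length,
    x = x - alpha * r               # applied formally despite M != M^T

print(f"converged in {k} iterations: x = {x}")
print("check:", np.linalg.solve(M, b))
```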

    Matrix Scaling and Balancing via Box Constrained Newton's Method and Interior Point Methods

    In this paper, we study matrix scaling and balancing, which are fundamental problems in scientific computing, with a long line of work on them that dates back to the 1960s. We provide algorithms for both these problems that, ignoring logarithmic factors involving the dimension of the input matrix and the size of its entries, both run in time $\widetilde{O}(m \log \kappa \log^2(1/\epsilon))$, where $\epsilon$ is the amount of error we are willing to tolerate. Here, $\kappa$ represents the ratio between the largest and the smallest entries of the optimal scalings. This implies that our algorithms run in nearly-linear time whenever $\kappa$ is quasi-polynomial, which includes, in particular, the case of strictly positive matrices. We complement our results by providing a separate algorithm that uses an interior-point method and runs in time $\widetilde{O}(m^{3/2} \log(1/\epsilon))$. In order to establish these results, we develop a new second-order optimization framework that enables us to treat both problems in a unified and principled manner. This framework identifies a certain generalization of linear system solving that we can use to efficiently minimize a broad class of functions, which we call second-order robust. We then show that in the context of the specific functions capturing matrix scaling and balancing, we can leverage and generalize the work on Laplacian system solving to make the algorithms obtained via this framework very efficient.
    Comment: To appear in FOCS 2017
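
    For readers new to the problem: matrix scaling asks for positive diagonal matrices X and Y such that XAY has prescribed row and column sums (e.g., is doubly stochastic). The sketch below is the classical first-order Sinkhorn–Knopp iteration, shown only to make the problem concrete; it is not the second-order method of the paper, and the test matrix is an arbitrary strictly positive example (the case where the $\kappa$ above is well behaved).

```python
import numpy as np

def sinkhorn(A, iters=1000, tol=1e-9):
    """Classical Sinkhorn-Knopp scaling (not the paper's algorithm):
    find positive x, y with diag(x) @ A @ diag(y) doubly stochastic."""
    x, y = np.ones(A.shape[0]), np.ones(A.shape[1])
    for _ in range(iters):
        x = 1.0 / (A @ y)            # normalize row sums to 1
        y = 1.0 / (A.T @ x)          # normalize column sums to 1
        S = x[:, None] * A * y[None, :]
        if max(np.abs(S.sum(1) - 1).max(), np.abs(S.sum(0) - 1).max()) < tol:
            return x, y
    return x, y

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])           # strictly positive, hence scalable
x, y = sinkhorn(A)
S = x[:, None] * A * y[None, :]
print(S.sum(axis=1), S.sum(axis=0))  # both ≈ [1, 1]
```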

    Graph Clustering by Flow Simulation
