
    Forest matrices around the Laplacian matrix

    We study the matrices Q_k of in-forests of a weighted digraph G and their connections with the Laplacian matrix L of G. The (i,j) entry of Q_k is the total weight of spanning converging forests (in-forests) with k arcs such that i belongs to a tree rooted at j. The forest matrices Q_k can be calculated recursively and expressed by polynomials in the Laplacian matrix; they provide representations for the generalized inverses, the powers, and some eigenvectors of L. The normalized in-forest matrices are row stochastic; the normalized matrix of maximum in-forests is the eigenprojection of the Laplacian matrix, which provides an immediate proof of the Markov chain tree theorem. A source of these results is the fact that the matrices Q_k are the matrix coefficients in the polynomial expansion of adj(aI + L); thereby they are precisely Faddeev's matrices for -L.
    Keywords: Weighted digraph; Laplacian matrix; Spanning forest; Matrix-forest theorem; Leverrier-Faddeev method; Markov chain tree theorem; Eigenprojection; Generalized inverse; Singular M-matrix
    Comment: 19 pages, presented at the Edinburgh (2001) Conference on Algebraic Graph Theory
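
    As a concrete illustration (not taken from the paper), the Faddeev recursion for -L can be run directly in NumPy. This is a minimal sketch: the function name forest_matrices is ours, and we assume the row-sum convention L = diag(W1) - W for the Laplacian of a digraph whose arc i -> j carries weight W[i, j]; under that convention every row of Q_k sums to the total weight q_k of k-arc in-forests, so Q_k / q_k is row stochastic.

        import numpy as np

        def forest_matrices(L):
            """In-forest matrices Q_0, ..., Q_{n-1} via the Faddeev recursion
            for -L:  Q_0 = I,  sigma_k = tr(L Q_{k-1}) / k,
            Q_k = sigma_k I - L Q_{k-1},
            so that adj(a I + L) = sum_{k=0}^{n-1} a^(n-1-k) Q_k."""
            n = L.shape[0]
            Q = [np.eye(n)]
            for k in range(1, n):
                LQ = L @ Q[-1]
                sigma = np.trace(LQ) / k
                Q.append(sigma * np.eye(n) - LQ)
            return Q

        # Toy weighted digraph: W[i, j] is the weight of arc i -> j.
        W = np.array([[0., 2., 1.],
                      [1., 0., 0.],
                      [0., 3., 0.]])
        L = np.diag(W.sum(axis=1)) - W
        for k, Qk in enumerate(forest_matrices(L)):
            # Each row of Q_k sums to q_k, the total weight of k-arc in-forests.
            print(k, Qk.sum(axis=1))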

    Some Preconditioning Techniques for Saddle Point Problems

    Saddle point problems arise frequently in many applications in science and engineering, including constrained optimization, mixed finite element formulations of partial differential equations, and circuit analysis. Indeed, the formulation of most problems with constraints gives rise to saddle point systems. This paper provides a concise overview of iterative approaches for the solution of such systems, which are of particular importance in the context of large-scale computation. In particular, we describe some of the most useful preconditioning techniques for Krylov subspace solvers applied to saddle point problems, including block and constrained preconditioners.
    The work of Michele Benzi was supported in part by National Science Foundation grant DMS-0511336.
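
    As a sketch of the block preconditioning approach such surveys cover, the SciPy snippet below applies the classical Murphy-Golub-Wathen block-diagonal preconditioner P = diag(A, B A^{-1} B^T) inside MINRES. The test matrices are random stand-ins of our own making; with the exact blocks used here, preconditioned MINRES converges in at most three iterations, while in practice both blocks would be replaced by cheap spectrally equivalent approximations (e.g. multigrid for A).

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Small synthetic saddle point system K = [[A, B^T], [B, 0]]:
        # A symmetric positive definite, B with full row rank.
        rng = np.random.default_rng(0)
        n, m = 50, 20
        G = rng.standard_normal((n, n))
        A = sp.csc_matrix(G @ G.T + n * np.eye(n))
        B = sp.csc_matrix(rng.standard_normal((m, n)))
        K = sp.bmat([[A, B.T], [B, None]], format="csc")
        rhs = rng.standard_normal(n + m)

        # "Ideal" block-diagonal preconditioner P = diag(A, S) with the
        # Schur complement S = B A^{-1} B^T, factored exactly for clarity.
        A_lu = spla.splu(A)
        S = B @ A_lu.solve(B.T.toarray())   # dense m x m Schur complement

        def apply_P_inv(r):
            # Apply P^{-1} blockwise to the residual r = (r_u, r_p).
            return np.concatenate([A_lu.solve(r[:n]), np.linalg.solve(S, r[n:])])

        P_inv = spla.LinearOperator((n + m, n + m), matvec=apply_P_inv)
        x, info = spla.minres(K, rhs, M=P_inv)
        print("converged:", info == 0)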

    Polynomials with Lorentzian Signature, and Computing Permanents via Hyperbolic Programming

    We study the class of polynomials whose Hessians, evaluated at any point of a closed convex cone, have Lorentzian signature. This class generalizes the remarkable class of Lorentzian polynomials. We prove that hyperbolic polynomials and conic stable polynomials belong to this class, and that the set of polynomials with Lorentzian signature is closed. Finally, we develop a method for computing permanents of nonsingular matrices belonging to a class that includes nonsingular k-locally singular matrices via hyperbolic programming.
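
    The signature condition is easy to test numerically at a single point. Below is a hypothetical NumPy helper (hessian_signature is our name) applied to the elementary symmetric polynomial e_2(x) = x1 x2 + x1 x3 + x2 x3, a standard Lorentzian example whose Hessian is constant; membership in the class of course requires the condition at every point of the cone, not just at one.

        import numpy as np

        def hessian_signature(H, tol=1e-9):
            """Count positive and negative eigenvalues of a symmetric H.
            Lorentzian signature means exactly one positive eigenvalue,
            i.e. signature (+, -, ..., -) up to zeros."""
            eig = np.linalg.eigvalsh(H)
            return int(np.sum(eig > tol)), int(np.sum(eig < -tol))

        # Hessian of e_2 in three variables (constant in x).
        H = np.array([[0., 1., 1.],
                      [1., 0., 1.],
                      [1., 1., 0.]])
        pos, neg = hessian_signature(H)
        print(pos, neg)   # 1 positive, 2 negative -> Lorentzian signature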

    Robust Low-Rank Subspace Segmentation with Semidefinite Guarantees

    Recently, a line of research has proposed employing Spectral Clustering (SC) to segment (group) high-dimensional structural data such as data (approximately) lying on subspaces or low-dimensional manifolds. (Throughout the paper, we use segmentation, clustering, and grouping, and their verb forms, interchangeably; following [liu2010robust], we use the term "subspace" to denote both linear and affine subspaces, between which there is a trivial conversion.) By learning the affinity matrix in the form of sparse reconstruction, techniques proposed in this vein often considerably boost performance in subspace settings where traditional SC can fail. Despite this success, fundamental problems have been left unsolved: the spectral properties of the learned affinity matrix cannot be gauged in advance, and an ugly symmetrization step is often needed to post-process the affinity before it is fed to SC. Hence we advocate enforcing the symmetric positive semidefinite constraint explicitly during learning (Low-Rank Representation with Positive SemiDefinite constraint, or LRR-PSD), and show that it can in fact be solved efficiently by a dedicated scheme rather than by general-purpose SDP solvers, which usually scale poorly. We provide rigorous mathematical derivations showing that, in its canonical form, LRR-PSD is equivalent to the recently proposed Low-Rank Representation (LRR) scheme [liu2010robust], and hence offer theoretical and practical insights into both LRR-PSD and LRR, inviting future research. As for computational cost, our proposal is at most comparable to that of LRR, if not cheaper. We validate our theoretical analysis and optimization scheme by experiments on both synthetic and real data sets.
    Comment: 10 pages, 4 figures. Accepted by ICDM Workshop on Optimization Based Methods for Emerging Data Mining Problems (OEDM), 2010. Main proof simplified and typos corrected. Experimental data slightly added.
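
    To make the symmetrization issue concrete, the NumPy sketch below projects a learned affinity onto the symmetric PSD cone by clipping negative eigenvalues, guaranteeing a valid SC input. This is only an illustration of the constraint being advocated, not the LRR-PSD algorithm itself, which enforces the constraint during learning rather than as a post-processing step.

        import numpy as np

        def psd_project(Z):
            """Project an affinity Z onto the symmetric PSD cone:
            symmetrize, then zero out negative eigenvalues. A post-hoc
            stand-in for ad hoc symmetrizations such as |Z| + |Z|^T."""
            S = (Z + Z.T) / 2.0
            w, V = np.linalg.eigh(S)
            return (V * np.clip(w, 0.0, None)) @ V.T

        # A generic reconstruction-style affinity (not LRR) for illustration.
        rng = np.random.default_rng(1)
        Z = 0.1 * rng.standard_normal((6, 6)) + np.eye(6)
        W = psd_project(Z)
        print(np.all(np.linalg.eigvalsh(W) >= -1e-10))  # True: PSD affinity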