
    Nonpositive Eigenvalues of the Adjacency Matrix and Lower Bounds for Laplacian Eigenvalues

    Let $NPO(k)$ be the smallest number $n$ such that the adjacency matrix of any undirected graph with $n$ vertices or more has at least $k$ nonpositive eigenvalues. We show that $NPO(k)$ is well-defined and prove that the values of $NPO(k)$ for $k=1,2,3,4,5$ are $1,3,6,10,16$ respectively. In addition, we prove that for all $k \geq 5$, $R(k,k+1) \geq NPO(k) > T_k$, in which $R(k,k+1)$ is the Ramsey number for $k$ and $k+1$, and $T_k$ is the $k$-th triangular number. This implies new lower bounds for eigenvalues of Laplacian matrices: the $k$-th largest eigenvalue is bounded from below by the $NPO(k)$-th largest degree, which generalizes some prior results.
    Comment: 23 pages, 12 figures
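As a quick numerical sanity check (our own sketch, not from the paper), the claim $NPO(3) = 6$ says that any graph on at least 6 vertices has at least 3 nonpositive adjacency eigenvalues, and the Laplacian bound says the $k$-th largest Laplacian eigenvalue is at least the $NPO(k)$-th largest degree. Both can be verified on the 6-cycle $C_6$:

```python
import numpy as np

def count_nonpositive_adj_eigs(A):
    """Count eigenvalues <= 0 of a symmetric adjacency matrix (with tolerance)."""
    eigs = np.linalg.eigvalsh(A)
    return int(np.sum(eigs <= 1e-9))

def cycle_adjacency(n):
    """Adjacency matrix of the n-cycle C_n."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

A6 = cycle_adjacency(6)
# NPO(3) = 6, so C_6 (6 vertices) must have >= 3 nonpositive adjacency eigenvalues.
print(count_nonpositive_adj_eigs(A6))  # -> 3

# Laplacian bound for k = 3: the 3rd largest Laplacian eigenvalue
# must be at least the NPO(3)-th = 6th largest degree.
L = np.diag(A6.sum(axis=1)) - A6
lap = np.sort(np.linalg.eigvalsh(L))[::-1]   # Laplacian eigenvalues, descending
deg = np.sort(A6.sum(axis=1))[::-1]          # degrees, descending
assert lap[2] >= deg[5] - 1e-9               # 3 >= 2 for C_6
```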

    A nodal domain theorem and a higher-order Cheeger inequality for the graph pp-Laplacian

    We consider the nonlinear graph $p$-Laplacian together with the set of eigenvalues and associated eigenfunctions defined for this operator by a variational principle. We prove a nodal domain theorem for the graph $p$-Laplacian for any $p \geq 1$. While for $p > 1$ the bounds on the number of weak and strong nodal domains are the same as for the linear graph Laplacian ($p = 2$), the behavior changes for $p = 1$. We show that the bounds are tight for $p \geq 1$, as they are attained by the eigenfunctions of the graph $p$-Laplacian on two graphs. Finally, using the properties of the nodal domains, we prove a higher-order Cheeger inequality for the graph $p$-Laplacian for $p > 1$. If the eigenfunction associated to the $k$-th variational eigenvalue of the graph $p$-Laplacian has exactly $k$ strong nodal domains, then the higher-order Cheeger inequality becomes tight as $p \rightarrow 1$.
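The nodal-domain counts in the theorem can be illustrated in the linear case $p = 2$: on a path graph, the $k$-th Laplacian eigenvector is known (by discrete Sturm oscillation) to have exactly $k$ strong nodal domains. A hedged NumPy sketch, where the graph and the zero tolerance are our choices, not the paper's:

```python
import numpy as np

def strong_nodal_domains(adj, f, tol=1e-9):
    """Count strong nodal domains of f: connected components of the
    subgraphs induced by {v : f(v) > 0} and {v : f(v) < 0}."""
    n = len(f)
    seen = [False] * n
    count = 0
    for s in range(n):
        if seen[s] or abs(f[s]) <= tol:
            continue
        count += 1
        sign = np.sign(f[s])
        stack = [s]
        seen[s] = True
        while stack:                       # flood-fill one sign-constant component
            u = stack.pop()
            for v in adj[u]:
                if not seen[v] and abs(f[v]) > tol and np.sign(f[v]) == sign:
                    seen[v] = True
                    stack.append(v)
    return count

# Path graph P_5 and its standard (p = 2) graph Laplacian.
n = 5
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
A = np.array([[1.0 if j in adj[i] else 0.0 for j in range(n)] for i in range(n)])
L = np.diag(A.sum(axis=1)) - A
vals, vecs = np.linalg.eigh(L)             # eigenvalues ascending, all simple here
print([strong_nodal_domains(adj, vecs[:, k]) for k in range(n)])  # -> [1, 2, 3, 4, 5]
```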

    Matrix Scaling and Balancing via Box Constrained Newton's Method and Interior Point Methods

    In this paper, we study matrix scaling and balancing, which are fundamental problems in scientific computing with a long line of work on them that dates back to the 1960s. We provide algorithms for both these problems that, ignoring logarithmic factors involving the dimension of the input matrix and the size of its entries, both run in time $\widetilde{O}\left(m \log \kappa \log^2(1/\epsilon)\right)$, where $\epsilon$ is the amount of error we are willing to tolerate. Here, $\kappa$ represents the ratio between the largest and the smallest entries of the optimal scalings. This implies that our algorithms run in nearly-linear time whenever $\kappa$ is quasi-polynomial, which includes, in particular, the case of strictly positive matrices. We complement our results by providing a separate algorithm that uses an interior-point method and runs in time $\widetilde{O}(m^{3/2} \log(1/\epsilon))$. In order to establish these results, we develop a new second-order optimization framework that enables us to treat both problems in a unified and principled manner. This framework identifies a certain generalization of linear system solving that we can use to efficiently minimize a broad class of functions, which we call second-order robust. We then show that in the context of the specific functions capturing matrix scaling and balancing, we can leverage and generalize the work on Laplacian system solving to make the algorithms obtained via this framework very efficient.
    Comment: To appear in FOCS 201
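For context, the classical 1960s approach to matrix scaling that the abstract alludes to is Sinkhorn's alternating scaling. A minimal baseline sketch (first-order, not the paper's second-order or interior-point methods), assuming a strictly positive input matrix so that convergence is guaranteed:

```python
import numpy as np

def sinkhorn_scale(A, iters=500):
    """Classic Sinkhorn iteration: find positive vectors x, y such that
    diag(x) @ A @ diag(y) is (approximately) doubly stochastic."""
    x = np.ones(A.shape[0])
    y = np.ones(A.shape[1])
    for _ in range(iters):
        x = 1.0 / (A @ y)      # rescale rows so row sums become 1
        y = 1.0 / (A.T @ x)    # rescale columns so column sums become 1
    return x, y

rng = np.random.default_rng(0)
A = rng.uniform(0.5, 2.0, size=(4, 4))   # strictly positive matrix
x, y = sinkhorn_scale(A)
S = np.diag(x) @ A @ np.diag(y)
print(np.allclose(S.sum(axis=1), 1), np.allclose(S.sum(axis=0), 1))  # -> True True
```

Each iteration fixes one side's marginals exactly and perturbs the other; for strictly positive matrices the iterates converge linearly, which is the regime (large $\kappa$ excluded) where the paper's nearly-linear-time guarantee also applies.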

    A Multiscale Pyramid Transform for Graph Signals

    Multiscale transforms designed to process analog and discrete-time signals and images cannot be directly applied to analyze high-dimensional data residing on the vertices of a weighted graph, as they do not capture the intrinsic geometric structure of the underlying graph data domain. In this paper, we adapt the Laplacian pyramid transform for signals on Euclidean domains so that it can be used to analyze high-dimensional data residing on the vertices of a weighted graph. Our approach is to study existing methods and develop new methods for the four fundamental operations of graph downsampling, graph reduction, and the filtering and interpolation of graph signals. Equipped with appropriate notions of these operations, we leverage the basic multiscale constructs and intuitions from classical signal processing to generate a transform that yields both a multiresolution of graphs and an associated multiresolution of a graph signal on the underlying sequence of graphs.
    Comment: 16 pages, 13 figures
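One analysis level of such a pyramid can be sketched as follows. The particular choices below — the low-pass filter $h(L) = (I + L)^{-1}$, downsampling by the sign pattern of the largest Laplacian eigenvector, and reusing the filter columns as the interpolation operator — are illustrative assumptions, not necessarily the paper's constructions; the invertibility comes from storing the prediction error alongside the coarse signal:

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def pyramid_level(A, x):
    """One Laplacian-pyramid analysis step on graph signal x (a sketch)."""
    L = laplacian(A)
    H = np.linalg.inv(np.eye(len(x)) + L)    # simple low-pass filter h(L)
    vals, vecs = np.linalg.eigh(L)
    keep = vecs[:, -1] >= 0                  # downsample: polarity of top eigenvector
    coarse = (H @ x)[keep]                   # filtered signal on the kept vertices
    pred = H[:, keep] @ coarse               # interpolate back to all vertices
    return keep, coarse, x - pred            # coarse signal + prediction error

# 4-cycle example
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
x = np.array([1.0, 2.0, 3.0, 4.0])
keep, coarse, err = pyramid_level(A, x)
# reconstruction: interpolation of the coarse signal plus the stored error
H = np.linalg.inv(np.eye(4) + laplacian(A))
assert np.allclose(H[:, keep] @ coarse + err, x)
```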