5 research outputs found

    Convex Graph Invariant Relaxations For Graph Edit Distance

    Get PDF
    The edit distance between two graphs is a widely used measure of similarity that evaluates the smallest number of vertex and edge deletions/insertions required to transform one graph to another. It is NP-hard to compute in general, and a large number of heuristics have been proposed for approximating this quantity. With few exceptions, these methods generally provide upper bounds on the edit distance between two graphs. In this paper, we propose a new family of computationally tractable convex relaxations for obtaining lower bounds on graph edit distance. These relaxations can be tailored to the structural properties of the particular graphs via convex graph invariants. Specific examples that we highlight in this paper include constraints on the graph spectrum as well as (tractable approximations of) the stability number and the maximum-cut values of graphs. We prove under suitable conditions that our relaxations are tight (i.e., exactly compute the graph edit distance) when the spectrum of one of the graphs consists of few distinct eigenvalues. We also validate the utility of our framework on synthetic problems as well as real applications involving molecular structure comparison problems in chemistry.
    Comment: 27 pages, 7 figures
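
    To make concrete the quantity the paper lower-bounds, the sketch below (a minimal illustration assuming Python with networkx, not the authors' relaxation framework) computes the edit distance of two small graphs exactly and via a standard upper-bounding heuristic; the paper's convex relaxations would supply lower bounds on the same quantity.

        # Minimal illustration of graph edit distance (GED); not the paper's method.
        # networkx's exact solver is exponential-time, and optimize_graph_edit_distance
        # yields edit-path costs, i.e. the kind of upper bounds most heuristics provide.
        import networkx as nx

        G1 = nx.cycle_graph(5)   # 5-cycle
        G2 = nx.path_graph(5)    # path on 5 vertices

        exact = nx.graph_edit_distance(G1, G2)                  # exact GED (here: 1.0)
        upper = next(nx.optimize_graph_edit_distance(G1, G2))   # first upper-bound estimate

        # any convex-relaxation lower bound L must satisfy L <= exact <= upper
        print(exact, upper)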

    A Convolutional Neural Network into graph space

    Full text link
    Convolutional neural networks (CNNs) have, within a few decades, outperformed existing state-of-the-art methods in classification settings. However, as originally formalised, CNNs are bound to operate on Euclidean spaces, since convolution is a signal operation defined on Euclidean domains. This has restricted the main use of deep learning to Euclidean data such as sound or images. Yet numerous application fields (including network analysis, computational social science, chemo-informatics, and computer graphics) produce non-Euclidean data such as graphs, networks, or manifolds. In this paper we propose a new convolutional neural network architecture defined directly in graph space, with convolution and pooling operators defined in the graph domain. We show its usability in a back-propagation context. Experimental results show that our model's performance is at the state-of-the-art level on simple tasks, and that it is robust to changes of graph domain and improves on other Euclidean and non-Euclidean convolutional architectures.
    Comment: arXiv admin note: text overlap with arXiv:1611.08402 by other authors
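
    As a point of reference for what a convolution defined directly on a graph domain can look like, here is a minimal sketch of a generic GCN-style propagation layer (assuming Python with numpy); this is a common illustrative operator, not the specific convolution or pooling proposed in the paper.

        # Generic graph-convolution layer of the GCN type (normalised-adjacency
        # propagation); shown only for illustration, NOT this paper's operator.
        import numpy as np

        def graph_conv(A, H, W):
            """One propagation step: D^{-1/2} (A + I) D^{-1/2} H W followed by ReLU."""
            n = A.shape[0]
            A_hat = A + np.eye(n)                    # add self-loops
            d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
            return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

        # toy usage: 4-node graph, 3 input features, 2 output features per node
        A = np.array([[0., 1., 0., 0.],
                      [1., 0., 1., 1.],
                      [0., 1., 0., 1.],
                      [0., 1., 1., 0.]])
        H = np.random.randn(4, 3)
        W = np.random.randn(3, 2)
        print(graph_conv(A, H, W).shape)             # (4, 2)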

    Convex Relaxations for Graph and Inverse Eigenvalue Problems

    Get PDF
    This thesis is concerned with presenting convex optimization based tractable solutions for three fundamental problems: 1. Planted subgraph problem: Given two graphs, identifying the subset of vertices of the larger graph corresponding to the smaller one. 2. Graph edit distance problem: Given two graphs, calculating the number of edge/vertex additions and deletions required to transform one graph into the other. 3. Affine inverse eigenvalue problem: Given a subspace ℰ ⊂ 𝕊ⁿ and a vector of eigenvalues λ ∈ ℝⁿ, finding a symmetric matrix with spectrum λ contained in ℰ. These combinatorial and algebraic problems frequently arise in various application domains such as social networks, computational biology, chemoinformatics, and control theory. Nevertheless, exactly solving them in practice is only possible for very small instances due to their complexity. For each of these problems, we introduce convex relaxations which succeed in providing exact or approximate solutions in a computationally tractable manner. Our relaxations for the two graph problems are based on convex graph invariants, which are functions of graphs that do not depend on a particular labeling. One of these convex relaxations, coined the Schur-Horn orbitope, corresponds to the convex hull of all matrices with a given spectrum, and plays a prominent role in this thesis. Specifically, we utilize relaxations based on the Schur-Horn orbitope in the context of the planted subgraph problem and the graph edit distance problem. For both of these problems, we identify conditions under which the Schur-Horn orbitope based relaxations exactly solve the corresponding problem with overwhelming probability. Specifically, we demonstrate that these relaxations turn out to be particularly effective when the underlying graph has a spectrum comprised of few distinct eigenvalues with high multiplicities. In addition to relaxations based on the Schur-Horn orbitope, we also consider outer-approximations based on other convex graph invariants such as the stability number and the maximum-cut value for the graph edit distance problem. On the other hand, for the inverse eigenvalue problem, we investigate two relaxations arising from a sum of squares hierarchy. These relaxations have different approximation qualities, and accordingly induce different computational costs. We utilize our framework to generate solutions for, or certify unsolvability of, the underlying inverse eigenvalue problem. We particularly emphasize the computational aspect of our relaxations throughout this thesis. We corroborate the utility of our methods with various numerical experiments.
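
    Since the Schur-Horn orbitope is central to the thesis, the sketch below (assuming Python with cvxpy) makes the object concrete via its standard majorization description: a symmetric matrix M lies in the orbitope of a spectrum λ iff trace(M) = Σλ and, for every k, the sum of the k largest eigenvalues of M is at most the sum of the k largest entries of λ. This only illustrates membership in the orbitope; it is not the thesis's planted-subgraph or edit-distance relaxation.

        # Project a symmetric matrix onto the Schur-Horn orbitope of a given spectrum.
        # Illustration only (majorization description of the orbitope); not the
        # thesis's actual relaxations.
        import numpy as np
        import cvxpy as cp

        lam = np.array([3.0, 1.0, 1.0, -1.0])       # target spectrum, sorted descending
        n = lam.size

        M = cp.Variable((n, n), symmetric=True)
        constraints = [cp.trace(M) == lam.sum()]
        constraints += [cp.lambda_sum_largest(M, k) <= lam[:k].sum() for k in range(1, n)]

        # hypothetical target matrix T to project onto the orbitope
        T = np.diag([2.5, 2.5, 0.0, -1.0])
        prob = cp.Problem(cp.Minimize(cp.norm(M - T, "fro")), constraints)
        prob.solve()
        print(np.round(np.linalg.eigvalsh(M.value), 3))  # spectrum is majorized by lam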