A simple yet effective baseline for non-attributed graph classification
Graphs are complex objects that do not lend themselves easily to typical
learning tasks. Recently, a range of approaches based on graph kernels or graph
neural networks have been developed for graph classification and for
representation learning on graphs in general. As the developed methodologies
become more sophisticated, it is important to understand which components of
the increasingly complex methods are necessary or most effective.
As a first step, we develop a simple yet meaningful graph representation and explore its effectiveness for graph classification, testing our baseline representation on a range of graph datasets. Interestingly, this simple representation achieves performance comparable to state-of-the-art graph kernels and graph neural networks for non-attributed graph classification. Its performance on attributed graphs is slightly weaker, as it does not incorporate attributes. However, given its simplicity and efficiency, we believe it still serves as an effective baseline for attributed graph classification as well. Our graph representation can be computed in linear time. We also draw a simple connection to graph neural networks.
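To make this concrete, here is a minimal sketch of a linear-time, degree-based graph summary in the spirit of the baseline described above. It is an illustration only, not the paper's exact pipeline: the degree-histogram feature, the bin count, and the linear SVM classifier are all our own assumptions, using networkx and scikit-learn.

```python
# Hypothetical sketch: summarize each graph by the empirical distribution
# of its node degrees, then classify the resulting fixed-length vectors.
# Assumptions (not from the paper): degree-histogram feature, linear SVM.
import networkx as nx
import numpy as np
from sklearn.svm import LinearSVC

def degree_histogram_feature(G: nx.Graph, num_bins: int = 32) -> np.ndarray:
    """Histogram of node degrees; computing all degrees is O(|V| + |E|),
    so the summary is linear-time in the size of the graph."""
    degrees = np.array([d for _, d in G.degree()], dtype=float)
    hist, _ = np.histogram(degrees, bins=num_bins, range=(0, num_bins),
                           density=True)
    return hist

# Toy usage: cycles and stars are separable from degree statistics alone.
graphs = [nx.cycle_graph(n) for n in range(10, 20)] + \
         [nx.star_graph(n) for n in range(10, 20)]
labels = [0] * 10 + [1] * 10
X = np.stack([degree_histogram_feature(G) for G in graphs])
clf = LinearSVC().fit(X, labels)
print(clf.score(X, labels))
```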
Note that these observations apply only to the task of graph classification, while existing methods are often designed for a broader scope, including node embedding and link prediction. The results are also likely biased by the limited number of benchmark datasets available. Nevertheless, the good
performance of our simple baseline calls for the development of new, more
comprehensive benchmark datasets so as to better evaluate and analyze different
graph learning methods. Furthermore, given the computational efficiency of our
graph summary, we believe that it is a good candidate as a baseline method for
future graph classification (or even other graph learning) studies.
Comment: 13 pages. A shorter version appears at the 2019 ICLR Workshop on Representation Learning on Graphs and Manifolds.
FPT-Algorithms for Computing Gromov-Hausdorff and Interleaving Distances Between Trees
The Gromov-Hausdorff distance is a natural way to measure the distortion between two metric spaces. However, there has been only limited algorithmic development to compute or approximate this distance. We focus on computing the Gromov-Hausdorff distance between two metric trees. Roughly speaking, a metric tree is a metric space that can be realized by the shortest-path metric on a tree. Any finite tree with positive edge weights can be viewed as a metric tree, where each weight is treated as an edge length and the metric is the induced shortest-path metric on the tree. Previously, Agarwal et al. showed that even for trees with unit edge lengths, it is NP-hard to approximate the Gromov-Hausdorff distance between them within a factor of 3. In this paper, we present a fixed-parameter tractable (FPT) algorithm that approximates the Gromov-Hausdorff distance between two general metric trees within a multiplicative factor of 14.
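For reference, the standard formulation of the Gromov-Hausdorff distance via correspondences is sketched below; this is the textbook definition, not notation taken from the paper itself.

```latex
% A correspondence C between metric spaces (X, d_X) and (Y, d_Y) is a
% subset of X x Y whose projections to X and to Y are both surjective.
% The Gromov-Hausdorff distance is half the smallest achievable distortion:
\[
  d_{\mathrm{GH}}(X, Y)
  \;=\;
  \frac{1}{2}\,
  \inf_{C}\;
  \sup_{(x,y),\,(x',y') \,\in\, C}
  \bigl|\, d_X(x, x') - d_Y(y, y') \,\bigr|.
\]
```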
Interestingly, the development of our algorithm is made possible by a connection between the Gromov-Hausdorff distance for metric trees and the interleaving distance for so-called merge trees. Merge trees arise naturally in practice as a simple yet meaningful topological summary (a variant of Reeb graphs and contour trees) and are of independent interest. It turns out that an exact or approximation algorithm for the interleaving distance leads to an approximation algorithm for the Gromov-Hausdorff distance. One of the key contributions of our work is that we re-define the interleaving distance in a way that makes it easier to develop dynamic-programming approaches to compute it. We then present a fixed-parameter tractable algorithm that computes the interleaving distance between two merge trees exactly, which ultimately leads to an FPT algorithm for approximating the Gromov-Hausdorff distance between two metric trees. This exact FPT algorithm for the interleaving distance is of independent interest, as the interleaving distance is known to be NP-hard to approximate within a factor of 3, and the best previously known algorithm had an approximation factor of O(sqrt{n}) even for trees with unit edge lengths.
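For context, a commonly used formulation of the interleaving distance between merge trees is sketched below; this is the standard definition from the merge-tree literature, while the paper's equivalent re-definition is organized differently to support dynamic programming.

```latex
% Points of a merge tree T_f carry the function value of f. A pair of
% continuous maps alpha : T_f -> T_g and beta : T_g -> T_f is an
% eps-interleaving if each shifts function values up by exactly eps,
%   g(alpha(x)) = f(x) + eps   and   f(beta(y)) = g(y) + eps,
% and the two compositions agree with the 2*eps upward shift maps
% on T_f and T_g, respectively. Then
\[
  d_{\mathrm{I}}(T_f, T_g)
  \;=\;
  \inf \bigl\{ \varepsilon \ge 0 \;:\;
    \text{an } \varepsilon\text{-interleaving between } T_f
    \text{ and } T_g \text{ exists} \bigr\}.
\]
```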
Beyond Hartigan Consistency: Merge Distortion Metric for Hierarchical Clustering
Hierarchical clustering is a popular method of data analysis that associates a tree to a dataset. Hartigan consistency has been used extensively as a framework for analyzing such clustering algorithms from a statistical point of view. Still, as we show in the paper, a tree that is Hartigan consistent with a given density can look very different from the correct limit tree. Specifically, Hartigan consistency permits two types of undesirable configurations, which we term over-segmentation and improper nesting. Moreover, Hartigan consistency is a limit property and does not directly quantify the difference between trees.
In this paper we identify two limit properties, separation and minimality,
which address both over-segmentation and improper nesting and together imply
(but are not implied by) Hartigan consistency. We proceed to introduce a merge
distortion metric between hierarchical clusterings and show that convergence in
our distance implies both separation and minimality. We also prove that uniform
separation and minimality imply convergence in the merge distortion metric.
Furthermore, we show that our merge distortion metric is stable under
perturbations of the density.
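To make the metric concrete, here is a sketch in standard notation; the paper's precise definition may differ in details such as the domain over which the supremum is taken.

```latex
% For a hierarchical clustering T of a set X with a height function
% (e.g. a density), let m_T(x, y) be the merge height of x and y:
% the greatest height at which x and y lie in a common cluster of T.
% The merge distortion between two trees over the same points is the
% sup-difference of merge heights:
\[
  d_{\mathrm{MD}}(T, T')
  \;=\;
  \sup_{x,\,y \,\in\, X}
  \bigl|\, m_T(x, y) - m_{T'}(x, y) \,\bigr|.
\]
```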
Finally, we demonstrate the applicability of these concepts by proving convergence results for two clustering algorithms. First, we show convergence (and hence separation and minimality) of the recent robust single linkage algorithm of Chaudhuri and Dasgupta (2010). Second, we provide convergence results on manifolds for topological split tree clustering.