
    Fast depth-based subgraph kernels for unattributed graphs

    In this paper, we investigate two fast subgraph kernels based on a depth-based representation of graph structure. Both methods gauge depth information through a family of K-layer expansion subgraphs rooted at a vertex [1]. The first method commences by computing a centroid-based complexity trace for each graph, using a depth-based representation rooted at the centroid vertex, i.e., the vertex with minimum shortest-path length variance to the remaining vertices [2]. This subgraph kernel is computed by measuring the Jensen-Shannon divergence between centroid-based complexity entropy traces. The second method, on the other hand, computes a depth-based representation around each vertex in turn, and the corresponding subgraph kernel is computed using isomorphism tests to compare the depth-based representations rooted at the different vertices. For graphs with n vertices, the time complexities of the two new kernels are O(n²) and O(n³), in contrast to O(n⁶) for the classic Gärtner graph kernel [3]. Key to achieving this efficiency is that we compute the required Shannon entropy of the random walk for our kernels with O(n²) operations. This computational strategy enables our subgraph kernels to scale up to reasonably large graphs and thus overcome the size limits arising in state-of-the-art graph kernels. Experiments on standard bioinformatics and computer vision graph datasets demonstrate the effectiveness and efficiency of our new subgraph kernels.
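
    As a rough illustration of the ingredients above, the sketch below locates a centroid vertex by minimum shortest-path length variance, builds a K-layer entropy trace from the random-walk (degree) distribution on each expansion subgraph, and compares two traces with a Jensen-Shannon-style divergence. The trace normalisation and the final negative exponential are simplifying assumptions for illustration, not the authors' exact construction; connected graphs are assumed.

        import numpy as np
        import networkx as nx

        def centroid_vertex(G):
            # vertex whose shortest-path lengths to the remaining vertices have minimum variance
            lengths = dict(nx.all_pairs_shortest_path_length(G))
            return min(G.nodes(), key=lambda v: np.var([lengths[v][u] for u in G.nodes()]))

        def entropy_trace(G, K):
            # Shannon entropy of the random-walk (degree) distribution on each K-layer
            # expansion subgraph rooted at the centroid vertex
            c = centroid_vertex(G)
            dist = nx.single_source_shortest_path_length(G, c)
            trace = []
            for k in range(1, K + 1):
                H = G.subgraph([v for v, d in dist.items() if d <= k])
                deg = np.array([d for _, d in H.degree()], dtype=float)
                p = deg / deg.sum()
                p = p[p > 0]
                trace.append(-(p * np.log(p)).sum())
            return np.array(trace)

        def js_trace_kernel(G1, G2, K=5):
            # Jensen-Shannon divergence between the normalised entropy traces,
            # mapped to a kernel value via a negative exponential (an illustrative choice)
            t1, t2 = entropy_trace(G1, K), entropy_trace(G2, K)
            p, q = t1 / t1.sum(), t2 / t2.sum()
            m = 0.5 * (p + q)
            shannon = lambda x: -(x[x > 0] * np.log(x[x > 0])).sum()
            return np.exp(-(shannon(m) - 0.5 * (shannon(p) + shannon(q))))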

    An R-convolution Graph Kernel based on Fast Discrete-Time Quantum Walk

    In this paper, a novel R-convolution kernel, named the fast quantum walk kernel (FQWK), is proposed for unattributed graphs. In FQWK, the similarity of the neighborhood-pair substructure between two nodes is measured via the superposition amplitude of quantum walks between those nodes. The quantum interference in these local substructures provides more information about the substructures, so that FQWK can capture finer-grained local structural features of graphs. In addition, a fast recursive method is designed to efficiently compute the transition amplitudes of multi-step discrete-time quantum walks. Compared with existing kernels based on the quantum walk, FQWK therefore has the highest computation speed. Extensive experiments demonstrate that FQWK outperforms state-of-the-art graph kernels in terms of classification accuracy for unattributed graphs. Meanwhile, it can distinguish a larger family of graphs, including cospectral graphs, regular graphs, and even strongly regular graphs, which are not distinguishable by classical walk-based methods.
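
    The paper's fast recursive amplitude computation is not reproduced here, but the kind of quantity FQWK compares can be sketched with a standard Grover-coined discrete-time quantum walk: the step operator acts on the directed arcs of the graph, and repeated application yields multi-step transition amplitudes. The uniform initial state and the step count below are illustrative assumptions, not the authors' choices.

        import numpy as np
        import networkx as nx

        def dtqw_step_operator(G):
            # unitary U = S·C of a Grover-coined discrete-time quantum walk;
            # basis states are the directed arcs u->v of the (undirected) graph
            arcs = [(u, v) for u in G for v in G[u]]
            idx = {a: i for i, a in enumerate(arcs)}
            U = np.zeros((len(arcs), len(arcs)))
            for u in G:
                nbrs = list(G[u])
                d = len(nbrs)
                for v in nbrs:            # incoming coin state |u->v>
                    for w in nbrs:        # Grover coin, then shift |u->w> -> |w->u>
                        U[idx[(w, u)], idx[(u, v)]] = 2.0 / d - (1.0 if v == w else 0.0)
            return U, arcs

        def dtqw_amplitudes(G, steps):
            # transition amplitudes after `steps` applications of U, from a uniform start state
            U, arcs = dtqw_step_operator(G)
            psi = np.ones(len(arcs), dtype=complex) / np.sqrt(len(arcs))
            for _ in range(steps):
                psi = U @ psi
            return psi, arcs

        # example: amplitudes of a 3-step walk on a small cycle graph
        psi, arcs = dtqw_amplitudes(nx.cycle_graph(5), steps=3)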

    A nested alignment graph kernel through the dynamic time warping framework

    In this paper, we propose a novel nested alignment graph kernel drawing on depth-based complexity traces and the dynamic time warping framework. Specifically, for a pair of graphs, we commence by computing the depth-based complexity traces rooted at the centroid vertices. The resulting kernel for the graphs is defined by measuring the global alignment kernel, which is developed through the dynamic time warping framework, between the complexity traces. We show that the proposed kernel not only considers the local and global graph characteristics simultaneously in terms of the complexity traces, but also provides richer statistical measures by incorporating the whole spectrum of alignment costs between these traces. Our experiments demonstrate the effectiveness and efficiency of the proposed kernel.
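
    The global alignment kernel from the dynamic time warping framework can be written as a short dynamic program that sums local-kernel products over all monotone alignments of the two complexity traces. The sketch below is generic; the Gaussian local kernel and its bandwidth are placeholders rather than the paper's exact choices.

        import numpy as np

        def global_alignment_kernel(x, y, sigma=1.0):
            # global alignment kernel between two 1-D traces x and y: sums the products
            # of local kernel values over all monotone alignments via dynamic programming
            n, m = len(x), len(y)
            M = np.zeros((n + 1, m + 1))
            M[0, 0] = 1.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    k_local = np.exp(-(x[i - 1] - y[j - 1]) ** 2 / (2.0 * sigma ** 2))
                    M[i, j] = k_local * (M[i - 1, j] + M[i, j - 1] + M[i - 1, j - 1])
            return M[n, m]

        # example on two short complexity traces (illustrative values)
        k = global_alignment_kernel(np.array([0.7, 1.2, 1.9]), np.array([0.6, 1.1, 1.8, 2.0]))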

    Information Theoretic Graph Kernels

    This thesis addresses the problems that arise in state-of-the-art structural learning methods for (hyper)graph classification or clustering, focusing in particular on developing novel information theoretic kernels for graphs. To this end, we commence in Chapter 3 by defining a family of Jensen-Shannon diffusion kernels, i.e., information theoretic kernels, for (un)attributed graphs. We show that our kernels overcome the shortcomings of inefficiency (for the unattributed diffusion kernel) and of discarding non-isomorphic substructures (for the attributed diffusion kernel) that arise in R-convolution kernels. In Chapter 4, we present a novel framework for computing depth-based complexity traces rooted at the centroid vertices of graphs, which can be computed efficiently even for large graphs. We show that our methods can characterize a graph in a higher-dimensional complexity feature space than state-of-the-art complexity measures. In Chapter 5, building on the contribution of Chapter 4, we develop a novel unattributed graph kernel by matching the depth-based substructures in graphs. Unlike most existing graph kernels in the literature, which merely enumerate similar substructure pairs of limited sizes, our method incorporates explicit local substructure correspondence into the process of kernelization. The new kernel thus overcomes the shortcoming of neglecting structural correspondence that arises in most state-of-the-art graph kernels. The methods developed in Chapters 3, 4, and 5 are restricted to graphs, whereas real-world data often involves higher-order relationships (i.e., hypergraphs). To overcome this limitation, in Chapter 6 we present a new hypergraph kernel using substructure isomorphism tests. We show that our kernel limits the tottering that arises in existing walk- and subtree-based (hyper)graph kernels. In Chapter 7, we summarize the contributions of this thesis, analyze the proposed methods, and give some suggestions for future work.

    Shift Aggregate Extract Networks

    We introduce an architecture based on deep hierarchical decompositions to learn effective representations of large graphs. Our framework extends the classic R-decompositions used in kernel methods, enabling nested "part-of-part" relations. Unlike recursive neural networks, which unroll a template on the input graphs directly, we unroll a neural network template over the decomposition hierarchy, allowing us to deal with the high degree variability that typically characterizes social network graphs. Deep hierarchical decompositions are also amenable to domain compression, a technique that reduces both space and time complexity by exploiting symmetries. We show empirically that our approach is competitive with current state-of-the-art graph classification methods, particularly when dealing with social network datasets.
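
    One schematic reading of a shift-aggregate-extract recursion over a decomposition hierarchy is sketched below: each object's vector is obtained by shifting its parts' vectors into role-specific slots, summing them, and compressing the sum with a learned map. The recursion, the role-slot encoding, and the callable "extract" are stand-ins for the paper's neural network template, not the authors' implementation.

        import numpy as np

        def sae_embed(node, children, h_leaf, extract, n_roles, dim):
            # children: dict mapping an object to a list of (role, sub-object) pairs
            # h_leaf:   dict giving vectors for atomic objects (e.g. single vertices)
            # extract:  any callable mapping R^(n_roles*dim) -> R^dim
            #           (in the paper this would be a learned neural template)
            if node in h_leaf:                      # atomic part: no further decomposition
                return h_leaf[node]
            agg = np.zeros(n_roles * dim)
            for role, child in children[node]:
                h_child = sae_embed(child, children, h_leaf, extract, n_roles, dim)
                shifted = np.zeros(n_roles * dim)   # "shift": place the child vector in its role slot
                shifted[role * dim:(role + 1) * dim] = h_child
                agg += shifted                      # "aggregate": sum over parts
            return extract(agg)                     # "extract": compress with a (neural) map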

    QESK: Quantum-based Entropic Subtree Kernels for Graph Classification

    In this paper, we propose a novel graph kernel, namely the Quantum-based Entropic Subtree Kernel (QESK), for graph classification. To this end, we commence by computing the Average Mixing Matrix (AMM) of the Continuous-time Quantum Walk (CTQW) evolved on each graph structure. Moreover, we show how the AMM can be employed to compute a series of entropic subtree representations associated with the classical Weisfeiler-Lehman (WL) algorithm. For a pair of graphs, the QESK kernel is defined by computing the exponentiation of the negative Euclidean distance between their entropic subtree representations, theoretically resulting in a positive definite graph kernel. We show that the proposed QESK kernel not only encapsulates complicated intrinsic quantum-based structural characteristics of graph structures through the CTQW, but also theoretically addresses the shortcoming of ignoring the effects of unshared substructures that arises in state-of-the-art R-convolution graph kernels. Moreover, unlike classical R-convolution kernels, QESK can discriminate between isomorphic subtrees in terms of the global graph structure, theoretically explaining its effectiveness. Experiments indicate that the proposed QESK kernel can significantly outperform state-of-the-art graph kernels and graph deep learning methods for graph classification problems.
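
    Two of the building blocks above are easy to sketch: the average mixing matrix of a CTQW (the sum of the entrywise squares of the Hamiltonian's spectral projectors, here taking the adjacency matrix as Hamiltonian) and the final kernel value obtained by exponentiating the negative Euclidean distance between two entropic subtree feature vectors. The WL-based construction of those feature vectors is omitted, and the scaling parameter lam is an assumption.

        import numpy as np

        def average_mixing_matrix(A):
            # average mixing matrix of a CTQW with Hamiltonian A: sum over distinct
            # eigenvalues of the entrywise (Schur) square of the spectral projector
            evals, evecs = np.linalg.eigh(A)
            key = np.round(evals, 8)
            M = np.zeros_like(A, dtype=float)
            for ev in np.unique(key):
                V = evecs[:, key == ev]
                E = V @ V.T          # projector onto the eigenspace of ev
                M += E * E           # Schur product E ∘ E
            return M

        def qesk_like_kernel(phi1, phi2, lam=0.5):
            # exponentiation of the negative Euclidean distance between two
            # (precomputed) entropic subtree feature vectors
            return np.exp(-lam * np.linalg.norm(phi1 - phi2))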

    A Survey on Graph Kernels

    Graph kernels have become an established and widely-used technique for solving classification tasks on graphs. This survey gives a comprehensive overview of techniques for kernel-based graph classification developed in the past 15 years. We describe and categorize graph kernels based on properties inherent to their design, such as the nature of their extracted graph features, their method of computation, and their applicability to problems in practice. In an extensive experimental evaluation, we study the classification accuracy of a large suite of graph kernels on established benchmarks as well as new datasets. We compare the performance of popular kernels with several baseline methods and study the effect of applying a Gaussian RBF kernel to the metric induced by a graph kernel. In doing so, we find that simple baselines become competitive after this transformation on some datasets. Moreover, we study the extent to which existing graph kernels agree in their predictions (and prediction errors) and obtain a data-driven categorization of kernels as a result. Finally, based on our experimental results, we derive a practitioner's guide to kernel-based graph classification.
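
    The transformation studied in the survey, applying a Gaussian RBF kernel to the metric induced by a graph kernel, operates directly on a precomputed kernel matrix. The sketch below is one standard way to do it; the bandwidth parameter gamma is a free choice, not a value from the survey.

        import numpy as np

        def rbf_on_kernel_metric(K, gamma=0.1):
            # metric induced by a positive semi-definite kernel matrix K:
            # d(i, j)^2 = K_ii + K_jj - 2 K_ij, then K'_ij = exp(-gamma * d(i, j)^2)
            diag = np.diag(K)
            d2 = diag[:, None] + diag[None, :] - 2.0 * K
            d2 = np.maximum(d2, 0.0)   # clip small negative values caused by rounding
            return np.exp(-gamma * d2)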

    Deep depth-based representations of graphs through deep learning networks

    Graph-based representations are powerful tools in structural pattern recognition and machine learning. In this paper, we propose a framework for computing deep depth-based representations of graph structures. Our work links the ideas of graph complexity measures and deep learning networks. Specifically, for a set of graphs, we commence by computing a depth-based representation rooted at each vertex, giving one vector per vertex. In order to identify an informative subset of these representations, we employ the well-known k-means method to identify M dominant centroids of the depth-based representation vectors as prototype representations. To avoid the burdensome computation of using the depth-based representations of all graphs, we use the prototype representations to train a deep autoencoder network, which is optimized using stochastic gradient descent together with a Deep Belief Network for pretraining. By feeding the depth-based representations of the vertices of all graphs into the trained deep network, we compute a deep representation for each vertex. The resulting deep depth-based representation of a graph is computed by averaging the deep representations of its complete set of vertices. We theoretically demonstrate that the deep depth-based representations of graphs not only reflect both the local and global characteristics of graphs through the depth-based representations, but also capture the main structural relationships and information content over all graphs under investigation. Experimental evaluations demonstrate the effectiveness of the proposed method.
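
    Assuming the per-vertex depth-based vectors have already been computed, the pipeline described above might look roughly as follows: k-means selects M prototype representations, a small autoencoder (a plain SGD-trained stand-in for the DBN-pretrained network in the paper) is fitted on the prototypes, and each graph is represented by the mean of its encoded vertex vectors. All layer sizes and training settings are illustrative.

        import numpy as np
        import torch
        import torch.nn as nn
        from sklearn.cluster import KMeans

        def deep_depth_representations(vertex_vecs_per_graph, M=64, hidden=32, epochs=200):
            # vertex_vecs_per_graph: list of (n_i, d) arrays of depth-based vertex vectors
            # returns one deep representation per graph (mean of its encoded vertex vectors)
            X = np.vstack(vertex_vecs_per_graph).astype(np.float32)
            d = X.shape[1]

            # 1. prototype selection: M dominant centroids of all vertex vectors
            #    (assumes at least M vertex vectors in total)
            prototypes = KMeans(n_clusters=M, n_init=10).fit(X).cluster_centers_.astype(np.float32)

            # 2. train a plain autoencoder on the prototypes with SGD
            enc = nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
            dec = nn.Sequential(nn.Linear(hidden, d))
            opt = torch.optim.SGD(list(enc.parameters()) + list(dec.parameters()), lr=0.01)
            loss_fn = nn.MSELoss()
            P = torch.from_numpy(prototypes)
            for _ in range(epochs):
                opt.zero_grad()
                loss = loss_fn(dec(enc(P)), P)
                loss.backward()
                opt.step()

            # 3. encode every vertex vector and average within each graph
            with torch.no_grad():
                return [enc(torch.from_numpy(V.astype(np.float32))).mean(dim=0).numpy()
                        for V in vertex_vecs_per_graph]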

    Quantum kernels for unattributed graphs using discrete-time quantum walks

    In this paper, we develop a new family of graph kernels where the graph structure is probed by means of a discrete-time quantum walk. Given a pair of graphs, we let a quantum walk evolve on each graph and compute a density matrix for each walk. With the density matrices for the pair of graphs to hand, the kernel between the graphs is defined as the negative exponential of the quantum Jensen-Shannon divergence between their density matrices. In order to cope with large graph structures, we propose to construct a sparser version of the original graphs using the simplification method introduced in Qiu and Hancock (2007). To this end, we compute the minimum spanning tree over the commute time matrix of a graph. This spanning tree representation minimizes the number of edges of the original graph while preserving most of its structural information. The kernel between two graphs is then computed on their respective minimum spanning trees. We evaluate the performance of the proposed kernels on several standard graph datasets and demonstrate their effectiveness and efficiency.
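
    The two reusable pieces of this pipeline are sketched below: sparsifying a graph via the minimum spanning tree of its commute-time matrix (obtained from the pseudo-inverse of the graph Laplacian), and turning the quantum Jensen-Shannon divergence between two density matrices into a kernel value by a negative exponential. The construction of the density matrices from the discrete-time quantum walk is paper-specific and is assumed to be given here.

        import numpy as np
        import networkx as nx

        def commute_time_mst(G):
            # sparsify G: minimum spanning tree over the commute-time matrix,
            # computed from the Moore-Penrose pseudo-inverse of the Laplacian
            nodes = list(G.nodes())
            L_pinv = np.linalg.pinv(nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float))
            vol = 2.0 * G.number_of_edges()
            d = np.diag(L_pinv)
            CT = vol * (d[:, None] + d[None, :] - 2.0 * L_pinv)   # commute times between all pairs
            H = nx.Graph()
            H.add_weighted_edges_from((nodes[i], nodes[j], CT[i, j])
                                      for i in range(len(nodes)) for j in range(i + 1, len(nodes)))
            return nx.minimum_spanning_tree(H)

        def von_neumann_entropy(rho):
            evals = np.linalg.eigvalsh(rho)
            evals = evals[evals > 1e-12]
            return float(-(evals * np.log(evals)).sum())

        def qjsd_kernel(rho1, rho2):
            # negative exponential of the quantum Jensen-Shannon divergence
            # between two density matrices (assumed precomputed from the walks)
            qjsd = von_neumann_entropy(0.5 * (rho1 + rho2)) \
                   - 0.5 * (von_neumann_entropy(rho1) + von_neumann_entropy(rho2))
            return np.exp(-qjsd)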

    HAQJSK: Hierarchical-Aligned Quantum Jensen-Shannon Kernels for Graph Classification

    In this work, we propose a family of novel quantum kernels, namely the Hierarchical Aligned Quantum Jensen-Shannon Kernels (HAQJSK), for unattributed graphs. Unlike most existing classical graph kernels, the proposed HAQJSK kernels incorporate hierarchical aligned structure information between graphs and transform graphs of arbitrary sizes into fixed-sized aligned structures, namely the Hierarchical Transitive Aligned Adjacency Matrix of vertices and the Hierarchical Transitive Aligned Density Matrix of the Continuous-Time Quantum Walk (CTQW). For a pair of graphs, the resulting HAQJSK kernels are defined by measuring the Quantum Jensen-Shannon Divergence (QJSD) between their transitive aligned graph structures. We show that the proposed HAQJSK kernels not only reflect richer intrinsic global graph characteristics in terms of the CTQW, but also address the drawback of neglecting structural correspondence information that arises in most existing R-convolution kernels. Furthermore, unlike previous Quantum Jensen-Shannon Kernels associated with the QJSD and the CTQW, the proposed HAQJSK kernels simultaneously guarantee permutation invariance and positive definiteness, explaining their theoretical advantages. Experiments indicate the effectiveness of the proposed kernels.