
    Depth-based Hypergraph Complexity Traces from Directed Line Graphs

    In this paper, we aim to characterize the structure of hypergraphs in terms of a structural complexity measure. Measuring the complexity of a hypergraph directly tends to be elusive, since the hyperedges of a hypergraph may exhibit varying relational orders. We therefore transform a hypergraph into a line graph, which not only accurately reflects the multiple relationships exhibited by the hyperedges but is also easier to manipulate for complexity analysis. To locate the dominant substructure within a line graph, we identify a centroid vertex by computing the minimum variance of its shortest path lengths. A family of centroid expansion subgraphs of the line graph is then derived from the centroid vertex. We compute the depth-based complexity traces for the hypergraph by measuring either the directed or undirected entropies of its centroid expansion subgraphs. The resulting complexity traces provide a flexible framework that can be applied to both hypergraphs and graphs. We perform (hyper)graph classification in the principal component space of the complexity trace vectors. Experiments on (hyper)graph datasets abstracted from bioinformatics and computer vision data demonstrate the effectiveness and efficiency of the complexity traces. This work is supported by the National Natural Science Foundation of China (Grant no. 61503422) and by the Open Projects Program of the National Laboratory of Pattern Recognition. Francisco Escolano is supported by project TIN2012-32839 of the Spanish Government. Edwin R. Hancock is supported by a Royal Society Wolfson Research Merit Award.
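The centroid-vertex step described above can be made concrete: among all vertices, pick the one whose shortest-path lengths to the remaining vertices have minimum variance. A minimal sketch in Python, assuming an unweighted graph given as an adjacency list (the function names are illustrative, not taken from the paper):

```python
def shortest_path_lengths(adj, source):
    """BFS shortest-path lengths from source in an unweighted graph
    given as an adjacency list {vertex: set(neighbours)}."""
    dist = {source: 0}
    frontier = [source]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def centroid_vertex(adj):
    """Return the vertex whose shortest-path lengths to the remaining
    vertices have minimum variance."""
    best, best_var = None, float("inf")
    for s in adj:
        d = [l for v, l in shortest_path_lengths(adj, s).items() if v != s]
        mean = sum(d) / len(d)
        var = sum((x - mean) ** 2 for x in d) / len(d)
        if var < best_var:
            best, best_var = s, var
    return best

# A path graph 0-1-2-3-4: the middle vertex 2 has the most balanced distances.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(centroid_vertex(path))  # → 2
```

On the path graph, vertex 2 sees distances [2, 1, 1, 2] (variance 0.25), beating every other vertex, so it is returned as the centroid.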

    Information Theoretic Graph Kernels

    This thesis addresses the problems that arise in state-of-the-art structural learning methods for (hyper)graph classification or clustering, particularly focusing on developing novel information theoretic kernels for graphs. To this end, we commence in Chapter 3 by defining a family of Jensen-Shannon diffusion kernels, i.e., information theoretic kernels, for (un)attributed graphs. We show that our kernels overcome the shortcomings of inefficiency (for the unattributed diffusion kernel) and of discarding non-isomorphic substructures (for the attributed diffusion kernel) that arise in R-convolution kernels. In Chapter 4, we present a novel framework for computing depth-based complexity traces rooted at the centroid vertices of graphs, which can be computed efficiently even for large graphs. We show that our methods can characterize a graph in a higher-dimensional complexity feature space than state-of-the-art complexity measures. In Chapter 5, building on the contribution of Chapter 4, we develop a novel unattributed graph kernel by matching the depth-based substructures in graphs. Unlike most existing graph kernels in the literature, which merely enumerate similar substructure pairs of limited size, our method incorporates explicit local substructure correspondence into the process of kernelization. The new kernel thus overcomes the shortcoming of neglecting structural correspondence that arises in most state-of-the-art graph kernels. The novel methods developed in Chapters 3, 4, and 5 are restricted to graphs. However, real-world data often exhibits higher-order relationships, which are naturally represented by hypergraphs. To overcome this limitation, in Chapter 6 we present a new hypergraph kernel using substructure isomorphism tests. We show that our kernel limits the tottering that arises in existing walk- and subtree-based (hyper)graph kernels. In Chapter 7, we summarize the contributions of this thesis, analyze the proposed methods, and give some suggestions for future work.
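The Jensen-Shannon kernels mentioned for Chapter 3 are built on the Jensen-Shannon divergence between probability distributions derived from graphs. As a hedged illustration of the underlying quantity only (not the thesis's kernel construction itself), the divergence is the entropy of the mixture minus the mean of the entropies:

```python
from math import log2

def shannon_entropy(p):
    """Shannon entropy (bits) of a probability distribution given as a list."""
    return -sum(x * log2(x) for x in p if x > 0)

def jensen_shannon_divergence(p, q):
    """JSD(p, q) = H((p + q)/2) - (H(p) + H(q))/2; symmetric and bounded."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return shannon_entropy(m) - (shannon_entropy(p) + shannon_entropy(q)) / 2

p = [0.5, 0.5, 0.0]
q = [0.0, 0.5, 0.5]
d = jensen_shannon_divergence(p, q)
print(round(d, 4))  # → 0.5
```

With base-2 logarithms the divergence lies in [0, 1], and it is 0 exactly when the two distributions coincide, which is what makes it usable as the basis of a similarity (kernel) measure.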

    Deep depth-based representations of graphs through deep learning networks

    Graph-based representations are powerful tools in structural pattern recognition and machine learning. In this paper, we propose a framework for computing deep depth-based representations of graph structures. Our work links the ideas of graph complexity measures and deep learning networks. Specifically, for a set of graphs, we commence by computing the depth-based representation rooted at each vertex as a vertex point. In order to identify an informative subset of depth-based representations, we employ the well-known k-means method to identify M dominant centroids of the depth-based representation vectors as prototype representations. To overcome the burdensome computation of using depth-based representations for all graphs, we propose to use the prototype representations to train a deep autoencoder network, which is optimized using Stochastic Gradient Descent together with a Deep Belief Network for pretraining. By feeding the depth-based representations of vertices over all graphs into the trained deep network, we compute the deep representation for each vertex. The resulting deep depth-based representation of a graph is computed by averaging the deep representations of its complete set of vertices. We theoretically demonstrate that the deep depth-based representations of graphs not only reflect both the local and global characteristics of graphs through the depth-based representations, but also capture the main structural relationships and information content over all graphs under investigation. Experimental evaluations demonstrate the effectiveness of the proposed method.
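The prototype-selection step can be illustrated with a plain k-means sketch. This is a generic Lloyd's algorithm with farthest-point seeding, standing in for whatever k-means variant the paper uses; the names and the toy data are illustrative only:

```python
import numpy as np

def kmeans_prototypes(X, M, iters=50):
    """Plain Lloyd's k-means with farthest-point seeding; returns the M
    centroid vectors that serve as prototype representations for rows of X."""
    centroids = [X[0]]
    for _ in range(M - 1):  # seed: repeatedly add the point farthest from all chosen
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = np.argmin(
            np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2), axis=1)
        # move each centroid to the mean of its assigned points
        for k in range(M):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    return centroids

# Two well-separated clusters of toy "depth-based representation" vectors.
X = np.vstack([np.zeros((10, 4)), np.full((10, 4), 5.0)])
protos = kmeans_prototypes(X, M=2)
print(protos.shape)  # → (2, 4): one prototype row near 0, the other near 5
```

The point of the prototypes is computational: the autoencoder is trained on M representative vectors rather than on every vertex representation of every graph.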

    Fast depth-based subgraph kernels for unattributed graphs

    In this paper, we investigate two fast subgraph kernels based on a depth-based representation of graph structure. Both methods gauge depth information through a family of K-layer expansion subgraphs rooted at a vertex [1]. The first method commences by computing a centroid-based complexity trace for each graph, using a depth-based representation rooted at the centroid vertex, i.e., the vertex with minimum shortest-path-length variance to the remaining vertices [2]. This subgraph kernel is computed by measuring the Jensen-Shannon divergence between centroid-based complexity entropy traces. The second method, on the other hand, computes a depth-based representation around each vertex in turn. The corresponding subgraph kernel is computed using isomorphism tests to compare the depth-based representations rooted at each vertex in turn. For graphs with n vertices, the time complexities of the two new kernels are O(n^2) and O(n^3), in contrast to O(n^6) for the classic Gärtner graph kernel [3]. Key to achieving this efficiency is that we compute the required Shannon entropy of the random walk for our kernels with O(n^2) operations. This computational strategy enables our subgraph kernels to scale easily to graphs of reasonably large size and thus overcome the size limits arising in state-of-the-art graph kernels. Experiments on standard bioinformatics and computer vision graph datasets demonstrate the effectiveness and efficiency of our new subgraph kernels.
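The two building blocks named above, K-layer expansion subgraphs and the Shannon entropy of a random walk, can be sketched as follows. The entropy here assumes the common steady-state convention p(v) = deg(v) / Σ deg, which is indeed an O(n^2) computation from the adjacency structure; this is an illustrative reading of the abstract, not necessarily the authors' exact formulation:

```python
from math import log2

def k_layer_expansion(adj, root, K):
    """Vertices within shortest-path distance K of root (BFS layers):
    the vertex set of the K-layer expansion subgraph rooted at `root`."""
    dist = {root: 0}
    frontier = [root]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist and dist[u] + 1 <= K:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return set(dist)

def random_walk_entropy(adj, vertices):
    """Shannon entropy of the steady-state random walk on the induced
    subgraph, with p(v) = deg(v) / sum of degrees."""
    deg = {v: len(adj[v] & vertices) for v in vertices}
    vol = sum(deg.values())
    return -sum((d / vol) * log2(d / vol) for d in deg.values() if d > 0)

# A star graph: centre 0 joined to leaves 1..4.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
layer1 = k_layer_expansion(star, 0, 1)
print(sorted(layer1), round(random_walk_entropy(star, layer1), 4))  # → [0, 1, 2, 3, 4] 2.0
```

A complexity trace is then simply the vector of such entropies over K = 1, 2, ..., so each graph maps to a fixed-length feature vector that kernels can compare.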

    Graph Convolutional Neural Networks based on Quantum Vertex Saliency

    This paper proposes a new Quantum Spatial Graph Convolutional Neural Network (QSGCNN) model that can directly learn a classification function for graphs of arbitrary sizes. Unlike state-of-the-art Graph Convolutional Neural Network (GCNN) models, the proposed QSGCNN model incorporates the process of identifying transitively aligned vertices between graphs, and transforms arbitrarily sized graphs into fixed-sized aligned vertex grid structures. In order to learn representative graph characteristics, a new quantum spatial graph convolution is proposed and employed to extract multi-scale vertex features, in terms of quantum information propagation between the grid vertices of each graph. Since the quantum spatial convolution preserves the grid structure of the input vertices (i.e., the convolution layer does not change the original spatial sequence of vertices), the proposed QSGCNN model allows us to directly employ a traditional convolutional neural network architecture to further learn from the global graph topology, providing an end-to-end deep learning architecture that integrates graph representation and learning in the quantum spatial graph convolution layer and the traditional convolutional layer for graph classification. The proposed QSGCNN model addresses the shortcomings of information loss and imprecise information representation that arise in existing GCNN models through their use of SortPooling or SumPooling layers. Experiments on benchmark graph classification datasets demonstrate the effectiveness of the proposed QSGCNN model in relation to existing state-of-the-art methods.
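For orientation, the order-preserving property the abstract relies on can be sketched with a generic spatial graph-convolution step over an ordered vertex grid. This is a standard neighbourhood-averaging propagation rule, explicitly not the paper's quantum propagation; it only illustrates how a graph convolution can keep the vertex order intact so that a conventional convolutional layer can follow:

```python
import numpy as np

def spatial_graph_conv(A, H, W):
    """One generic spatial graph-convolution step: average neighbour
    features (row-normalised adjacency with self-loops), then a linear
    map and ReLU. Row i of the output still corresponds to vertex i,
    so the output grid can feed a conventional 1-D convolutional layer."""
    A_hat = A + np.eye(len(A))                      # add self-loops
    A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(A_norm @ H @ W, 0.0)

# Toy 3-vertex path graph with 2-d vertex features and identity weights.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.array([[1., 0.], [0., 1.], [1., 0.]])
out = spatial_graph_conv(A, H, np.eye(2))
print(out.shape)  # → (3, 2): one row per vertex, order unchanged
```

Because the row order is preserved, pooling layers that reorder or sum away vertices (the SortPooling/SumPooling issue the abstract mentions) are not needed at this stage.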

    ICR ANNUAL REPORT 2022 (Volume 29)

    This Annual Report covers the period from 1 January to 31 December 2022.

    Developments in Structural Learning Using Ihara Coefficients and Hypergraph Representations

    EThOS - Electronic Theses Online Service, United Kingdom