
    Learning Discrete Structures for Graph Neural Networks

    Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete, or may not be available at all. In this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution over the edges of the graph. This allows GCNs to be applied not only in scenarios where the given graph is incomplete or corrupted, but also in those where no graph is available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
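    To make the idea concrete, here is a minimal PyTorch sketch of the core mechanism: one Bernoulli distribution per candidate edge, sampled with a straight-through estimator so a GCN can be trained end-to-end on the sampled graph. All class and parameter names are illustrative, and the paper's bilevel structure (graph distribution and GCN weights optimized against different objectives) is collapsed into a single objective for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GCNLayer(nn.Module):
        def __init__(self, d_in, d_out):
            super().__init__()
            self.lin = nn.Linear(d_in, d_out)

        def forward(self, x, adj):
            # Symmetrically normalize A + I, as in a standard GCN layer.
            a = adj + torch.eye(adj.size(0))
            d = a.sum(1).clamp(min=1e-6).pow(-0.5)
            return self.lin((d.unsqueeze(1) * a * d.unsqueeze(0)) @ x)

    class LearnedGraphGCN(nn.Module):
        def __init__(self, n_nodes, d_in, d_hid, n_classes):
            super().__init__()
            # One Bernoulli logit per candidate edge: this plays the role of
            # the learned discrete probability distribution on edges.
            self.edge_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))
            self.gcn1 = GCNLayer(d_in, d_hid)
            self.gcn2 = GCNLayer(d_hid, n_classes)

        def forward(self, x):
            p = torch.sigmoid(self.edge_logits)
            # Straight-through sample: a discrete adjacency in the forward
            # pass, while gradients flow through p in the backward pass.
            adj = torch.bernoulli(p) + p - p.detach()
            adj = (adj + adj.t()) / 2  # keep the graph undirected
            return self.gcn2(F.relu(self.gcn1(x, adj)), adj)

    In the paper's bilevel setup, the edge distribution would be updated against a validation objective while the GCN weights fit the training loss; in this simplified sketch both parameter groups would simply share one optimizer.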

    A tree-based kernel for graphs with continuous attributes

    The availability of graph data with node attributes that can be either discrete or real-valued is constantly increasing. While existing kernel methods are effective for graphs with discrete node labels, their adaptation to non-discrete or continuous node attributes has been limited, mainly for computational reasons. Recently, a few kernels tailored to this domain, which trade predictive performance for computational efficiency, have been proposed. In this paper, we propose a graph kernel for complex and continuous node attributes whose features are tree structures extracted from specific graph visits. The kernel keeps the same complexity as state-of-the-art kernels while implicitly using a larger feature space. We further present an approximated variant of the kernel that reduces its complexity significantly. Experimental results on six real-world datasets show that the kernel performs best on most of them. Moreover, in most cases the approximated version reaches classification accuracy comparable to current state-of-the-art kernels while greatly shortening the running times.
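    As a rough illustration of the flavor of such a kernel (not the authors' construction), the sketch below aggregates continuous node attributes level by level along breadth-first visit trees and compares the resulting per-node profiles with an RBF base kernel. The function names, the depth parameter, and the choice of BFS trees and RBF base kernel are all assumptions made for illustration.

    import numpy as np

    def bfs_profile(adj, attrs, root, depth):
        # Average the continuous node attributes level by level along a
        # breadth-first visit tree rooted at `root` (a crude stand-in for
        # tree features extracted from graph visits).
        seen, frontier, levels = {root}, [root], []
        while frontier and len(levels) <= depth:
            levels.append(np.mean([attrs[v] for v in frontier], axis=0))
            nxt = []
            for v in frontier:
                for w in adj[v]:
                    if w not in seen:
                        seen.add(w)
                        nxt.append(w)
            frontier = nxt
        pad = [np.zeros_like(attrs[root])] * (depth + 1 - len(levels))
        return np.concatenate(levels + pad)

    def tree_rbf_kernel(adj1, attrs1, adj2, attrs2, depth=2, gamma=1.0):
        # Sum an RBF base kernel over all pairs of per-node tree profiles,
        # so continuous attributes are compared directly rather than via
        # discrete label matching.
        p1 = [bfs_profile(adj1, attrs1, v, depth) for v in range(len(adj1))]
        p2 = [bfs_profile(adj2, attrs2, v, depth) for v in range(len(adj2))]
        return sum(np.exp(-gamma * np.sum((a - b) ** 2)) for a in p1 for b in p2)

    Here `adj` is an adjacency list and `attrs` an array of per-node attribute vectors; summing a base kernel over pairs of substructures is the standard convolution-kernel recipe this family of methods builds on.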

    Torsion Graph Neural Networks

    Geometric deep learning (GDL) models have demonstrated great potential for the analysis of non-Euclidean data. They are designed to incorporate the geometric and topological information of non-Euclidean data into end-to-end deep learning architectures. Motivated by the recent success of discrete Ricci curvature in graph neural networks (GNNs), we propose TorGNN, an analytic-torsion-enhanced graph neural network model. The essential idea is to characterize local graph structures with an analytic-torsion-based weight formula. Mathematically, analytic torsion is a topological invariant that can distinguish spaces which are homotopy equivalent but not homeomorphic. In TorGNN, a local simplicial complex is identified for each edge, the analytic torsion of this local simplicial complex is calculated, and the result is used as that edge's weight in the message-passing process. Our TorGNN model is validated on link prediction tasks from sixteen different types of networks and on node classification tasks from three types of networks. We find that TorGNN achieves superior performance on both tasks and outperforms various state-of-the-art models. This demonstrates that analytic torsion is a highly effective topological invariant for characterizing graph structures and can significantly boost the performance of GNNs.
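    The overall pattern of edge-wise structural weights feeding message passing can be sketched as follows. Note that the weight below is only a simplified stand-in: it scores each edge's local subgraph by the pseudo-determinant of an ordinary graph Laplacian, whereas the paper's analytic torsion is computed from the Hodge Laplacians of a local simplicial complex, which this sketch does not attempt.

    import torch

    def local_structure_weight(adj, u, v):
        # Toy stand-in for the analytic-torsion weight: take the subgraph
        # induced by u, v, and their common neighbours, and score it by the
        # pseudo-determinant of its graph Laplacian (product of nonzero
        # eigenvalues, accumulated in log space).
        nodes = [u, v] + [w for w in range(adj.size(0))
                          if adj[u, w] > 0 and adj[v, w] > 0 and w not in (u, v)]
        sub = adj[nodes][:, nodes]
        lap = torch.diag(sub.sum(1)) - sub
        eig = torch.linalg.eigvalsh(lap)
        nonzero = eig[eig > 1e-6]
        if nonzero.numel() == 0:
            return torch.tensor(1.0)
        return torch.exp(0.5 * torch.log(nonzero).sum())

    def weighted_message_passing(x, adj):
        # One round of aggregation where each neighbour's message is scaled
        # by the structural weight of the connecting edge.
        out = x.clone()
        for u in range(adj.size(0)):
            nbrs = (adj[u] > 0).nonzero(as_tuple=True)[0]
            if nbrs.numel() == 0:
                continue
            w = torch.stack([local_structure_weight(adj, u, int(v)) for v in nbrs])
            w = w / w.sum()
            out[u] = x[u] + (w.unsqueeze(1) * x[nbrs]).sum(0)
        return out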

    Learning neural algorithms with graph structures

    Graph structures, such as syntax trees, social networks, and programs, are ubiquitous in many real-world applications, including knowledge graph inference, chemistry, and social network analysis. Over the past several decades, many expert-designed graph algorithms with nice theoretical properties have been proposed. However, most of them are not data-driven and will not benefit from the growing scale of available data. Recent advances in deep learning have shown strong empirical performance on images, texts, and signals, typically with little domain knowledge. However, the combinatorial and discrete nature of graph data makes it non-trivial to apply neural networks in this domain. Weighing the pros and cons of these two paradigms, this thesis discusses several ways to build a tight connection between neural networks and classical graph algorithms. Specifically:
    - Algorithm-inspired deep graph learning: existing algorithms can inspire deep architecture design for both discriminative learning and generative modeling of graphs. For discriminative representation learning, we show how graphical-model inference algorithms can inspire the design of graph neural networks for chemistry and bioinformatics applications, and how to scale them up with ideas borrowed from steady-state algorithms such as PageRank. For generative modeling, we build an HSMM-inspired neural segmental generative model for signal sequences, and for a class of graphs we leverage the idea of attribute grammars for syntax trees to help regulate the deep networks.
    - Deep-learning-enhanced graph algorithms: algorithmic frameworks contain procedures that can be enhanced by learnable deep network components. We demonstrate this by learning the heuristic function of greedy algorithms with reinforcement learning for combinatorial optimization problems over graphs, such as vertex cover and max cut, and for the optimal touring problem in real-world applications like fuzzing; a minimal sketch of this pattern follows after this list.
    - Towards inductive reasoning with graph structures: since algorithmic structure generally provides a good inductive bias for a problem, we take an initial step towards inductive reasoning over such structure, making attempts to reason about loop invariants for program verification and reaction templates for retrosynthesis structured prediction.
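    The sketch referenced above: a greedy algorithm for minimum vertex cover in which the hand-designed heuristic is replaced by a learnable scoring network. The feature set and the scorer architecture are placeholders; in the thesis the scorer is a graph embedding network trained with reinforcement learning, which this sketch does not include.

    import torch
    import torch.nn as nn

    class NodeScorer(nn.Module):
        # Learnable stand-in for a hand-designed greedy heuristic. It scores
        # each node from a simple structural feature (here, residual degree);
        # replacing it with `lambda f: f[:, 0]` recovers the textbook
        # max-degree rule.
        def __init__(self, d_feat=1, d_hid=16):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(d_feat, d_hid), nn.ReLU(),
                                     nn.Linear(d_hid, 1))

        def forward(self, feats):
            return self.net(feats).squeeze(-1)

    def greedy_vertex_cover(edges, n_nodes, scorer):
        # Classic greedy skeleton: repeatedly add the highest-scoring node
        # to the cover until every edge is covered.
        uncovered, cover = set(map(tuple, edges)), []
        while uncovered:
            deg = torch.zeros(n_nodes)
            for u, v in uncovered:
                deg[u] += 1
                deg[v] += 1
            scores = scorer(deg.unsqueeze(1))
            scores = scores.masked_fill(deg == 0, float('-inf'))
            pick = int(scores.argmax())
            cover.append(pick)
            uncovered = {e for e in uncovered if pick not in e}
        return cover

    # Example: greedy_vertex_cover([(0, 1), (1, 2)], 3, NodeScorer())

    The point of this decomposition is that the algorithmic skeleton (the loop, termination condition, and feasibility bookkeeping) stays fixed, while only the scoring decision is learned from data.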

    DeepSphere: Efficient spherical Convolutional Neural Network with HEALPix sampling for cosmological applications

    Convolutional Neural Networks (CNNs) are a cornerstone of the deep learning toolbox and have led to many breakthroughs in artificial intelligence. These networks have mostly been developed for regular Euclidean domains such as those supporting images, audio, or video. Because of their success, CNN-based methods are becoming increasingly popular in cosmology. Cosmological data often comes as spherical maps, which makes the use of traditional CNNs more complicated. The commonly used pixelization scheme for spherical maps is the Hierarchical Equal Area isoLatitude Pixelisation (HEALPix). We present a spherical CNN for the analysis of full and partial HEALPix maps, which we call DeepSphere. The spherical CNN is constructed by representing the sphere as a graph. Graphs are versatile data structures that can act as a discrete representation of a continuous manifold. Using the graph-based representation, we define many of the standard CNN operations, such as convolution and pooling. With filters restricted to being radial, our convolutions are equivariant to rotation on the sphere, and DeepSphere can be made invariant or equivariant to rotation. In this way, DeepSphere is a special case of a graph CNN, tailored to the HEALPix sampling of the sphere. This approach is computationally more efficient than using spherical harmonics to perform convolutions. We demonstrate the method on a classification problem of weak-lensing mass maps from two cosmological models and compare the performance of the CNN with that of two baseline classifiers. The results show that the performance of DeepSphere is always superior or equal to that of both baselines. For high noise levels and for data covering only a smaller fraction of the sphere, DeepSphere typically achieves 10% better classification accuracy than those baselines. Finally, we show how the learned filters can be visualized to introspect the neural network.
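    One standard way to realize such a graph convolution is a Chebyshev polynomial filter of the graph Laplacian, applied to signals living on the pixel graph. The sketch below is illustrative: the order K, initialization, and Laplacian rescaling are assumptions, and DeepSphere's actual construction of the graph from HEALPix neighbours is not reproduced here.

    import torch
    import torch.nn as nn

    def rescaled_laplacian(adj):
        # 2 L / lambda_max - I, pushing the spectrum into [-1, 1] so the
        # Chebyshev polynomials are well behaved.
        lap = torch.diag(adj.sum(1)) - adj
        lmax = torch.linalg.eigvalsh(lap).max()
        return 2 * lap / lmax - torch.eye(adj.size(0))

    class ChebConv(nn.Module):
        # Order-K Chebyshev polynomial filter of the graph Laplacian. Such
        # filters depend only on the Laplacian, which is what makes them
        # radial (and hence rotation-equivariant) on a sphere graph.
        def __init__(self, d_in, d_out, K):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(K, d_in, d_out) * 0.1)

        def forward(self, x, lap):
            # Recurrence: T0 = x, T1 = L x, Tk = 2 L T(k-1) - T(k-2).
            t_prev, t = x, lap @ x
            out = t_prev @ self.weight[0]
            if self.weight.size(0) > 1:
                out = out + t @ self.weight[1]
            for k in range(2, self.weight.size(0)):
                t_prev, t = t, 2 * (lap @ t) - t_prev
                out = out + t @ self.weight[k]
            return out

    Because the filter is a K-term polynomial, each layer only needs K sparse matrix-vector products rather than a spherical harmonic transform, which is the source of the efficiency claim in the abstract.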