Representation Learning for Spatial Graphs
Recently, the topic of graph representation learning has received a great deal of
attention. Existing approaches usually focus only on structural properties and are
therefore not sufficient for spatial graphs, where nodes are associated with
spatial information. In this paper, we present s2vec, the first deep learning
approach for learning spatial graph representations, which is based on the
denoising autoencoder framework (DAF). We evaluate the learned representations on
real datasets, and the results verify the effectiveness of s2vec when used for
spatial clustering. Comment: 4 pages, 1 figure, conference.
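The abstract does not spell out the architecture; as a rough illustration of the denoising-autoencoder idea it rests on, the numpy sketch below corrupts per-node spatial feature vectors and trains a one-layer autoencoder to reconstruct the clean inputs. All names, dimensions, and the input construction are illustrative assumptions, not s2vec's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: each node carries a small spatial feature vector
# (e.g. coordinates plus neighbourhood statistics); s2vec's exact input
# construction is not specified in the abstract.
X = rng.normal(size=(100, 8))            # 100 nodes, 8 spatial features
d_in, d_hid = X.shape[1], 4

W_enc = rng.normal(scale=0.1, size=(d_in, d_hid))
W_dec = rng.normal(scale=0.1, size=(d_hid, d_in))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.01
for _ in range(500):
    # Denoising step: corrupt the input, then reconstruct the clean version.
    X_noisy = X + rng.normal(scale=0.2, size=X.shape)
    H = sigmoid(X_noisy @ W_enc)         # node embeddings
    X_hat = H @ W_dec                    # reconstruction
    err = X_hat - X                      # reconstruction error
    # Plain gradient descent on the squared reconstruction loss.
    grad_dec = H.T @ err / len(X)
    grad_enc = X_noisy.T @ (err @ W_dec.T * H * (1 - H)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

embeddings = sigmoid(X @ W_enc)          # representations used for clustering
```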
subgraph2vec: Learning Distributed Representations of Rooted Sub-graphs from Large Graphs
In this paper, we present subgraph2vec, a novel approach for learning latent
representations of rooted subgraphs from large graphs inspired by recent
advancements in Deep Learning and Graph Kernels. These latent representations
encode semantic substructure dependencies in a continuous vector space, which
is easily exploited by statistical models for tasks such as graph
classification, clustering, link prediction and community detection.
subgraph2vec leverages local information obtained from the neighbourhoods of
nodes to learn their latent representations in an unsupervised fashion. We
demonstrate that subgraph vectors learnt by our approach could be used in
conjunction with classifiers such as CNNs, SVMs and relational data clustering
algorithms to achieve significantly superior accuracies. Also, we show that the
subgraph vectors could be used for building a deep learning variant of
Weisfeiler-Lehman graph kernel. Our experiments on several benchmark and
large-scale real-world datasets reveal that subgraph2vec achieves significant
improvements in accuracies over existing graph kernels on both supervised and
unsupervised learning tasks. Specifically, on two real-world program analysis
tasks, namely code clone detection and malware detection, subgraph2vec outperforms
state-of-the-art kernels by more than 17% and 4%, respectively.
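As a hedged sketch of the core ingredient, rooted subgraphs can be identified by Weisfeiler-Lehman style relabelling: a node's degree-(d+1) subgraph is summarised by its own degree-d label together with the multiset of its neighbours' degree-d labels. The snippet below illustrates only that relabelling; the skip-gram style embedding step of subgraph2vec is omitted, and the hashing and toy graph are illustrative.

```python
import hashlib

def wl_rooted_subgraphs(adj, labels, depth):
    """Per node, the Weisfeiler-Lehman label of its rooted subgraph at each
    degree 0..depth.  adj: {node: set(neighbours)}, labels: {node: str}."""
    current = dict(labels)
    per_node = {v: [current[v]] for v in adj}
    for _ in range(depth):
        nxt = {}
        for v in adj:
            # Degree-(d+1) rooted subgraph = own degree-d label plus the
            # sorted multiset of the neighbours' degree-d labels.
            sig = current[v] + "|" + ",".join(sorted(current[u] for u in adj[v]))
            nxt[v] = hashlib.md5(sig.encode()).hexdigest()[:8]
        current = nxt
        for v in adj:
            per_node[v].append(current[v])
    return per_node

# Toy graph: a 4-cycle with one chord; initial labels are node degrees.
adj = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
labels = {v: str(len(adj[v])) for v in adj}

subgraph_ids = wl_rooted_subgraphs(adj, labels, depth=2)
# In subgraph2vec these ids play the role of "words": embeddings are learned
# with a skip-gram style objective over co-occurring rooted subgraphs.
print(subgraph_ids[0])
```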
Graph-based State Representation for Deep Reinforcement Learning
Deep RL approaches build much of their success on the ability of the deep
neural network to generate useful internal representations. Nevertheless, they
suffer from high sample complexity, and starting with a good input
representation can have a significant impact on performance. In this paper,
we exploit the fact that the underlying Markov decision process (MDP)
represents a graph, which enables us to incorporate the topological information
for effective state representation learning.
Motivated by the recent success of node representations for several graph
analytical tasks, we specifically investigate the capability of node
representation learning methods to effectively encode the topology of the
underlying MDP in Deep RL. To this end we perform a comparative analysis of
several models chosen from 4 different classes of representation learning
algorithms for policy learning in grid-world navigation tasks, which are
representative of a large class of RL problems. We find that all embedding
methods outperform the commonly used matrix representation of grid-world
environments in all of the studied cases. Moreover, graph convolution based
methods are outperformed by simpler random walk based methods and graph linear
autoencoders.
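The abstract does not name the four classes of embedding methods; as one illustrative stand-in, the sketch below builds the state-transition graph of a small grid-world and derives spectral (Laplacian-eigenmap style) node features that could replace a raw matrix state representation. Grid size, dimensionality, and the choice of spectral embedding are assumptions.

```python
import numpy as np

# Illustrative grid-world: states are cells of an n x n grid, and edges
# connect cells reachable by one action (up/down/left/right).  This graph
# view of the MDP is what node-embedding methods operate on.
n = 5
idx = lambda r, c: r * n + c
A = np.zeros((n * n, n * n))
for r in range(n):
    for c in range(n):
        for dr, dc in ((1, 0), (0, 1)):
            rr, cc = r + dr, c + dc
            if rr < n and cc < n:
                A[idx(r, c), idx(rr, cc)] = A[idx(rr, cc), idx(r, c)] = 1

# One simple embedding family: spectral / Laplacian-eigenmap node features,
# standing in for the surveyed representation learning methods.
deg = A.sum(axis=1)
L = np.diag(deg) - A                      # unnormalised graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
state_embeddings = eigvecs[:, 1:9]        # 8-dim state representation

# These vectors would replace the raw one-hot / matrix state input of a
# deep RL agent (e.g. a DQN) in the grid-world navigation tasks.
print(state_embeddings.shape)             # (25, 8)
```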
Building Graph Representations of Deep Vector Embeddings
Patterns stored within pre-trained deep neural networks compose large and
powerful descriptive languages that can be used for many different purposes.
Typically, deep network representations are implemented within vector embedding
spaces, which enables the use of traditional machine learning algorithms on top
of them. In this short paper we propose the construction of a graph embedding
space instead, introducing a methodology to transform the knowledge coded
within a deep convolutional network into a topological space (i.e. a network).
We outline how such a graph can hold data instances, data features, relations
between instances and features, and relations among features. Finally, we
introduce some preliminary experiments to illustrate how the resultant graph
embedding space can be exploited through graph analytics algorithms. Comment: Accepted at the 2nd Workshop on Semantic Deep Learning (SemDeep-2).
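A minimal sketch of turning a vector embedding space into a graph one, under the assumption that instance-feature and feature-feature relations are defined by simple activation thresholding; the paper's actual construction may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for pre-trained CNN activations: rows are data
# instances, columns are deep features (e.g. penultimate-layer units).
acts = rng.random((6, 10))
threshold = 0.7
edges = []

# Instance-feature relations: connect an instance to the features it
# activates strongly.
for i in range(acts.shape[0]):
    for f in np.where(acts[i] > threshold)[0]:
        edges.append((f"instance_{i}", f"feature_{f}"))

# Feature-feature relations: connect features that frequently co-activate.
co = (acts > threshold).astype(float)
cooc = co.T @ co
for f, g in zip(*np.nonzero(np.triu(cooc, k=1) >= 2)):
    edges.append((f"feature_{f}", f"feature_{g}"))

# The resulting edge list defines the graph embedding space on which
# standard graph analytics (communities, centrality, ...) can be run.
print(len(edges), edges[:5])
```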
Deep Haar Scattering Networks
An orthogonal Haar scattering transform is a deep network, computed with a
hierarchy of additions, subtractions and absolute values, over pairs of
coefficients. It provides a simple mathematical model for unsupervised deep
network learning. It implements non-linear contractions, which are optimized
for classification with an unsupervised pair-matching algorithm of polynomial
complexity. A structured Haar scattering over graph data computes permutation
invariant representations of groups of connected points in the graph. If the
graph connectivity is unknown, unsupervised Haar pair learning can provide a
consistent estimation of connected dyadic groups of points. Classification
results are given on image databases, defined on regular grids or graphs, with
a connectivity which may be known or unknown.
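The pairwise additions, subtractions, and absolute values are concrete enough to sketch directly. The toy example below applies two Haar scattering layers to a signal on eight points, with hand-chosen pairings standing in for pairings derived from (or learned for) the graph connectivity.

```python
import numpy as np

def haar_scattering_layer(x, pairing):
    """One orthogonal Haar scattering layer: for each pair (i, j) of
    coefficients, output the sum x[i] + x[j] and the absolute
    difference |x[i] - x[j]|."""
    out = np.empty_like(x, dtype=float)
    for k, (i, j) in enumerate(pairing):
        out[2 * k] = x[i] + x[j]
        out[2 * k + 1] = abs(x[i] - x[j])
    return out

# Signal on 8 graph nodes; in the paper the pairings follow the graph
# connectivity or are found by the unsupervised pair-matching algorithm.
x = np.array([1.0, 3.0, 2.0, 0.0, 5.0, 4.0, 1.0, 1.0])

layer1 = haar_scattering_layer(x, [(0, 1), (2, 3), (4, 5), (6, 7)])
# The second layer pairs coefficients of the same type from adjacent groups,
# so each output describes a connected group of four original points.
layer2 = haar_scattering_layer(layer1, [(0, 2), (1, 3), (4, 6), (5, 7)])
print(layer2)
```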
Feature Interaction-aware Graph Neural Networks
Inspired by the immense success of deep learning, graph neural networks
(GNNs) are widely used to learn powerful node representations and have
demonstrated promising performance on different graph learning tasks. However,
most real-world graphs come with high-dimensional and sparse node
features, rendering the node representations learned by existing GNN
architectures less expressive. In this paper, we propose Feature
Interaction-aware Graph Neural Networks (FI-GNNs), a plug-and-play GNN
framework for learning node representations encoded with informative feature
interactions. Specifically, the proposed framework is able to highlight
informative feature interactions in a personalized manner and further learn
highly expressive node representations on feature-sparse graphs. Extensive
experiments on various datasets demonstrate the superior capability of FI-GNNs
for graph learning tasks.
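The abstract leaves the layer design open; as a loose illustration of what "feature interaction-aware" node representations could involve, the sketch below encodes pairwise feature interactions with the factorization-machine identity and then applies a mean-neighbour aggregation. This is a stand-in, not the FI-GNN architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sparse node features and a small adjacency matrix.
X = (rng.random((4, 6)) > 0.7).astype(float)     # 4 nodes, 6 sparse features
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

d = 3
V = rng.normal(scale=0.1, size=(6, d))           # per-feature latent factors

def pairwise_interactions(x):
    # Sum over all feature pairs of <V_i, V_j> x_i x_j per latent dimension,
    # using the standard factorization-machine identity.
    s = x @ V
    return 0.5 * (s ** 2 - (x ** 2) @ (V ** 2))

H = np.stack([pairwise_interactions(x) for x in X])  # interaction-aware features

# Simple mean-neighbour GNN aggregation on top of the interaction encoding.
A_hat = A + np.eye(len(A))
H_agg = (A_hat / A_hat.sum(axis=1, keepdims=True)) @ H
print(H_agg.shape)    # (4, 3) node representations
```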
Deep Layered Learning in MIR
Deep learning has boosted the performance of many music information retrieval
(MIR) systems in recent years. Yet, the complex hierarchical arrangement of
music makes end-to-end learning hard for some MIR tasks, since a very deep and
flexible processing chain is necessary to model some aspects of music audio.
Representations involving tones, chords, and rhythm are fundamental building
blocks of music. This paper discusses how these can be used as intermediate
targets and priors in MIR to deal with structurally complex learning problems,
with learning modules connected in a directed acyclic graph. It is suggested
that this strategy for inference, referred to as deep layered learning (DLL),
can help generalization by (1) enforcing the validity and invariance of
intermediate representations during processing, and (2) letting the
inferred representations establish the musical organization to support
higher-level invariant processing. A background to modular music processing is
provided together with an overview of previous publications. Relevant concepts
from information processing, such as pruning, skip connections, and performance
supervision are reviewed within the context of DLL. A test is finally
performed, showing how layered learning affects pitch tracking. It is indicated
that offsets, in particular, are easier to detect when guided by extracted framewise
fundamental frequencies. Comment: Submitted for publication. Feedback always welcome.
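A minimal sketch of the layered-learning wiring the paper describes, with placeholder modules: a first module infers a framewise fundamental-frequency track, and a second module consumes that intermediate representation to propose note boundaries. Both module bodies are invented for illustration; only the layered wiring reflects the DLL idea.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for an audio feature sequence (frames x spectral bins).
frames = rng.random((100, 40))

def framewise_f0_module(feats):
    # First learning module: predicts a framewise fundamental frequency
    # (here faked as the arg-max bin mapped to a Hz value).
    return 80.0 + 10.0 * feats.argmax(axis=1)

def note_segmentation_module(feats, f0_track):
    # Second module: its input includes the intermediate representation
    # produced by the first module, in line with the DLL idea that inferred
    # representations guide higher-level inference (onset/offset detection).
    change = np.abs(np.diff(f0_track)) > 15.0
    return np.flatnonzero(change)          # candidate note boundaries

f0 = framewise_f0_module(frames)
boundaries = note_segmentation_module(frames, f0)
print(len(boundaries), "candidate note boundaries")
```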
A Regularized Attention Mechanism for Graph Attention Networks
Machine learning models that can exploit the inherent structure in data have
gained prominence. In particular, there is a surge in deep learning solutions
for graph-structured data, due to its widespread applicability in several
fields. Graph attention networks (GAT), a recent addition to the broad class of
feature learning models in graphs, utilizes the attention mechanism to
efficiently learn continuous vector representations for semi-supervised
learning problems. In this paper, we perform a detailed analysis of GAT models,
and present interesting insights into their behavior. In particular, we show
that the models are vulnerable to heterogeneous rogue nodes and hence propose
novel regularization strategies to improve the robustness of GAT models. Using
benchmark datasets, we demonstrate performance improvements on semi-supervised
learning, using the proposed robust variant of GAT.
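For context, the sketch below computes single-head GAT-style attention coefficients for one node and adds a generic entropy penalty on the attention distribution as one example of regularizing the mechanism; the paper's actual regularization strategies are not given in the abstract and may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

# One GAT-style attention head over a node and its neighbours
# (illustrative dimensions).
h_i = rng.normal(size=8)                     # centre node features
H_nb = rng.normal(size=(5, 8))               # 5 neighbour feature vectors

W = rng.normal(scale=0.1, size=(8, 8))       # shared linear transform
a = rng.normal(scale=0.1, size=16)           # attention vector

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

z_i = h_i @ W
scores = leaky_relu(np.array([a @ np.concatenate([z_i, h_j @ W]) for h_j in H_nb]))
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                          # attention coefficients

# Illustrative regulariser: the entropy of the attention distribution,
# added (with a small weight) to the supervised loss to control how
# diffusely attention is spread over neighbours.
entropy_penalty = -(alpha * np.log(alpha + 1e-12)).sum()
loss = 0.0 + 0.1 * entropy_penalty            # 0.0 stands in for the task loss
print(alpha, entropy_penalty)
```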
Deep Feature Learning for Graphs
This paper presents a general graph representation learning framework called
DeepGL for learning deep node and edge representations from large (attributed)
graphs. In particular, DeepGL begins by deriving a set of base features (e.g.,
graphlet features) and automatically learns a multi-layered hierarchical graph
representation where each successive layer leverages the output from the
previous layer to learn higher-order features. Contrary to previous work,
DeepGL learns relational functions (each representing a feature) that
generalize across networks and are therefore useful for graph-based transfer
learning tasks. Moreover, DeepGL naturally supports attributed graphs, learns
interpretable features, and is space-efficient (by learning sparse feature
vectors). In addition, DeepGL is expressive, flexible with many interchangeable
components, efficient with low time complexity, and scalable to large
networks via an efficient parallel implementation. Compared
with the state-of-the-art method, DeepGL is (1) effective for across-network
transfer learning tasks and attributed graph representation learning, (2)
space-efficient, requiring up to 6x less memory, (3) fast, with up to 182x
speedup in runtime performance, and (4) accurate, with an average improvement of
20% or more on many learning tasks.
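A rough sketch of the layered composition idea, assuming degree-based base features and mean/max neighbourhood operators as the relational functions; DeepGL's real operator set, graphlet base features, and feature-selection step are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy symmetric adjacency matrix; base features are degree plus a random
# attribute, standing in for DeepGL's graphlet-based base features.
A = (rng.random((8, 8)) > 0.7).astype(float)
A = np.triu(A, 1); A = A + A.T

base = np.stack([A.sum(axis=1), rng.random(8)], axis=1)   # layer-0 features

def relational_ops(A, F):
    """Compose higher-order features by applying relational functions
    (mean and max over each node's neighbourhood) to the previous layer."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    mean_nb = (A @ F) / deg
    max_nb = np.stack([F[A[i] > 0].max(axis=0) if A[i].any()
                       else np.zeros(F.shape[1]) for i in range(len(A))])
    return np.hstack([mean_nb, max_nb])

layers = [base]
for _ in range(2):                      # two further layers of composition
    layers.append(relational_ops(A, layers[-1]))

node_repr = np.hstack(layers)           # concatenated multi-layer representation
print(node_repr.shape)
```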
Heterogeneous Deep Graph Infomax
Graph representation learning aims to learn universal node representations that
preserve both node attributes and structural information. The derived node
representations can be used to serve various downstream tasks, such as node
classification and node clustering. When a graph is heterogeneous, the problem
becomes more challenging than the homogeneous graph node learning problem.
Inspired by emerging information-theoretic learning algorithms, in this paper we
propose Heterogeneous Deep Graph Infomax (HDGI), an unsupervised graph neural
network for heterogeneous graph representation learning. We use the meta-path
structure to analyze the semantic connections in heterogeneous graphs, and
utilize a graph convolution module and a semantic-level
attention mechanism to capture local representations. By maximizing
local-global mutual information, HDGI effectively learns high-level node
representations that can be utilized in downstream graph-related tasks.
Experiment results show that HDGI remarkably outperforms state-of-the-art
unsupervised graph representation learning methods on both classification and
clustering tasks. By feeding the learned representations into a parametric
model such as logistic regression, we even achieve performance on node
classification tasks comparable to that of state-of-the-art supervised
end-to-end GNN models.
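As a hedged sketch of the local-global mutual-information objective (in the Deep Graph Infomax style the abstract builds on), the snippet below scores real and corrupted node representations against a global summary with a bilinear discriminator. The mean-aggregation encoder is a placeholder for HDGI's meta-path convolution and semantic attention.

```python
import numpy as np

rng = np.random.default_rng(6)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy graph and features; a single mean-aggregation layer stands in for the
# meta-path based convolution + semantic attention encoder.
n, f, d = 10, 12, 16
A = (rng.random((n, n)) > 0.6).astype(float)
A = np.triu(A, 1); A = A + A.T + np.eye(n)
A_hat = A / A.sum(axis=1, keepdims=True)
X = rng.normal(size=(n, f))
W = rng.normal(scale=0.1, size=(f, d))

encode = lambda feats: np.maximum(A_hat @ feats @ W, 0)

H = encode(X)                               # real node representations
H_neg = encode(X[rng.permutation(n)])       # corrupted graph (shuffled features)

s = sigmoid(H.mean(axis=0))                 # global summary vector
M = rng.normal(scale=0.1, size=(d, d))      # bilinear discriminator weights

pos = sigmoid(H @ M @ s)                    # pushed towards 1 during training
neg = sigmoid(H_neg @ M @ s)                # pushed towards 0 during training

# Binary cross-entropy form of the local-global mutual information objective
# maximised by Deep Graph Infomax style methods such as HDGI.
loss = -(np.log(pos + 1e-12).mean() + np.log(1 - neg + 1e-12).mean())
print(loss)
```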