Reconstructing Markov processes from independent and anonymous experiments
We investigate the problem of exactly reconstructing, with high confidence and up to isomorphism, the ball of radius r centered at the starting state of a Markov process from independent and anonymous experiments. In an anonymous experiment, the states are visited according to the underlying transition probabilities, but no global state names are known: one can only recognize whether two states, reached within the same experiment, are the same. We prove quite tight bounds for such exact reconstruction in terms of both the number of experiments and their lengths. Keywords: Graph reconstruction; Random walk; Markov process; Local algorithm
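The notion of an anonymous experiment can be made concrete with a short sketch (the function name and the list-based transition matrix are illustrative, not from the paper): a walk is simulated according to the true transition probabilities, but each state is reported only by the order of its first appearance within that walk, which is exactly the within-experiment equality information the abstract describes.

```python
import random

def anonymous_experiment(P, start, length, rng=random):
    """Run one anonymous experiment of the given length on a Markov
    process with row-stochastic transition matrix P (nested lists),
    starting from `start`.  True state names are hidden: each visited
    state is reported by the order of its first appearance within this
    walk, so one can tell when the walk revisits a state, but not
    which global state it is."""
    first_seen = {}          # true state -> anonymous label
    trace = []
    state = start
    for _ in range(length):
        if state not in first_seen:
            first_seen[state] = len(first_seen)
        trace.append(first_seen[state])
        # step according to the underlying transition probabilities
        state = rng.choices(range(len(P)), weights=P[state])[0]
    return trace
```

Two independent experiments from the same start produce traces that can only be compared up to this relabeling, which is what makes reconstruction up to isomorphism the natural goal.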
GraLSP: Graph Neural Networks with Local Structural Patterns
Graph neural networks (GNNs) have only recently been adopted for graph
representation learning; among them, methods based on aggregating features
within the neighborhood of a node have achieved great success. Despite such
achievements, however, GNNs exhibit deficiencies in identifying common
structural patterns which, unfortunately, play significant roles in various
network phenomena. In this paper, we propose
GraLSP, a GNN framework which explicitly incorporates local structural patterns
into the neighborhood aggregation through random anonymous walks. Specifically,
we capture local graph structures via random anonymous walks, powerful and
flexible tools that represent structural patterns. The walks are then fed into
the feature aggregation, where we design various mechanisms to address the
impact of structural features, including adaptive receptive radius, attention
and amplification. In addition, we design objectives that capture similarities
between structures and are optimized jointly with node proximity objectives.
By adequately leveraging structural patterns, our model outperforms
competitive counterparts in various prediction tasks on multiple datasets.
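The random anonymous walks that GraLSP builds on follow the standard anonymous-walk construction: every node in a walk is replaced by the index of its first occurrence, so only the revisit pattern survives. A minimal sketch (the function name is illustrative):

```python
def anonymous_walk(walk):
    """Map a walk (a sequence of node ids) to its anonymous walk:
    each node is replaced by the index of its first occurrence in the
    walk, so the result captures the structural pattern of revisits
    while discarding node identities."""
    first = {}               # node id -> index of first occurrence
    pattern = []
    for v in walk:
        if v not in first:
            first[v] = len(first)
        pattern.append(first[v])
    return pattern
```

Two walks over entirely different nodes map to the same pattern whenever their revisit structure is identical, which is what lets anonymous walks act as node-identity-free descriptors of local structure.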
GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training
Graph representation learning has emerged as a powerful technique for
addressing real-world problems. Various downstream graph learning tasks have
benefited from its recent developments, such as node classification, similarity
search, and graph classification. However, prior work on graph representation
learning has focused on domain-specific problems and trains a dedicated model
for each graph dataset, which is usually non-transferable to out-of-domain data.
Inspired by the recent advances in pre-training from natural language
processing and computer vision, we design Graph Contrastive Coding (GCC) -- a
self-supervised graph neural network pre-training framework -- to capture the
universal network topological properties across multiple networks. We design
GCC's pre-training task as subgraph instance discrimination in and across
networks and leverage contrastive learning to empower graph neural networks to
learn the intrinsic and transferable structural representations. We conduct
extensive experiments on three graph learning tasks and ten graph datasets. The
results show that GCC pre-trained on a collection of diverse datasets can
achieve performance competitive with or better than its task-specific and
trained-from-scratch counterparts. This suggests that the pre-training and
fine-tuning paradigm presents great potential for graph representation
learning. Comment: 11 pages, 5 figures, to appear in the KDD 2020 proceedings.
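The contrastive-learning objective behind subgraph instance discrimination can be sketched with an InfoNCE-style loss: the embedding of one view of a subgraph is pulled toward the embedding of another view of the same subgraph and pushed away from embeddings of other subgraphs. The plain-Python formulation and names below are illustrative, not GCC's implementation:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(q, k_pos, k_negs, tau=0.07):
    """InfoNCE contrastive loss for one anchor.  q is the embedding of
    a subgraph instance, k_pos the embedding of another view of the
    same subgraph (the positive), and k_negs embeddings of other
    subgraphs (the negatives).  tau is the softmax temperature."""
    logits = [dot(q, k_pos) / tau] + [dot(q, k) / tau for k in k_negs]
    # numerically stable log-sum-exp, then negative log-softmax
    # of the positive pair
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[0]
```

Minimising this loss over many anchors is what drives the encoder toward representations in which views of the same subgraph agree, i.e. toward transferable structural features rather than dataset-specific ones.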
Are Graph Convolutional Networks Fully Exploiting Graph Structure?
Graph Convolutional Networks (GCNs) generalize the idea of deep convolutional
networks to graphs, and achieve state-of-the-art results on many graph related
tasks. GCNs rely on the graph structure to define an aggregation strategy where
each node updates its representation by combining information from its
neighbours. In this paper we formalize four levels of structural information
injection, and use them to show that GCNs ignore important long-range
dependencies embedded in the overall topology of a graph. Our proposal includes
a novel regularization technique based on random walks with restart, called
RWRReg, which encourages the network to encode long-range information into the
node embeddings. RWRReg is further supported by our theoretical analysis, which
demonstrates that random walks with restart empower aggregation-based
strategies (i.e., the Weisfeiler-Leman algorithm) with long-range information.
We conduct an extensive experimental analysis studying the change in
performance of several state-of-the-art models given by the four levels of
structural information injection, on both transductive and inductive tasks. The
results show that the lack of long-range structural information greatly affects
performance on all considered models, and that the information extracted by
random walks with restart, and exploited by RWRReg, gives an average accuracy
improvement on all considered tasks.
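Random walk with restart, the measure underlying RWRReg, can be sketched by power iteration on the walker's visiting distribution (the adjacency-list representation and function name here are illustrative):

```python
def rwr_scores(adj, seed, restart=0.15, iters=100):
    """Random walk with restart from `seed`: at each step the walker
    teleports back to the seed with probability `restart`, and
    otherwise moves to a uniformly random neighbour.  adj is an
    adjacency list (adj[u] = list of neighbours of u).  Returns the
    (approximate) stationary visiting probabilities, which encode
    long-range proximity of every node to the seed."""
    n = len(adj)
    p = [0.0] * n
    p[seed] = 1.0
    for _ in range(iters):
        nxt = [0.0] * n
        nxt[seed] = restart          # teleport mass returns to the seed
        for u in range(n):
            if adj[u]:
                share = (1.0 - restart) * p[u] / len(adj[u])
                for v in adj[u]:
                    nxt[v] += share
            else:
                nxt[seed] += (1.0 - restart) * p[u]  # dangling node
        p = nxt
    return p
```

Because the restart keeps the walk anchored at the seed while still letting mass diffuse arbitrarily far, the resulting scores carry exactly the long-range topological information that purely local neighbourhood aggregation misses.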
A Network Science perspective of Graph Convolutional Networks: A survey
The mining and exploitation of graph structural information have been the
focal points in the study of complex networks. Traditional structural measures
in Network Science focus on the analysis and modelling of complex networks from
the perspective of network structure, such as the centrality measures, the
clustering coefficient, and motifs and graphlets, and they have become basic
tools for studying and understanding graphs. In comparison, graph neural
networks, especially graph convolutional networks (GCNs), are particularly
effective at integrating node features into graph structures via neighbourhood
aggregation and message passing, and have been shown to significantly improve
the performances in a variety of learning tasks. These two classes of methods
are, however, typically treated separately with limited references to each
other. In this work, aiming to establish relationships between them, we provide
a network science perspective of GCNs. Our novel taxonomy classifies GCNs from
three structural information angles, i.e., the layer-wise message aggregation
scope, the message content, and the overall learning scope. Moreover, as a
prerequisite for reviewing GCNs via a network science perspective, we also
summarise traditional structural measures and propose a new taxonomy for them.
Finally and most importantly, we draw connections between traditional
structural approaches and graph convolutional networks, and discuss potential
directions for future research.
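As an example of the traditional structural measures the survey discusses, the local clustering coefficient of a node can be computed directly from an adjacency structure (a minimal sketch; the names are illustrative):

```python
def clustering_coefficient(adj, u):
    """Local clustering coefficient of node u: the fraction of pairs
    of u's neighbours that are themselves connected.  adj maps each
    node to the set of its neighbours (undirected graph)."""
    nbrs = adj[u]
    k = len(nbrs)
    if k < 2:
        return 0.0               # undefined for degree < 2; use 0
    # count edges among u's neighbours (each unordered pair once)
    links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
    return 2.0 * links / (k * (k - 1))
```

A value of 1.0 means the neighbourhood is a clique and 0.0 that no two neighbours are adjacent; such hand-crafted quantities are precisely the structural signals that GCN-style neighbourhood aggregation captures only implicitly, which is the gap the survey's taxonomy examines.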