A Degeneracy Framework for Scalable Graph Autoencoders
In this paper, we present a general framework to scale graph autoencoders
(AE) and graph variational autoencoders (VAE). This framework leverages graph
degeneracy concepts to train models only from a dense subset of nodes instead
of using the entire graph. Together with a simple yet effective propagation
mechanism, our approach significantly improves scalability and training speed
while preserving performance. We evaluate and discuss our method on several
variants of existing graph AE and VAE, providing the first application of these
models to large graphs with up to millions of nodes and edges. We achieve
empirically competitive results w.r.t. several popular scalable node embedding
methods, which emphasizes the relevance of pursuing further research towards
more scalable graph AE and VAE.
Comment: International Joint Conference on Artificial Intelligence (IJCAI 2019)
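To make the idea concrete, below is a minimal Python sketch of the degeneracy approach. The routine `embed(subgraph, dim)` is a hypothetical stand-in for any graph AE/VAE encoder, and the propagation rule used here (plain neighbour averaging) is a deliberate simplification that the paper's actual mechanism may refine:

```python
# Minimal sketch: train the encoder only on the densest k-core, then
# propagate embeddings outwards to the rest of the graph.
# `embed(subgraph, dim)` is a hypothetical placeholder for any graph
# AE/VAE encoder; it is assumed to return one dim-vector per node,
# aligned with list(subgraph.nodes()).
import networkx as nx
import numpy as np

def degeneracy_embed(G, embed, dim=16):
    core = nx.core_number(G)                     # k-core index of each node
    k_max = max(core.values())
    dense = [n for n, k in core.items() if k == k_max]
    sub = G.subgraph(dense)
    Z = dict(zip(sub.nodes(), embed(sub, dim)))  # train only on the dense core
    # Propagate from denser to sparser cores: each remaining node takes
    # the average embedding of its already-embedded neighbours.
    for k in range(k_max - 1, -1, -1):
        for n in [n for n, c in core.items() if c == k]:
            nbrs = [Z[m] for m in G.neighbors(n) if m in Z]
            Z[n] = np.mean(nbrs, axis=0) if nbrs else np.zeros(dim)
    return Z
```

The training cost is then governed by the size of the densest core, which can be far smaller than the full graph.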
Null Ain't Dull: New Perspectives on Motor Cortex.
Classical work has viewed primary motor cortex (M1) as a controller of muscle and body dynamics. A recent brain-computer interface (BCI) experiment suggests a new, complementary perspective: M1 is itself a dynamical system under active control of other circuits.
Gravity-Inspired Graph Autoencoders for Directed Link Prediction
Graph autoencoders (AE) and variational autoencoders (VAE) recently emerged
as powerful node embedding methods. In particular, graph AE and VAE were
successfully leveraged to tackle the challenging link prediction problem,
aiming at figuring out whether some pairs of nodes from a graph are connected
by unobserved edges. However, these models focus on undirected graphs and
therefore ignore the potential direction of the link, which is limiting for
numerous real-life applications. In this paper, we extend the graph AE and VAE
frameworks to address link prediction in directed graphs. We present a new
gravity-inspired decoder scheme that can effectively reconstruct directed
graphs from a node embedding. We empirically evaluate our method on three
different directed link prediction tasks, for which standard graph AE and VAE
perform poorly. We achieve competitive results on three real-world graphs,
outperforming several popular baselines.
Comment: ACM International Conference on Information and Knowledge Management (CIKM 2019)
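To illustrate the decoder family this describes, here is a minimal NumPy sketch of a gravity-style decoder. It is a sketch under the assumption that each node carries a latent position z_i plus a scalar "mass" m_i; the paper's exact parametrisation may differ, and the weighting `lam` and numerical floor `eps` are introduced here for illustration:

```python
# Gravity-style decoder sketch: the probability of a directed edge
# i -> j depends on the mass of the *target* node j and on the distance
# between the two latent positions, which breaks symmetry.
import numpy as np

def gravity_decoder(Z, m, lam=1.0, eps=1e-8):
    """Z: (n, d) latent positions, m: (n,) node masses.
    Returns an (n, n) matrix of directed edge probabilities."""
    # pairwise squared Euclidean distances between embeddings
    sq = np.sum(Z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    d2 = np.maximum(d2, eps)          # floor; diagonal (self-loop) entries
                                      # are not meaningful in this sketch
    logits = m[None, :] - lam * np.log(d2)
    return 1.0 / (1.0 + np.exp(-logits))   # elementwise sigmoid
```

Directedness falls out naturally: swapping i and j changes which node's mass enters the logit, so the reconstructed adjacency matrix is asymmetric in general.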
Neuroscience out of control: control-theoretic perspectives on neural circuit dynamics.
A major challenge in systems neuroscience is to understand how the dynamics of neural circuits give rise to behaviour. Analysis of complex dynamical systems is also at the heart of control engineering, where it is central to the design of robust control strategies. Although a rich engineering literature has grown over decades to facilitate the analysis of such systems, little of it has percolated into neuroscience so far. Here, we give a brief introduction to a number of core control-theoretic concepts that provide useful perspectives on neural circuit dynamics. We introduce important mathematical tools related to these concepts, and establish connections to neural circuit analysis, focusing on a number of themes that have arisen from the modern 'state-space' view on neural population dynamics.
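As one small, self-contained illustration of such a tool (a toy example, not taken from the paper): the controllability Gramian of a linear state-space model dx/dt = A x + B u, obtained by solving a Lyapunov equation, quantifies how easily inputs u can steer the circuit's state along each direction:

```python
# Toy example: controllability Gramian of a stable linear system.
# Large eigenvalues of W correspond to directions that inputs can
# reach cheaply; small ones to directions that are hard to control.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, k = 5, 2
A = rng.normal(size=(n, n)) - 3.0 * np.eye(n)   # shifted to ensure stability
B = rng.normal(size=(n, k))                      # input (e.g. upstream) weights

# W solves A W + W A^T + B B^T = 0
W = solve_continuous_lyapunov(A, -B @ B.T)
W = 0.5 * (W + W.T)                              # symmetrise numerically
print(np.linalg.eigvalsh(W))
```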
Sampling-based probabilistic inference emerges from learning in neural circuits with a cost on reliability
Neural responses in the cortex change over time both systematically, due to
ongoing plasticity and learning, and seemingly randomly, due to various sources
of noise and variability. Most previous work considered each of these
processes, learning and variability, in isolation -- here we study neural
networks exhibiting both and show that their interaction leads to the emergence
of powerful computational properties. We trained neural networks on classical
unsupervised learning tasks, in which the objective was to represent their
inputs in an efficient, easily decodable form, with an additional cost for
neural reliability which we derived from basic biophysical considerations. This
cost on reliability introduced a tradeoff between energetically cheap but
inaccurate representations and energetically costly but accurate ones. Despite
the learning tasks being non-probabilistic, the networks solved this tradeoff
by developing a probabilistic representation: neural variability represented
samples from statistically appropriate posterior distributions that would
result from performing probabilistic inference over their inputs. We provide an
analytical understanding of this result by revealing a connection between the
cost of reliability, and the objective for a state-of-the-art Bayesian
inference strategy: variational autoencoders. We show that the same cost leads
to the emergence of increasingly accurate probabilistic representations as
networks become more complex, from single-layer feed-forward, through
multi-layer feed-forward, to recurrent architectures. Our results provide
insights into why neural responses in sensory areas show signatures of
sampling-based probabilistic representations, and may inform future deep
learning algorithms and their implementation in stochastic low-precision
computing systems.
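A stripped-down, analytically solvable version of this tradeoff can be written in a few lines. The toy below is a constructed example, not the paper's network model: a linear encoder z = w*x + noise with noise variance s2, a linear decoder xhat = v*z, and a reliability cost proportional to -log(s2), mirroring the term in a Gaussian VAE's KL divergence that penalises small posterior variance. The optimal noise level comes out strictly positive:

```python
# Toy reliability/accuracy tradeoff (constructed example, not the
# paper's model). Input x ~ N(0, sig_x2); encoder z = w*x + noise with
# Var[noise] = s2; decoder xhat = v*z. Precision (small s2) is costly.
import numpy as np

def loss(s2, w=1.0, v=1.0, sig_x2=1.0, beta=0.1):
    mse = (1 - v * w) ** 2 * sig_x2 + v ** 2 * s2  # expected reconstruction error
    reliability_cost = -beta * np.log(s2)           # reliability is expensive
    return mse + reliability_cost

s2_grid = np.linspace(1e-3, 1.0, 1000)
best = s2_grid[np.argmin(loss(s2_grid))]
print(best)            # ~0.1: matches the analytic optimum beta / v**2
```

With these defaults the objective reduces to s2 - 0.1*log(s2), minimised at s2 = beta / v**2 = 0.1, so the cost on reliability by itself pushes the network toward a nonzero level of response variability.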
Nonnormal amplification in random balanced neuronal networks
In dynamical models of cortical networks, the recurrent connectivity can
amplify the input given to the network in two distinct ways. One is induced by
the presence of near-critical eigenvalues in the connectivity matrix W,
producing large but slow activity fluctuations along the corresponding
eigenvectors (dynamical slowing). The other relies on W being nonnormal, which
allows the network activity to make large but fast excursions along specific
directions. Here we investigate the tradeoff between nonnormal amplification
and dynamical slowing in the spontaneous activity of large random neuronal
networks composed of excitatory and inhibitory neurons. We use a Schur
decomposition of W to separate the two amplification mechanisms. Assuming
linear stochastic dynamics, we derive an exact expression for the expected
amount of purely nonnormal amplification. We find that amplification is very
limited if dynamical slowing must be kept weak. We conclude that, to achieve
strong transient amplification with little slowing, the connectivity must be
structured. We show that unidirectional connections between neurons of the same
type together with reciprocal connections between neurons of different types,
allow for amplification already in the fast dynamical regime. Finally, our
results also shed light on the differences between balanced networks in which
inhibition exactly cancels excitation, and those where inhibition dominates.
Comment: 13 pages, 7 figures
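The Schur-based separation described above can be sketched directly. The example below is a constructed one (the E/I column structure and scaling are assumptions, not the paper's exact network statistics): writing W = Z T Z* with T upper triangular, the diagonal of T carries the eigenvalues responsible for dynamical slowing, while the strictly upper-triangular part of T is the purely feedforward structure behind nonnormal transient amplification:

```python
# Sketch: separate normal and feedforward parts of a random balanced
# E-I connectivity matrix via its (complex) Schur decomposition.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
n = 200                                    # n excitatory + n inhibitory neurons
w = 1.0 / np.sqrt(n)
E = np.abs(rng.normal(scale=w, size=(2 * n, n)))    # excitatory columns >= 0
I = -np.abs(rng.normal(scale=w, size=(2 * n, n)))   # inhibitory columns <= 0
W = np.hstack([E, I])                      # row sums balance on average

T, Zs = schur(W, output="complex")         # W = Zs @ T @ Zs.conj().T
normal_part = np.diag(np.diag(T))          # eigenvalues: dynamical slowing
feedforward = np.triu(T, k=1)              # source of nonnormal amplification
print("spectral abscissa:", np.diag(T).real.max())
print("||feedforward|| / ||diagonal||:",
      np.linalg.norm(feedforward) / np.linalg.norm(normal_part))
```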