The exact evaluation of hexagonal spin-networks and topological quantum neural networks
The physical scalar product between spin-networks has been shown to be a
fundamental tool in the theory of topological quantum neural networks (TQNNs),
a class of quantum neural networks previously introduced by the authors in the
context of quantum machine learning. However, the effective evaluation of this
scalar product remains a bottleneck for the applicability of the theory. We
introduce an algorithm for the evaluation of the physical scalar product,
defined by Noui and Perez, between spin-networks with hexagonal shape. By means
of recoupling theory and the properties of Haar integration, we obtain an
efficient algorithm and provide several proofs regarding the main steps. We
investigate the behavior of the TQNN evaluations on certain classes of
spin-networks under classical and quantum recoupling. All results can be
independently reproduced through the "idea.deploy" framework
(https://github.com/lullimat/idea.deploy).
Comment: 15 pages (2 columns, 12+3), 16 figures. Comments are welcome
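The abstract does not include the algorithm itself. As a point of reference for
its recoupling-theory ingredient, the sketch below evaluates a single Wigner 6j
symbol, the elementary coefficient that recoupling manipulations of
spin-networks compose. The spin values are illustrative assumptions; this is
not the Noui-Perez scalar product or the authors' algorithm.

```python
# Minimal sketch of the recoupling ingredient: evaluating a Wigner 6j
# symbol exactly with SymPy. The spins here are illustrative; this is NOT
# the Noui-Perez scalar product, only the kind of elementary coefficient
# that recoupling-theory evaluations compose.
from sympy import S
from sympy.physics.wigner import wigner_6j

half = S(1) / 2  # exact rational spin-1/2

# {j1 j2 j12; j3 j j23}: coefficient for re-associating (j1 j2) j3 -> j1 (j2 j3)
coeff = wigner_6j(half, half, 1, half, half, 1)
print(coeff)  # exact symbolic value
```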
Topological Deep Learning: Going Beyond Graph Data
Topological deep learning is a rapidly growing field that pertains to the
development of deep learning models for data supported on topological domains
such as simplicial complexes, cell complexes, and hypergraphs, which generalize
many domains encountered in scientific computations. In this paper, we present
a unifying deep learning framework built upon a richer data structure that
includes widely adopted topological domains.
Specifically, we first introduce combinatorial complexes, a novel type of
topological domain. Combinatorial complexes can be seen as generalizations of
graphs that maintain certain desirable properties. Similar to hypergraphs,
combinatorial complexes impose no constraints on the set of relations. In
addition, combinatorial complexes permit the construction of hierarchical
higher-order relations, analogous to those found in simplicial and cell
complexes. Thus, combinatorial complexes generalize and combine useful traits
of both hypergraphs and cell complexes, which have emerged as two promising
abstractions that facilitate the generalization of graph neural networks to
topological spaces.
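To make the definition concrete, here is a minimal sketch of a combinatorial
complex as a set of cells equipped with a rank function; the monotonicity check
on ranks (containment never decreases rank) is exactly what distinguishes it
from an arbitrary hypergraph. The class and method names are hypothetical
illustrations, not the paper's reference implementation.

```python
# Hypothetical minimal combinatorial complex: cells are frozensets of
# vertices, and a rank function assigns each cell a non-negative integer.
# Defining constraint: if cell x is contained in cell y, then
# rank(x) <= rank(y). Hypergraphs drop the ranks; simplicial and cell
# complexes constrain the cells themselves.
class CombinatorialComplex:
    def __init__(self):
        self.rank = {}  # frozenset of vertices -> non-negative integer rank

    def add_cell(self, vertices, rank):
        cell = frozenset(vertices)
        # enforce rank monotonicity against every existing cell
        for other, r in self.rank.items():
            if other < cell and r > rank:
                raise ValueError(f"contained cell {set(other)} has rank {r} > {rank}")
            if cell < other and rank > r:
                raise ValueError(f"new cell would outrank containing cell {set(other)}")
        self.rank[cell] = rank

    def skeleton(self, k):
        """All cells of rank k."""
        return [c for c, r in self.rank.items() if r == k]

cc = CombinatorialComplex()
for v in "abcd":
    cc.add_cell({v}, rank=0)
cc.add_cell({"a", "b"}, rank=1)            # an edge-like cell
cc.add_cell({"a", "b", "c"}, rank=2)       # hierarchical higher-order cell
cc.add_cell({"a", "b", "c", "d"}, rank=2)  # ranks need not equal size - 1
print([set(c) for c in cc.skeleton(2)])
```

Dropping the rank function recovers a hypergraph, while imposing downward
closure on cells and fixing rank to cardinality minus one recovers a simplicial
complex, which is the sense in which combinatorial complexes combine the two.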
Second, building upon combinatorial complexes and their rich combinatorial
and algebraic structure, we develop a general class of message-passing
combinatorial complex neural networks (CCNNs), focusing primarily on
attention-based CCNNs. We characterize permutation and orientation
equivariances of CCNNs, and discuss pooling and unpooling operations within
CCNNs in detail.
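The abstract does not spell out the update rule. The following is a generic
single-head, GAT-style attention update over one neighborhood matrix, in the
spirit of the attention-based CCNNs described here but not the paper's exact
scheme; in a CCNN the matrix A would be induced by incidence relations between
cells of possibly different ranks, and all dimensions and parameters below are
assumptions.

```python
# One generic attention-based message-passing step over a neighborhood
# matrix, sketched with NumPy for cells of a single rank. Not the paper's
# exact CCNN update; a toy 0/1 matrix stands in for the incidence-induced
# neighborhoods a CCNN would use.
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 6, 8, 16

X = rng.normal(size=(n, d_in))             # features on cells of one rank
A = rng.integers(0, 2, size=(n, n))        # neighborhood among those cells
np.fill_diagonal(A, 1)                     # each cell attends to itself

W = 0.1 * rng.normal(size=(d_in, d_out))   # weights (learned in practice)
a_src = 0.1 * rng.normal(size=d_out)       # attention params, source half
a_dst = 0.1 * rng.normal(size=d_out)       # attention params, target half

H = X @ W                                  # transformed cell features
# attention logit e[i, j] scores the message from cell j to cell i
e = np.tanh((H @ a_dst)[:, None] + (H @ a_src)[None, :])
e = np.where(A == 1, e, -np.inf)           # mask non-neighbors
alpha = np.exp(e - e.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)  # softmax over each neighborhood
X_new = np.maximum(alpha @ H, 0.0)         # aggregate messages, then ReLU
print(X_new.shape)                         # (n, d_out)
```

Permutation equivariance holds here because relabeling the cells permutes the
rows of X and of A consistently, which permutes the rows of X_new the same way.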
Third, we evaluate the performance of CCNNs on tasks related to mesh shape
analysis and graph learning. Our experiments show that CCNNs perform
competitively compared with state-of-the-art deep learning models specifically
tailored to the same tasks. Our findings demonstrate the advantages of
incorporating higher-order relations into deep learning models across
different applications.
General Theory of Topological Explanations and Explanatory Asymmetry
In this paper, I present a general theory of topological explanations and illustrate its fruitfulness by showing how it accounts for explanatory asymmetry. My argument is developed in three steps. In the first step, I show what it is for some topological property A to explain some physical or dynamical property B. Based on that, I derive three key criteria of successful topological explanations: a criterion concerning the facticity of topological explanations, i.e. what makes them true of a particular system; a criterion for describing counterfactual dependencies in two explanatory modes, i.e. the vertical and the horizontal; and, finally, a third, perspectival one that tells us when to use the vertical and when to use the horizontal mode. In the second step, I show how this general theory of topological explanations accounts for explanatory asymmetry in both the vertical and horizontal explanatory modes. Finally, in the third step, I argue that this theory is universally applicable across the biological sciences, which helps to unify essential concepts of biological networks.
Developmental time windows for axon growth influence neuronal network topology
Early brain connectivity development consists of multiple stages: birth of
neurons, their migration and the subsequent growth of axons and dendrites. Each
stage occurs within a certain period of time depending on types of neurons and
cortical layers. Whether synapses form through axons that start growing at
similar times for all neurons (strongly overlapping time windows) or at
different time points (weakly overlapping) may affect the topological and
spatial properties of neuronal networks. Here, we explore the extreme cases of
axon formation, focusing on short-distance connectivity during early
development: axon growth either starts at the same time for all neurons
(parallel, i.e. maximally overlapped time windows) or occurs for each neuron
separately, one neuron after another (serial, i.e. no overlap in time
windows). For both
cases, the number of potential and established synapses remained comparable.
Topological and spatial properties, however, differed: neurons that started
axon growth early on in serial growth achieved higher out-degrees, higher local
efficiency, and longer axon lengths, whereas parallel growth produced more
homogeneous connectivity patterns across neurons. Second, connection probability
decreased more rapidly with distance between neurons for parallel growth than
for serial growth. Third, bidirectional connections were more numerous for
parallel growth. Finally, we tested our predictions with C. elegans data.
Together, this indicates that time windows for axon growth influence the
topological and spatial properties of neuronal networks, opening the
possibility of estimating developmental mechanisms a posteriori from the
network properties of a developed network.
Comment: Biol Cybern. 2015 Jan 30. [Epub ahead of print]
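As a toy illustration of the serial-versus-parallel contrast studied here, the
sketch below lets neurons claim a limited number of incoming synapse slots on
nearby targets, either one neuron at a time (serial) or round-robin (parallel).
The slot-competition mechanism and every parameter are assumptions of this
sketch, not the paper's model; it only reproduces the qualitative effect that
early serial growers reach higher out-degrees while parallel growth spreads
connectivity more evenly.

```python
# Toy serial vs parallel axon-growth time windows. All numbers (neuron
# count, slot capacities, growth radius) are arbitrary assumptions made
# for illustration, not the paper's model.
import numpy as np

rng = np.random.default_rng(1)
n, cap_in, max_out, radius = 100, 5, 15, 0.25
pos = rng.uniform(0, 1, size=(n, 2))
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
order = [np.argsort(dist[i]) for i in range(n)]   # nearest targets first

def claim_one(i, free, conn, out):
    """Neuron i forms one synapse on the nearest free target within reach."""
    if out[i] >= max_out:
        return
    for j in order[i]:
        if dist[i, j] > radius:
            return                     # all reachable targets exhausted
        if j == i or (i, j) in conn:
            continue
        if free[j] > 0:                # target still has an incoming slot
            free[j] -= 1
            conn.add((i, j))
            out[i] += 1
            return

def grow(schedule):
    """Run single growth steps in the given temporal order."""
    free, conn = np.full(n, cap_in), set()
    out = np.zeros(n, dtype=int)
    for i in schedule:
        claim_one(i, free, conn, out)
    return out

# serial: each neuron completes all growth before the next one starts
serial = grow([i for i in range(n) for _ in range(max_out)])
# parallel: all neurons grow together, one synapse per round
parallel = grow([i for _ in range(max_out) for i in rng.permutation(n)])

print("serial out-degree, early vs late growers:",
      serial[:10].mean(), serial[-10:].mean())
print("out-degree std, serial:", serial.std(), "parallel:", parallel.std())
```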
Topological exploration of artificial neuronal network dynamics
One of the paramount challenges in neuroscience is to understand the dynamics
of individual neurons and how they give rise to network dynamics when
interconnected. Historically, researchers have resorted to graph theory,
statistics, and statistical mechanics to describe the spatiotemporal structure
of such network dynamics. Our novel approach employs tools from algebraic
topology to characterize the global properties of network structure and
dynamics.
We propose a method based on persistent homology to automatically classify
network dynamics using topological features of spaces built from various
spike-train distances. We investigate the efficacy of our method by simulating
activity in three small artificial neural networks with different sets of
parameters, giving rise to dynamics that can be classified into four regimes.
We then compute three measures of spike train similarity and use persistent
homology to extract topological features that are fundamentally different from
those used in traditional methods. Our results show that a machine learning
classifier trained on these features can accurately predict the regime of the
network it was trained on and also generalize to other networks that were not
presented during training. Moreover, we demonstrate that using features
extracted from multiple spike-train distances systematically improves the
performance of our method.
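A minimal sketch of the pipeline described above, assuming ripser and
scikit-learn are available: spike trains are turned into a pairwise distance
matrix, persistent homology of that metric space yields summary features, and a
classifier predicts the regime. The binned-count distance stands in for proper
spike-train metrics (e.g. van Rossum or Victor-Purpura), and the two Poisson
"regimes" are placeholders for simulated network dynamics.

```python
# Sketch: spike trains -> distance matrix -> persistent homology ->
# summary features -> classifier. Distances, features, and the toy
# "regimes" are all simplifying assumptions, not the paper's choices.
import numpy as np
from ripser import ripser
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def spike_trains(rate, n_neurons=30, t_max=10.0):
    """Homogeneous Poisson spike trains, one per neuron."""
    return [np.sort(rng.uniform(0, t_max, rng.poisson(rate * t_max)))
            for _ in range(n_neurons)]

def distance_matrix(trains, t_max=10.0, n_bins=50):
    """Euclidean distance between binned spike-count vectors (a simple
    stand-in for van Rossum / Victor-Purpura spike-train distances)."""
    counts = np.array([np.histogram(t, bins=n_bins, range=(0, t_max))[0]
                       for t in trains], dtype=float)
    return np.linalg.norm(counts[:, None] - counts[None, :], axis=-1)

def topo_features(D):
    """Summary features of the H0/H1 persistence diagrams of (trains, D)."""
    dgms = ripser(D, distance_matrix=True, maxdim=1)["dgms"]
    feats = []
    for dgm in dgms:
        finite = np.isfinite(dgm[:, 1])
        life = dgm[finite, 1] - dgm[finite, 0]     # persistence lifetimes
        feats += [float(len(life)), float(life.sum()),
                  float(life.max(initial=0.0))]
    return feats

# two toy "regimes" distinguished only by firing rate; real regimes would
# come from simulated network dynamics, as in the paper
X = [topo_features(distance_matrix(spike_trains(rate)))
     for rate in [2.0] * 20 + [8.0] * 20]
y = [0] * 20 + [1] * 20

clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```

Concatenating the features from several distance matrices, one per spike-train
metric, is the natural way to combine multiple distances as the abstract
suggests.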