    The role of symmetry in neural networks and their Laplacian spectra

    Human and animal nervous systems constitute complexly wired networks that form the infrastructure for neural processing and integration of information. The organization of these neural networks can be analyzed using the so-called Laplacian spectrum, providing a mathematical tool to produce systems-level network fingerprints. In this article, we examine a characteristic central peak in the spectrum of neural networks, including anatomical brain network maps of the mouse, cat, and macaque, as well as anatomical and functional network maps of human brain connectivity. We link the occurrence of this central peak to the level of symmetry in neural networks, an intriguing aspect of network organization resulting from network elements that exhibit similar wiring patterns. Specifically, we propose a measure to capture the global level of symmetry of a network and show that, for both empirical networks and network models, the height of the main peak in the Laplacian spectrum is strongly related to node symmetry in the underlying network. Moreover, examination of spectra of duplication-based model networks shows that neural spectra are best approximated using a trade-off between duplication and diversification. Taken together, our results facilitate a better understanding of neural network spectra and the importance of symmetry in neural networks.
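
    A minimal sketch of the mechanism behind this central peak (Python with NumPy and NetworkX; the duplication step and all names here are illustrative, not the authors' code): duplicating a node's wiring adds an eigenvalue exactly at 1 to the normalized Laplacian spectrum, so networks with many similarly wired nodes pile eigenvalues up at the center.

        import numpy as np
        import networkx as nx

        def normalized_laplacian_spectrum(G):
            # Eigenvalues of L = I - D^{-1/2} A D^{-1/2}.
            L = nx.normalized_laplacian_matrix(G).toarray()
            return np.linalg.eigvalsh(L)

        # Copy the wiring of 50 nodes: each duplicate shares every
        # neighbor of its template but is not connected to it.
        G = nx.erdos_renyi_graph(200, 0.05, seed=0)
        H = G.copy()
        for u in list(G.nodes())[:50]:
            v = H.number_of_nodes()
            H.add_edges_from((v, w) for w in G.neighbors(u))

        for graph, name in [(G, "original"), (H, "after duplication")]:
            eigs = normalized_laplacian_spectrum(graph)
            frac = np.mean(np.isclose(eigs, 1.0, atol=1e-6))
            print(f"{name}: fraction of eigenvalues at 1 = {frac:.3f}")

    Each duplicated pair contributes an eigenvector supported on the pair alone with eigenvalue exactly 1, so the printed fraction, and hence the height of the central peak, grows with the amount of duplication.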

    Geometric deep learning: going beyond Euclidean data

    Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems in computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into the networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to provide an overview of different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
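
    As a concrete instance of such a generalization, the sketch below (Python/NumPy, purely illustrative and not from the paper) implements one widely used graph neural network layer, the Kipf-Welling graph convolution, which replaces the grid convolution of Euclidean deep learning with a degree-normalized neighborhood average followed by a learned linear map.

        import numpy as np

        def gcn_layer(A, H, W):
            # One propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
            A_hat = A + np.eye(A.shape[0])                 # add self-loops
            d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
            return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

        rng = np.random.default_rng(0)
        A = np.array([[0, 1, 0, 0],                        # 4-node path graph
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        H = rng.standard_normal((4, 3))                    # node features
        W = rng.standard_normal((3, 2))                    # learned weights
        print(gcn_layer(A, H, W).shape)                    # (4, 2)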

    Surface Networks

    We study data-driven representations for three-dimensional triangle meshes, one of the prevalent objects used to represent 3D geometry. Recent works have developed models that exploit the intrinsic geometry of manifolds and graphs, namely Graph Neural Networks (GNNs) and their spectral variants, which learn from the local metric tensor via the Laplacian operator. Despite offering excellent sample complexity and built-in invariances, intrinsic geometry alone is invariant to isometric deformations, making it unsuitable for many applications. To overcome this limitation, we propose several upgrades to GNNs that leverage extrinsic differential geometry properties of three-dimensional surfaces, increasing their modeling power. In particular, we propose to exploit the Dirac operator, whose spectrum detects principal curvature directions; this is in stark contrast with the classical Laplace operator, which directly measures mean curvature. We coin the resulting models Surface Networks (SN). We prove that these models define shape representations that are stable to deformation and to discretization, and we demonstrate the efficiency and versatility of SNs on two challenging tasks: temporal prediction of mesh deformations under non-linear dynamics and generative modeling using a variational autoencoder framework with encoders/decoders given by SNs.
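
    To make the Laplacian side of that contrast concrete, the following sketch (Python/NumPy; a standard cotangent discretization written for illustration, not the authors' code) assembles the mesh Laplace operator. Applied to the vertex positions, it approximates the mean-curvature normal at each vertex, up to an area weighting, which is the sense in which the Laplacian measures mean curvature.

        import numpy as np

        def cotangent_laplacian(V, F):
            # Dense cotangent Laplacian of a triangle mesh.
            # V: (n, 3) vertex positions, F: (m, 3) triangle indices.
            n = V.shape[0]
            L = np.zeros((n, n))
            for tri in F:
                for k in range(3):
                    i, j, opp = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
                    e1, e2 = V[i] - V[opp], V[j] - V[opp]
                    cot = e1 @ e2 / np.linalg.norm(np.cross(e1, e2))
                    L[i, j] -= 0.5 * cot                   # off-diagonal weight
                    L[j, i] -= 0.5 * cot
                    L[i, i] += 0.5 * cot                   # rows sum to zero
                    L[j, j] += 0.5 * cot
            return L

        # Toy mesh: a tetrahedron; rows of L @ V point along the
        # (unnormalized) mean-curvature normals at its vertices.
        V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
        F = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
        print(cotangent_laplacian(V, F) @ V)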

    Bayesian Semi-supervised Learning with Graph Gaussian Processes

    We propose a data-efficient Gaussian process-based Bayesian approach to the semi-supervised learning problem on graphs. The proposed model shows extremely competitive performance when compared to state-of-the-art graph neural networks on semi-supervised learning benchmark experiments, and outperforms the neural networks in active learning experiments where labels are scarce. Furthermore, the model does not require a validation data set for early stopping to control over-fitting. Our model can be viewed as an instance of empirical distribution regression weighted locally by network connectivity. We further motivate the intuitive construction of the model with a Bayesian linear model interpretation in which the node features are filtered by an operator related to the graph Laplacian. The method can be easily implemented by adapting off-the-shelf scalable variational inference algorithms for Gaussian processes. (Comment: to appear in NIPS 2018; this version fixes an error in Figure 2, whose two sub-figures were identical in the previous arXiv version.)
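
    A minimal sketch of that Bayesian linear-model view (Python/NumPy; the specific filter P = (I + L)^{-1} and all names below are assumptions for illustration, not the authors' implementation): smooth the node features with a low-pass graph filter built from the normalized Laplacian, use the resulting linear kernel in standard GP regression, and read off the posterior mean at every node.

        import numpy as np

        def graph_gp_posterior_mean(A, X, y_obs, obs_idx, noise=0.1):
            # Kernel K = (P X)(P X)^T with low-pass filter P = (I + L)^{-1},
            # where L is the symmetric normalized graph Laplacian.
            n = A.shape[0]
            d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
            L = np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt
            Phi = np.linalg.inv(np.eye(n) + L) @ X         # filtered features
            K = Phi @ Phi.T
            K_oo = K[np.ix_(obs_idx, obs_idx)] + noise**2 * np.eye(len(obs_idx))
            return K[:, obs_idx] @ np.linalg.solve(K_oo, y_obs)

        # Toy usage: a 5-node ring with one-hot features and two labeled nodes.
        A = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
        X = np.eye(5)
        print(graph_gp_posterior_mean(A, X, np.array([1.0, -1.0]), [0, 2]))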