
    Geometric deep learning and equivariant neural networks

    We survey the mathematical foundations of geometric deep learning, focusing on group equivariant and gauge equivariant neural networks. We develop gauge equivariant convolutional neural networks on arbitrary manifolds M using principal bundles with structure group K and equivariant maps between sections of associated vector bundles. We also discuss group equivariant neural networks for homogeneous spaces M = G/K, which are instead equivariant with respect to the global symmetry G on M. Group equivariant layers can be interpreted as intertwiners between induced representations of G, and we show their relation to gauge equivariant convolutional layers. We analyze several applications of this formalism, including semantic segmentation and object detection networks. We also discuss the case of spherical networks in great detail, corresponding to the case M = S² = SO(3)/SO(2). Here we emphasize the use of Fourier analysis involving Wigner matrices, spherical harmonics and Clebsch–Gordan coefficients for G = SO(3), illustrating the power of representation theory for deep learning.
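    To spell out the two central notions here: a layer Φ is equivariant if it intertwines the input and output group actions, and on the sphere this action becomes block-diagonal in the spherical Fourier domain. The equations below are a minimal illustration added for this listing, not taken from the paper; sign and normalization conventions for the Wigner matrices D^ℓ vary across references.

        % Equivariance of a layer \Phi between feature spaces carrying
        % representations \rho_in and \rho_out of the symmetry group G:
        \Phi\bigl(\rho_{\mathrm{in}}(g)\, f\bigr)
            = \rho_{\mathrm{out}}(g)\, \Phi(f)
            \qquad \text{for all } g \in G.

        % Fourier expansion of a scalar feature map on S^2 = SO(3)/SO(2)
        % in spherical harmonics Y_{\ell m}:
        f(x) = \sum_{\ell \ge 0} \sum_{m = -\ell}^{\ell}
               \hat{f}_{\ell m}\, Y_{\ell m}(x).

        % A rotation R \in SO(3) acts block-diagonally on the coefficients
        % through the Wigner matrices D^\ell (up to convention):
        \widehat{(R \cdot f)}_{\ell m}
            = \sum_{m' = -\ell}^{\ell} D^{\ell}_{m m'}(R)\, \hat{f}_{\ell m'}.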

    A General Theory of Equivariant CNNs on Homogeneous Spaces

    We present a general theory of group equivariant convolutional neural networks (G-CNNs) on homogeneous spaces such as Euclidean space and the sphere. Feature maps in these networks represent fields on a homogeneous base space, and layers are equivariant maps between spaces of fields. The theory enables a systematic classification of all existing G-CNNs in terms of their symmetry group, base space, and field type. We also consider a fundamental question: what is the most general kind of equivariant linear map between feature spaces (fields) of given types? Following Mackey, we show that such maps correspond one-to-one with convolutions using equivariant kernels, and characterize the space of such kernels.
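    To make the correspondence between equivariant linear maps and convolutions concrete, here is a minimal NumPy sketch added for this listing (not code from the paper) for the simplest case, the cyclic group Z_n acting on itself: a group convolution commutes with the regular action.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 12                                # order of the cyclic group Z_n

        def group_conv(f, psi):
            # (f * psi)[g] = sum_h f[h] * psi[(g - h) mod n]:
            # convolution of a signal on Z_n with a kernel on Z_n
            return np.array([sum(f[h] * psi[(g - h) % n] for h in range(n))
                             for g in range(n)])

        f = rng.normal(size=n)                # a feature map on the group
        psi = rng.normal(size=n)              # a (hypothetical) learned kernel

        s = 5                                 # act on f by the group element s
        lhs = group_conv(np.roll(f, s), psi)  # convolve the translated signal
        rhs = np.roll(group_conv(f, psi), s)  # translate the convolved signal
        assert np.allclose(lhs, rhs)          # equivariance: the two agree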

    Mathematical Foundations of Equivariant Neural Networks

    Deep learning has revolutionized industry and academic research. Over the past decade, neural networks have been used to solve a multitude of previously unsolved problems and to significantly improve the state of the art on other tasks. However, training a neural network typically requires large amounts of data and computational resources. This is not only costly, it also prevents deep learning from being used for applications in which data is scarce. It is therefore important to simplify the learning task by incorporating inductive biases - prior knowledge and assumptions - into the neural network design.

    Geometric deep learning aims to reduce the amount of information that neural networks have to learn, by taking advantage of geometric properties in data. In particular, equivariant neural networks use symmetries to reduce the complexity of a learning task. Symmetries are properties that do not change under certain transformations. For example, rotation-equivariant neural networks trained to identify tumors in medical images are not sensitive to the orientation of a tumor within an image. Another example is graph neural networks, i.e., permutation-equivariant neural networks that operate on graphs, such as molecules or social networks. Permuting the ordering of vertices and edges either transforms the output of a graph neural network in a predictable way (equivariance), or has no effect on the output (invariance).

    In this thesis we study a fiber bundle theoretic framework for equivariant neural networks. Fiber bundles are often used in mathematics and theoretical physics to model nontrivial geometries, and offer a geometric approach to symmetry. This framework connects to many different areas of mathematics, including Fourier analysis, representation theory, and gauge theory, thus providing a large set of tools for analyzing equivariant neural networks.
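    The permutation equivariance described above can be checked directly for a basic message-passing layer H ↦ tanh(AHW). The NumPy sketch below is an illustration added for this listing, not code from the thesis, and the specific layer form is an assumption; it permutes the graph before applying the layer and verifies this matches permuting the output.

        import numpy as np

        rng = np.random.default_rng(1)
        n, d_in, d_out = 6, 4, 3              # nodes, input and output feature dims
        A = rng.integers(0, 2, size=(n, n))
        A = np.triu(A, 1)
        A = A + A.T                           # random symmetric adjacency matrix
        H = rng.normal(size=(n, d_in))        # node features
        W = rng.normal(size=(d_in, d_out))    # (hypothetical) learned weights

        def layer(A, H, W):
            # aggregate neighbor features, transform, apply a nonlinearity
            return np.tanh(A @ H @ W)

        P = np.eye(n)[rng.permutation(n)]     # a random permutation matrix
        lhs = layer(P @ A @ P.T, P @ H, W)    # permute the graph, then apply the layer
        rhs = P @ layer(A, H, W)              # apply the layer, then permute the output
        assert np.allclose(lhs, rhs)          # permutation equivariance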

    General E(2)-Equivariant Steerable CNNs
