
    Model-Centric Data Manifold: The Data Through the Eyes of the Model

    We show that deep ReLU neural network classifiers can see a low-dimensional Riemannian manifold structure on data. This structure comes via the "local data matrix", a variation of the Fisher information matrix in which the role of the model parameters is taken by the data variables. We obtain a foliation of the data domain, and we show that the dataset on which the model is trained lies on a leaf, the "data leaf", whose dimension is bounded by the number of classification labels. We validate our results with experiments on the MNIST dataset: paths on the data leaf connect valid images, while other leaves cover noisy images.
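    As a concrete illustration (not the paper's implementation), the local data matrix of a toy ReLU classifier can be sketched as the Fisher-type form G(x) = sum_k p_k(x) grad_x log p_k(x) grad_x log p_k(x)^T, whose rank, and hence the data-leaf dimension, is bounded by the number of labels. All weights and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ReLU classifier: 5-D input, 8 hidden units, 3 classes
# (random weights, purely for illustration).
W1, b1 = rng.normal(size=(8, 5)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

def forward(x):
    pre = W1 @ x + b1
    h = np.maximum(pre, 0.0)                 # ReLU
    z = W2 @ h + b2
    p = np.exp(z - z.max()); p /= p.sum()    # softmax probabilities
    return p, pre

def local_data_matrix(x):
    """Fisher-style matrix in the *data* variables:
    G(x) = sum_k p_k(x) grad_x log p_k(x) grad_x log p_k(x)^T."""
    p, pre = forward(x)
    Jz = W2 @ np.diag((pre > 0).astype(float)) @ W1  # d z / d x, shape (3, 5)
    Jlogp = Jz - p @ Jz                              # rows: grad_x log p_k
    return Jlogp.T @ np.diag(p) @ Jlogp              # 5 x 5 PSD matrix

G = local_data_matrix(rng.normal(size=5))
evals = np.linalg.eigvalsh(G)
rank = int((evals > 1e-10 * evals.max()).sum())
print(rank)  # generically n_classes - 1 = 2, matching the leaf-dimension bound
```

    Since sum_k p_k grad_x log p_k = 0, the rank of G(x) is at most (number of classes - 1), which is the mechanism behind the dimension bound on the data leaf.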

    Challenges and opportunities in machine learning for geometry

    Over the past few decades, the mathematical community has accumulated a significant amount of pure mathematical data, which has been analyzed with supervised, semi-supervised, and unsupervised machine learning techniques (e.g., artificial neural networks, support vector machines, and principal component analysis) with remarkable results. We therefore consider the use of machine learning algorithms to study mathematical structures, enabling the formulation of conjectures via numerical algorithms, to be disruptive. In this paper, we review the latest applications of machine learning in the field of geometry. Artificial intelligence can help in mathematical problem solving, and we predict a blossoming of machine learning applications in geometry over the coming years. As a contribution, we propose a new method for extracting geometric information from a point cloud and reconstructing a 2D or 3D model, based on the novel concept of generalized asymptotes.

    Horizontal Flows and Manifold Stochastics in Geometric Deep Learning

    We introduce two constructions in geometric deep learning for 1) transporting orientation-dependent convolutional filters over a manifold in a continuous way, thereby defining a convolution operator that naturally incorporates the rotational effect of holonomy; and 2) allowing efficient evaluation of manifold convolution layers by sampling manifold-valued random variables that center around a weighted diffusion mean. Both methods are inspired by stochastics on manifolds and geometric statistics, and provide examples of how stochastic methods -- here horizontal frame bundle flows and non-linear bridge sampling schemes -- can be used in geometric deep learning. We outline the theoretical foundation of the two methods, discuss their relation to Euclidean deep networks and existing methodology in geometric deep learning, and establish important properties of the proposed constructions.
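    The weighted diffusion mean generalizes the classical weighted Fréchet mean of geometric statistics. As a rough sketch of the kind of manifold statistic involved (not the paper's bridge-sampling scheme), here is an iterated tangent-space averaging scheme for a weighted Fréchet mean on the sphere S^2; the example points and weights are illustrative assumptions.

```python
import numpy as np

def sphere_log(p, q):
    """Log map on S^2: tangent vector at p pointing toward q."""
    d = np.arccos(np.clip(p @ q, -1.0, 1.0))
    if d < 1e-12:
        return np.zeros(3)
    v = q - (p @ q) * p                       # project q onto tangent space at p
    return d * v / np.linalg.norm(v)

def sphere_exp(p, v):
    """Exp map on S^2: follow geodesic from p with initial velocity v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return p
    return np.cos(n) * p + np.sin(n) * v / n

def frechet_mean(points, weights, iters=50):
    """Weighted Fréchet mean by iterated tangent averaging --
    the classical stand-in the diffusion mean generalizes."""
    w = weights / weights.sum()
    m = points[0]
    for _ in range(iters):
        step = sum(wi * sphere_log(m, p) for wi, p in zip(w, points))
        m = sphere_exp(m, step)
    return m

# Illustrative setup: three points placed symmetrically about the north pole.
t = 0.5
angles = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
pts = [np.array([np.sin(t) * np.cos(a), np.sin(t) * np.sin(a), np.cos(t)])
       for a in angles]
mean = frechet_mean(pts, np.ones(3))  # converges to the north pole by symmetry
```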

    Coordinate Independent Convolutional Networks -- Isometry and Gauge Equivariant Convolutions on Riemannian Manifolds

    Motivated by the vast success of deep convolutional networks, there is a great interest in generalizing convolutions to non-Euclidean manifolds. A major complication in comparison to flat spaces is that it is unclear in which alignment a convolution kernel should be applied on a manifold. The underlying reason for this ambiguity is that general manifolds do not come with a canonical choice of reference frames (gauge). Kernels and features therefore have to be expressed relative to arbitrary coordinates. We argue that the particular choice of coordinatization should not affect a network's inference -- it should be coordinate independent. A simultaneous demand for coordinate independence and weight sharing is shown to result in a requirement on the network to be equivariant under local gauge transformations (changes of local reference frames). The ambiguity of reference frames depends thereby on the G-structure of the manifold, such that the necessary level of gauge equivariance is prescribed by the corresponding structure group G. Coordinate independent convolutions are proven to be equivariant w.r.t. those isometries that are symmetries of the G-structure. The resulting theory is formulated in a coordinate free fashion in terms of fiber bundles. To exemplify the design of coordinate independent convolutions, we implement a convolutional network on the Möbius strip. The generality of our differential geometric formulation of convolutional networks is demonstrated by an extensive literature review which explains a large number of Euclidean CNNs, spherical CNNs and CNNs on general surfaces as specific instances of coordinate independent convolutions.
    Comment: The implementation of orientation independent Möbius convolutions is publicly available at https://github.com/mauriceweiler/MobiusCNN
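    The claim that Euclidean CNNs arise as specific instances of coordinate independent convolutions can be illustrated in the simplest flat setting: on a discretized circle with trivial holonomy, coordinate independence with weight sharing reduces to ordinary translation equivariance of a shared kernel. A minimal sketch (not the paper's Möbius-strip implementation; signal and kernel sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def circ_conv(f, k):
    """Circular cross-correlation on Z_n: the flat, trivial-holonomy
    special case of a coordinate independent convolution."""
    n = len(f)
    return np.array([sum(k[j] * f[(i + j) % n] for j in range(len(k)))
                     for i in range(n)])

f = rng.normal(size=12)   # feature field on a discretized circle
k = rng.normal(size=3)    # shared kernel (weight sharing)

# Equivariance under the symmetry group (here: discrete translations):
# convolving a shifted signal equals shifting the convolved signal.
lhs = circ_conv(np.roll(f, 4), k)
rhs = np.roll(circ_conv(f, k), 4)
assert np.allclose(lhs, rhs)
```

    On a curved manifold the shift `np.roll` is replaced by an isometry that is a symmetry of the G-structure, and the shared kernel must additionally be constrained under the structure group G.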