
    Regression-Based Elastic Metric Learning on Shape Spaces of Elastic Curves

    We propose a metric learning paradigm, Regression-based Elastic Metric Learning (REML), which optimizes the elastic metric for geodesic regression on the manifold of discrete curves. Geodesic regression is most accurate when the chosen metric renders the data trajectory close to a geodesic on the discrete curve manifold. When tested on cell shape trajectories, regression with REML's learned metric has better predictive power than with the conventionally used square-root-velocity (SRV) metric. Comment: 4 pages, 2 figures, derivations in appendix.
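
    The SRV baseline the abstract compares against has a simple closed form: a curve is represented by its square-root velocity q(t) = c'(t) / sqrt(|c'(t)|), and the SRV distance is the L2 distance between these representations. A minimal NumPy sketch (our own illustrative code, not the paper's REML implementation; function names are ours):

```python
import numpy as np

def srv_transform(curve):
    """Square-root-velocity (SRV) representation of a discrete curve.

    curve: (n, d) array of n sample points in R^d.
    Returns the (n-1, d) array q_i = v_i / sqrt(|v_i|), where the v_i are
    finite-difference velocities.
    """
    v = np.diff(curve, axis=0)          # discrete velocities
    speed = np.linalg.norm(v, axis=1)   # |v_i|
    speed = np.maximum(speed, 1e-12)    # guard against zero-length steps
    return v / np.sqrt(speed)[:, None]

def srv_distance(curve_a, curve_b):
    """L2 distance between SRV representations."""
    return np.linalg.norm(srv_transform(curve_a) - srv_transform(curve_b))
```

    This flat L2 distance omits the reparametrization and rotation quotients used in full elastic shape analysis; libraries such as geomstats implement the complete treatment.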

    A General Framework for Robust G-Invariance in G-Equivariant Networks

    We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks (G-CNNs), which we call the G-triple-correlation (G-TC) layer. The approach leverages the theory of the triple correlation on groups, which is the unique, lowest-degree polynomial invariant map that is also complete. Many commonly used invariant maps - such as the max - are incomplete: they remove both group and signal structure. A complete invariant, by contrast, removes only the variation due to the actions of the group, while preserving all information about the structure of the signal. The completeness of the triple correlation endows the G-TC layer with strong robustness, which can be observed in its resistance to invariance-based adversarial attacks. In addition, we observe that it yields measurable improvements in classification accuracy over standard max G-pooling in G-CNN architectures. We provide a general and efficient implementation of the method for any discretized group, which requires only a table defining the group's product structure. We demonstrate the benefits of this method for G-CNNs defined on both commutative and non-commutative groups - SO(2), O(2), SO(3), and O(3) (discretized as the cyclic C8, dihedral D16, chiral octahedral O, and full octahedral O_h groups) - acting on R^2 and R^3 on both G-MNIST and G-ModelNet10 datasets.
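
    The triple correlation underlying the G-TC layer has a concrete form on any finite group: T_f(g1, g2) = sum_g f(g) f(g g1) f(g g2). A minimal sketch (our own illustrative code, assuming only the product table the abstract mentions) shows the construction and its invariance to group shifts:

```python
import numpy as np

def triple_correlation(f, cayley):
    """Triple correlation of a signal f over a finite group.

    f: (n,) array, one value per group element (indexed 0..n-1).
    cayley: (n, n) integer array, cayley[a, b] = index of the product a*b.
    Returns the (n, n) array T[g1, g2] = sum_g f(g) * f(g*g1) * f(g*g2).
    """
    n = len(f)
    T = np.zeros((n, n))
    for g1 in range(n):
        for g2 in range(n):
            T[g1, g2] = sum(f[g] * f[cayley[g, g1]] * f[cayley[g, g2]]
                            for g in range(n))
    return T

# Cayley table of the cyclic group C8: the product is addition mod 8.
n = 8
cayley_c8 = (np.arange(n)[:, None] + np.arange(n)[None, :]) % n
```

    Translating the signal by any group element leaves T unchanged (the invariance property), while T still determines f up to that translation (completeness).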

    Defining a mean on Lie group

    This master's thesis explores the properties of three different definitions of the mean on a Lie group: the Riemannian center of mass, the Riemannian exponential barycenter, and the group exponential barycenter.
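
    For intuition, the Riemannian center of mass can be computed by the classical fixed-point iteration: map the samples to the tangent space at the current estimate (log map), average there, and map back (exp map). A sketch on SO(3) using SciPy's Rotation class (illustrative code, not the thesis's own implementation):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def frechet_mean_so3(rotations, n_iter=50, tol=1e-10):
    """Riemannian center of mass of a set of rotations in SO(3).

    rotations: scipy Rotation instance holding a stack of samples.
    Iterates: m <- m * exp(mean_i(log(m^{-1} * r_i))).
    """
    mean = rotations[0]  # initial guess: first sample
    for _ in range(n_iter):
        # log map at `mean`: rotation vectors of mean^{-1} * r_i
        tangent = (mean.inv() * rotations).as_rotvec()
        step = tangent.mean(axis=0)
        if np.linalg.norm(step) < tol:
            break
        mean = mean * Rotation.from_rotvec(step)
    return mean
```

    On a compact group such as SO(3) this iteration converges for sufficiently concentrated samples; the group exponential barycenter replaces the Riemannian log/exp with the group log/exp.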

    Architectures of Topological Deep Learning: A Survey on Topological Neural Networks

    The natural world is full of complex systems characterized by intricate relations between their components: from social interactions between individuals in a social network to electrostatic interactions between atoms in a protein. Topological Deep Learning (TDL) provides a comprehensive framework to process and extract knowledge from data associated with these systems, such as predicting the social community to which an individual belongs or predicting whether a protein can be a reasonable target for drug development. TDL has demonstrated theoretical and practical advantages that hold the promise of breaking new ground in the applied sciences and beyond. However, the rapid growth of the TDL literature has also led to a lack of unification in notation and language across Topological Neural Network (TNN) architectures. This presents a real obstacle to building upon existing works and to deploying TNNs on new real-world problems. To address this issue, we provide an accessible introduction to TDL and compare the recently published TNNs using a unified mathematical and graphical notation. Through an intuitive and critical review of the emerging field of TDL, we extract valuable insights into current challenges and exciting opportunities for future development.

    A survey of mathematical structures for extending 2D neurogeometry to 3D image processing

    In the era of big data, one may apply generic learning algorithms for medical computer vision. But such algorithms are often "black boxes" and, as such, hard to interpret. We still need new constructive models, which could eventually feed the big data framework. Where can one find inspiration for new models in medical computer vision? The emerging field of Neurogeometry provides innovative ideas. Neurogeometry models the visual cortex through modern Differential Geometry: the neuronal architecture is represented as a sub-Riemannian manifold R^2 x S^1. On the one hand, Neurogeometry explains visual phenomena like human perceptual completion. On the other hand, it provides efficient algorithms for computer vision. Examples of applications are image completion (in-painting) and crossing-preserving smoothing. In medical image computer vision, Neurogeometry is less known, although some algorithms exist. One reason is that one often deals with 3D images, whereas Neurogeometry is essentially 2D (our retina is 2D). Moreover, the generalization of (2D) Neurogeometry to 3D is not straightforward from the mathematical point of view. This article presents the theoretical framework of a 3D-Neurogeometry inspired by the 2D case. We survey the mathematical structures and a standard frame for algorithms in 3D-Neurogeometry. The aim of the paper is to provide a "theoretical toolbox" and inspiration for new algorithms in 3D medical computer vision.
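
    The lift to the sub-Riemannian space R^2 x S^1 can be illustrated very simply: each image point is assigned a local orientation, turning a 2D image into a 3D volume indexed by position and angle. The sketch below (our own crude stand-in for the orientation-score and cake-wavelet transforms used in the neurogeometry literature) bins gradient orientations per pixel:

```python
import numpy as np

def orientation_lift(image, n_angles=8):
    """Lift a 2D image to an (x, y, theta) orientation volume.

    Each pixel's gradient orientation (mod pi) is binned into one of
    n_angles sectors; the gradient magnitude is stored at that angle.
    """
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    theta = np.mod(np.arctan2(gy, gx), np.pi)          # orientation mod pi
    bins = np.minimum((theta / np.pi * n_angles).astype(int), n_angles - 1)
    lifted = np.zeros(image.shape + (n_angles,))
    ii, jj = np.indices(image.shape)
    lifted[ii, jj, bins] = magnitude                   # place mass at (x, y, theta)
    return lifted
```

    Processing (smoothing, completion) is then performed in the lifted volume, where crossing structures that overlap in 2D are separated by their angle coordinate.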

    Statistics on Lie groups : a need to go beyond the pseudo-Riemannian framework

    Lie groups appear in many fields, from Medical Imaging to Robotics. In Medical Imaging, and particularly in Computational Anatomy, an organ's shape is often modeled as the deformation of a reference shape, in other words: as an element of a Lie group. In this framework, if one wants to model the variability of the human anatomy, e.g. in order to help diagnosis of diseases, one needs to perform statistics on Lie groups. A Lie group G is a manifold that carries an additional group structure. Statistics on Riemannian manifolds have been well studied with the pioneering work of Fréchet, Karcher and Kendall [1, 2, 3, 4], followed by others [5, 6, 7, 8, 9]. In order to use such a Riemannian structure for statistics on Lie groups, one needs to define a Riemannian metric that is compatible with the group structure, i.e. a bi-invariant metric. However, it is well known that general Lie groups which cannot be decomposed into the direct product of compact and abelian groups do not admit a bi-invariant metric. One may wonder if removing the positivity of the metric, thus asking only for a bi-invariant pseudo-Riemannian metric, would be sufficient for most of the groups used in Computational Anatomy. In this paper, we provide an algorithmic procedure that constructs bi-invariant pseudo-metrics on a given Lie group G. The procedure relies on a classification theorem of Medina and Revoy. However, in doing so, we prove that most Lie groups do not admit any bi-invariant (pseudo-)metric. We conclude that the (pseudo-)Riemannian setting is not the richest setting if one wants to perform statistics on Lie groups. One may have to rely on another framework, such as affine connection spaces.
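
    The compatibility condition at the heart of this entry is checkable by linear algebra: an inner product on the Lie algebra extends to a bi-invariant (pseudo-)metric on the group iff it is ad-invariant, i.e. <[x,y],z> + <y,[x,z]> = 0 for all x, y, z. A sketch (illustrative code, not the paper's algorithmic procedure) verifying this for so(3), where the Euclidean inner product works, and for the Heisenberg algebra, where it fails:

```python
import numpy as np

def is_ad_invariant(structure, metric, tol=1e-10):
    """Check whether an inner product on a Lie algebra is ad-invariant.

    structure: (n, n, n) array C with [e_i, e_j] = sum_k C[i,j,k] e_k.
    metric: (n, n) symmetric matrix of inner products <e_i, e_j>.
    Tests <[e_i,e_j], e_k> + <e_j, [e_i,e_k]> = 0 for all i, j, k.
    """
    # <[e_i, e_j], e_k> = sum_l C[i,j,l] metric[l,k]
    left = np.einsum('ijl,lk->ijk', structure, metric)
    # <e_j, [e_i, e_k]> = sum_l C[i,k,l] metric[j,l]
    right = np.einsum('ikl,jl->ijk', structure, metric)
    return np.abs(left + right).max() < tol

# so(3): [e_i, e_j] = eps_{ijk} e_k (Levi-Civita structure constants).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# Heisenberg algebra: only nonzero bracket is [e_0, e_1] = e_2.
heis = np.zeros((3, 3, 3))
heis[0, 1, 2], heis[1, 0, 2] = 1.0, -1.0
```

    The failure on the Heisenberg algebra (a nilpotent, non-abelian group) is an instance of the general obstruction the abstract describes: no choice of positive-definite inner product makes the condition hold.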