
    Using Gromov-Wasserstein distance to explore sets of networks

    In many fields, such as the social sciences or biology, relations between data or variables are presented as networks. To compare these networks, a meaningful notion of distance between them is highly desirable. The aim of this Master's thesis is to study, implement, and apply a Gromov-Wasserstein-type distance introduced by F. Mémoli (2011) in his paper "Gromov-Wasserstein Distances and the Metric Approach to Object Matching" to sets of complex networks. Building on the theoretical underpinnings of that paper, we represent several real-world networks as metric measure spaces and compare them on the basis of the Gromov-Wasserstein distance.
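    A minimal sketch of the mm-space representation described above, assuming the networkx and POT (Python Optimal Transport) libraries: each network is turned into a metric measure space via shortest-path distances and a uniform node measure, and two such spaces are compared with the Gromov-Wasserstein discrepancy. The example graphs are placeholders, not the thesis's datasets.

```python
# Sketch: compare two networks as metric measure spaces via the
# Gromov-Wasserstein discrepancy. Assumes networkx and POT are installed;
# the example graphs stand in for real-world networks.
import networkx as nx
import numpy as np
import ot  # POT: Python Optimal Transport

def as_metric_measure_space(G):
    """Shortest-path distance matrix plus a uniform probability measure on nodes."""
    D = np.asarray(nx.floyd_warshall_numpy(G), dtype=float)
    p = np.ones(G.number_of_nodes()) / G.number_of_nodes()
    return D, p

G1 = nx.karate_club_graph()            # example social network (34 nodes)
G2 = nx.florentine_families_graph()    # example social network (15 nodes)

C1, p = as_metric_measure_space(G1)
C2, q = as_metric_measure_space(G2)

# gromov_wasserstein2 returns the GW discrepancy (squared-loss formulation);
# the spaces may have different numbers of nodes.
gw_value = ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun='square_loss')
print(f"Gromov-Wasserstein discrepancy: {gw_value:.4f}")
```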

    Learning Generative Models across Incomparable Spaces

    Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety. However, in some cases, we may want to only learn some aspects (e.g., cluster or manifold structure), while modifying others (e.g., style, orientation or dimension). In this work, we propose an approach to learn generative models across such incomparable spaces, and demonstrate how to steer the learned distribution towards target properties. A key component of our model is the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely. While this framework subsumes current generative models in identically reproducing distributions, its inherent flexibility allows application to tasks in manifold learning, relational learning and cross-domain learning. Comment: International Conference on Machine Learning (ICML).
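    To make "comparing distributions relationally rather than absolutely" concrete, here is a rough, hedged sketch (not the authors' GAN architecture or training loop): two point clouds living in spaces of different dimension are compared through the Gromov-Wasserstein discrepancy between their intra-set distance matrices, the kind of loss such a generative model could be steered by. It assumes numpy, scipy, and POT.

```python
# Rough illustration of a relational discrepancy between samples living in
# incomparable spaces (2-D vs. 3-D): only intra-set distances are compared.
# Not the paper's GAN; assumes numpy, scipy, and POT.
import numpy as np
import ot
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))   # "generated" samples in R^2
Y = rng.normal(size=(200, 3))   # "reference" samples in R^3

# Relational structure: pairwise Euclidean distance matrices within each set.
Cx = cdist(X, X)
Cy = cdist(Y, Y)
p = ot.unif(len(X))
q = ot.unif(len(Y))

# The GW discrepancy is small when the two clouds share relational structure,
# regardless of ambient dimension; a generator could minimize such a value.
gw_loss = ot.gromov.gromov_wasserstein2(Cx, Cy, p, q, loss_fun='square_loss')
print(f"relational (GW) discrepancy: {gw_loss:.4f}")
```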

    On Nonrigid Shape Similarity and Correspondence

    An important operation in geometry processing is finding the correspondences between pairs of shapes. The Gromov-Hausdorff distance, a measure of dissimilarity between metric spaces, has been found to be highly useful for nonrigid shape comparison. Here, we explore the applicability of related shape similarity measures to the problem of shape correspondence, adopting spectral-type distances. We propose to evaluate the spectral kernel distance, the spectral embedding distance, and the novel spectral quasi-conformal distance, comparing the manifolds from different viewpoints. By matching the shapes in the spectral domain, important attributes of surface structure are aligned. To test our ideas, we introduce a fully automatic framework for finding intrinsic correspondences between two shapes. The proposed method achieves state-of-the-art results on the Princeton isometric shape matching protocol applied, as usual, to the TOSCA and SCAPE benchmarks.
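    As a hedged illustration of matching in a spectral domain (a generic nearest-neighbour scheme, not the paper's pipeline or its spectral distances), the sketch below embeds each shape, represented here as a graph, with the first few eigenvectors of its graph Laplacian and pairs points across shapes by proximity in that embedding. Sign flips and repeated eigenvalues are ignored for brevity; it assumes numpy, scipy, and networkx.

```python
# Toy sketch of spectral-domain matching between two near-isometric "shapes",
# represented as graphs. Generic nearest-neighbour matching in a Laplacian
# eigenvector embedding; not the paper's method.
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def spectral_embedding(G, k=4):
    """Coordinates from the first k non-trivial Laplacian eigenvectors."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    _, vecs = np.linalg.eigh(L)          # eigenvalues sorted ascending
    return vecs[:, 1:k + 1]              # drop the constant eigenvector

G1 = nx.grid_2d_graph(10, 10)            # stand-in for shape 1
G2 = nx.grid_2d_graph(10, 10)            # stand-in for a near-isometric copy

E1 = spectral_embedding(G1)
E2 = spectral_embedding(G2)

# Correspondence: each point of shape 1 maps to its nearest neighbour of
# shape 2 in the shared spectral domain (sign/ordering ambiguities ignored).
_, matches = cKDTree(E2).query(E1)
print(matches[:10])
```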

    Gromov-Monge quasi-metrics and distance distributions

    Applications in data science, shape analysis and object classification frequently require maps between metric spaces which preserve geometry as faithfully as possible. In this paper, we combine the Monge formulation of optimal transport with the Gromov-Hausdorff distance construction to define a measure of the minimum amount of geometric distortion required to map one metric measure space onto another. We show that the resulting quantity, called Gromov-Monge distance, defines an extended quasi-metric on the space of isomorphism classes of metric measure spaces and that it can be promoted to a true metric on certain subclasses of mm-spaces. We also give precise comparisons between Gromov-Monge distance and several other metrics which have appeared previously, such as the Gromov-Wasserstein metric and the continuous Procrustes metric of Lipman, Al-Aifari and Daubechies. Finally, we derive polynomial-time computable lower bounds for Gromov-Monge distance. These lower bounds are expressed in terms of distance distributions, which are classical invariants of metric measure spaces summarizing the volume growth of metric balls. In the second half of the paper, which may be of independent interest, we study the discriminative power of these lower bounds for simple subclasses of metric measure spaces. We first consider the case of planar curves, where we give a counterexample to the Curve Histogram Conjecture of Brinkman and Olver. Our results on plane curves are then generalized to higher dimensional manifolds, where we prove some sphere characterization theorems for the distance distribution invariant. Finally, we consider several inverse problems on recovering a metric graph from a collection of localized versions of distance distributions. Results are derived by establishing connections with concepts from the fields of computational geometry and topological data analysis. Comment: Version 2: added many new results and improved exposition.
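    The distance distributions mentioned above are simple to compute. The following hedged sketch (an illustration of the invariant, not the paper's precise Gromov-Monge lower bound) builds the global distance distribution of two finite metric measure spaces with uniform measure and compares them with a one-dimensional Wasserstein distance, the kind of polynomial-time quantity the abstract alludes to. It assumes numpy and scipy.

```python
# Sketch: global distance distributions of two finite mm-spaces (point clouds
# with uniform measure), compared by a 1-D Wasserstein distance. Illustrative
# only; not the exact lower bound derived in the paper.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 2))                    # points filling a square
theta = rng.uniform(0, 2 * np.pi, size=300)
Y = np.c_[np.cos(theta), np.sin(theta)]           # points on a circle

# Distance distribution: the empirical law of the distance between two
# independent points drawn from the space's measure.
dist_X = pdist(X)
dist_Y = pdist(Y)

# A geometry-preserving map would have to match these distributions, so a
# discrepancy between them signals unavoidable geometric distortion.
print(f"1-D Wasserstein between distance distributions: "
      f"{wasserstein_distance(dist_X, dist_Y):.4f}")
```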

    SHREC'16: partial matching of deformable shapes

    Matching deformable 3D shapes under partiality transformations is a challenging problem that has received limited attention in the computer vision and graphics communities. With this benchmark, we thoroughly investigate the robustness of existing matching methods on this difficult task. Participants are asked to provide a point-to-point correspondence (either sparse or dense) between deformable shapes undergoing different kinds of partiality transformations, resulting in a total of 400 matching problems to be solved for each method - making this benchmark the biggest and most challenging of its kind. Five matching algorithms were evaluated in the contest; this paper presents the details of the dataset and the adopted evaluation measures, and shows thorough comparisons among all competing methods.
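    A common way such point-to-point correspondences are scored (a generic sketch; the benchmark's exact evaluation measures are defined in the paper) is by geodesic error: the distance on the target shape between the predicted match and the ground-truth match, summarized as the fraction of points whose error falls below a threshold. The matrix and thresholds below are toy placeholders.

```python
# Generic sketch of scoring a predicted point-to-point correspondence by
# geodesic error; not the benchmark's exact protocol. `geodesic` is a
# placeholder distance matrix on the target shape.
import numpy as np

def correspondence_accuracy(pred, gt, geodesic, thresholds):
    """Fraction of points whose predicted match lies within each geodesic
    error threshold of the ground-truth match."""
    errors = geodesic[pred, gt]                  # per-point geodesic error
    return [(errors <= t).mean() for t in thresholds]

# Toy data: 50 target vertices with Euclidean distances standing in for
# geodesic ones, and a prediction that corrupts a few matches.
rng = np.random.default_rng(2)
coords = rng.uniform(size=(50, 3))
geodesic = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

gt = np.arange(50)                               # ground-truth matches
pred = gt.copy()
pred[:5] = rng.integers(0, 50, size=5)           # corrupt five predictions

print(correspondence_accuracy(pred, gt, geodesic, thresholds=[0.05, 0.1, 0.25]))
```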

    Hybrid Gromov-Wasserstein Embedding for Capsule Learning

    Capsule networks (CapsNets) aim to parse images into a hierarchy of objects, parts, and their relations using a two-step process involving part-whole transformation and hierarchical component routing. However, this hierarchical relationship modeling is computationally expensive, which has limited the wider use of CapsNets despite their potential advantages. Current CapsNet models primarily focus on comparing their performance with capsule baselines, falling short of the proficiency of deep CNN variants on intricate tasks. To address this limitation, we present an efficient approach for learning capsules that surpasses canonical baseline models and even outperforms high-performing convolutional models. Our contribution is twofold: first, we introduce a group of subcapsules onto which an input vector is projected. Second, we present the Hybrid Gromov-Wasserstein framework, which first quantifies the dissimilarity between the input and the components modeled by the subcapsules and then determines their degree of alignment through optimal transport. This mechanism rests on a new way of defining alignment between the input and subcapsules, based on the similarity of their respective component distributions. The approach enhances CapsNets' capacity to learn from intricate, high-dimensional data while retaining their interpretability and hierarchical structure. Our proposed model offers two distinct advantages: (i) its lightweight nature facilitates the application of capsules to more intricate vision tasks, including object detection; (ii) it outperforms baseline approaches on these demanding tasks.
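    To give a flavour of "alignment through optimal transport between component distributions", here is a toy sketch under assumed shapes, projections, and prototypes; it is not the authors' Hybrid Gromov-Wasserstein framework. An input vector is split into hypothetical components, and each subcapsule's component prototypes are scored against them via an entropic optimal-transport plan, with the total transport cost serving as an (inverse) alignment score. It assumes numpy, scipy, and POT.

```python
# Toy sketch of scoring alignment between an input's components and a group
# of subcapsules via entropic optimal transport. Shapes, projections, and
# prototypes are hypothetical; this is not the paper's exact framework.
import numpy as np
import ot
from scipy.spatial.distance import cdist

rng = np.random.default_rng(3)
d, n_parts, n_caps = 16, 8, 4

x = rng.normal(size=d)                               # input vector
projections = rng.normal(size=(n_parts, d))          # hypothetical split of the input
input_parts = projections * x                        # (n_parts, d) input components

subcapsules = rng.normal(size=(n_caps, n_parts, d))  # per-capsule component prototypes

def alignment_cost(parts, prototypes, reg=0.05):
    """Entropic-OT transport cost between two sets of components (lower = better aligned)."""
    M = cdist(parts, prototypes)                     # pairwise component cost
    a = ot.unif(len(parts))
    b = ot.unif(len(prototypes))
    plan = ot.sinkhorn(a, b, M / M.max(), reg)       # entropic transport plan
    return float(np.sum(plan * M))                   # total transport cost

costs = [alignment_cost(input_parts, caps) for caps in subcapsules]
print("alignment costs per subcapsule:", np.round(costs, 4))
print("best-aligned subcapsule:", int(np.argmin(costs)))
```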