
    Learning Generative Models across Incomparable Spaces

    Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety. However, in some cases, we may want to only learn some aspects (e.g., cluster or manifold structure), while modifying others (e.g., style, orientation or dimension). In this work, we propose an approach to learn generative models across such incomparable spaces, and demonstrate how to steer the learned distribution towards target properties. A key component of our model is the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely. While this framework subsumes current generative models in identically reproducing distributions, its inherent flexibility allows application to tasks in manifold learning, relational learning and cross-domain learning.
    Comment: International Conference on Machine Learning (ICML)
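
    To make the relational comparison concrete, here is a minimal sketch (not from the paper) that computes a Gromov-Wasserstein coupling between point clouds living in spaces of different dimension, using the POT library. The data, sample sizes, and normalization are illustrative assumptions.

```python
# Minimal sketch: Gromov-Wasserstein between distributions in
# incomparable spaces (a 2-D vs. a 3-D point cloud), via POT.
import numpy as np
import ot

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))   # source samples in R^2
Y = rng.normal(size=(40, 3))   # target samples in R^3

# GW compares intra-space distance matrices; it never needs
# cross-space distances, which is why the spaces may differ.
C1 = ot.dist(X, X)
C2 = ot.dist(Y, Y)
C1 /= C1.max()
C2 /= C2.max()

p = ot.unif(len(X))  # uniform weights on samples
q = ot.unif(len(Y))

coupling, log = ot.gromov.gromov_wasserstein(
    C1, C2, p, q, loss_fun='square_loss', log=True)
print('coupling shape:', coupling.shape)        # (30, 40)
print('GW distance estimate:', log['gw_dist'])
```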

    Comparing Morse Complexes Using Optimal Transport: An Experimental Study

    Morse complexes and Morse-Smale complexes are topological descriptors popular in topology-based visualization. Comparing these complexes plays an important role in their applications in feature correspondences, feature tracking, symmetry detection, and uncertainty visualization. Leveraging recent advances in optimal transport, we apply a class of optimal transport distances to the comparative analysis of Morse complexes. Contrasting with existing comparative measures, such distances are easy and efficient to compute, and naturally provide structural matching between Morse complexes. We perform an experimental study involving scientific simulation datasets and discuss the effectiveness of these distances as comparative measures for Morse complexes. We also provide an initial guideline for choosing the optimal transport distances under various data assumptions.
    Comment: IEEE Visualization Conference (IEEE VIS) Short Paper, accepted, 2023; supplementary materials: http://www.sci.utah.edu/~beiwang/publications/GWMC_VIS_Short_BeiWang_2023_Supplement.pdf
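
    As a hedged illustration of this kind of comparison, the sketch below compares two toy graphs, stand-ins for Morse complexes with per-node scalar features, using a fused Gromov-Wasserstein distance from the POT library. The graph construction, feature choice, and alpha value are assumptions for demonstration, not the paper's actual pipeline.

```python
# Hedged sketch: comparing two toy Morse-complex-like graphs with a
# fused Gromov-Wasserstein (FGW) distance via POT. Real inputs would
# come from Morse complexes extracted from scalar field data.
import numpy as np
import ot

# Toy "complexes": weighted adjacency + a per-node feature
# (e.g., a critical-point function value).
A1 = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], float)
A2 = np.array([[0, 2, 1, 2], [2, 0, 1, 1],
               [1, 1, 0, 2], [2, 1, 2, 0]], float)
f1 = np.array([[0.1], [0.5], [0.9]])
f2 = np.array([[0.2], [0.4], [0.6], [0.8]])

C1, C2 = A1 / A1.max(), A2 / A2.max()    # intra-graph structure costs
M = ot.dist(f1, f2)                      # cross-graph feature cost
p, q = ot.unif(len(f1)), ot.unif(len(f2))

# alpha trades off feature cost (alpha -> 0) vs. structure (alpha -> 1).
fgw = ot.gromov.fused_gromov_wasserstein2(M, C1, C2, p, q,
                                          loss_fun='square_loss', alpha=0.5)
print('FGW distance:', fgw)
```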

    Learning Graphons via Structured Gromov-Wasserstein Barycenters

    We propose a novel and principled method to learn a nonparametric graph model called graphon, which is defined in an infinite-dimensional space and represents arbitrary-size graphs. Based on the weak regularity lemma from the theory of graphons, we leverage a step function to approximate a graphon. We show that the cut distance of graphons can be relaxed to the Gromov-Wasserstein distance of their step functions. Accordingly, given a set of graphs generated by an underlying graphon, we learn the corresponding step function as the Gromov-Wasserstein barycenter of the given graphs. Furthermore, we develop several enhancements and extensions of the basic algorithm, e.g., the smoothed Gromov-Wasserstein barycenter for guaranteeing the continuity of the learned graphons and the mixed Gromov-Wasserstein barycenters for learning multiple structured graphons. The proposed approach overcomes drawbacks of prior state-of-the-art methods, and outperforms them on both synthetic and real-world data. The code is available at https://github.com/HongtengXu/SGWB-Graphon.
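
    The core estimation step, before the smoothing and mixture extensions described above, can be sketched as follows, assuming POT's gromov_barycenters routine and a toy two-block graphon; the sampling helper and the resolution K are illustrative choices, not the paper's.

```python
# Hedged sketch of the core idea: estimate a K x K step-function
# graphon as the Gromov-Wasserstein barycenter of observed adjacency
# matrices, using POT.
import numpy as np
import ot

rng = np.random.default_rng(0)

def sample_graph(n, graphon):
    """Sample an n-node simple graph from a step-function graphon."""
    u = rng.uniform(size=n)                       # latent node positions
    K = graphon.shape[0]
    idx = np.minimum((u * K).astype(int), K - 1)  # block of each node
    P = graphon[np.ix_(idx, idx)]                 # edge probabilities
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    return np.triu(A, 1) + np.triu(A, 1).T        # symmetric, no loops

true_graphon = np.array([[0.8, 0.1], [0.1, 0.6]])
graphs = [sample_graph(rng.integers(20, 40), true_graphon)
          for _ in range(10)]

K = 10  # resolution of the estimated step function
ps = [ot.unif(len(A)) for A in graphs]
p = ot.unif(K)
lambdas = [1.0 / len(graphs)] * len(graphs)

est = ot.gromov.gromov_barycenters(K, graphs, ps, p, lambdas,
                                   loss_fun='square_loss', max_iter=100)
print('Estimated step function:\n', np.round(est, 2))
```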

    Distances and Isomorphism between Networks and the Stability of Network Invariants

    We develop the theoretical foundations of a network distance that has recently been applied to various subfields of topological data analysis, namely persistent homology and hierarchical clustering. While this network distance has previously appeared in the context of finite networks, we extend the setting to that of compact networks. The main challenge in this new setting is the lack of an easy notion of sampling from compact networks; we solve this problem in the process of obtaining our results. The generality of our setting means that we automatically establish results for exotic objects such as directed metric spaces and Finsler manifolds. We identify readily computable network invariants and establish their quantitative stability under this network distance. We also discuss the computational complexity involved in precisely computing this distance, and develop easily computable lower bounds by using the identified invariants. By constructing a wide range of explicit examples, we show that these lower bounds are effective in distinguishing between networks. Finally, we provide a simple algorithm that computes a lower bound on the distance between two networks in polynomial time and illustrate our metric and invariant constructions on a database of random networks and a database of simulated hippocampal networks.
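
    The invariant-based lower-bound pattern can be illustrated with a deliberately simple example: take the distribution of all pairwise weights of each network as a cheaply computable invariant, and compare the two distributions with a 1-D Wasserstein distance. This is a generic sketch of the pattern, not the paper's specific invariants, constants, or algorithm.

```python
# Illustrative sketch: a polynomial-time, invariant-based discrepancy
# between two networks (weight matrices of possibly different sizes).
# The invariant here (the multiset of all pairwise weights) and its
# 1-D Wasserstein comparison are assumptions chosen for simplicity.
import numpy as np

def weight_distribution(W):
    """Flatten a (possibly asymmetric) weight matrix into a 1-D sample."""
    return np.sort(W.flatten())

def wasserstein_1d(a, b, grid=1000):
    """1-D Wasserstein-1 distance via quantile functions."""
    qs = np.linspace(0, 1, grid)
    return np.mean(np.abs(np.quantile(a, qs) - np.quantile(b, qs)))

rng = np.random.default_rng(1)
W1 = rng.uniform(0, 1, size=(30, 30))            # dense random network
W2 = (rng.uniform(size=(50, 50)) < 0.2) * 1.0    # sparse binary network

d = wasserstein_1d(weight_distribution(W1), weight_distribution(W2))
print('invariant-based discrepancy (cheap proxy):', d)
```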