Balanced Coarsening for Multilevel Hypergraph Partitioning via Wasserstein Discrepancy
We propose a balanced coarsening scheme for multilevel hypergraph
partitioning, together with an initial partitioning algorithm designed to
improve the quality of k-way hypergraph partitioning. By assigning vertex
weights through the LPT algorithm, we generate a prior hypergraph under a
relaxed balance constraint. With this prior hypergraph, we define a
Wasserstein discrepancy to coordinate the optimal transport of the coarsening
process; the optimal transport matrix is solved by the Sinkhorn algorithm. Our
coarsening scheme fully takes into account the minimization of the
connectivity metric (objective function). For the initial partitioning stage,
we define a normalized cut function induced by the Fiedler vector, which is
theoretically proved to be concave. A three-point algorithm is thereby
designed to find the best cut under the balance constraint.
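The Sinkhorn algorithm referenced above can be sketched in a few lines. The following is a generic entropy-regularized optimal transport solver, not the paper's implementation; the cost matrix, marginals, and regularization strength are purely illustrative:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropy-regularized OT: returns a transport plan P whose row sums
    match a and whose column sums match b, approximately minimizing
    <P, C> - eps * H(P)."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)         # scale columns to match b
        u = a / (K @ v)           # scale rows to match a
    return u[:, None] * K * v[None, :]

# Toy example: transport between two uniform 3-point distributions
# with an absolute-difference ground cost.
a = np.full(3, 1 / 3)
b = np.full(3, 1 / 3)
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
P = sinkhorn(a, b, C)
```

In the coarsening setting described above, the marginals would encode vertex weights and the cost would encode the discrepancy being coordinated; those specifics are not reproduced here.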
Comparing Morse Complexes Using Optimal Transport: An Experimental Study
Morse complexes and Morse-Smale complexes are topological descriptors popular
in topology-based visualization. Comparing these complexes plays an important
role in their applications in feature correspondences, feature tracking,
symmetry detection, and uncertainty visualization. Leveraging recent advances
in optimal transport, we apply a class of optimal transport distances to the
comparative analysis of Morse complexes. Contrasting with existing comparative
measures, such distances are easy and efficient to compute, and naturally
provide structural matching between Morse complexes. We perform an experimental
study involving scientific simulation datasets and discuss the effectiveness of
these distances as comparative measures for Morse complexes. We also provide an
initial guideline for choosing the optimal transport distances under various
data assumptions.
Comment: IEEE Visualization Conference (IEEE VIS) Short Paper, accepted, 2023;
supplementary materials:
http://www.sci.utah.edu/~beiwang/publications/GWMC_VIS_Short_BeiWang_2023_Supplement.pd
Image-to-Image Retrieval by Learning Similarity between Scene Graphs
As a scene graph compactly summarizes the high-level content of an image in a
structured and symbolic manner, the similarity between scene graphs of two
images reflects the relevance of their contents. Based on this idea, we propose
a novel approach for image-to-image retrieval using scene graph similarity
measured by graph neural networks. In our approach, graph neural networks are
trained to predict the proxy image relevance measure, computed from
human-annotated captions using a pre-trained sentence similarity model. We
collect and publish the dataset for image relevance measured by human
annotators to evaluate retrieval algorithms. The collected dataset shows that
our method agrees better with human perception of image similarity than other
competitive baselines.
Comment: Accepted to AAAI 202
Hybrid Gromov-Wasserstein Embedding for Capsule Learning
Capsule networks (CapsNets) aim to parse images into a hierarchy of objects,
parts, and their relations using a two-step process involving part-whole
transformation and hierarchical component routing. However, this hierarchical
relationship modeling is computationally expensive, which has limited the wider
use of CapsNet despite its potential advantages. The current state of CapsNet
models primarily focuses on comparing their performance with capsule baselines,
falling short of achieving the same level of proficiency as deep CNN variants
in intricate tasks. To address this limitation, we present an efficient
approach for learning capsules that surpasses canonical baseline models and
even demonstrates superior performance compared to high-performing convolution
models. Our contribution can be outlined in two aspects: firstly, we introduce
a group of subcapsules onto which an input vector is projected. Subsequently,
we present the Hybrid Gromov-Wasserstein framework, which initially quantifies
the dissimilarity between the input and the components modeled by the
subcapsules, followed by determining their alignment degree through optimal
transport. This innovative mechanism capitalizes on new insights into defining
alignment between the input and subcapsules, based on the similarity of their
respective component distributions. This approach enhances CapsNets' capacity
to learn from intricate, high-dimensional data while retaining their
interpretability and hierarchical structure. Our proposed model offers two
distinct advantages: (i) its lightweight nature facilitates the application of
capsules to more intricate vision tasks, including object detection; (ii) it
outperforms baseline approaches in these demanding tasks.
Multi-Marginal Gromov-Wasserstein Transport and Barycenters
Gromov-Wasserstein (GW) distances are combinations of Gromov-Hausdorff and
Wasserstein distances that allow the comparison of two different metric measure
spaces (mm-spaces). Due to their invariance under measure- and
distance-preserving transformations, they are well suited for many applications
in graph and shape analysis. In this paper, we introduce the concept of
multi-marginal GW transport between a set of mm-spaces as well as its
regularized and unbalanced versions. As a special case, we discuss
multi-marginal fused variants, which combine the structure information of an
mm-space with label information from an additional label space. To tackle the
new formulations numerically, we consider the bi-convex relaxation of the
multi-marginal GW problem, which is tight in the balanced case if the cost
function is conditionally negative definite. The relaxed model can be solved by
an alternating minimization, where each step can be performed by a
multi-marginal Sinkhorn scheme. We show relations of our multi-marginal GW
problem to (unbalanced, fused) GW barycenters and present various numerical
results, which indicate the potential of the concept.
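The multi-marginal Sinkhorn scheme mentioned above can be sketched for three marginals with a dense cost tensor: cyclically rescale the Gibbs kernel so each axis of the coupling tensor matches its prescribed marginal. This generic implementation and its toy cost are illustrative, not the paper's regularized GW solver:

```python
import numpy as np

def multimarginal_sinkhorn(marginals, C, eps=0.5, n_iters=200):
    """Entropic multi-marginal OT for three marginals: cyclic Sinkhorn
    updates of one scaling vector per axis of the coupling tensor."""
    a, b, c = marginals
    K = np.exp(-C / eps)
    u = [np.ones_like(a), np.ones_like(b), np.ones_like(c)]

    def coupling():
        return K * u[0][:, None, None] * u[1][None, :, None] * u[2][None, None, :]

    for _ in range(n_iters):
        u[0] *= a / coupling().sum(axis=(1, 2))   # match first marginal
        u[1] *= b / coupling().sum(axis=(0, 2))   # match second marginal
        u[2] *= c / coupling().sum(axis=(0, 1))   # match third marginal
    return coupling()

# Toy example: three uniform marginals on 3 points, with a pairwise
# squared-difference cost summed over all pairs of marginals.
n = 3
idx = np.arange(n)
C = ((idx[:, None, None] - idx[None, :, None]) ** 2
     + (idx[None, :, None] - idx[None, None, :]) ** 2
     + (idx[:, None, None] - idx[None, None, :]) ** 2).astype(float)
a = np.full(n, 1 / n)
P = multimarginal_sinkhorn([a, a.copy(), a.copy()], C)
```

In the paper's setting, each Sinkhorn step of this form would appear inside the alternating minimization over the bi-convex relaxation; only the generic inner scheme is shown here.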
Sliced Multi-Marginal Optimal Transport
Multi-marginal optimal transport enables one to compare multiple probability
measures, which increasingly finds application in multi-task learning problems.
One practical limitation of multi-marginal transport is computational
scalability in the number of measures, samples and dimensionality. In this
work, we propose a multi-marginal optimal transport paradigm based on random
one-dimensional projections, whose (generalized) distance we term the sliced
multi-marginal Wasserstein distance. To construct this distance, we introduce a
characterization of the one-dimensional multi-marginal Kantorovich problem and
use it to highlight a number of properties of the sliced multi-marginal
Wasserstein distance. In particular, we show that (i) the sliced multi-marginal
Wasserstein distance is a (generalized) metric that induces the same topology
as the standard Wasserstein distance, (ii) it admits a dimension-free sample
complexity, (iii) it is tightly connected with the problem of barycentric
averaging under the sliced-Wasserstein metric. We conclude by illustrating the
sliced multi-marginal Wasserstein on multi-task density estimation and
multi-dynamics reinforcement learning problems.
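The key computational idea, random one-dimensional projections combined with the monotone structure of optimal 1-D couplings, can be sketched as follows. The pairwise squared cost and equal sample counts are simplifying assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def sliced_multimarginal_w2(measures, n_projections=100, seed=0):
    """Monte-Carlo estimate of a sliced multi-marginal Wasserstein-type
    distance: project each empirical measure onto random unit directions,
    sort the projections (in 1-D the optimal multi-marginal coupling is
    monotone for this cost), and average pairwise squared costs of the
    coupled points."""
    rng = np.random.default_rng(seed)
    d = measures[0].shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)        # random direction on sphere
        sorted_projs = [np.sort(X @ theta) for X in measures]
        cost = 0.0
        for i in range(len(measures)):
            for j in range(i + 1, len(measures)):
                cost += np.mean((sorted_projs[i] - sorted_projs[j]) ** 2)
        total += cost
    return total / n_projections

# Toy example: three point clouds in 2-D, one of them shifted.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
Y = X + 3.0                                   # translated copy of X
d_same = sliced_multimarginal_w2([X, X.copy(), X.copy()])
d_diff = sliced_multimarginal_w2([X, X.copy(), Y])
```

Sorting replaces the full Kantorovich problem on each slice, which is what makes the sliced construction scale well in the number of samples, as the abstract emphasizes.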