The Equivalence of Fourier-based and Wasserstein Metrics on Imaging Problems
We investigate properties of some extensions of a class of Fourier-based
probability metrics, originally introduced to study convergence to equilibrium
for the solution of the spatially homogeneous Boltzmann equation. Unlike the
original metrics, the new Fourier-based metrics are also well-defined for
probability distributions with different centers of mass, and for discrete
probability measures supported on a regular grid. Among other properties, it
is shown that, in the discrete setting, these new Fourier-based metrics are
equivalent either to the Euclidean-Wasserstein distance or to the
Kantorovich-Wasserstein distance, with explicit constants of equivalence.
Numerical results then show that, in benchmark image-processing problems, the
Fourier metrics provide a better runtime than the Wasserstein ones.

Comment: 18 pages, 2 figures, 1 table
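The runtime advantage is plausible because a discrete Fourier-based metric can be evaluated directly from the FFTs of the two measures. The following is a minimal 1-D sketch, assuming the common definition d_s(f, g) = sup_{k≠0} |f̂(k) − ĝ(k)| / |k|^s on a regular grid; the function name, the exponent default, and the exact normalization are illustrative assumptions, not necessarily the paper's precise variant:

```python
import numpy as np

def fourier_metric(p, q, s=2.0):
    """Sketch of a discrete Fourier-based metric d_s between two 1-D
    probability vectors p and q supported on the same regular grid.
    Uses the usual form sup over nonzero frequencies of
    |p_hat(k) - q_hat(k)| / |k|**s (an assumption, see lead-in)."""
    fp, fq = np.fft.fft(p), np.fft.fft(q)
    k = np.fft.fftfreq(len(p)) * len(p)   # integer frequencies 0, 1, ..., -1
    mask = k != 0                         # skip k = 0, where both transforms equal 1
    return np.max(np.abs(fp - fq)[mask] / np.abs(k[mask]) ** s)
```

Each evaluation costs O(N log N) for the FFTs plus a linear scan, whereas computing a Wasserstein distance on a grid generally requires solving an optimization problem.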
An Interior-Point-Inspired Algorithm for Linear Programs Arising in Discrete Optimal Transport
Discrete Optimal Transport problems give rise to very large linear programs
(LP) with a particular structure of the constraint matrix. In this paper we
present a hybrid algorithm that mixes an interior point method (IPM) and column
generation, specialized for the LP originating from the Kantorovich Optimal
Transport problem. Knowing that optimal solutions of such problems display a
high degree of sparsity, we propose a column-generation-like technique to force
all intermediate iterates to be as sparse as possible. The algorithm is
implemented nearly matrix-free. Indeed, most of the computations avoid forming
the huge matrices involved and solve the Newton system using only a much
smaller Schur complement of the normal equations. We prove theoretical results
about the sparsity pattern of the optimal solution, exploiting the graph
structure of the underlying problem. We use these results to mix iterative and
direct linear solvers efficiently, in a way that avoids producing
preconditioners or factorizations with excessive fill-in while at the same
time guaranteeing a low number of conjugate gradient iterations. We compare the
proposed method with two state-of-the-art solvers and show that it can compete
with the best network optimization tools in terms of computational time and
memory usage. We perform experiments with problems reaching more than four
billion variables and demonstrate the robustness of the proposed method.
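The sparsity the algorithm exploits is a classical property of the Kantorovich LP: vertex solutions of the transportation polytope have at most m + n − 1 nonzero entries, far fewer than the m × n variables. A minimal sketch below solves the LP with a generic solver (SciPy's linprog with the HiGHS backend); this is an illustration of the problem structure only, not the paper's hybrid IPM and column-generation method:

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich_plan(mu, nu, C):
    """Sketch: solve the discrete Kantorovich optimal transport LP
        min <C, P>   s.t.   P 1 = mu,   P^T 1 = nu,   P >= 0
    with a generic LP solver, returning the transport plan P."""
    m, n = C.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):                       # row-sum constraints: P 1 = mu
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):                       # column-sum constraints: P^T 1 = nu
        A_eq[m + j, j::n] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(m, n)
```

For marginals of sizes m and n, a vertex optimum has at most m + n − 1 nonzeros; this is the sparsity pattern the column-generation-like technique pushes intermediate iterates toward, and what makes a nearly matrix-free implementation feasible at billions of variables.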