
    The Monge problem with vanishing gradient penalization: Vortices and asymptotic profile

    We investigate the approximation of the Monge problem (minimizing $\int_\Omega |T(x) - x| \, d\mu(x)$ among the vector-valued maps $T$ with prescribed image measure $T_\# \mu$) by adding a vanishing Dirichlet energy, namely $\epsilon \int_\Omega |DT|^2$. We study the $\Gamma$-convergence as $\epsilon \to 0$, proving a density result for Sobolev (or Lipschitz) transport maps in the class of transport plans. In a certain two-dimensional framework that we analyze in detail, when no optimal plan is induced by an $H^1$ map, we study the selected limit map, which is a new "special" Monge transport, possibly different from the monotone one, and we find the precise asymptotics of the optimal cost depending on $\epsilon$, where the leading term is of order $\epsilon |\log \epsilon|$.
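As a small illustrative sketch (not from the paper), the Monge cost between two empirical measures with equally many atoms can be computed as a linear assignment problem, since an optimal transport plan between uniform discrete measures of equal size is induced by a permutation. The point clouds and sizes below are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 200
# Hypothetical source and target samples in the plane.
x = rng.uniform(size=(n, 2))
y = rng.uniform(size=(n, 2))

# Cost matrix of Euclidean distances |T(x) - x| for every candidate pairing.
cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)

# The optimal assignment induces a discrete Monge transport map
# between the two empirical measures.
rows, cols = linear_sum_assignment(cost)
monge_cost = cost[rows, cols].mean()
print(f"approximate Monge cost: {monge_cost:.4f}")
```

The assignment replaces the continuous minimization over maps $T$ by a finite permutation problem; it says nothing about the vanishing-gradient penalization itself, which concerns how a regularized sequence selects among optimal plans.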

    Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance

    The Wasserstein distance between two probability measures on a metric space is a measure of closeness with applications in statistics, probability, and machine learning. In this work, we consider the fundamental question of how quickly the empirical measure obtained from $n$ independent samples from $\mu$ approaches $\mu$ in the Wasserstein distance of any order. We prove sharp asymptotic and finite-sample results for this rate of convergence for general measures on general compact metric spaces. Our finite-sample results show the existence of multi-scale behavior, in which measures can exhibit radically different rates of convergence as $n$ grows.
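The convergence of empirical measures in Wasserstein distance can be observed numerically in one dimension, where `scipy.stats.wasserstein_distance` computes $\mathsf{W}_1$ exactly between samples. A minimal sketch, using a large reference sample as a stand-in for the true measure $\mu$ (here uniform on $[0,1]$, a hypothetical choice):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
# Large reference sample standing in for the true measure mu = U[0, 1].
reference = rng.uniform(size=100_000)

dists = {}
for n in (100, 1_000, 10_000):
    empirical = rng.uniform(size=n)
    # W_1 between the n-sample empirical measure and (a proxy for) mu.
    dists[n] = wasserstein_distance(empirical, reference)
    print(f"n = {n:6d}  W1 = {dists[n]:.5f}")
```

In one dimension the typical rate is about $n^{-1/2}$, so the distances shrink as $n$ grows; the paper's point is that on general compact metric spaces the rate depends on the measure and can change across scales of $n$.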

    Convergence of Smoothed Empirical Measures with Applications to Entropy Estimation

    This paper studies convergence of empirical measures smoothed by a Gaussian kernel. Specifically, consider approximating $P \ast \mathcal{N}_\sigma$, for $\mathcal{N}_\sigma \triangleq \mathcal{N}(0, \sigma^2 \mathrm{I}_d)$, by $\hat{P}_n \ast \mathcal{N}_\sigma$, where $\hat{P}_n$ is the empirical measure, under different statistical distances. The convergence is examined in terms of the Wasserstein distance, total variation (TV), Kullback-Leibler (KL) divergence, and $\chi^2$-divergence. We show that the approximation error under the TV distance and 1-Wasserstein distance ($\mathsf{W}_1$) converges at rate $e^{O(d)} n^{-\frac{1}{2}}$, in remarkable contrast to the typical $n^{-\frac{1}{d}}$ rate for unsmoothed $\mathsf{W}_1$ (for $d \ge 3$). For the KL divergence, squared 2-Wasserstein distance ($\mathsf{W}_2^2$), and $\chi^2$-divergence, the convergence rate is $e^{O(d)} n^{-1}$, but only if $P$ achieves finite input-output $\chi^2$ mutual information across the additive white Gaussian noise channel. If the latter condition is not met, the rate changes to $\omega(n^{-1})$ for the KL divergence and $\mathsf{W}_2^2$, while the $\chi^2$-divergence becomes infinite, a curious dichotomy. As a main application we consider estimating the differential entropy $h(P \ast \mathcal{N}_\sigma)$ in the high-dimensional regime. The distribution $P$ is unknown, but $n$ i.i.d. samples from it are available. We first show that any good estimator of $h(P \ast \mathcal{N}_\sigma)$ must have sample complexity that is exponential in $d$. Using the empirical approximation results, we then show that the absolute-error risk of the plug-in estimator converges at the parametric rate $e^{O(d)} n^{-\frac{1}{2}}$, thus establishing the minimax rate-optimality of the plug-in estimator. Numerical results demonstrating a significant empirical superiority of the plug-in approach over general-purpose differential entropy estimators are provided.
    Comment: arXiv admin note: substantial text overlap with arXiv:1810.1158
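The plug-in estimator is concrete to sketch: $\hat{P}_n \ast \mathcal{N}_\sigma$ is a Gaussian mixture centered at the samples, and $h(\hat{P}_n \ast \mathcal{N}_\sigma)$ can be estimated by Monte Carlo as the negative mean log-density of fresh draws from that mixture. A minimal sketch, where the dimensions, sample sizes, noise level, and the choice of $P$ (a standard Gaussian, so a reference value exists) are all illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(2)
d, n, sigma = 2, 500, 0.5

# n i.i.d. samples from the unknown P (here, a standard Gaussian stand-in).
samples = rng.standard_normal((n, d))

# Monte Carlo draws from the mixture \hat{P}_n * N_sigma:
# pick a sample center uniformly, add Gaussian noise.
m = 4000
centers = samples[rng.integers(0, n, size=m)]
draws = centers + sigma * rng.standard_normal((m, d))

# Log-density of the mixture at each draw, via logsumexp over the n components.
sq = ((draws[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
log_p = (logsumexp(-sq / (2 * sigma**2), axis=1)
         - np.log(n) - 0.5 * d * np.log(2 * np.pi * sigma**2))

# Plug-in estimate of h(P * N_sigma) in nats.
h_hat = -log_p.mean()
print(f"plug-in entropy estimate: {h_hat:.3f} nats")
```

For this stand-in $P$, the smooth target is $h(\mathcal{N}(0, (1+\sigma^2)\mathrm{I}_d)) = \frac{d}{2}\log(2\pi e (1+\sigma^2)) \approx 3.06$ nats, so the estimate should land nearby; the paper's contribution is the rate at which this error shrinks in $n$ and grows in $d$.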

    Total Generalized Variation for Manifold-valued Data

    In this paper we introduce the notion of second-order total generalized variation (TGV) regularization for manifold-valued data in a discrete setting. We provide an axiomatic approach to formalize reasonable generalizations of TGV to the manifold setting and present two possible concrete instances that fulfill the proposed axioms. We provide well-posedness results and present algorithms for a numerical realization of these generalizations in the manifold setup. Further, we provide experimental results for synthetic and real data to underpin the proposed generalizations numerically and to show their potential for applications with manifold-valued data.
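To see what second-order TGV buys over plain TV, the scalar (Euclidean) special case is enough; the manifold versions in the paper generalize this. Discrete $\mathrm{TGV}^2_\alpha(u) = \min_w \alpha_1 \sum |\nabla u - w| + \alpha_0 \sum |\nabla w|$ splits the gradient into an auxiliary field $w$ plus a residual, so piecewise-affine signals pay only for slope changes. The signal and weights below are hypothetical; evaluating two candidate choices of $w$ illustrates the infimal structure without solving the minimization:

```python
import numpy as np

# Piecewise-affine 1-D signal with a single slope change (a "kink").
u = np.concatenate([np.linspace(0.0, 1.0, 20), np.linspace(1.0, 0.5, 20)[1:]])

def tgv2(u, w, a1=1.0, a0=2.0):
    """Discrete second-order TGV energy for a candidate auxiliary field w."""
    du = np.diff(u)
    return a1 * np.abs(du - w).sum() + a0 * np.abs(np.diff(w)).sum()

tv_like = tgv2(u, np.zeros(len(u) - 1))  # w = 0 recovers a1 * TV(u)
affine = tgv2(u, np.diff(u))             # w = grad u: pay only for slope jumps
print(f"w = 0 (TV):  {tv_like:.3f}")
print(f"w = grad u:  {affine:.3f}")
```

The second candidate is far cheaper: TGV penalizes changes of slope rather than the slope itself, which is why it avoids the staircasing that TV produces on smoothly varying (here affine) data.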