The Monge problem with vanishing gradient penalization: Vortices and asymptotic profile
We investigate the approximation of the Monge problem (minimizing
$\int_\Omega |T(x) - x| \, d\mu(x)$ among the vector-valued maps $T$ with
prescribed image measure $T_\# \mu = \nu$) by adding a vanishing Dirichlet energy,
namely $\varepsilon \int_\Omega |DT|^2$. We study the $\Gamma$-convergence as
$\varepsilon \to 0$, proving a density result for Sobolev (or Lipschitz)
transport maps in the class of transport plans. In a certain two-dimensional
framework that we analyze in detail, when no optimal plan is induced by an
$H^1$ map, we study the selected limit map, which is a new "special" Monge
transport, possibly different from the monotone one, and we find the precise
asymptotics of the optimal cost depending on $\varepsilon$, where the leading term
is of order $\varepsilon |\log \varepsilon|$.
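The penalized functional lends itself to a quick numerical experiment. Below is a minimal 1-D sketch, not the paper's construction (which is two-dimensional): the grid discretization, the target quantiles, and the soft-penalty weight `lam` are all illustrative assumptions, with the hard constraint $T_\# \mu = \nu$ relaxed to a 1-Wasserstein penalty so that an off-the-shelf optimizer applies.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import wasserstein_distance

# Toy 1-D setup (assumed): mu = uniform on [0,1] (grid x), nu = uniform on [0.5,1].
n = 100
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
target = 0.5 + 0.5 * x  # quantiles of nu

def penalized_cost(T, eps, lam=50.0):
    monge = np.mean(np.abs(T - x))                  # int |T(x)-x| d(mu)
    dirichlet = eps * np.sum(np.diff(T) ** 2) / dx  # eps * int |T'|^2
    push = lam * wasserstein_distance(T, target)    # soft stand-in for T_# mu = nu
    return monge + dirichlet + push

# The objective is nonsmooth, so L-BFGS-B is only a pragmatic choice here.
for eps in (1e-1, 1e-2, 1e-3):
    res = minimize(penalized_cost, x0=x.copy(), args=(eps,), method="L-BFGS-B")
    print(f"eps={eps:.0e}  penalized cost={res.fun:.4f}")
```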
Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance
The Wasserstein distance between two probability measures on a metric space
is a measure of closeness with applications in statistics, probability, and
machine learning. In this work, we consider the fundamental question of how
quickly the empirical measure $\hat\mu_n$ obtained from $n$ independent samples
from $\mu$ approaches $\mu$ in the Wasserstein distance of any order. We prove sharp
asymptotic and finite-sample results for this rate of convergence for general
measures on general compact metric spaces. Our finite-sample results show the
existence of multi-scale behavior, where measures can exhibit radically
different rates of convergence as $n$ grows.
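As a quick empirical check of the rate question (a minimal sketch, not the paper's machinery; the uniform choice of $\mu$ and the large reference sample standing in for $\mu$ are assumptions), one can watch $W_1(\hat\mu_n, \mu)$ shrink as $n$ grows:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
reference = rng.uniform(0, 1, 200_000)  # large-sample stand-in for mu

# For uniform mu on [0,1] in 1-D, W1(mu_hat_n, mu) decays roughly like n^{-1/2}.
for n in (100, 1_000, 10_000, 100_000):
    errs = [wasserstein_distance(rng.uniform(0, 1, n), reference)
            for _ in range(5)]
    print(f"n={n:>7}  W1 ~ {np.mean(errs):.5f}")
```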
Convergence of Smoothed Empirical Measures with Applications to Entropy Estimation
This paper studies convergence of empirical measures smoothed by a Gaussian
kernel. Specifically, consider approximating $P \ast \mathcal{N}_\sigma$, for
$\mathcal{N}_\sigma \triangleq \mathcal{N}(0, \sigma^2 \mathrm{I}_d)$, by
$\hat{P}_n \ast \mathcal{N}_\sigma$, where $\hat{P}_n$ is the empirical measure,
under different statistical distances. The convergence is examined in terms of
the Wasserstein distance, total variation (TV), Kullback-Leibler (KL)
divergence, and $\chi^2$-divergence. We show that the approximation error under
the TV distance and 1-Wasserstein distance ($\mathsf{W}_1$) converges at rate
$e^{O(d)} n^{-1/2}$, in remarkable contrast to a typical $n^{-1/d}$
rate for unsmoothed $\mathsf{W}_1$ (and $d \ge 3$). For the
KL divergence, squared 2-Wasserstein distance ($\mathsf{W}_2^2$), and
$\chi^2$-divergence, the convergence rate is $e^{O(d)} n^{-1}$, but only if
$P$ achieves finite input-output $\chi^2$ mutual information across the additive
white Gaussian noise channel. If the latter condition is not met, the rate
changes to $\omega(n^{-1})$ for the KL divergence and $\mathsf{W}_2^2$, while
the $\chi^2$-divergence becomes infinite, a curious dichotomy. As a main
application we consider estimating the differential entropy
$h(P \ast \mathcal{N}_\sigma)$ in the high-dimensional regime. The distribution
$P$ is unknown but $n$ i.i.d. samples from it are available. We first show that
any good estimator of $h(P \ast \mathcal{N}_\sigma)$ must have sample complexity
that is exponential in $d$. Using the empirical approximation results we then
show that the absolute-error risk of the plug-in estimator converges at the
parametric rate $e^{O(d)} n^{-1/2}$, thus establishing the minimax
rate-optimality of the plug-in. Numerical results that demonstrate a
significant empirical superiority of the plug-in approach to general-purpose
differential entropy estimators are provided.

Comment: arXiv admin note: substantial text overlap with arXiv:1810.1158
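The plug-in estimator itself is straightforward to sketch: it is the differential entropy of the Gaussian mixture $\hat{P}_n \ast \mathcal{N}_\sigma$, which can be approximated by Monte Carlo. The sketch below is not the paper's experimental code; the sample sizes, $\sigma$, and the Gaussian test case are illustrative assumptions chosen so the estimate can be checked against a closed form.

```python
import numpy as np
from scipy.special import logsumexp

def plugin_entropy(samples, sigma, n_mc=1000, rng=None):
    """Monte Carlo estimate of h(P_hat_n * N_sigma), the differential
    entropy of the Gaussian mixture (1/n) sum_i N(x_i, sigma^2 I_d)."""
    rng = rng or np.random.default_rng(0)
    n, d = samples.shape
    # Draw Monte Carlo points from the mixture itself: a resampled x_i plus noise.
    y = samples[rng.integers(0, n, n_mc)] + sigma * rng.standard_normal((n_mc, d))
    # log of the mixture density at each y, via logsumexp over components.
    sq = ((y[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
    log_q = (logsumexp(-sq / (2 * sigma ** 2), axis=1)
             - np.log(n) - 0.5 * d * np.log(2 * np.pi * sigma ** 2))
    return -log_q.mean()  # h = -E[log q(Y)] with Y ~ q

# Sanity check: for P = N(0, I_d), h(P * N_sigma) = (d/2) log(2*pi*e*(1+sigma^2)).
d, sigma = 2, 0.5
rng = np.random.default_rng(1)
x = rng.standard_normal((2000, d))
print("plug-in:", plugin_entropy(x, sigma, rng=rng))
print("exact  :", 0.5 * d * np.log(2 * np.pi * np.e * (1 + sigma ** 2)))
```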
Total Generalized Variation for Manifold-valued Data
In this paper we introduce the notion of second-order total generalized
variation (TGV) regularization for manifold-valued data in a discrete setting.
We provide an axiomatic approach to formalize reasonable generalizations of TGV
to the manifold setting and present two possible concrete instances that
fulfill the proposed axioms. We provide well-posedness results and present
algorithms for a numerical realization of these generalizations to the manifold
setup. Further, we provide experimental results for synthetic and real data
that numerically underpin the proposed generalization and show its potential
for applications with manifold-valued data.
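For intuition about what TGV does before any manifold structure enters, here is a minimal sketch of the classical real-valued second-order TGV denoising problem in 1-D, i.e. the Euclidean baseline the paper generalizes, not its manifold algorithms. It uses CVXPY, and the test signal, noise level, and weights $\alpha_0, \alpha_1$ are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

# Second-order TGV denoising of a 1-D signal f:
#   min_{u,w}  0.5*||u - f||^2 + alpha1*||Du - w||_1 + alpha0*||Dw||_1
# TGV penalizes the deviation of the gradient Du from an auxiliary field w,
# so piecewise-affine signals are recovered without staircasing.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
clean = np.where(t < 0.5, 2 * t, 1.5 - t)  # continuous, piecewise-affine
f = clean + 0.05 * rng.standard_normal(t.size)

u = cp.Variable(t.size)
w = cp.Variable(t.size - 1)
alpha1, alpha0 = 0.05, 0.1
objective = (0.5 * cp.sum_squares(u - f)
             + alpha1 * cp.norm1(cp.diff(u) - w)
             + alpha0 * cp.norm1(cp.diff(w)))
cp.Problem(cp.Minimize(objective)).solve()

print("RMSE noisy:", np.sqrt(np.mean((f - clean) ** 2)))
print("RMSE TGV  :", np.sqrt(np.mean((u.value - clean) ** 2)))
```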