Handling congestion in crowd motion modeling
We address here the issue of congestion in the modeling of crowd motion, in
the non-smooth framework: contacts between people are not anticipated and
avoided; they actually occur, and they are explicitly taken into account in the
model. We limit our approach to very basic principles in terms of behavior, to
focus on the particular problems raised by the non-smooth character of the
models. We consider that individuals tend to move according to a desired, or
spontanous, velocity. We account for congestion by assuming that the evolution
realizes at each time an instantaneous balance between individual tendencies
and global constraints (overlapping is forbidden): the actual velocity is
defined as the closest to the desired velocity among all admissible ones, in a
least square sense. We develop those principles in the microscopic and
macroscopic settings, and we present how the framework of Wasserstein distance
between measures allows to recover the sweeping process nature of the problem
on the macroscopic level, which makes it possible to obtain existence results
in spite of the non-smooth character of the evolution process. Micro and macro
approaches are compared, and we investigate the similarities together with deep
differences of those two levels of description
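The least-squares projection of the desired velocity onto the admissible set can be written in closed form for the simplest microscopic case: two rigid discs in contact. The sketch below is illustrative only (the function name and the equal-split correction are my assumptions, not the paper's implementation); admissibility requires the relative velocity along the contact normal to be non-negative.

```python
import numpy as np

def project_velocities(u1, u2, x1, x2):
    """Least-squares projection of desired velocities (u1, u2) of two
    touching discs at positions (x1, x2) onto the admissible set:
    the relative velocity along the contact normal must not push the
    discs further into each other."""
    e = (x2 - x1) / np.linalg.norm(x2 - x1)  # contact normal, disc 1 -> disc 2
    rel = np.dot(u2 - u1, e)                 # normal relative velocity
    if rel >= 0:                             # separating or sliding: admissible
        return u1.copy(), u2.copy()
    # Remove the violating normal component; the equal split between the
    # two discs is exactly the least-squares (closest-point) projection.
    corr = 0.5 * rel * e
    return u1 + corr, u2 - corr
```

For two discs heading straight at each other with opposite velocities, the projection cancels both normal components, so both discs stop, as the instantaneous balance between individual tendencies and the non-overlap constraint would dictate.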
Sliced Wasserstein Distance for Learning Gaussian Mixture Models
Gaussian mixture models (GMMs) are powerful parametric tools with many
applications in machine learning and computer vision. Expectation maximization
(EM) is the most popular algorithm for estimating the GMM parameters. However,
EM guarantees only convergence to a stationary point of the log-likelihood
function, which could be arbitrarily worse than the optimal solution. Inspired
by the relationship between the negative log-likelihood function and the
Kullback-Leibler (KL) divergence, we propose an alternative formulation for
estimating the GMM parameters using the sliced Wasserstein distance, which
gives rise to a new algorithm. Specifically, we propose minimizing the
sliced Wasserstein distance between the mixture model and the data distribution
with respect to the GMM parameters. In contrast to the KL divergence, the
energy landscape of the sliced Wasserstein distance is better behaved and
therefore more suitable for a stochastic gradient descent scheme for finding the
optimal GMM parameters. We show that our formulation yields parameter
estimates that are more robust to random initialization, and we demonstrate that
it can estimate high-dimensional data distributions more faithfully than the EM
algorithm.
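The sliced Wasserstein distance projects both distributions onto random one-dimensional directions, where optimal transport reduces to sorting. A minimal Monte-Carlo sketch for equal-size empirical samples (the function name and defaults are mine, not the paper's):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte-Carlo estimate of the squared sliced Wasserstein-2 distance
    between two empirical samples X, Y of equal size. Each random unit
    direction yields a 1-D transport problem solved exactly by sorting."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)        # uniform direction on the sphere
        px = np.sort(X @ theta)               # 1-D projections, sorted =
        py = np.sort(Y @ theta)               # optimal 1-D matching
        total += np.mean((px - py) ** 2)
    return total / n_projections
```

Because each term is a closed-form 1-D distance, the estimate is cheap and differentiable in the sample locations, which is what makes it attractive for stochastic gradient descent over GMM parameters.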
A Smoothed Dual Approach for Variational Wasserstein Problems
Variational problems that involve Wasserstein distances have been recently
proposed to summarize and learn from probability measures. Despite being
conceptually simple, such problems are computationally challenging because they
involve minimizing over quantities (Wasserstein distances) that are themselves
hard to compute. We show that the dual formulation of Wasserstein variational
problems introduced recently by Carlier et al. (2014) can be regularized using
an entropic smoothing, which leads to smooth, differentiable, convex
optimization problems that are simpler to implement and numerically more
stable. We illustrate the versatility of this approach by applying it to the
computation of Wasserstein barycenters and of gradient flows of spatial
regularization functionals.
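The computational core of entropic smoothing is the Sinkhorn iteration: adding an entropy term makes the (dual) problem smooth and strictly convex, solvable by alternating marginal scalings. The sketch below is a toy single-transport illustration of that primitive, not the paper's barycenter or gradient-flow solver:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Entropically regularized optimal transport cost between
    histograms a and b under cost matrix C (Sinkhorn iterations)."""
    K = np.exp(-C / eps)                  # Gibbs kernel from the cost
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                 # scale to match column marginals
        u = a / (K @ v)                   # scale to match row marginals
    P = u[:, None] * K * v[None, :]       # smoothed transport plan
    return np.sum(P * C)                  # regularized transport cost
```

As the regularization strength eps decreases, the smoothed cost approaches the unregularized Wasserstein cost, at the price of a numerically stiffer kernel.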
A macroscopic crowd motion model of gradient flow type
A simple model to handle the flow of people in emergency evacuation
situations is considered: at every point x, the velocity U(x) that individuals
at x would like to realize is given. Yet, the incompressibility constraint
prevents this velocity field from being realized, and the actual velocity is the
projection of the desired one onto the set of admissible velocities. Instead of
looking at a microscopic setting (where individuals are represented by rigid
discs), here the macroscopic approach is investigated, where the unknown is the
evolution of the density. If a gradient structure is given, say U is the
opposite of the gradient of D, where D is, for instance, the distance to the
exit door, the problem is presented as a Gradient Flow in the Wasserstein space
of probability measures. The functional which gives the Gradient Flow is
neither finitely valued (since it takes into account the constraints on the
density) nor geodesically convex, which calls for an ad hoc study of the
convergence of a discrete scheme.
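To make the density constraint concrete, here is a deliberately crude 1-D toy discretization of my own (not the scheme analyzed in the paper): mass wants to move right towards an exit, but the flux into a cell is capped by its remaining capacity, a discrete analogue of projecting the desired velocity onto the admissible set.

```python
import numpy as np

def step(rho, tau, rho_max=1.0):
    """One explicit step of a toy 1-D congested-transport model.
    Everyone desires to move one cell to the right (the exit is at the
    right end), but each cell can only accept mass up to rho_max."""
    n = len(rho)
    flux = np.zeros(n + 1)
    for i in range(n - 1):
        desired = tau * rho[i]              # mass wanting to move i -> i+1
        room = rho_max - rho[i + 1]         # free capacity downstream
        flux[i + 1] = min(desired, max(room, 0.0))
    flux[n] = tau * rho[n - 1]              # the exit imposes no capacity limit
    return rho - np.diff(flux)              # conservative update

# e.g. a saturated cell (rho = rho_max) blocks all inflow from behind it
```

A cell already at maximal density transmits no inflow, so congestion propagates upstream, qualitatively matching the behavior the incompressibility constraint enforces in the continuous model.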
Geometry Helps to Compare Persistence Diagrams
Exploiting geometric structure to improve the asymptotic complexity of
discrete assignment problems is a well-studied subject. In contrast, the
practical advantages of using geometry for such problems have not been
explored. We implement geometric variants of the Hopcroft--Karp algorithm for
bottleneck matching (based on previous work by Efrat et al.) and of the auction
algorithm by Bertsekas for Wasserstein distance computation. Both
implementations use k-d trees to replace a linear scan with a geometric
proximity query. Our interest in this problem stems from the desire to compute
distances between persistence diagrams, a problem that comes up frequently in
topological data analysis. We show that our geometric matching algorithms lead
to a substantial performance gain, both in running time and in memory
consumption, over their purely combinatorial counterparts. Moreover, our
implementation significantly outperforms the only other implementation
available for comparing persistence diagrams.
Comment: 20 pages, 10 figures; extended version of paper published in ALENEX
201
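The data-structure idea, replacing a linear scan with a k-d tree proximity query, can be shown in isolation (this toy uses SciPy's `cKDTree` for plain nearest-neighbour queries between two point sets; it is not the paper's Hopcroft--Karp or auction matching):

```python
import numpy as np
from scipy.spatial import cKDTree

# Two toy persistence diagrams as (birth, death) point sets.
rng = np.random.default_rng(0)
A = rng.random((200, 2)); A[:, 1] += A[:, 0]   # enforce death >= birth
B = rng.random((200, 2)); B[:, 1] += B[:, 0]

# Linear scan: nearest point of B for every point of A, O(n) per query.
lin = np.array([np.argmin(np.linalg.norm(B - a, axis=1)) for a in A])

# The same queries through a k-d tree, O(log n) expected per query.
tree = cKDTree(B)
_, kd = tree.query(A)

assert np.array_equal(lin, kd)   # identical answers, asymptotically cheaper
```

Inside a matching algorithm, each such query locates the best candidate partner for a point, which is exactly where the geometric variants gain over their combinatorial counterparts.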
