Toward Mesh-Invariant 3D Generative Deep Learning with Geometric Measures
3D generative modeling is accelerating as the technology allowing the capture
of geometric data is developing. However, the acquired data is often
inconsistent, resulting in unregistered meshes or point clouds. Many generative
learning algorithms require point-wise correspondences when comparing
the predicted shape and the target shape. We propose an architecture able to
cope with different parameterizations, even during the training phase. In
particular, our loss function is built upon a kernel-based metric over a
representation of meshes using geometric measures such as currents and
varifolds. The latter makes it possible to implement an efficient dissimilarity
measure with many desirable properties, such as robustness to resampling of the
mesh or point cloud. We demonstrate the efficiency and resilience of our model
on a generative learning task of human faces.
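The kernel-based metric over currents described above can be sketched concretely: a mesh is summarized by its face centers and area-weighted normals, and two meshes are compared through an RKHS inner product that never requires point correspondences. The following is a minimal illustrative sketch with a Gaussian kernel, not the paper's actual implementation; the function name and the bandwidth `sigma` are assumptions.

```python
import numpy as np

def current_distance(c1, n1, c2, n2, sigma=0.5):
    """Squared distance between two meshes represented as currents, in the
    RKHS of the Gaussian kernel k(x, y) = exp(-|x - y|^2 / sigma^2).
    c*: (m, 3) face centers; n*: (m, 3) area-weighted face normals.
    Illustrative sketch only -- no correspondence between faces is needed."""
    def inner(ca, na, cb, nb):
        # Pairwise squared distances between face centers, (m, p)
        d2 = ((ca[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / sigma ** 2)
        # Inner product of currents: sum_ij k(c_i, c_j) <n_i, n_j>
        return float((K * (na @ nb.T)).sum())
    return inner(c1, n1, c1, n1) - 2 * inner(c1, n1, c2, n2) + inner(c2, n2, c2, n2)
```

Because the loss depends only on centers and normals, remeshing or resampling a shape changes the value very little, which is the robustness property the abstract refers to.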
Dual-to-kernel learning with ideals
In this paper, we propose a theory which unifies kernel learning and symbolic
algebraic methods. We show that both worlds are inherently dual to each other,
and we use this duality to combine the structure-awareness of algebraic methods
with the efficiency and generality of kernels. The main idea lies in relating
polynomial rings to feature space, and ideals to manifolds, then exploiting
this generative-discriminative duality on kernel matrices. We illustrate this
by proposing two algorithms, IPCA and AVICA, for simultaneous manifold and
feature learning, and test their accuracy on synthetic and real-world data.
Comment: 15 pages, 1 figure
Learning Generative Models with Sinkhorn Divergences
The ability to compare two degenerate probability distributions (i.e. two
probability distributions supported on two distinct low-dimensional manifolds
living in a much higher-dimensional space) is a crucial problem arising in the
estimation of generative models for high-dimensional observations such as those
arising in computer vision or natural language. It is known that optimal
transport metrics offer a remedy for this problem, since they were
specifically designed as an alternative to information divergences to handle
such problematic scenarios. Unfortunately, training generative machines using
OT raises formidable computational and statistical challenges, because of (i)
the computational burden of evaluating OT losses, (ii) the instability and lack
of smoothness of these losses, and (iii) the difficulty of robustly estimating these
losses and their gradients in high dimension. This paper presents the first
tractable computational method to train large scale generative models using an
optimal transport loss, and tackles these three issues by relying on two key
ideas: (a) entropic smoothing, which turns the original OT loss into one that
can be computed using Sinkhorn fixed point iterations; (b) algorithmic
(automatic) differentiation of these iterations. These two approximations
result in a robust and differentiable approximation of the OT loss with
streamlined GPU execution. Entropic smoothing generates a family of losses
interpolating between Wasserstein (OT) and Maximum Mean Discrepancy (MMD), thus
making it possible to find a sweet spot leveraging the geometry of OT and the favorable
high-dimensional sample complexity of MMD which comes with unbiased gradient
estimates. The resulting computational architecture complements nicely standard
deep network generative models by a stack of extra layers implementing the loss
function.
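The two key ideas in the abstract above — entropic smoothing and differentiating through the fixed-point iterations — can be sketched in a few lines. This is a minimal NumPy illustration of entropy-regularized OT via Sinkhorn scaling between uniform empirical measures, not the paper's large-scale GPU implementation; the function name, step counts, and regularization value are assumptions.

```python
import numpy as np

def sinkhorn_loss(x, y, eps=0.05, n_iter=200):
    """Entropy-regularized OT cost between uniform empirical measures on
    point sets x (n, d) and y (m, d), via Sinkhorn fixed-point iterations.
    Illustrative sketch; in practice one runs this in log-domain and
    backpropagates through the loop with automatic differentiation."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-distance cost
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)     # uniform marginals
    K = np.exp(-C / eps)                                # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iter):                             # alternating scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                     # transport plan
    return float((P * C).sum())
```

As eps grows, the regularized loss moves away from the Wasserstein distance toward an MMD-like quantity, which is the interpolation the abstract exploits; debiased variants subtract the self-transport terms sinkhorn_loss(x, x) and sinkhorn_loss(y, y).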
Stochastic Algorithm For Parameter Estimation For Dense Deformable Template Mixture Model
Estimating probabilistic deformable template models is a new approach in the
fields of computer vision and probabilistic atlases in computational anatomy. A
first coherent statistical framework, modelling the variability as a hidden
random variable, was given by Allassonnière, Amit and Trouvé in [1] for
simple and mixture deformable template models. A consistent stochastic
algorithm was introduced in [2] to address the convergence problem encountered
in [1] for the estimation algorithm in the one-component model in the presence
of noise. We propose here to continue in this direction, using an "SAEM-like"
algorithm to approximate the MAP estimator in the general Bayesian setting of a
mixture of deformable template models. We also prove the convergence of this
algorithm toward a critical point of the penalised likelihood of the
observations, and illustrate it with handwritten digit images.
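An SAEM-style scheme of the kind invoked above alternates simulation of the hidden variables with a stochastic approximation of the E-step sufficient statistics. The following is a toy sketch on a simple Gaussian latent-variable model, not the deformable template model itself; the model, sample sizes, and step-size schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i = z_i + eps_i with z_i ~ N(mu, 1) and eps_i ~ N(0, 1),
# so marginally y_i ~ N(mu, 2) and the MLE of mu is the sample mean of y.
y = rng.normal(3.0, np.sqrt(2.0), size=500)

mu, s = 0.0, 0.0
for k in range(1, 201):
    # S-step: simulate the latent z from its posterior N((y + mu)/2, 1/2)
    z = rng.normal((y + mu) / 2.0, np.sqrt(0.5))
    # SA-step: damped Robbins-Monro update of the sufficient statistic E[z];
    # a burn-in with gamma = 1 is a common practical choice before averaging
    gamma = 1.0 if k <= 50 else 1.0 / (k - 50)
    s = s + gamma * (z.mean() - s)
    # M-step: mu maximizing the complete-data likelihood given the statistic
    mu = s
```

The decreasing step sizes average out the simulation noise, which is what yields convergence to a critical point of the (here unpenalised) likelihood.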