3D statistical facial reconstruction
The aim of craniofacial reconstruction is to produce a likeness of a face
from the skull. Little work on computer-assisted facial reconstruction has
been done in the past, owing to limited machine performance and data
availability, and most reconstructions have been performed manually. In this
paper, we present an approach to build 3D statistical models of the skull and
of the face with soft tissues from the skull of one individual. Results on
real data are presented and seem promising.
Statistical skull models from 3D X-ray images
We present two statistical models of the skull and mandible built upon an
elastic registration method for 3D meshes. The aim of this work is to relate
degrees of freedom of skull anatomy, as such static relations are of primary
interest for anthropology and legal medicine. Statistical models can
effectively provide reconstructions together with statistical precision. In
our applications, patient-specific meshes of the skull and the mandible are
high-density meshes extracted from 3D CT scans. All our patient-specific
meshes are registered in a subject-shared reference system using our 3D-to-3D
elastic matching algorithm. Registration is based on the minimization of a
distance, defined on the vertices, between the high-density mesh and a shared
low-density mesh, in a multi-resolution approach. A Principal Component
Analysis is performed on the normalised registered data to build a statistical
linear model of skull and mandible shape variation. The accuracy of the
reconstruction is below one millimetre in the shape space (after rigid
registration). Reconstruction errors for scan data of test individuals are
below the registration noise. To take into account the articulated aspect of
the skull in our model, Kernel Principal Component Analysis is applied,
extracting a non-linear parameter associated with mandible position, thereby
building a statistical articulated 3D model of the skull.
Comment: Proceedings of the Second International Conference on Reconstruction
of Soft Facial Parts RSFP'200
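The linear shape model described in the abstract above can be sketched as follows. This is a minimal, hypothetical illustration with random stand-in data: each registered mesh is flattened into a row vector of vertex coordinates (a shared topology after registration), and PCA via SVD yields the modes of shape variation.

```python
import numpy as np

# Hypothetical stand-in data: rows are registered meshes, each flattened
# into a vector of vertex coordinates (shared topology after registration).
rng = np.random.default_rng(0)
n_subjects, n_coords = 20, 150
meshes = rng.normal(size=(n_subjects, n_coords))

mean_shape = meshes.mean(axis=0)
centered = meshes - mean_shape

# PCA through SVD: the rows of vt are the principal modes of shape variation.
u, s, vt = np.linalg.svd(centered, full_matrices=False)

# Reconstruct one subject from the first k modes (its shape-space coordinates).
k = 5
coeffs = centered[0] @ vt[:k].T
recon_k = mean_shape + coeffs @ vt[:k]

# Using all modes, reconstruction in the shape space is exact.
full = mean_shape + (centered[0] @ vt.T) @ vt
```

Reconstruction accuracy can then be reported as the distance between `recon_k` and the original mesh, which is how the sub-millimetre figure in the abstract would be measured.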
Craniofacial reconstruction as a prediction problem using a Latent Root Regression model
In this paper, we present a computer-assisted method for facial reconstruction. This method provides an estimation of the facial shape associated with unidentified skeletal remains. Current computer-assisted methods using a statistical framework rely on a common set of extracted points located on the bone and soft-tissue surfaces. Most facial reconstruction methods then consist of predicting the positions of the soft-tissue surface points when the positions of the bone surface points are known. We propose to use Latent Root Regression for this prediction. The results obtained are then compared to those given by Principal Component Analysis linear models. In conjunction, we have evaluated the influence of the number of skull landmarks used. Anatomical skull landmarks are completed iteratively by points located upon geodesics which link these anatomical landmarks, thus enabling us to artificially increase the number of skull points. Facial points are obtained using a mesh-matching algorithm between a common reference mesh and individual soft-tissue surface meshes. The proposed method is validated in terms of accuracy, based on a leave-one-out cross-validation test applied to a homogeneous database. Accuracy measures are obtained by computing the distance between the original face surface and its reconstruction. Finally, these results are discussed with reference to current computer-assisted facial reconstruction techniques.
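The prediction setting described above can be illustrated with a small sketch. The paper uses Latent Root Regression; a plain least-squares fit on synthetic data is shown here only to make the input/output structure and the leave-one-out evaluation concrete — the dimensions, noise level, and solver are illustrative assumptions.

```python
import numpy as np

# Hypothetical data: skull landmarks (inputs) and face points (outputs),
# both flattened 3D coordinates; a linear map plus noise stands in for
# real anatomy. This is NOT the paper's Latent Root Regression model.
rng = np.random.default_rng(1)
n_subjects, n_skull_pts, n_face_pts = 30, 5, 8
X = rng.normal(size=(n_subjects, n_skull_pts * 3))       # skull landmarks
W_true = rng.normal(size=(n_skull_pts * 3, n_face_pts * 3))
Y = X @ W_true + 0.01 * rng.normal(size=(n_subjects, n_face_pts * 3))

# Leave-one-out: fit on all subjects but the first, predict the first.
W, *_ = np.linalg.lstsq(X[1:], Y[1:], rcond=None)
pred = X[0] @ W

# Accuracy as the mean distance between predicted and true face points.
err = np.linalg.norm((pred - Y[0]).reshape(-1, 3), axis=1).mean()
```

Repeating this held-out fit for every subject and averaging `err` gives the leave-one-out accuracy measure the abstract refers to.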
Singleshot: a scalable Tucker tensor decomposition
This paper introduces a new approach for the scalable Tucker decomposition problem. Given a tensor X, the proposed algorithm, named Singleshot, allows the inference task to be performed by processing one subtensor drawn from X at a time. The key principle of our approach is based on the recursive computation of the gradient and on cyclic updates of the latent factors involving only a single step of gradient descent. We further improve the computational efficiency of Singleshot by proposing an inexact-gradient version named Singleshotinexact. The two algorithms are backed by theoretical guarantees of convergence and convergence rates under mild conditions. The scalability of the proposed approaches, which can easily be extended to handle some common constraints encountered in tensor decomposition (e.g. non-negativity), is proven via numerical experiments on both synthetic and real data sets.
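The subtensor-at-a-time idea can be sketched as below. This is a simplified illustration in the spirit of Singleshot, not the paper's exact algorithm: only the first-mode factor is updated, the subtensors are slabs along that mode, and the shapes and step size are arbitrary assumptions.

```python
import numpy as np

# Tucker model: X ~ G x1 A x2 B x3 C, with a small random core and factors.
rng = np.random.default_rng(2)
I, J, K, r = 12, 10, 8, 3
X = rng.normal(size=(I, J, K))

G = rng.normal(size=(r, r, r))   # core tensor
A = rng.normal(size=(I, r))      # mode-1 factor (the one updated here)
B = rng.normal(size=(J, r))
C = rng.normal(size=(K, r))

def recon(A, B, C, G):
    return np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)

loss0 = np.linalg.norm(recon(A, B, C, G) - X)

lr = 1e-4
for epoch in range(50):
    for i in range(I):                               # one slab of X at a time
        R = np.einsum('abc,jb,kc->ajk', G, B, C)     # mode-1 basis
        R2 = R.reshape(r, -1)
        err_i = A[i] @ R2 - X[i].ravel()
        A[i] -= lr * err_i @ R2.T                    # single gradient step

loss = np.linalg.norm(recon(A, B, C, G) - X)
```

Cycling similar single gradient steps over B, C, and the core G would give the full alternating scheme the abstract describes.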
Stochastic gradient descent with gradient estimator for categorical features
Categorical data are present in key areas such as health and supply chains, and
such data require specific treatment. In order to apply recent machine learning
models to such data, encoding is needed. To build interpretable models,
one-hot encoding remains a very good solution, but it creates sparse data.
Common gradient estimators are not suited to sparse data: the gradient is
mostly treated as zero even though it simply does not always exist; we
therefore introduce a novel gradient estimator. We show what this estimator
minimizes in theory and demonstrate its efficiency on different datasets with
multiple model architectures. This new estimator performs better than common
estimators under similar settings. A real-world retail dataset is also
released after anonymization. Overall, the aim of this paper is to thoroughly
consider categorical data and to adapt models and optimizers to these key features.
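The sparsity issue the abstract describes can be seen in a minimal example. This sketch is not the paper's estimator: it shows plain SGD on a one-hot-encoded categorical feature, where each sample produces a gradient that is zero everywhere except at the weight of its active category.

```python
import numpy as np

# Illustrative setup: one categorical feature with 5 levels, a linear
# model, squared loss. One-hot encoding makes X mostly zeros.
rng = np.random.default_rng(3)
n_categories, n_samples = 5, 2000
cats = rng.integers(0, n_categories, size=n_samples)
X = np.eye(n_categories)[cats]                 # one-hot, sparse rows
true_w = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = X @ true_w + 0.1 * rng.normal(size=n_samples)

w, lr = np.zeros(n_categories), 0.1
for xi, yi in zip(X, y):
    grad = 2 * (xi @ w - yi) * xi              # zero everywhere but one entry
    w -= lr * grad
```

Here each weight ends up near the mean target of its category, but only the weight hit by each sample ever moves, which is the behaviour that motivates a dedicated gradient estimator for such features.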
Screening Sinkhorn Algorithm for Regularized Optimal Transport
We introduce in this paper a novel strategy for efficiently approximating the Sinkhorn distance between two discrete measures. After identifying negligible components of the dual solution of the regularized Sinkhorn problem, we propose to screen those components by directly setting them to that value before solving the Sinkhorn problem. This allows us to solve a smaller Sinkhorn problem while ensuring an approximation with provable guarantees. More formally, the approach is based on a new formulation of the dual of the Sinkhorn divergence problem and on the KKT optimality conditions of this problem, which enable identification of the dual components to be screened. This new analysis leads to the Screenkhorn algorithm. We illustrate the efficiency of Screenkhorn on complex tasks such as dimensionality reduction and domain adaptation involving regularized optimal transport.
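For context, the baseline problem that Screenkhorn shrinks is the entropic-regularized Sinkhorn iteration sketched below. The screening step itself is the paper's contribution and is omitted; sizes, cost, and the regularization strength are illustrative assumptions.

```python
import numpy as np

# Two discrete measures: uniform weights on random 2D point clouds.
rng = np.random.default_rng(4)
n, m, eps = 30, 25, 0.5
a = np.full(n, 1.0 / n)                  # source marginal
b = np.full(m, 1.0 / m)                  # target marginal
xs = rng.normal(size=(n, 2))
ys = rng.normal(size=(m, 2))
C = ((xs[:, None, :] - ys[None, :, :]) ** 2).sum(-1)   # squared-distance cost
K = np.exp(-C / eps)                     # Gibbs kernel

# Sinkhorn iterations: alternating scalings of the kernel.
u, v = np.ones(n), np.ones(m)
for _ in range(2000):
    u = a / (K @ v)
    v = b / (K.T @ u)

P = u[:, None] * K * v[None, :]          # regularized transport plan
sinkhorn_cost = (P * C).sum()
```

Screenkhorn's idea is to detect, from the dual optimality conditions, which entries of `u` and `v` can be fixed in advance, so these iterations run on a smaller problem.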
Heterogeneous Wasserstein Discrepancy for Incomparable Distributions
Optimal Transport (OT) metrics allow for defining discrepancies between two
probability measures. The Wasserstein distance has long been the celebrated
OT distance frequently used in the literature, but it requires the probability
distributions to be supported on the same metric space. Because of
its high computational complexity, several approximate Wasserstein distances
have been proposed based on entropic regularization or on slicing and
one-dimensional Wasserstein computation. In this paper, we propose a novel
extension of the Wasserstein distance to compare two incomparable
distributions, which hinges on the ideas of slicing and of embeddings, and
on computing the closed-form Wasserstein distance between the sliced
distributions. We provide a theoretical analysis of this new divergence,
called the Heterogeneous Wasserstein Discrepancy (HWD), and we show that it
preserves several interesting properties, including rotation-invariance. We show
that the embeddings involved in HWD can be efficiently learned. Finally, we
provide a large set of experiments illustrating the behavior of HWD as a
divergence in the context of generative modeling and in a query framework.
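The sliced building block that the abstract relies on can be sketched directly: project both samples on random directions and use the closed-form 1D Wasserstein distance between sorted projections. HWD additionally learns embeddings so that incomparable spaces can be compared; that learned part is omitted from this illustration.

```python
import numpy as np

# Two point clouds in the same space (a shifted Gaussian stands in for data).
rng = np.random.default_rng(5)
n, d, n_proj = 200, 3, 50
X = rng.normal(size=(n, d))
Y = rng.normal(size=(n, d)) + 2.0        # shifted copy of the same law

# Random directions on the unit sphere.
thetas = rng.normal(size=(n_proj, d))
thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)

sw2 = 0.0
for theta in thetas:
    px, py = np.sort(X @ theta), np.sort(Y @ theta)
    sw2 += ((px - py) ** 2).mean()       # closed-form 1D squared W2
sw2 /= n_proj
sliced_w = np.sqrt(sw2)
```

With equal sample sizes, sorting gives the optimal 1D coupling, which is what makes sliced variants so much cheaper than full OT.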
Gaussian-Smoothed Sliced Probability Divergences
The Gaussian-smoothed sliced Wasserstein distance has been recently introduced
for comparing probability distributions while preserving privacy on the data.
It has been shown to provide performances similar to its non-smoothed
(non-private) counterpart. However, the computational and statistical
properties of such a metric have not yet been well established. This work
investigates the theoretical properties of this distance as well as those of
generalized versions denoted as Gaussian-smoothed sliced divergences. We first
show that smoothing and slicing preserve the metric property and the weak
topology. To study the sample complexity of such divergences, we then
introduce the double empirical distribution for the smoothed-projected
measure. This distribution results from a double sampling process: one
sampling according to the original distribution, and a second according to the
convolution of its projection on the unit sphere with the Gaussian smoothing.
We particularly focus on the Gaussian-smoothed sliced Wasserstein distance and
establish its convergence rate. We also derive other properties, including
continuity, of the different divergences with respect to the smoothing
parameter. We support our theoretical findings with empirical studies in the
context of privacy-preserving domain adaptation.
Comment: arXiv admin note: substantial text overlap with arXiv:2110.1052
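A minimal sketch of the smoothed-sliced construction described above: project to 1D, convolve each projection with a Gaussian of standard deviation sigma, then take the closed-form 1D Wasserstein distance. The convolution is approximated here by adding Gaussian noise to the projected samples, which also mirrors the double sampling process the abstract mentions; all sizes and sigma are illustrative assumptions.

```python
import numpy as np

# Two distributions with different spreads (stand-in data).
rng = np.random.default_rng(6)
n, d, n_proj, sigma = 300, 3, 64, 0.5
X = rng.normal(size=(n, d))
Y = 0.5 * rng.normal(size=(n, d))

# Random projection directions on the unit sphere.
thetas = rng.normal(size=(n_proj, d))
thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)

gssw2 = 0.0
for theta in thetas:
    # Double sampling: project, then sample the Gaussian smoothing.
    px = X @ theta + sigma * rng.normal(size=n)
    py = Y @ theta + sigma * rng.normal(size=n)
    gssw2 += ((np.sort(px) - np.sort(py)) ** 2).mean()
gssw2 /= n_proj
gs_sliced_w = np.sqrt(gssw2)
```

The added noise is what provides the privacy guarantee; the work summarized above studies how this smoothing affects the metric and statistical properties of the resulting divergence.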
