10 research outputs found
Computing a high-dimensional Euclidean embedding from an arbitrary smooth Riemannian metric
This article presents a new method to compute a self-intersection-free high-dimensional Euclidean embedding (SIFHDE) for surfaces and volumes equipped with an arbitrary Riemannian metric. It is already known that, given a high-dimensional (high-d) embedding, one can easily compute an anisotropic Voronoi diagram by back-mapping it to 3D space. We show here how to solve the inverse problem: given an input metric, compute a smooth, intersection-free high-d embedding of the input such that the pullback metric of the embedding matches the input metric. Our numerical solution mechanism matches the deformation gradient of the 3D-to-higher-d mapping with the given Riemannian metric. We demonstrate applications of the method to constructing anisotropic Restricted Voronoi Diagrams (RVDs) and anisotropic meshes, which are otherwise extremely difficult to compute. In the SIFHDE space constructed by our algorithm, difficult 3D anisotropic computations are replaced with simple Euclidean computations, resulting in an isotropic RVD and its dual mesh on this high-d embedding. The results are compared with the state of the art in anisotropic surface and volume meshing using several examples and evaluation metrics.
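The pullback-metric condition this abstract describes can be sketched numerically: for an embedding f with Jacobian (deformation gradient) J, the pullback metric is J^T J, and the method seeks f so that J^T J matches the prescribed metric. The toy embedding and metric below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def f(p):
    # toy 2D -> 3D embedding whose pullback metric is diag(4, 1)
    x, y = p
    return np.array([2.0 * x, y, 0.0])

def jacobian(f, p, h=1e-6):
    # deformation gradient of f at p via central finite differences
    p = np.asarray(p, dtype=float)
    cols = []
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        cols.append((f(p + e) - f(p - e)) / (2 * h))
    return np.stack(cols, axis=1)

M_target = np.diag([4.0, 1.0])     # prescribed anisotropic metric at p
J = jacobian(f, [0.3, -0.7])
pullback = J.T @ J                 # metric induced by the embedding
print(np.allclose(pullback, M_target, atol=1e-6))
```

When the pullback matches the target metric everywhere, anisotropic distances in the domain become ordinary Euclidean distances in the embedding space, which is what makes the downstream Voronoi computation isotropic.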
Fitting a manifold to data in the presence of large noise
We assume that M is a d-dimensional, smooth submanifold of R^n. Let K be the
convex hull of M and B^n be the unit ball; we assume that K is contained in B^n. We also
suppose that M has volume (d-dimensional Hausdorff measure) at most V and
reach (i.e., normal injectivity radius) at least tau.
Moreover, we assume that M is R-exposed, that is, tangent to every
point of M there is a closed ball of radius R that contains M. Let
X_1, ..., X_N be independent random variables sampled from the uniform
distribution on M, and let zeta_1, ..., zeta_N
be a sequence of i.i.d. Gaussian random variables in R^n
that are independent of the X_i and have mean zero and
covariance sigma^2 I_n. We assume that we are given the noisy sample points
Y_i = X_i + zeta_i.
Given accuracy parameters epsilon > 0 and delta in (0, 1) and the points
Y_1, ..., Y_N, we produce a smooth function whose zero set is a
manifold M' such that the Hausdorff distance between
M and M' is at most epsilon, and M' has reach that is
bounded below by a constant, with probability at least 1 - delta. Assuming d and all the other parameters are positive constants
independent of n, the number of arithmetic operations needed is
polynomial in n. In the present work, we allow the noise magnitude sigma
to be an arbitrarily large constant, thus overcoming a drawback of previous
work.
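The sampling model in this abstract is easy to simulate; the sketch below is a hedged illustration (manifold, dimensions, and noise level are made-up choices, not the paper's): uniform points X_i on a circle embedded in R^n, corrupted by i.i.d. Gaussian noise zeta_i with covariance sigma^2 I_n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, sigma = 10, 500, 1.0          # ambient dim, sample count, noise level

# X_i drawn uniformly from M: the unit circle, padded into R^n
theta = rng.uniform(0, 2 * np.pi, N)
X = np.zeros((N, n))
X[:, 0], X[:, 1] = np.cos(theta), np.sin(theta)

Z = sigma * rng.standard_normal((N, n))   # zeta_i ~ N(0, sigma^2 I_n)
Y = X + Z                                 # observed noisy samples Y_i

# with sigma of constant order, the typical noise norm ~ sigma * sqrt(n)
# exceeds the manifold's diameter -- the large-noise regime addressed here
mean_noise = float(np.mean(np.linalg.norm(Z, axis=1)))
print(Y.shape, round(mean_noise, 2))
```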
Deep Learning for Inverse Problems: Performance Characterizations, Learning Algorithms, and Applications
Deep learning models have witnessed immense empirical success over the last decade. However, in spite of their widespread adoption, a profound understanding of the generalization behaviour of these over-parameterized architectures is still missing. In this thesis, we provide one such understanding via a data-dependent characterization of the generalization capability of deep neural networks, based on the data representations they learn. In particular, by building on the algorithmic robustness framework, we offer a generalization error bound that encapsulates key ingredients of the learning problem, such as the complexity of the data space, the cardinality of the training set, and the Lipschitz properties of a deep neural network.
We then specialize our analysis to a specific class of model-based regression problems, namely inverse problems. These problems often come with well-defined forward operators that map the variables of interest to the observations. It is therefore natural to ask whether such knowledge of the forward operator can be exploited in the deep learning approaches increasingly used to solve inverse problems. We offer a generalization error bound that, among other factors, depends on the Jacobian of the composition of the forward operator with the neural network.
Motivated by our analysis, we then propose a `plug-and-play' regularizer that leverages knowledge of the forward map to improve the generalization of the network. We also provide a method for tightly upper-bounding the norms of the Jacobians of the relevant operators that is much more computationally efficient than existing ones. We demonstrate the efficacy of our model-aware regularized deep learning algorithms against other state-of-the-art approaches on inverse problems involving various sub-sampling operators, such as those used in the classical compressed sensing setup and in inverse problems of interest in biomedical imaging.
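The quantity at the heart of this bound can be sketched concretely. The example below is a minimal illustration, not the thesis's implementation: for a linear forward operator A and a toy one-layer network g, it forms the Jacobian of the composition A o g by the chain rule and evaluates its spectral norm, the kind of quantity such a regularizer would penalize. All names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 8, 32, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)   # forward (sensing) operator
W = rng.standard_normal((n, k)) * 0.1          # toy network weights

def g(x):
    # toy "reconstruction network": a single tanh layer
    return np.tanh(W @ x)

def jac_Ag(x):
    # Jacobian of A o g at x via the chain rule:
    # d(A tanh(Wx))/dx = A diag(tanh'(Wx)) W
    s = 1.0 - np.tanh(W @ x) ** 2
    return A @ (s[:, None] * W)

x = rng.standard_normal(k)
J = jac_Ag(x)
reg = np.linalg.norm(J, 2)   # spectral norm of the composition's Jacobian
print(J.shape, round(float(reg), 3))
```

In practice such a penalty would be averaged over training points and backpropagated through; the point here is only the object being bounded.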
New Analysis of Manifold Embeddings and Signal Recovery from Compressive Measurements
Compressive Sensing (CS) exploits the surprising fact that the information
contained in a sparse signal can be preserved in a small number of compressive,
often random linear measurements of that signal. Strong theoretical guarantees
have been established concerning the embedding of a sparse signal family under
a random measurement operator and on the accuracy to which sparse signals can
be recovered from noisy compressive measurements. In this paper, we address
similar questions in the context of a different modeling framework. Instead of
sparse models, we focus on the broad class of manifold models, which can arise
in both parametric and non-parametric signal families. Using tools from the
theory of empirical processes, we improve upon previous results concerning the
embedding of low-dimensional manifolds under random measurement operators. We
also establish both deterministic and probabilistic instance-optimal bounds
for manifold-based signal recovery and parameter estimation from noisy
compressive measurements. In line with analogous results for sparsity-based CS,
we conclude that much stronger bounds are possible in the probabilistic
setting. Our work supports the growing evidence that manifold-based models can
be used with high accuracy in compressive signal processing.
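The recovery setting this abstract studies can be illustrated with a minimal sketch (a made-up one-parameter signal manifold and grid search, not the paper's method): a signal on a parametric manifold is observed through a random Gaussian measurement operator, and the parameter is recovered by matching measurements over the manifold.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 256, 20                    # ambient dim, number of compressive measurements
t = np.linspace(0, 1, n)

def f(theta):
    # toy signal manifold: a Gaussian bump at location theta
    return np.exp(-200.0 * (t - theta) ** 2)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement operator
theta_true = 0.37
y = Phi @ f(theta_true) + 0.01 * rng.standard_normal(m)  # noisy measurements

# recover theta by minimizing the measurement residual over a grid on the manifold
grid = np.linspace(0.05, 0.95, 901)
errs = [np.linalg.norm(y - Phi @ f(th)) for th in grid]
theta_hat = grid[int(np.argmin(errs))]
print(float(theta_hat))
```

Even with m much smaller than n, the random operator approximately preserves distances along the low-dimensional manifold, which is what the embedding results in the paper make precise.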