10 research outputs found

    Computing a high-dimensional Euclidean embedding from an arbitrary smooth Riemannian metric

    This article presents a new method to compute a self-intersection-free high-dimensional Euclidean embedding (SIFHDE) for surfaces and volumes equipped with an arbitrary Riemannian metric. It is already known that, given a high-dimensional (high-d) embedding, one can easily compute an anisotropic Voronoi diagram by back-mapping it to 3D space. We show here how to solve the inverse problem: given an input metric, compute a smooth, intersection-free high-d embedding of the input such that the pullback metric of the embedding matches the input metric. Our numerical solution mechanism matches the deformation gradient of the 3D → higher-d mapping with the given Riemannian metric. We demonstrate applications of the method, using it to construct anisotropic Restricted Voronoi Diagrams (RVDs) and anisotropic meshes, which are otherwise extremely difficult to compute. In the SIFHDE space constructed by our algorithm, difficult 3D anisotropic computations are replaced with simple Euclidean computations, resulting in an isotropic RVD and its dual mesh on the high-d embedding. The results are compared with the state of the art in anisotropic surface and volume meshing using several examples and evaluation metrics.
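
    For intuition about the pullback-metric condition described above, here is a minimal numpy sketch: for a map f from R^3 to a higher-d space with Jacobian J, the pullback metric is J^T J, and the paper's solver seeks an embedding whose pullback matches a prescribed metric M(x). The toy embedding, evaluation point, and target metric below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def numerical_jacobian(f, x, h=1e-6):
    """Central-difference Jacobian of f: R^3 -> R^m at x (an m x 3 matrix)."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = h
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * h)
    return J

# Toy embedding of R^3 into R^5 (illustrative; not the paper's SIFHDE).
def f(x):
    return np.array([x[0], x[1], x[2], np.sin(x[0]), np.cos(x[1])])

x = np.array([0.3, -0.7, 1.1])
J = numerical_jacobian(f, x)
pullback = J.T @ J          # 3x3 pullback metric of the embedding at x

# A target anisotropic metric M(x); the solver would deform f until the
# pullback matches M(x) everywhere. Here we only measure the mismatch.
M = np.diag([2.0, 1.0, 0.5])
print("pullback metric:\n", pullback)
print("Frobenius mismatch:", np.linalg.norm(pullback - M))
```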

    Fitting a manifold to data in the presence of large noise

    We assume that $M_0$ is a $d$-dimensional $C^{2,1}$-smooth submanifold of $R^n$. Let $K_0$ be the convex hull of $M_0$, and let $B^n_1(0)$ be the unit ball. We assume that $M_0 \subseteq \partial K_0 \subseteq B^n_1(0)$. We also suppose that $M_0$ has volume ($d$-dimensional Hausdorff measure) less than or equal to $V$ and reach (i.e., normal injectivity radius) greater than or equal to $\tau$. Moreover, we assume that $M_0$ is $R$-exposed, that is, tangent to every point $x \in M_0$ there is a closed ball of radius $R$ that contains $M_0$. Let $x_1, \dots, x_N$ be independent random variables sampled from the uniform distribution on $M_0$, and let $\zeta_1, \dots, \zeta_N$ be a sequence of i.i.d. Gaussian random variables in $R^n$ that are independent of $x_1, \dots, x_N$ and have mean zero and covariance $\sigma^2 I_n$. We assume that we are given the noisy sample points $y_i$, given by $y_i = x_i + \zeta_i$, for $i = 1, 2, \dots, N$. Let $\epsilon, \eta > 0$ be real numbers and $k \geq 2$. Given the points $y_i$, $i = 1, 2, \dots, N$, we produce a $C^k$-smooth function whose zero set is a manifold $M_{rec} \subseteq R^n$ such that the Hausdorff distance between $M_{rec}$ and $M_0$ is at most $\epsilon$, and the reach of $M_{rec}$ is bounded below by $c\tau/d^6$, with probability at least $1 - \eta$. Assuming $d < c\sqrt{\log \log n}$ and that all the other parameters are positive constants independent of $n$, the number of arithmetic operations needed is polynomial in $n$. In the present work we allow the noise magnitude $\sigma$ to be an arbitrarily large constant, thus overcoming a drawback of previous work.
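
    To make the sampling model concrete, here is a short numpy sketch with $M_0$ taken to be a unit circle in $R^n$ (a hypothetical $d = 1$ stand-in for the general submanifold; all numerical parameters are assumptions): uniform points $x_i$ on $M_0$ are observed through $y_i = x_i + \zeta_i$ with $\zeta_i \sim N(0, \sigma^2 I_n)$, and $\sigma$ is allowed to be of order one, i.e., large relative to the manifold's unit scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, sigma = 20, 500, 1.0   # ambient dim, sample count, noise level (illustrative)

# M_0: a unit circle in the first two coordinates of R^n (d = 1 example).
theta = rng.uniform(0, 2 * np.pi, size=N)
x = np.zeros((N, n))
x[:, 0], x[:, 1] = np.cos(theta), np.sin(theta)

# Noisy observations y_i = x_i + zeta_i with zeta_i ~ N(0, sigma^2 I_n).
zeta = sigma * rng.standard_normal((N, n))
y = x + zeta

# With sigma ~ 1, the noise dwarfs the manifold's unit scale; this is the
# "large noise" regime the paper handles.
print("mean sample-to-manifold distance:", np.mean(np.linalg.norm(zeta, axis=1)))
```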

    Deep Learning for Inverse Problems: Performance Characterizations, Learning Algorithms, and Applications

    Deep learning models have witnessed immense empirical success over the last decade. However, in spite of their widespread adoption, a profound understanding of the generalisation behaviour of these over-parameterised architectures is still missing. In this thesis, we provide one such understanding via data-dependent characterisations of the generalisation capability of deep neural network based data representations. In particular, by building on the algorithmic robustness framework, we offer a generalisation error bound that encapsulates key ingredients associated with the learning problem, such as the complexity of the data space, the cardinality of the training set, and the Lipschitz properties of the deep neural network. We then specialise our analysis to a specific class of model-based regression problems, namely inverse problems. These problems often come with well-defined forward operators that map the variables of interest to the observations. It is therefore natural to ask whether such knowledge of the forward operator can be exploited by the deep learning approaches increasingly used to solve inverse problems. We offer a generalisation error bound that, apart from the other factors, depends on the Jacobian of the composition of the forward operator with the neural network. Motivated by our analysis, we then propose a 'plug-and-play' regulariser that leverages the knowledge of the forward map to improve the generalisation of the network. We likewise provide a method for tightly upper-bounding the norms of the Jacobians of the relevant operators that is much more computationally efficient than existing ones. We demonstrate the efficacy of our model-aware regularised deep learning algorithms against other state-of-the-art approaches on inverse problems involving various sub-sampling operators, such as those used in the classical compressed sensing setup and in inverse problems of interest in biomedical imaging.
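
    As a rough illustration of the kind of Jacobian penalty the thesis motivates, here is a hedged PyTorch sketch: the training loss adds the spectral norm of the Jacobian of the composition of the forward operator with the network. The linear operator `A`, the network architecture, and the penalty weight are all assumptions for illustration, not the thesis's actual regulariser.

```python
import torch

# Hypothetical toy setup: A is a random linear forward operator, net a small
# reconstruction network mapping measurements back to signals.
torch.manual_seed(0)
m, n = 16, 64
A = torch.randn(m, n) / m ** 0.5                     # assumed forward operator
net = torch.nn.Sequential(torch.nn.Linear(m, 128),
                          torch.nn.ReLU(),
                          torch.nn.Linear(128, n))

def reconstruct(x):
    return net(x @ A.T)                              # composition: net(A x)

x = torch.randn(8, n)                                # a batch of training signals
data_loss = torch.nn.functional.mse_loss(reconstruct(x), x)

# Jacobian of the composition at one sample; its norm enters the
# generalisation bound, so we penalise it (weight chosen arbitrarily).
J = torch.autograd.functional.jacobian(
        lambda z: reconstruct(z.unsqueeze(0)).squeeze(0),
        x[0], create_graph=True)                     # shape (n, n)
loss = data_loss + 1e-3 * torch.linalg.matrix_norm(J, ord=2)
loss.backward()
```

    Note that forming the full Jacobian, as above, is exactly the expensive step the thesis's tighter upper-bounding method is designed to avoid; this sketch only shows where the penalty enters the loss.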

    New Analysis of Manifold Embeddings and Signal Recovery from Compressive Measurements

    Compressive Sensing (CS) exploits the surprising fact that the information contained in a sparse signal can be preserved in a small number of compressive, often random, linear measurements of that signal. Strong theoretical guarantees have been established concerning the embedding of a sparse signal family under a random measurement operator and the accuracy to which sparse signals can be recovered from noisy compressive measurements. In this paper, we address similar questions in the context of a different modeling framework. Instead of sparse models, we focus on the broad class of manifold models, which can arise in both parametric and non-parametric signal families. Using tools from the theory of empirical processes, we improve upon previous results concerning the embedding of low-dimensional manifolds under random measurement operators. We also establish both deterministic and probabilistic instance-optimal bounds in $\ell_2$ for manifold-based signal recovery and parameter estimation from noisy compressive measurements. In line with analogous results for sparsity-based CS, we conclude that much stronger bounds are possible in the probabilistic setting. Our work supports the growing evidence that manifold-based models can be used with high accuracy in compressive signal processing.
    Comment: arXiv admin note: substantial text overlap with arXiv:1002.124
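
    For intuition about the embedding result, here is a small numpy sketch (the toy curve, dimensions, and sample count are assumptions, not from the paper): a properly scaled random Gaussian measurement matrix approximately preserves pairwise distances between points sampled from a low-dimensional manifold, which is the near-isometry property such results quantify.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 1000, 40, 200      # ambient dim, measurements, samples (illustrative)

# Points on a smooth one-dimensional manifold (a curve) embedded in R^n.
t = np.sort(rng.uniform(0, 2 * np.pi, N))
X = np.stack([np.cos(k * t) / k for k in range(1, n + 1)], axis=1)

# Random Gaussian measurement operator, scaled so E||Phi x||^2 = ||x||^2.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Y = X @ Phi.T

# Ratios of compressed to original pairwise distances; near-isometry means
# these concentrate around 1.
i, j = np.triu_indices(N, k=1)
orig = np.linalg.norm(X[i] - X[j], axis=1)
comp = np.linalg.norm(Y[i] - Y[j], axis=1)
ratios = comp / orig
print("distance-ratio range: [%.3f, %.3f]" % (ratios.min(), ratios.max()))
```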