
    Uniform Gaussian bounds for subelliptic heat kernels and an application to the total variation flow of graphs over Carnot groups

    In this paper we study heat kernels associated to a Carnot group $G$, endowed with a family of collapsing left-invariant Riemannian metrics $\sigma_\varepsilon$ which converge in the Gromov-Hausdorff sense to a sub-Riemannian structure on $G$ as $\varepsilon \to 0$. The main new contributions are Gaussian-type bounds on the heat kernel for the $\sigma_\varepsilon$ metrics which are stable as $\varepsilon \to 0$ and extend the previous time-independent estimates in \cite{CiMa-F}. As an application we study well-posedness of the total variation flow of graph surfaces over a bounded domain in $(G, \sigma_\varepsilon)$. We establish interior and boundary gradient estimates, and develop a Schauder theory, both stable as $\varepsilon \to 0$. As a consequence we obtain long-time existence of smooth solutions of the sub-Riemannian flow ($\varepsilon = 0$), which in turn yield sub-Riemannian minimal surfaces as $t \to \infty$.
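    For orientation, uniform two-sided Gaussian bounds of this kind are typically stated in the schematic form below; the constants and the ball-volume normalization are illustrative placeholders rather than the paper's precise statement:

        % Schematic uniform Gaussian bound: C > 1 independent of \varepsilon,
        % d_\varepsilon the distance of \sigma_\varepsilon, B_\varepsilon(x,r) its metric ball.
        \[
          \frac{C^{-1}}{|B_\varepsilon(x,\sqrt{t})|}
          \exp\!\Big(-\frac{C\, d_\varepsilon(x,y)^2}{t}\Big)
          \;\le\; p_t^{\varepsilon}(x,y) \;\le\;
          \frac{C}{|B_\varepsilon(x,\sqrt{t})|}
          \exp\!\Big(-\frac{d_\varepsilon(x,y)^2}{C\, t}\Big).
        \]

    Stability as $\varepsilon \to 0$ means precisely that $C$ can be chosen independently of $\varepsilon$, which is what allows estimates proved for the Riemannian approximations to pass to the sub-Riemannian limit.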

    Highly corrupted image inpainting through hypoelliptic diffusion

    We present a new image inpainting algorithm, the Averaging and Hypoelliptic Evolution (AHE) algorithm, inspired by the one presented in [SIAM J. Imaging Sci., vol. 7, no. 2, pp. 669--695, 2014] and based upon a semi-discrete variation of the Citti-Petitot-Sarti model of the primary visual cortex V1. The AHE algorithm is based on a suitable combination of sub-Riemannian hypoelliptic diffusion and ad hoc local averaging techniques. In particular, we focus on reconstructing highly corrupted images (i.e., where more than 80% of the image is missing), for which we obtain reconstructions comparable with the state of the art.
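    As a minimal illustration of the diffusion-based inpainting idea, the toy sketch below fills missing pixels by iterating an ordinary 2-D heat flow while re-imposing the known data at each step. It deliberately replaces the lifted hypoelliptic operator of the AHE algorithm with a plain Laplacian, and all names in it are hypothetical:

        import numpy as np

        def diffusion_inpaint(image, known, n_iter=2000, dt=0.2):
            """Toy inpainting: iterate 2-D heat flow, re-imposing known pixels.

            A stand-in for the hypoelliptic evolution of AHE: the diffusion
            here is the ordinary Laplacian on the image plane, not the lifted
            sub-Riemannian operator on positions and orientations.

            image : 2-D float array (values arbitrary where corrupted).
            known : boolean mask, True where the pixel is uncorrupted.
            """
            u = image.copy()
            for _ in range(n_iter):
                p = np.pad(u, 1, mode="edge")           # replicate borders
                lap = (p[:-2, 1:-1] + p[2:, 1:-1]       # 5-point Laplacian
                       + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u)
                u = u + dt * lap                        # explicit Euler step
                u[known] = image[known]                 # keep good data fixed
            return u

    The actual AHE algorithm instead diffuses on the Citti-Petitot-Sarti lift of the image to positions and orientations, which is what lets it continue contours across large corrupted regions.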

    An Infinitesimal Probabilistic Model for Principal Component Analysis of Manifold Valued Data

    We provide a probabilistic and infinitesimal view of how the principal component analysis (PCA) procedure can be generalized to the analysis of nonlinear manifold-valued data. Starting with the probabilistic PCA interpretation of the Euclidean PCA procedure, we show how PCA can be generalized to manifolds in an intrinsic way that does not resort to linearization of the data space. The underlying probability model is constructed by mapping a Euclidean stochastic process to the manifold using stochastic development of Euclidean semimartingales. The construction uses a connection and bundles of covariant tensors to allow global transport of principal eigenvectors, and the model is thereby an example of how principal fiber bundles can be used to handle the lack of a global coordinate system and orientation that characterizes manifold-valued statistics. We show how curvature implies non-integrability of the equivalent of Euclidean principal subspaces, and how the stochastic flows provide an alternative to the explicit construction of such subspaces. We describe estimation procedures for inference of parameters and prediction of principal components, and we give examples of properties of the model on embedded surfaces.
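    The Euclidean starting point referred to above is Tipping and Bishop's probabilistic PCA, whose maximum-likelihood solution is available in closed form from the sample covariance. A minimal sketch follows; the function name and interface are illustrative assumptions, not the paper's manifold construction:

        import numpy as np

        def ppca_fit(X, q):
            """Closed-form maximum-likelihood probabilistic PCA
            (Tipping & Bishop, 1999).

            Model: x = W z + mu + noise, z ~ N(0, I_q), noise ~ N(0, sigma2 I_d).
            X : N x d data matrix; q : latent dimension (q < d).
            """
            mu = X.mean(axis=0)
            S = np.cov(X - mu, rowvar=False)            # d x d sample covariance
            vals, vecs = np.linalg.eigh(S)              # ascending eigenvalues
            vals, vecs = vals[::-1], vecs[:, ::-1]      # sort descending
            sigma2 = vals[q:].mean()                    # mean discarded variance
            W = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - sigma2, 0.0))
            return mu, W, sigma2

    The manifold generalization replaces the latent Euclidean process by a semimartingale developed onto the manifold through a connection, which is the intrinsic step the paper carries out.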

    Principal Sub-manifolds

    We revisit the problem of finding principal components for multivariate datasets that lie on an embedded nonlinear Riemannian manifold within a higher-dimensional space. Our aim is to extend the geometric interpretation of PCA while capturing the non-geodesic forms of variation in the data. We introduce the concept of a principal sub-manifold: a manifold passing through the center of the data which, at any of its points, moves in the direction of highest curvature within the space spanned by the eigenvectors of the local tangent space PCA. Compared to the recent work on the case where the sub-manifold has dimension one (Panaretos, Pham and Yao 2014), essentially a curve lying on the manifold that attempts to capture one-dimensional variation, the current setting is much more general. The principal sub-manifold is therefore an extension of the principal flow, accommodating higher-dimensional variation in the data. We show that the principal sub-manifold yields the usual principal components in Euclidean space. By means of examples, we illustrate how to find, use, and interpret a principal sub-manifold, including an application to shape analysis.
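    A rough sketch of the local tangent space PCA ingredient, specialized to the unit sphere where the log map is explicit; the function names and neighborhood rule are illustrative assumptions, and growing the sub-manifold from these directions is not shown:

        import numpy as np

        def sphere_log(p, x):
            """Log map on the unit sphere: tangent vector at p toward x."""
            w = x - np.dot(p, x) * p                  # component orthogonal to p
            nw = np.linalg.norm(w)
            if nw < 1e-12:
                return np.zeros_like(p)
            theta = np.arccos(np.clip(np.dot(p, x), -1.0, 1.0))
            return theta * w / nw

        def local_tangent_pca(p, data, radius):
            """Eigen-directions of nearby data lifted to the tangent space at p."""
            near = [sphere_log(p, x) for x in data
                    if np.arccos(np.clip(np.dot(p, x), -1.0, 1.0)) < radius]
            assert near, "no data points within the chosen radius"
            V = np.array(near)
            vals, vecs = np.linalg.eigh(V.T @ V / len(near))   # ascending
            return vals[::-1], vecs[:, ::-1]                   # descending order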

    Active Contour Models for Manifold Valued Image Segmentation

    Image segmentation is the process of partitioning an image into regions or groups based on characteristics such as color, texture, motion, or shape. Active contours are a popular variational method for object segmentation, in which the user initializes a contour that evolves to optimize an objective function designed so that the desired object boundary is the optimal solution. Recently, imaging modalities that produce manifold-valued images have emerged, for example DT-MRI images and vector fields. The traditional active contour model does not work on such images. In this paper, we generalize the active contour model to manifold-valued images. As expected, our algorithm detects regions with similar manifold values in the image. Our algorithm also produces the expected results on ordinary gray-scale images, since these are simply trivial examples of manifold-valued images. As another application of our general active contour model, we perform texture segmentation on gray-scale images by first creating an appropriate manifold-valued image. We demonstrate segmentation results for manifold-valued images and texture images.
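    To make the generalization concrete, a Chan-Vese-style two-region fitting term for a sphere-valued image can replace the usual squared intensity difference with the squared geodesic distance to each region's representative value. The sketch below assumes unit-vector pixel values and caller-supplied region representatives; it is an illustration of the idea, not the paper's algorithm:

        import numpy as np

        def fitting_energy(I, c_in, c_out, inside):
            """Two-region Chan-Vese-style data term for a sphere-valued image.

            I        : H x W x n array of unit-vector pixel values.
            c_in/out : unit-vector representatives of the two regions
                       (e.g. Karcher means), assumed to be given.
            inside   : H x W boolean mask of the current contour interior.
            """
            def d2(c):
                dots = np.clip(I @ c, -1.0, 1.0)     # cos of geodesic angle
                return np.arccos(dots) ** 2          # squared geodesic distance
            return d2(c_in)[inside].sum() + d2(c_out)[~inside].sum()

    For gray-scale images the pixel values live on the real line and the geodesic distance reduces to |I - c|, recovering the classical fitting term, consistent with the remark that gray-scale images are trivial examples of manifold-valued images.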

    Differential geometric regularization for supervised learning of classifiers

    We study the problem of supervised learning for both binary and multiclass classification from a unified geometric perspective. In particular, we propose a geometric regularization technique to find the submanifold corresponding to an estimator of the class probability $P(y \mid \vec{x})$. The regularization term measures the volume of this submanifold, based on the intuition that overfitting produces rapid local oscillations and hence a large volume of the estimator. This technique can be applied to regularize any classification function that satisfies two requirements: first, an estimator of the class probability can be obtained; second, first and second derivatives of the class probability estimator can be calculated. In experiments, we apply our regularization technique to standard loss functions for classification; our RBF-based implementation compares favorably to widely used regularization methods for both binary and multiclass classification. (Published version: http://proceedings.mlr.press/v48/baia16.pdf)
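    The volume regularizer has a simple closed form when the estimator is a scalar function of two inputs: the graph of $f$ is a surface of area $\int \sqrt{1 + |\nabla f|^2}\, dx\, dy$, so rapid oscillation inflates the penalty. A finite-difference sketch of that quantity follows; the grid discretization and function names are assumptions, not the paper's implementation:

        import numpy as np

        def graph_volume(f, h):
            """Area of the graph of a scalar estimator f on a 2-D grid.

            Discretizes  integral sqrt(1 + |grad f|^2) dx dy  with grid
            spacing h. An overfit estimator oscillating rapidly has a large
            gradient almost everywhere, hence a large graph volume, which
            is exactly what the regularization term penalizes.
            """
            fy, fx = np.gradient(f, h)          # finite-difference gradient
            return np.sum(np.sqrt(1.0 + fx**2 + fy**2)) * h * h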