
    Geodesic Distance Function Learning via Heat Flow on Vector Fields

    Learning a distance function or metric on a given data manifold is of great importance in machine learning and pattern recognition. Many previous works first embed the manifold into Euclidean space and then learn the distance function. However, such a scheme might not faithfully preserve the distance function if the original manifold is not Euclidean. Note that the distance function on a manifold can always be well-defined. In this paper, we propose to learn the distance function directly on the manifold, without embedding. We first provide a theoretical characterization of the distance function by its gradient field. Based on this analysis, we propose to first learn the gradient field of the distance function and then learn the distance function itself. Specifically, we set the gradient field of a local distance function as an initial vector field and transport it to the whole manifold via heat flow on vector fields. Finally, the geodesic distance function is obtained by requiring its gradient field to be close to the normalized vector field. Experimental results on both synthetic and real data demonstrate the effectiveness of the proposed algorithm.
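    The three-step pipeline the abstract describes (initialize a local gradient field, diffuse it by heat flow, then recover the distance whose gradient matches the normalized field) can be illustrated on a discretized 1D "manifold". This is a minimal sketch, not the paper's method: the chain graph, the explicit-Euler heat flow, the step size, and the least-squares recovery are all illustrative assumptions.

    ```python
    import numpy as np

    n = 50
    x = np.linspace(0.0, 1.0, n)        # sample points on a line "manifold"
    src = 0                             # measure geodesic distance from here
    h = x[1] - x[0]

    # Edge-difference (discrete gradient) operator on the chain: (n-1) x n.
    G = np.zeros((n - 1, n))
    for i in range(n - 1):
        G[i, i], G[i, i + 1] = -1.0, 1.0
    G /= h

    # Step 1: initial vector field = gradient of a *local* distance function,
    # known only near the source; zero everywhere else.
    v = np.zeros(n - 1)
    v[:5] = 1.0                         # d(x) = x near src, so grad d = +1

    # Step 2: transport the field over the manifold via heat flow
    # (explicit Euler on an edge-based graph Laplacian).
    L = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        L[i, i] = 2.0
        if i > 0:
            L[i, i - 1] = -1.0
        if i < n - 2:
            L[i, i + 1] = -1.0
    dt = 0.4
    for _ in range(2000):
        v = v - dt * (L @ v)

    # Step 3: normalize the field (a distance gradient has unit norm) ...
    v = v / np.maximum(np.abs(v), 1e-12)

    # ... and recover d by least squares: grad d ~= v, with d(src) = 0.
    A = np.vstack([G, np.eye(n)[src]])
    b = np.concatenate([v, [0.0]])
    d, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(np.allclose(d, x, atol=1e-2))  # True: recovers d(x) = |x - x_src|
    ```

    The diffusion spreads the locally known unit field to the whole chain; normalization restores unit magnitude, and the final least-squares solve integrates the field back into a distance function.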

    Density ridge manifold traversal

    The density ridge framework for estimating principal curves and surfaces has been shown in a number of recent works to capture manifold structure in data in an intuitive and effective manner. However, to date there exists no efficient way to traverse the manifolds defined by density ridges. This is unfortunate, as manifold traversal is an important problem, for example, for shape estimation in medical imaging, and more generally for characterizing and understanding state transitions or local variability over the data manifold. In this paper, we remedy this situation by introducing a novel manifold traversal algorithm based on geodesics within the density ridge approach. The traversal is executed in a subspace capturing the intrinsic dimensionality of the data, obtained with dimensionality reduction techniques such as principal component analysis or kernel entropy component analysis. A mapping back to the ambient space is obtained by training a neural network. We compare against maximum mean discrepancy traversal, a recent approach, and obtain promising results.
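    The traversal pipeline in the abstract (reduce to a subspace of the intrinsic dimensionality, move between points there, map back to the ambient space) can be sketched as follows. This is an illustrative stand-in, not the paper's algorithm: plain PCA replaces kernel entropy component analysis, straight-line interpolation replaces density-ridge geodesics, and the linear PCA inverse replaces the trained neural-network decoder; the synthetic data set is assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: points near a 2D plane embedded in 5D ambient space.
    basis = rng.standard_normal((2, 5))
    coords = rng.standard_normal((200, 2))
    X = coords @ basis + 0.01 * rng.standard_normal((200, 5))

    # PCA: center the data, then keep the top-2 right singular vectors.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:2]                          # 2 x 5 projection matrix

    def to_subspace(p):
        return (p - mu) @ W.T

    def to_ambient(z):
        return z @ W + mu               # stand-in for the trained decoder net

    # Traverse between two data points inside the low-dimensional subspace,
    # mapping each intermediate point back to the ambient space.
    z0, z1 = to_subspace(X[0]), to_subspace(X[1])
    path = [to_ambient((1 - t) * z0 + t * z1) for t in np.linspace(0, 1, 10)]

    # The endpoints of the mapped-back path land close to the original points.
    print(np.linalg.norm(path[0] - X[0]), np.linalg.norm(path[-1] - X[1]))
    ```

    The design point the abstract makes is that the traversal itself lives in the subspace; the back-mapping (here linear, in the paper a neural network) only has to reconstruct ambient coordinates along the path.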