
    Pattern vectors from algebraic graph theory

    Graph structures have proven computationally cumbersome for pattern analysis. The reason is that, before graphs can be converted to pattern vectors, correspondences must be established between the nodes of structures that are potentially of different size. To overcome this problem, in this paper we turn to the spectral decomposition of the Laplacian matrix. We show how the elements of the spectral matrix for the Laplacian can be used to construct symmetric polynomials that are permutation invariant. The coefficients of these polynomials can be used as graph features and encoded in a vectorial manner. We extend this representation to graphs with unary attributes on the nodes and binary attributes on the edges by using the spectral decomposition of a Hermitian property matrix that can be viewed as a complex analogue of the Laplacian. To embed the graphs in a pattern space, we explore whether the vectors of invariants can be embedded in a low-dimensional space using a number of alternative strategies, including principal components analysis (PCA), multidimensional scaling (MDS), and locality preserving projection (LPP). Experimentally, we demonstrate that the embeddings result in well-defined graph clusters. Our experiments with the spectral representation involve both synthetic and real-world data. The experiments with synthetic data demonstrate that the distances between spectral feature vectors can be used to discriminate between graphs on the basis of their structure. The real-world experiments show that the method can be used to locate clusters of graphs.
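    The core permutation-invariance idea above can be sketched numerically. A minimal illustration (not the paper's full construction, which works with the spectral matrix elements) uses the elementary symmetric polynomials of the Laplacian eigenvalues, available as the characteristic-polynomial coefficients, so that relabelling the nodes leaves the feature vector unchanged:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A from a symmetric adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def symmetric_polynomial_features(adj):
    """Permutation-invariant feature vector: the coefficients of the
    characteristic polynomial of L, i.e. (signed) elementary symmetric
    polynomials of the Laplacian eigenvalues."""
    eigvals = np.linalg.eigvalsh(laplacian(adj))
    # np.poly maps a list of roots to monic polynomial coefficients.
    return np.poly(eigvals)

# Two node orderings of the same path graph yield identical features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
perm = [2, 0, 1]
A_perm = A[np.ix_(perm, perm)]
print(np.allclose(symmetric_polynomial_features(A),
                  symmetric_polynomial_features(A_perm)))  # True
```

    Because a node permutation is a similarity transform of the Laplacian, the eigenvalues, and hence these coefficients, are unchanged; distances between such feature vectors can then discriminate graphs by structure, as in the synthetic experiments described above.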

    Diffeomorphic density matching by optimal information transport

    We address the following problem: given two smooth densities on a manifold, find an optimal diffeomorphism that transforms one density into the other. Our framework builds on connections between the Fisher-Rao information metric on the space of probability densities and right-invariant metrics on the infinite-dimensional manifold of diffeomorphisms. This optimal information transport, and modifications thereof, allows us to construct numerical algorithms for density matching. The algorithms are inherently more efficient than those based on optimal mass transport or diffeomorphic registration. Our methods have applications in medical image registration, texture mapping, image morphing, non-uniform random sampling, and mesh adaptivity. Some of these applications are illustrated in examples.
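    The Fisher-Rao geometry underlying this abstract admits a compact numerical illustration. Under the square-root map f → √f, the Fisher-Rao metric becomes the round metric on (a portion of) the unit sphere in L², so the geodesic distance between two densities is an arc length. The sketch below computes that distance for discretized 1-D densities (with the common factor-of-2 convention); it is an illustration of the metric, not the paper's density-matching algorithm:

```python
import numpy as np

def fisher_rao_distance(p, q, dx):
    """Fisher-Rao distance between two discretized densities on a grid.
    sqrt(p) and sqrt(q) lie on the unit sphere in L^2, so the geodesic
    distance is the angle between them (times 2 in this convention)."""
    cos_angle = np.clip(np.sum(np.sqrt(p * q)) * dx, -1.0, 1.0)
    return 2.0 * np.arccos(cos_angle)

# Two Gaussian densities on a grid, normalized numerically.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def gaussian(mu, sigma):
    g = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return g / (g.sum() * dx)

p, q = gaussian(0.0, 1.0), gaussian(1.0, 1.0)
print(fisher_rao_distance(p, p, dx))  # ~0 for identical densities
print(fisher_rao_distance(p, q, dx))  # positive, grows with separation
```

    The sphere picture is what makes the geodesics, and hence the transport algorithms built on them, explicitly computable, which is the source of the efficiency claimed above.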

    Lagrangian matching invariants for fibred four-manifolds: I

    In a pair of papers, we construct invariants for smooth four-manifolds equipped with 'broken fibrations' - the singular Lefschetz fibrations of Auroux, Donaldson and Katzarkov - generalising the Donaldson-Smith invariants for Lefschetz fibrations. The 'Lagrangian matching invariants' are designed to be comparable with the Seiberg-Witten invariants of the underlying four-manifold. They fit into a field theory which assigns Floer homology groups to fibred 3-manifolds. The invariants are derived from moduli spaces of pseudo-holomorphic sections of relative Hilbert schemes of points on the fibres, subject to Lagrangian boundary conditions. Part I is devoted to the symplectic geometry of these Lagrangians.

    Learning shape correspondence with anisotropic convolutional neural networks

    Establishing correspondence between shapes is a fundamental problem in geometry processing, arising in a wide variety of applications. The problem is especially difficult in the setting of non-isometric deformations, as well as in the presence of topological noise and missing parts, mainly due to the limited capability to model such deformations axiomatically. Several recent works showed that invariance to complex shape transformations can be learned from examples. In this paper, we introduce an intrinsic convolutional neural network architecture based on anisotropic diffusion kernels, which we term Anisotropic Convolutional Neural Network (ACNN). In our construction, we generalize convolutions to non-Euclidean domains by constructing a set of oriented anisotropic diffusion kernels, creating in this way a local intrinsic polar representation of the data (a 'patch'), which is then correlated with a filter. Several cascades of such filters, linear, and non-linear operators are stacked to form a deep neural network whose parameters are learned by minimizing a task-specific cost. We use ACNNs to effectively learn intrinsic dense correspondences between deformable shapes in very challenging settings, achieving state-of-the-art results on some of the most difficult recent correspondence benchmarks.
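    The oriented-kernel construction can be sketched in a simplified planar setting. The real ACNN builds anisotropic heat kernels intrinsically on a surface; the hedged stand-in below uses oriented anisotropic Gaussians in the plane, rotated through a set of orientations, each correlated with a local patch and weighted by one learnable coefficient per orientation (all function names and parameters here are illustrative, not the paper's API):

```python
import numpy as np

def anisotropic_kernel(size, sigma_major, sigma_minor, theta):
    """Oriented anisotropic Gaussian: a planar stand-in for ACNN's
    intrinsic anisotropic diffusion kernels. theta rotates the elongated
    axis; sigma_major/sigma_minor set diffusion speed along/across it."""
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax)
    # Rotate coordinates into the kernel's principal axes.
    u = X * np.cos(theta) + Y * np.sin(theta)
    v = -X * np.sin(theta) + Y * np.cos(theta)
    k = np.exp(-0.5 * ((u / sigma_major) ** 2 + (v / sigma_minor) ** 2))
    return k / k.sum()

def polar_patch_response(patch, weights, n_orientations=8,
                         sigma_major=3.0, sigma_minor=1.0):
    """Correlate a patch with one filter: each of the n_orientations
    oriented kernels contributes one response, combined by `weights`."""
    size = patch.shape[0]
    responses = []
    for i in range(n_orientations):
        theta = np.pi * i / n_orientations
        k = anisotropic_kernel(size, sigma_major, sigma_minor, theta)
        responses.append(np.sum(k * patch))
    return float(np.dot(weights, responses))

rng = np.random.default_rng(0)
patch = rng.standard_normal((9, 9))
weights = rng.standard_normal(8)
print(polar_patch_response(patch, weights))
```

    Stacking several such filter banks with nonlinearities between them gives the cascade described above; on a mesh the kernels would instead be computed from the surface's intrinsic geometry, which is what makes the learned correspondences deformation-invariant.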