1,217 research outputs found

    The Partial View Heat Kernel descriptor for 3D object representation


    Learning shape correspondence with anisotropic convolutional neural networks

    Establishing correspondence between shapes is a fundamental problem in geometry processing, arising in a wide variety of applications. The problem is especially difficult in the setting of non-isometric deformations, as well as in the presence of topological noise and missing parts, mainly due to the limited capability to model such deformations axiomatically. Several recent works showed that invariance to complex shape transformations can be learned from examples. In this paper, we introduce an intrinsic convolutional neural network architecture based on anisotropic diffusion kernels, which we term Anisotropic Convolutional Neural Network (ACNN). In our construction, we generalize convolutions to non-Euclidean domains by constructing a set of oriented anisotropic diffusion kernels, creating in this way a local intrinsic polar representation of the data (a 'patch'), which is then correlated with a filter. Several cascades of such filters, interleaved with linear and non-linear operators, are stacked to form a deep neural network whose parameters are learned by minimizing a task-specific cost. We use ACNNs to effectively learn intrinsic dense correspondences between deformable shapes in very challenging settings, achieving state-of-the-art results on some of the most difficult recent correspondence benchmarks.
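The core construction above, a bank of oriented anisotropic kernels correlated with a local patch, can be sketched in a simplified Euclidean analogue. The sketch below uses planar anisotropic Gaussians in place of the paper's intrinsic diffusion kernels (which require mesh Laplacian operators); the kernel size, anisotropy ratio, and number of orientations are illustrative assumptions, not values from the paper.

```python
import numpy as np

def anisotropic_kernel(size, sigma_u, sigma_v, theta):
    """Oriented anisotropic Gaussian kernel: a Euclidean stand-in for the
    intrinsic anisotropic diffusion kernels used by ACNN."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    # Rotate coordinates by theta so the kernel is elongated along one axis.
    u = np.cos(theta) * xx + np.sin(theta) * yy
    v = -np.sin(theta) * xx + np.cos(theta) * yy
    k = np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))
    return k / k.sum()

def acnn_responses(patch, n_orientations=8, size=7, sigma_u=2.0, sigma_v=0.7):
    """Correlate a local patch with a bank of oriented kernels, giving one
    response per orientation -- a crude 'local polar representation'."""
    responses = []
    for i in range(n_orientations):
        theta = np.pi * i / n_orientations
        k = anisotropic_kernel(size, sigma_u, sigma_v, theta)
        responses.append(float((patch * k).sum()))
    return np.array(responses)

patch = np.random.default_rng(0).normal(size=(7, 7))
print(acnn_responses(patch).shape)
```

In the actual method, such response banks are stacked with learned filters and non-linearities into a deep network; here the kernels are fixed only to show the patch-construction step.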

    Progressive Shape-Distribution-Encoder for 3D Shape Retrieval

    Since there are complex geometric variations with 3D shapes, extracting efficient 3D shape features is one of the most challenging tasks in shape matching and retrieval. In this paper, we propose a deep shape descriptor by learning shape distributions at different diffusion times via a progressive shape-distribution-encoder (PSDE). First, we develop a shape distribution representation with the kernel density estimator to characterize the intrinsic geometric structures of 3D shapes. Then, we propose to learn a deep shape feature through an unsupervised PSDE. Specifically, the unsupervised PSDE aims at modeling the complex non-linear transform of the estimated shape distributions between consecutive diffusion times. In order to characterize the intrinsic structures of 3D shapes more efficiently, we stack multiple PSDEs to form a network structure. Finally, we concatenate all neurons in the middle hidden layers of the unsupervised PSDE network to form an unsupervised shape descriptor for retrieval. Furthermore, by imposing an additional constraint on the outputs of all hidden layers, we propose a supervised PSDE to form a supervised shape descriptor, where for each hidden layer the similarity between a pair of outputs from the same class is as small as possible and the similarity between a pair of outputs from different classes is as large as possible. The proposed method is evaluated on three benchmark 3D shape datasets with large geometric variations, i.e., the McGill, SHREC’10 ShapeGoogle and SHREC’14 Human datasets, and the experimental results demonstrate the superiority of the proposed method over existing approaches.
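The first two steps described above, estimating a shape distribution with a kernel density estimator and passing it through a non-linear encoder stage, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-vertex descriptor values, grid resolution, bandwidth, and randomly initialized (untrained) encoder weights are all placeholder assumptions.

```python
import numpy as np

def kde_distribution(values, grid, bandwidth=0.1):
    """Gaussian kernel density estimate of per-vertex descriptor values,
    producing a fixed-length shape-distribution vector (the PSDE input)."""
    diffs = (grid[:, None] - values[None, :]) / bandwidth
    dens = np.exp(-0.5 * diffs ** 2).sum(axis=1)
    return dens / (dens.sum() + 1e-12)   # normalize to a distribution

def psde_stage(dist, W, b):
    """One encoder stage: a non-linear map from the distribution at
    diffusion time t toward the one at time t+1 (weights would be learned)."""
    return np.tanh(W @ dist + b)

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 32)
values_t = rng.uniform(size=500)         # stand-in per-vertex descriptor at time t
dist_t = kde_distribution(values_t, grid)
W = rng.normal(scale=0.1, size=(16, 32)) # untrained weights, illustration only
b = np.zeros(16)
hidden = psde_stage(dist_t, W, b)        # hidden activations feed the descriptor
print(hidden.shape)
```

In the full method, several such stages are stacked, trained to predict the next diffusion time's distribution, and the hidden-layer activations are concatenated into the retrieval descriptor.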