
    Learning shape correspondence with anisotropic convolutional neural networks

    Establishing correspondence between shapes is a fundamental problem in geometry processing, arising in a wide variety of applications. The problem is especially difficult in the setting of non-isometric deformations, as well as in the presence of topological noise and missing parts, mainly due to the limited capability to model such deformations axiomatically. Several recent works showed that invariance to complex shape transformations can be learned from examples. In this paper, we introduce an intrinsic convolutional neural network architecture based on anisotropic diffusion kernels, which we term Anisotropic Convolutional Neural Network (ACNN). In our construction, we generalize convolutions to non-Euclidean domains by constructing a set of oriented anisotropic diffusion kernels, creating in this way a local intrinsic polar representation of the data (a 'patch'), which is then correlated with a filter. Several cascades of such filters and linear and non-linear operators are stacked to form a deep neural network whose parameters are learned by minimizing a task-specific cost. We use ACNNs to effectively learn intrinsic dense correspondences between deformable shapes in very challenging settings, achieving state-of-the-art results on some of the most difficult recent correspondence benchmarks.
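    The construction above can be summarized as: build a family of oriented anisotropic heat kernels, apply them to the input signal to obtain a local polar 'patch' per vertex, and correlate each patch with learned filters. The sketch below illustrates this pipeline in Python under the assumption that the per-orientation anisotropic Laplacian matrices have already been computed from the mesh; the function names and array shapes are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def heat_kernel(L, t, k=100):
    """Low-rank heat kernel exp(-t L) from the k smallest eigenpairs of L."""
    # shift-invert just below zero to obtain the smallest eigenvalues of the
    # symmetric, positive semi-definite anisotropic Laplacian
    lam, phi = eigsh(L, k=k, sigma=-1e-6, which="LM")
    return phi @ np.diag(np.exp(-t * lam)) @ phi.T           # (n, n)

def acnn_patches(laplacians, ts, f):
    """Stack oriented/scaled heat kernels into a local 'patch' per vertex.

    laplacians : list of (n, n) anisotropic Laplacians, one per orientation
    ts         : list of diffusion times (scales)
    f          : (n, d) input signal on the vertices
    returns    : (n, p, d) patch tensor, p = n_orientations * n_scales
    """
    patches = [heat_kernel(L, t) @ f for L in laplacians for t in ts]
    return np.stack(patches, axis=1)

def acnn_layer(patches, weights):
    """Correlate each vertex patch with learned filters (one linear ACNN layer).

    weights : (p, d, d_out) filter bank; in practice learned by backpropagation.
    """
    return np.einsum("npd,pdo->no", patches, weights)
```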

    Semantic Cross-View Matching

    Matching cross-view images is challenging because the appearance and viewpoints differ significantly. While low-level features based on gradient orientations or filter responses can vary drastically with such changes in viewpoint, the semantic information in an image remains largely invariant. Consequently, semantically labeled regions can be used for cross-view matching. In this paper, we explore this idea and propose an automatic method for detecting and representing the semantic information of an RGB image, with the goal of matching it against a (non-RGB) geographic information system (GIS). A segmented image forms the input to our system, with segments assigned to semantic concepts such as traffic signs, lakes, roads, foliage, etc. We design a descriptor that robustly captures both the presence of semantic concepts and the spatial layout of the corresponding segments. Pairwise distances between the descriptors extracted from the GIS map and the query image are then used to generate a shortlist of the most promising locations that share similar semantic concepts in a consistent spatial layout. An experimental evaluation with challenging query images over a large urban area shows promising results.
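    As a concrete illustration of the descriptor idea, the hedged sketch below encodes which semantic concepts appear in a segmented image and roughly where they lie, then ranks candidate GIS locations by descriptor distance. The number of concepts, the grid size, and the L2 distance are placeholder choices, not the paper's exact design.

```python
import numpy as np

N_CONCEPTS = 8   # e.g. road, foliage, water, building, ... (placeholder label set)
GRID = 4         # coarse spatial grid capturing the layout of segments

def semantic_descriptor(label_map):
    """label_map: (H, W) integer array of per-pixel semantic concept ids."""
    H, W = label_map.shape
    desc = np.zeros((GRID, GRID, N_CONCEPTS))
    for i in range(GRID):
        for j in range(GRID):
            cell = label_map[i*H//GRID:(i+1)*H//GRID, j*W//GRID:(j+1)*W//GRID]
            counts = np.bincount(cell.ravel(), minlength=N_CONCEPTS)[:N_CONCEPTS]
            desc[i, j] = counts / max(cell.size, 1)   # per-cell concept frequencies
    return desc.ravel()

def shortlist(query_desc, gis_descs, k=10):
    """Indices of the k GIS locations whose descriptors are closest to the query."""
    d = np.linalg.norm(gis_descs - query_desc, axis=1)    # pairwise distances
    return np.argsort(d)[:k]
```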

    Spectral Geometric Methods for Deformable 3D Shape Retrieval

    As 3D applications ranging from medical imaging to industrial design continue to grow, so does the importance of developing robust 3D shape retrieval systems. A key issue in developing an accurate shape retrieval algorithm is the design of an efficient shape descriptor for which an index can be built and similarity queries can be answered efficiently. While the overwhelming majority of prior work on 3D shape analysis has concentrated on rigid shape retrieval, many real objects, such as articulated human bodies, are nonrigid and can therefore exhibit a variety of poses and deformations. In this thesis, we present novel spectral geometric methods for analyzing and distinguishing between deformable 3D shapes. First, we comprehensively review recent shape descriptors based on the spectral decomposition of the Laplace-Beltrami operator, which provides a rich set of eigenbases that are invariant to intrinsic isometries. We then provide a general and flexible framework for the analysis and design of shape signatures from the spectral graph wavelet perspective. To capture both global and local geometry, we propose a multiresolution shape signature based on a cubic spline wavelet generating kernel; this signature delivers best-in-class shape retrieval performance. Second, we investigate ambiguity modeling of the codebook for densely distributed low-level shape descriptors. Motivated by the ability of spatial cues to improve discrimination between shapes, we also propose using the isocontours of the second eigenfunction of the Laplace-Beltrami operator to partition the surface, which significantly improves the retrieval performance of time-scaled local descriptors. To further enhance retrieval accuracy, we introduce an intrinsic spatial pyramid matching approach. Extensive experiments on two 3D shape benchmarks assess the performance of the proposed spectral geometric approaches in comparison with state-of-the-art methods.
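    The spectral-graph-wavelet viewpoint can be made concrete with a short sketch: given the Laplace-Beltrami eigenpairs of a shape, a per-vertex signature collects band-pass responses over several scales. The generating kernel below is a simple placeholder (the thesis uses a cubic spline kernel), and the eigenpairs are assumed to come from an external mesh Laplacian computation.

```python
import numpy as np

def g(x):
    """Placeholder band-pass generating kernel (the thesis uses a cubic spline)."""
    return x * np.exp(1.0 - x)

def sgw_signature(evals, evecs, scales):
    """Per-vertex wavelet signature W[x, j] = sum_k g(t_j * lam_k) * phi_k(x)^2.

    evals  : (k,) Laplace-Beltrami eigenvalues (ascending)
    evecs  : (n, k) corresponding eigenfunctions sampled at the n vertices
    scales : (m,) scales t_j
    """
    phi2 = evecs ** 2                       # squared eigenfunctions, (n, k)
    G = g(np.outer(scales, evals))          # kernel responses, (m, k)
    return phi2 @ G.T                       # signature matrix, (n, m)
```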

    Geometric and Photometric Data Fusion in Non-Rigid Shape Analysis

    In this paper, we explore the use of the diffusion geometry framework for the fusion of geometric and photometric information in local and global shape descriptors. Our construction is based on the definition of a diffusion process on the shape manifold embedded into a high-dimensional space in which the embedding coordinates represent the photometric information. Experimental results show that such data fusion is useful in coping with different challenges of shape analysis where purely geometric and purely photometric methods fail.
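    One way to read the construction is that each surface point is embedded with both its 3D coordinates and its (weighted) photometric values, and a diffusion operator is then built on that joint embedding. The sketch below does this with a k-nearest-neighbour graph Laplacian; the weight alpha, kernel width sigma, and neighbourhood size are illustrative parameters, not values from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix

def fused_laplacian(xyz, rgb, alpha=0.5, sigma=0.1, knn=8):
    """Graph Laplacian of a joint geometric + photometric point embedding."""
    X = np.hstack([xyz, alpha * rgb])           # joint embedding coordinates
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=knn + 1)        # nearest neighbours (self included)
    n = X.shape[0]
    rows = np.repeat(np.arange(n), knn)
    cols = idx[:, 1:].ravel()
    w = np.exp(-dist[:, 1:].ravel() ** 2 / sigma ** 2)
    W = coo_matrix((w, (rows, cols)), shape=(n, n))
    W = 0.5 * (W + W.T)                         # symmetrize the affinity graph
    d = np.asarray(W.sum(axis=1)).ravel()
    D = coo_matrix((d, (np.arange(n), np.arange(n))), shape=(n, n))
    return D - W                                # generator of the diffusion process
```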