    Symmetry and Fourier descriptor: a hybrid feature for NURBS based B-Rep models retrieval

    As the number of models in 3D databases grows, an efficient indexing mechanism and a similarity measure are necessary to ease model retrieval. In this paper, we present a query-by-model framework for NURBS based B-Rep model retrieval that combines the partial symmetry of the object with the Fourier shape descriptor of canonical 2D projections of the 3D model. In practice, most objects are composed of similar parts up to an isometry. By detecting the dominant partial symmetry of a given NURBS based B-Rep model, we define two canonical planes from which the Fourier descriptors are extracted to measure the similarity among 3D models.
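    The Fourier descriptor step can be illustrated in a few lines. The sketch below (plain NumPy; the function names, coefficient count, and normalization choices are illustrative assumptions, not the paper's code) computes a translation-, scale-, and rotation-invariant descriptor from a closed 2D contour such as the outline of a canonical projection.

    import numpy as np

    def fourier_descriptor(contour, n_coeffs=32):
        """contour: (N, 2) ordered points on a closed boundary."""
        # Treat each boundary point as a complex number x + iy.
        z = contour[:, 0] + 1j * contour[:, 1]
        coeffs = np.fft.fft(z)
        # Drop the DC term: the descriptor becomes translation invariant.
        coeffs = coeffs[1:n_coeffs + 1]
        # Normalize by the first harmonic for scale invariance; taking
        # magnitudes discards phase, which yields rotation invariance.
        return np.abs(coeffs) / (np.abs(coeffs[0]) + 1e-12)

    # Models can then be compared by a distance between descriptors
    # extracted from their canonical projections.
    def descriptor_distance(d1, d2):
        return np.linalg.norm(d1 - d2)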

    Quasi Spin Images

    The increasing adoption of 3D capturing equipment, now also found in mobile devices, means that 3D content is increasingly prevalent. Common operations on such data, including 3D object recognition and retrieval, are based on the measurement of similarity between 3D objects. A common way to measure object similarity is through local shape descriptors, which aim to do part-to-part matching by describing portions of an object's shape. The Spin Image is one of the local descriptors most suitable for use in scenes with high degrees of clutter and occlusion, but its practical use has been hampered by high computational demands. The rise in processing power of the GPU represents an opportunity to significantly improve the generation and comparison performance of descriptors such as the Spin Image, thereby increasing the practical applicability of methods that make use of it. In this paper we introduce a GPU-based Quasi Spin Image (QSI) algorithm, a variation of the original Spin Image, and show that a speedup of an order of magnitude relative to a reference CPU implementation can be achieved in terms of the image generation rate. In addition, the QSI is noise-free, can be computed consistently, and a preliminary evaluation shows that it correlates well with the original Spin Image.
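    For reference, the classical Spin Image construction that the QSI varies can be sketched as follows (NumPy; the bin size and image width are illustrative parameters, and the bilinear interpolation of the original formulation is left out). Each surface point is mapped to cylindrical coordinates (alpha, beta) around the oriented basis point and accumulated into a 2D histogram.

    import numpy as np

    def spin_image(p, n, points, image_width=16, bin_size=0.05):
        """p: basis point (3,); n: unit normal at p; points: (N, 3) surface points."""
        d = points - p
        beta = d @ n                                  # height along the normal axis
        alpha = np.sqrt(np.maximum(np.einsum('ij,ij->i', d, d) - beta**2, 0.0))
        # Quantize (alpha, beta) into image cells, centring beta = 0 mid-image.
        i = np.floor(image_width / 2 - beta / bin_size).astype(int)
        j = np.floor(alpha / bin_size).astype(int)
        img = np.zeros((image_width, image_width))
        ok = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
        np.add.at(img, (i[ok], j[ok]), 1.0)           # accumulate point counts
        return img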

    Geometric Approaches for 3D Shape Denoising and Retrieval

    A key issue in developing an accurate 3D shape recognition system is to design an efficient shape descriptor for which an index can be built and similarity queries can be answered efficiently. While the overwhelming majority of prior work on 3D shape analysis has concentrated primarily on rigid shape retrieval, many real objects, such as articulated human bodies in motion, are nonrigid and hence can exhibit a variety of poses and deformations. Motivated by the recent surge of interest in content-based analysis of 3D objects in computer-aided design and multimedia computing, we develop in this thesis a unified theoretical and computational framework for 3D shape denoising and retrieval by incorporating insights gained from algebraic graph theory and spectral geometry. We first present a regularized kernel diffusion for 3D shape denoising by solving partial differential equations in the weighted graph-theoretic framework. Then, we introduce a computationally fast approach for surface denoising using the vertex-centered finite volume method coupled with the mesh covariance fractional anisotropy. Additionally, we propose a spectral-geometric shape skeleton for 3D object recognition based on the second eigenfunction of the Laplace-Beltrami operator, in a bid to capture the global and local geometry of 3D shapes. To further enhance the 3D shape retrieval accuracy, we introduce a graph matching approach by assigning geometric features to each endpoint of the shape skeleton. Extensive experiments are carried out on two 3D shape benchmarks to assess the performance of the proposed shape retrieval framework in comparison with state-of-the-art methods. The experimental results show that the proposed shape descriptor delivers best-in-class shape retrieval performance.
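    As a rough illustration of the spectral ingredient, the second Laplace-Beltrami eigenfunction can be approximated by the Fiedler vector of a mesh's combinatorial graph Laplacian (a simplification for illustration; the thesis's actual discretization and weighting may differ). A SciPy sketch:

    import numpy as np
    from scipy.sparse import coo_matrix, diags
    from scipy.sparse.linalg import eigsh

    def second_eigenfunction(vertices, faces):
        """vertices: (V, 3) positions; faces: (F, 3) triangle vertex indices."""
        # Build the adjacency matrix from the undirected mesh edges.
        e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
        V = len(vertices)
        A = coo_matrix((np.ones(len(e)), (e[:, 0], e[:, 1])), shape=(V, V))
        A = ((A + A.T) > 0).astype(float)
        L = diags(np.asarray(A.sum(axis=1)).ravel()) - A   # combinatorial Laplacian
        # Two smallest eigenpairs; eigenvalue 0 comes first, so the second
        # column is the Fiedler vector, i.e. the second eigenfunction.
        vals, vecs = eigsh(L, k=2, which='SM')
        return vecs[:, 1]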

    Sketch-based 3D Shape Retrieval using Convolutional Neural Networks

    Retrieving 3D models from 2D human sketches has received considerable attention in graphics, image retrieval, and computer vision. State-of-the-art approaches almost always compute a large number of "best views" for the 3D models, in the hope that the query sketch matches one of these 2D projections under predefined features. We argue that this two-stage approach (view selection, then matching) is pragmatic but problematic: the "best views" are subjective and ambiguous, which makes the matching inputs obscure, and this imprecision in turn makes it challenging to choose features manually. Instead of relying on the elusive concept of "best views" and on hand-crafted features, we define our views with a minimalist approach and learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. We then learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches, with a loss function defined on the within-domain as well as the cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state-of-the-art approaches, outperforming them on all conventional metrics.
    Comment: CVPR 201
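    The training signal combines within-domain and cross-domain similarities; a hedged sketch of one such loss is given below (PyTorch-style Python; the contrastive form, margin, and random pairing scheme are assumptions for illustration, not the authors' exact formulation).

    import torch
    import torch.nn.functional as F

    def contrastive(x1, x2, same_class, margin=1.0):
        """Hinge-style contrastive loss over a batch of embedding pairs."""
        d = F.pairwise_distance(x1, x2)                      # per-pair distance
        pos = same_class * d.pow(2)                          # pull matches together
        neg = (1 - same_class) * F.relu(margin - d).pow(2)   # push mismatches apart
        return (pos + neg).mean()

    def total_loss(sketch_emb, view_emb, labels_s, labels_v, margin=1.0):
        # Cross-domain term: sketch embeddings against view embeddings.
        loss = contrastive(sketch_emb, view_emb,
                           (labels_s == labels_v).float(), margin)
        # Within-domain terms: shuffled pairs inside each domain.
        perm = torch.randperm(sketch_emb.size(0))
        loss += contrastive(sketch_emb, sketch_emb[perm],
                            (labels_s == labels_s[perm]).float(), margin)
        loss += contrastive(view_emb, view_emb[perm],
                            (labels_v == labels_v[perm]).float(), margin)
        return loss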