3 research outputs found

    Towards Recognizing of 3D Models Using A Single Image

    As 3D data becomes more popular, techniques for retrieving a particular 3D model are necessary. We want to recognize a 3D model from a single photograph: since any user can easily take an image of a model he/she would like to find, querying by an image is simple and natural. However, a 2D intensity image depends on viewpoint, texture and lighting conditions, so matching it against a 3D geometric model is very challenging. This paper proposes a first step towards matching a 2D image to models, based on features that are repeatable both in 2D images and in depth images (generated from 3D models); we show their independence from texture and lighting. The detected features are then matched to recognize 3D models by combining HOG (Histogram of Gradients) descriptors and repeatability scores. The proposed method reaches a recognition rate of 72% over 12 3D object categories and outperforms classical feature detection techniques for recognizing 3D models from a single image.
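
    As a rough illustration of the matching idea above (a sketch, not the paper's pipeline), the following Python snippet scores a candidate 3D model by comparing HOG descriptors of patches around detected keypoints in the query photograph against patches detected in depth renderings of the model, weighting each query keypoint by a repeatability score. The Harris-corner detector, patch size, similarity weighting and the externally supplied repeatability scores are all illustrative assumptions.

        # Sketch only: a generic detector and HOG patch matching stand in for the
        # paper's repeatable-feature detector and matching scheme.
        import numpy as np
        from skimage.feature import hog, corner_harris, corner_peaks

        PATCH = 32  # half-size of the square patch around each keypoint (assumption)

        def keypoints(img):
            # Stand-in detector; the paper uses features repeatable across
            # intensity and depth images, not Harris corners.
            return corner_peaks(corner_harris(img), min_distance=10)

        def patch_descriptors(img, pts):
            descs, kept = [], []
            for i, (r, c) in enumerate(pts):
                patch = img[r - PATCH:r + PATCH, c - PATCH:c + PATCH]
                if patch.shape != (2 * PATCH, 2 * PATCH):
                    continue  # skip keypoints too close to the image border
                descs.append(hog(patch, orientations=9,
                                 pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
                kept.append(i)
            return np.array(descs), kept

        def model_score(query_img, depth_renderings, repeatability):
            """repeatability: one weight per detected query keypoint (assumed given)."""
            q_desc, kept = patch_descriptors(query_img, keypoints(query_img))
            weights = np.asarray(repeatability)[kept]
            best = 0.0
            for depth in depth_renderings:   # one depth rendering per candidate viewpoint
                d_desc, _ = patch_descriptors(depth, keypoints(depth))
                if len(q_desc) == 0 or len(d_desc) == 0:
                    continue
                # nearest-neighbour HOG distance for each query keypoint
                dists = np.linalg.norm(q_desc[:, None, :] - d_desc[None, :, :], axis=2)
                sims = 1.0 / (1.0 + dists.min(axis=1))
                best = max(best, float(np.dot(sims, weights)))
            return best                      # rank candidate models by this score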

    3D object retrieval and segmentation: various approaches including 2D poisson histograms and 3D electrical charge distributions.

    Nowadays 3D models play an important role in many applications, e.g. games, cultural heritage and medical imaging. Due to the fast growth in the number of available 3D models, understanding, searching and retrieving such models have become interesting fields within computer vision. In order to search and retrieve 3D models, we present two different approaches. The first is based on solving the Poisson equation over 2D silhouettes of the models. This method uses 60 different silhouettes, which are automatically extracted from different view angles. Solving the Poisson equation for each silhouette assigns a number to each pixel as its signature. Accumulating these signatures generates a final histogram-based descriptor for each silhouette, which we call a SilPH (Silhouette Poisson Histogram). For the second approach, we propose two new robust shape descriptors based on the distribution of charge density on the surface of a 3D model. The Finite Element Method is used to calculate the charge density on each triangular face of each model as a local feature. We then use the Bag-of-Features and concentric-sphere frameworks to perform global matching with these local features. In addition to examining the retrieval accuracy of the descriptors in comparison to state-of-the-art approaches, the retrieval speed as well as robustness to noise and deformation are investigated on different datasets. Finally, to understand new complex models, we also use the distribution of electrical charge in a system that decomposes models into meaningful parts. Our robust, efficient and fully automatic segmentation approach is able to identify the segments attached to the main part of a model as well as to locate the boundary parts of the segments. The segmentation ability of the proposed system is evaluated on standard datasets, and its timing and accuracy are compared with existing state-of-the-art approaches.
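
    A minimal sketch of the SilPH idea (not the thesis implementation) is given below: the Poisson equation Laplacian(u) = -1 is solved on the interior of a binary silhouette with u = 0 outside, and the per-pixel solution values are binned into a normalised histogram that serves as the silhouette's descriptor. The Jacobi solver, iteration count and bin count are illustrative assumptions; a full model descriptor would accumulate the histograms of the 60 silhouettes rendered from different view angles.

        # Sketch only: simple Jacobi solver for the Poisson equation on a
        # silhouette, followed by a histogram of the per-pixel signatures.
        import numpy as np

        def poisson_signature(silhouette, iters=500):
            """silhouette: 2D boolean array, True inside the shape."""
            inside = silhouette.astype(bool)
            u = np.zeros(silhouette.shape, dtype=float)
            for _ in range(iters):                        # Jacobi iterations for Laplacian(u) = -1
                nbrs = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                        np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u = np.where(inside, (nbrs + 1.0) / 4.0,  # grid spacing h = 1
                             0.0)                         # Dirichlet condition u = 0 outside
            return u

        def silph_descriptor(silhouette, bins=64):
            u = poisson_signature(silhouette)
            vals = u[silhouette.astype(bool)]             # one signature value per interior pixel
            hist, _ = np.histogram(vals, bins=bins,
                                   range=(0.0, float(vals.max()) + 1e-9))
            return hist / max(hist.sum(), 1)              # normalised SilPH-style histogram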

    3D Object Retrieval Using Salient Views

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points [4, 3]. The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from [6] are then used to calculate the similarity between two 3D objects. Experimental results show that the retrieval results using the salient views are comparable to those of the existing light field descriptor method [6], while our method achieves a 15-fold speedup in feature extraction computation time.
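
    A small sketch of the salient-view selection step (not the paper's code) follows: given the salient 3D points of a model and their surface normals, each candidate viewpoint is scored by how many salient points lie near the occluding contour, i.e. have a normal roughly perpendicular to the viewing direction, and the highest-scoring views are kept. The perpendicularity threshold, viewpoint sampling and the use of normals as a silhouette test are illustrative assumptions.

        # Sketch only: approximate "salient point on the silhouette" by testing
        # whether the surface normal is nearly perpendicular to the view direction.
        import numpy as np

        def salient_view_scores(normals, view_dirs, cos_thresh=0.15):
            """normals: (N, 3) at the salient points; view_dirs: (V, 3) toward the object."""
            n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
            v = view_dirs / np.linalg.norm(view_dirs, axis=1, keepdims=True)
            cosines = np.abs(n @ v.T)                  # (N, V); ~0 means near the occluding contour
            return (cosines < cos_thresh).sum(axis=0)  # salient-points-on-silhouette count per view

        def select_salient_views(normals, view_dirs, k=3):
            scores = salient_view_scores(normals, view_dirs)
            return np.argsort(scores)[::-1][:k]        # indices of the k most salient views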