
    SHREC'16 Track: 3D Sketch-Based 3D Shape Retrieval

    Sketch-based 3D shape retrieval offers a uniquely accessible form of query and has broad applications, and it has therefore received increasing attention in the content-based 3D object retrieval community. However, it remains a challenging research topic due to the semantic gap between the inexact representation of sketches and the exact representation of 3D models. To enrich and advance the study of sketch-based 3D shape retrieval, we initiate research on 3D sketch-based 3D model retrieval and collect a 3D sketch dataset using a purpose-built 3D sketching interface that lets users draw sketches in the air while standing in front of a Microsoft Kinect. The objective of this track is to evaluate the performance of different 3D sketch-based 3D model retrieval algorithms using the hand-drawn 3D sketch query dataset and a generic 3D model target dataset. The benchmark contains 300 sketches evenly divided into 30 classes, as well as 1,258 3D models classified into 90 classes. In this track, nine runs were submitted by five groups, and their retrieval performance was evaluated using seven commonly used retrieval performance metrics. We hope that this benchmark, the comparative evaluation results, and the corresponding evaluation code will further promote sketch-based 3D shape retrieval and its applications.
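    As an illustration of how such a benchmark is scored, here is a minimal Python sketch (not the track's released evaluation code) of two of the commonly used retrieval metrics, Nearest Neighbor accuracy and mean Average Precision, computed from a query-to-target distance matrix; all names and the random toy data are illustrative.

```python
import numpy as np

def nn_accuracy(dist, query_labels, target_labels):
    """Fraction of queries whose single nearest target shares their class."""
    nearest = dist.argmin(axis=1)                 # closest target per query
    return float(np.mean(target_labels[nearest] == query_labels))

def mean_average_precision(dist, query_labels, target_labels):
    """mAP over all queries; same-class targets count as relevant."""
    aps = []
    for q in range(dist.shape[0]):
        order = dist[q].argsort()                 # targets ranked by distance
        rel = (target_labels[order] == query_labels[q]).astype(float)
        if rel.sum() == 0:
            continue
        prec_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((prec_at_k * rel).sum() / rel.sum())
    return float(np.mean(aps))

# Toy usage mirroring the benchmark's scale: 300 queries, 1,258 targets.
rng = np.random.default_rng(0)
dist = rng.random((300, 1258))
q_lab, t_lab = rng.integers(0, 30, 300), rng.integers(0, 90, 1258)
print(nn_accuracy(dist, q_lab, t_lab), mean_average_precision(dist, q_lab, t_lab))
```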

    Sketch-based 3D Shape Retrieval using Convolutional Neural Networks

    Retrieving 3D models from 2D human sketches has received considerable attention in graphics, image retrieval, and computer vision. In nearly all state-of-the-art approaches, a large number of "best views" is computed for the 3D models, in the hope that the query sketch matches one of these 2D projections using predefined features. We argue that this two-stage approach (view selection, then matching) is pragmatic but problematic: the "best views" are subjective and ambiguous, which makes the matching inputs obscure, and this imprecision in turn makes it hard to choose features manually. Instead of relying on the elusive concept of "best views" and on hand-crafted features, we propose a minimalist definition of views and learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. We then learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches, with a loss function defined on within-domain as well as cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state-of-the-art approaches, outperforming them on all conventional metrics. Comment: CVPR 2015.
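    The following is a hedged PyTorch sketch of a loss built on within-domain and cross-domain similarities, in the spirit of the two Siamese CNNs described above; the contrastive formulation and the margin value are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def domain_loss(emb_a, emb_b, labels_a, labels_b, margin=1.0):
    """Contrastive loss over all pairs drawn from two embedding sets."""
    d = torch.cdist(emb_a, emb_b)                             # all-pairs distances
    same = (labels_a[:, None] == labels_b[None, :]).float()   # 1 if same class
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

def siamese_loss(sketch_emb, view_emb, sketch_lab, view_lab):
    # Within-domain terms keep each embedding space discriminative;
    # the cross-domain term aligns sketches with views.
    return (domain_loss(sketch_emb, sketch_emb, sketch_lab, sketch_lab)
            + domain_loss(view_emb, view_emb, view_lab, view_lab)
            + domain_loss(sketch_emb, view_emb, sketch_lab, view_lab))

# Toy usage: 16 sketch and 16 view embeddings of dimension 64.
loss = siamese_loss(torch.randn(16, 64), torch.randn(16, 64),
                    torch.randint(0, 5, (16,)), torch.randint(0, 5, (16,)))
print(loss.item())
```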

    Structure-Aware 3D VR Sketch to 3D Shape Retrieval

    We study the practical task of fine-grained 3D shape retrieval from 3D VR sketches. This task is of particular interest because 2D sketches have proven effective as queries for 2D images, yet due to the domain gap it remains hard to achieve strong 3D shape retrieval performance from 2D sketches. Recent work demonstrated the advantage of 3D VR sketching for this task. In our work, we focus on the challenge posed by the inherent inaccuracy of 3D VR sketches. We observe that retrieval results obtained with a fixed-margin triplet loss, as commonly used for retrieval tasks, contain many irrelevant shapes and often just one or a few shapes with a structure similar to the query. To mitigate this problem, we draw, for the first time, a connection between adaptive margin values and shape similarity. In particular, we propose a triplet loss with an adaptive margin driven by a "fitting gap", the similarity of two shapes under structure-preserving deformations. We also conduct a user study which confirms that this fitting gap is indeed a suitable criterion for evaluating the structural similarity of shapes. Furthermore, we introduce a dataset of 202 VR sketches for 202 3D shapes, drawn from memory rather than from observation. The code and data are available at https://github.com/Rowl1ng/Structure-Aware-VR-Sketch-Shape-Retrieval. Comment: Accepted by 3DV 2022.
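    A minimal PyTorch sketch of the core idea, a triplet loss whose margin scales with a per-triplet "fitting gap"; the computation of the fitting gap itself (shape similarity under structure-preserving deformation) is assumed to be precomputed and is outside this sketch.

```python
import torch
import torch.nn.functional as F

def adaptive_margin_triplet(anchor, positive, negative, fitting_gap, scale=1.0):
    """anchor: VR-sketch embeddings; positive/negative: 3D-shape embeddings.
    fitting_gap[i]: dissimilarity of positive[i] and negative[i] under
    structure-preserving deformation (assumed precomputed)."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    margin = scale * fitting_gap                  # per-triplet adaptive margin
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage: structurally close negatives (small gap) are pushed away less.
a, p, n = torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128)
print(adaptive_margin_triplet(a, p, n, torch.rand(8)).item())
```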

    Semantic Similarity Metric Learning for Sketch-Based 3D Shape Retrieval

    Since touch-screen technology makes sketches simple to draw and obtain, sketch-based 3D shape retrieval has received increasing attention in the computer vision and graphics community in recent years. The main challenge is the large domain discrepancy between 2D sketches and 3D shapes. Most existing works try to map sketches and 3D shapes simultaneously into a joint feature embedding space, which is inefficient and computationally costly. In this paper, we propose a novel semantic similarity metric learning method based on a teacher-student strategy for sketch-based 3D shape retrieval. We first extract the pre-learned semantic features of 3D shapes from the teacher network and then use them to guide the feature learning of 2D sketches in the student network. The experimental results show that our method achieves better retrieval performance.
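    A toy PyTorch sketch of the teacher-student strategy: a frozen teacher embeds 3D shapes, and the sketch (student) network is trained to reproduce those semantic features. The tiny linear stand-in networks and the MSE distillation loss are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in encoders; the real teacher/student are deep networks.
teacher = nn.Linear(512, 128)        # pre-trained 3D-shape encoder (frozen)
student = nn.Linear(256, 128)        # 2D-sketch encoder being trained
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(sketch_feats, shape_feats):
    with torch.no_grad():
        target = teacher(shape_feats)            # pre-learned semantic features
    loss = F.mse_loss(student(sketch_feats), target)  # pull student to teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for extracted features.
print(distill_step(torch.randn(8, 256), torch.randn(8, 512)))
```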

    Multi-view pairwise relationship learning for sketch based 3D shape retrieval

    Recent progress in sketch-based 3D shape retrieval has created a novel and user-friendly way to explore the massive number of 3D shapes on the Internet. However, current methods rely on designing invariant features for both sketches and 3D shapes, or on complex matching strategies, and therefore suffer from problems such as arbitrary drawing styles and inconsistent viewpoints. To tackle this, we propose a probabilistic framework based on Multi-View Pairwise Relationship (MVPR) learning. Our framework inserts multiple views of 3D shapes as an intermediate layer between sketches and 3D shapes, and transforms the original retrieval problem into one of inferring the pairwise relationship between sketches and views. We perform pairwise relationship inference with a novel MVPR net, which automatically predicts and merges the pairwise relationships between a sketch and multiple views, thus freeing us from exhaustively selecting the best view of a 3D shape. We also propose to learn robust features for sketches and views by fine-tuning pre-trained networks. Extensive experiments on a large dataset demonstrate that the proposed method significantly outperforms state-of-the-art methods.
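    A rough PyTorch sketch of the pairwise-relationship idea: score a sketch against every rendered view of a shape and merge the per-view scores, so that no single "best view" has to be chosen. The small scoring head and the mean-of-sigmoids merge are simplifications for illustration, not the MVPR net's actual design.

```python
import torch
import torch.nn as nn

class PairwiseScorer(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # Small MLP over a concatenated (sketch, view) embedding pair.
        self.head = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, sketch_emb, view_embs):
        """sketch_emb: (dim,); view_embs: (n_views, dim) for one 3D shape."""
        pairs = torch.cat([sketch_emb.expand_as(view_embs), view_embs], dim=1)
        scores = torch.sigmoid(self.head(pairs)).squeeze(1)  # match prob. per view
        return scores.mean()                                 # merged relationship score

# Toy usage: one sketch against 12 rendered views of a single shape.
scorer = PairwiseScorer()
print(scorer(torch.randn(128), torch.randn(12, 128)).item())
```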

    Web3D learning framework for 3D shape retrieval based on hybrid convolutional neural networks

    With the rapid development of Web3D technologies, sketch-based model retrieval has become an increasingly important challenge, while the application of Virtual Reality and 3D technologies has made shape retrieval of furniture over a web browser feasible. In this paper, we propose a learning framework for shape retrieval based on two Siamese VGG-16 Convolutional Neural Networks (CNNs), together with a CNN-based hybrid learning algorithm that selects the best view of a shape. In this algorithm, the AlexNet and VGG-16 architectures are used to perform the classification task and to extract features, respectively. In addition, a feature fusion method is used to measure the similarity of the output features from the two Siamese networks. The proposed framework offers a new alternative for furniture retrieval in the Web3D environment. Its primary innovation is the use of deep learning both to obtain the best view of a piece of 3D furniture and to address the cross-domain feature learning problem. Experiments verify the feasibility of the framework and show our approach to be superior to many mainstream state-of-the-art approaches.
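    A speculative PyTorch sketch of one component, best-view selection with a classification CNN: candidate renders of a shape are scored with AlexNet and the most confidently classified render is kept. Using softmax confidence as the selection criterion is an assumption; the paper trains its own hybrid algorithm for this step.

```python
import torch
import torchvision.models as models

# Off-the-shelf AlexNet classifier (ImageNet weights) in evaluation mode.
classifier = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def best_view(view_batch):
    """view_batch: (n_views, 3, 224, 224) ImageNet-normalized renders
    of one shape. Returns the render the classifier is most sure about."""
    with torch.no_grad():
        conf = classifier(view_batch).softmax(dim=1).max(dim=1).values
    return view_batch[conf.argmax()]     # most confidently classified render

# Toy usage: 12 random "renders" stand in for real normalized images.
print(best_view(torch.randn(12, 3, 224, 224)).shape)
```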

    Sketch-based 3D Object Retrieval Using Two Views and Visual Part Alignment

    Hand-drawn figures are the imprints of shapes in the human mind: how a person expresses a shape is a consequence of how he or she visualizes it. A query-by-sketch 3D object retrieval application is closely tied to this idea in two respects. First, describing sketches must involve the elements of a figure that matter most to a human. Second, the representative 2D projections of the target 3D objects must be limited to "the canonical views" from a human-cognition perspective. We advocate for these two rules by presenting a new approach to sketch-based 3D object retrieval that describes a 2D shape by the visually protruding parts of its silhouette. Furthermore, the proposed approach computes estimates of "part occlusion" and "symmetry" in 2D shapes within a new viewpoint-selection paradigm that represents each 3D object by only the two views corresponding to the minimum value of each measure.
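    A toy Python sketch of the symmetry half of this viewpoint-selection paradigm: score each candidate silhouette by the overlap (IoU) between its binary mask and the mask's mirror image, and keep the view with the minimum score. The paper's actual occlusion and symmetry estimators are more elaborate; this mirror-IoU score is purely illustrative.

```python
import numpy as np

def symmetry_score(mask):
    """mask: 2D boolean silhouette. 1.0 = perfectly mirror-symmetric."""
    mirrored = mask[:, ::-1]                      # flip left-right
    inter = np.logical_and(mask, mirrored).sum()
    union = np.logical_or(mask, mirrored).sum()
    return inter / union if union else 0.0

def pick_view(masks):
    """Index of the candidate silhouette with the lowest symmetry score."""
    return int(np.argmin([symmetry_score(m) for m in masks]))

# Toy usage: 12 random binary masks stand in for rendered silhouettes.
masks = [np.random.default_rng(i).random((64, 64)) > 0.5 for i in range(12)]
print(pick_view(masks))
```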