33 research outputs found

    DifferSketching: How Differently Do People Sketch 3D Objects?

    Multiple sketch datasets have been proposed to understand how people draw 3D objects. However, such datasets are often of small scale and cover a small set of objects or categories. In addition, these datasets contain freehand sketches mostly from expert users, making it difficult to compare the drawings by expert and novice users, even though such comparisons are critical in informing more effective sketch-based interfaces for either user group. These observations motivate us to analyze how differently people with and without adequate drawing skills sketch 3D objects. We invited 70 novice users and 38 expert users to sketch 136 3D objects, which were presented as 362 images rendered from multiple views. This leads to a new dataset of 3,620 freehand multi-view sketches, which are registered with their corresponding 3D objects under certain views. Our dataset is an order of magnitude larger than existing datasets. We analyze the collected data at three levels, i.e., sketch level, stroke level, and pixel level, in terms of both spatial and temporal characteristics, and within and across groups of creators. We found that the drawings by professionals and novices show significant differences at the stroke level, both intrinsically and extrinsically. We demonstrate the usefulness of our dataset in two applications: (i) freehand-style sketch synthesis, and (ii) posing it as a potential benchmark for sketch-based 3D reconstruction. Our dataset and code are available at https://chufengxiao.github.io/DifferSketching/. Comment: SIGGRAPH Asia 2022 (Journal Track)
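
    The abstract does not specify the dataset's storage format, so the following is a minimal, hypothetical sketch of the kind of stroke-level statistics such an analysis might compare between novice and expert drawings, representing each sketch as a list of 2D point arrays (one array per stroke). All names and the toy data below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def stroke_length(points: np.ndarray) -> float:
    """Total polyline length of one stroke, given an (N, 2) array of points."""
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

def sketch_stats(strokes: list[np.ndarray]) -> dict:
    """Per-sketch summary: stroke count, total/mean stroke length, and
    bounding-box diagonal (a crude proxy for sketch extent)."""
    lengths = np.array([stroke_length(s) for s in strokes])
    all_pts = np.vstack(strokes)
    bbox_diag = float(np.linalg.norm(all_pts.max(axis=0) - all_pts.min(axis=0)))
    return {
        "num_strokes": len(strokes),
        "total_length": float(lengths.sum()),
        "mean_length": float(lengths.mean()),
        "bbox_diagonal": bbox_diag,
    }

if __name__ == "__main__":
    # Two toy "sketches" standing in for one novice and one expert drawing.
    rng = np.random.default_rng(0)
    novice = [rng.random((20, 2)) * 100 for _ in range(40)]   # many short strokes
    expert = [rng.random((80, 2)) * 100 for _ in range(12)]   # fewer, longer strokes
    print("novice:", sketch_stats(novice))
    print("expert:", sketch_stats(expert))
```

    Comparing such per-sketch summaries within and across the two creator groups is one simple way intrinsic stroke differences could be quantified.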

    Image-driven unsupervised 3D model co-segmentation

    Segmentation of 3D models is a fundamental task in computer graphics and vision. Geometric methods usually lead to non-semantic and fragmentary segmentations. Learning techniques require a large amount of labeled training data. In this paper, we explore the feasibility of 3D model segmentation by taking advantage of the huge number of easy-to-obtain 2D realistic images available on the Internet. The regional color exhibited in images provides information that is valuable for segmentation. To transfer the segmentations, we first filter out inappropriate images using several criteria. The views of the remaining images are estimated by our proposed texture-invariant view estimation Siamese network, whose training samples are generated by rendering-based synthesis without laborious labeling. Subsequently, we transfer and merge the segmentations produced by each individual image by applying registration and a graph-based aggregation strategy. The final result is obtained by combining all segmentations within the 3D model set. Our qualitative and quantitative experimental results on several model categories validate the effectiveness of our proposed method.
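
    As an illustration only: a minimal, hypothetical sketch of a weight-sharing (Siamese) encoder for view estimation, assuming PyTorch. The paper's actual architecture, losses, and training setup are not described in the abstract; the contrastive pairing of realistic photos with textureless renderings below is an assumption meant to convey the texture-invariance idea.

```python
# Hypothetical Siamese view-estimation sketch (not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewSiamese(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Shared convolutional encoder applied to both branches.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, embed_dim),
        )
        # Regression head predicting (azimuth, elevation) from the embedding.
        self.view_head = nn.Linear(embed_dim, 2)

    def forward(self, real_img, rendered_img):
        z_real = self.encoder(real_img)       # embedding of the realistic image
        z_synth = self.encoder(rendered_img)  # embedding of the textureless rendering
        return z_real, z_synth, self.view_head(z_real)

def contrastive_loss(z_a, z_b, same_view, margin: float = 1.0):
    """Pull embeddings together when the two inputs share a view, push apart otherwise."""
    d = F.pairwise_distance(z_a, z_b)
    return (same_view * d.pow(2) + (1 - same_view) * F.relu(margin - d).pow(2)).mean()

if __name__ == "__main__":
    model = ViewSiamese()
    real = torch.randn(4, 3, 64, 64)       # stand-in for Internet photos
    synth = torch.randn(4, 3, 64, 64)      # stand-in for rendered training views
    same = torch.tensor([1., 0., 1., 0.])  # 1 if the pair shows the same view
    z_r, z_s, view = model(real, synth)
    loss = contrastive_loss(z_r, z_s, same) + F.mse_loss(view, torch.zeros(4, 2))
    loss.backward()
    print("loss:", float(loss))
```

    Sharing encoder weights across the photo and rendering branches is what makes the learned view embedding insensitive to texture differences in this kind of setup.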

    An evaluation of canonical forms for non-rigid 3D shape retrieval

    Canonical forms attempt to factor out a non-rigid shape’s pose, giving a pose-neutral shape. This opens up the possibility of using methods originally designed for rigid shape retrieval for the task of non-rigid shape retrieval. We extend our recent benchmark for testing canonical form algorithms. Our new benchmark is used to evaluate a greater number of state-of-the-art canonical forms, on five recent non-rigid retrieval datasets, within two different retrieval frameworks. A total of fifteen different canonical form methods are compared. We find that the difference in retrieval accuracy between canonical form methods is small, but varies significantly across datasets. We also find that efficiency is the main difference between the methods.
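
    None of the fifteen methods evaluated in the paper are reproduced here; the following is a minimal sketch of one classical canonical-form construction (multidimensional scaling of approximate geodesic distances, in the spirit of Elad and Kimmel), assuming a point-sampled shape and NumPy/SciPy/scikit-learn, just to illustrate how pose can be factored out before rigid retrieval.

```python
# Illustrative geodesic-MDS canonical form; not one of the paper's benchmarked methods.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph
from sklearn.manifold import MDS

def canonical_form(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Embed approximate geodesic distances into 3D, giving a pose-neutral shape."""
    graph = kneighbors_graph(points, n_neighbors=k, mode="distance")
    geodesics = shortest_path(graph, directed=False)   # (N, N) geodesic approximation
    mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(geodesics)

if __name__ == "__main__":
    # Toy "bent bar": two poses of the same non-rigid shape should map to
    # similar canonical forms, which a rigid retrieval method can then compare.
    t = np.linspace(0, 1, 200)
    straight = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
    bent = np.stack([np.sin(t * np.pi / 2), 1 - np.cos(t * np.pi / 2),
                     np.zeros_like(t)], axis=1)
    c1, c2 = canonical_form(straight), canonical_form(bent)
    print(c1.shape, c2.shape)  # both (200, 3); downstream retrieval compares these
```

    Because geodesic distances are (approximately) preserved under bending, both poses yield near-identical embeddings, which is exactly what lets rigid descriptors be reused for non-rigid retrieval.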