1,165 research outputs found

    Explorative Study on Asymmetric Sketch Interactions for Object Retrieval in Virtual Reality

    Drawing tools for Virtual Reality (VR) enable users to model 3D designs from within the virtual environment itself. These tools employ sketching and sculpting techniques known from desktop-based interfaces and apply them to hand-based controller interaction. While these techniques allow for mid-air sketching of basic shapes, it remains difficult for users to create detailed and comprehensive 3D models. Our work focuses on supporting users in designing the virtual environment around them by enhancing sketch-based interfaces with a supporting system for interactive model retrieval. An immersed user can sketch to query a database of detailed 3D models and place the retrieved models in the virtual environment. To understand supportive sketching within a virtual environment, we conducted an explorative comparison of asymmetric methods of sketch interaction: 3D mid-air sketching, 2D sketching on a virtual tablet, 2D sketching on a fixed virtual whiteboard, and 2D sketching on a real tablet. Our work shows that different patterns emerge when users interact with 3D sketches rather than 2D sketches to compensate for varying results from the retrieval system. In particular, users adopt different strategies when drawing on canvases of different sizes or when using a physical device instead of a virtual canvas. While we pose our work as a retrieval problem for 3D models of chairs, our results can be extrapolated to other sketching tasks for virtual environments.
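
    The abstract does not describe the retrieval mechanics; sketch-based retrieval of this kind is commonly realised as nearest-neighbour search in a shared embedding space. Below is a minimal Python sketch under that assumption; the function, embedding dimensions, and data are hypothetical, not from the paper.

    import numpy as np

    def retrieve(sketch_feature, model_features, k=5):
        """Return indices of the k database models closest to a sketch.

        sketch_feature: (d,) embedding of the user's sketch.
        model_features: (n, d) embeddings of the 3D models in the database.
        """
        # L2-normalise so the dot product equals cosine similarity.
        s = sketch_feature / np.linalg.norm(sketch_feature)
        m = model_features / np.linalg.norm(model_features, axis=1, keepdims=True)
        similarity = m @ s                  # (n,) cosine similarities
        return np.argsort(-similarity)[:k]  # best matches first

    # Toy usage: 1000 chair models with 128-dim embeddings (random placeholders).
    rng = np.random.default_rng(0)
    print(retrieve(rng.normal(size=128), rng.normal(size=(1000, 128))))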

    User-adaptive sketch-based 3D CAD model retrieval

    3D CAD models are an important digital resource in the manufacturing industry, and 3D CAD model retrieval has become a key technology in product lifecycle management, enabling the reuse of existing design data. In this paper, we propose a new method to retrieve 3D CAD models from 2D pen-based sketch inputs. Sketching is a common and convenient way to communicate design intent during the early stages of product design, e.g., conceptual design. However, converting sketched information into precise 3D engineering models is cumbersome, and much of this effort can be avoided by reusing existing data. To this end, we present a user-adaptive sketch-based retrieval method. The contributions of this work are twofold. First, we propose a statistical measure for CAD model retrieval that is based on sketch similarity and accounts for users’ drawing habits. Second, for the 3D CAD models in the database, we propose a sketch generation pipeline that represents each model by a small yet sufficient set of sketches that are perceptually similar to human drawings. We present user studies and experiments that demonstrate the effectiveness of the proposed method in the design process.
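
    The abstract states that the measure is statistical and accounts for drawing habits but gives no formula. One plausible reading, shown as a hedged Python sketch (the inverse-variance weighting below is an assumption, not the paper's measure): weight each descriptor dimension by how consistently the user draws it, estimated from their sketch history.

    import numpy as np

    def user_adaptive_similarity(query, candidate, user_history):
        """Similarity between two sketch descriptors, weighted per user.

        query, candidate: (d,) sketch descriptors.
        user_history: (m, d) descriptors of the user's previous sketches.
        """
        # Dimensions the user draws consistently (low variance) get more weight.
        var = user_history.var(axis=0) + 1e-8
        w = (1.0 / var) / (1.0 / var).sum()
        diff = query - candidate
        return float(np.exp(-np.dot(w, diff * diff)))  # in (0, 1], 1 = identical

    # Toy usage with random 64-dim descriptors.
    rng = np.random.default_rng(1)
    history = rng.normal(size=(20, 64))
    print(user_adaptive_similarity(rng.normal(size=64), rng.normal(size=64), history))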

    Dense 3D Object Reconstruction from a Single Depth View

    In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike existing work, which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ takes only the voxel grid representation of a depth view of the object as input and generates the complete 3D occupancy grid at a high resolution of 256^3 by recovering the occluded/missing regions. The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Network (GAN) framework to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets and real-world Kinect datasets show that the proposed 3D-RecGAN++ significantly outperforms the state of the art in single-view 3D object reconstruction and is able to reconstruct unseen types of objects.
    Comment: TPAMI 2018. Code and data are available at: https://github.com/Yang7879/3D-RecGAN-extended. This article extends arXiv:1708.0796.
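
    To illustrate the input/output contract described above (partial occupancy grid in, completed occupancy grid out), here is a toy, untrained encoder-decoder sketch in PyTorch. It is not the authors' architecture (their code is at the linked repository), and the 32^3 resolution is scaled down from the paper's 256^3 for illustration.

    import torch
    import torch.nn as nn

    class VoxelCompleter(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: 3D convolutions downsample the partial occupancy grid.
            self.enc = nn.Sequential(
                nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            )
            # Decoder: transposed convolutions upsample back to full resolution.
            self.dec = nn.Sequential(
                nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
            )

        def forward(self, x):
            # x: (batch, 1, D, D, D) partial occupancy grid with values in [0, 1].
            return torch.sigmoid(self.dec(self.enc(x)))

    # Toy usage at 32^3; the paper reports 256^3 output resolution.
    grid = torch.rand(1, 1, 32, 32, 32)
    print(VoxelCompleter()(grid).shape)  # torch.Size([1, 1, 32, 32, 32])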

    Free-form Shape Modeling in XR: A Systematic Review

    Shape modeling has been an active research area in Computer Graphics for decades. The ability to create and edit complex 3D shapes is of key importance in Computer-Aided Design, Animation, Architecture, and Entertainment. With the growing popularity of Virtual and Augmented Reality, new applications and tools have been developed for artistic content creation, and real-time interactive shape modeling has become increasingly important across the continuum of virtual and augmented reality environments collectively termed eXtended Reality (XR). Shape modeling in XR opens new possibilities for intuitive and accessible design. Artificial Intelligence (AI) approaches that generate shape information from text prompts are set to change how artists create and edit 3D models. There has been a substantial body of research on interactive 3D shape modeling; however, there is no recent extensive review of the existing techniques or of what AI shape generation means for shape modeling in interactive XR environments. In this state-of-the-art paper, we fill this gap by surveying free-form shape modeling work in XR, with a focus on sculpting and 3D sketching, the most intuitive forms of free-form shape modeling. We classify and discuss these works across five dimensions: contribution of the articles, domain setting, interaction tool, auto-completion, and collaborative designing. The paper concludes by discussing the disconnect between interactive 3D sculpting and sketching and how this will likely evolve with the prevalence of AI shape-generation tools in the future.