23 research outputs found

    A Survey of Sketch Based Modeling Systems


    Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches

    The rapid development of AR/VR brings tremendous demand for 3D content. While the widely used Computer-Aided Design (CAD) method requires a time-consuming and labor-intensive modeling process, sketch-based 3D modeling offers a potential solution as a natural form of human-computer interaction. However, the sparsity and ambiguity of sketches make it challenging to generate high-fidelity content that reflects creators' ideas. Precise drawings from multiple views or strategic step-by-step drawing are often required to tackle this challenge, but such workflows are not friendly to novice users. In this work, we introduce a novel end-to-end approach, Deep3DSketch+, which performs 3D modeling using only a single free-hand sketch, without requiring multiple sketches or view information. Specifically, we introduce a lightweight generation network for efficient real-time inference and a structure-aware adversarial training approach with a Stroke Enhancement Module (SEM) that captures structural information to facilitate the learning of realistic and finely detailed shape structures for high-fidelity performance. Extensive experiments demonstrate the effectiveness of our approach, which achieves state-of-the-art (SOTA) performance on both synthetic and real datasets.
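    As a rough illustration of the kind of single-sketch-to-mesh pipeline the abstract describes, the following Python (PyTorch) sketch shows a minimal encoder-decoder that deforms a fixed template mesh from one sketch image. The module names, layer sizes, and the 642-vertex template are assumptions for the example; the paper's Stroke Enhancement Module and adversarial training are not reproduced here.

    import torch
    import torch.nn as nn

    class SketchEncoder(nn.Module):
        """Encodes a 1-channel sketch image into a global feature vector."""
        def __init__(self, feat_dim=256):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(128, feat_dim)

        def forward(self, sketch):            # sketch: (B, 1, H, W)
            return self.fc(self.conv(sketch).flatten(1))

    class MeshDecoder(nn.Module):
        """Predicts per-vertex offsets that deform a fixed template mesh."""
        def __init__(self, n_vertices, feat_dim=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim, 512), nn.ReLU(),
                nn.Linear(512, n_vertices * 3),
            )

        def forward(self, feat, template_verts):          # template: (V, 3)
            offsets = self.mlp(feat).view(-1, template_verts.shape[0], 3)
            return template_verts.unsqueeze(0) + offsets  # (B, V, 3)

    # Toy forward pass: one 256x256 sketch deforming a 642-vertex template.
    encoder, decoder = SketchEncoder(), MeshDecoder(n_vertices=642)
    sketch = torch.rand(1, 1, 256, 256)
    template_verts = torch.rand(642, 3)
    print(decoder(encoder(sketch), template_verts).shape)  # torch.Size([1, 642, 3])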

    Efficient sketch-based creation of detailed character models through data-driven mesh deformations

    Creating detailed character models is a very challenging task in animation production. Sketch-based character model creation from a 3D template provides a promising solution. However, how to quickly find correct correspondences between the user's drawn sketches and the 3D template model, how to efficiently deform the template model to exactly match the sketches, and how to achieve real-time interactive modeling remain open problems. In this paper, we propose a new approach and develop a user interface to tackle these problems effectively. Our approach uses the user's drawn sketches to retrieve the most similar 3D template model from our dataset, and combines human perception and interaction with the computer's highly efficient computation to extract occluding and silhouette contours of the template model and find correct correspondences quickly. We then combine skeleton-based deformation and mesh editing to deform the template model to fit the drawn sketches and create new, detailed 3D character models. The results presented in this paper demonstrate the effectiveness and advantages of the proposed approach and the usefulness of the developed user interface.
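    The skeleton-based deformation step mentioned above is commonly implemented with linear blend skinning, where each vertex follows a weighted blend of bone transforms. The Python (NumPy) sketch below is a minimal, generic illustration with toy bones and weights, not the paper's actual deformation or mesh-editing code.

    import numpy as np

    def linear_blend_skinning(vertices, weights, bone_transforms):
        """vertices: (V, 3); weights: (V, B); bone_transforms: (B, 4, 4)."""
        v_h = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)  # (V, 4)
        per_bone = np.einsum('bij,vj->bvi', bone_transforms, v_h)  # (B, V, 4)
        blended = np.einsum('vb,bvi->vi', weights, per_bone)       # (V, 4)
        return blended[:, :3]

    # Toy example: two bones and three vertices; bone 1 translates +1 along x.
    verts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [1.0, 0.0, 0.0]])
    weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
    T0 = np.eye(4)
    T1 = np.eye(4)
    T1[0, 3] = 1.0
    print(linear_blend_skinning(verts, weights, np.stack([T0, T1])))
    # Vertices bound to bone 1 move by +1 in x; the half-weighted vertex moves halfway.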

    An Investigation into 2D and 3D Shapes Perception

    Numerous research results support the finding that a product's visual appearance is important. In particular, end products or services that involve direct product-user interaction have to be developed in accordance with the taste of the user and the market. A user-centered product should be designed and created according to both the technical requirements and the customer's needs, but it should also differentiate itself from the competition. The shape of a product, whether observed in 2D or 3D, should communicate various intangible meanings. Our aim was to investigate whether the meaning of bipolar adjectives varies when observing samples of 2D and 3D shapes. The study was conducted using 2D shape contour samples and interactive 3D extruded models that could be rotated in virtual space. In order to determine the relationship between a shape and the user's perception of it, the Kansei engineering methodology was used. We collected data with a semantic differential survey using a five-level Likert scale. The results revealed minor deviations in the users' perceptions of the 2D and 3D sample shapes.
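    A semantic differential survey of this kind is typically scored by averaging the Likert ratings of each bipolar adjective pair per stimulus group and comparing the means. The short Python example below illustrates the idea with invented adjective pairs and responses; it is not the study's actual data or analysis.

    import numpy as np

    # Rows: respondents; columns: bipolar adjective pairs, rated 1..5.
    pairs = ["angular-rounded", "simple-complex", "static-dynamic"]
    responses_2d = np.array([[2, 4, 3], [1, 5, 2], [2, 4, 4]])
    responses_3d = np.array([[2, 4, 3], [2, 4, 3], [3, 3, 4]])

    for name, m2, m3 in zip(pairs, responses_2d.mean(axis=0), responses_3d.mean(axis=0)):
        print(f"{name:16s} 2D: {m2:.2f}  3D: {m3:.2f}  difference: {m3 - m2:+.2f}")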

    Modeling 3D animals from a side-view sketch

    Shape Modeling International 2014. Using 2D contour sketches as input is an attractive solution for easing the creation of 3D models. This paper tackles the problem of creating 3D models of animals from a single, side-view sketch. We use the a priori assumptions of smoothness and of structural symmetry of the animal about the sagittal plane to inform the 3D reconstruction. Our contributions include methods for identifying and inferring the contours of shape parts from the input sketch, a method for identifying the hierarchy of these structural parts, including the detection of approximately symmetric pairs, and a hierarchical algorithm for positioning and blending these parts into a consistent 3D implicit-surface-based model. We validate this pipeline by showing that a number of plausible animal shapes can be automatically constructed from a single sketch.
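    Blending shape parts into a single implicit-surface model, as described above, can be illustrated by summing smoothly decaying field functions and taking an iso-surface of the sum. The Python sketch below uses an invented Gaussian field and toy part placements; it is only a schematic of the blending idea, not the paper's formulation.

    import numpy as np

    def part_field(points, center, radius):
        """Soft field of one part: 1 at its center, decaying toward 0 beyond its radius."""
        d2 = np.sum((points - center) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * radius ** 2))

    def blended_field(points, parts):
        """Sum of all part fields; the blended surface is the iso-contour at 0.5."""
        return sum(part_field(points, c, r) for c, r in parts)

    # Body and head of a toy animal as two blended blobs.
    parts = [(np.array([0.0, 0.0, 0.0]), 1.0), (np.array([1.5, 0.5, 0.0]), 0.6)]
    query = np.array([[0.0, 0.0, 0.0], [1.5, 0.5, 0.0], [0.8, 0.2, 0.0], [5.0, 0.0, 0.0]])
    print(blended_field(query, parts))  # points inside exceed 0.5; the far point is near 0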

    Sketch-based character prototyping by deformation

    Master's thesis (Master of Science).

    Reality3DSketch: Rapid 3D Modeling of Objects from Single Freehand Sketches

    The emerging trend of AR/VR places great demands on 3D content. However, most existing software requires expertise and is difficult for novice users to use. In this paper, we aim to create sketch-based modeling tools for user-friendly 3D modeling. We introduce Reality3DSketch, a novel application offering an immersive 3D modeling experience, in which a user can capture the surrounding scene with a monocular RGB camera and draw a single sketch of an object in the real-time reconstructed 3D scene. A 3D object is then generated and placed at the desired location by our novel neural network, which takes only a single sketch as input. The network can predict the pose of a drawing and turn a single sketch into a 3D model with view and structural awareness, which addresses the challenges of sparse sketch input and view ambiguity. We conducted extensive experiments on synthetic and real-world datasets and achieved state-of-the-art (SOTA) results in both sketch view estimation and 3D modeling performance. According to our user study, our method of performing 3D modeling in a scene is more than 5x faster than conventional methods, and users are more satisfied with the generated 3D models than with the results of existing methods. Comment: IEEE Transactions on Multimedia.
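    Placing a generated object into the reconstructed scene, as the abstract describes, amounts to applying the predicted pose (a rotation and a translation) to the mesh vertices. The Python snippet below shows only that generic placement step with placeholder pose values; the network that predicts the pose is not shown.

    import numpy as np

    def place_in_scene(vertices, rotation, translation):
        """vertices: (V, 3); rotation: (3, 3); translation: (3,). Returns scene-space vertices."""
        return vertices @ rotation.T + translation

    # Placeholder pose: a 90-degree yaw about z plus a shift to the predicted location.
    theta = np.pi / 2
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([0.4, -0.2, 1.1])
    canonical_verts = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
    print(place_in_scene(canonical_verts, R, t))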

    Fast character modeling with sketch-based PDE surfaces

    © 2020, The Author(s). Virtual characters are 3D geometric models of characters and have many applications in multimedia. In this paper, we propose a new physics-based deformation method and an efficient character modeling framework for the creation of detailed 3D virtual character models. Our physics-based deformation method uses PDE surfaces, where PDE stands for partial differential equation and PDE surfaces are defined as sculpting-force-driven shape representations of interpolation surfaces. Interpolation surfaces are obtained by interpolating key cross-section profile curves, and the sculpting-force-driven shape representation uses an analytical solution to a vector-valued partial differential equation involving sculpting forces to quickly obtain deformed shapes. Our character modeling framework consists of global modeling and local modeling. Global modeling, also called model building, is the process of quickly creating a whole character model with sketch-guided and template-based modeling techniques. Local modeling efficiently produces local details that improve the realism of the created character model using four shape manipulation techniques. Sketch-guided global modeling generates a character model from three different levels of sketched profile curves, called primary, secondary, and key cross-section curves, in three orthographic views. Template-based global modeling obtains a new character model by deforming a template model to match the three levels of profile curves. Four shape manipulation techniques for local modeling are investigated and integrated into the new framework: partial differential equation-based shape manipulation, generalized elliptic curve-driven shape manipulation, sketch-assisted shape manipulation, and template-based shape manipulation. These local modeling techniques provide both global and local shape control and are efficient in local shape manipulation. The final character models are represented as a collection of surfaces, modeled with two types of geometric entities: generalized elliptic curves (GECs) and partial differential equation-based surfaces. Our experiments indicate that the proposed modeling approach can build detailed and realistic character models easily and quickly.
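    PDE surfaces of the kind mentioned above are commonly formulated (for example, in Bloor-Wilson-style PDE surface methods) as solutions of a fourth-order elliptic partial differential equation over a parametric domain (u, v), with a force term driving the deformation. A representative form, written in LaTeX and not necessarily the paper's exact formulation, is:

    \left( \frac{\partial^{2}}{\partial u^{2}} + a^{2} \frac{\partial^{2}}{\partial v^{2}} \right)^{2} \mathbf{X}(u,v) = \mathbf{F}(u,v)

    Here \mathbf{X}(u,v) is the vector-valued surface function, a is a smoothing parameter weighting the two parametric directions, and \mathbf{F}(u,v) plays the role of the sculpting force; boundary conditions are supplied by the key cross-section profile curves.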