1,382 research outputs found

    Multi-View Sketch-based FreeForm Modeling

    Get PDF
    For the generation of freeform 3D models, one of the most intuitive solutions is to use sketch-based modeling environments. Unfortunately, because the user interface relies on the analysis of sketches to determine which action the user is requesting, the number of distinct operations that can be supported is limited. In this paper, we present a 3D sketching system based on multiple views. Each view is specialized for one component of the modeling process (such as the skeleton or the profile) and relies on its own sketching interactions. With this approach, a user can improve their understanding of the modeling process and perform a larger range of modeling operations. Key words: sketch-based 3D modeling
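
    The multi-view idea above is essentially a routing decision: the meaning of a stroke depends on which specialized view receives it. The minimal Python sketch below illustrates that dispatch; all class and method names (SkeletonView, ProfileView, MultiViewEditor, interpret, on_stroke) are hypothetical and not taken from the paper.

```python
# Minimal sketch of the multi-view idea: each view owns its own stroke
# interpreter, so the same freehand gesture can mean different operations
# depending on where it is drawn. All names here are hypothetical.

class SkeletonView:
    def interpret(self, stroke):
        # In a skeleton view, a stroke might become a new bone segment.
        return ("add_skeleton_segment", stroke)

class ProfileView:
    def interpret(self, stroke):
        # In a profile view, the same stroke might define a cross-section.
        return ("set_profile_curve", stroke)

class MultiViewEditor:
    def __init__(self):
        self.views = {"skeleton": SkeletonView(), "profile": ProfileView()}

    def on_stroke(self, view_name, stroke):
        # Dispatch to the view the user drew in; no global gesture
        # classifier has to disambiguate every possible operation.
        return self.views[view_name].interpret(stroke)

editor = MultiViewEditor()
print(editor.on_stroke("skeleton", [(0, 0), (1, 2)]))
print(editor.on_stroke("profile", [(0, 0), (1, 2)]))
```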

    Neural networks based recognition of 3D freeform surface from 2D sketch

    Get PDF
    In this paper, the Back Propagation (BP) network and the Radial Basis Function (RBF) neural network are employed to recognize and reconstruct 3D freeform surfaces from 2D freehand sketches. Tests and comparison experiments were conducted with simulation data to evaluate both networks' performance in reconstructing freeform surfaces. The experimental results show that both the BP- and RBF-based freeform surface reconstruction methods are feasible, and that the RBF network performs better: its average point error between the reconstructed 3D surface data and the desired 3D surface data is less than 0.05 across all 75 test samples.
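
    To make the RBF approach concrete, the toy Python sketch below fits a classical RBF network (fixed Gaussian centres, least-squares output weights) to a synthetic height field and reports its mean point error on held-out samples. The centres, kernel width, regularisation, and synthetic surface are all assumptions for illustration; the paper's trained networks and data differ.

```python
# Illustrative RBF regression from 2D sketch coordinates to surface height.
# This is a toy, not the paper's trained network.
import numpy as np

def gaussian_kernel(X, C, sigma):
    # Pairwise Gaussian responses between inputs X (n,2) and centers C (m,2).
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)

# Synthetic "sketch" samples: 2D points with a known freeform height field.
X_train = rng.uniform(-1, 1, size=(200, 2))
z_train = np.sin(np.pi * X_train[:, 0]) * np.cos(np.pi * X_train[:, 1])

# RBF layer: fixed random centers, output weights fitted by ridge-regularised
# least squares (the classical way to train an RBF network's output layer).
centers = rng.uniform(-1, 1, size=(40, 2))
sigma = 0.4
Phi = gaussian_kernel(X_train, centers, sigma)
weights = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(len(centers)),
                          Phi.T @ z_train)

# Reconstruct heights for unseen sketch points and report the mean error.
X_test = rng.uniform(-1, 1, size=(50, 2))
z_true = np.sin(np.pi * X_test[:, 0]) * np.cos(np.pi * X_test[:, 1])
z_pred = gaussian_kernel(X_test, centers, sigma) @ weights
print("mean abs point error:", np.abs(z_pred - z_true).mean())
```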

    Freeform User Interfaces for Graphical Computing

    Get PDF
    Report number: 甲15222; Date of degree conferral: 2000-03-29; Degree category: doctorate by coursework; Degree: Doctor of Engineering; Degree register number: 博工第4717号; Graduate school and department: Graduate School of Engineering, Department of Information Engineering

    Sketching-out virtual humans: From 2d storyboarding to immediate 3d character animation

    Get PDF
    Virtual beings play a remarkable role in today's public entertainment, yet ordinary users remain mere audience members because they lack the necessary expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface that enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive "stick figure → fleshing-out → skin mapping" graphical animation pipeline, which realises the whole process of key framing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and the final animation synthesis through almost pure 2D sketching. A "creative model-based method" is developed, which emulates a human perception process, to generate 3D human bodies of various sizes, shapes, and fat distributions. Our current system also supports sketch-based crowd animation and the storyboarding of multi-character 3D intercommunication. The system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
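
    One concrete sub-step of such a pipeline is lifting the 2D stick figure to a 3D pose. The hedged Python sketch below uses the classical single-view observation that, under scaled orthography with known bone lengths, the foreshortening of each projected bone determines its depth offset up to a sign. The toy skeleton, bone lengths, and camera scale are assumptions; the paper's actual reconstruction method may differ.

```python
# Single-view pose lifting toy: depth offset along each bone follows from
# the foreshortening of its 2D projection. Skeleton and numbers are assumed.
import math

# (parent, child, bone_length) for a tiny arm chain: shoulder->elbow->wrist.
bones = [("shoulder", "elbow", 0.30), ("elbow", "wrist", 0.25)]

# 2D joint positions from the sketched stick figure (image units),
# and the assumed orthographic scale (image units per metre).
joints_2d = {"shoulder": (0.50, 0.50), "elbow": (0.70, 0.55), "wrist": (0.85, 0.65)}
scale = 1.0

joints_3d = {"shoulder": (joints_2d["shoulder"][0], joints_2d["shoulder"][1], 0.0)}
for parent, child, length in bones:
    px, py, pz = joints_3d[parent]
    cx, cy = joints_2d[child]
    # Foreshortening: projected length <= true length; the remainder is depth.
    proj = math.hypot(cx - joints_2d[parent][0], cy - joints_2d[parent][1]) / scale
    dz = math.sqrt(max(0.0, length ** 2 - proj ** 2))
    # The sign of dz is ambiguous from a single view; pick +z here.
    joints_3d[child] = (cx, cy, pz + dz)

for name, pos in joints_3d.items():
    print(name, tuple(round(v, 3) for v in pos))
```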

    Progressive surface modeling scheme from unorganised curves

    Get PDF
    This paper presents a novel surface modelling scheme that constructs a freeform surface progressively from unorganised curves representing the boundary and interior characteristic curves. The approach can construct a base surface model from four ordinary or composite boundary curves and supports incremental surface updating from interior characteristic curves, some of which may not lie on the final surface. The base surface is first constructed as a regular Coons surface and is then updated as each interior curve sketch is received. With this progressive modelling scheme, a final surface with multiple sub-surfaces can be obtained from a set of unorganised curves and transferred to commercial surface modelling software for detailed modification. The approach has been tested with examples based on 3D motion sketches; it is capable of dealing with unorganised design curves for surface modelling in conceptual design. Its limitations are also discussed.
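
    Since the base surface is a Coons patch, its construction from four boundary curves can be written down directly. The Python sketch below evaluates a bilinearly blended Coons patch; the four example boundary curves are illustrative assumptions, not data from the paper.

```python
# A minimal bilinearly blended Coons patch built from four boundary curves.
import numpy as np

def coons_point(c_bottom, c_top, c_left, c_right, u, v):
    """Evaluate the Coons patch at (u, v) in [0,1]^2.

    c_bottom(u), c_top(u) run left->right; c_left(v), c_right(v) run
    bottom->top; they must agree at the four shared corners.
    """
    # Ruled surfaces blending the two opposite boundary pairs.
    ruled_uv = (1 - v) * c_bottom(u) + v * c_top(u)
    ruled_vu = (1 - u) * c_left(v) + u * c_right(v)
    # Bilinear interpolation of the four shared corners.
    corners = ((1 - u) * (1 - v) * c_bottom(0) + u * (1 - v) * c_bottom(1)
               + (1 - u) * v * c_top(0) + u * v * c_top(1))
    return ruled_uv + ruled_vu - corners

# Four compatible boundary curves of a gently curved quad patch (assumed).
c_bottom = lambda u: np.array([u, 0.0, 0.2 * np.sin(np.pi * u)])
c_top    = lambda u: np.array([u, 1.0, 0.2 * np.sin(np.pi * u)])
c_left   = lambda v: np.array([0.0, v, 0.0])
c_right  = lambda v: np.array([1.0, v, 0.0])

# Sample a 5x5 grid of surface points from the boundary description alone.
grid = np.array([[coons_point(c_bottom, c_top, c_left, c_right, u, v)
                  for u in np.linspace(0, 1, 5)]
                 for v in np.linspace(0, 1, 5)])
print(grid.shape)  # (5, 5, 3)
```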

    DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling

    Get PDF
    Face modeling has received much attention in the field of visual computing. There are many scenarios, including cartoon characters, avatars for social media, 3D face caricatures, and face-related art and design, in which low-cost interactive face modeling is a popular approach, especially among amateur users. In this paper, we propose a deep learning based sketching system for 3D face and caricature modeling. The system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features. A novel CNN-based deep regression network is designed for inferring 3D face models from 2D sketches. Our network fuses both CNN and shape-based features of the input sketch and has two independent branches of fully connected layers generating independent subsets of coefficients for a bilinear face representation. Our system also supports gesture-based interactions that let users further manipulate the initial face models. Both user studies and numerical results indicate that our sketching system helps users create face models quickly and effectively. A significantly expanded face database with diverse identities, expressions, and levels of exaggeration is constructed to promote further research and evaluation of face modeling techniques. Comment: 12 pages, 16 figures, to appear in SIGGRAPH 2017
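
    The two-branch regression head can be illustrated with a short PyTorch sketch: fused CNN and shape features feed two independent fully connected branches, each emitting its own subset of bilinear face coefficients (identity-like and expression-like here). Layer widths, feature dimensions, and coefficient counts are assumptions for illustration and do not reproduce the paper's architecture.

```python
# Hedged sketch of a two-branch regression head over fused sketch features.
# All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class TwoBranchRegressor(nn.Module):
    def __init__(self, cnn_dim=512, shape_dim=128, id_dim=50, exp_dim=16):
        super().__init__()
        # Fuse image (CNN) features with shape-based sketch features.
        self.fuse = nn.Sequential(nn.Linear(cnn_dim + shape_dim, 256), nn.ReLU())
        # Independent branches, each predicting its own coefficient subset.
        self.id_branch = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                       nn.Linear(128, id_dim))
        self.exp_branch = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                        nn.Linear(128, exp_dim))

    def forward(self, cnn_feat, shape_feat):
        fused = self.fuse(torch.cat([cnn_feat, shape_feat], dim=1))
        return self.id_branch(fused), self.exp_branch(fused)

model = TwoBranchRegressor()
cnn_feat = torch.randn(4, 512)    # stand-in for CNN features of 4 sketches
shape_feat = torch.randn(4, 128)  # stand-in for shape-based sketch features
id_coeff, exp_coeff = model(cnn_feat, shape_feat)
print(id_coeff.shape, exp_coeff.shape)  # torch.Size([4, 50]) torch.Size([4, 16])
```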

    Sketching-out virtual humans: A smart interface for human modelling and animation

    Get PDF
    In this paper, we present a fast and intuitive interface for sketching out 3D virtual humans and animation. The user first draws stick-figure key frames and chooses one for "fleshing out" with freehand body contours. The system automatically constructs a plausible 3D skin surface from the rendered figure and maps it onto the posed stick figures to produce the 3D character animation. A "creative model-based method" is developed, which emulates a human perception process, to generate 3D human bodies of various body sizes, shapes, and fat distributions. In this approach, an anatomical 3D generic model has been created with three distinct layers: skeleton, fat tissue, and skin. It can be transformed sequentially through rigid morphing, fatness morphing, and surface fitting to match the original 2D sketch. An auto-beautification function is also offered to regularise asymmetrical 3D bodies that result from users' imperfect figure sketches. Our current system delivers character animation in various forms, including articulated figure animation, 3D mesh model animation, 2D contour figure animation, and even 2D NPR animation with personalised drawing styles. The system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
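
    The sequential morphing idea can be caricatured on a single torso cross-section: rigid morphing scales the generic section, fatness morphing offsets skin vertices along their normals, and surface fitting rescales the silhouette to the sketched width. The Python toy below does exactly that; the circular cross-section and all numbers are assumptions, not the paper's anatomical model.

```python
# Simplified layered morphing on one cross-section of a generic model.
# Circle geometry and all constants are illustrative assumptions.
import numpy as np

# Generic-model cross-section: a unit circle of skin vertices (x, y).
angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
skin = np.stack([np.cos(angles), np.sin(angles)], axis=1)
normals = skin.copy()  # for a unit circle, outward normals equal the positions

# 1) Rigid morphing: scale the generic section to the sketched proportions.
sketched_scale = 1.2
skin = skin * sketched_scale

# 2) Fatness morphing: offset along normals by a per-vertex fat thickness.
fat_thickness = 0.15 * (1.0 + 0.5 * np.cos(angles))  # more fat at the front
skin = skin + normals * fat_thickness[:, None]

# 3) Surface fitting: uniformly rescale so the silhouette (x-extent)
#    matches the width measured from the user's 2D contour sketch.
sketched_half_width = 1.5
skin = skin * (sketched_half_width / np.abs(skin[:, 0]).max())

print("final half-width:", round(np.abs(skin[:, 0]).max(), 3))
```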

    3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks

    Full text link
    We propose a method for reconstructing 3D shapes from 2D sketches in the form of line drawings. Our method takes as input a single sketch, or multiple sketches, and outputs a dense point cloud representing a 3D reconstruction of the input sketch(es). The point cloud is then converted into a polygon mesh. At the heart of our method lies a deep encoder-decoder network. The encoder converts the sketch into a compact representation encoding shape information. The decoder converts this representation into depth and normal maps capturing the underlying surface from several output viewpoints. The multi-view maps are then consolidated into a 3D point cloud by solving an optimization problem that fuses depth and normals across all viewpoints. In our experiments, compared to other methods such as volumetric networks, our architecture offers several advantages, including more faithful reconstruction, higher output surface resolution, and better preservation of topology and shape structure. Comment: 3DV 2017 (oral)
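
    The consolidation step can be approximated by back-projecting each view's depth map into a shared world frame and pooling the points; the actual method additionally solves an optimization that fuses the predicted normals. In the hedged Python sketch below, the orthographic cameras, image resolution, and the toy sphere depth maps are assumptions made for the example.

```python
# Back-project multi-view depth maps into a single point cloud (simplified:
# no normal fusion, orthographic cameras, toy sphere depths).
import numpy as np

def backproject_orthographic(depth, view_rotation, size=2.0):
    """Lift an HxW orthographic depth map into world-space 3D points."""
    h, w = depth.shape
    ys, xs = np.meshgrid(np.linspace(-size / 2, size / 2, h),
                         np.linspace(-size / 2, size / 2, w), indexing="ij")
    valid = np.isfinite(depth)
    cam_points = np.stack([xs[valid], ys[valid], depth[valid]], axis=1)
    # Camera-to-world: rotate camera-frame points back into the shared frame.
    return cam_points @ view_rotation.T

def toy_sphere_depth(resolution=64, radius=0.8, size=2.0):
    """Front-facing depth of a sphere, NaN where the ray misses it."""
    ys, xs = np.meshgrid(np.linspace(-size / 2, size / 2, resolution),
                         np.linspace(-size / 2, size / 2, resolution),
                         indexing="ij")
    r2 = radius ** 2 - xs ** 2 - ys ** 2
    return np.where(r2 >= 0, -np.sqrt(np.maximum(r2, 0)), np.nan)

# Two orthographic viewpoints: a front view and a 90-degree side view.
front = np.eye(3)
side = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])

cloud = np.concatenate([
    backproject_orthographic(toy_sphere_depth(), front),
    backproject_orthographic(toy_sphere_depth(), side),
])
print(cloud.shape)  # (N, 3) fused point cloud from both views
```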