27 research outputs found

    A Survey of Sketch Based Modeling Systems


    Efficient sketch-based creation of detailed character models through data-driven mesh deformations

    Creating detailed character models is a very challenging task in animation production. Sketch-based character model creation from a 3D template provides a promising solution. However, how to quickly find correct correspondences between the user's drawn sketches and the 3D template model, how to efficiently deform the template to match the sketches exactly, and how to achieve real-time interactive modeling remain open problems. In this paper, we propose a new approach and develop a user interface to tackle these problems effectively. Our approach uses the user's sketches to retrieve the most similar 3D template model from our dataset, and combines human perception and interaction with efficient computation to extract the occluding and silhouette contours of the template and find correct correspondences quickly. We then combine skeleton-based deformation and mesh editing to deform the template to fit the sketches and create new, detailed 3D character models. The results presented in this paper demonstrate the effectiveness and advantages of our approach and the usefulness of our user interface.
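    The abstract's first step is retrieving the most similar 3D template from a dataset given the user's sketch. As a minimal illustration of such retrieval (not the paper's actual descriptor), the sketch and each template silhouette can be sampled as 2D point sets and ranked by chamfer distance; the function names here are illustrative assumptions:

    ```python
    import numpy as np

    def chamfer_distance(a, b):
        """Symmetric chamfer distance between 2D point sets a (N, 2) and b (M, 2)."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
        return d.min(axis=1).mean() + d.min(axis=0).mean()

    def retrieve_template(sketch_pts, template_contours):
        """Index of the template whose 2D silhouette contour best matches the sketch."""
        scores = [chamfer_distance(sketch_pts, c) for c in template_contours]
        return int(np.argmin(scores))
    ```

    A real system would use a learned or view-invariant descriptor; chamfer distance simply makes the "most similar template" criterion concrete.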

    Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches

    The rapid development of AR/VR brings tremendous demand for 3D content. While the widely used Computer-Aided Design (CAD) method requires a time-consuming and labor-intensive modeling process, sketch-based 3D modeling offers a potential solution as a natural form of human-computer interaction. However, the sparsity and ambiguity of sketches make it challenging to generate high-fidelity content reflecting creators' ideas. Precise drawing from multiple views or strategic step-by-step drawing is often required to tackle the challenge, but is not friendly to novice users. In this work, we introduce a novel end-to-end approach, Deep3DSketch+, which performs 3D modeling using only a single free-hand sketch, without requiring multiple sketches or view information. Specifically, we introduce a lightweight generation network for efficient real-time inference, and a structure-aware adversarial training approach with a Stroke Enhancement Module (SEM) that captures structural information to facilitate learning of realistic and finely detailed shape structures for high-fidelity results. Extensive experiments demonstrate the effectiveness of our approach, with state-of-the-art (SOTA) performance on both synthetic and real datasets.

    Sketch-based modeling with a differentiable renderer

    © 2020 The Authors. Computer Animation and Virtual Worlds published by John Wiley & Sons, Ltd. Sketch-based modeling aims to recover three-dimensional (3D) shape from two-dimensional line drawings. However, due to the sparsity and ambiguity of the sketch, it is extremely challenging for computers to interpret line drawings of physical objects. Most conventional systems are restricted to specific scenarios, such as recovering specific shapes, and do not generalize well. Recent progress in deep learning has sparked new ideas for solving computer vision and pattern recognition problems. In this work, we present an end-to-end learning framework to predict 3D shape from line drawings. Our approach is based on a two-step strategy: it converts the sketch image to a normal image, then recovers the 3D shape from it. A differentiable renderer is proposed and incorporated into this framework, allowing the rendering pipeline to be integrated with neural networks. Experimental results show our method outperforms the state of the art, demonstrating that our framework can cope with the challenges of single-sketch-based 3D shape modeling.
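    The second step of the two-step strategy above, recovering shape from a predicted normal image, can be sketched with classical normal integration (a simplification of whatever the paper actually uses): for a surface z(x, y) with unit normal n, the depth gradients are dz/dx = -nx/nz and dz/dy = -ny/nz, which can be integrated along scanline paths. The function name and two-path averaging scheme are illustrative assumptions:

    ```python
    import numpy as np

    def depth_from_normals(normals):
        """Recover a depth map (up to a constant offset) from an (H, W, 3) unit-normal map.

        Depth gradients: dz/dx = -nx/nz, dz/dy = -ny/nz. We integrate them along
        two scanline paths and average; this suffices for smooth, noise-free maps.
        """
        nz = np.clip(normals[..., 2], 1e-6, None)   # avoid division by zero
        p = -normals[..., 0] / nz                   # dz/dx
        q = -normals[..., 1] / nz                   # dz/dy
        # Path 1: integrate q down the first column, then p across each row.
        path1 = np.cumsum(q[:, 0])[:, None] + np.cumsum(p, axis=1)
        # Path 2: integrate p across the first row, then q down each column.
        path2 = np.cumsum(p[0, :])[None, :] + np.cumsum(q, axis=0)
        return (path1 + path2) / 2.0
    ```

    For noisy predicted normals, a least-squares (Poisson) integration would be used instead of path averaging.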

    Single Sketch Image based 3D Car Shape Reconstruction with Deep Learning and Lazy Learning

    Efficient car shape design is a challenging problem in both the automotive industry and the computer animation/games industry. In this paper, we present a system to reconstruct a 3D car shape from a single 2D sketch image. To learn the correlation between 2D sketches and 3D cars, we propose a Variational Autoencoder deep neural network that takes a 2D sketch and generates a set of multi-view depth and mask images, which form a more effective representation compared to 3D meshes and can be effectively fused to generate a 3D car shape. Since global models like deep learning have limited capacity to reconstruct fine-detail features, we propose a local lazy learning approach that constructs a small subspace based on a few relevant car samples in the database. Due to the small size of such a subspace, fine details can be represented effectively with a small number of parameters. With a low-cost optimization process, a high-quality car shape with detailed features is created. Experimental results show that the system performs consistently, creating highly realistic cars of substantially different shape and topology.
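    The fusion of multi-view mask images into a 3D shape can be illustrated with classic visual-hull carving on orthographic views: a voxel survives only if it projects inside the silhouette in every view. This is only the mask half of the abstract's depth-and-mask fusion, and the axis conventions below are assumptions for the sketch:

    ```python
    import numpy as np

    def visual_hull(mask_front, mask_side, mask_top):
        """Fuse three orthographic silhouette masks into a voxel occupancy grid.

        Grid indexed occ[y, x, z]; a voxel is kept only if it lies inside the
        silhouette in every view:
          mask_front[y, x] -- view along z,
          mask_side[y, z]  -- view along x,
          mask_top[x, z]   -- view along y.
        """
        occ = (mask_front[:, :, None]    # broadcast over z
               & mask_side[:, None, :]   # broadcast over x
               & mask_top[None, :, :])   # broadcast over y
        return occ
    ```

    Incorporating the depth images would additionally carve away voxels in front of each view's recovered surface, tightening the hull into the actual shape.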

    Nested Explorative Maps: A new 3D canvas for conceptual design in architecture

    In this digital age, architects still need to alternate between paper sketches and 3D modeling software for their designs. While 3D models make it possible to explore different views, creating them at very early stages may reduce creativity, since they allow neither superposing several tentative designs nor refining them progressively, as sketches do. To enable exploratory design in 3D, we introduce Nested Explorative Maps, a new system dedicated to interactive design in architecture. Our model enables coarse-to-fine sketching of nested architectural structures, allowing a 3D building to be sketched progressively from floor plan to interior design thanks to a series of nested maps able to spread in 3D. Each map supports the visual representation of uncertainty as well as the interactive exploration of alternative, tentative options. We validate the model through a user study conducted with professional architects, which highlights the potential of Nested Explorative Maps for conceptual design in architecture.

    SmartCanvas: Context-inferred Interpretation of Sketches for Preparatory Design Studies

    In early or preparatory design stages, an architect or designer sketches out rough ideas, not only about the object or structure being considered, but also about its relation to its spatial context. This is an iterative process, where the sketches are not only the primary means for testing and refining ideas, but also for communicating among a design team and to clients. Hence, sketching is the preferred medium for artists and designers during the early stages of design, albeit with a major drawback: sketches are 2D, and effects such as view perturbations or object movement are not supported, thereby inhibiting the design process. We present an interactive system that allows for the creation of a 3D abstraction of a designed space, built primarily by sketching in 2D within the context of an anchoring design or photograph. The system is progressive in the sense that the interpretations are refined as the user continues sketching. As a key technical enabler, we reformulate the sketch interpretation process as a selection optimization from a set of context-generated canvas planes in order to retrieve a regular arrangement of planes. We demonstrate our system (available at http://geometry.cs.ucl.ac.uk/projects/2016/smartcanvas/) with a wide range of sketches and design studies.

    Sketch2CAD: Sequential CAD Modeling by Sketching in Context

    We present a sketch-based CAD modeling system, where users create objects incrementally by sketching the desired shape edits, which our system automatically translates into CAD operations. Our approach is motivated by the close similarity between the steps industrial designers follow to draw 3D shapes and the operations CAD modeling systems offer to create similar shapes. To overcome the strong ambiguity of parsing 2D sketches, we observe that in a sketching sequence, each step makes sense and can be interpreted in the context of what has been drawn before. In our system, this context corresponds to a partial CAD model, inferred in the previous steps, which we feed along with the input sketch to a deep neural network in charge of interpreting how the model should be modified by that sketch. Our deep network architecture recognizes the intended CAD operation and segments the sketch accordingly, such that a subsequent optimization estimates the parameters of the operation that best fit the segmented sketch strokes. Since no dataset of paired sketching and CAD modeling sequences exists, we train our system by generating synthetic sequences of CAD operations that we render as line drawings. We present a proof-of-concept realization of our algorithm supporting four frequently used CAD operations. Using our system, participants are able to quickly model a large and diverse set of objects, demonstrating Sketch2CAD to be an alternative way of interacting with current CAD modeling systems.
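    The synthetic-training idea above, generating random sequences of CAD operations and rendering each step as a line drawing, can be sketched with a toy analogue: a heightmap model, two rectangular operations (extrude/cut), and an edge map standing in for the rendered drawing. Every name, the heightmap representation, and the edge-map "renderer" are illustrative assumptions, not Sketch2CAD's actual pipeline:

    ```python
    import random
    import numpy as np

    OPS = ("extrude", "cut")

    def apply_op(height, op, x0, y0, w, h, depth):
        """Apply a rectangular extrude/cut of the given depth to a heightmap model."""
        delta = depth if op == "extrude" else -depth
        out = height.copy()
        region = out[y0:y0 + h, x0:x0 + w]
        out[y0:y0 + h, x0:x0 + w] = np.maximum(region + delta, 0)  # cuts stop at ground level
        return out

    def render_edges(height):
        """Stand-in 'line drawing': pixels where the height changes between neighbours."""
        edges = np.zeros(height.shape, dtype=bool)
        edges[:, 1:] |= height[:, 1:] != height[:, :-1]
        edges[1:, :] |= height[1:, :] != height[:-1, :]
        return edges

    def synth_sequence(n_steps, size=16, seed=0):
        """Generate (partial model, operation, sketch) training triples from a random op sequence."""
        rng = random.Random(seed)
        height = np.zeros((size, size))
        samples = []
        for _ in range(n_steps):
            op = rng.choice(OPS)
            w, h = rng.randint(2, size // 2), rng.randint(2, size // 2)
            x0, y0 = rng.randint(0, size - w), rng.randint(0, size - h)
            new = apply_op(height, op, x0, y0, w, h, depth=rng.randint(1, 3))
            samples.append((height, (op, x0, y0, w, h), render_edges(new)))
            height = new
        return samples
    ```

    Each triple pairs the partial model (the context) with the operation label and the sketch of the new step, mirroring how the network is conditioned on what has been drawn before.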

    Efficient sketch-based 3D character modelling.

    Sketch-based modelling (SBM) has seen substantial research over the past two decades. In the early days, researchers aimed at developing techniques for modelling architectural and mechanical models through sketching. With the advancement of the technology used in designing visual effects for film, TV and games, the demand for highly realistic 3D character models has skyrocketed. To allow artists to create 3D character models quickly, researchers have proposed several techniques for efficient character modelling from sketched feature curves. Moreover, several research groups have developed 3D shape databases to retrieve 3D models from sketched inputs. Unfortunately, the current state of the art in sketch-based organic modelling (3D character modelling) still has many gaps and limitations. To bridge these gaps and improve current sketch-based modelling techniques, this research aims to develop an approach allowing direct and interactive modelling of 3D characters from sketched feature curves, while also making use of 3D shape databases to guide artists towards their desired models. The research involved finding a fusion of 3D shape retrieval, shape manipulation and shape reconstruction/generation techniques, backed by an extensive literature review, experimentation and results. The outcome of this research was a novel and improved technique for sketch-based modelling and a software interface that allows the artist to quickly and easily create realistic 3D character models with comparatively little effort and learning. The proposed work provides tools to draw 3D shape primitives and manipulate them using simple gestures, which leads to a better modelling experience than existing state-of-the-art SBM systems.