
    DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling

    Face modeling has received much attention in the field of visual computing. There exist many scenarios, including cartoon characters, avatars for social media, 3D face caricatures, and face-related art and design, where low-cost interactive face modeling is a popular approach, especially among amateur users. In this paper, we propose a deep learning based sketching system for 3D face and caricature modeling. The system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features. A novel CNN-based deep regression network is designed for inferring 3D face models from 2D sketches. Our network fuses both CNN and shape-based features of the input sketch, and has two independent branches of fully connected layers generating independent subsets of coefficients for a bilinear face representation. Our system also supports gesture-based interactions for users to further manipulate initial face models. Both user studies and numerical results indicate that our sketching system can help users create face models quickly and effectively. A significantly expanded face database with diverse identities, expressions and levels of exaggeration is constructed to promote further research and evaluation of face modeling techniques. Comment: 12 pages, 16 figures, to appear in SIGGRAPH 201
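
    To make the two-branch design described above concrete, the following is a minimal, hypothetical PyTorch sketch of a regression network that fuses CNN features of the rasterized sketch with hand-crafted shape features and feeds the fused vector into two independent fully connected branches, one per coefficient subset of a bilinear face model. All layer sizes, coefficient counts and the fusion scheme are assumptions for illustration, not the authors' implementation.

        import torch
        import torch.nn as nn

        class TwoBranchSketchRegressor(nn.Module):
            """Hypothetical two-branch regressor in the spirit of the abstract above."""
            def __init__(self, shape_feat_dim=64, n_identity=50, n_expression=25):
                super().__init__()
                # CNN features from the rasterized sketch image (B, 1, H, W)
                self.cnn = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                    nn.Flatten(),
                )
                fused_dim = 64 * 4 * 4 + shape_feat_dim
                # Two independent fully connected branches, one per coefficient
                # subset of a bilinear face representation (identity x expression).
                self.identity_branch = nn.Sequential(
                    nn.Linear(fused_dim, 256), nn.ReLU(), nn.Linear(256, n_identity))
                self.expression_branch = nn.Sequential(
                    nn.Linear(fused_dim, 256), nn.ReLU(), nn.Linear(256, n_expression))

            def forward(self, sketch_image, shape_features):
                # Fuse image-based CNN features with shape-based features of the strokes.
                fused = torch.cat([self.cnn(sketch_image), shape_features], dim=1)
                return self.identity_branch(fused), self.expression_branch(fused)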

    Sketching as a solid modeling tool

    This paper describes 'Quick-sketch', a 2D and 3D modeling tool for pen-based computers. Users of this system define a model by simple pen strokes drawn directly on the screen of a pen-based PC. Lines, circles, arcs, or B-spline curves are automatically distinguished and interpreted from these strokes. The system also automatically determines relations, such as right angles, tangencies, symmetry, and parallelism, from the sketch input. These relationships are then used to clean up the drawing by making the approximate relationships exact. Constraints are established to maintain the relationships during further editing. A constraint maintenance system, based on gestural manipulation and soft constraints, is employed in this system. Several techniques for sketch-based definition of 3D objects are provided as well, including extrusion, surfaces of revolution, ruled surfaces, and sweeps. Features can be sketched on the surface of a 3D object using the same 2D and 3D techniques. In this way, objects of medium complexity can be sketched in seconds. The system can be viewed as a front-end to more sophisticated modeling, rendering or animation environments, serving as a hand-sketching tool in the preliminary design phase
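
    As a concrete illustration of the "make approximate relationships exact" step described above, the following hypothetical Python snippet detects a near-right angle between two strokes that share an endpoint and snaps it to exactly 90 degrees. The tolerance and the rotation-based correction are assumptions for illustration, not the paper's constraint solver.

        import math

        ANGLE_TOLERANCE_DEG = 8.0  # assumed detection tolerance

        def angle_between(v1, v2):
            # Angle in degrees between two 2D vectors.
            dot = v1[0] * v2[0] + v1[1] * v2[1]
            n1, n2 = math.hypot(*v1), math.hypot(*v2)
            return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

        def snap_to_right_angle(p_shared, p_a, p_b):
            """If segments (p_shared->p_a) and (p_shared->p_b) are nearly
            perpendicular, rotate p_b about p_shared so the angle is exactly 90."""
            va = (p_a[0] - p_shared[0], p_a[1] - p_shared[1])
            vb = (p_b[0] - p_shared[0], p_b[1] - p_shared[1])
            ang = angle_between(va, vb)
            if abs(ang - 90.0) > ANGLE_TOLERANCE_DEG:
                return p_b  # relationship not detected; leave the stroke untouched
            correction = math.radians(90.0 - ang)
            # Choose the rotation direction that moves the angle toward 90 degrees.
            if va[0] * vb[1] - va[1] * vb[0] < 0:
                correction = -correction
            c, s = math.cos(correction), math.sin(correction)
            return (p_shared[0] + vb[0] * c - vb[1] * s,
                    p_shared[1] + vb[0] * s + vb[1] * c)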

    OpenAlea 2.0: Architecture of an integrated modeling environment on the web

    Plant modeling is based on the use of a diverse set of design paradigms (L-systems, visual programming, imperative languages, or sketch-based interfaces). In this poster, the architecture of a new multi-paradigm and integrated modeling environment is presented. This desktop application will become a distributed web application, allowing users to run simulations on a cloud computing system and share virtual experiments on the web. The modeling environment will run in a web browser using HTML5 and WebGL technologies

    A Sketch-based Rapid Modeling Method for Crime Scene Presentation

    The reconstruction of crime scenes plays an important role in digital forensic applications. This article integrates computer graphics, sketch-based retrieval and virtual reality (VR) techniques to develop a low-cost and rapid 3D crime scene presentation approach, which can be used by investigators to analyze and simulate the criminal process. First, we constructed a collection of 3D models for indoor crime scenes using various popular techniques, including laser scanning, image-based modeling and geometric modeling. Second, to quickly obtain an object of interest from the 3D model database, a sketch-based retrieval method was proposed. Finally, a rapid modeling system that integrates our database and retrieval algorithm was developed to quickly build a digital crime scene. For practical use, an interactive real-time virtual roaming application was developed in Unity 3D with a low-cost VR head-mounted display (HMD). Practical cases have been implemented to demonstrate the feasibility and practicality of our method
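
    The sketch-based retrieval step described above can be pictured as nearest-neighbour search over precomputed model descriptors. The snippet below is a hypothetical illustration assuming each 3D model in the database has already been indexed with a feature vector (for example, extracted from rendered contour views); the descriptor and the similarity measure are assumptions, not the paper's retrieval algorithm.

        import numpy as np

        def cosine_similarity(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def retrieve(query_descriptor, model_index, top_k=5):
            """model_index: dict mapping model_id -> 1-D descriptor array."""
            scored = [(model_id, cosine_similarity(query_descriptor, desc))
                      for model_id, desc in model_index.items()]
            scored.sort(key=lambda item: item[1], reverse=True)
            return scored[:top_k]

        # Example usage with random placeholder descriptors:
        rng = np.random.default_rng(0)
        index = {f"model_{i}": rng.normal(size=128) for i in range(100)}
        query = rng.normal(size=128)
        print(retrieve(query, index, top_k=3))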

    Modeling Sketching Primitives to Support Freehand Drawing Based on Context Awareness

    Freehand drawing is an easy and intuitive method for capturing and expressing ideas. Sketch-based interfaces, however, lack support for natural sketching with drawing cues such as overlapping, over-looping and hatching, which occur frequently with physical pen and paper. In this paper, we analyze some characteristics of drawing cues in sketch-based interfaces and describe the different types of sketching primitives. An improved sketch information model is given; the idea is to represent and record design thinking during the freehand drawing process while accommodating individuality and diversity. A context-based interaction model is developed which can guide and support the development of new sketch-based interfaces. New applications with different context contents can easily be derived from it and developed further. Our approach supports the tasks that are common across applications, requiring the designer to provide support only for the application-specific tasks. It is applicable to modeling various sketching interfaces and applications. Finally, we illustrate the general operations of the system with examples from different applications
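
    One way to picture the primitive-plus-context idea is as a small data model in which each sketching primitive carries its drawing cue and a context object dispatches primitives to application-specific interpreters. The following Python sketch is a hypothetical illustration; the field names and dispatch scheme are assumptions, not the paper's model.

        from dataclasses import dataclass, field
        from typing import Callable, Dict, List, Tuple

        Point = Tuple[float, float]

        @dataclass
        class SketchPrimitive:
            points: List[Point]          # raw pen samples
            cue: str = "plain"           # e.g. "overlapping", "overlooping", "hatching"
            timestamp: float = 0.0

        @dataclass
        class SketchContext:
            application: str             # e.g. "floor_plan", "ui_mockup"
            interpreters: Dict[str, Callable[[SketchPrimitive], object]] = field(default_factory=dict)

            def interpret(self, primitive: SketchPrimitive):
                # Dispatch on the drawing cue; fall back to a generic handler if present.
                handler = self.interpreters.get(primitive.cue) or self.interpreters.get("plain")
                return handler(primitive) if handler else primitive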

    DeepSketchHair: Deep Sketch-based 3D Hair Modeling

    We present DeepSketchHair, a deep learning based tool for interactive modeling of 3D hair from 2D sketches. Given a 3D bust model as reference, our sketching system takes as input a user-drawn sketch (consisting of the hair contour and a few strokes indicating the hair growth direction within the hair region), and automatically generates a 3D hair model which matches the input sketch both globally and locally. The key enablers of our system are two carefully designed neural networks: S2ONet, which converts an input sketch to a dense 2D hair orientation field, and O2VNet, which maps the 2D orientation field to a 3D vector field. Our system also supports hair editing with additional sketches in new views. This is enabled by another deep neural network, V2VNet, which updates the 3D vector field with respect to the new sketches. All three networks are trained with synthetic data generated from a 3D hairstyle database. We demonstrate the effectiveness and expressiveness of our tool on a variety of hairstyles and also compare our method with prior art
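
    The pipeline described above can be read as a simple composition of the three networks. The snippet below is a hypothetical wiring of S2ONet, O2VNet and V2VNet using placeholder callables; the function signatures and tensor shapes are assumptions for illustration, not the authors' code.

        import numpy as np

        def generate_hair(sketch_2d, bust_model, s2onet, o2vnet):
            """Sketch -> dense 2D orientation field -> 3D vector field."""
            orientation_2d = s2onet(sketch_2d)                    # S2ONet: sketch to 2D orientation field
            vector_field_3d = o2vnet(orientation_2d, bust_model)  # O2VNet: lift to a 3D vector field
            return vector_field_3d

        def edit_hair(vector_field_3d, new_view_sketch, v2vnet):
            """Update the 3D vector field from an additional sketch in a new view."""
            return v2vnet(vector_field_3d, new_view_sketch)       # V2VNet: field-to-field update

        # Placeholder callables standing in for trained networks:
        dummy_s2o = lambda sketch: np.zeros((256, 256, 2))
        dummy_o2v = lambda ofield, bust: np.zeros((96, 96, 96, 3))
        dummy_v2v = lambda vfield, sketch: vfield
        field = generate_hair(np.zeros((256, 256)), None, dummy_s2o, dummy_o2v)
        field = edit_hair(field, np.zeros((256, 256)), dummy_v2v)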

    Interactive sketching of urban procedural models

    3D modeling remains a notoriously difficult task for novices despite significant research effort to provide intuitive and automated systems. We tackle this problem by combining the strengths of two popular domains: sketch-based modeling and procedural modeling. On the one hand, sketch-based modeling exploits our ability to draw but requires detailed, unambiguous drawings to achieve complex models. On the other hand, procedural modeling automates the creation of precise and detailed geometry but requires the tedious definition and parameterization of procedural models. Our system uses a collection of simple procedural grammars, called snippets, as building blocks to turn sketches into realistic 3D models. We use a machine learning approach to solve the inverse problem of finding the procedural model that best explains a user sketch. We use non-photorealistic rendering to generate artificial data for training convolutional neural networks capable of quickly recognizing the procedural rule intended by a sketch and estimating its parameters. We integrate our algorithm into a coarse-to-fine urban modeling system that allows users to create rich buildings by successively sketching the building mass, roof, facades, windows, and ornaments. A user study shows that with our approach non-expert users can generate complex buildings in just a few minutes
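
    The recognition step described above can be pictured as a CNN with two heads, one classifying which procedural snippet a sketch depicts and one regressing that snippet's parameters. The following PyTorch sketch is a hypothetical illustration; layer sizes and the numbers of snippets and parameters are assumptions, not the authors' network.

        import torch
        import torch.nn as nn

        class SnippetRecognizer(nn.Module):
            """Hypothetical joint classification/regression head for sketch input."""
            def __init__(self, n_snippets=10, n_params=4):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(),
                )
                self.rule_head = nn.Linear(64, n_snippets)   # which grammar snippet
                self.param_head = nn.Linear(64, n_params)    # its continuous parameters

            def forward(self, sketch):
                features = self.backbone(sketch)
                return self.rule_head(features), self.param_head(features)

        # The predicted snippet id would then select a grammar whose parameters are
        # filled in with the regressed values before the geometry is generated.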

    Improving Visualization Skills in Engineering Education

    This article analyzes the importance of visualization skills in engineering education. It proposes a dual approach based on computer graphics, using both Web-based graphic applications and a sketch-based modeling system. It addresses the importance of spatial abilities in the context of engineering education and the available techniques for evaluating these abilities from a psychological point of view. It then reviews some Web resources designed to help students improve their spatial abilities and presents two educational applications. Finally, it presents a pilot study carried out at La Laguna University