DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling
Face modeling has received much attention in the field of visual computing.
There exist many scenarios, including cartoon characters, avatars for social
media, 3D face caricatures as well as face-related art and design, where
low-cost interactive face modeling is a popular approach, especially among
amateur users. In this paper, we propose a deep learning based sketching system
for 3D face and caricature modeling. This system has a labor-efficient
sketching interface that allows the user to draw freehand, imprecise yet
expressive 2D lines representing the contours of facial features. A novel
CNN-based deep regression network is designed for inferring 3D face models from
2D sketches. Our network fuses both CNN and shape-based features of the input
sketch, and has two independent branches of fully connected layers generating
independent subsets of coefficients for a bilinear face representation. Our
system also supports gesture-based interactions for users to further manipulate
the initial face models. Both user studies and numerical results indicate that our
sketching system can help users create face models quickly and effectively. A
significantly expanded face database with diverse identities, expressions and
levels of exaggeration is constructed to promote further research and
evaluation of face modeling techniques.
Comment: 12 pages, 16 figures, to appear in SIGGRAPH 201
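The abstract describes two independent fully connected branches that each regress a subset of coefficients for a bilinear face representation. As a minimal illustration of the final step only, the sketch below contracts a core tensor with identity and expression coefficient vectors; the tensor, dimensions, and function name are made up for this example, and the real coefficients would come from the two regression branches.

```python
import numpy as np

# Hypothetical toy dimensions: 15 vertex coordinates, 5 identity
# coefficients, 4 expression coefficients (stand-ins, not the paper's sizes).
rng = np.random.default_rng(0)
core = rng.standard_normal((15, 5, 4))  # stand-in for a learned core tensor

def bilinear_face(core, w_id, w_exp):
    """Contract the core tensor with identity and expression coefficients
    to produce a vector of face-vertex coordinates."""
    return np.einsum("vie,i,e->v", core, w_id, w_exp)

w_id = rng.standard_normal(5)    # would come from one branch of the network
w_exp = rng.standard_normal(4)   # would come from the other branch
verts = bilinear_face(core, w_id, w_exp)
print(verts.shape)  # (15,)
```

Because the two coefficient subsets enter the representation through separate tensor modes, predicting them with separate branches keeps identity and expression decoupled.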
Sketching space
In this paper, we present a sketch modelling system which we call Stilton. The program resembles a desktop VRML browser, allowing a user to navigate a three-dimensional model in a perspective projection, or panoramic photographs, which the program maps onto the scene as a 'floor' and 'walls'. We place an imaginary two-dimensional drawing plane in front of the user, and any geometric information that the user sketches onto this plane may be reconstructed to form solid objects through an optimization process. We show how the system can be used to reconstruct geometry from panoramic images, or to add new objects to an existing model. While panoramic images can greatly assist with some aspects of site familiarization and qualitative assessment of a site, without the addition of some foreground geometry they offer only limited utility in a design context. Therefore, we suggest that the system may be of use in 'just-in-time' CAD recovery of complex environments, such as shop floors, or construction sites, by recovering objects through sketched overlays, where other methods such as automatic line-retrieval may be impossible. The result of using the system in this manner is the 'sketching of space' - sketching out a volume around the user - and once the geometry has been recovered, the designer is free to quickly sketch design ideas into the newly constructed context, or analyze the space around them. Although end-user trials have not, as yet, been undertaken, we believe that this implementation may afford a user-interface that is both accessible and robust, and that the rapid growth of pen-computing devices will further stimulate activity in this area.
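Stilton's reconstruction is an optimization over sketched geometry; as a much simpler illustration of the first step, placing a drawing plane in front of the viewer, the sketch below lifts 2D sketch points onto a fronto-parallel plane by scaling pinhole-camera rays. The camera parameters, plane depth, and function name are all assumptions for this example, not the paper's actual formulation.

```python
import numpy as np

def lift_to_plane(uv, f=500.0, c=(320.0, 240.0), plane_depth=2.0):
    """Back-project 2D sketch points (pixel coordinates) onto a
    fronto-parallel drawing plane at z = plane_depth, assuming a
    pinhole camera at the origin with focal length f and principal point c."""
    uv = np.asarray(uv, dtype=float)
    rays = np.stack([(uv[:, 0] - c[0]) / f,
                     (uv[:, 1] - c[1]) / f,
                     np.ones(len(uv))], axis=1)  # ray directions with z = 1
    return rays * plane_depth                    # scale each ray to hit the plane

pts = lift_to_plane([[320.0, 240.0], [420.0, 240.0]])
print(pts)  # the point on the principal ray lands at (0, 0, 2)
```

Once strokes live on such a plane, an optimizer can adjust depths and connectivity to form the solid objects the abstract describes.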
A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation
To assist physicians in quickly finding the required 3D model from a mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods.
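The DR step described above, keeping only the top M low-frequency DFT coefficients of a feature vector, can be sketched in a few lines. The function names, the choice of M, and the toy signal are assumptions for illustration; a rough inverse transform is included to show why discarding high-frequency bins also suppresses noise.

```python
import numpy as np

def dr_lowfreq(feature, m=8):
    """DFT the feature vector and retain only the top-m
    low-frequency coefficients (the DR step described in the abstract)."""
    spec = np.fft.rfft(np.asarray(feature, dtype=float))
    return spec[:m]

def approx_from_lowfreq(spec_m, n):
    """Rough reconstruction from the retained coefficients, for intuition:
    zero the discarded high-frequency bins and invert the transform."""
    full = np.zeros(n // 2 + 1, dtype=complex)
    full[:len(spec_m)] = spec_m
    return np.fft.irfft(full, n)

# Toy feature vector: a low-frequency signal plus small high-frequency noise.
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 2 * np.arange(64) / 64) + 0.01 * rng.standard_normal(64)
z = dr_lowfreq(x, m=8)
print(len(z))  # 8 complex coefficients instead of 64 real values
```

Retrieval would then compare these compact spectra instead of the raw feature vectors, which is both cheaper and less noise-sensitive.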
Sketch Beautification: Learning Part Beautification and Structure Refinement for Sketches of Man-made Objects
We present a novel freehand sketch beautification method, which takes as
input a freely drawn sketch of a man-made object and automatically beautifies
it both geometrically and structurally. Beautifying a sketch is challenging
because of its highly abstract and diverse drawing style. Existing
methods are usually confined to the distribution of their limited training
samples and thus cannot beautify freely drawn sketches with rich variations. To
address this challenge, we adopt a divide-and-combine strategy. Specifically,
we first parse an input sketch into semantic components, beautify individual
components by a learned part beautification module based on part-level implicit
manifolds, and then reassemble the beautified components through a structure
beautification module. With this strategy, our method can go beyond the
training samples and handle novel freehand sketches. We demonstrate the
effectiveness of our system with extensive experiments and a perceptual study.
Comment: 13 figures
SketchBodyNet: A Sketch-Driven Multi-faceted Decoder Network for 3D Human Reconstruction
Reconstructing 3D human shapes from 2D images has received increasing
attention recently due to its fundamental support for many high-level 3D
applications. Compared with natural images, freehand sketches are much more
flexible for depicting various shapes, providing a promising and valuable way
for 3D human reconstruction. However, this task is highly challenging. The
sparse, abstract characteristics of sketches add severe difficulties, such as
arbitrariness, inaccuracy, and a lack of image detail, to the already severely
ill-posed problem of 2D-to-3D reconstruction. Although current methods have
achieved great success in reconstructing 3D human bodies from a single-view
image, they do not work well on freehand sketches. In this paper, we propose a
novel sketch-driven multi-faceted decoder network termed SketchBodyNet to
address this task. Specifically, the network consists of a backbone and three
separate attention decoder branches, where a multi-head self-attention module
is exploited in each decoder to obtain enhanced features, followed by a
multi-layer perceptron. The multi-faceted decoders aim to predict the camera,
shape, and pose parameters, respectively, which are then associated with the
SMPL model to reconstruct the corresponding 3D human mesh. In learning,
existing 3D meshes are projected via the camera parameters into 2D synthetic
sketches with joints, which are combined with the freehand sketches to optimize
the model. To verify our method, we collect a large-scale dataset of about 26k
freehand sketches and their corresponding 3D meshes containing various poses of
human bodies from 14 different angles. Extensive experimental results
demonstrate that our SketchBodyNet achieves superior performance in
reconstructing 3D human meshes from freehand sketches.
Comment: 9 pages, to appear in Pacific Graphics 202
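The abstract's training scheme projects 3D joints into 2D via predicted camera parameters. A common camera model paired with SMPL-style bodies is weak perspective; the minimal sketch below assumes that model (the paper does not specify it), with scale and translation standing in for the camera branch's outputs.

```python
import numpy as np

def project_joints(joints3d, scale, trans):
    """Weak-perspective projection often paired with SMPL-style models:
    drop the depth coordinate, then scale and translate in the image plane.
    `scale` and `trans` play the role of predicted camera parameters."""
    return scale * np.asarray(joints3d)[:, :2] + np.asarray(trans)

# Two toy 3D joints (x, y, z), projected with hypothetical camera parameters.
j3d = np.array([[0.0, 0.0, 1.0],
                [0.5, -0.5, 1.2]])
j2d = project_joints(j3d, scale=100.0, trans=[128.0, 128.0])
print(j2d)  # [[128. 128.], [178.  78.]]
```

Rendering such projected joints as synthetic sketches gives paired supervision that can be mixed with the collected freehand sketches.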
3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks
We propose a method for reconstructing 3D shapes from 2D sketches in the form
of line drawings. Our method takes as input a single sketch, or multiple
sketches, and outputs a dense point cloud representing a 3D reconstruction of
the input sketch(es). The point cloud is then converted into a polygon mesh. At
the heart of our method lies a deep, encoder-decoder network. The encoder
converts the sketch into a compact representation encoding shape information.
The decoder converts this representation into depth and normal maps capturing
the underlying surface from several output viewpoints. The multi-view maps are
then consolidated into a 3D point cloud by solving an optimization problem that
fuses depth and normals across all viewpoints. Based on our experiments,
compared to other methods, such as volumetric networks, our architecture offers
several advantages, including more faithful reconstruction, higher output
surface resolution, and better preservation of topology and shape structure.
Comment: 3DV 2017 (oral)
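The fusion step above consolidates per-view depth (and normal) maps into one point cloud; the full method solves an optimization across viewpoints. As a stripped-down illustration of just the back-projection of a single view, the sketch below lifts a depth map into camera-space points with a pinhole model; the intrinsics and function name are assumptions for this example.

```python
import numpy as np

def depth_to_points(depth, f, cx, cy):
    """Back-project a per-pixel depth map into camera-space 3D points
    using a pinhole camera model. Merging such clouds from several
    viewpoints is the consolidation step the method performs."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)  # toy fronto-parallel plane at z = 2
pts = depth_to_points(depth, f=4.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

In the actual pipeline the per-view clouds are further reconciled using the predicted normals before meshing.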
3D VR Sketch Guided 3D Shape Prototyping and Exploration
3D shape modeling is labor-intensive and time-consuming and requires years of
expertise. Recently, 2D sketches and text inputs were considered as conditional
modalities to 3D shape generation networks to facilitate 3D shape modeling.
However, text does not contain enough fine-grained information and is better
suited to describing a category or appearance than geometry, while 2D
sketches are ambiguous, and depicting complex 3D shapes in 2D again requires
extensive practice. Instead, we explore virtual reality sketches that are drawn
directly in 3D. We assume that the sketches are created by novices, without any
art training, and aim to reconstruct physically-plausible 3D shapes. Since such
sketches are potentially ambiguous, we tackle the problem of the generation of
multiple 3D shapes that follow the input sketch structure. Constrained by the
limited size of the training data, we carefully design our method, training the
model step by step and leveraging a multi-modal 3D shape representation. To
guarantee the plausibility of the generated 3D shapes, we leverage a normalizing flow that
models the distribution of the latent space of 3D shapes. To encourage the
fidelity of the generated 3D models to an input sketch, we propose a dedicated
loss that we deploy at different stages of the training process. We plan to
make our code publicly available.
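The plausibility mechanism above relies on a normalizing flow's exact density over the shape latent space. The paper's flow is a learned deep model; purely as an illustration of the change-of-variables rule that any such flow uses to score a latent, the sketch below evaluates the log-density under a toy elementwise affine flow x = a*z + b with a standard normal base. All names and values are hypothetical.

```python
import numpy as np

def affine_flow_logpdf(x, a, b):
    """Log-density of x under the flow x = a*z + b, z ~ N(0, I):
    log p(x) = log N((x - b) / a; 0, I) - sum(log|a|)
    (the standard change-of-variables formula for normalizing flows)."""
    x, a, b = (np.asarray(t, dtype=float) for t in (x, a, b))
    z = (x - b) / a                                   # invert the flow
    log_base = -0.5 * np.sum(z**2) - 0.5 * len(z) * np.log(2 * np.pi)
    return log_base - np.sum(np.log(np.abs(a)))       # Jacobian correction

lp = affine_flow_logpdf([0.0, 0.0], a=[1.0, 1.0], b=[0.0, 0.0])
print(lp)  # -log(2*pi): a 2-D standard normal evaluated at the origin
```

Latents with low log-density under the flow can then be rejected or re-sampled, which is how a flow prior enforces shape plausibility.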
Sketch-based interaction and modeling: where do we stand?
Sketching is a natural and intuitive communication tool used for expressing concepts or ideas which are difficult to communicate through text or speech alone. Sketching is therefore used for a variety of purposes, from the expression of ideas on two-dimensional (2D) physical media, to object creation, manipulation, or deformation in three-dimensional (3D) immersive environments. This variety in sketching activities brings about a range of technologies which, while having similar scope, namely that of recording and interpreting the sketch gesture to effect some interaction, adopt different interpretation approaches according to the environment in which the sketch is drawn. In fields such as product design, sketches are drawn at various stages of the design process, and therefore, designers would benefit from sketch interpretation technologies which support these differing interactions. However, research typically focuses on one aspect of sketch interpretation and modeling, such that literature on available technologies is fragmented and dispersed. In this paper, we bring together the relevant literature describing technologies which can support the product design industry, namely technologies which support the interpretation of sketches drawn on 2D media, sketch-based search interactions, as well as sketch gestures drawn in 3D media. This paper, therefore, gives a holistic view of the algorithmic support that can be provided in the design process. In so doing, we highlight the research gaps and future research directions required to provide full sketch-based interaction support.