DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling
Face modeling has received much attention in the field of visual computing. In many scenarios, including cartoon characters, avatars for social media, 3D face caricatures, and face-related art and design, low-cost interactive face modeling is a popular approach, especially among amateur users. In this paper, we propose a deep learning based sketching system for 3D face and caricature modeling. The system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features. A novel CNN-based deep regression network is designed for inferring 3D face models from 2D sketches. Our network fuses both CNN and shape-based features of the input sketch, and has two independent branches of fully connected layers generating independent subsets of coefficients for a bilinear face representation. Our system also supports gesture-based interactions for users to further manipulate initial face models. Both user studies and numerical results indicate that our sketching system can help users create face models quickly and effectively. A significantly expanded face database with diverse identities, expressions, and levels of exaggeration is constructed to promote further research and evaluation of face modeling techniques.

Comment: 12 pages, 16 figures, to appear in SIGGRAPH 201
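The bilinear face representation mentioned above expresses a face mesh as a core tensor contracted with separate identity and expression coefficient vectors (the subsets of coefficients the two network branches predict). A minimal sketch of the idea, with toy sizes and a tensor layout that are my own assumptions rather than the paper's:

```python
import numpy as np

# Hypothetical sizes: V mesh vertices, n_id identity modes, n_exp expression modes.
V, n_id, n_exp = 4, 3, 2
rng = np.random.default_rng(0)
core = rng.standard_normal((3 * V, n_id, n_exp))  # core tensor (assumed layout)

def bilinear_face(core, w_id, w_exp):
    # Contract the core tensor with identity and expression coefficients
    # to obtain a flattened (3V,) vertex array, then reshape to (V, 3).
    verts = np.einsum('vie,i,e->v', core, w_id, w_exp)
    return verts.reshape(-1, 3)

mesh = bilinear_face(core, np.ones(n_id) / n_id, np.ones(n_exp) / n_exp)
```

Because the model is bilinear, the mesh is linear in each coefficient vector when the other is held fixed, which is what makes predicting the two coefficient subsets with independent branches sensible.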
Alive Caricature from 2D to 3D
Caricature is an art form that expresses subjects in an abstract, simple, and exaggerated manner. While many caricatures are 2D images, this paper presents an algorithm for creating expressive 3D caricatures from 2D caricature images with a minimum of user interaction. The key idea of our approach is to introduce an intrinsic deformation representation with the capacity for extrapolation, enabling us to create a deformation space from a standard face dataset that maintains face constraints while being sufficiently large to produce exaggerated face models. Built upon the proposed deformation representation, an optimization model is formulated to automatically find the 3D caricature that captures the style of the 2D caricature image. Experiments show that our approach has a better capability for expressing caricatures than fitting approaches that directly use classical parametric face models such as 3DMM and FaceWarehouse. Moreover, our approach is based on standard face datasets and avoids constructing a complicated 3D caricature training set, which provides great flexibility in real applications.

Comment: Accepted to CVPR 201
Caricaturing buildings for effective visualization
The objective of my research is to identify and analyze the techniques of exaggeration, simplification, and abstraction used by caricature and cartoon artists. I apply these techniques to an expressive 3D modelling process used to create building caricatures. This process minimizes the number of unimportant details and increases the recognizability of the buildings. Additionally, the building caricature process decreases the time spent modelling the buildings and reduces their overall file sizes. The process has been used to create further building caricatures, as well as interactive visualizations and 3D maps of the Texas A&M University campus.
Example Based Caricature Synthesis
The likeness of a caricature to the original face image is an essential and often overlooked part of caricature production. In this paper we present an example-based caricature synthesis technique, consisting of shape exaggeration, relationship exaggeration, and optimization for likeness. Rather than relying on a large training set of caricature face pairs, our shape exaggeration step is based on only one or a small number of examples of facial features. The relationship exaggeration step introduces two definitions which facilitate global facial feature synthesis. The first is the T-Shape rule, which describes the relative relationship between the facial elements in an intuitive manner. The second is the so-called proportions rule, which characterizes the facial features in proportion form. Finally, we introduce a similarity metric based on the Modified Hausdorff Distance (MHD) as the likeness metric, allowing us to optimize the configuration of facial elements, maximizing likeness while satisfying a number of constraints. The effectiveness of our algorithm is demonstrated with experimental results.
Head-tracked stereo viewing with two-handed 3D interaction for animated character construction
In this paper, we demonstrate a new interactive 3D desktop metaphor based on two-handed 3D direct manipulation registered with head-tracked stereo viewing. In our configuration, a six-degree-of-freedom head-tracker and CrystalEyes shutter glasses are used to produce stereo images that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space, which the user may view from different angles by moving their head. The user interacts with the simulated 3D environment using both hands simultaneously. The left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate, through a virtual tool metaphor, the objects appearing in front of the screen because of negative parallax. In this way, both incremental and absolute interactive input techniques are provided by the system. Hand-eye coordination is made possible by registration between virtual and physical space, allowing a variety of complex 3D tasks to be performed more easily and more rapidly than is possible using traditional interactive techniques. The system has been tested using both Polhemus Fastrak and Logitech ultrasonic input devices for tracking the head and 3D mouse.