Neural 3D Mesh Renderer
For modeling the 3D world behind 2D images, which 3D representation is most
appropriate? A polygon mesh is a promising candidate for its compactness and
geometric properties. However, it is not straightforward to model a polygon
mesh from 2D images using neural networks because the conversion from a mesh to
an image, or rendering, involves a discrete operation called rasterization,
which prevents back-propagation. Therefore, in this work, we propose an
approximate gradient for rasterization that enables the integration of
rendering into neural networks. Using this renderer, we perform single-image 3D
mesh reconstruction with silhouette image supervision and our system
outperforms the existing voxel-based approach. Additionally, we perform
gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and
3D DeepDream, with 2D supervision for the first time. These applications
demonstrate the potential of the integration of a mesh renderer into neural
networks and the effectiveness of our proposed renderer.
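The core idea above is that rasterization's discrete pixel-coverage test has zero gradient almost everywhere, so some smooth surrogate is needed before rendering can sit inside a back-propagating network. The sketch below illustrates that principle in one dimension with a sigmoid relaxation of a hard coverage test; this is a generic smooth surrogate for illustration, not the paper's actual gradient construction (which derives gradients from how pixel colors change as vertices move), and all function names and the `sharpness` parameter are assumptions.

```python
import math

def hard_coverage(x, edge):
    # Discrete rasterization test: the pixel at x is covered (1.0) or
    # not (0.0). Its derivative w.r.t. the edge position is zero almost
    # everywhere, which blocks back-propagation.
    return 1.0 if x < edge else 0.0

def soft_coverage(x, edge, sharpness=10.0):
    # Smooth surrogate: sigmoid of the signed distance to the edge.
    # A hypothetical stand-in for an approximate rasterization gradient.
    return 1.0 / (1.0 + math.exp(-sharpness * (edge - x)))

def soft_coverage_grad(x, edge, sharpness=10.0):
    # Analytic derivative of the surrogate w.r.t. the edge position:
    # nonzero near the boundary, so a silhouette loss can move vertices.
    s = soft_coverage(x, edge, sharpness)
    return sharpness * s * (1.0 - s)
```

With the hard test, nudging `edge` never changes the loss gradient; with the soft version, pixels near the boundary produce a usable gradient signal, which is the property that lets silhouette supervision drive 3D mesh reconstruction.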
Data-Driven Shape Analysis and Processing
Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, through reviewing the
literature and relating the existing works with both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing.
Comment: 10 pages, 19 figures
Capture, Learning, and Synthesis of 3D Speaking Styles
Audio-driven 3D facial animation has been widely explored, but achieving
realistic, human-like performance is still unsolved. This is due to the lack of
available 3D datasets, models, and standard evaluation metrics. To address
this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans
captured at 60 fps and synchronized audio from 12 speakers. We then train a
neural network on our dataset that factors identity from facial motion. The
learned model, VOCA (Voice Operated Character Animation), takes any speech
signal as input - even speech in languages other than English - and
realistically animates a wide range of adult faces. Conditioning on subject
labels during training allows the model to learn a variety of realistic
speaking styles. VOCA also provides animator controls to alter speaking style,
identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball
rotations) during animation. To our knowledge, VOCA is the only realistic 3D
facial animation model that is readily applicable to unseen subjects without
retargeting. This makes VOCA suitable for tasks like in-game video, virtual
reality avatars, or any scenario in which the speaker, speech, or language is
not known in advance. We make the dataset and model available for research
purposes at http://voca.is.tue.mpg.de.
Comment: To appear in CVPR 2019
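The abstract's key architectural point is that identity is factored from motion: the same speech drives different faces because the model is conditioned on a subject label. A minimal sketch of that conditioning pattern, not VOCA's actual architecture: concatenate audio features with a one-hot speaker label and map them linearly to per-vertex displacements added to a neutral template mesh. All names and shapes here are illustrative assumptions.

```python
def animate(audio_feat, speaker_id, num_speakers, template, W):
    # Hypothetical identity-conditioned animation step (illustrative
    # only, not VOCA's real network): the speaker label enters as a
    # one-hot vector concatenated with the audio features.
    one_hot = [1.0 if i == speaker_id else 0.0 for i in range(num_speakers)]
    x = audio_feat + one_hot
    # Linear map from conditioned features to per-vertex offsets.
    offsets = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    # Displacements are added to a neutral template mesh, so identity
    # (face shape) and motion (offsets) stay factored.
    return [t + o for t, o in zip(template, offsets)]
```

Because the speaker label is an input rather than baked into the weights, switching the label at inference time changes the speaking style while the audio stream stays the same, which is the behavior the abstract describes for unseen-subject animation.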