Visual Object Networks: Image Generation with Disentangled 3D Representation
Recent progress in deep generative models has led to tremendous breakthroughs
in image generation. However, while existing models can synthesize
photorealistic images, they lack an understanding of our underlying 3D world.
We present a new generative model, Visual Object Networks (VON), that synthesizes
natural images of objects with a disentangled 3D representation. Inspired by
classic graphics rendering pipelines, we unravel our image formation process
into three conditionally independent factors---shape, viewpoint, and
texture---and present an end-to-end adversarial learning framework that jointly
models 3D shapes and 2D images. Our model first learns to synthesize 3D shapes
that are indistinguishable from real shapes. It then renders the object's 2.5D
sketches (i.e., silhouette and depth map) from its shape under a sampled
viewpoint. Finally, it learns to add realistic texture to these 2.5D sketches
to generate natural images. VON not only generates images that are more
realistic than state-of-the-art 2D image synthesis methods, but also enables
many 3D operations, such as changing the viewpoint of a generated image,
editing shape and texture, interpolating linearly in texture and shape space, and
transferring appearance across different objects and viewpoints.
Comment: NeurIPS 2018. Code: https://github.com/junyanz/VON Website: http://von.csail.mit.edu
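As a rough illustration of the pipeline this abstract describes, the Python sketch below samples independent shape and texture codes, projects a generated shape to 2.5D sketches (silhouette and depth), and textures them into an image. All module names, tensor shapes, and the toy orthographic projection are illustrative assumptions, not the paper's architecture; the authors' actual code is at the linked repository.

# Minimal sketch of VON's three-stage sampling pipeline (hypothetical module
# names and placeholder networks; see https://github.com/junyanz/VON for the
# real architectures). Structure mirrors the abstract: sample a 3D shape,
# project it to 2.5D sketches under a viewpoint, then texture the sketches.
import torch
import torch.nn as nn

class ShapeGenerator(nn.Module):
    """Maps a shape code to a coarse voxel occupancy grid (placeholder MLP)."""
    def __init__(self, z_dim=200, res=32):
        super().__init__()
        self.res = res
        self.net = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, res ** 3), nn.Sigmoid(),
        )

    def forward(self, z_shape):
        vox = self.net(z_shape)
        return vox.view(-1, 1, self.res, self.res, self.res)

def project_to_25d(voxels):
    """Toy orthographic projection along the depth axis: returns a silhouette
    and a mean-occupied-depth map, standing in for a differentiable 2.5D
    projection layer."""
    occ = (voxels.squeeze(1) > 0.5).float()                  # (B, D, H, W)
    silhouette = occ.max(dim=1).values                       # (B, H, W)
    idx = torch.arange(occ.shape[1], dtype=torch.float32).view(1, -1, 1, 1)
    idx = idx / occ.shape[1]
    depth = (occ * idx).sum(dim=1) / occ.sum(dim=1).clamp(min=1.0)
    return torch.stack([silhouette, depth], dim=1)           # (B, 2, H, W)

class TextureNetwork(nn.Module):
    """Translates 2.5D sketches plus a texture code into an RGB image."""
    def __init__(self, z_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 + z_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, sketches, z_texture):
        b, _, h, w = sketches.shape
        z_map = z_texture.view(b, -1, 1, 1).expand(b, -1, h, w)
        return self.net(torch.cat([sketches, z_map], dim=1))

# Independent shape and texture codes realise the disentanglement.
shape_gen, tex_net = ShapeGenerator(), TextureNetwork()
z_shape, z_texture = torch.randn(1, 200), torch.randn(1, 200)
sketches = project_to_25d(shape_gen(z_shape))
image = tex_net(sketches, z_texture)                         # (1, 3, 32, 32)

Because shape, viewpoint, and texture enter at separate stages, operations such as changing the viewpoint or swapping the texture code can be applied to one factor while the others are held fixed.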
Learning single-image 3D reconstruction by generative modelling of shape, pose and shading
We present a unified framework tackling two problems: class-specific 3D
reconstruction from a single image, and generation of new 3D shape samples.
These tasks have received considerable attention recently; however, most
existing approaches rely on 3D supervision, annotation of 2D images with
keypoints or poses, and/or training with multiple views of each object
instance. Our framework is very general: it can be trained in similar settings
to existing approaches, while also supporting weaker supervision. Importantly,
it can be trained purely from 2D images, without pose annotations, and with
only a single view per instance. We employ meshes as an output representation,
instead of voxels used in most prior work. This allows us to reason over
lighting parameters and exploit shading information during training, which
previous 2D-supervised methods cannot. Thus, our method can learn to generate
and reconstruct concave object classes. We evaluate our approach in various
settings, showing that: (i) it learns to disentangle shape from pose and
lighting; (ii) using shading in the loss improves performance compared to just
silhouettes; (iii) when using a standard single white light, our model
outperforms state-of-the-art 2D-supervised methods, both with and without pose
supervision, thanks to exploiting shading cues; (iv) performance improves
further when using multiple coloured lights, even approaching that of
state-of-the-art 3D-supervised methods; (v) shapes produced by our model
capture smooth surfaces and fine details better than voxel-based approaches;
and (vi) our approach supports concave classes such as bathtubs and sofas,
which methods based on silhouettes cannot learn.
Comment: Extension of arXiv:1807.09259, accepted to IJCV. Differentiable renderer available at https://github.com/pmh47/dir
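To make the role of shading in the training signal concrete, the sketch below shows one way a shading-aware 2D reconstruction loss could look: Lambertian shading computed from predicted surface normals under a single white directional light is compared to the image inside the object mask, alongside a silhouette term. The helper names, the Lambertian model, and the loss weighting are assumptions for illustration, not the paper's formulation; the authors' differentiable mesh renderer is at the linked repository.

# Hypothetical shading-based reconstruction loss. Shading depends on surface
# normals, so concave regions that share a silhouette still produce different
# images, giving the loss a signal that silhouette-only supervision lacks.
import torch
import torch.nn.functional as F

def lambertian_shading(normals, light_dir, albedo=1.0, ambient=0.1):
    """normals: (B, 3, H, W) surface normals; light_dir: (3,) light direction."""
    n = F.normalize(normals, dim=1)
    l = F.normalize(light_dir, dim=0).view(1, 3, 1, 1)
    diffuse = (n * l).sum(dim=1, keepdim=True).clamp(min=0.0)   # (B, 1, H, W)
    return albedo * (ambient + (1.0 - ambient) * diffuse)

def reconstruction_loss(pred_normals, pred_silhouette, image, gt_silhouette,
                        light_dir):
    """Silhouette term plus a shading term evaluated inside the object mask."""
    shaded = lambertian_shading(pred_normals, light_dir)
    mask = gt_silhouette.unsqueeze(1)                            # (B, 1, H, W)
    shading_term = (mask * (shaded - image)).abs().mean()
    silhouette_term = F.binary_cross_entropy(
        pred_silhouette.clamp(1e-4, 1 - 1e-4), gt_silhouette)
    return shading_term + silhouette_term

# Usage with random tensors standing in for renderer outputs and a grey image.
B, H, W = 2, 64, 64
pred_normals = torch.randn(B, 3, H, W)
pred_silhouette = torch.rand(B, H, W)
gt_silhouette = (torch.rand(B, H, W) > 0.5).float()
image = torch.rand(B, 1, H, W)
light = torch.tensor([0.0, 0.0, 1.0])
loss = reconstruction_loss(pred_normals, pred_silhouette, image,
                           gt_silhouette, light)

Under multiple coloured lights, the shading term would be evaluated per light (or per colour channel), which further constrains the recovered surface orientation.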