
    Visual Object Networks: Image Generation with Disentangled 3D Representation

    Recent progress in deep generative models has led to tremendous breakthroughs in image generation. However, while existing models can synthesize photorealistic images, they lack an understanding of our underlying 3D world. We present a new generative model, Visual Object Networks (VON), that synthesizes natural images of objects with a disentangled 3D representation. Inspired by classic graphics rendering pipelines, we unravel our image formation process into three conditionally independent factors---shape, viewpoint, and texture---and present an end-to-end adversarial learning framework that jointly models 3D shapes and 2D images. Our model first learns to synthesize 3D shapes that are indistinguishable from real shapes. It then renders the object's 2.5D sketches (i.e., silhouette and depth map) from its shape under a sampled viewpoint. Finally, it learns to add realistic texture to these 2.5D sketches to generate natural images. VON not only generates images that are more realistic than those of state-of-the-art 2D image synthesis methods, but also enables many 3D operations, such as changing the viewpoint of a generated image, editing shape and texture, interpolating linearly in texture and shape space, and transferring appearance across different objects and viewpoints.
    Comment: NeurIPS 2018. Code: https://github.com/junyanz/VON Website: http://von.csail.mit.edu
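    The pipeline above factors generation into shape sampling, 2.5D projection, and texturing. A minimal PyTorch sketch of that three-stage flow follows; every module name, layer size, and the crude orthographic projection are illustrative assumptions, not the authors' implementation (which lives at the linked GitHub repo).

```python
# Hypothetical sketch of the VON three-factor pipeline: shape -> 2.5D -> texture.
import torch
import torch.nn as nn

class ShapeGenerator(nn.Module):
    """Maps a shape code z_shape to a voxel occupancy grid (placeholder network)."""
    def __init__(self, z_dim=200, res=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, res ** 3),
            nn.Sigmoid(),  # occupancy probabilities in [0, 1]
        )
        self.res = res

    def forward(self, z_shape):
        v = self.net(z_shape)
        return v.view(-1, 1, self.res, self.res, self.res)

def render_25d(voxels, viewpoint):
    """Project a voxel grid to a silhouette and depth map. Here: a crude
    orthographic projection along one axis that ignores `viewpoint`."""
    occ = voxels.squeeze(1)                    # (B, D, H, W)
    silhouette = occ.max(dim=1).values         # any occupied voxel along the ray
    depth = torch.argmax((occ > 0.5).float(), dim=1).float()  # crude depth proxy
    return silhouette, depth

class TextureNet(nn.Module):
    """Image-to-image network that adds texture to a 2.5D sketch."""
    def __init__(self, z_dim=8):
        super().__init__()
        self.net = nn.Conv2d(2 + z_dim, 3, kernel_size=3, padding=1)

    def forward(self, silhouette, depth, z_texture):
        b, h, w = silhouette.shape
        z = z_texture.view(b, -1, 1, 1).expand(b, z_texture.shape[1], h, w)
        x = torch.cat([silhouette.unsqueeze(1), depth.unsqueeze(1), z], dim=1)
        return torch.tanh(self.net(x))

# Sampling an image from the three disentangled factors:
shape_g, tex_g = ShapeGenerator(), TextureNet()
z_shape, z_texture = torch.randn(1, 200), torch.randn(1, 8)
viewpoint = torch.rand(1, 2)                   # e.g. (azimuth, elevation)
sil, depth = render_25d(shape_g(z_shape), viewpoint)
image = tex_g(sil, depth, z_texture)           # (1, 3, 32, 32) output image
```

    Because the three factors are sampled independently, resampling only z_texture re-skins the same shape, and resampling only the viewpoint re-renders it, which is exactly what enables the 3D editing operations listed in the abstract.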

    Learning a Disentangled Embedding for Monocular 3D Shape Retrieval and Pose Estimation

    We propose a novel approach that jointly performs 3D shape retrieval and pose estimation from monocular images. To make the method robust to real-world image variations, e.g. complex textures and backgrounds, we learn an embedding space from 3D data that includes only the relevant information, namely the shape and pose. Our approach explicitly disentangles a shape vector and a pose vector, which alleviates both the pose bias in 3D shape retrieval and the categorical bias in pose estimation. We then train a CNN to map images to this embedding space, retrieve the closest 3D shape from the database, and estimate the 6D pose of the object. Our method achieves a median error of 10.3° for pose estimation and a top-1 accuracy of 0.592 for category-agnostic 3D object retrieval on the Pascal3D+ dataset, outperforming the previous state-of-the-art methods on both tasks.
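    Inference in this setup reduces to one encoder pass followed by a nearest-neighbor lookup in the shape half of the embedding. Below is a minimal sketch under assumed names (ImageEncoder, retrieve_and_estimate) and sizes; the paper's actual architecture and pose parameterization may differ.

```python
# Hypothetical sketch of retrieval + pose estimation with a disentangled embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """CNN that maps an RGB image to separate shape and pose vectors."""
    def __init__(self, shape_dim=64, pose_dim=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.shape_head = nn.Linear(16, shape_dim)  # pose-invariant shape code
        self.pose_head = nn.Linear(16, pose_dim)    # e.g. a unit quaternion

    def forward(self, img):
        feat = self.backbone(img)
        shape = F.normalize(self.shape_head(feat), dim=-1)
        pose = F.normalize(self.pose_head(feat), dim=-1)
        return shape, pose

def retrieve_and_estimate(encoder, img, db_shape_codes):
    """Retrieve the nearest 3D model by shape code and read off the pose."""
    shape, pose = encoder(img)
    sims = db_shape_codes @ shape.squeeze(0)   # cosine similarity (unit vectors)
    return sims.argmax().item(), pose          # (database index, rotation params)

encoder = ImageEncoder()
img = torch.randn(1, 3, 128, 128)                 # a query RGB image
db = F.normalize(torch.randn(1000, 64), dim=-1)   # precomputed codes of CAD models
idx, rot = retrieve_and_estimate(encoder, img, db)
```

    The disentanglement matters here: because the shape code carries no pose information, the same database entry is retrieved regardless of the viewpoint in the query image, and vice versa for the pose head.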

    InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

    This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that learns disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound on the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.
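    The practical core of InfoGAN is the variational lower bound L_I(G, Q) = E[log Q(c|x)] + H(c), implemented by adding an auxiliary head Q and a cross-entropy term on top of the usual GAN losses. A minimal sketch for a categorical code follows; the tiny MLPs are placeholders, though the 62-dimensional noise plus 10-way code split mirrors the paper's MNIST setup.

```python
# Minimal sketch of InfoGAN's mutual-information term for a categorical code.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_cat, z_dim = 10, 62                  # 10-way latent code + incompressible noise

G = nn.Sequential(nn.Linear(z_dim + n_cat, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())   # generator (MNIST-sized output)
Q = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                  nn.Linear(128, n_cat))            # Q(c|x); shares layers with the
                                                    # discriminator in the paper

def info_loss(batch_size=64):
    """Negative variational lower bound on I(c; G(z, c)) for a categorical c."""
    c = torch.randint(n_cat, (batch_size,))         # sample c ~ Cat(10)
    z = torch.randn(batch_size, z_dim)              # sample noise z ~ N(0, I)
    x = G(torch.cat([z, F.one_hot(c, n_cat).float()], dim=1))
    logits = Q(x)
    # Cross-entropy is -E[log Q(c|x)]; H(c) is constant and can be dropped.
    return F.cross_entropy(logits, c)

loss = info_loss()
loss.backward()   # gradients flow into both G and Q; add this to the GAN losses
```

    Minimizing this term forces the generator to keep c recoverable from its output, which is what pushes each dimension of the code toward a distinct, interpretable factor of variation such as digit identity.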