SG-VAE: Scene Grammar Variational Autoencoder to Generate New Indoor Scenes
Deep generative models have been used in recent years to learn coherent latent representations in order to synthesize high-quality images. In this work, we propose a neural network to learn a generative model for sampling consistent indoor scene layouts. Our method learns the co-occurrences, and appearance parameters such as shape and pose, for different object categories through a grammar-based auto-encoder, resulting in a compact and accurate representation for scene layouts. In contrast to existing grammar-based methods with a user-specified grammar, we construct the grammar automatically by extracting a set of production rules from reasoning about object co-occurrences in the training data. The extracted grammar is able to represent a scene by an augmented parse tree. The proposed auto-encoder encodes these parse trees to a latent code, and decodes the latent code to a parse tree, thereby ensuring the generated scene is always valid. We experimentally demonstrate that the proposed auto-encoder learns not only to generate valid scenes (i.e. the arrangements and appearances of objects), but also coherent latent representations where nearby latent samples decode to similar scene outputs. The obtained generative model is applicable to several computer vision tasks such as 3D pose and layout estimation from RGB-D data.
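The grammar-extraction idea in the abstract above can be illustrated with a toy sketch: derive candidate production rules from which object categories co-occur in training scenes. This is a minimal, hypothetical illustration only; the function name, data format, and thresholding scheme are assumptions, not the paper's actual algorithm.

```python
from collections import Counter
from itertools import combinations

def extract_rules(scenes, min_count=2):
    """Derive toy production rules from object co-occurrences.

    Each scene is a set of object category names. A candidate rule
    pairing categories (a, b) is kept when they co-occur in at least
    `min_count` training scenes. (Illustrative sketch only; the paper's
    grammar extraction is richer than simple pair counting.)
    """
    pair_counts = Counter()
    for objs in scenes:
        # count each unordered category pair once per scene
        for pair in combinations(sorted(objs), 2):
            pair_counts[pair] += 1
    return {pair for pair, n in pair_counts.items() if n >= min_count}

rules = extract_rules([
    {"bed", "nightstand", "lamp"},
    {"bed", "nightstand", "wardrobe"},
    {"desk", "chair", "lamp"},
])
# only ("bed", "nightstand") co-occurs in two scenes, so only it survives
```

In the actual method, such rules would seed a grammar whose augmented parse trees (carrying shape and pose attributes) are what the auto-encoder maps to and from the latent code.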
End-to-End Optimization of Scene Layout
We propose an end-to-end variational generative model for scene layout
synthesis conditioned on scene graphs. Unlike unconditional scene layout
generation, we use scene graphs as an abstract but general representation to
guide the synthesis of diverse scene layouts that satisfy relationships
included in the scene graph. This gives rise to more flexible control over the
synthesis process, allowing various forms of inputs such as scene layouts
extracted from sentences or inferred from a single color image. Using our
conditional layout synthesizer, we can generate various layouts that share the
same structure as the input example. In addition to this conditional generation
design, we also integrate a differentiable rendering module that enables layout
refinement using only 2D projections of the scene. Given a depth and a
semantics map, the differentiable rendering module enables optimizing over the
synthesized layout to fit the given input in an analysis-by-synthesis fashion.
Experiments suggest that our model achieves higher accuracy and diversity in
conditional scene synthesis and allows exemplar-based scene generation from
various input forms. Comment: CVPR 2020 (Oral). Project page: http://3dsln.csail.mit.edu
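The analysis-by-synthesis refinement described above, where a layout is optimized so that its rendering matches an observed depth map, can be sketched in one dimension. This is a toy stand-in, not the paper's differentiable rendering module: the Gaussian "renderer", the finite-difference gradient, and all parameter values are assumptions for illustration.

```python
import numpy as np

def render(x, us):
    """Toy 'renderer': depth footprint of a single object placed at x."""
    return np.exp(-(us - x) ** 2)

def refine(x0, target, us, lr=0.03, steps=300, eps=1e-4):
    """Analysis-by-synthesis refinement sketch: gradient descent on the
    rendered-vs-observed depth mismatch, with a finite-difference gradient
    standing in for the differentiable renderer's backward pass."""
    x = x0
    for _ in range(steps):
        loss = lambda p: np.sum((render(p, us) - target) ** 2)
        # central finite difference approximates d(loss)/dx
        g = (loss(x + eps) - loss(x - eps)) / (2 * eps)
        x -= lr * g
    return x

us = np.linspace(-3, 3, 61)
target = render(1.2, us)           # observed depth of an object truly at x=1.2
x_hat = refine(0.0, target, us)    # recovers a position close to 1.2
```

The real module would optimize full 3D object poses against depth and semantics maps, with gradients supplied by the differentiable renderer rather than finite differences.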