Generating images from graph-structured inputs, such as scene graphs, is
uniquely challenging due to the difficulty of aligning nodes and connections in
graphs with objects and their relations in images. Most existing methods
address this challenge by using scene layouts, which are image-like
representations of scene graphs designed to capture the coarse structures of
scene images. Because scene layouts are manually crafted, their alignment with
images may not be fully optimized, which leads to suboptimal compliance between
the generated images and the original scene graphs. To tackle this issue, we
propose to learn scene graph embeddings by directly optimizing their alignment
with images. Specifically, we pre-train an encoder to extract both global and
local information from scene graphs that is predictive of the corresponding
images, using two loss functions: a masked autoencoding loss and a contrastive
loss. The former trains the embeddings by reconstructing randomly masked image
regions, while the latter trains them to discriminate between images that
comply with the scene graph and those that do not. Given these embeddings,
we build a latent diffusion model to generate images from scene graphs. The
resulting method, called SGDiff, allows for the semantic manipulation of
generated images by modifying scene graph nodes and connections. On the Visual
Genome and COCO-Stuff datasets, we demonstrate that SGDiff outperforms
state-of-the-art methods, as measured by both the Inception Score and Fréchet
Inception Distance (FID) metrics. We will release our source code and trained
models at https://github.com/YangLing0818/SGDiff.
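
For concreteness, below is a minimal, hypothetical PyTorch sketch of the two pre-training objectives summarized above (masked autoencoding and contrastive alignment). The module names, tensor shapes, and the stand-in image encoder are illustrative assumptions, not the released SGDiff code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SGEncoder(nn.Module):
    # Hypothetical scene-graph encoder: mean-pools projected node features
    # into a single global embedding (a stand-in for the paper's encoder).
    def __init__(self, in_dim=128, embed_dim=256):
        super().__init__()
        self.proj = nn.Linear(in_dim, embed_dim)

    def forward(self, node_feats):                 # node_feats: (B, N, in_dim)
        return self.proj(node_feats).mean(dim=1)   # (B, embed_dim)

class PatchDecoder(nn.Module):
    # Hypothetical decoder predicting flattened image patches from the embedding.
    def __init__(self, embed_dim=256, num_patches=64, patch_dim=768):
        super().__init__()
        self.head = nn.Linear(embed_dim, num_patches * patch_dim)
        self.num_patches, self.patch_dim = num_patches, patch_dim

    def forward(self, z):                          # z: (B, embed_dim)
        return self.head(z).view(-1, self.num_patches, self.patch_dim)

def masked_autoencoding_loss(pred, target, mask):
    # Reconstruction error averaged over the randomly masked patches only.
    per_patch = F.mse_loss(pred, target, reduction="none").mean(dim=-1)  # (B, P)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)

def contrastive_loss(sg_embed, img_embed, temperature=0.07):
    # InfoNCE-style loss: matching scene-graph/image pairs are positives,
    # all other pairs in the batch serve as negatives.
    sg = F.normalize(sg_embed, dim=-1)
    im = F.normalize(img_embed, dim=-1)
    logits = sg @ im.t() / temperature
    labels = torch.arange(sg.size(0), device=sg.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

# Toy usage with random tensors; shapes and the image features are placeholders.
B, N = 4, 10
encoder, decoder = SGEncoder(), PatchDecoder()
z = encoder(torch.randn(B, N, 128))
recon = decoder(z)
target_patches = torch.randn(B, 64, 768)
mask = (torch.rand(B, 64) < 0.5).float()           # 1 = patch was masked
img_embed = torch.randn(B, 256)                    # from a separate image encoder
loss = masked_autoencoding_loss(recon, target_patches, mask) + contrastive_loss(z, img_embed)

In such a setup, the total pre-training objective would simply be the sum (or a weighted sum) of the two losses, so the scene-graph embedding is trained both to reconstruct masked image content and to align with the paired image representation.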