Denoising diffusion models are a powerful class of generative models used to
capture complex distributions of real-world signals. However, their
applicability is limited to scenarios where training samples are readily
available, which is not always the case in real-world applications. For
example, in inverse graphics, the goal is to generate samples from a
distribution of 3D scenes that align with a given image, but ground-truth 3D
scenes are unavailable and only 2D images are accessible. To address this
limitation, we propose a novel class of denoising diffusion probabilistic
models that learn to sample from distributions of signals that are never
directly observed. Instead, these signals are measured indirectly through a
known differentiable forward model, which produces partial observations of the
unknown signal. Our approach integrates the forward model directly into the
denoising process. This integration connects the
generative modeling of observations with the generative modeling of the
underlying signals, allowing for end-to-end training of a conditional
generative model over signals. During inference, our approach enables sampling
from the distribution of underlying signals that are consistent with a given
partial observation. We demonstrate the effectiveness of our method on three
challenging computer vision tasks. For instance, in the context of inverse
graphics, our model enables direct sampling from the distribution of 3D scenes
that align with a single 2D input image.

Project page: https://diffusion-with-forward-models.github.io
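
To make the training idea concrete, the sketch below shows one plausible reading of a single training step, written in PyTorch. All interfaces here are illustrative assumptions rather than the paper's actual code: denoiser and forward_model are hypothetical callables, the linear noise schedule is a generic DDPM-style choice, and observations are assumed to be image-shaped tensors. The key structure it illustrates is from the abstract itself: the diffusion process runs in observation space (the underlying signal is never available), the denoiser predicts an estimate of the unobserved signal, and the known differentiable forward model maps that estimate back to observation space, where the loss is computed so that gradients flow through the forward model into the denoiser end to end.

    import torch
    import torch.nn.functional as F

    def training_step(denoiser, forward_model, context, observation, num_steps=1000):
        """One training step, sketched under assumptions: `denoiser` and
        `forward_model` are hypothetical stand-ins for the paper's networks;
        the noise schedule is a generic choice, not the paper's."""
        b = observation.shape[0]
        t = torch.randint(1, num_steps + 1, (b,), device=observation.device)

        # Generic linear noise schedule (an assumption); observations are
        # assumed to be 4D image tensors of shape (B, C, H, W).
        alpha_bar = (1.0 - t.float() / num_steps).view(b, 1, 1, 1).clamp(min=1e-4)

        # Corrupt the observation with noise: the underlying signal is never
        # directly observed, so diffusion runs in observation space.
        noise = torch.randn_like(observation)
        noisy_obs = alpha_bar.sqrt() * observation + (1.0 - alpha_bar).sqrt() * noise

        # The denoiser predicts an estimate of the unobserved signal (e.g., a
        # 3D scene representation) from the noisy observation and the context.
        signal_estimate = denoiser(noisy_obs, t, context)

        # The known differentiable forward model (e.g., a renderer) maps the
        # signal estimate back to observation space; computing the loss there
        # lets gradients reach the denoiser end to end.
        predicted_obs = forward_model(signal_estimate)
        return F.mse_loss(predicted_obs, observation)

At inference time, one would presumably run the standard reverse diffusion loop with the same denoiser; because every denoising step passes through the forward model, the final signal estimate remains consistent with the conditioning partial observation.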