Recent diffusion probabilistic models (DPMs) have shown remarkable abilities in content generation; however, they often suffer from complex forward processes, resulting in inefficient solutions for the reverse process and prolonged sampling times. In this paper, we aim to address these challenges by focusing on the diffusion process itself: we propose to decouple the intricate diffusion process into two comparatively simpler processes to improve generative efficacy and speed. In particular, we present a novel diffusion paradigm named DDM (Decoupled Diffusion Models) based on the Itô diffusion process, in which the image distribution is approximated by an explicit transition probability while the noise path is controlled by the standard Wiener process. We find that decoupling the diffusion process reduces the learning difficulty and that the explicit transition probability significantly improves the generation speed.
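To make the decoupling concrete, one simple instance consistent with this description (our notation, not necessarily the paper's exact parameterization) takes the drift $\mathbf{f}_t$ to be an analytic attenuation term that deterministically carries the image component to zero, so the transition probability is explicit while all stochasticity comes from the Wiener process:
\[
\mathrm{d}\mathbf{x}_t = \mathbf{f}_t\,\mathrm{d}t + \mathrm{d}\mathbf{w}_t,
\qquad
q(\mathbf{x}_t \mid \mathbf{x}_0) = \mathcal{N}\!\left(\mathbf{x}_0 + \int_0^t \mathbf{f}_s\,\mathrm{d}s,\; t\mathbf{I}\right),
\]
so that $\mathbf{x}_t = \mathbf{x}_0 + \int_0^t \mathbf{f}_s\,\mathrm{d}s + \sqrt{t}\,\boldsymbol{\epsilon}$ with $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$: the image component follows a closed-form path, and only the Gaussian noise must be inverted at sampling time.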
We formulate a new training objective for DPMs that enables the model to learn to predict the noise and image components separately.
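A natural form of such a decoupled objective, sketched under the same assumed forward process and with hypothetical network heads $\mathbf{f}_\theta$ and $\boldsymbol{\epsilon}_\theta$ (names ours), is
\[
\min_\theta\; \mathbb{E}_{\mathbf{x}_0,\,\boldsymbol{\epsilon},\,t}
\left[ \left\| \mathbf{f}_\theta(\mathbf{x}_t, t) - \mathbf{f}_t \right\|^2
+ \left\| \boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t) - \boldsymbol{\epsilon} \right\|^2 \right],
\]
where the first term supervises the image component and the second the noise component.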
Moreover, given the novel forward diffusion equation, we derive the reverse denoising formula of DDM, which naturally supports few-step generation without ordinary differential equation (ODE) based accelerators.
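Under the sketched forward model above, one consistent single reverse step from time $t$ to $t - \Delta t$ (illustrative, not necessarily the paper's exact formula) substitutes the predicted image path and noise into the closed-form marginal:
\[
\mathbf{x}_{t-\Delta t} = \mathbf{x}_t - \int_{t-\Delta t}^{t} \mathbf{f}_\theta(\mathbf{x}_t, t)\,\mathrm{d}s
- \sqrt{t}\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t)
+ \sqrt{t-\Delta t}\,\bar{\boldsymbol{\epsilon}},
\qquad \bar{\boldsymbol{\epsilon}} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}).
\]
Because the image component is known in closed form, $\Delta t$ can be taken large, which is why few-step sampling requires no ODE-based accelerator.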
Our experiments demonstrate that DDM outperforms previous DPMs by a large margin in the few-function-evaluation setting and achieves comparable performance in the many-function-evaluation setting. We also show that our framework can be applied to image-conditioned generation and high-resolution image synthesis, and that it can generate high-quality images with only 10 function evaluations.