IC3D: Image-Conditioned 3D Diffusion for Shape Generation
In recent years, Denoising Diffusion Probabilistic Models (DDPMs) have obtained
state-of-the-art results in many generative tasks, outperforming GANs and other
classes of generative models. In particular, they have achieved impressive
results in various image generation sub-tasks, including conditional generation
tasks such as text-guided image synthesis. Given the success of DDPMs in 2D
generation, they have more recently been applied to 3D shape generation, where
they outperform previous approaches and reach state-of-the-art results.
However, 3D data pose additional challenges, such as the choice of 3D
representation, which affects design choices and model efficiency. While
achieving state-of-the-art generation quality, existing 3D DDPM works make
little or no use of guidance, remaining mostly unconditional or
class-conditional. In this paper, we present IC3D, the first Image-Conditioned
3D Diffusion model that generates 3D shapes under image guidance. It is also
the first 3D DDPM model to adopt voxels as its 3D representation. To guide our
DDPM, we present and leverage CISP (Contrastive Image-Shape Pre-training), a
model that jointly embeds images and shapes via contrastive pre-training,
inspired by text-to-image DDPM works. Our generative diffusion model
outperforms the state of the art in 3D generation quality and diversity.
Furthermore, through a side-by-side human evaluation, we show that human
evaluators prefer our generated shapes to those of a state-of-the-art
single-view 3D reconstruction model, both in quality and in coherence with the
query image.
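As a concrete illustration of the contrastive pre-training idea behind CISP, here is a minimal PyTorch sketch of a CLIP-style symmetric InfoNCE objective over paired image and shape embeddings. The abstract does not specify the loss; this assumes the standard CLIP formulation, and the function name, temperature value, and the assumption that both encoders output same-dimensional embeddings are illustrative, not details from the paper.

```python
import torch
import torch.nn.functional as F

def cisp_contrastive_loss(image_emb: torch.Tensor,
                          shape_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb: (B, D) embeddings from an image encoder.
    shape_emb: (B, D) embeddings from a 3D shape (e.g. voxel) encoder.
    Matching image/shape pairs share the same batch index.
    """
    # Project both modalities onto the unit hypersphere.
    image_emb = F.normalize(image_emb, dim=-1)
    shape_emb = F.normalize(shape_emb, dim=-1)

    # Pairwise cosine similarities, scaled by the temperature.
    logits = image_emb @ shape_emb.t() / temperature

    # The i-th image matches the i-th shape: targets lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-shape and shape-to-image cross-entropies.
    loss_i2s = F.cross_entropy(logits, targets)
    loss_s2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2s + loss_s2i) / 2
```

Trained this way, the two encoders map an image and its corresponding shape to nearby points in a shared embedding space, which is what allows the image embedding to serve as a guidance signal for the 3D diffusion model.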