Recent works on text-to-3D generation show that using only 2D diffusion
supervision for 3D generation tends to produce results with inconsistent
appearances (e.g., faces on the back view) and inaccurate shapes (e.g., animals
with extra legs). Existing methods mainly address this issue by retraining
diffusion models on images rendered from 3D data to enforce multi-view
consistency, but they struggle to balance 2D generation quality with 3D
consistency. In this paper, we present a new framework, Sculpt3D, that equips
the current pipeline with explicit injection of 3D priors from retrieved
reference objects without retraining the 2D diffusion model. Specifically, we
demonstrate that high-quality and diverse 3D geometry can be guaranteed by
keypoint supervision through a sparse ray sampling approach. Moreover, to
ensure accurate appearances across different views, we further modulate the
output of the 2D diffusion model toward the correct patterns of the template
views without altering the generated object's style. These two decoupled
designs (see the illustrative sketches below) effectively
harness 3D information from reference objects to generate 3D objects while
preserving the generation quality of the 2D diffusion model. Extensive
experiments show that our method substantially improves multi-view consistency
while retaining fidelity and diversity. Our project page is available at:
https://stellarcheng.github.io/Sculpt3D/.
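
A minimal sketch of the keypoint-supervision idea, assuming a radiance field
exposed through a hypothetical render_opacity(origins, dirs) interface and a
set of 3D keypoints retrieved from a reference object; this illustrates sparse
ray sampling toward keypoints and is not the exact loss used in Sculpt3D:

import torch
import torch.nn.functional as F

def sparse_ray_keypoint_loss(render_opacity, cam_origin, keypoints, num_rays=32):
    # render_opacity: hypothetical callable (origins, dirs) -> accumulated
    # opacity per ray; keypoints: (K, 3) keypoints of the retrieved reference.
    idx = torch.randperm(keypoints.shape[0])[:num_rays]   # sparse subset of keypoints
    targets = keypoints[idx]                               # (R, 3)
    dirs = F.normalize(targets - cam_origin, dim=-1)       # rays aimed at keypoints
    origins = cam_origin.expand_as(dirs)
    alpha = render_opacity(origins, dirs)                  # (R,) accumulated opacities
    # Rays aimed at reference keypoints should terminate on a surface,
    # i.e., accumulated opacity close to 1.
    return ((1.0 - alpha) ** 2).mean()

Because only a sparse set of rays is constrained, the reference object fixes
the coarse geometry while leaving the rest of the shape free to vary.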
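A similarly hedged sketch of appearance modulation toward a template view,
using a simple low-/high-frequency split as an assumed stand-in for the actual
modulation operator: the coarse layout comes from the rendered template view
while the prediction keeps its own high-frequency detail, so view patterns are
corrected without changing the generated style:

import torch
import torch.nn.functional as F

def modulate_toward_template(pred_rgb, template_rgb, kernel_size=31):
    # pred_rgb:     (B, 3, H, W) image predicted by the 2D diffusion model.
    # template_rgb: (B, 3, H, W) rendering of the retrieved reference object
    #               at the same camera pose (assumed inputs).
    pad = kernel_size // 2
    low_pred = F.avg_pool2d(pred_rgb, kernel_size, stride=1, padding=pad)
    low_tmpl = F.avg_pool2d(template_rgb, kernel_size, stride=1, padding=pad)
    detail = pred_rgb - low_pred        # high-frequency content: texture and style
    return low_tmpl + detail            # template view pattern + generated detail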