SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity
Score distillation has emerged as one of the most prevalent approaches for
text-to-3D asset synthesis. Essentially, score distillation updates 3D
parameters by lifting and back-propagating scores averaged over different
views. In this paper, we reveal that the gradient estimation in score
distillation inherently suffers from high variance. Through the lens of variance
reduction, the effectiveness of SDS and VSD can be interpreted as applications
of various control variates to the Monte Carlo estimator of the distilled
score. Motivated by this rethinking and based on Stein's identity, we propose a
more general solution to reduce variance for score distillation, termed Stein
Score Distillation (SSD). SSD incorporates control variates constructed via
Stein's identity, allowing for arbitrary baseline functions. This enables us to
include flexible guidance priors and network architectures to explicitly
optimize for variance reduction. In our experiments, the overall pipeline,
dubbed SteinDreamer, is implemented by instantiating the control variate with a
monocular depth estimator. The results suggest that SSD can effectively reduce
the distillation variance and consistently improve visual quality for both
object- and scene-level generation. Moreover, we demonstrate that SteinDreamer
achieves faster convergence than existing methods due to more stable gradient
updates.
Project page: https://vita-group.github.io/SteinDreamer
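As a rough illustration of the variance-reduction mechanism described above (assumed notation, not the paper's exact derivation), Stein's identity supplies a zero-mean term that can be subtracted from a Monte Carlo score estimate without biasing it:

```latex
% Sketch of a Stein-identity control variate (assumed notation, not the
% paper's exact formulation). For a smooth density q and a sufficiently
% regular baseline function \phi, Stein's identity gives a zero-mean term:
\[
\mathbb{E}_{x \sim q}\bigl[\nabla_x \log q(x)\,\phi(x) + \nabla_x \phi(x)\bigr] = 0 .
\]
% Subtracting any multiple of this term from a Monte Carlo estimator g(x)
% of the distilled score leaves its expectation unchanged:
\[
\hat{g}(x) = g(x) - \mu \bigl(\nabla_x \log q(x)\,\phi(x) + \nabla_x \phi(x)\bigr),
\]
% where the weight \mu and the baseline \phi (e.g., built from a monocular
% depth estimator, as in SteinDreamer) can be chosen to minimize the
% variance of \hat{g}.
```

Setting the weight to zero recovers the plain estimator; under this reading, SDS and VSD correspond to particular fixed choices of the control variate, whereas SSD optimizes it explicitly.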
Taming Mode Collapse in Score Distillation for Text-to-3D Generation
Despite the remarkable performance of score distillation in text-to-3D
generation, such techniques notoriously suffer from view inconsistency issues,
also known as the "Janus" artifact, in which the generated object presents a
front face toward every view, resulting in multiple front faces. Although
empirically effective methods have approached
this problem via score debiasing or prompt engineering, a more rigorous
perspective to explain and tackle this problem remains elusive. In this paper,
we reveal that the existing score distillation-based text-to-3D generation
frameworks degenerate into maximum-likelihood seeking on each view independently
and thus suffer from the mode collapse problem, manifesting as the Janus
artifact in practice. To tame mode collapse, we improve score distillation by
re-establishing the entropy term in the corresponding variational objective,
which is applied to the distribution of rendered images. Maximizing the entropy
encourages diversity among different views in generated 3D assets, thereby
mitigating the Janus problem. Based on this new objective, we derive a new
update rule for 3D score distillation, dubbed Entropic Score Distillation
(ESD). We theoretically reveal that ESD can be simplified and implemented by
just adopting the classifier-free guidance trick upon variational score
distillation. Although embarrassingly straightforward, ESD is shown by our
extensive experiments to be an effective treatment for Janus artifacts in score
distillation.
Project page: https://vita-group.github.io/3D-Mode-Collapse
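As a rough sketch of the entropy-regularized objective described above (assumed notation and coefficient, not the paper's exact derivation), re-introducing the entropy of the rendered-image distribution against the text-conditioned diffusion prior gives:

```latex
% Sketch of an entropy-regularized distillation objective (assumed notation,
% not the paper's exact form). Here q_\theta is the distribution of rendered
% images and p(x | y) is the text-conditioned diffusion prior.
\[
\min_{\theta}\; \mathbb{E}_{x \sim q_\theta}\bigl[-\log p(x \mid y)\bigr]
\;-\; \lambda\,\mathcal{H}(q_\theta),
\qquad
\mathcal{H}(q_\theta) = -\,\mathbb{E}_{x \sim q_\theta}\bigl[\log q_\theta(x)\bigr].
\]
% The resulting gradient mixes the prior score \nabla_x \log p(x | y) with the
% rendered-image score \nabla_x \log q_\theta(x) at a ratio set by \lambda;
% rescaling this mixture is what permits a classifier-free-guidance-style
% implementation on top of variational score distillation.
```

Maximizing the entropy term penalizes collapsing every rendered view onto the same front-facing mode, which is how this objective targets the Janus artifact.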