We introduce GeoWizard, a new generative foundation model designed for
estimating geometric attributes, e.g., depth and normals, from single images.
While significant research has already been conducted in this area, the
progress has been substantially limited by the low diversity and poor quality
of publicly available datasets. As a result, prior works are either constrained
to limited scenarios or fail to capture geometric details. In this paper, we
demonstrate that generative models, as
opposed to traditional discriminative models (e.g., CNNs and Transformers), can
effectively address this inherently ill-posed problem. We further show that
leveraging diffusion priors can markedly improve generalization, detail
preservation, and efficiency in resource usage. Specifically, we extend the
original Stable Diffusion model to jointly predict depth and normals, allowing
mutual information exchange and high consistency between the two
representations. More importantly, we propose a simple yet effective strategy
to segregate the complex data distribution of various scenes into distinct
sub-distributions. This strategy enables our model to recognize different scene
layouts, capturing 3D geometry with remarkable fidelity. GeoWizard sets new
benchmarks for zero-shot depth and normal prediction, significantly enhancing
many downstream applications such as 3D reconstruction, 2D content creation,
and novel viewpoint synthesis.

Project page: https://fuxiao0719.github.io/projects/geowizard
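As a loose illustration of the joint depth-and-normal formulation summarized above, the sketch below pairs a toy denoiser over concatenated depth and normal latents with a scene-type embedding standing in for the sub-distribution switch. This is a minimal sketch under assumed simplifications: the class name, channel sizes, and the scene-type categories are hypothetical and not taken from the paper, and the small convolutional stack is only a stand-in for the actual diffusion U-Net.

```python
import torch
import torch.nn as nn

class JointGeometryDenoiser(nn.Module):
    """Toy stand-in for a diffusion denoiser that predicts noise for depth and
    normal latents together, conditioned on an image latent and a scene-type
    label (hypothetical structure, for illustration only)."""

    def __init__(self, latent_ch=4, num_scene_types=3, embed_dim=32):
        super().__init__()
        # One embedding per assumed scene sub-distribution (e.g. indoor / outdoor / object).
        self.scene_embed = nn.Embedding(num_scene_types, embed_dim)
        # Input: image latent + depth latent + normal latent, concatenated on channels.
        in_ch = latent_ch * 3
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + embed_dim, 64, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.SiLU(),
            # Predict noise for both branches jointly, so information is shared.
            nn.Conv2d(64, latent_ch * 2, 3, padding=1),
        )

    def forward(self, image_lat, depth_lat, normal_lat, scene_id):
        b, _, h, w = image_lat.shape
        # Broadcast the scene-type embedding spatially and append it as extra channels.
        cond = self.scene_embed(scene_id).view(b, -1, 1, 1).expand(-1, -1, h, w)
        x = torch.cat([image_lat, depth_lat, normal_lat, cond], dim=1)
        eps = self.net(x)
        # Split the joint prediction back into per-branch noise estimates.
        return eps.chunk(2, dim=1)

# Usage: one joint denoising step on random latents.
model = JointGeometryDenoiser()
img = torch.randn(2, 4, 32, 32)
depth = torch.randn(2, 4, 32, 32)
normal = torch.randn(2, 4, 32, 32)
scene = torch.tensor([0, 1])  # e.g. indoor vs. outdoor sub-distribution
eps_depth, eps_normal = model(img, depth, normal, scene)
```

Predicting both branches from one shared trunk is what allows mutual information exchange between the depth and normal estimates; the explicit scene-type conditioning mirrors, in spirit, the idea of splitting a complex data distribution into distinct sub-distributions.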