Stanford-ORB: A Real-World 3D Object Inverse Rendering Benchmark
We introduce Stanford-ORB, a new real-world 3D object inverse rendering
Benchmark. Recent advances in inverse rendering have enabled a wide range of
real-world applications in 3D content generation, moving rapidly from research
and commercial use cases to consumer devices. While the results continue to
improve, there is no real-world benchmark that can quantitatively assess and
compare the performance of various inverse rendering methods. Existing
real-world datasets typically only consist of the shape and multi-view images
of objects, which are not sufficient for evaluating the quality of material
recovery and object relighting. Methods capable of recovering material and
lighting often resort to synthetic data for quantitative evaluation, which on
the other hand does not guarantee generalization to complex real-world
environments. We introduce a new dataset of real-world objects captured under a
variety of natural scenes with ground-truth 3D scans, multi-view images, and
environment lighting. Using this dataset, we establish the first comprehensive
real-world evaluation benchmark for object inverse rendering tasks from
in-the-wild scenes, and compare the performance of various existing methods.

Comment: NeurIPS 2023 Datasets and Benchmarks Track. The first two authors contributed equally to this work. Project page: https://stanfordorb.github.io
CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering
Intrinsic image decomposition is a challenging, long-standing computer vision
problem for which ground truth data is very difficult to acquire. We explore
the use of synthetic data for training CNN-based intrinsic image decomposition
models, then applying these learned models to real-world images. To that end,
we present CGIntrinsics, a new, large-scale dataset of physically-based rendered images
of scenes with full ground truth decompositions. The rendering process we use
is carefully designed to yield high-quality, realistic images, which we find to
be crucial for this problem domain. We also propose a new end-to-end training
method that learns better decompositions by leveraging CGIntrinsics, and optionally IIW
and SAW, two recent datasets of sparse annotations on real-world images.
Surprisingly, we find that a decomposition network trained solely on our
synthetic data outperforms the state-of-the-art on both IIW and SAW, and
performance improves even further when IIW and SAW data is added during
training. Our work demonstrates the surprising effectiveness of
carefully-rendered synthetic data for the intrinsic images task.

Comment: Paper for 'CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering' published in ECCV 2018
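The decomposition evaluated above factors an image into a reflectance (albedo) layer and a shading layer. A minimal NumPy sketch of that image-formation model (a toy illustration, not the paper's network; all array names here are hypothetical):

```python
import numpy as np

# Intrinsic image decomposition models an observed image I as the
# pixel-wise product of reflectance R and shading S:  I = R * S.
rng = np.random.default_rng(0)
H, W = 4, 4
R = rng.uniform(0.2, 1.0, size=(H, W))  # reflectance (albedo), in (0, 1]
S = rng.uniform(0.2, 1.0, size=(H, W))  # shading, in (0, 1]
I = R * S                               # synthesized "observed" image

# Given I and an estimate of the shading layer, reflectance follows by
# element-wise division. The model has a global scale ambiguity
# (R*k, S/k gives the same I), which is one reason evaluation needs
# ground truth (synthetic data) or human judgments (IIW/SAW).
R_est = I / S
assert np.allclose(R_est, R)
```

The scale ambiguity noted in the comments is why metrics on IIW use relative reflectance judgments rather than absolute pixel values.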