Virtual Home Staging: Inverse Rendering and Editing an Indoor Panorama under Natural Illumination
We propose a novel inverse rendering method that enables the transformation of existing indoor panoramas with new indoor furniture layouts under natural illumination. To achieve this, we captured indoor HDR panoramas along with real-time outdoor hemispherical HDR photographs. Indoor and outdoor HDR images were linearly calibrated with measured absolute luminance values for accurate scene relighting. Our method consists of three key components: (1) panoramic furniture detection and removal, (2) automatic floor layout design, and (3) global rendering with scene geometry, new furniture objects, and a real-time outdoor photograph. We demonstrate the effectiveness of our workflow in rendering indoor scenes under different outdoor illumination conditions. Additionally, we contribute a new calibrated HDR (Cali-HDR) dataset that consists of 137 calibrated indoor panoramas and their associated outdoor photographs.
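The linear calibration step the abstract describes (scaling HDR pixel values so they agree with a measured absolute luminance) can be illustrated with a minimal sketch. The gray-card reference patch, the Rec. 709 luma weights, and the single-scale-factor model below are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

# Rec. 709 luma coefficients for converting linear RGB to relative luminance.
LUMA = np.array([0.2126, 0.7152, 0.0722])

def calibrate_hdr(hdr_rgb: np.ndarray,
                  measured_cd_m2: float,
                  patch: tuple[slice, slice]) -> np.ndarray:
    """Linearly scale a linear HDR image so a reference patch matches a
    spot-meter reading in cd/m^2 (absolute luminance)."""
    relative = hdr_rgb @ LUMA            # per-pixel relative luminance
    mean_patch = relative[patch].mean()  # image luminance of the metered patch
    k = measured_cd_m2 / mean_patch      # single linear scale factor
    return hdr_rgb * k                   # pixel values now in cd/m^2

# Example (hypothetical values): a gray card at rows 500-540, cols 900-940,
# metered at 120 cd/m^2:
# pano = calibrate_hdr(pano, 120.0, (slice(500, 540), slice(900, 940)))
```

Because the scale is linear, one factor per exposure suffices as long as the camera response has already been linearized.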
A Practical Approach to 3D Scanning in the Presence of Interreflections, Subsurface Scattering and Defocus
Global or indirect illumination effects such as interreflections and subsurface scattering severely degrade the performance of structured light-based 3D scanning. In this paper, we analyze the errors in structured light, caused by both long-range (interreflections) and short-range (subsurface scattering) indirect illumination. The errors depend on the frequency of the projected patterns, and the nature of indirect illumination. In particular, we show that long-range effects cause decoding errors for low-frequency patterns, whereas short-range effects affect high-frequency patterns. Based on this analysis, we present a practical 3D scanning system which works in the presence of a broad range of indirect illumination. First, we design binary structured light patterns that are resilient to individual indirect illumination effects using simple logical operations and tools from combinatorial mathematics. Scenes exhibiting multiple phenomena are handled by combining results from a small ensemble of such patterns. This combination also allows detecting any residual errors that are corrected by acquiring a few additional images. Our methods can be readily incorporated into existing scanning systems without significant overhead in terms of capture time or hardware. We show results for several scenes with complex shape and material properties.
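One common instance of the "simple logical operations" mentioned here is an XOR code: every conventional Gray-code pattern is XORed with the highest-frequency pattern before projection, so all projected patterns are high frequency and therefore resilient to long-range interreflections, and the XOR is inverted at decode time. The sketch below shows this construction; the exact pattern family, bit ordering, and binarization step are illustrative assumptions rather than the paper's precise design:

```python
import numpy as np

def gray_code_patterns(n_bits: int, width: int) -> np.ndarray:
    """Conventional binary Gray-code column patterns, shape (n_bits, width)."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)  # binary-reflected Gray code of column index
    bits = (gray[None, :] >> np.arange(n_bits - 1, -1, -1)[:, None]) & 1
    return bits.astype(np.uint8)

def xor_patterns(patterns: np.ndarray) -> np.ndarray:
    """XOR every pattern with the last (highest-frequency) one, so every
    projected pattern is high frequency; the base pattern is kept as-is."""
    base = patterns[-1]
    out = patterns ^ base[None, :]
    out[-1] = base
    return out

def decode(captured_bits: np.ndarray) -> np.ndarray:
    """Invert the XOR on binarized captures, then map Gray code to binary."""
    base = captured_bits[-1]
    gray_bits = captured_bits ^ base[None, ...]
    gray_bits[-1] = base
    # Gray-to-binary: cumulative XOR down the bit planes.
    return np.bitwise_xor.accumulate(gray_bits, axis=0)
```

A complementary low-frequency ensemble member would target short-range effects such as subsurface scattering; disagreement between ensemble decodings flags the residual errors the abstract mentions.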
WALT3D: Generating Realistic Training Data from Time-Lapse Imagery for Reconstructing Dynamic Objects under Occlusion
Current methods for 2D and 3D object understanding struggle with severe occlusions in busy urban environments, partly due to the lack of large-scale labeled ground-truth annotations for learning occlusion. In this work, we introduce a novel framework for automatically generating a large, realistic dataset of dynamic objects under occlusions using freely available time-lapse imagery. By leveraging off-the-shelf 2D (bounding box, segmentation, keypoint) and 3D (pose, shape) predictions as pseudo-groundtruth, unoccluded 3D objects are identified automatically and composited into the background in a clip-art style, ensuring realistic appearances and physically accurate occlusion configurations. The resulting clip-art image with pseudo-groundtruth enables efficient training of object reconstruction methods that are robust to occlusions. Our method demonstrates significant improvements in both 2D and 3D reconstruction, particularly in scenarios with heavily occluded objects like vehicles and people in urban scenes.

Comment: To appear in CVPR 2024. Homepage: https://www.cs.cmu.edu/~walt3
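The clip-art compositing step can be sketched as a painter's algorithm: segmented, unoccluded object crops are pasted into the background back-to-front by estimated depth, so nearer objects occlude farther ones and the occlusion configuration is known exactly. The per-object scalar depth, the pre-aligned crops, and the dictionary layout below are assumptions for illustration:

```python
import numpy as np

def composite_clipart(background: np.ndarray,
                      objects: list[dict]) -> tuple[np.ndarray, np.ndarray]:
    """Paste unoccluded object crops into a background back-to-front.

    Each object dict holds 'rgb' (H, W, 3), 'mask' (H, W bool), and 'depth'
    (scalar distance to camera); all are assumed pre-aligned to the frame.
    Returns the composite and an instance-id map recording who is visible,
    which directly encodes the resulting occlusion configuration.
    """
    out = background.copy()
    instance_ids = np.zeros(background.shape[:2], dtype=np.int32)
    # Farthest first, so each nearer paste correctly occludes earlier ones.
    for i, obj in enumerate(sorted(objects, key=lambda o: -o["depth"]), 1):
        m = obj["mask"]
        out[m] = obj["rgb"][m]
        instance_ids[m] = i
    return out, instance_ids
```

The instance-id map doubles as free occlusion ground truth: any pixel of an object overwritten by a later paste is, by construction, occluded.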
Novel Depth Cues from Uncalibrated Near-field Lighting
We present the first method to compute depth cues from images taken solely under uncalibrated near point lighting. A stationary scene is illuminated by a point source that is moved approximately along a line or in a plane. We observe the brightness profile at each pixel and demonstrate how to obtain three novel cues: plane-scene intersections, depth ordering and mirror symmetries. These cues are defined with respect to the line/plane in which the light source moves, and not the camera viewpoint. Plane-Scene Intersections are detected by finding those scene points that are closest to the light source path at some time instance. Depth Ordering for scenes with homogeneous BRDFs is obtained by sorting pixels according to their shortest distances from a plane containing the light source. Mirror Symmetry pairs for scenes with homogeneous …
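A minimal sketch of how the first two profile cues might be extracted from an image stack: per-pixel brightness peaks when the moving source passes closest to that scene point, so the peak time groups points by position along the light path, and for homogeneous BRDFs the peak value (inverse-square falloff) orders points by distance from the path. The inverse-square reasoning follows the abstract, but the exact detection procedure here is an illustrative simplification:

```python
import numpy as np

def near_field_cues(stack: np.ndarray):
    """Derive profile-based cues from a (T, H, W) brightness stack captured
    while a point light moves along a line.

    Returns the per-pixel frame of closest approach, the brightness at that
    frame, and a near-to-far pixel ordering relative to the light path
    (not the camera viewpoint).
    """
    peak_time = stack.argmax(axis=0)   # frame of closest approach
    peak_value = stack.max(axis=0)     # brightness at closest approach
    # Pixels sharing a peak frame t lie closest to the source position at
    # time t: grouping by peak_time exposes plane-scene intersections.
    # For homogeneous BRDFs, sorting by descending peak value gives a
    # near-to-far depth ordering with respect to the light path.
    order = np.argsort(-peak_value, axis=None)
    return peak_time, peak_value, order
```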