Combining Full Spherical Depth and HDR Images to Implement a Virtual Camera

Abstract

Based on a real-world scene description consisting of N spherical depth images from a laser range scanner and M spherical (high dynamic range) texture images, we present a method that allows a virtual camera to be placed at an arbitrary position in the scene. The proposed method treats the position and orientation of each depth and texture scan as unknown. A possible application is to “capture” background images or light probes for photorealistic rendering. The first step towards this goal is to register the depth images relative to each other and mesh them, which is done with a combination of well-known standard techniques. The next step is to register the texture scans relative to the geometry. For this purpose, we developed an error model based on the image coordinates of features identified in the depth and texture images, together with an appropriate minimization algorithm. The final step, “capturing” the new image, is implemented with a ray-tracing-based algorithm. The capabilities and remaining problems of this approach are discussed and demonstrated with high-resolution real-world data.
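The texture-to-geometry registration step described above can be sketched as a reprojection-error formulation: 3D feature points recovered from the registered depth geometry are projected into a spherical (equirectangular) texture image under a candidate pose, and the residuals against the observed image coordinates are what a least-squares minimizer would drive toward zero. The following is a minimal sketch under assumed conventions (equirectangular mapping, rotation `R` and translation `t` as the texture scan's pose); the function names and parametrization are illustrative, not the paper's actual implementation.

```python
import numpy as np

def spherical_project(point, R, t, width, height):
    """Project a 3D point into an equirectangular texture image.

    R (3x3 rotation) and t (3-vector) form a hypothetical pose of the
    texture scan; longitude maps to the x axis, latitude to the y axis.
    """
    p = R @ (np.asarray(point, dtype=float) - t)
    lon = np.arctan2(p[1], p[0])                # in [-pi, pi]
    lat = np.arcsin(p[2] / np.linalg.norm(p))   # in [-pi/2, pi/2]
    x = (lon + np.pi) / (2.0 * np.pi) * width
    y = (np.pi / 2.0 - lat) / np.pi * height
    return np.array([x, y])

def reprojection_residuals(points_3d, observed_xy, R, t, width, height):
    """Stacked residuals between projected depth-derived features and
    their observed texture-image coordinates -- the error model a
    minimizer (e.g. nonlinear least squares over R, t) would reduce."""
    projected = np.array([spherical_project(p, R, t, width, height)
                          for p in points_3d])
    return (projected - np.asarray(observed_xy, dtype=float)).ravel()
```

For example, with an identity pose a point on the +x axis lands at the center of a 360x180 image; when all observed coordinates match the projections, the residual vector is zero, which is the fixed point the pose optimization seeks.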


This paper was published in CiteSeerX.
