
    A Survey of Methods for Volumetric Scene Reconstruction from Photographs

    Scene reconstruction, the task of generating a 3D model of a scene from multiple 2D photographs of it, is a long-standing and difficult problem in computer vision. Since its introduction, scene reconstruction has found application in many fields, including robotics, virtual reality, and entertainment. Volumetric models are a natural choice of representation for scene reconstruction. Three broad classes of volumetric reconstruction techniques have been developed, based on geometric intersections, color consistency, and pair-wise matching. Some of these techniques have spawned a number of variations and undergone considerable refinement. This paper is a survey of techniques for volumetric scene reconstruction.
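    To make the "geometric intersections" class concrete, the sketch below shows a minimal shape-from-silhouette (visual hull) carving pass in Python/NumPy: a voxel survives only if it projects inside the foreground silhouette of every view. The function name, array layouts, and the assumption that calibrated projection matrices and binary foreground masks are available are illustrative choices, not taken from the survey.

```python
import numpy as np

def carve_visual_hull(grid_points, silhouettes, projections):
    """Shape-from-silhouette carving: keep a voxel only if its projection
    falls inside the foreground silhouette of every view.

    grid_points : (N, 3) array of voxel centres in world coordinates
    silhouettes : list of (H, W) boolean foreground masks, one per view
    projections : list of (3, 4) camera projection matrices, one per view
    """
    keep = np.ones(len(grid_points), dtype=bool)
    homog = np.hstack([grid_points, np.ones((len(grid_points), 1))])

    for mask, P in zip(silhouettes, projections):
        # Project all voxel centres into this view (homogeneous coordinates).
        uvw = homog @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)

        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)

        in_silhouette = np.zeros(len(grid_points), dtype=bool)
        in_silhouette[inside] = mask[v[inside], u[inside]]

        # Geometric intersection: carve away voxels outside any silhouette.
        keep &= in_silhouette

    return keep
```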

    3-D Object Reconstruction Using Spatially Extended Voxels and Multi-Hypothesis Voxel Coloring

    In this paper we describe a voxel-based 3-D reconstruction technique from multiple calibrated camera views that makes explicit use of the finite-size footprint of a voxel when projected into the image plane. We derive a class of computationally efficient, axis-aligned volume traversal orders that ensure a processed voxel cannot occlude previously processed voxels. For each view, one of 79 different cases of volume traversal is identified, depending on the position of the camera relative to the voxel volume. Views belonging to the same visibility class can be processed simultaneously. Our voxel coloring strategy is based on a color hypothesis test that ensures the consistency of the projected reconstruction with the original images. A surface voxel list is continuously updated during reconstruction, ensuring that only a minimal number of voxels has to be processed. Experimental results comparing the reconstruction quality for voxels with and without spatial extent show that it is worthwhile to take the exact footprint of the projected voxels into account.
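    As a point of reference for the color-consistency step described above, the following Python/NumPy sketch implements a plain single-pixel voxel coloring pass. It assumes the voxels are already supplied in an occlusion-compatible order and deliberately omits the paper's spatially extended voxel footprint and multi-hypothesis handling; the function names, the threshold value, and the data layouts are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def photo_consistent(samples, threshold=15.0):
    """Color hypothesis test (simplified): accept a voxel if the color
    samples gathered from the unoccluded views agree, measured here as
    the mean per-channel standard deviation against an arbitrary threshold."""
    samples = np.asarray(samples, dtype=float)      # (k, 3) RGB samples
    return len(samples) > 0 and samples.std(axis=0).mean() < threshold

def voxel_coloring(voxels, images, projections):
    """Simplified voxel coloring sweep.

    voxels      : sequence of voxel centres (3-vectors), already sorted so
                  that a voxel can only occlude voxels appearing later in
                  the sequence (an occlusion-compatible traversal order)
    images      : list of (H, W, 3) images
    projections : list of (3, 4) camera projection matrices
    """
    occluded = [np.zeros(img.shape[:2], dtype=bool) for img in images]
    surface = []

    for X in voxels:
        samples, pixels = [], []
        for img, P, occ in zip(images, projections, occluded):
            u, v, w = P @ np.append(X, 1.0)         # project voxel centre
            if w <= 0:                              # behind the camera
                continue
            px, py = int(round(u / w)), int(round(v / w))
            h, wimg = occ.shape
            if 0 <= px < wimg and 0 <= py < h and not occ[py, px]:
                samples.append(img[py, px])
                pixels.append((occ, py, px))

        if photo_consistent(samples):
            # Keep the voxel with its mean color and mark its pixels as
            # covered, so later (more distant) voxels cannot reuse them.
            surface.append((X, np.mean(samples, axis=0)))
            for occ, py, px in pixels:
                occ[py, px] = True

    return surface
```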